{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Intro to Natural Language Processing with Python\n",
    "\n",
    "## Info\n",
    "- Scott Bailey (CIDR), *scottbailey@stanford.edu*\n",
    "- Javier de la Rosa (CIDR), *versae@stanford.edu*\n",
    "- Ashley Jester (CIDR/SSDS), *ajester@stanford.edu*\n",
    "\n",
    "## What are we covering today?\n",
    "- What is NLP?\n",
    "- Options for NLP in Python\n",
    "- Tokenization\n",
    "- Part of Speech Tagging\n",
    "- Word transformations (lemmatization, pluralization)\n",
    "- Sentiment Analysis\n",
    "- Readability indices"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Goals\n",
    "\n",
    "By the end of the workshop, we hope you'll have a basic understanding of natural language processing, and enough familiarity with one NLP package, Textblob, to perform basic NLP tasks like tokenization and part of speech tagging. Through analyzing presidential speeches, we also hope you'll understand how these basic tasks open up a number of possibilities for textual analysis, such as readability indices. "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## What is NLP\n",
    "\n",
    "NLP stands for Natual Language Processing and it involves a huge variety of tasks such as:\n",
    "- Automatic summarization.\n",
    "- Coreference resolution.\n",
    "- Discourse analysis.\n",
    "- Machine translation.\n",
    "- Morphological segmentation.\n",
    "- Named entity recognition.\n",
    "- Natural language understanding.\n",
    "- Part-of-speech tagging.\n",
    "- Parsing.\n",
    "- Question answering.\n",
    "- Relationship extraction.\n",
    "- Sentiment analysis.\n",
    "- Speech recognition.\n",
    "- Topic segmentation.\n",
    "- Word segmentation.\n",
    "- Word sense disambiguation.\n",
    "- Information retrieval.\n",
    "- Information extraction.\n",
    "- Speech processing.\n",
    "\n",
    "One of the key ideas is to be able to process text without reading it."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## NLP in Python\n",
    "\n",
    "Python is builtin with a very mature regular expression library, which is the building block of natural language processing. However, more advanced tasks need different libraries. Traditionally, in the Python ecosystem the Natural Language Processing Toolkit, abbreviated as `NLTK`, has been until recently the only working choice. Unfortunately, the library has not aged well, and even though it's updated to work with the newer versions of Python, it does not provide us the speed we might need to process large corpora.\n",
    "\n",
    "Another solution that appeared recently is called `spaCy`, and it is much faster since is written in a pseudo-C Python language optimized for speed called Cython.\n",
    "\n",
    "Both these libraries are complex and therefore there exist wrappers around them to simplify their APIs. The two more popular are `Textblob` for NLTK and CLiPS Parser, and `textacy` for spaCy. In this workshop we will be using Textblob since it is more well established and mature and provides with all we need to start learning some NLP basic tasks."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "from textblob import TextBlob"
   ]
  },
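  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sanity check that the import works (a minimal sketch: the sentence below is just an illustrative string, and `blob.tags` assumes the NLTK corpora have been downloaded, e.g. with `python -m textblob.download_corpora`):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Wrap any string in a TextBlob to get tokenization, tagging, and more\n",
    "blob = TextBlob(\"TextBlob makes basic NLP tasks simple.\")\n",
    "print(blob.words)  # word tokenization\n",
    "print(blob.tags)   # part-of-speech tagging"
   ]
  },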
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Helper functions\n",
    "import requests\n",
    "from urllib.request import urlopen\n",
    "\n",
    "def get_text(url):\n",
    "    try:\n",
    "        return requests.get(url).text\n",
    "    except:\n",
    "        return urlopen(url).read().decode(\"utf8\")\n",
    "        \n",
    "def get_speech(url):\n",
    "    page = get_text(url)\n",
    "    full_text = page.split('\\n')\n",
    "    return \" \".join(full_text[2:])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Mr. Speaker, Mr. Vice President, members of Congress, honored guests, my fellow Americans:  We are fortunate to be alive at this moment in history. Never before has our nation enjoyed, at once, so much prosperity and social progress with so little internal crisis and so few external threats. Never before have we had such a blessed opportunity and, therefore, such a profound obligation to build the more perfect Union of our Founders’ dreams.  We begin the new century with over 20 million new jobs; the fastest economic growth in more than 30 years; the lowest unemployment rates in 30 years; the lowest poverty rates in 20 years; the lowest African-American and Hispanic unemployment rates on record; the first back-to-back surpluses in 42 years; and next month, America will achieve the longest period of economic growth in our entire history. We have built a new economy.  And our economic revolution has been matched by a revival of the American spirit: crime down by 20 percent, to its lowest level in 25 years; teen births down seven years in a row; adoptions up by 30 percent; welfare rolls cut in half, to their lowest levels in 30 years.  My fellow Americans, the state of our Union is the strongest it has ever been.  As always, the real credit belongs to the American people. My gratitude also goes to those of you in this chamber who have worked with us to put progress over partisanship.  Eight years ago, it was not so clear to most Americans there would be much to celebrate in the year 2000. Then our nation was gripped by economic distress, social decline, political gridlock. The title of a best-selling book asked: \"America: What Went Wrong?\"  In the best traditions of our nation, Americans determined to set things right. We restored the vital center, replacing outmoded ideologies with a new vision anchored in basic, enduring values: opportunity for all, responsibility from all, a community of all Americans. 
We reinvented government, transforming it into a catalyst for new ideas that stress both opportunity and responsibility and give our people the tools they need to solve their own problems.  With the smallest federal work force in 40 years, we turned record deficits into record surpluses and doubled our investment in education. We cut crime with 100,000 community police and the Brady law, which has kept guns out of the hands of half a million criminals.  We ended welfare as we knew it, requiring work while protecting health care and nutrition for children and investing more in child care, transportation, and housing to help their parents go to work. We’ve helped parents to succeed at home and at work with family leave, which 20 million Americans have now used to care for a newborn child or a sick loved one. We’ve engaged 150,000 young Americans in citizen service through AmeriCorps, while helping them earn money for college.  In 1992, we just had a roadmap. Today, we have results.  Even more important, America again has the confidence to dream big dreams. But we must not let this confidence drift into complacency. For we, all of us, will be judged by the dreams and deeds we pass on to our children. And on that score, we will be held to a high standard, indeed, because our chance to do good is so great.  My fellow Americans, we have crossed the bridge we built to the 21st century. Now, we must shape a 21st century American revolution of opportunity, responsibility, and community. We must be now, as we were in the beginning, a new nation.  At the dawn of the last century, Theodore Roosevelt said, \"The one characteristic more essential than any other is foresight . . . it should be the growing nation with a future that takes the long look ahead.\" So tonight let us take our long look ahead and set great goals for our Nation.  To 21st century America, let us pledge these things: Every child will begin school ready to learn and graduate ready to succeed. 
Every family will be able to succeed at home and at work, and no child will be raised in poverty. We will meet the challenge of the aging of America. We will assure quality, affordable health care, at last, for all Americans. We will make America the safest big country on Earth. We will pay off our national debt for the first time since 1835.* We will bring prosperity to every American community. We will reverse the course of climate change and leave a safer, cleaner planet. America will lead the world toward shared peace and prosperity and the far frontiers of science and technology. And we will become at last what our Founders pledged us to be so long ago:  * White House correction.  One nation, under God, indivisible, with liberty and justice for all.  These are great goals, worthy of a great nation. We will not reach them all this year, not even in this decade. But we will reach them. Let us remember that the first American Revolution was not won with a single shot; the continent was not settled in a single year. The lesson of our history and the lesson of the last seven years is that great goals are reached step by step, always building on our progress, always gaining ground.  Of course, you can’t gain ground if you’re standing still. And for too long this Congress has been standing still on some of our most pressing national priorities. So let’s begin tonight with them.  Again, I ask you to pass a real Patients’ Bill of Rights. I ask you to pass common-sense gun safety legislation. I ask you to pass campaign finance reform. I ask you to vote up or down on judicial nominations and other important appointees. And again, I ask you—I implore you to raise the minimum wage.  Now, two years ago—let me try to balance the seesaw here—[laughter]—two years ago, as we reached across party lines to reach our first balanced budget, I asked that we meet our responsibility to the next generation by maintaining our fiscal discipline. 
Because we refused to stray from that path, we are doing something that would have seemed unimaginable seven years ago. We are actually paying down the national debt. Now, if we stay on this path, we can pay down the debt entirely in just 13 years now and make America debt-free for the first time since Andrew Jackson was President in 1835.  In 1993 we began to put our fiscal house in order with the Deficit Reduction Act, which you’ll all remember won passages in both Houses by just a single vote. Your former colleague, my first Secretary of the Treasury, led that effort and sparked our long boom. He’s here with us tonight. Lloyd Bentsen, you have served America well, and we thank you.  Beyond paying off the debt, we must ensure that the benefits of debt reduction go to preserving two of the most important guarantees we make to every American, Social Security and Medicare. Tonight I ask you to work with me to make a bipartisan downpayment on Social Security reform by crediting the interest savings from debt reduction to the Social Security Trust  Fund so that it will be strong and sound for the next 50 years.  But this is just the start of our journey. We must also take the right steps toward reaching our great goals. First and foremost, we need a 21st century revolution in education, guided by our faith that every single child can learn. Because education is more important than ever, more than ever the key to our children’s future, we must make sure all our children have that key. That means quality preschool and after-school, the best trained teachers in the classroom, and college opportunities for all our children.  For seven years now, we’ve worked hard to improve our schools, with opportunity and responsibility, investing more but demanding more in turn. Reading, math, college entrance scores are up. Some of the most impressive gains are in schools in very poor neighborhoods.  
But all successful schools have followed the same proven formula: higher standards, more accountability, and extra help so children who need it can get it to reach those standards. I have sent Congress a reform plan based on that formula. It holds states and school districts accountable for progress and rewards them for results. Each year, our national government invests more than $15 billion in our schools. It is time to support what works and stop supporting what doesn’t.  Now, as we demand more from our schools, we should also invest more in our schools. Let’s double our investment to help states and districts turn around their worst performing schools or shut them down. Let’s double our investments in after-school and summer school programs, which boost achievement and keep people off the streets and out of trouble. If we do this, we can give every single child in every failing school in America—everyone—the chance to meet high standards.  Since 1993, we’ve nearly doubled our investment in Head Start and improved its quality. Tonight I ask you for another $1 billion for Head Start, the largest increase in the history of the program.  We know that children learn best in smaller classes with good teachers. For two years in a row, Congress has supported my plan to hire 100,000 new qualified teachers to lower class size in the early grades. I thank you for that, and I ask you to make it three in a row. And to make sure all teachers know the subjects they teach, tonight I propose a new teacher quality initiative, to recruit more talented people into the classroom, reward good teachers for staying there, and give all teachers the training they need.  We know charter schools provide real public school choice. When I became President, there was just one independent public charter school in all America. Today, thanks to you, there are 1,700. I ask you now to help us meet our goal of 3,000 charter schools by next year.  
We know we must connect all our classrooms to the Internet, and we’re getting there. In 1994, only 3 percent of our classrooms were connected. Today, with the help of the Vice President’s E-rate program, more than half of them are, and 90 percent of our schools have at least one Internet connection. But we cannot finish the job when a third of all our schools are in serious disrepair. Many of them have walls and wires so old, they’re too old for the Internet. So tonight I propose to help 5,000 schools a year make immediate and urgent repairs and, again, to help build or modernize 6,000 more, to get students out of trailers and into high-tech classrooms.  I ask all of you to help me double our bipartisan GEAR UP program, which provides mentors for disadvantaged young people. If we double it, we can provide mentors for 1.4 million of them. Let’s also offer these kids from disadvantaged backgrounds the same chance to take the same college test-prep courses wealthier students use to boost their test scores.  To make the American dream achievable for all, we must make college affordable for all. For seven years, on a bipartisan basis, we have taken action toward that goal: larger Pell Grants, more affordable student loans, education IRAs, and our HOPE scholarships, which have already benefited five million young people.  Now, 67 percent of high school graduates are going on to college. That’s up 10 percent since 1993. Yet millions of families still strain to pay college tuition. They need help. So I propose a landmark $30 billion college opportunity tax cut, a middle class tax deduction for up to $10,000 in college tuition costs. The previous actions of this Congress have already made two years of college affordable for all. It’s time make four years of college affordable for all. If we take all these steps, we’ll move a long way toward making sure every child starts school ready to learn and graduates ready to succeed.  
We also need a 21st century revolution to reward work and strengthen families by giving every parent the tools to succeed at work and at the most important work of all, raising children. That means making sure every family has health care and the support to care for aging parents, the tools to bring their children up right, and that no child grows up in poverty.  From my first days as President, we’ve worked to give families better access to better health care. In 1997, we passed the Children’s Health Insurance Program—CHIP—so that workers who don’t have coverage through their employers at least can get it for their children. So far, we’ve enrolled two million children. We’re well on our way to our goal of five million.  But there are still more than 40 million of our fellow Americans without health insurance, more than there were in 1993. Tonight I propose that we follow Vice President Gore’s suggestion to make low income parents eligible for the insurance that covers their children. Together with our children’s initiative—think of this—together with our children’s initiative, this action would enable us to cover nearly a quarter of all the uninsured people in America.  Again, I want to ask you to let people between the ages of 55 and 65, the fastest growing group of uninsured, buy into Medicare. And this year I propose to give them a tax credit to make that choice an affordable one. I hope you will support that, as well.  When the baby boomers retire, Medicare will be faced with caring for twice as many of our citizens; yet, it is far from ready to do so. My generation must not ask our children’s generation to shoulder our burden. We simply must act now to strengthen and modernize Medicare.  My budget includes a comprehensive plan to reform Medicare, to make it more efficient and more competitive. And it dedicates nearly $400 billion of our budget surplus to keep Medicare solvent past 2025. 
And at long last, it also provides funds to give every senior a voluntary choice of affordable coverage for prescription drugs.  Lifesaving drugs are an indispensable part of modern medicine. No one creating a Medicare program today would even think of excluding coverage for prescription drugs. Yet more than three in five of our seniors now lack dependable drug coverage which can lengthen and enrich their lives. Millions of older Americans, who need prescription drugs the most, pay the highest prices for them. In good conscience, we cannot let another year pass without extending to all our seniors this lifeline of affordable prescription drugs.  Record numbers of Americans are providing for aging or ailing loved ones at home. It’s a loving but a difficult and often very expensive choice. Last year, I proposed a $1,000 tax credit for long-term care. Frankly, it wasn’t enough. This year, let’s triple it to $3,000. But this year, let’s pass it.  We also have to make needed investments to expand access to mental health care. I want to take a moment to thank the person who led our first White House Conference on Mental Health last year and who for seven years has led all our efforts to break down the barriers to decent treatment of people with mental illness. Thank you, Tipper Gore.  Taken together, these proposals would mark the largest investment in health care in the 35 years since Medicare was created—the largest investment in 35 years. That would be a big step toward assuring quality health care for all Americans, young and old. And I ask you to embrace them and pass them.  We must also make investments that reward work and support families. Nothing does that better than the earned-income tax credit, the EITC. The \"E\" in the EITC is about earning, working, taking responsibility, and being rewarded for it. In my very first address to you, I asked Congress to greatly expand this credit, and you did. 
As a result, in 1998 alone, the EITC helped more than 4.3 million Americans work their way out of poverty toward the middle class. That’s double the number in 1993.  Tonight I propose another major expansion of the EITC: to reduce the marriage penalty, to make sure it rewards marriage as it rewards work, and also to expand the tax credit for families that have more than two children. It punishes people with more than two children today. Our proposal would allow families with three or more children to get up to $1,100 more in tax relief. These are working families; their children should not be in poverty.  We also can’t reward work and family unless men and women get equal pay for equal work. Today the female unemployment rate is the lowest it has been in 46 years. Yet, women still only earn about 75 cents for every dollar men earn. We must do better, by providing the resources to enforce present equal pay laws, training more women for high-paying, high-tech jobs, and passing the \"Paycheck Fairness Act.\"  Many working parents spend up to a quarter—a quarter—of their income on child care. Last year, we helped parents provide child care for about two million children. My child care initiative before you now, along with funds already secured in welfare reform, would make child care better, safer, and more affordable for another 400,000 children. I ask you to pass that. They need it out there.  For hard-pressed middle income families, we should also expand the child care tax credit. And I believe strongly we should take the next big step and make that tax credit refundable for low income families. For people making under $30,000 a year, that could mean up to $2,400 for child care costs. You know, we all say we’re pro-work and pro-family. Passing this proposal would prove it.  Ten of millions of Americans live from paycheck to paycheck. As hard as they work, they still don’t have the opportunity to save. Too few can make use of IRAs and 401k plans. 
We should do more to help all working families save and accumulate wealth. That’s the idea behind the Individual Development Accounts, the IDAs. I ask you to take that idea to a new level, with new retirement savings accounts that enable every low and moderate income family in America to save for retirement, a first home, a medical emergency, or a college education. I propose to match their contributions, however small, dollar for dollar, every year they save. And I propose to give a major new tax credit to any small business that will provide a meaningful pension to its workers. Those people ought to have retirement as well as the rest of us.  Nearly one in three American children grows up without a father. These children are five times more likely to live in poverty than children with both parents at home. Clearly, demanding and supporting responsible fatherhood is critical to lifting all our children out of poverty. We’ve doubled child support collections since 1992. And I’m proposing to you tough new measures to hold still more fathers responsible.  But we should recognize that a lot of fathers want to do right by their children but need help to do it. Carlos Rosas of St. Paul, Minnesota, wanted to do right by his son, and he got the help to do it. Now he’s got a good job, and he supports his little boy. My budget will help 40,000 more fathers make the same choices Carlos Rosas did. I thank him for being here tonight. Stand up, Carlos. [Applause] Thank you.  If there is any single issue on which we should be able to reach across party lines, it is in our common commitment to reward work and strengthen families. Just remember what we did last year. We came together to help people with disabilities keep their health insurance when they go to work. And I thank you for that. Thanks to overwhelming bipartisan support from this Congress, we have improved foster care. 
We’ve helped those young people who leave it when they turn 18, and we have dramatically increased the number of foster care children going into adoptive homes. I thank all of you for all of that.  Of course, I am forever grateful to the person who has led our efforts from the beginning and who’s worked so tirelessly for children and families for 30 years now, my wife, Hillary, and I thank her.  If we take the steps just discussed, we can go a long, long way toward empowering parents to succeed at home and at work and ensuring that no child is raised in poverty. We can make these vital investments in health care, education, support for working families, and still offer tax cuts to help pay for college, for retirement, to care for aging parents, to reduce the marriage penalty. We can do these things without forsaking the path of fiscal discipline that got us to this point here tonight. Indeed, we must make these investments and these tax cuts in the context of a balanced budget that strengthens and extends the life of Social Security and Medicare and pays down the national debt.  Crime in America has dropped for the past seven years—that’s the longest decline on record— thanks to a national consensus we helped to forge on community police, sensible gun safety laws, and effective prevention. But nobody, nobody here, nobody in America believes we’re safe enough. So again, I ask you to set a higher goal. Let’s make this country the safest big country in the world.  Last fall, Congress supported my plan to hire, in addition to the 100,000 community police we’ve already funded, 50,000 more, concentrated in high-crime neighborhoods. I ask your continued support for that.  Soon after the Columbine tragedy, Congress considered commonsense gun legislation, to require Brady background checks at the gun shows, child safety locks for new handguns, and a ban on the importation of large capacity ammunition clips. 
With courage and a tie-breaking vote by the Vice President—[laughter] —the Senate faced down the gun lobby, stood up for the American people, and passed this legislation. But the House failed to follow suit.  Now, we have all seen what happens when guns fall into the wrong hands. Daniel Mauser was only 15 years old when he was gunned down at Columbine. He was an amazing kid, a straight-A student, a good skier. Like all parents who lose their children, his father, Tom, has borne unimaginable grief. Somehow he has found the strength to honor his son by transforming his grief into action. Earlier this month, he took a leave of absence from his job to fight for tougher gun safety laws. I pray that his courage and wisdom will at long last move this Congress to make commonsense gun legislation the very next order of business. Tom Mauser, stand up. We thank you for being here tonight. Tom. Thank you, Tom. [Applause]  We must strengthen our gun laws and enforce those already on the books better. Federal gun crime prosecutions are up 16 percent since I took office. But we must do more. I propose to hire more federal and local gun prosecutors and more ATF agents to crack down on illegal gun traffickers and bad-apple dealers. And we must give them the enforcement tools that they need, tools to trace every gun and every bullet used in every gun crime in the United States. I ask you to help us do that.  Every State in this country already requires hunters and automobile drivers to have a license. I think they ought to do the same thing for handgun purchases. Now, specifically, I propose a plan to ensure that all new handgun buyers must first have a photo license from their state showing they passed the Brady background check and a gun safety course, before they get the gun. I hope you’ll help me pass that in this Congress.  Listen to this—listen to this. 
The accidental gun rate—the accidental gun death rate of children under 15 in the United States is nine times higher than in the other 25 industrialized countries combined. Now, technologies now exist that could lead to guns that can only be fired by the adults who own them. I ask Congress to fund research into smart gun technology to save these children’s lives. I ask responsible leaders in the gun industry to work with us on smart guns and other steps to keep guns out of the wrong hands, to keep our children safe.  You know, every parent I know worries about the impact of violence in the media on their children. I want to begin by thanking the entertainment industry for accepting my challenge to put voluntary ratings on TV programs and video and Internet games. But frankly, the ratings are too numerous, diverse, and confusing to be really useful to parents. So tonight I ask the industry to accept the First Lady’s challenge to develop a single voluntary rating system for all children’s entertainment that is easier for parents to understand and enforce.  The steps I outline will take us well on our way to making America the safest big country in the world.  Now, to keep our historic economic expansion going, the subject of a lot of discussion in this community and others, I believe we need a 21st century revolution to open new markets, start new businesses, hire new workers right here in America, in our inner cities, poor rural areas, and Native American reservations.  Our nation’s prosperity hasn’t yet reached these places. Over the last six months, I’ve traveled to a lot of them, joined by many of you and many far-sighted business people, to shine a spotlight on the enormous potential in communities from Appalachia to the Mississippi Delta, from Watts to the Pine Ridge Reservation. Everywhere I go, I meet talented people eager for opportunity and able to work. Tonight I ask you, let’s put them to work. For business, it’s the smart thing to do. 
For America, it’s the right thing to do. And let me ask you something: If we don’t do this now, when in the wide world will we ever get around to it?  So I ask Congress to give businesses the same incentives to invest in America’s new markets they now have to invest in markets overseas. Tonight I propose a large new markets tax credit and other incentives to spur $22 billion in private-sector capital to create new businesses and new investments in our inner cities and rural areas. Because empowerment zones have been creating these opportunities for five years now, I also ask you to increase incentives to invest in them and to create more of them.  And let me say to all of you again what I have tried to say at every turn: This is not a Democratic or a Republican issue. Giving people a chance to live their dreams is an American issue.  Mr. Speaker, it was a powerful moment last November when you joined Reverend Jesse Jackson and me in your home state of Illinois and committed to working toward our common goal by combining the best ideas from both sides of the aisle. I want to thank you again and to tell you, Mr. Speaker, I look forward to working with you. This is a worthy joint endeavor. Thank you.  I also ask you to make special efforts to address the areas of our Nation with the highest rates of poverty, our Native American reservations and the Mississippi Delta. My budget includes a $110 million initiative to promote economic development in the Delta and a billion dollars to increase economic opportunity, health care, education, and law enforcement for our Native American communities. We should begin this new century by honoring our historic responsibility to empower the first Americans. And I want to thank tonight the leaders and the members from both parties who’ve expressed to me an interest in working with us on these efforts. They are profoundly important.  There’s another part of our American community in trouble tonight, our family farmers. 
When I signed the farm bill in 1996, I said there was great danger it would work well in good times but not in bad. Well, droughts, floods, and historically low prices have made these times very bad for the farmers. We must work together to strengthen the farm safety net, invest in land conservation, and create some new markets for them by expanding our programs for bio-based fuels and products. Please, they need help. Let’s do it together.  Opportunity for all requires something else today, having access to a computer and knowing how to use it. That means we must close the digital divide between those who’ve got the tools and those who don’t. Connecting classrooms and libraries to the Internet is crucial, but it’s just a start. My budget ensures that all new teachers are trained to teach 21st century skills, and it creates technology centers in 1,000 communities to serve adults. This spring, I’ll invite high-tech leaders to join me on another new markets tour, to close the digital divide and open opportunity for our people. I want to thank the high-tech companies that already are doing so much in this area. I hope the new tax incentives I have proposed will get all the rest of them to join us. This is a national crusade. We have got to do this and do it quickly.  Now, again I say to you, these are steps, but step by step, we can go a long way toward our goal of bringing opportunity to every community.  To realize the full possibilities of this economy, we must reach beyond our own borders to shape the revolution that is tearing down barriers and building new networks among nations and individuals and economies and cultures: globalization. It’s the central reality of our time.  Of course, change this profound is both liberating and threatening to people. But there’s no turning back. And our open, creative society stands to benefit more than any other if we understand and act on the realities of interdependence. 
We have to be at the center of every vital global network, as a good neighbor and a good partner. We have to recognize that we cannot build our future without helping others to build theirs.  The first thing we have got to do is to forge a new consensus on trade. Now, those of us who believe passionately in the power of open trade, we have to ensure that it lifts both our living standards and our values, never tolerating abusive child labor or a race to the bottom in the environment and worker protection. But others must recognize that open markets and rule-based trade are the best engines we know of for raising living standards, reducing global poverty and environmental destruction, and assuring the free flow of ideas.  I believe, as strongly tonight as I did the first day I got here, the only direction forward for America on trade—the only direction for America on trade is to keep going forward. I ask you to help me forge that consensus. We have to make developing economies our partners in prosperity. That’s why I would like to ask you again to finalize our groundbreaking African and Caribbean Basin trade initiatives.  But globalization is about more than economics. Our purpose must be to bring together the world around freedom and democracy and peace and to oppose those who would tear it apart. Here are the fundamental challenges I believe America must meet to shape the 21st century world.  First, we must continue to encourage our former adversaries, Russia and China, to emerge as stable, prosperous, democratic nations. Both are being held back today from reaching their full potential: Russia by the legacy of communism, an economy in turmoil, a cruel and self-defeating war in Chechnya; China by the illusion that it can buy stability at the expense of freedom.  
But think how much has changed in the past decade: 5,000 former Soviet nuclear weapons taken out of commission; Russian soldiers actually serving with ours in the Balkans; Russian people electing their leaders for the first time in 1,000 years; and in China, an economy more open to the world than ever before.  Of course, no one, not a single person in this chamber tonight can know for sure what direction these great nations will take. But we do know for sure that we can choose what we do. And we should do everything in our power to increase the chance that they will choose wisely, to be constructive members of our global community.  That’s why we should support those Russians who are struggling for a democratic, prosperous future; continue to reduce both our nuclear arsenals; and help Russia to safeguard weapons and materials that remain.  And that’s why I believe Congress should support the agreement we negotiated to bring China into the WTO, by passing permanent normal trade relations with China as soon as possible this year. I think you ought to do it for two reasons: First of all, our markets are already open to China; this agreement will open China’s markets to us. And second, it will plainly advance the cause of peace in Asia and promote the cause of change in China. No, we don’t know where it’s going. All we can do is decide what we’re going to do. But when all is said and done, we need to know we did everything we possibly could to maximize the chance that China will choose the right future.  A second challenge we’ve got is to protect our own security from conflicts that pose the risk of wider war and threaten our common humanity. We can’t prevent every conflict or stop every outrage. But where our interests are at stake and we can make a difference, we should be, and we must be, peacemakers.  
We should be proud of our role in bringing the Middle East closer to a lasting peace, building peace in Northern Ireland, working for peace in East Timor and Africa, promoting reconciliation between Greece and Turkey and in Cyprus, working to defuse these crises between India and Pakistan, in defending human rights and religious freedom. And we should be proud of the men and women of our Armed Forces and those of our allies who stopped the ethnic cleansing in Kosovo, enabling a million people to return to their homes.  When Slobodan Milosevic unleashed his terror on Kosovo, Captain John Cherrey was one of the brave airmen who turned the tide. And when another American plane was shot down over Serbia, he flew into the teeth of enemy air defenses to bring his fellow pilot home. Thanks to our Armed Forces’ skill and bravery, we prevailed in Kosovo without losing a single American in combat. I want to introduce Captain Cherrey to you. We honor Captain Cherrey, and we promise you, Captain, we’ll finish the job you began. Stand up so we can see you. [Applause]  A third challenge we have is to keep this inexorable march of technology from giving terrorists and potentially hostile nations the means to undermine our defenses. Keep in mind, the same technological advances that have shrunk cell phones to fit in the palms of our hands can also make weapons of terror easier to conceal and easier to use.  We must meet this threat by making effective agreements to restrain nuclear and missile programs in North Korea, curbing the flow of lethal technology to Iran, preventing Iraq from threatening its neighbors, increasing our preparedness against chemical and biological attack, protecting our vital computer systems from hackers and criminals, and developing a system to defend against new missile threats, while working to preserve our ABM missile treaty with Russia. We must do all these things.  
I predict to you, when most of us are long gone but some time in the next 10 to 20 years, the major security threat this country will face will come from the enemies of the nation-state, the narcotraffickers and the terrorists and the organized criminals who will be organized together, working together, with increasing access to ever more sophisticated chemical and biological weapons. And I want to thank the Pentagon and others for doing what they’re doing right now to try to help protect us and plan for that, so that our defenses will be strong. I ask for your support to ensure they can succeed.  I also want to ask you for a constructive bipartisan dialog this year to work to build a consensus which I hope will eventually lead to the ratification of the Comprehensive Nuclear-Test-Ban Treaty.  I hope we can also have a constructive effort to meet the challenge that is presented to our planet by the huge gulf between rich and poor. We cannot accept a world in which part of humanity lives on the cutting edge of a new economy and the rest live on the bare edge of survival. I think we have to do our part to change that with expanded trade, expanded aid, and the expansion of freedom.  This is interesting: From Nigeria to Indonesia, more people got the right to choose their leaders in 1999 than in 1989, when the Berlin Wall fell. We’ve got to stand by these democracies, including and especially tonight Colombia, which is fighting narcotraffickers, for its own people’s lives and our children’s lives. I have proposed a strong two-year package to help Colombia win this fight. I want to thank the leaders in both parties in both Houses for listening to me and the President of Colombia about it. We have got to pass this. I want to ask your help. A lot is riding on it. And it’s so important for the long-term stability of our country and for what happens in Latin America.  
I also want you to know I’m going to send you new legislation to go after what these drug barons value the most, their money. And I hope you’ll pass that as well.  In a world where over a billion people live on less than a dollar a day, we also have got to do our part in the global endeavor to reduce the debts of the poorest countries, so they can invest in education, health care, and economic growth. That’s what the Pope and other religious leaders have urged us to do. And last year, Congress made a downpayment on America’s share. I ask you to continue that. I thank you for what you did and ask you to stay the course.  I also want to say that America must help more nations to break the bonds of disease. Last year in Africa, 10 times as many people died from AIDS as were killed in wars—10 times. The budget I give you invests $150 million more in the fight against this and other infectious killers. And today I propose a tax credit to speed the development of vaccines for diseases like malaria, TB, and AIDS. I ask the private sector and our partners around the world to join us in embracing this cause. We can save millions of lives together, and we ought to do it.  I also want to mention our final challenge, which, as always, is the most important. I ask you to pass a national security budget that keeps our military the best trained and best equipped in the world, with heightened readiness and 21st century weapons, which raises salaries for our service men and women, which protects our veterans, which fully funds the diplomacy that keeps our soldiers out of war, which makes good on our commitment to our U.N. dues and arrears. I ask you to pass this budget.  I also want to say something, if I might, very personal tonight. The American people watching us at home, with the help of all the commentators, can tell, from who stands and who sits and who claps and who doesn’t, that there’s still modest differences of opinion in this room. 
[Laughter] But I want to thank you for something, every one of you. I want to thank you for the extraordinary support you have given, Republicans and Democrats alike, to our men and women in uniform. I thank you for that.  I also want to thank, especially, two people. First, I want to thank our Secretary of Defense, Bill Cohen, for symbolizing our bipartisan commitment to national security. Thank you, sir. Even more, I want to thank his wife, Janet, who, more than any other American citizen, has tirelessly traveled this world to show the support we all feel for our troops. Thank you, Janet Cohen. I appreciate that. Thank you.  These are the challenges we have to meet so that we can lead the world toward peace and freedom in an era of globalization.  I want to tell you that I am very grateful for many things as President. But one of the things I’m grateful for is the opportunity that the Vice President and I have had to finally put to rest the bogus idea that you cannot grow the economy and protect the environment at the same time.  As our economy has grown, we’ve rid more than 500 neighborhoods of toxic waste, ensured cleaner air and water for millions of people. In the past three months alone, we’ve helped preserve 40 million acres of roadless lands in the national forests, created three new national monuments.  But as our communities grow, our commitment to conservation must continue to grow.  Tonight I propose creating a permanent conservation fund, to restore wildlife, protect coastlines, save natural treasures, from the California redwoods to the Florida Everglades. This lands legacy endowment would represent by far the most enduring investment in land preservation ever proposed in this House. I hope we can get together with all the people with different ideas and do this. This is a gift we should give to our children and our grandchildren for all time, across party lines. We can make an agreement to do this.  
Last year the Vice President launched a new effort to make communities more liberal—livable—[laughter]—liberal, I know. [Laughter] Wait a minute, I’ve got a punchline now. That’s this year’s agenda; last year was livable, right? [Laughter] That’s what Senator Lott is going to say in the commentary afterwards—[laughter]—to make our communities more livable. This is big business. This is a big issue. What does that mean? You ask anybody that lives in an unlivable community, and they’ll tell you. They want their kids to grow up next to parks, not parking lots; the parents don’t have to spend all their time stalled in traffic when they could be home with their children.  Tonight I ask you to support new funding for the following things, to make American communities more liberal—livable. [Laughter] I’ve done pretty well with this speech, but I can’t say that.  One, I want you to help us to do three things. We need more funding for advanced transit systems. We need more funding for saving open spaces in places of heavy development. And we need more funding—this ought to have bipartisan appeal—we need more funding for helping major cities around the Great Lakes protect their waterways and enhance their quality of life. We need these things, and I want you to help us.  The greatest environmental challenge of the new century is global warming. The scientists tell us the 1990s were the hottest decade of the entire millennium. If we fail to reduce the emission of greenhouse gases, deadly heat waves and droughts will become more frequent, coastal areas will flood, and economies will be disrupted. That is going to happen, unless we act.  Many people in the United States, some people in this chamber, and lots of folks around the world still believe you cannot cut greenhouse gas emissions without slowing economic growth. In the industrial age, that may well have been true. But in this digital economy, it is not true anymore. 
New technologies make it possible to cut harmful emissions and provide even more growth.  For example, just last week, automakers unveiled cars that get 70 to 80 miles a gallon, the fruits of a unique research partnership between government and industry. And before you know it, efficient production of bio-fuels will give us the equivalent of hundreds of miles from a gallon of gasoline.  To speed innovation in these kind of technologies, I think we should give a major tax incentive to business for the production of clean energy and to families for buying energy-saving homes and appliances and the next generation of superefficient cars when they hit the showroom floor. I also ask the auto industry to use the available technologies to make all new cars more fuel-efficient right away.  And I ask this Congress to do something else. Please help us make more of our clean energy technology available to the developing world. That will create cleaner growth abroad and a lot more new jobs here in the United States of America.  In the new century, innovations in science and technology will be key not only to the health of the environment but to miraculous improvements in the quality of our lives and advances in the economy. Later this year, researchers will complete the first draft of the entire human genome, the very blueprint of life. It is important for all our fellow Americans to recognize that federal tax dollars have funded much of this research and that this and other wise investments in science are leading to a revolution in our ability to detect, treat, and prevent disease.  For example, researchers have identified genes that cause Parkinson’s, diabetes, and certain kinds of cancer. They are designing precision therapies that will block the harmful effect of these genes for good. Researchers already are using this new technique to target and destroy cells that cause breast cancer. Soon, we may be able to use it to prevent the onset of Alzheimer’s. 
Scientists are also working on an artificial retina to help many blind people to see and—listen to this—microchips that would actually directly stimulate damaged spinal cords in a way that could allow people now paralyzed to stand up and walk.  These kinds of innovations are also propelling our remarkable prosperity. Information technology only includes 8 percent of our employment but now accounts for a third of our economic growth along with jobs that pay, by the way, about 80 percent above the private sector average. Again, we ought to keep in mind, government-funded research brought supercomputers, the Internet, and communications satellites into being. Soon researchers will bring us devices that can translate foreign languages as fast as you can talk, materials 10 times stronger than steel at a fraction of the weight, and—this is unbelievable to me—molecular computers the size of a teardrop with the power of today’s fastest supercomputers.  To accelerate the march of discovery across all these disciplines in science and technology, I ask you to support my recommendation of an unprecedented $3 billion in the 21st century research fund, the largest increase in civilian research in a generation. We owe it to our future.  Now, these new breakthroughs have to be used in ways that reflect our values. First and foremost, we have to safeguard our citizens’ privacy. Last year we proposed to protect every citizen’s medical record. This year we will finalize those rules. We’ve also taken the first steps to protect the privacy of bank and credit card records and other financial statements. Soon I will send legislation to you to finish that job. We must also act to prevent any genetic discrimination whatever by employers or insurers. I hope you will support that.  These steps will allow us to lead toward the far frontiers of science and technology. They will enhance our health, the environment, the economy in ways we can’t even imagine today. 
But we all know that at a time when science, technology, and the forces of globalization are bringing so many changes into all our lives, it’s more important than ever that we strengthen the bonds that root us in our local communities and in our national community.  No tie binds different people together like citizen service. There’s a new spirit of service in America, a movement we’ve tried to support with AmeriCorps, expanded Peace Corps, unprecedented new partnerships with businesses, foundations, community groups; partnerships, for example, like the one that enlisted 12,000 companies which have now moved 650,000 of our fellow citizens from welfare to work; partnerships to battle drug abuse, AIDS, teach young people to read, save America’s treasures, strengthen the arts, fight teen pregnancy, prevent violence among young people, promote racial healing. The American people are working together.  But we should do more to help Americans help each other. First, we should help faith-based organizations to do more to fight poverty and drug abuse and help people get back on the right track, with initiatives like Second Chance Homes that do so much to help unwed teen mothers. Second, we should support Americans who tithe and contribute to charities but don’t earn enough to claim a tax deduction for it. Tonight I propose new tax incentives that would allow low and middle income citizens who don’t itemize to get that deduction. It’s nothing but fair, and it will get more people to give.  We should do more to help new immigrants to fully participate in our community. That’s why I recommend spending more to teach them civics and English. And since everybody in our community counts, we’ve got to make sure everyone is counted in this year’s census.  Within 10 years—just 10 years—there will be no majority race in our largest state of California. In a little more than 50 years, there will be no majority race in America. 
In a more interconnected world, this diversity can be our greatest strength. Just look around this chamber. Look around. We have members in this Congress from virtually every racial, ethnic, and religious background. And I think you would agree that America is stronger because of it. [Applause]  You also have to agree that all those differences you just clapped for all too often spark hatred and division even here at home. Just in the last couple of years, we’ve seen a man dragged to death in Texas just because he was black. We saw a young man murdered in Wyoming just because he was gay. Last year we saw the shootings of African-Americans, Asian-Americans, and Jewish children just because of who they were. This is not the American way, and we must draw the line.  I ask you to draw that line by passing without delay the \"Hate Crimes Prevention Act\" and the \"Employment Non-Discrimination Act.\" And I ask you to reauthorize the Violence Against Women Act.  Finally tonight, I propose the largest ever investment in our civil rights laws for enforcement, because no American should be subjected to discrimination in finding a home, getting a job, going to school, or securing a loan. Protections in law should be protections in fact.  Last February, because I thought this was so important, I created the White House Office of One America to promote racial reconciliation. That’s what one of my personal heroes, Hank Aaron, has done all his life. From his days as our all-time home run king to his recent acts of healing, he has always brought people together. We should follow his example, and we’re honored to have him with us tonight. Stand up, Hank Aaron. [Applause]  I just want to say one more thing about this, and I want every one of you to think about this the next time you get mad at one of your colleagues on the other side of the aisle. 
This fall, at the White House, Hillary had one of her millennium dinners, and we had this very distinguished scientist there, who is an expert in this whole work in the human genome. And he said that we are all, regardless of race, genetically 99.9 percent the same.  Now, you may find that uncomfortable when you look around here. [Laughter] But it is worth remembering. We can laugh about this, but you think about it. Modern science has confirmed what ancient faiths have always taught: the most important fact of life is our common humanity. Therefore, we should do more than just tolerate our diversity; we should honor it and celebrate it.  My fellow Americans, every time I prepare for the State of the Union, I approach it with hope and expectation and excitement for our nation. But tonight is very special, because we stand on the mountaintop of a new millennium. Behind us we can look back and see the great expanse of American achievement, and before us we can see even greater, grander frontiers of possibility. We should, all of us, be filled with gratitude and humility for our present progress and prosperity. We should be filled with awe and joy at what lies over the horizon. And we should be filled with absolute determination to make the most of it.  You know, when the Framers finished crafting our Constitution in Philadelphia, Benjamin Franklin stood in Independence Hall, and he reflected on the carving of the sun that was on the back of a chair he saw. The sun was low on the horizon. So he said this—he said, \"I’ve often wondered whether that sun was rising or setting. Today,\" Franklin said, \"I have the happiness to know it’s a rising sun.\" Today, because each succeeding generation of Americans has kept the fire of freedom burning brightly, lighting those frontiers of possibility, we all still bask in the glow and the warmth of Mr. Franklin’s rising sun.  After 224 years, the American revolution continues. We remain a new nation. 
And as long as our dreams outweigh our memories, America will be forever young. That is our destiny. And this is our moment.  Thank you, God bless you, and God bless America.  Hide Transcript '"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "clinton_url = \"https://raw.githubusercontent.com/sul-cidr/python_workshops/master/data/clinton2000.txt\"\n",
    "clinton_speech = get_speech(clinton_url)\n",
    "clinton_speech"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "clinton_blob = TextBlob(clinton_speech[:446])\n",
    "clinton_blob.string == clinton_speech[:446]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Tokenization\n",
    "\n",
    "In NLP, the act of splitting text is called tokenization, and each of the individual chunks is called a token. We can therefore talk about word tokenization or sentence tokenization, depending on the unit into which we need to divide the text."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "WordList(['Mr', 'Speaker', 'Mr', 'Vice', 'President', 'members', 'of', 'Congress', 'honored', 'guests', 'my', 'fellow', 'Americans', 'We', 'are', 'fortunate', 'to', 'be', 'alive', 'at', 'this', 'moment', 'in', 'history', 'Never', 'before', 'has', 'our', 'nation', 'enjoyed', 'at', 'once', 'so', 'much', 'prosperity', 'and', 'social', 'progress', 'with', 'so', 'little', 'internal', 'crisis', 'and', 'so', 'few', 'external', 'threats', 'Never', 'before', 'have', 'we', 'had', 'such', 'a', 'blessed', 'opportunity', 'and', 'therefore', 'such', 'a', 'profound', 'obligation', 'to', 'build', 'the', 'more', 'perfect', 'Union', 'of', 'our', 'Founders’', 'dreams'])"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "clinton_blob.words"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[Sentence(\"Mr. Speaker, Mr. Vice President, members of Congress, honored guests, my fellow Americans:  We are fortunate to be alive at this moment in history.\"),\n",
       " Sentence(\"Never before has our nation enjoyed, at once, so much prosperity and social progress with so little internal crisis and so few external threats.\"),\n",
       " Sentence(\"Never before have we had such a blessed opportunity and, therefore, such a profound obligation to build the more perfect Union of our Founders’ dreams.\")]"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "clinton_blob.sentences"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "WordList(['mr.', 'mr.', 'vice president', 'congress', 'never', 'social progress', 'internal crisis', 'external threats', 'never', 'profound obligation', 'perfect union', 'founders’'])"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "clinton_blob.noun_phrases"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A special way of dividing text into tuples of sequential words or letters is usually referred to as n-grams."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[WordList(['Mr', 'Speaker', 'Mr']),\n",
       " WordList(['Speaker', 'Mr', 'Vice']),\n",
       " WordList(['Mr', 'Vice', 'President']),\n",
       " WordList(['Vice', 'President', 'members']),\n",
       " WordList(['President', 'members', 'of']),\n",
       " WordList(['members', 'of', 'Congress']),\n",
       " WordList(['of', 'Congress', 'honored']),\n",
       " WordList(['Congress', 'honored', 'guests']),\n",
       " WordList(['honored', 'guests', 'my']),\n",
       " WordList(['guests', 'my', 'fellow']),\n",
       " WordList(['my', 'fellow', 'Americans']),\n",
       " WordList(['fellow', 'Americans', 'We']),\n",
       " WordList(['Americans', 'We', 'are']),\n",
       " WordList(['We', 'are', 'fortunate']),\n",
       " WordList(['are', 'fortunate', 'to']),\n",
       " WordList(['fortunate', 'to', 'be']),\n",
       " WordList(['to', 'be', 'alive']),\n",
       " WordList(['be', 'alive', 'at']),\n",
       " WordList(['alive', 'at', 'this']),\n",
       " WordList(['at', 'this', 'moment']),\n",
       " WordList(['this', 'moment', 'in']),\n",
       " WordList(['moment', 'in', 'history']),\n",
       " WordList(['in', 'history', 'Never']),\n",
       " WordList(['history', 'Never', 'before']),\n",
       " WordList(['Never', 'before', 'has']),\n",
       " WordList(['before', 'has', 'our']),\n",
       " WordList(['has', 'our', 'nation']),\n",
       " WordList(['our', 'nation', 'enjoyed']),\n",
       " WordList(['nation', 'enjoyed', 'at']),\n",
       " WordList(['enjoyed', 'at', 'once']),\n",
       " WordList(['at', 'once', 'so']),\n",
       " WordList(['once', 'so', 'much']),\n",
       " WordList(['so', 'much', 'prosperity']),\n",
       " WordList(['much', 'prosperity', 'and']),\n",
       " WordList(['prosperity', 'and', 'social']),\n",
       " WordList(['and', 'social', 'progress']),\n",
       " WordList(['social', 'progress', 'with']),\n",
       " WordList(['progress', 'with', 'so']),\n",
       " WordList(['with', 'so', 'little']),\n",
       " WordList(['so', 'little', 'internal']),\n",
       " WordList(['little', 'internal', 'crisis']),\n",
       " WordList(['internal', 'crisis', 'and']),\n",
       " WordList(['crisis', 'and', 'so']),\n",
       " WordList(['and', 'so', 'few']),\n",
       " WordList(['so', 'few', 'external']),\n",
       " WordList(['few', 'external', 'threats']),\n",
       " WordList(['external', 'threats', 'Never']),\n",
       " WordList(['threats', 'Never', 'before']),\n",
       " WordList(['Never', 'before', 'have']),\n",
       " WordList(['before', 'have', 'we']),\n",
       " WordList(['have', 'we', 'had']),\n",
       " WordList(['we', 'had', 'such']),\n",
       " WordList(['had', 'such', 'a']),\n",
       " WordList(['such', 'a', 'blessed']),\n",
       " WordList(['a', 'blessed', 'opportunity']),\n",
       " WordList(['blessed', 'opportunity', 'and']),\n",
       " WordList(['opportunity', 'and', 'therefore']),\n",
       " WordList(['and', 'therefore', 'such']),\n",
       " WordList(['therefore', 'such', 'a']),\n",
       " WordList(['such', 'a', 'profound']),\n",
       " WordList(['a', 'profound', 'obligation']),\n",
       " WordList(['profound', 'obligation', 'to']),\n",
       " WordList(['obligation', 'to', 'build']),\n",
       " WordList(['to', 'build', 'the']),\n",
       " WordList(['build', 'the', 'more']),\n",
       " WordList(['the', 'more', 'perfect']),\n",
       " WordList(['more', 'perfect', 'Union']),\n",
       " WordList(['perfect', 'Union', 'of']),\n",
       " WordList(['Union', 'of', 'our']),\n",
       " WordList(['of', 'our', 'Founders’']),\n",
       " WordList(['our', 'Founders’', 'dreams'])]"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "clinton_blob.ngrams(n=3)"
   ]
  },
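  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "N-grams become useful once we count them to find recurring word sequences. As a minimal sketch (reusing the `clinton_blob` defined above), we can join each n-gram into a single string and tally the results with `collections.Counter`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from collections import Counter\n",
    "\n",
    "# Join each trigram's words into one string, then count how often each occurs.\n",
    "# On a short excerpt most trigrams appear only once, but the same pattern\n",
    "# scales to the full speech.\n",
    "trigram_counts = Counter(\" \".join(gram) for gram in clinton_blob.ngrams(n=3))\n",
    "trigram_counts.most_common(3)"
   ]
  },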
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[WordList(['Mr', 'Speaker', 'Mr', 'Vice', 'President']),\n",
       " WordList(['Speaker', 'Mr', 'Vice', 'President', 'members']),\n",
       " WordList(['Mr', 'Vice', 'President', 'members', 'of']),\n",
       " WordList(['Vice', 'President', 'members', 'of', 'Congress']),\n",
       " WordList(['President', 'members', 'of', 'Congress', 'honored']),\n",
       " WordList(['members', 'of', 'Congress', 'honored', 'guests']),\n",
       " WordList(['of', 'Congress', 'honored', 'guests', 'my']),\n",
       " WordList(['Congress', 'honored', 'guests', 'my', 'fellow']),\n",
       " WordList(['honored', 'guests', 'my', 'fellow', 'Americans']),\n",
       " WordList(['guests', 'my', 'fellow', 'Americans', 'We']),\n",
       " WordList(['my', 'fellow', 'Americans', 'We', 'are']),\n",
       " WordList(['fellow', 'Americans', 'We', 'are', 'fortunate']),\n",
       " WordList(['Americans', 'We', 'are', 'fortunate', 'to']),\n",
       " WordList(['We', 'are', 'fortunate', 'to', 'be']),\n",
       " WordList(['are', 'fortunate', 'to', 'be', 'alive']),\n",
       " WordList(['fortunate', 'to', 'be', 'alive', 'at']),\n",
       " WordList(['to', 'be', 'alive', 'at', 'this']),\n",
       " WordList(['be', 'alive', 'at', 'this', 'moment']),\n",
       " WordList(['alive', 'at', 'this', 'moment', 'in']),\n",
       " WordList(['at', 'this', 'moment', 'in', 'history']),\n",
       " WordList(['this', 'moment', 'in', 'history', 'Never']),\n",
       " WordList(['moment', 'in', 'history', 'Never', 'before']),\n",
       " WordList(['in', 'history', 'Never', 'before', 'has']),\n",
       " WordList(['history', 'Never', 'before', 'has', 'our']),\n",
       " WordList(['Never', 'before', 'has', 'our', 'nation']),\n",
       " WordList(['before', 'has', 'our', 'nation', 'enjoyed']),\n",
       " WordList(['has', 'our', 'nation', 'enjoyed', 'at']),\n",
       " WordList(['our', 'nation', 'enjoyed', 'at', 'once']),\n",
       " WordList(['nation', 'enjoyed', 'at', 'once', 'so']),\n",
       " WordList(['enjoyed', 'at', 'once', 'so', 'much']),\n",
       " WordList(['at', 'once', 'so', 'much', 'prosperity']),\n",
       " WordList(['once', 'so', 'much', 'prosperity', 'and']),\n",
       " WordList(['so', 'much', 'prosperity', 'and', 'social']),\n",
       " WordList(['much', 'prosperity', 'and', 'social', 'progress']),\n",
       " WordList(['prosperity', 'and', 'social', 'progress', 'with']),\n",
       " WordList(['and', 'social', 'progress', 'with', 'so']),\n",
       " WordList(['social', 'progress', 'with', 'so', 'little']),\n",
       " WordList(['progress', 'with', 'so', 'little', 'internal']),\n",
       " WordList(['with', 'so', 'little', 'internal', 'crisis']),\n",
       " WordList(['so', 'little', 'internal', 'crisis', 'and']),\n",
       " WordList(['little', 'internal', 'crisis', 'and', 'so']),\n",
       " WordList(['internal', 'crisis', 'and', 'so', 'few']),\n",
       " WordList(['crisis', 'and', 'so', 'few', 'external']),\n",
       " WordList(['and', 'so', 'few', 'external', 'threats']),\n",
       " WordList(['so', 'few', 'external', 'threats', 'Never']),\n",
       " WordList(['few', 'external', 'threats', 'Never', 'before']),\n",
       " WordList(['external', 'threats', 'Never', 'before', 'have']),\n",
       " WordList(['threats', 'Never', 'before', 'have', 'we']),\n",
       " WordList(['Never', 'before', 'have', 'we', 'had']),\n",
       " WordList(['before', 'have', 'we', 'had', 'such']),\n",
       " WordList(['have', 'we', 'had', 'such', 'a']),\n",
       " WordList(['we', 'had', 'such', 'a', 'blessed']),\n",
       " WordList(['had', 'such', 'a', 'blessed', 'opportunity']),\n",
       " WordList(['such', 'a', 'blessed', 'opportunity', 'and']),\n",
       " WordList(['a', 'blessed', 'opportunity', 'and', 'therefore']),\n",
       " WordList(['blessed', 'opportunity', 'and', 'therefore', 'such']),\n",
       " WordList(['opportunity', 'and', 'therefore', 'such', 'a']),\n",
       " WordList(['and', 'therefore', 'such', 'a', 'profound']),\n",
       " WordList(['therefore', 'such', 'a', 'profound', 'obligation']),\n",
       " WordList(['such', 'a', 'profound', 'obligation', 'to']),\n",
       " WordList(['a', 'profound', 'obligation', 'to', 'build']),\n",
       " WordList(['profound', 'obligation', 'to', 'build', 'the']),\n",
       " WordList(['obligation', 'to', 'build', 'the', 'more']),\n",
       " WordList(['to', 'build', 'the', 'more', 'perfect']),\n",
       " WordList(['build', 'the', 'more', 'perfect', 'Union']),\n",
       " WordList(['the', 'more', 'perfect', 'Union', 'of']),\n",
       " WordList(['more', 'perfect', 'Union', 'of', 'our']),\n",
       " WordList(['perfect', 'Union', 'of', 'our', 'Founders’']),\n",
       " WordList(['Union', 'of', 'our', 'Founders’', 'dreams'])]"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "clinton_blob.ngrams(n=5)"
   ]
  },
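  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Under the hood, an n-gram listing is just a sliding window over the token list. As a rough sketch of the idea (not Textblob's actual implementation):\n",
    "\n",
    "```python\n",
    "def ngrams(tokens, n):\n",
    "    # Slide a window of size n across the tokens, one position at a time\n",
    "    return [tokens[i:i + n] for i in range(len(tokens) - n + 1)]\n",
    "\n",
    "ngrams('to build the more perfect Union'.split(), 3)\n",
    "```"
   ]
  },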
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<div style=\"font-size: 1em; margin: 1em 0 1em 0; border: 1px solid #86989B; background-color: #f7f7f7; padding: 0;\">\n",
    "<p style=\"margin: 0; padding: 0.1em 0 0.1em 0.5em; color: white; border-bottom: 1px solid #86989B; font-weight: bold; background-color: #AFC1C4;\">\n",
    "Activity\n",
    "</p>\n",
    "<p style=\"margin: 0.5em 1em 0.5em 1em; padding: 0;\">\n",
    "Write a function `count_chars(text)` that receives `text` and returns the total number of characters ignoring spaces and punctuation marks. For example, `count_chars(\"Well, I am not 30 years old.\")` should return `20`.\n",
    "<br/>\n",
    "**Hint**: You could count the characters in the words.\n",
    "</p>\n",
    "</div>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "20"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "def count_chars(text):\n",
    "    return sum(len(w) for w in TextBlob(text).words)\n",
    "\n",
    "count_chars(\"Well, I am not 30 years old.\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Part of Speech Tagging\n",
    "\n",
    "Textblob also allows you to perform part-of-speech tagging, a kind of grammatical labeling, out of the box. By default it uses a tagger based on the Penn Treebank tag set, but other taggers can be plugged in via NLTK classes."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[('Mr.', 'NNP'),\n",
       " ('Speaker', 'NNP'),\n",
       " ('Mr.', 'NNP'),\n",
       " ('Vice', 'NNP'),\n",
       " ('President', 'NNP'),\n",
       " ('members', 'NNS'),\n",
       " ('of', 'IN'),\n",
       " ('Congress', 'NNP'),\n",
       " ('honored', 'VBD'),\n",
       " ('guests', 'NNS'),\n",
       " ('my', 'PRP$'),\n",
       " ('fellow', 'JJ'),\n",
       " ('Americans', 'NNPS'),\n",
       " ('We', 'PRP'),\n",
       " ('are', 'VBP'),\n",
       " ('fortunate', 'JJ'),\n",
       " ('to', 'TO'),\n",
       " ('be', 'VB'),\n",
       " ('alive', 'JJ'),\n",
       " ('at', 'IN'),\n",
       " ('this', 'DT'),\n",
       " ('moment', 'NN'),\n",
       " ('in', 'IN'),\n",
       " ('history', 'NN'),\n",
       " ('Never', 'NN'),\n",
       " ('before', 'IN'),\n",
       " ('has', 'VBZ'),\n",
       " ('our', 'PRP$'),\n",
       " ('nation', 'NN'),\n",
       " ('enjoyed', 'VBN'),\n",
       " ('at', 'IN'),\n",
       " ('once', 'RB'),\n",
       " ('so', 'RB'),\n",
       " ('much', 'JJ'),\n",
       " ('prosperity', 'NN'),\n",
       " ('and', 'CC'),\n",
       " ('social', 'JJ'),\n",
       " ('progress', 'NN'),\n",
       " ('with', 'IN'),\n",
       " ('so', 'RB'),\n",
       " ('little', 'JJ'),\n",
       " ('internal', 'JJ'),\n",
       " ('crisis', 'NN'),\n",
       " ('and', 'CC'),\n",
       " ('so', 'RB'),\n",
       " ('few', 'JJ'),\n",
       " ('external', 'JJ'),\n",
       " ('threats', 'NNS'),\n",
       " ('Never', 'RB'),\n",
       " ('before', 'RB'),\n",
       " ('have', 'VBP'),\n",
       " ('we', 'PRP'),\n",
       " ('had', 'VBD'),\n",
       " ('such', 'JJ'),\n",
       " ('a', 'DT'),\n",
       " ('blessed', 'JJ'),\n",
       " ('opportunity', 'NN'),\n",
       " ('and', 'CC'),\n",
       " ('therefore', 'RB'),\n",
       " ('such', 'PDT'),\n",
       " ('a', 'DT'),\n",
       " ('profound', 'JJ'),\n",
       " ('obligation', 'NN'),\n",
       " ('to', 'TO'),\n",
       " ('build', 'VB'),\n",
       " ('the', 'DT'),\n",
       " ('more', 'RBR'),\n",
       " ('perfect', 'JJ'),\n",
       " ('Union', 'NNP'),\n",
       " ('of', 'IN'),\n",
       " ('our', 'PRP$'),\n",
       " ('Founders’', 'NNP'),\n",
       " ('dreams', 'NN')]"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "clinton_blob.tags"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Mr. NNP\n",
      "Speaker NNP\n",
      "Mr. NNP\n",
      "Vice NNP\n",
      "President NNP\n",
      "members NNS\n",
      "of IN\n",
      "Congress NNP\n",
      "honored VBD\n",
      "guests NNS\n",
      "my PRP$\n",
      "fellow JJ\n",
      "Americans NNPS\n",
      "We PRP\n",
      "are VBP\n",
      "fortunate JJ\n",
      "to TO\n",
      "be VB\n",
      "alive JJ\n",
      "at IN\n",
      "this DT\n",
      "moment NN\n",
      "in IN\n",
      "history NN\n",
      "Never NN\n",
      "before IN\n",
      "has VBZ\n",
      "our PRP$\n",
      "nation NN\n",
      "enjoyed VBN\n",
      "at IN\n",
      "once RB\n",
      "so RB\n",
      "much JJ\n",
      "prosperity NN\n",
      "and CC\n",
      "social JJ\n",
      "progress NN\n",
      "with IN\n",
      "so RB\n",
      "little JJ\n",
      "internal JJ\n",
      "crisis NN\n",
      "and CC\n",
      "so RB\n",
      "few JJ\n",
      "external JJ\n",
      "threats NNS\n",
      "Never RB\n",
      "before RB\n",
      "have VBP\n",
      "we PRP\n",
      "had VBD\n",
      "such JJ\n",
      "a DT\n",
      "blessed JJ\n",
      "opportunity NN\n",
      "and CC\n",
      "therefore RB\n",
      "such PDT\n",
      "a DT\n",
      "profound JJ\n",
      "obligation NN\n",
      "to TO\n",
      "build VB\n",
      "the DT\n",
      "more RBR\n",
      "perfect JJ\n",
      "Union NNP\n",
      "of IN\n",
      "our PRP$\n",
      "Founders’ NNP\n",
      "dreams NN\n"
     ]
    }
   ],
   "source": [
    "for word, pos in clinton_blob.tags:\n",
    "    print(word, pos)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For an explanation of what these tags mean, see http://www.clips.ua.ac.be/pages/mbsp-tags"
   ]
  },
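  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick in-notebook reference, here is a small hand-made glossary (our own helper, not part of Textblob) covering a few of the Penn Treebank tags that appear above:\n",
    "\n",
    "```python\n",
    "# Short glosses for a few common Penn Treebank tags\n",
    "TAG_GLOSSARY = {\n",
    "    'NN': 'noun, singular',\n",
    "    'NNS': 'noun, plural',\n",
    "    'NNP': 'proper noun, singular',\n",
    "    'JJ': 'adjective',\n",
    "    'VB': 'verb, base form',\n",
    "    'RB': 'adverb',\n",
    "    'IN': 'preposition or subordinating conjunction',\n",
    "    'DT': 'determiner',\n",
    "    'PRP': 'personal pronoun',\n",
    "    'CC': 'coordinating conjunction',\n",
    "}\n",
    "\n",
    "def gloss(tag):\n",
    "    # Fall back to the raw tag for anything not in the glossary\n",
    "    return TAG_GLOSSARY.get(tag, tag)\n",
    "\n",
    "gloss('NNP')\n",
    "```"
   ]
  },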
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Mr./NNP/B-NP/O Speaker/NNP/I-NP/O ,/,/O/O Mr./NNP/B-NP/O Vice/NNP/I-NP/O President/NNP/I-NP/O ,/,/O/O members/NNS/B-NP/O of/IN/B-PP/B-PNP Congress/NNP/B-NP/I-PNP ,/,/O/O honored/VBN/B-VP/O guests/NNS/B-NP/O ,/,/O/O my/PRP$/B-NP/O fellow/NN/I-NP/O Americans/NNPS/I-NP/O :/:/O/O We/PRP/B-NP/O are/VBP/B-VP/O fortunate/JJ/B-ADJP/O to/TO/B-PP/O be/VB/B-VP/O alive/JJ/B-ADJP/O at/IN/B-PP/B-PNP this/DT/B-NP/I-PNP moment/NN/I-NP/I-PNP in/IN/B-PP/B-PNP history/NN/B-NP/I-PNP ././O/O\\nNever/RB/B-ADVP/O before/IN/B-PP/O has/VBZ/B-VP/O our/PRP$/B-NP/O nation/NN/I-NP/O enjoyed/VBD/B-VP/O ,/,/O/O at/IN/B-PP/O once/RB/B-ADVP/O ,/,/O/O so/RB/B-NP/O much/JJ/I-NP/O prosperity/NN/I-NP/O and/CC/O/O social/JJ/B-NP/O progress/NN/I-NP/O with/IN/B-PP/B-PNP so/RB/B-NP/I-PNP little/JJ/I-NP/I-PNP internal/JJ/I-NP/I-PNP crisis/NN/I-NP/I-PNP and/CC/O/O so/RB/B-NP/O few/JJ/I-NP/O external/JJ/I-NP/O threats/NNS/I-NP/O ././O/O\\nNever/RB/B-ADVP/O before/IN/B-PP/O have/VBP/B-VP/O we/PRP/B-NP/O had/VBD/B-VP/O such/JJ/B-ADJP/O a/DT/O/O blessed/VBN/B-VP/O opportunity/NN/B-NP/O and/CC/O/O ,/,/O/O therefore/RB/B-ADVP/O ,/,/O/O such/JJ/B-ADJP/O a/DT/B-NP/O profound/JJ/I-NP/O obligation/NN/I-NP/O to/TO/B-PP/O build/VB/B-VP/O the/DT/B-NP/O more/JJR/I-NP/O perfect/JJ/I-NP/O Union/NNP/I-NP/O of/IN/B-PP/B-PNP our/PRP$/B-NP/I-PNP Founders/NNPS/I-NP/I-PNP ’/NN/I-NP/I-PNP dreams/NNS/I-NP/I-PNP ././O/O'"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "clinton_blob.parse()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Sentence(\"Mr. Speaker, Mr. Vice President, members of Congress, honored guests, my fellow Americans:  We are fortunate to be alive at this moment in history.\")"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "clinton_blob.sentences[0]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![Sentence tree](data/tree.svg)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Mr./NNP/B-NP/O Speaker/NNP/I-NP/O ,/,/O/O Mr./NNP/B-NP/O Vice/NNP/I-NP/O President/NNP/I-NP/O ,/,/O/O members/NNS/B-NP/O of/IN/B-PP/B-PNP Congress/NNP/B-NP/I-PNP ,/,/O/O honored/VBN/B-VP/O guests/NNS/B-NP/O ,/,/O/O my/PRP$/B-NP/O fellow/NN/I-NP/O Americans/NNPS/I-NP/O :/:/O/O We/PRP/B-NP/O are/VBP/B-VP/O fortunate/JJ/B-ADJP/O to/TO/B-PP/O be/VB/B-VP/O alive/JJ/B-ADJP/O at/IN/B-PP/B-PNP this/DT/B-NP/I-PNP moment/NN/I-NP/I-PNP in/IN/B-PP/B-PNP history/NN/B-NP/I-PNP ././O/O'"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "clinton_blob.sentences[0].parse()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Word transformations"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'octopus'"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from textblob import Word\n",
    "w = Word(\"octopi\")\n",
    "w.lemmatize()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'octopus'"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "w.lemma"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'be'"
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "v = Word(\"is\")\n",
    "v.lemmatize(\"v\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Mr Mr\n",
      "Speaker Speaker\n",
      "Mr Mr\n",
      "Vice Vice\n",
      "President President\n",
      "members member\n",
      "of of\n",
      "Congress Congress\n",
      "honored honored\n",
      "guests guest\n",
      "my my\n",
      "fellow fellow\n",
      "Americans Americans\n",
      "We We\n",
      "are are\n",
      "fortunate fortunate\n",
      "to to\n",
      "be be\n",
      "alive alive\n",
      "at at\n",
      "this this\n",
      "moment moment\n",
      "in in\n",
      "history history\n",
      "Never Never\n",
      "before before\n",
      "has ha\n",
      "our our\n",
      "nation nation\n",
      "enjoyed enjoyed\n",
      "at at\n",
      "once once\n",
      "so so\n",
      "much much\n",
      "prosperity prosperity\n",
      "and and\n",
      "social social\n",
      "progress progress\n",
      "with with\n",
      "so so\n",
      "little little\n",
      "internal internal\n",
      "crisis crisis\n",
      "and and\n",
      "so so\n",
      "few few\n",
      "external external\n",
      "threats threat\n",
      "Never Never\n",
      "before before\n",
      "have have\n",
      "we we\n",
      "had had\n",
      "such such\n",
      "a a\n",
      "blessed blessed\n",
      "opportunity opportunity\n",
      "and and\n",
      "therefore therefore\n",
      "such such\n",
      "a a\n",
      "profound profound\n",
      "obligation obligation\n",
      "to to\n",
      "build build\n",
      "the the\n",
      "more more\n",
      "perfect perfect\n",
      "Union Union\n",
      "of of\n",
      "our our\n",
      "Founders’ Founders’\n",
      "dreams dream\n"
     ]
    }
   ],
   "source": [
    "for word in clinton_blob.words:\n",
    "    print(word, word.lemmatize())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Mr Mr\n",
      "Speaker Speaker\n",
      "Mr Mr\n",
      "Vice Vice\n",
      "President President\n",
      "members members\n",
      "of of\n",
      "Congress Congress\n",
      "honored honor\n",
      "guests guests\n",
      "my my\n",
      "fellow fellow\n",
      "Americans Americans\n",
      "We We\n",
      "are be\n",
      "fortunate fortunate\n",
      "to to\n",
      "be be\n",
      "alive alive\n",
      "at at\n",
      "this this\n",
      "moment moment\n",
      "in in\n",
      "history history\n",
      "Never Never\n",
      "before before\n",
      "has have\n",
      "our our\n",
      "nation nation\n",
      "enjoyed enjoy\n",
      "at at\n",
      "once once\n",
      "so so\n",
      "much much\n",
      "prosperity prosperity\n",
      "and and\n",
      "social social\n",
      "progress progress\n",
      "with with\n",
      "so so\n",
      "little little\n",
      "internal internal\n",
      "crisis crisis\n",
      "and and\n",
      "so so\n",
      "few few\n",
      "external external\n",
      "threats threats\n",
      "Never Never\n",
      "before before\n",
      "have have\n",
      "we we\n",
      "had have\n",
      "such such\n",
      "a a\n",
      "blessed bless\n",
      "opportunity opportunity\n",
      "and and\n",
      "therefore therefore\n",
      "such such\n",
      "a a\n",
      "profound profound\n",
      "obligation obligation\n",
      "to to\n",
      "build build\n",
      "the the\n",
      "more more\n",
      "perfect perfect\n",
      "Union Union\n",
      "of of\n",
      "our our\n",
      "Founders’ Founders’\n",
      "dreams dream\n"
     ]
    }
   ],
   "source": [
    "for word in clinton_blob.words:\n",
    "    print(word, word.lemmatize(\"v\"))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "are be\n",
      "have have\n"
     ]
    }
   ],
   "source": [
    "for word, pos in clinton_blob.tags:\n",
    "    if pos == \"VBP\":\n",
    "        print(word, word.lemmatize(\"v\"))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Mr Mrs\n",
      "Speaker Speakers\n",
      "Mr Mrs\n",
      "Vice Vices\n",
      "President Presidents\n",
      "members memberss\n",
      "of ofs\n",
      "Congress Congresses\n",
      "honored honoreds\n",
      "guests guestss\n",
      "my our\n",
      "fellow fellows\n",
      "Americans Americanss\n",
      "We Wes\n",
      "are ares\n",
      "fortunate fortunates\n",
      "to toes\n",
      "be bes\n",
      "alive alives\n",
      "at ats\n",
      "this these\n",
      "moment moments\n",
      "in ins\n",
      "history histories\n",
      "Never Nevers\n",
      "before befores\n",
      "has hass\n",
      "our ours\n",
      "nation nations\n",
      "enjoyed enjoyeds\n",
      "at ats\n",
      "once onces\n",
      "so soes\n",
      "much muches\n",
      "prosperity prosperities\n",
      "and ands\n",
      "social socials\n",
      "progress progress\n",
      "with withs\n",
      "so soes\n",
      "little littles\n",
      "internal internals\n",
      "crisis crises\n",
      "and ands\n",
      "so soes\n",
      "few fews\n",
      "external externals\n",
      "threats threatss\n",
      "Never Nevers\n",
      "before befores\n",
      "have haves\n",
      "we wes\n",
      "had hads\n",
      "such suches\n",
      "a some\n",
      "blessed blesseds\n",
      "opportunity opportunities\n",
      "and ands\n",
      "therefore therefores\n",
      "such suches\n",
      "a some\n",
      "profound profounds\n",
      "obligation obligations\n",
      "to toes\n",
      "build builds\n",
      "the thes\n",
      "more mores\n",
      "perfect perfects\n",
      "Union Unions\n",
      "of ofs\n",
      "our ours\n",
      "Founders’ Founders’s\n",
      "dreams dreamss\n"
     ]
    }
   ],
   "source": [
    "for word in clinton_blob.words:\n",
    "    print(word, word.pluralize())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Counting"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "defaultdict(int,\n",
       "            {'a': 2,\n",
       "             'alive': 1,\n",
       "             'americans': 1,\n",
       "             'and': 3,\n",
       "             'are': 1,\n",
       "             'at': 2,\n",
       "             'be': 1,\n",
       "             'before': 2,\n",
       "             'blessed': 1,\n",
       "             'build': 1,\n",
       "             'congress': 1,\n",
       "             'crisis': 1,\n",
       "             'dreams': 1,\n",
       "             'enjoyed': 1,\n",
       "             'external': 1,\n",
       "             'fellow': 1,\n",
       "             'few': 1,\n",
       "             'fortunate': 1,\n",
       "             'founders’': 1,\n",
       "             'guests': 1,\n",
       "             'had': 1,\n",
       "             'has': 1,\n",
       "             'have': 1,\n",
       "             'history': 1,\n",
       "             'honored': 1,\n",
       "             'in': 1,\n",
       "             'internal': 1,\n",
       "             'little': 1,\n",
       "             'members': 1,\n",
       "             'moment': 1,\n",
       "             'more': 1,\n",
       "             'mr': 2,\n",
       "             'much': 1,\n",
       "             'my': 1,\n",
       "             'nation': 1,\n",
       "             'never': 2,\n",
       "             'obligation': 1,\n",
       "             'of': 2,\n",
       "             'once': 1,\n",
       "             'opportunity': 1,\n",
       "             'our': 2,\n",
       "             'perfect': 1,\n",
       "             'president': 1,\n",
       "             'profound': 1,\n",
       "             'progress': 1,\n",
       "             'prosperity': 1,\n",
       "             'so': 3,\n",
       "             'social': 1,\n",
       "             'speaker': 1,\n",
       "             'such': 2,\n",
       "             'the': 1,\n",
       "             'therefore': 1,\n",
       "             'this': 1,\n",
       "             'threats': 1,\n",
       "             'to': 2,\n",
       "             'union': 1,\n",
       "             'vice': 1,\n",
       "             'we': 2,\n",
       "             'with': 1})"
      ]
     },
     "execution_count": 23,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "clinton_blob.word_counts"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "1"
      ]
     },
     "execution_count": 24,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "clinton_blob.word_counts['congress']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "2"
      ]
     },
     "execution_count": 25,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "clinton_blob.words.count('Mr', case_sensitive=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "1"
      ]
     },
     "execution_count": 26,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "clinton_blob.noun_phrases.count('internal crisis')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<div style=\"font-size: 1em; margin: 1em 0 1em 0; border: 1px solid #86989B; background-color: #f7f7f7; padding: 0;\">\n",
    "<p style=\"margin: 0; padding: 0.1em 0 0.1em 0.5em; color: white; border-bottom: 1px solid #86989B; font-weight: bold; background-color: #AFC1C4;\">\n",
    "Activity\n",
    "</p>\n",
    "<p style=\"margin: 0.5em 1em 0.5em 1em; padding: 0;\">\n",
    "Let's define the lexicon of a person as the set of distinct words they use when speaking. Write a function `get_lexicon(text, n)` that receives `text` and `n` and returns the lemmas of the nouns, verbs, and adjectives that are used at least `n` times. For example, `get_lexicon(clinton_speech, 25)` should return\n",
    "\n",
    "```\n",
    "{'A',\n",
    " 'America',\n",
    " 'New',\n",
    " 'So',\n",
    " 'Thank',\n",
    " 'Tonight',\n",
    " 'ask',\n",
    " 'be',\n",
    " 'child',\n",
    " 'do',\n",
    " 'have',\n",
    " 'help',\n",
    " 'make',\n",
    " 'more',\n",
    " 'new',\n",
    " 'people',\n",
    " 'thank',\n",
    " 'tonight',\n",
    " 'want',\n",
    " 'work',\n",
    " 'year'}\n",
    "```\n",
    "<br/>\n",
    "**Hint**: In Textblob's tag set, the tags for nouns, verbs, and adjectives begin with `N`, `V`, or `J`.\n",
    "</p>\n",
    "</div>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'A',\n",
       " 'America',\n",
       " 'New',\n",
       " 'So',\n",
       " 'Thank',\n",
       " 'Tonight',\n",
       " 'ask',\n",
       " 'be',\n",
       " 'child',\n",
       " 'do',\n",
       " 'have',\n",
       " 'help',\n",
       " 'make',\n",
       " 'more',\n",
       " 'new',\n",
       " 'people',\n",
       " 'thank',\n",
       " 'tonight',\n",
       " 'want',\n",
       " 'work',\n",
       " 'year'}"
      ]
     },
     "execution_count": 27,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "def get_lexicon(text, n):\n",
    "    blob = TextBlob(text)\n",
    "    return {word.lemma for word, tag in blob.tags\n",
    "            if tag[0].lower() in [\"n\", \"j\", \"v\"] and blob.words.count(word) >= n}\n",
    "    \n",
    "get_lexicon(clinton_speech, 25)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Sentiment analysis\n",
    "\n",
    "Sentiment analysis is a basic form of text classification that commonly sorts sentences into two categories: positive and negative. Textblob's default analyzer reports a polarity score between -1 (negative) and 1 (positive), along with a subjectivity score between 0 (objective) and 1 (subjective)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Sentiment(polarity=0.17351190476190476, subjectivity=0.4476190476190477)"
      ]
     },
     "execution_count": 28,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "clinton_blob.sentiment"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Mr. Speaker, Mr. Vice President, members of Congress, honored guests, my fellow Americans:  We are fortunate to be alive at this moment in history. 0.25\n",
      "Never before has our nation enjoyed, at once, so much prosperity and social progress with so little internal crisis and so few external threats. 0.049404761904761896\n",
      "Never before have we had such a blessed opportunity and, therefore, such a profound obligation to build the more perfect Union of our Founders’ dreams. 0.3166666666666667\n"
     ]
    }
   ],
   "source": [
    "for sentence in clinton_blob.sentences:\n",
    "    print(sentence, sentence.sentiment.polarity)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "-0.5"
      ]
     },
     "execution_count": 30,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "sad_sent = \"Life is sad.\"\n",
    "sad_blob = TextBlob(sad_sent)\n",
    "sad_blob.sentiment.polarity"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Textblob also includes an alternate sentiment analyzer, the `NaiveBayesAnalyzer`, that you can use out of the box. Instead of a polarity score, it returns a classification along with positive and negative probabilities."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Mr. Speaker, Mr. Vice President, members of Congress, honored guests, my fellow Americans:  We are fortunate to be alive at this moment in history. Sentiment(classification='pos', p_pos=0.9869582403531282, p_neg=0.013041759646875043)\n",
      "Never before has our nation enjoyed, at once, so much prosperity and social progress with so little internal crisis and so few external threats. Sentiment(classification='pos', p_pos=0.9933821787261897, p_neg=0.006617821273806655)\n",
      "Never before have we had such a blessed opportunity and, therefore, such a profound obligation to build the more perfect Union of our Founders’ dreams. Sentiment(classification='pos', p_pos=0.9143817204404195, p_neg=0.08561827955957835)\n"
     ]
    }
   ],
   "source": [
    "from textblob.sentiments import NaiveBayesAnalyzer\n",
    "blob = TextBlob(clinton_speech[:446], analyzer=NaiveBayesAnalyzer())\n",
    "for sentence in blob.sentences:\n",
    "    print(sentence, sentence.sentiment)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Life is good. 0.7\n",
      "Life sucks. -0.3\n",
      "John hates soda. 0.0\n",
      "John hates nasty soda. -1.0\n",
      "John likes good soda. 0.7\n",
      "John loves soda. 0.0\n",
      "John loves sweet soda. 0.35\n"
     ]
    }
   ],
   "source": [
    "para = \"Life is good. Life sucks. John hates soda. John hates nasty soda. John likes good soda. John loves soda. John loves sweet soda.\"\n",
    "sent_blob = TextBlob(para)\n",
    "for sent in sent_blob.sentences:\n",
    "    print(sent, sent.sentiment.polarity)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Life is good. Sentiment(classification='pos', p_pos=0.5995917017800413, p_neg=0.4004082982199585)\n",
      "Life sucks. Sentiment(classification='neg', p_pos=0.12196027933237585, p_neg=0.8780397206676244)\n",
      "John hates soda. Sentiment(classification='neg', p_pos=0.22370657856753343, p_neg=0.7762934214324665)\n",
      "John hates nasty soda. Sentiment(classification='neg', p_pos=0.24953488372093016, p_neg=0.7504651162790696)\n",
      "John likes good soda. Sentiment(classification='neg', p_pos=0.24890197779921738, p_neg=0.7510980222007825)\n",
      "John loves soda. Sentiment(classification='neg', p_pos=0.3654245396557949, p_neg=0.6345754603442052)\n",
      "John loves sweet soda. Sentiment(classification='neg', p_pos=0.48973374677919485, p_neg=0.5102662532208059)\n"
     ]
    }
   ],
   "source": [
    "sent_blob_nb = TextBlob(para, analyzer=NaiveBayesAnalyzer())\n",
    "for sent in sent_blob_nb.sentences:\n",
    "    print(sent, sent.sentiment)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "These examples used the built-in analyzers, but you can also build a Textblob analyzer around your own classifier object, which comes with methods (such as accuracy evaluation) that are useful for model selection. The Textblob docs give an example of how to build a basic sentiment classifier if you're interested."
   ]
  },
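  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the idea concrete, here is a toy Naive Bayes sentiment classifier written from scratch with only the standard library. It is a minimal sketch of the technique (per-label word counts, a log prior, and add-one smoothing), not Textblob's implementation, and the tiny training set is made up for illustration:\n",
    "\n",
    "```python\n",
    "import math\n",
    "from collections import Counter\n",
    "\n",
    "# Toy training data; a real classifier needs far more examples\n",
    "train = [('I love this', 'pos'), ('great good happy', 'pos'),\n",
    "         ('I hate this', 'neg'), ('awful bad sad', 'neg')]\n",
    "\n",
    "def train_nb(examples):\n",
    "    # Count word occurrences per label, plus how often each label occurs\n",
    "    word_counts = {'pos': Counter(), 'neg': Counter()}\n",
    "    label_counts = Counter()\n",
    "    for text, label in examples:\n",
    "        label_counts[label] += 1\n",
    "        word_counts[label].update(text.lower().split())\n",
    "    return word_counts, label_counts\n",
    "\n",
    "def classify(text, word_counts, label_counts):\n",
    "    vocab = {w for counts in word_counts.values() for w in counts}\n",
    "    best_label, best_score = None, -math.inf\n",
    "    for label in label_counts:\n",
    "        # Log prior plus log likelihoods with add-one (Laplace) smoothing\n",
    "        score = math.log(label_counts[label] / sum(label_counts.values()))\n",
    "        total = sum(word_counts[label].values())\n",
    "        for word in text.lower().split():\n",
    "            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))\n",
    "        if score > best_score:\n",
    "            best_label, best_score = label, score\n",
    "    return best_label\n",
    "\n",
    "wc, lc = train_nb(train)\n",
    "classify('good happy day', wc, lc)\n",
    "```"
   ]
  },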
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<div style=\"font-size: 1em; margin: 1em 0 1em 0; border: 1px solid #86989B; background-color: #f7f7f7; padding: 0;\">\n",
    "<p style=\"margin: 0; padding: 0.1em 0 0.1em 0.5em; color: white; border-bottom: 1px solid #86989B; font-weight: bold; background-color: #AFC1C4;\">\n",
    "Activity\n",
    "</p>\n",
    "<p style=\"margin: 0.5em 1em 0.5em 1em; padding: 0;\">\n",
    "Rather than just getting the sentiment of individual sentences, we could calculate the average sentiment of a text by averaging the sentiment of its sentences. Write a function `avg_sentiment(text)` that receives `text` and returns the average positive sentiment, i.e., the sum of the positive-sentiment probabilities (`p_pos`) of all sentences divided by the number of sentences. For example, `avg_sentiment(para)` should return ~`0.3284`.\n",
    "<br/>\n",
    "* **Hint**: Remember to use the `NaiveBayesAnalyzer` analyzer.*\n",
    "</p>\n",
    "</div>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.32840767251929825"
      ]
     },
     "execution_count": 34,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "def avg_sentiment(text):\n",
    "    sentences = TextBlob(text, analyzer=NaiveBayesAnalyzer()).sentences\n",
    "    total = len(sentences)\n",
    "    sent = sum(s.sentiment.p_pos for s in sentences)\n",
    "    return sent / total\n",
    "\n",
    "para = \"Life is good. Life sucks. John hates soda. John hates nasty soda. John likes good soda. John loves soda. John loves sweet soda.\"\n",
    "avg_sentiment(para)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Textblob also lets you simply get the sentiment of a whole text, but you'll notice that this and the average calculated from sentence sentiment are not the same."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Sentiment(classification='neg', p_pos=0.10828764508427793, p_neg=0.8917123549157216)"
      ]
     },
     "execution_count": 35,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "sent_blob_nb.sentiment"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Readability indices\n",
    "\n",
    "Readability indices are ways of assessing how easy or complex it is to read a particular text based on the words and sentences it has. They usually output scores that correlate with grade levels.\n",
    "\n",
    "Two indices that are relatively easy to calculate are the Automated Readability Index (ARI) and the Coleman-Liau Index (CL):\n",
    "\n",
    "$$\n",
    "ARI = 4.71\frac{chars}{words}+0.5\frac{words}{sentences}-21.43\n",
    "$$\n",
    "$$ CL = 0.0588 L - 0.296 S - 15.8 $$\n",
    "\n",
    "where $L$ is the average number of letters per 100 words and $S$ is the average number of sentences per 100 words."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def coleman_liau_index(blob):\n",
    "    chars = count_chars(blob.words)\n",
    "    words = len(blob.words)\n",
    "    sentences = len(blob.sentences)\n",
    "    return (0.0588 * letters_per_100(chars, words)) - (0.296 * sentences_per_100(sentences, words)) - 15.8\n",
    "\n",
    "def letters_per_100(chars, words):\n",
    "    return (chars / words) * 100\n",
    "    \n",
    "def sentences_per_100(sentences, words):\n",
    "    return (sentences / words) * 100\n",
    "\n",
    "def count_chars(words):\n",
    "    return sum(len(w) for w in words)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.2452173913043474"
      ]
     },
     "execution_count": 37,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "coleman_liau_index(sent_blob)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<div style=\"font-size: 1em; margin: 1em 0 1em 0; border: 1px solid #86989B; background-color: #f7f7f7; padding: 0;\">\n",
    "<p style=\"margin: 0; padding: 0.1em 0 0.1em 0.5em; color: white; border-bottom: 1px solid #86989B; font-weight: bold; background-color: #AFC1C4;\">\n",
    "Activity\n",
    "</p>\n",
    "<p style=\"margin: 0.5em 1em 0.5em 1em; padding: 0;\">\n",
    "Write a function `auto_readability_index(blob)` that receives a Textblob `blob` and returns the Automated Readability Index (ARI) score as defined above. For example, `auto_readability_index(sent_blob)` should return ~`0.2815`.\n",
    "<br/>\n",
    "* **Hint**: Remember to use the `count_chars()` function we defined before.*\n",
    "</p>\n",
    "</div>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.28155279503105746"
      ]
     },
     "execution_count": 38,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "def auto_readability_index(blob):\n",
    "    chars = count_chars(blob.words)\n",
    "    words = len(blob.words)\n",
    "    sentences = len(blob.sentences)\n",
    "    return (4.71 * (chars / words)) + (0.5 * (words / sentences)) - 21.43\n",
    "\n",
    "auto_readability_index(sent_blob)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Corpus\n",
    "  \n",
    "We will work with the last State of the Union speech delivered by each of Barack Obama, George W. Bush, and Bill Clinton, plus the recent address to Congress by Donald Trump."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "clinton_url = \"https://raw.githubusercontent.com/sul-cidr/python_workshops/master/data/clinton2000.txt\"\n",
    "bush_url = \"https://raw.githubusercontent.com/sul-cidr/python_workshops/master/data/bush2008.txt\"\n",
    "obama_url = \"https://raw.githubusercontent.com/sul-cidr/python_workshops/master/data/obama2016.txt\"\n",
    "trump_url = \"https://raw.githubusercontent.com/sul-cidr/python_workshops/master/data/trump.txt\"\n",
    "clinton_speech = get_speech(clinton_url)\n",
    "bush_speech = get_speech(bush_url)\n",
    "obama_speech = get_speech(obama_url)\n",
    "trump_speech = get_speech(trump_url)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "speeches = {\n",
    "    \"clinton\": TextBlob(clinton_speech, analyzer=NaiveBayesAnalyzer()),\n",
    "    \"bush\": TextBlob(bush_speech, analyzer=NaiveBayesAnalyzer()),\n",
    "    \"obama\": TextBlob(obama_speech, analyzer=NaiveBayesAnalyzer()),\n",
    "    \"trump\": TextBlob(trump_speech, analyzer=NaiveBayesAnalyzer()),\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's get some basic data about the speeches."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Name\tChars\tWords\tUnique\tSentences\n",
      "obama\t27880\t6124\t1698\t433\n",
      "clinton\t42021\t9067\t2109\t497\n",
      "bush\t27317\t5754\t1673\t311\n",
      "trump\t22394\t4796\t1528\t259\n"
     ]
    }
   ],
   "source": [
    "print(\"Name\", \"Chars\", \"Words\", \"Unique\", \"Sentences\", sep=\"\\t\")\n",
    "for speaker, speech in speeches.items():\n",
    "    print(speaker, count_chars(speech.words), len(speech.words), len(set(speech.words)), len(speech.sentences), sep=\"\\t\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can calculate the average number of words per sentence for each speech."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<div style=\"font-size: 1em; margin: 1em 0 1em 0; border: 1px solid #86989B; background-color: #f7f7f7; padding: 0;\">\n",
    "<p style=\"margin: 0; padding: 0.1em 0 0.1em 0.5em; color: white; border-bottom: 1px solid #86989B; font-weight: bold; background-color: #AFC1C4;\">\n",
    "Activity\n",
    "</p>\n",
    "<p style=\"margin: 0.5em 1em 0.5em 1em; padding: 0;\">\n",
    "Write a function `avg_sentence_length(blob)` that receives a Textblob `blob` and returns the average sentence length, i.e., the total number of words divided by the total number of sentences. For example, `avg_sentence_length(sent_blob)` should return ~`3.2857`.\n",
    "</p>\n",
    "</div>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "3.2857142857142856"
      ]
     },
     "execution_count": 42,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "def avg_sentence_length(blob):\n",
    "    return sum(len(s.words) for s in blob.sentences) / len(blob.sentences)\n",
    "\n",
    "avg_sentence_length(sent_blob)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "obama 14.143187066974596\n",
      "clinton 18.243460764587525\n",
      "bush 18.5016077170418\n",
      "trump 18.517374517374517\n"
     ]
    }
   ],
   "source": [
    "for speaker, speech in speeches.items():\n",
    "#     speech = speech.replace(\"Applause.\", \"\")\n",
    "    print(speaker, avg_sentence_length(speech))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can also get the most used words. We are going to filter out some common stopwords first."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', 'your']"
      ]
     },
     "execution_count": 44,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "stopwords_url = \"https://raw.githubusercontent.com/sul-cidr/python_workshops/master/data/english_stopwords.txt\"\n",
    "stopwords = get_text(stopwords_url).split(\"\\n\")\n",
    "stopwords[:10]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "obama [('applause', 89), ('us', 34), ('america', 28), ('people', 26), ('that’s', 25), ('world', 24), ('american', 22), ('work', 22), ('it’s', 20), ('want', 19)] \n",
      "\n",
      "clinton [('new', 47), ('ask', 43), ('people', 40), ('make', 38), ('us', 35), ('help', 35), ('years', 32), ('children', 31), ('must', 31), ('every', 29)] \n",
      "\n",
      "bush [('america', 34), ('people', 31), ('must', 29), ('congress', 27), ('year', 25), ('new', 25), ('us', 23), ('ve', 21), ('iraq', 21), ('american', 19)] \n",
      "\n",
      "trump [('america', 31), ('american', 30), ('must', 20), ('new', 19), ('us', 18), ('country', 18), ('world', 17), ('one', 15), ('americans', 15), ('people', 15)] \n",
      "\n"
     ]
    }
   ],
   "source": [
    "def most_used_words(blob, n):\n",
    "    word_counts = sorted(blob.word_counts.items(), key=lambda p: p[1], reverse=True)\n",
    "    return list(filter(lambda p: p[0].lower() not in stopwords, word_counts))[:n]\n",
    "\n",
    "for speaker, speech in speeches.items():\n",
    "    print(speaker, most_used_words(speech, 10), \"\\n\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This sort of exploratory work is often the first step in figuring out how to clean a text for text analysis. "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's assess the lexical richness, defined as the ratio of the number of unique words to the total number of words."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def lexical_richness(words):\n",
    "    return len(set(words)) / len(words)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "obama 0.27726975832789025\n",
      "clinton 0.23260174258299326\n",
      "bush 0.29075425790754256\n",
      "trump 0.31859883236030023\n"
     ]
    }
   ],
   "source": [
    "for speaker, speech in speeches.items():\n",
    "    print(speaker, lexical_richness(speech.words))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "What about sentiment?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "obama 0.7063063471504728\n",
      "clinton 0.718562055046489\n",
      "bush 0.800645610343816\n",
      "trump 0.7594267280342685\n"
     ]
    }
   ],
   "source": [
    "for speaker, speech in speeches.items():\n",
    "    print(speaker, avg_sentiment(speech.string))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Readability scores\n",
    "\n",
    "For the Automated Readability Index, you can get the appropriate grade level here: https://en.wikipedia.org/wiki/Automated_readability_index"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "obama ARI: 7.084245395015714 CL: 8.87629000653168\n",
      "clinton ARI: 9.52021940843251 CL: 9.82835336936142\n",
      "bush ARI: 10.18143472400578 CL: 10.515321515467498\n",
      "trump ARI: 9.821126791631379 CL: 10.05703085904921\n"
     ]
    }
   ],
   "source": [
    "for speaker, speech in speeches.items():\n",
    "    print(speaker, \"ARI:\", auto_readability_index(speech), \"CL:\", coleman_liau_index(speech))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 50,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "obama ARI: 8.3908230410747 CL: 8.96712582781457\n",
      "clinton ARI: 9.52021940843251 CL: 9.82835336936142\n",
      "bush ARI: 10.18143472400578 CL: 10.515321515467498\n",
      "trump ARI: 9.821126791631379 CL: 10.05703085904921\n"
     ]
    }
   ],
   "source": [
    "for speaker, speech in speeches.items():\n",
    "    speech = speech.replace(\"Applause.\", \"\")\n",
    "    print(speaker, \"ARI:\", auto_readability_index(speech), \"CL:\", coleman_liau_index(speech))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To get some comparison, let's also look at stats calculated with Textacy. You'll notice several different scores here, such as the Flesch-Kincaid Grade Level and Reading Ease, the SMOG Index, and the Gunning-Fog Index. Each of these is a measure of readability that factors in the total number of syllables or the number of polysyllabic words. We also see ARI and CL scores computed with the same formulas we used, yet the numbers differ. To understand why, you have to dig into Textacy's source code, where you'll find that it filters out punctuation when building the word list, which changes the character count. It also lowercases the punctuation-filtered words before creating the set of unique words, decreasing that number as well compared to how we calculated it here. These changes affect both the ARI and CL scores."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 51,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'bush': {'ARI': 10.105743095660415,\n",
       "  'CL': 10.373284975782742,\n",
       "  'FK_ease': 63.533602854678094,\n",
       "  'FK_level': 9.015548495431595,\n",
       "  'GF': 12.31341855540742},\n",
       " 'clinton': {'ARI': 9.172024279702171,\n",
       "  'CL': 9.236949903852384,\n",
       "  'FK_ease': 68.20265979605051,\n",
       "  'FK_level': 8.3507883263192,\n",
       "  'GF': 11.56165222833711},\n",
       " 'obama': {'ARI': 7.258372293175114,\n",
       "  'CL': 8.082574134674179,\n",
       "  'FK_ease': 73.1515946068819,\n",
       "  'FK_level': 7.076928361323411,\n",
       "  'GF': 10.361327601233576},\n",
       " 'trump': {'ARI': 9.750467143001387,\n",
       "  'CL': 9.922284358447495,\n",
       "  'FK_ease': 65.47855524889772,\n",
       "  'FK_level': 8.74771792073162,\n",
       "  'GF': 11.973927886256654}}"
      ]
     },
     "execution_count": 51,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Readability scores as pre-computed with Textacy\n",
    "{'obama': {'FK_level': 7.076928361323411, 'FK_ease': 73.1515946068819, 'CL': 8.082574134674179, 'GF': 10.361327601233576, 'ARI': 7.258372293175114}, \n",
    " 'bush': {'FK_level': 9.015548495431595, 'FK_ease': 63.533602854678094, 'CL': 10.373284975782742, 'GF': 12.31341855540742, 'ARI': 10.105743095660415}, \n",
    " 'trump': {'FK_level': 8.74771792073162, 'FK_ease': 65.47855524889772, 'CL': 9.922284358447495, 'GF': 11.973927886256654, 'ARI': 9.750467143001387}, \n",
    " 'clinton': {'FK_level': 8.3507883263192, 'FK_ease': 68.20265979605051, 'CL': 9.236949903852384, 'GF': 11.56165222833711, 'ARI': 9.172024279702171}}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<div style=\"font-size: 1em; margin: 1em 0 1em 0; border: 1px solid #86989B; background-color: #f7f7f7; padding: 0;\">\n",
    "<p style=\"margin: 0; padding: 0.1em 0 0.1em 0.5em; color: white; border-bottom: 1px solid #86989B; font-weight: bold; background-color: #AFC1C4;\">\n",
    "Activity\n",
    "</p>\n",
    "<p style=\"margin: 0.5em 1em 0.5em 1em; padding: 0;\">\n",
    "Write a function `stats(url)` that receives a `url` from a plain text version of a book in Project Gutenberg and returns a dictionary with statistics (Automated Readability Index, Coleman-Liau Index, lexical richness, average sentence length in words, average sentiment, number of characters, number of words, number of unique words, number of sentences, and 10 most used words) of the text contained in the URL. For example, `stats(\"http://www.gutenberg.org/cache/epub/345/pg345.txt\")` should return `{'ari': 7.051237118685233,\n",
    " 'average_sentiment': 0.6216963558545169,\n",
    " 'characters': 883114,\n",
    " 'cl': 6.151579188686984,\n",
    " 'lexical_richness': 0.066091121890007,\n",
    " 'sentence_length': 19.343680709534368,\n",
    " 'sentences': 8569,\n",
    " 'top_words': ['said',\n",
    "  'could',\n",
    "  'one',\n",
    "  'us',\n",
    "  'must',\n",
    "  'would',\n",
    "  'may',\n",
    "  'shall',\n",
    "  'see',\n",
    "  'know'],\n",
    " 'unique_words': 10955,\n",
    " 'words': 165756}`.\n",
    "<br/>\n",
    "* **Hint**: Remember to use the `get_text()` function. Be careful with what parameters to pass in to each function.*\n",
    "</p>\n",
    "</div>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 52,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'ari': 7.051237118685233,\n",
       " 'average_sentiment': 0.6216963558545169,\n",
       " 'characters': 883114,\n",
       " 'cl': 6.151579188686984,\n",
       " 'lexical_richness': 0.066091121890007,\n",
       " 'sentence_length': 19.343680709534368,\n",
       " 'sentences': 8569,\n",
       " 'top_words': ['said',\n",
       "  'could',\n",
       "  'one',\n",
       "  'us',\n",
       "  'must',\n",
       "  'would',\n",
       "  'may',\n",
       "  'shall',\n",
       "  'see',\n",
       "  'know'],\n",
       " 'unique_words': 10955,\n",
       " 'words': 165756}"
      ]
     },
     "execution_count": 52,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "def stats(url):\n",
    "    text = get_text(url)\n",
    "    blob = TextBlob(text)\n",
    "    return {\n",
    "        \"ari\": auto_readability_index(blob),\n",
    "        \"cl\": coleman_liau_index(blob),\n",
    "        \"lexical_richness\": lexical_richness(blob.words),\n",
    "        \"sentence_length\": avg_sentence_length(blob),\n",
    "        \"average_sentiment\": avg_sentiment(blob.string),\n",
    "        \"characters\": count_chars(blob.string),  # iterating a string yields characters, so this counts every character\n",
    "        \"words\": len(blob.words),\n",
    "        \"unique_words\": len(set(blob.words)),\n",
    "        \"sentences\": len(blob.sentences),\n",
    "        \"top_words\": [w for (w, f) in most_used_words(blob, 10)],\n",
    "    }\n",
    "\n",
    "stats(\"http://www.gutenberg.org/cache/epub/345/pg345.txt\")  # Dracula"
   ]
  }
 ],
 "metadata": {
  "anaconda-cloud": {},
  "kernelspec": {
   "display_name": "Python (text)",
   "language": "python",
   "name": "text"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.5.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
