{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "72d39d60",
   "metadata": {},
   "source": [
    "# Instructing LLMs To Match Tone\n",
    "\n",
    "LLMs that generate text are awesome, but what if you want to control the tone and style of their responses?\n",
    "\n",
    "We've all seen the [pirate](https://python.langchain.com/en/latest/modules/agents/agents/custom_llm_agent.html#:~:text=template%20%3D%20%22%22%22Answer%20the%20following%20questions%20as%20best%20you%20can%2C%20but%20speaking%20as%20a%20pirate%20might%20speak.%20You%20have%20access%20to%20the%20following%20tools%3A) examples, but wouldn't it be awesome if we could tune the prompt to match the tone of specific people?\n",
    "\n",
    "Below is a series of techniques aimed at generating text in the tone and style you want. No single technique will likely be *exactly* what you need, but I guarantee you can iterate with these tips to get a solid outcome for your project.\n",
    "\n",
    "But Greg, what about fine tuning? Fine tuning would likely give you a fabulous result, but the barriers to entry are too high for the average developer (as of May '23). I would rather ship an 87% solution today than not ship anything. If you're doing this in production and your differentiator is your ability to adapt to different styles, you'll likely want to explore fine tuning.\n",
    "\n",
    "If you want to see a demo video of this, check out the Twitter post. For a full explanation head over to YouTube.\n",
    "\n",
    "### 4 Levels Of Tone Matching Techniques:\n",
    "1. **Simple:** As a human, try to describe the tone you would like\n",
    "2. **Intermediate:** Include your description + examples\n",
    "3. **AI-Assisted:** Ask the LLM to extract tone, then use its output in your next prompt\n",
    "4. **Technique Fusion:** Combine multiple techniques to mimic tone\n",
    "\n",
    "**Today's Goal**: Generate tweets mimicking the style of online personalities. You could customize this code to generate emails, chat messages, writing, etc.\n",
    "\n",
    "First let's import our packages"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "e65bd69a",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# LangChain\n",
    "from langchain.chat_models import ChatOpenAI\n",
    "from langchain import PromptTemplate\n",
    "\n",
    "# Environment Variables\n",
    "import os\n",
    "from dotenv import load_dotenv\n",
    "\n",
    "# Twitter\n",
    "import tweepy\n",
    "\n",
    "load_dotenv()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "83886158",
   "metadata": {},
   "source": [
    "Set your OpenAI key. You can either set it as an environment variable or put it in the string below"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "98123655",
   "metadata": {
    "hide_input": false
   },
   "outputs": [],
   "source": [
    "openai_api_key = os.getenv('OPENAI_API_KEY', 'YourAPIKeyIfNotSet')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "60e572c7",
   "metadata": {},
   "source": [
    "We'll be using `gpt-4` today, but you can swap out for `gpt-3.5-turbo` if you'd like"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "063daa43",
   "metadata": {},
   "outputs": [],
   "source": [
    "llm = ChatOpenAI(temperature=0, openai_api_key=openai_api_key, model_name='gpt-4')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c76a9b7c",
   "metadata": {},
   "source": [
    "## Method #1: Simple - Describe the tone you would like\n",
    "\n",
    "Our first method is going to be simply describing the tone we would like.\n",
    "\n",
    "Let's try a few examples"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "0c852071",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\"Sunshine, fresh air, and a scrumptious sandwich in hand 🥪🌳 Just had the perfect afternoon at the park, soaking up nature's beauty while munching on my favorite meal! #ParkPicnic #SandwichLover\"\n"
     ]
    }
   ],
   "source": [
    "prompt = \"\"\"\n",
    "Please create me a tweet about going to the park and eating a sandwich.\n",
    "\"\"\"\n",
    "\n",
    "output = llm.predict(prompt)\n",
    "print (output)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4f31c440",
   "metadata": {},
   "source": [
    "Not bad, but I don't love the emojis and I want it to use more conversational modern language.\n",
    "\n",
    "Let's try again"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "4ad1f61f",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Had a fun day at the park today! I played on the swings and ate a yummy sandwich for lunch. I love spending time outside!\n"
     ]
    }
   ],
   "source": [
    "prompt = \"\"\"\n",
    "Please create me a tweet about going to the park and eating a sandwich.\n",
    "\n",
    "% TONE\n",
    " - Don't use any emojis or hashtags.\n",
    " - Use simple language a 5 year old would understand\n",
    "\"\"\"\n",
    "\n",
    "output = llm.predict(prompt)\n",
    "print (output)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0b738fc0",
   "metadata": {},
   "source": [
    "Ok cool! The tone has changed. Not bad but now I want it to sound like a specific person. Let's try Bill Gates:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "aa86db95",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "There's something truly delightful about spending an afternoon at the park, enjoying a well-crafted sandwich, and contemplating the beauty of nature. It's a simple pleasure that reminds us of the importance of taking a break from our busy lives to appreciate the world around us.\n"
     ]
    }
   ],
   "source": [
    "prompt = \"\"\"\n",
    "Please create me a tweet about going to the park and eating a sandwich.\n",
    "\n",
    "% TONE\n",
    " - Don't use any emojis or hashtags.\n",
    " - Respond in the tone of Bill Gates\n",
    "\"\"\"\n",
    "\n",
    "output = llm.predict(prompt)\n",
    "print (output)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cdf41f09",
   "metadata": {},
   "source": [
    "It's ok, I'd give the response a `C+` right now.\n",
    "\n",
    "Let's give some example tweets so the model can better match tone/style.\n",
    "\n",
    "`⭐ Important Tip: When you're giving examples, make sure they are in the same format as the desired output. Ex: Tweets > Tweets, Email > Email. Don't do Tweets > Email`"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "06265a43",
   "metadata": {},
   "source": [
    "## Method #2: Intermediate - Specify your tone description + examples\n",
    "\n",
    "Examples speak a thousand words. Let's pass a few along with our instructions to see how it goes\n",
    "\n",
    "### Get a user's tweets\n",
    "\n",
    "Next let's grab a user's tweets. We'll do this in a function so it's easy to pull them later"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6aef30ad",
   "metadata": {},
   "source": [
    "Since we are pulling live tweets, you'll need to gather some Twitter API keys. You can get these on the [Twitter Developer Portal](https://developer.twitter.com/en/portal/dashboard). The free tier is fine, but watch out for rate limits."
   ]
  },
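  {
   "cell_type": "markdown",
   "id": "3f2c1a9b",
   "metadata": {},
   "source": [
    "As a sketch, your `.env` file (which `load_dotenv()` reads) might look like the following. The variable names match the `os.getenv` calls in this notebook; the values are placeholders, not real credentials:\n",
    "\n",
    "```shell\n",
    "# .env - placeholder values, replace with your own credentials\n",
    "OPENAI_API_KEY=sk-...\n",
    "TWITTER_API_KEY=your-api-key\n",
    "TWITTER_API_KEY_SECRET=your-api-key-secret\n",
    "TWITTER_ACCESS_TOKEN=your-access-token\n",
    "TWITTER_ACCESS_TOKEN_SECRET=your-access-token-secret\n",
    "```"
   ]
  },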
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "093d7162",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Replace these values with your own Twitter API credentials\n",
    "TWITTER_API_KEY = os.getenv('TWITTER_API_KEY', 'YourAPIKeyIfNotSet')\n",
    "TWITTER_API_KEY_SECRET = os.getenv('TWITTER_API_KEY_SECRET', 'YourAPIKeyIfNotSet')\n",
    "TWITTER_ACCESS_TOKEN = os.getenv('TWITTER_ACCESS_TOKEN', 'YourAPIKeyIfNotSet')\n",
    "TWITTER_ACCESS_TOKEN_SECRET = os.getenv('TWITTER_ACCESS_TOKEN_SECRET', 'YourAPIKeyIfNotSet')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "9c2b68b9",
   "metadata": {},
   "outputs": [],
   "source": [
    "# We'll query 70 tweets because we end up filtering out a bunch, but we'll only return the top 12.\n",
    "# We will also only use a subset of the top tweets later\n",
    "def get_original_tweets(screen_name, tweets_to_pull=70, tweets_to_return=12):\n",
    "    \n",
    "    # Tweepy set up\n",
    "    auth = tweepy.OAuthHandler(TWITTER_API_KEY, TWITTER_API_KEY_SECRET)\n",
    "    auth.set_access_token(TWITTER_ACCESS_TOKEN, TWITTER_ACCESS_TOKEN_SECRET)\n",
    "    api = tweepy.API(auth)\n",
    "\n",
    "    tweets = []\n",
    "    \n",
    "    tweepy_results = tweepy.Cursor(api.user_timeline,\n",
    "                                   screen_name=screen_name,\n",
    "                                   tweet_mode='extended',\n",
    "                                   exclude_replies=True).items(tweets_to_pull)\n",
    "    \n",
    "    # Run through tweets and remove retweets and quote tweets so we only look at the user's original writing\n",
    "    for status in tweepy_results:\n",
    "        if not hasattr(status, 'retweeted_status') and not hasattr(status, 'quoted_status'):\n",
    "            tweets.append({'full_text': status.full_text, 'likes': status.favorite_count})\n",
    "\n",
    "    \n",
    "    # Sort the tweets by number of likes. This will help us short_list the top ones later\n",
    "    sorted_tweets = sorted(tweets, key=lambda x: x['likes'], reverse=True)\n",
    "\n",
    "    # Get the text and drop the like count from the dictionary\n",
    "    full_text = [x['full_text'] for x in sorted_tweets][:tweets_to_return]\n",
    "    \n",
    "    # Convert the list of tweets into a string of tweets we can use in the prompt later\n",
    "    example_tweets = \"\\n\\n\".join(full_text)\n",
    "            \n",
    "    return example_tweets"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d6a8a9f9",
   "metadata": {},
   "source": [
    "Let's grab Bill Gates' tweets and use those as examples"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "e70a1173",
   "metadata": {},
   "outputs": [],
   "source": [
    "user_screen_name = 'billgates'  # Replace this with the desired user's screen name\n",
    "users_tweets = get_original_tweets(user_screen_name)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ac488978",
   "metadata": {},
   "source": [
    "Let's look at a sample of Bill's tweets"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "38043e2c",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "These numbers prove why India plays such a crucial role in the world’s fight to improve health, reduce poverty, prevent climate change, and more. https://t.co/xMpmcoYQhi\n",
      "\n",
      "Mann ki Baat has catalyzed community led action on sanitation, health, women’s economic empowerment and other issues linked to the Sustainable Development Goals. Congratulations @narendramodi on the 100th episode. https://t.co/yg1Di2srjE\n",
      "\n",
      "The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other. https://t.co/uuaOQyxBTl\n",
      "\n",
      "I just returned from my visit to India, and I can’t wait to go back again. I love visiting India because every trip is an incredible opportunity to learn. Here are some photos from my trip and some of the stories behind them: https://t.co/We6PtJWDnp https://t.co/QxZW7gfUmI\n",
      "\n",
      "Superintelligent AIs are in our future. Compared to a computer, our brains operate at a snail’s pace. An electrical signal in the brain moves at ___________ the speed of the signal in a silicon chip. Check your answer here: https://t.co/wqZG1BdoTc\n",
      "\n",
      "Thinking of President Carter and his family. This is a lovely tribute to one of his biggest accomplishments. https://t.co/g53c4ty0qI\n",
      "\n",
      "Uganda’s maternal mortality rate is at least double the global average. That's why Eva Nangalo has dedicated her life to making childbirth in the country safer for everyone involved. https://t.co/29AjdJehNY\n",
      "\n",
      "I am so impressed with Eva Nangalo—it’s hard not to be. She’s spent decades making childbirth safer in Uganda for everyone involved, and she’s become a mentor to countless other midwives in the process. https://t.co/79RHbrCt01\n",
      "\n",
      "I recently had the chance to test drive—or test ride, I guess—one of @wayve_ai’s autonomous vehicles. It was a pretty wild ride: https://t.co/PrwrxU49dd https://t.co/NtnkVx7sBx\n",
      "\n",
      "When I transitioned from @Microsoft to working full-time at the @GatesFoundation, I finally had the time to learn more about physics, chemistry, biology, and other sciences. So, I looked around for the best books and read as many of them as I could find. https://t.co/z2D5xGSeMj\n",
      "\n",
      "As big as the problems facing the world are right now, my visit to India reminded me that our capacity to solve them is even bigger: https://t.co/zp7XfRIpV9 https://t.co/aFHUu987u3\n",
      "\n",
      "I’m grateful for the Lauder family’s dedication to solving Alzheimer’s. https://t.co/vX0qtjBFxt\n"
     ]
    }
   ],
   "source": [
    "print(users_tweets)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6765f2e5",
   "metadata": {},
   "source": [
    "### Pass the tweets as examples"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "da9d25d8",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "Please create me a tweet about going to the park and eating a sandwich.\n",
      "\n",
      "% TONE\n",
      " - Don't use any emojis or hashtags.\n",
      " - Respond in the tone of Bill Gates\n",
      "\n",
      "% START OF EXAMPLE TWEETS TO MIMIC\n",
      "These numbers prove why India plays such a crucial role in the world’s fight to improve health, reduce poverty, prevent climate change, and more. https://t.co/xMpmcoYQhi\n",
      "\n",
      "Mann ki Baat has catalyzed community led action on sanitation, health, women’s economic empowerment and other issues linked to the Sustainable Development Goals. Congratulations @narendramodi on the 100th episode. https://t.co/yg1Di2srjE\n",
      "\n",
      "The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other. https://t.co/uuaOQyxBTl\n",
      "\n",
      "I just returned from my visit to India, and I can’t wait to go back again. I love visiting India because every trip is an incredible opportunity to learn. Here are some photos from my trip and some of the stories behind them: https://t.co/We6PtJWDnp https://t.co/QxZW7gfUmI\n",
      "\n",
      "Superintelligent AIs are in our future. Compared to a computer, our brains operate at a snail’s pace. An electrical signal in the brain moves at ___________ the speed of the signal in a silicon chip. Check your answer here: https://t.co/wqZG1BdoTc\n",
      "\n",
      "Thinking of President Carter and his family. This is a lovely tribute to one of his biggest accomplishments. https://t.co/g53c4ty0qI\n",
      "\n",
      "Uganda’s maternal mortality rate is at least double the global average. That's why Eva Nangalo has dedicated her life to making childbirth in the country safer for everyone involved. https://t.co/29AjdJehNY\n",
      "\n",
      "I am so impressed with Eva Nangalo—it’s hard not to be. She’s spent decades making childbirth safer in Uganda for everyone involved, and she’s become a mentor to countless other midwives in the process. https://t.co/79RHbrCt01\n",
      "\n",
      "I recently had the chance to test drive—or test ride, I guess—one of @wayve_ai’s autonomous vehicles. It was a pretty wild ride: https://t.co/PrwrxU49dd https://t.co/NtnkVx7sBx\n",
      "\n",
      "When I transitioned from @Microsoft to working full-time at the @GatesFoundation, I finally had the time to learn more about physics, chemistry, biology, and other sciences. So, I looked around for the best books and read as many of them as I could find. https://t.co/z2D5xGSeMj\n",
      "\n",
      "As big as the problems facing the world are right now, my visit to India reminded me that our capacity to solve them is even bigger: https://t.co/zp7XfRIpV9 https://t.co/aFHUu987u3\n",
      "\n",
      "I’m grateful for the Lauder family’s dedication to solving Alzheimer’s. https://t.co/vX0qtjBFxt\n",
      "% END OF EXAMPLE TWEETS TO MIMIC\n",
      "\n",
      "YOUR TWEET:\n",
      "\n"
     ]
    }
   ],
   "source": [
    "template = \"\"\"\n",
    "Please create me a tweet about going to the park and eating a sandwich.\n",
    "\n",
    "% TONE\n",
    " - Don't use any emojis or hashtags.\n",
    " - Respond in the tone of Bill Gates\n",
    "\n",
    "% START OF EXAMPLE TWEETS TO MIMIC\n",
    "{example_tweets}\n",
    "% END OF EXAMPLE TWEETS TO MIMIC\n",
    "\n",
    "YOUR TWEET:\n",
    "\"\"\"\n",
    "\n",
    "prompt = PromptTemplate(\n",
    "    input_variables=[\"example_tweets\"],\n",
    "    template=template,\n",
    ")\n",
    "\n",
    "final_prompt = prompt.format(example_tweets=users_tweets)\n",
    "\n",
    "print (final_prompt)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "8a7b8b12",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "A simple pleasure like visiting the park and enjoying a sandwich can remind us of the importance of preserving our environment and supporting local food systems. Let's continue to innovate for a sustainable future.\n"
     ]
    }
   ],
   "source": [
    "output = llm.predict(final_prompt)\n",
    "print (output)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b65a69b5",
   "metadata": {},
   "source": [
    "Wow! Ok now that is starting to get somewhere. Not bad at all! Sounds like Bill is in the room with us now.\n",
    "\n",
    "Let's see if we can refine it even more."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "afba6f42",
   "metadata": {},
   "source": [
    "## Method #3: AI-Assisted - Ask the LLM to help with tone descriptions\n",
    "\n",
    "Turns out I'm not great at describing tone. Examples are a good way to help, but can we do more? Let's find out.\n",
    "\n",
    "I want to have the model tell me what tone *it* sees, then use that output as an *input* to the final prompt where I ask it to generate a tweet.\n",
    "\n",
    "Almost like reverse engineering tone.\n",
    "\n",
    "Why don't I do this all in one step? You likely could, but it would be nice to save this \"tone\" description for future use. Plus, I don't want the model to take too many logic jumps in a single response.\n",
    "\n",
    "I first thought, 'well... what are the qualities of tone I should have it describe?'\n",
    "\n",
    "Then I thought, Greg, c'mon man, you know better than that, see if the LLM has a good sense of what tone qualities there are. Duh.\n",
    "\n",
    "Let's ask the model which qualities of tone we should extract"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "20ee37ac",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1. Pace: The speed at which the story unfolds and events occur.\n",
      "2. Mood: The overall emotional atmosphere or feeling of the piece.\n",
      "3. Tone: The author's attitude towards the subject matter or characters.\n",
      "4. Voice: The unique style and personality of the author as it comes through in the writing.\n",
      "5. Diction: The choice of words and phrases used by the author.\n",
      "6. Syntax: The arrangement of words and phrases to create well-formed sentences.\n",
      "7. Imagery: The use of vivid and descriptive language to create mental images for the reader.\n",
      "8. Theme: The central idea or message of the piece.\n",
      "9. Point of View: The perspective from which the story is told (first person, third person, etc.).\n",
      "10. Structure: The organization and arrangement of the piece, including its chapters, sections, or stanzas.\n",
      "11. Dialogue: The conversations between characters in the piece.\n",
      "12. Characterization: The way the author presents and develops characters in the story.\n",
      "13. Setting: The time and place in which the story takes place.\n",
      "14. Foreshadowing: The use of hints or clues to suggest future events in the story.\n",
      "15. Irony: The use of words or situations to convey a meaning that is opposite of its literal meaning.\n",
      "16. Symbolism: The use of objects, characters, or events to represent abstract ideas or concepts.\n",
      "17. Allusion: A reference to another work of literature, person, or event within the piece.\n",
      "18. Conflict: The struggle between opposing forces or characters in the story.\n",
      "19. Suspense: The tension or excitement created by uncertainty about what will happen next in the story.\n",
      "20. Climax: The turning point or most intense moment in the story.\n",
      "21. Resolution: The conclusion of the story, where conflicts are resolved and loose ends are tied up.\n"
     ]
    }
   ],
   "source": [
    "prompt = \"\"\"\n",
    "Can you please generate a list of tone attributes and a description to describe a piece of writing by?\n",
    "\n",
    "Things like pace, mood, etc.\n",
    "\n",
    "Respond with nothing else besides the list\n",
    "\"\"\"\n",
    "\n",
    "how_to_describe_tone = llm.predict(prompt)\n",
    "print (how_to_describe_tone)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fd75f2c0",
   "metadata": {},
   "source": [
    "Ok great! Now that we have a solid list of ideas on how to instruct our language model for tone, let's do some tone extraction!\n",
    "\n",
    "I found that when I asked the model for a description of the tone it would be passive and noncommittal, so I included a line in the prompt about taking an active voice."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "8b1a75c5",
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_authors_tone_description(how_to_describe_tone, users_tweets):\n",
    "    template = \"\"\"\n",
    "        You are an AI Bot that is very good at generating writing in a similar tone as examples.\n",
    "        Be opinionated and have an active voice.\n",
    "        Take a strong stance with your response.\n",
    "\n",
    "        % HOW TO DESCRIBE TONE\n",
    "        {how_to_describe_tone}\n",
    "\n",
    "        % START OF EXAMPLES\n",
    "        {tweet_examples}\n",
    "        % END OF EXAMPLES\n",
    "\n",
    "        List out the tone qualities of the examples above\n",
    "        \"\"\"\n",
    "\n",
    "    prompt = PromptTemplate(\n",
    "        input_variables=[\"how_to_describe_tone\", \"tweet_examples\"],\n",
    "        template=template,\n",
    "    )\n",
    "\n",
    "    final_prompt = prompt.format(how_to_describe_tone=how_to_describe_tone, tweet_examples=users_tweets)\n",
    "\n",
    "    authors_tone_description = llm.predict(final_prompt)\n",
    "\n",
    "    return authors_tone_description"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f7dfbe6a",
   "metadata": {},
   "source": [
    "Let's combine the tone description and examples to see what tone attributes the model assigned to Bill Gates"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "183464e3",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1. Pace: Moderate, allowing for thoughtful reflection on the topics discussed.\n",
      "2. Mood: Optimistic and enthusiastic, highlighting positive aspects and potential solutions.\n",
      "3. Tone: Engaging and informative, with a strong emphasis on personal experiences and opinions.\n",
      "4. Voice: Confident and authoritative, showcasing expertise and passion for the subjects.\n",
      "5. Diction: Clear and concise, using accessible language to convey complex ideas.\n",
      "6. Syntax: Straightforward and well-structured sentences, making the content easy to follow.\n",
      "7. Imagery: Evocative and descriptive, painting vivid pictures of experiences and situations.\n",
      "8. Theme: Focused on innovation, progress, and the potential for positive change.\n",
      "9. Point of View: First person, providing a personal perspective on the topics discussed.\n",
      "10. Structure: Organized and coherent, with a logical flow of ideas and information.\n",
      "11. Dialogue: Limited, but when present, it is engaging and relevant to the topic.\n",
      "12. Characterization: Presents individuals in a positive light, emphasizing their dedication and achievements.\n",
      "13. Setting: Global, with a focus on specific countries or regions where progress is being made.\n",
      "14. Foreshadowing: Hints at future developments and breakthroughs in various fields.\n",
      "15. Irony: Minimal, as the focus is on genuine progress and optimism.\n",
      "16. Symbolism: Limited, with more emphasis on real-world examples and achievements.\n",
      "17. Allusion: Occasional references to other works, events, or individuals to provide context or support.\n",
      "18. Conflict: Implicit, as the challenges faced by humanity are the driving force behind the discussed innovations and solutions.\n",
      "19. Suspense: Minimal, as the focus is on sharing information and insights rather than creating tension.\n",
      "20. Climax: Not applicable, as the content is primarily informative and opinion-based.\n",
      "21. Resolution: Concludes with a sense of hope and optimism for the future, as well as a call to action for continued progress.\n"
     ]
    }
   ],
   "source": [
    "authors_tone_description = get_authors_tone_description(how_to_describe_tone, users_tweets)\n",
    "print (authors_tone_description)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8eeead41",
   "metadata": {},
   "source": [
    "Great, now that we have Bill Gates' tone style, let's put those tone instructions in with the prompt we had before to see if it helps"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "a99dee96",
   "metadata": {},
   "outputs": [],
   "source": [
    "template = \"\"\"\n",
    "% INSTRUCTIONS\n",
    " - You are an AI Bot that is very good at mimicking an author writing style.\n",
    " - Your goal is to write content with the tone that is described below.\n",
    " - Do not go outside the tone instructions below\n",
    " - Do not use hashtags or emojis\n",
    " - Respond in the tone of Bill Gates\n",
    "\n",
    "% Description of the authors tone:\n",
    "{authors_tone_description}\n",
    "\n",
    "% Authors writing samples\n",
    "{tweet_examples}\n",
    "\n",
    "% YOUR TASK\n",
    "Please create a tweet about going to the park and eating a sandwich.\n",
    "\"\"\"\n",
    "\n",
    "prompt = PromptTemplate(\n",
    "    input_variables=[\"authors_tone_description\", \"tweet_examples\"],\n",
    "    template=template,\n",
    ")\n",
    "\n",
    "final_prompt = prompt.format(authors_tone_description=authors_tone_description, tweet_examples=users_tweets)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "d7b48094",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\"I recently took a leisurely stroll through the park, enjoying the beauty of nature and savoring a delicious sandwich. It's moments like these that remind us of the simple pleasures in life and inspire us to continue working towards a brighter future for all. https://t.co/9YzF8KJ6rP\""
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "llm.predict(final_prompt)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f289c285",
   "metadata": {},
   "source": [
    "Hmm, better! Not wonderful.\n",
    "\n",
    "Let's try out the final approach"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c00fe80b",
   "metadata": {},
   "source": [
    "## Method 4 - **Technique Fusion:** Combine multiple techniques to mimic tone\n",
    "\n",
    "After a lot of experimentation I've found the below tips to be helpful\n",
    "\n",
    "* **Don't reference the word 'tweet' in your prompt** - The model has an extremely strong bias towards what a 'tweet' is and will overload you with hashtags and emojis. Instead, call it \"a short public statement around 300 characters\"\n",
    "* **Ask the LLM for similar sounding authors** - While the model bias around the word 'tweet' (point #1) isn't great, we can use bias in our favor. Ask the LLM which authors the writing style sounds like, then ask the LLM to respond like that author. It's not great that the model is basing the tone off *another* person, but it's a great 89% solution. I learned of this technique from [Scott Mitchell](https://twitter.com/mitchell360/status/1657909800389464064).\n",
    "* **Examples should be in the output format you want** - Everyone has a different voice: Twitter voice, email voice, etc. Make sure that the examples you feed to the prompt are in the same voice as the output you want. Ex: Don't expect a book to be written from Twitter examples.\n",
    "* **Use the language model to extract tone** - If you are at a loss for words on how to describe the tone you want, have the language model describe it for you. I found I needed to tell the model to be opinionated; it was too grey-area before.\n",
    "* **Topics matter** - Have the model propose topics *first*, *then* give you a tweet. Not only is it better to write about things the author would actually talk about, it also keeps the model on track to have it outline the topics *first* and then respond\n",
    "\n",
    "Let's first identify authors the model thinks the example tweets sound like, then we'll reference those later. Keep in mind this isn't a true classification exercise and the point isn't to be 100% correct on similar people; it's to get a reference to who the model *thinks* is similar so we can use that intuition for instructions later."
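    "\n",
    "As a quick sketch of the first tip, here's how the request from Method #1 might be rephrased (the exact wording is just an illustration):\n",
    "\n",
    "```python\n",
    "# Hypothetical rephrasing per tip #1: avoid the word 'tweet' to dodge hashtag/emoji bias\n",
    "prompt = \"\"\"\n",
    "Please create me a short public statement, around 300 characters, about going to the park and eating a sandwich.\n",
    "\n",
    "% TONE\n",
    " - Don't use any emojis or hashtags.\n",
    " - Respond in the tone of Bill Gates\n",
    "\"\"\"\n",
    "```\n",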
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "a24aa819",
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_similar_public_figures(tweet_examples):\n",
    "    template = \"\"\"\n",
    "    You are an AI Bot that is very good at identifying authors, public figures, or writers whose style matches a piece of text\n",
    "    Your goal is to identify which authors, public figures, or writers sound most similar to the text below\n",
    "\n",
    "    % START EXAMPLES\n",
    "    {tweet_examples}\n",
    "    % END EXAMPLES\n",
    "\n",
    "    Which authors (list up to 4 if necessary) most closely resemble the examples above? Only respond with the names separated by commas\n",
    "    \"\"\"\n",
    "\n",
    "    prompt = PromptTemplate(\n",
    "        input_variables=[\"tweet_examples\"],\n",
    "        template=template,\n",
    "    )\n",
    "\n",
    "    # Using the short list of examples so save on tokens and (hopefully) the top tweets\n",
    "    final_prompt = prompt.format(tweet_examples=tweet_examples)\n",
    "\n",
    "    authors = llm.predict(final_prompt)\n",
    "    return authors\n"
   ]
  },
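  {
   "cell_type": "markdown",
   "id": "c3f1a9b2",
   "metadata": {},
   "source": [
    "Since the prompt asks for names separated by commas, the model returns one string. If you want to work with the authors individually, here's a minimal sketch (note: `parse_authors` is a hypothetical helper, not something defined elsewhere in this notebook):\n",
    "\n",
    "```python\n",
    "# Hypothetical helper: turn the model's comma-separated author string into a clean list\n",
    "def parse_authors(authors_string):\n",
    "    return [name.strip() for name in authors_string.split(',') if name.strip()]\n",
    "```\n",
    "\n",
    "For example, `parse_authors('Bill Gates, Melinda French Gates')` gives `['Bill Gates', 'Melinda French Gates']`."
   ]
  },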
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "b0a83b83",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Bill Gates\n"
     ]
    }
   ],
   "source": [
    "authors = get_similar_public_figures(users_tweets)\n",
    "print (authors)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7c1502d1",
   "metadata": {},
   "source": [
    "Ok that's not that exciting! Becuase we used Bill Gates' example tweets. Trust me that it's better with less-known people. We'll try this more later.\n",
    "\n",
    "At last, the final output. Let's bring it all together in a single prompt. Notice the 2 step process in the \"your task\" section below"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "eeb512dc",
   "metadata": {},
   "outputs": [],
   "source": [
    "template = \"\"\"\n",
    "% INSTRUCTIONS\n",
    " - You are an AI Bot that is very good at mimicking an author writing style.\n",
    " - Your goal is to write content with the tone that is described below.\n",
    " - Do not go outside the tone instructions below\n",
    "\n",
    "% Mimic These Authors:\n",
    "{authors}\n",
    "\n",
    "% Description of the authors tone:\n",
    "{tone}\n",
    "\n",
    "% Authors writing samples\n",
    "{example_text}\n",
    "% End of authors writing samples\n",
    "\n",
    "% YOUR TASK\n",
    "1st - Write out topics that this author may talk about\n",
    "2nd - Write a concise passage (under 300 characters) as if you were the author described above\n",
    "\"\"\"\n",
    "\n",
    "method_4_prompt_template = PromptTemplate(\n",
    "    input_variables=[\"authors\", \"tone\", \"example_text\"],\n",
    "    template=template,\n",
    ")\n",
    "\n",
    "# Using the short list of examples so save on tokens and (hopefully) the top tweets\n",
    "final_prompt = method_4_prompt_template.format(authors=authors,\n",
    "                                               tone=authors_tone_description,\n",
    "                                               example_text=users_tweets)"
   ]
  },
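  {
   "cell_type": "markdown",
   "id": "e5b2c7d1",
   "metadata": {},
   "source": [
    "The task asks for a passage under 300 characters, but LLMs don't reliably honor character limits. A minimal post-check you could bolt on (a hedged sketch; `enforce_char_limit` is a hypothetical helper), truncating at a word boundary when the output runs long:\n",
    "\n",
    "```python\n",
    "# Hypothetical post-check: truncate an over-long passage at the last word boundary\n",
    "def enforce_char_limit(text, limit=300):\n",
    "    if len(text) <= limit:\n",
    "        return text\n",
    "    return text[:limit].rsplit(' ', 1)[0]\n",
    "```\n",
    "\n",
    "Truncation is the crude option; in practice you could also re-prompt the model with the over-long passage and ask it to shorten it."
   ]
  },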
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "a99a9e8a",
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "# print(final_prompt) # Print this out if you want to see the full final prompt. It's long so I'll omit it for now"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "9e844c7a",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1. Topics that this author may talk about:\n",
      "- Global health and healthcare innovations\n",
      "- Education and its impact on society\n",
      "- Climate change and sustainable development\n",
      "- Technological advancements, such as artificial intelligence and autonomous vehicles\n",
      "- Poverty reduction and economic empowerment\n",
      "- Personal experiences and learnings from travels\n",
      "- Inspirational stories of individuals making a difference\n",
      "\n",
      "2. Concise passage as the author:\n",
      "I recently visited a remarkable school in Kenya, where students are using solar-powered tablets to access quality education. It's inspiring to see how technology can transform lives and create a brighter future for these children.\n"
     ]
    }
   ],
   "source": [
    "output = llm.predict(final_prompt)\n",
    "print (output)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bc9653e8",
   "metadata": {},
   "source": [
    "After a ton of iteration, I'm actually happy with that. But let's see this thing spread it's wings on multiple people.\n",
    "\n",
    "## Extra Credit: Loop this process through many twitter accounts\n",
    "\n",
    "Let's see what different twitter accounts sound like. Note, this will burn tokens so use at your own risk!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "id": "e93f245b",
   "metadata": {},
   "outputs": [],
   "source": [
    "results = {} # To store the results\n",
    "\n",
    "# # Or if you just wanna see the results of the loop below you can open up this json\n",
    "# import json\n",
    "# with open(\"../data/matching_tone_samples.json\", \"r\") as f:\n",
    "#     tone_samples = json.load(f)\n",
    "# print (tone_samples)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "db4e440a",
   "metadata": {},
   "outputs": [],
   "source": [
    "accounts_to_mimic = ['jaltma', 'lindayacc', 'ShaanVP', 'dharmesh', 'sweatystartup', 'levelsio', 'Suhail', \\\n",
    "                     'hwchase17', 'elonmusk', 'packyM', 'benedictevans', 'paulg', 'AlexHormozi', 'DavidDeutschOxf', \\\n",
    "                     'stephsmithio', 'sophiaamoruso']\n",
    "                     \n",
    "\n",
    "for user_screen_name in accounts_to_mimic:\n",
    "    \n",
    "    # Checking to see if we already have done the user. If so, move to the next one\n",
    "    if user_screen_name in results:\n",
    "        continue\n",
    "    \n",
    "    results[user_screen_name] = \"\"\n",
    "    \n",
    "    user_screenname_string = f\"User: {user_screen_name}\"\n",
    "    print (user_screenname_string)\n",
    "    results[user_screen_name] += user_screenname_string\n",
    "    \n",
    "    # Get their top tweets\n",
    "    users_tweets = get_original_tweets(user_screen_name)\n",
    "    \n",
    "    # Get their similar authors\n",
    "    authors = get_similar_public_figures(users_tweets)\n",
    "    authors_string = f\"Similar Authors: {authors}\"\n",
    "    print (authors_string)\n",
    "    results[user_screen_name] += \"\\n\" + authors_string\n",
    "    \n",
    "    # Get their tone description\n",
    "    authors_tone_description = get_authors_tone_description(how_to_describe_tone, users_tweets)\n",
    "    \n",
    "    # Only printing the first four attributes to save space\n",
    "    sample_description = authors_tone_description.split('\\n')[:4]\n",
    "    sample_decscription_string = f\"Tone Description: {sample_description}\"\n",
    "    print(sample_decscription_string)\n",
    "    results[user_screen_name] += \"\\n\" + sample_decscription_string + \"\\n\"\n",
    "    \n",
    "    \n",
    "    # Bring it all together in a single prompt\n",
    "    prompt = method_4_prompt_template.format(authors=authors,\n",
    "                                             tone=authors_tone_description,\n",
    "                                             example_text=users_tweets)\n",
    "    \n",
    "    output = llm.predict(prompt)\n",
    "    results[user_screen_name] += \"\\n\" + output\n",
    "    \n",
    "    print (\"\\n\")\n",
    "    print (output)\n",
    "    print (\"\\n\\n\")"
   ]
  }
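,
  {
   "cell_type": "markdown",
   "id": "f4a6b8c2",
   "metadata": {},
   "source": [
    "Finally, you may want to persist `results` so you can reload it later without re-spending tokens, mirroring the commented-out load cell above. A minimal sketch (`save_tone_samples` is a hypothetical helper, and the path is an assumption matching that load cell):\n",
    "\n",
    "```python\n",
    "import json\n",
    "\n",
    "# Hypothetical save step, mirroring the commented-out load cell above;\n",
    "# the path is an assumption\n",
    "def save_tone_samples(results, path='../data/matching_tone_samples.json'):\n",
    "    with open(path, 'w') as f:\n",
    "        json.dump(results, f, indent=2)\n",
    "```"
   ]
  }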
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
