{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## [ChatGPT Prompt Engineering for Developers](https://learn.deeplearning.ai/chatgpt-prompt-eng/)"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from auth import API_KEY\n",
"import openai"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"openai.api_key = API_KEY"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"def get_completion(prompt, model='gpt-3.5-turbo'):\n",
" messages = [{'role':'user', 'content': prompt}]\n",
" response = openai.ChatCompletion.create(\n",
" model=model,\n",
" messages = messages,\n",
" temperature = 0, # this is the degree of randomness of the model's output\n",
" )\n",
" return response.choices[0].message['content']"
]
},
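{
"cell_type": "markdown",
"metadata": {},
"source": [
"A hedged sketch (an assumption, not shown in the course at this point): the `messages` list accepted by `openai.ChatCompletion.create` can hold more than one entry, including a `system` role that sets the assistant's overall behavior before the user prompt is processed. The function name and example content strings below are illustrative, not from the course."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def get_completion_from_messages(messages, model='gpt-3.5-turbo', temperature=0):\n",
"    # Sketch of a multi-message variant of get_completion; it uses the same\n",
"    # openai.ChatCompletion API as above, but the caller supplies the full\n",
"    # messages list (e.g. a system message plus a user message).\n",
"    response = openai.ChatCompletion.create(\n",
"        model=model,\n",
"        messages=messages,\n",
"        temperature=temperature,\n",
"    )\n",
"    return response.choices[0].message['content']\n",
"\n",
"# Illustrative messages list (assumed content):\n",
"messages = [\n",
"    {'role': 'system', 'content': 'You are an assistant that answers in one sentence.'},\n",
"    {'role': 'user', 'content': 'Why are clear prompts important?'},\n",
"]"
]
},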
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"text = f\"\"\"\n",
"You should express what you want a model to do by \\ \n",
"providing instructions that are as clear and \\ \n",
"specific as you can possibly make them. \\ \n",
"This will guide the model towards the desired output, \\ \n",
"and reduce the chances of receiving irrelevant \\ \n",
"or incorrect responses. Don't confuse writing a \\ \n",
"clear prompt with writing a short prompt. \\ \n",
"In many cases, longer prompts provide more clarity \\ \n",
"and context for the model, which can lead to \\ \n",
"more detailed and relevant outputs.\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"prompt = f\"\"\"\n",
"Summarize the text delimited by triple backticks \\ \n",
"into a single sentence.\n",
"```{text}```\n",
"\"\"\""
]
},
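{
"cell_type": "markdown",
"metadata": {},
"source": [
"A hedged aside (an assumption, not stated in the cells above): the triple-backtick delimiters also help when `text` comes from an untrusted source, since the model is told to treat the delimited span as data to summarize rather than as new instructions. The helper below is hypothetical and simply mirrors the prompt defined above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def build_summary_prompt(text):\n",
"    # Hypothetical helper (not from the course): wraps arbitrary text in\n",
"    # triple-backtick delimiters, mirroring the prompt defined above.\n",
"    return (\n",
"        'Summarize the text delimited by triple backticks '\n",
"        'into a single sentence.\\n'\n",
"        f'```{text}```'\n",
"    )"
]
},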
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Clear and specific instructions should be provided to guide a model towards the desired output, and longer prompts can provide more clarity and context for the model, leading to more detailed and relevant outputs.\n"
]
}
],
"source": [
"response = get_completion(prompt)\n",
"print(response)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}