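# GitHub issue form for Auto-GPT bug reports (issue forms live under .github/ISSUE_TEMPLATE/).
# The fields below render as a structured form when a user opens a new issue,
# and the labels listed here are applied to the new issue automatically.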
name: Bug report 🐛
description: Create a bug report for Auto-GPT.
labels: ['status: needs triage']
body:
  - type: markdown
    attributes:
      value: |
        ### ⚠️ Before you continue
        * Check out our [backlog], [roadmap] and join our [discord] to discuss what's going on
        * If you need help, you can ask in the [discussions] section or in [#tech-support]
        * **Thoroughly search the [existing issues] before creating a new one**

        [backlog]: https://github.com/orgs/Significant-Gravitas/projects/1
        [roadmap]: https://github.com/orgs/Significant-Gravitas/projects/2
        [discord]: https://discord.gg/autogpt
        [discussions]: https://github.com/Significant-Gravitas/Auto-GPT/discussions
        [#tech-support]: https://discord.com/channels/1092243196446249134/1092275629602394184
        [existing issues]: https://github.com/Significant-Gravitas/Auto-GPT/issues?q=is%3Aissue
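  # Required confirmation checkbox: the issue cannot be submitted until this box is ticked.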
  - type: checkboxes
    attributes:
      label: ⚠️ Search for existing issues first ⚠️
      description: >
        Please [search the history](https://github.com/Torantulino/Auto-GPT/issues)
        to see if an issue already exists for the same problem.
      options:
        - label: I have searched the existing issues, and there is no existing issue for my problem
          required: true
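  # Static guidance for reporters; markdown elements are shown in the form but not included in the submitted issue.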
  - type: markdown
    attributes:
      value: |
        Please provide a searchable summary of the issue in the title above ⬆️.

        ⚠️ SUPER-busy repo, please help the volunteer maintainers.
        The less time we spend here, the more time we spend building AutoGPT.

        Please help us help you:
        - Does it work on `stable` branch (https://github.com/Torantulino/Auto-GPT/tree/stable)?
        - Does it work on current `master` (https://github.com/Torantulino/Auto-GPT/tree/master)?
        - Search for existing issues, "add comment" is tidier than "new issue"
        - Ask on our Discord (https://discord.gg/autogpt)
        - Provide relevant info:
          - Provide commit-hash (`git rev-parse HEAD` gets it)
          - If it's a pip/packages issue, provide pip version, python version
          - If it's a crash, provide traceback.
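  # Environment dropdown; `validations: required` makes a selection mandatory before submitting.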
  - type: dropdown
    attributes:
      label: Which Operating System are you using?
      description: >
        Please select the operating system you were using to run Auto-GPT when this problem occurred.
      options:
        - Windows
        - Linux
        - MacOS
        - Docker
        - Devcontainer / Codespace
        - Windows Subsystem for Linux (WSL)
        - Other (Please specify in your issue)
    validations:
      required: true
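  # Model dropdown, used to triage problems caused by GPT-3.5 limitations.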
  - type: dropdown
    attributes:
      label: GPT-3 or GPT-4?
      description: >
        If you are using Auto-GPT with `--gpt3only`, your problems may be caused by
        the [limitations](https://github.com/Significant-Gravitas/Auto-GPT/issues?q=is%3Aissue+label%3A%22AI+model+limitation%22) of GPT-3.5.
      options:
        - GPT-3.5
        - GPT-4
    validations:
      required: true
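  # Free-text sections: reproduction steps, observed behavior, and expected behavior.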
  - type: textarea
    attributes:
      label: Steps to reproduce 🕹
      description: |
        **⚠️ Issues that we can't reproduce will be closed.**

  - type: textarea
    attributes:
      label: Current behavior 😯
      description: Describe what happens instead of the expected behavior.

  - type: textarea
    attributes:
      label: Expected behavior 🤔
      description: Describe what should happen.
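  # Optional prompt section; the `value` block pre-fills the text area with a fenced yaml block to paste into.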
  - type: textarea
    attributes:
      label: Your prompt 📝
      description: >
        If applicable, please provide the prompt you are using. Your prompt is stored in your `ai_settings.yaml` file.
      value: |
        ```yaml
        # Paste your prompt here
        ```
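  # Log section; the pre-filled value gives reporters a fenced log block to paste their output into.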
  - type: textarea
    attributes:
      label: Your Logs 📒
      description: |
        Please include the log showing your error and the command that caused it, if applicable.
        You can copy it from your terminal or from `logs/activity.log`.
        This will help us understand your issue better!

        <details>
        <summary><i>Example</i></summary>

        ```log
        INFO NEXT ACTION: COMMAND = execute_shell ARGUMENTS = {'command_line': 'some_command'}
        INFO -=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-=
        Traceback (most recent call last):
          File "/home/anaconda3/lib/python3.9/site-packages/openai/api_requestor.py", line 619, in _interpret_response
            self._interpret_response_line(
          File "/home/anaconda3/lib/python3.9/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
            raise self.handle_error_response(
        openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 10982 tokens (10982 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
        ```
        </details>
      value: |
        ```log
        <insert your logs here>
        ```