title (string, 6–24 chars) | size (int64, 42–4.1k) | path (string, 22–75 chars) | document (string, 42–4.1k chars)
---|---|---|---
README.md | 4,096 | flamethrower/README.md | <h1 align='center'>🔥 flamethrower</h1>
No bugs can survive the test of <span style='color: orange'>fire</span>; not even the ones you wrote into your codebase 🪲.
[![GitHub Repo](https://img.shields.io/badge/scottsus-flamethrower-red?&logo=github)](https://github.com/scottsus/flamethrower)
![PyPI](https://img.shields.io/pypi/v/flamethrower.svg)
![Code Size](https://img.shields.io/github/languages/code-size/scottsus/flamethrower.svg)
[![Discord](https://img.shields.io/discord/XP4vVUQKPf.svg?label=&logo=discord&logoColor=ffffff&color=7389D8&labelColor=6A7EC2)](https://discord.gg/XP4vVUQKPf)
![License](https://img.shields.io/github/license/scottsus/flamethrower.svg)
[![Twitter](https://img.shields.io/twitter/follow/susantoscott.svg)](https://twitter.com/susantoscott)
## What is this?
🔥 flamethrower is an open source, multi-agent, context-intelligent debugger that uses AI superpowers to automate the painful task of debugging. Think a combination of GitHub Copilot's context-awareness and [KillianLucas' Open Interpreter](https://github.com/KillianLucas/open-interpreter), packed into a beautiful shell that works out of the box with any existing repo.
Automate: [ Write Code → Run Action → Check Logs → Repeat ] 🚀🚀
**Main Differentiators**
- 🔥 Automate the most painful part of writing code: print statements & error logs
- ⚙️ Specialized context agent for operating within an existing repo
- 🤖 Debugging agent optimized to iteratively brute-force locate and fix bugs
- 📦 Out-of-the-box support for any Unix machine (no VS Code or VS Code alternatives required)
- 💨 Seamless integration into any existing repo; just type `flamethrower`
## Demo
https://github.com/scottsus/flamethrower/assets/88414565/e3c979c0-40ff-4976-aa30-2503a2827742
## Quick Start
<img src='https://github.com/scottsus/flamethrower/assets/88414565/4be238a7-642a-4149-a1ed-98ff7c61f9b8' alt='Quick Start' width='500px'/>
### Install 🔥 flamethrower
```
pip install flamethrower
```
Or, if you already have it installed, upgrade to the latest version:
```
pip install --upgrade flamethrower
```
### Terminal
Navigate to your current workspace, and simply run `flamethrower`, or `ft` for the pros.
```
cd ./unbelievably/complicated/workspace
flamethrower
```
### Example Usage
Use lowercase letters for commands you run in the shell, like `python main.py` or `node server.ts`
```
🔥 flamethrower: Debugging on Autopilot
Instructions:
- ⌨️ Regular shell Use commands like ls, cd, python hello.py
- 🤖 LLM assistance Start command with a Capital letter, try Who are you?
- π Context Intelligent context-awareness from command, files, and stdout logs
- 🪵 Terminal logs All conversation & code output inside flamethrower is logged
...
$ python main.py -> SOME_ERROR
$ Wtf???? # Literally type this in the terminal
```
An implementation run is initiated with a natural language query that begins with an uppercase letter.
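Under the hood, this dispatch happens on the very first keystroke: `CommandHandler.handle_first_key` (included later in this dump) flags the line as a natural language query when the first key is uppercase. A trimmed sketch of that check, with the helper name ours for illustration:
```
# Trimmed from CommandHandler.handle_first_key (flamethrower/shell/command_handler.py):
# an uppercase first keystroke routes the whole line to the LLM instead of the shell.
def is_natural_language_query(first_key: bytes) -> bool:
    return first_key.isupper()

assert is_natural_language_query(b'W')      # 'Wtf????' goes to the LLM
assert not is_natural_language_query(b'p')  # 'python main.py' runs in the shell
```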
## Features
### 🤖 AFK Debugging
If you say 'Yes', 🔥 flamethrower will debug in the background while you focus on other tasks at hand. It acts much like a human engineer would: adding `print` statements to find the root cause of the issue (which, as we know, is the most annoying part). We find this pattern strikingly effective, and believe it is where LAMs (Large Action Models) have their strongest use case.
If it looks like 🔥 flamethrower is obviously headed in the direction of doom, simply press `CTRL+C` and give it more suggestions or context.
<img src='https://github.com/scottsus/flamethrower/assets/88414565/11886370-1da4-478e-8fac-853fd305621a' alt='AFK' width='500px'/>
### ποΈ Conversation History
Any shell command or natural language query that happens within the context of 🔥 flamethrower is captured in the conversation history by default (a sketch of a captured entry follows this list). That means you can:
- ask about an error that just happened, or one from 2 dialogues ago
- follow up on a previous response provided by 🔥 flamethrower
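For reference, each captured turn is stored as a plain message dict by `ConversationManager.append_conv` (shown later in this dump). A hypothetical `conv.json` after one failing command might contain:
```
# Hypothetical .flamethrower/logs/conv.json after one failing command; the dict
# shapes follow ConversationManager.append_conv and update_conv_from_stdout.
conv = [
    {'role': 'user', 'content': '~/my/workspace $ python main.py', 'name': 'human'},
    {'role': 'user', 'content': 'Traceback (most recent call last): ...', 'name': 'stdout'},
    {'role': 'assistant', 'content': 'The traceback points to ...'},  # assistant turns carry no name
]
```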
### π Prompt Transparency
Prompts sent to the LLM are transparent and easy to observe. All 🔥 flamethrower metadata are neat |
README.md | 2,530 | flamethrower/README.md | d305621a' alt='AFK' width='500px'/>
### ποΈ Conversation History
Any shell command or natural language query that happens within the context of 🔥 flamethrower is captured in the conversation history by default. That means you can:
- ask about an error that just happened, or one from 2 dialogues ago
- follow up on a previous response provided by 🔥 flamethrower
### π Prompt Transparency
Prompts sent to the LLM are transparent and easy to observe. All 🔥 flamethrower metadata are neatly kept in a `.flamethrower` subdirectory, including prompts, conversations, logs, directory info, summaries, and other metadata.
<img src='https://github.com/scottsus/flamethrower/assets/88414565/8905018d-41f5-48e8-92f5-da2b0512af3d' alt='Transparency' width='500px'/>
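The file names map one-to-one to the path helpers in `constants.py` (included later in this dump); roughly, the subdirectory looks like:
```
.flamethrower/
├── README.md
├── .gitignore
├── .env
├── zsh/
│   ├── .zshrc
│   └── .zsh_history
└── logs/
    ├── workspace_summary.log
    ├── dir_tree.log
    ├── dir_dict.json
    ├── conv.json
    ├── conv.log
    ├── current_files.log
    ├── last_prompt.log
    ├── last_response.log
    └── update.patch
```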
### 🏃‍♀️ Real Time File Tracking
Every time you send a query, the latest version of your files is sent over, meaning 🔥 flamethrower knows when you change your files and is ready to process those changes.
<img src='https://github.com/scottsus/flamethrower/assets/88414565/f3f49b91-1cc8-452c-8625-54d88dcb2a42' alt='Context' width='500px'/>
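Mechanically, the refresh is simple: every call to `PromptGenerator.construct_messages` (see `prompt.py` later in this dump) re-reads the chosen files from disk, so the prompt always reflects your latest edits. A trimmed sketch:
```
import os

# Trimmed from PromptGenerator.construct_messages: target files are re-read
# from disk on every query, so fresh edits always make it into the prompt.
def read_target_files(target_file_names: list) -> str:
    contents = ''
    for file_name in target_file_names:
        file_path = os.path.join(os.getcwd(), file_name)
        try:
            with open(file_path, 'r') as f:
                # (the real code also wraps each file's contents in a code fence)
                contents += f'{file_name}\n{f.read().strip()}\n'
        except (UnicodeDecodeError, FileNotFoundError):
            pass  # skip binaries and files that no longer exist
    return contents
```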
## Motivation for 🔥 flamethrower
### 👩‍✈️ GitHub Copilot
Closed source GitHub Copilot draws context very effectively, and `Quick Fix` is a neat feature that explains errors from stdout logs whenever the last command returns a non-zero exit code.
### 🤖 Open Interpreter
The Open Interpreter, an open-source gem, specializes in crafting new code from the ground up. It's a favorite among data scientists and those needing sophisticated chart plotting, thanks to its iterative approach to achieving desired results.
### 🔬 Research
🔥 flamethrower combines the agency afforded by Large Action Models (LAMs) with the workspace awareness of Copilot, allowing it to take context-specific suggestions and iterate until a successful outcome. 🔥 flamethrower is workspace-first, and aims to serve software engineers on complex tasks that need a lot of context management.
## 🔥 Contributing
🔥 flamethrower is everyone's debugger. Fork it for your own use case, and, one PR at a time, we can make the world a more bug-free place ✨. Just ping me at scottsus@usc.edu and I'll help you get started.
## 🛫 Project Roadmap
- [x] 🧪 Better testing
- [ ] π Telemetry and the ability to opt in/out
- [ ] 🥽 LLM Vision to debug visual elements
- [ ] 🦙 Running CodeLlama locally
- [ ] 🤖 Other models besides OpenAI
- [ ] 🦾 Default model finetuned on telemetry data
- [ ] ποΈ VS Code integration
- [ ] 💻 Browser interface
|
goodbye_world.py | 91 | flamethrower/context/goodbye_world.py | # /Users/scottsus/Projects/flamethrower/context/goodbye_world.py
print("Goodbye, world!")
|
README.md | 4,096 | flamethrower/.flamethrower/README.md | <h1 align='center'>🔥 flamethrower</h1>
No bugs can survive the test of <span style='color: orange'>fire</span>; not even the ones you wrote into your codebase 🪲.
[![GitHub Repo](https://img.shields.io/badge/scottsus-flamethrower-red?&logo=github)](https://github.com/scottsus/flamethrower)
![PyPI](https://img.shields.io/pypi/v/flamethrower.svg)
![Code Size](https://img.shields.io/github/languages/code-size/scottsus/flamethrower.svg)
[![Discord](https://img.shields.io/discord/XP4vVUQKPf.svg?label=&logo=discord&logoColor=ffffff&color=7389D8&labelColor=6A7EC2)](https://discord.gg/XP4vVUQKPf)
![License](https://img.shields.io/github/license/scottsus/flamethrower.svg)
[![Twitter](https://img.shields.io/twitter/follow/susantoscott.svg)](https://twitter.com/susantoscott)
## What is this?
🔥 flamethrower is an open source, multi-agent, context-intelligent debugger that uses AI superpowers to automate the painful task of debugging. Think a combination of GitHub Copilot's context-awareness and [KillianLucas' Open Interpreter](https://github.com/KillianLucas/open-interpreter), packed into a beautiful shell that works out of the box with any existing repo.
Automate: [ Write Code → Run Action → Check Logs → Repeat ] 🚀🚀
**Main Differentiators**
- 🔥 Automate the most painful part of writing code: print statements & error logs
- ⚙️ Specialized context agent for operating within an existing repo
- 🤖 Debugging agent optimized to iteratively brute-force locate and fix bugs
- 📦 Out-of-the-box support for any Unix machine (no VS Code or VS Code alternatives required)
- 💨 Seamless integration into any existing repo; just type `flamethrower`
## Demo
https://github.com/scottsus/flamethrower/assets/88414565/e3c979c0-40ff-4976-aa30-2503a2827742
## Quick Start
<img src='https://github.com/scottsus/flamethrower/assets/88414565/4be238a7-642a-4149-a1ed-98ff7c61f9b8' alt='Quick Start' width='500px'/>
### Install 🔥 flamethrower
```
pip install flamethrower
```
Or, if you already have it installed, upgrade to the latest version:
```
pip install --upgrade flamethrower
```
### Terminal
Navigate to your current workspace, and simply run `flamethrower`, or `ft` for the pros.
```
cd ./unbelievably/complicated/workspace
flamethrower
```
### Example Usage
Use lowercase letters for commands you run in the shell, like `python main.py` or `node server.ts`
```
🔥 flamethrower: Debugging on Autopilot
Instructions:
- ⌨️ Regular shell Use commands like ls, cd, python hello.py
- 🤖 LLM assistance Start command with a Capital letter, try Who are you?
- π Context Intelligent context-awareness from command, files, and stdout logs
- 🪵 Terminal logs All conversation & code output inside flamethrower is logged
...
$ python main.py -> SOME_ERROR
$ Wtf???? # Literally type this in the terminal
```
An implementation run is initiated with a natural language query that begins with an uppercase letter.
## Features
### 🤖 AFK Debugging
If you say 'Yes', 🔥 flamethrower will debug in the background while you focus on other tasks at hand. It acts much like a human engineer would: adding `print` statements to find the root cause of the issue (which, as we know, is the most annoying part). We find this pattern strikingly effective, and believe it is where LAMs (Large Action Models) have their strongest use case.
If it looks like 🔥 flamethrower is obviously headed in the direction of doom, simply press `CTRL+C` and give it more suggestions or context.
<img src='https://github.com/scottsus/flamethrower/assets/88414565/11886370-1da4-478e-8fac-853fd305621a' alt='AFK' width='500px'/>
### ποΈ Conversation History
Any shell command or natural language query that happens within the context of 🔥 flamethrower is captured in the conversation history by default. That means you can:
- ask about an error that just happened, or one from 2 dialogues ago
- follow up on a previous response provided by 🔥 flamethrower
### π Prompt Transparency
Prompts sent to the LLM are transparent and easy to observe. All 🔥 flamethrower metadata are neat |
README.md | 2,530 | flamethrower/.flamethrower/README.md | d305621a' alt='AFK' width='500px'/>
### ποΈ Conversation History
Any shell command or natural language query that happens within the context of 🔥 flamethrower is captured in the conversation history by default. That means you can:
- ask about an error that just happened, or one from 2 dialogues ago
- follow up on a previous response provided by 🔥 flamethrower
### π Prompt Transparency
Prompts sent to the LLM are transparent and easy to observe. All 🔥 flamethrower metadata are neatly kept in a `.flamethrower` subdirectory, including prompts, conversations, logs, directory info, summaries, and other metadata.
<img src='https://github.com/scottsus/flamethrower/assets/88414565/8905018d-41f5-48e8-92f5-da2b0512af3d' alt='Transparency' width='500px'/>
### 🏃‍♀️ Real Time File Tracking
Every time you send a query, the latest version of your files is sent over, meaning 🔥 flamethrower knows when you change your files and is ready to process those changes.
<img src='https://github.com/scottsus/flamethrower/assets/88414565/f3f49b91-1cc8-452c-8625-54d88dcb2a42' alt='Context' width='500px'/>
## Motivation for 🔥 flamethrower
### 👩‍✈️ GitHub Copilot
Closed source GitHub Copilot draws context very effectively, and `Quick Fix` is a neat feature that explains errors from stdout logs whenever the last command returns a non-zero exit code.
### 🤖 Open Interpreter
The Open Interpreter, an open-source gem, specializes in crafting new code from the ground up. It's a favorite among data scientists and those needing sophisticated chart plotting, thanks to its iterative approach to achieving desired results.
### 🔬 Research
🔥 flamethrower combines the agency afforded by Large Action Models (LAMs) with the workspace awareness of Copilot, allowing it to take context-specific suggestions and iterate until a successful outcome. 🔥 flamethrower is workspace-first, and aims to serve software engineers on complex tasks that need a lot of context management.
## 🔥 Contributing
🔥 flamethrower is everyone's debugger. Fork it for your own use case, and, one PR at a time, we can make the world a more bug-free place ✨. Just ping me at scottsus@usc.edu and I'll help you get started.
## 🛫 Project Roadmap
- [x] 🧪 Better testing
- [ ] π Telemetry and the ability to opt in/out
- [ ] 🥽 LLM Vision to debug visual elements
- [ ] 🦙 Running CodeLlama locally
- [ ] 🤖 Other models besides OpenAI
- [ ] 🦾 Default model finetuned on telemetry data
- [ ] ποΈ VS Code integration
- [ ] 💻 Browser interface
|
e2e.py | 422 | flamethrower/tests/e2e.py | import time
import subprocess
import pyautogui as ag
def e2e() -> None:
cli_process = subprocess.Popen(['poetry', 'run', 'flamethrower'])
time.sleep(30)  # give the CLI and its one-time setup time to boot
ag.write('ls -lah')
ag.press('enter')
time.sleep(2)  # lowercase input runs as a regular shell command
ag.write('What is the capital of Brazil?')
ag.press('enter')
time.sleep(10)  # uppercase input is routed to the LLM; wait for its response
ag.write('exit')
ag.press('enter')
cli_process.wait()
if __name__ == "__main__":
e2e()
|
chat.py | 312 | flamethrower/examples/Easy-Basic_API_Request/chat.py | from openai import OpenAI
client = OpenAI()  # reads OPENAI_API_KEY from the environment
res = client.chat.completions.create(
model="gpt-3.5-turbo",
messages=[
{"role": "system", "content": "You are humorous and love emojis."},
{"role": "user", "content": "What is the meaning of life?"},  # valid roles are system/user/assistant
]
)
print(res.choices[0].message.content)  # choices is a list; take the first completion
|
README.md | 2,692 | flamethrower/examples/Easy-Basic_API_Request/.flamethrower/README.md | <h1 align='center'>🔥 flamethrower</h1>
LLM agents on your local machine: the ultimate debugging experience
[![GitHub Repo](https://img.shields.io/badge/scottsus-flamethrower-red?&logo=github)](https://github.com/scottsus/flamethrower)
![PyPI](https://img.shields.io/pypi/v/flamethrower.svg)
![Code Size](https://img.shields.io/github/languages/code-size/scottsus/flamethrower.svg)
[![Discord](https://img.shields.io/discord/XP4vVUQKPf.svg?label=&logo=discord&logoColor=ffffff&color=7389D8&labelColor=6A7EC2)](https://discord.gg/XP4vVUQKPf)
![License](https://img.shields.io/github/license/scottsus/flamethrower.svg)
[![Twitter](https://img.shields.io/twitter/follow/susantoscott.svg)](https://twitter.com/susantoscott)
## What is this?
🔥 flamethrower is an open source, high-level debugger that uses AI superpowers to draw context, offer suggestions, and iterate on those suggestions to accomplish a given task. Think a combination of GitHub Copilot's context-awareness and [KillianLucas' open source Open Interpreter](https://github.com/KillianLucas/open-interpreter), packed into a shell.
## Demo
https://github.com/scottsus/flamethrower/assets/88414565/93195176-c124-4528-b3c2-500ce87cd494
## Quick Start
```
pip install flamethrower
```
### API Keys
There's no getting around the friction of configuring API keys
```
export OPENAI_API_KEY=sk-xxxx
```
### Terminal
Navigate to your current workspace, and simply run `flamethrower`, or `ft` for the pros.
```
cd {UNBELIEVABLY_COMPLICATED_WORKSPACE}
flamethrower
```
### Example Usage
Use lowercase letters for commands you run in the shell, like `python main.py` or `node server.ts`
```
python main.py -> SOME_ERROR
Wtf???? 🤬 # Literally type this in the terminal
```
An implementation run is initiated with a natural language query that begins with an uppercase letter.
## Motivation for 🔥 flamethrower
### GitHub Copilot
Closed source GitHub Copilot draws context very effectively, and `Quick Fix` is a neat feature that explains error from stdout logs if the last command returned a non-zero return code.
### Open Interpreter
The Open Interpreter, an open-source gem, specializes in crafting new code from the ground up. It's a favorite among data scientists and those needing sophisticated chart plotting, thanks to its iterative approach to achieving desired results.
### A Research Project?
🔥 flamethrower combines the agency afforded by Large Action Models (LAMs) with the workspace awareness of Copilot, allowing it to take context-specific suggestions and iterate until a successful outcome. 🔥 flamethrower is workspace-first, and aims to serve software engineers on complex tasks that need a lot of context management.
|
prompt.py | 4,096 | flamethrower/src/flamethrower/context/prompt.py | import os
from datetime import datetime
from pydantic import BaseModel
import flamethrower.config.constants as config
from flamethrower.context.conv_manager import ConversationManager
from flamethrower.agents.util_agents.file_chooser import FileChooser
from flamethrower.utils.token_counter import TokenCounter
from flamethrower.shell.printer import Printer
from flamethrower.utils.pretty import pretty_print
from flamethrower.utils.colors import *
from typing import Dict, List
class PromptGenerator(BaseModel):
conv_manager: ConversationManager
token_counter: TokenCounter
printer: Printer
def __init__(
self,
conv_manager: ConversationManager,
token_counter: TokenCounter,
printer: Printer,
) -> None:
super().__init__(
conv_manager=conv_manager,
token_counter=token_counter,
printer=printer
)
self._greeting: str = generate_greeting()
self._description: str = get_project_description()
self._dir_structure: str = get_dir_structure()
@property
def greeting(self) -> str:
return self._greeting
@property
def description(self) -> str:
return self._description
@property
def dir_structure(self) -> str:
return self._dir_structure
def construct_messages(self, messages: List[Dict[str, str]]) -> List[Dict[str, str]]:
"""
Think of this as the `headers` for the LLM that will be attached to every new query.
"""
meta_information = []
def append_meta_information(role: str, content: str, name: str = 'human') -> None:
new_message = { 'role': role, 'content': content, 'name': name }
meta_information.append(new_message)
append_meta_information(
role='user',
content=f'This project is about {self.description}.\n' if self.description else ''
)
append_meta_information(
role='user',
content=f'Here is the directory structure:\n{self.dir_structure}\n' if self.dir_structure else ''
)
conv = pretty_print(messages)
try:
target_file_names = FileChooser().infer_target_file_paths(
self.description,
self.dir_structure,
conv,
)
self.printer.print_files(target_file_names)
except KeyboardInterrupt:
raise
except Exception:
raise
target_file_contents = ''
for file_name in target_file_names:
file_path = os.path.join(os.getcwd(), file_name)
try:
with open(file_path, 'r') as file:
target_file_contents += (
f'{file_name}\n'
f'```\n{file.read().strip()}\n```\n'
)
except UnicodeDecodeError:
pass
except FileNotFoundError:
pass
append_meta_information(
role='user',
content=f'Currently you are working with these files:\n{target_file_contents}\n' if target_file_contents else ''
)
append_meta_information(
role='user',
content=f'Here are the most recent conversations between the human, stdout logs, and assistant:\n{conv}\n' if conv else ''
)
with open(config.get_last_prompt_path(), 'w') as f:
f.write(pretty_print(meta_information))
return meta_information
"""
Helper functions
"""
def generate_greeting() -> str:
now = datetime.now()
current_hour = now.hour
if current_hour < 12:
return 'Good morning π'
elif current_hour < 18:
return 'Good afternoon π'
else:
return 'Good evening π'
def get_dir_structure() -> str:
try:
with open(config.get_dir_tree_path(), 'r') as f:
return f.read()
except FileNotFoundError:
return ''
def get_project_description() -> str:
try:
with open(config |
prompt.py | 646 | flamethrower/src/flamethrower/context/prompt.py | """
def generate_greeting() -> str:
now = datetime.now()
current_hour = now.hour
if current_hour < 12:
return 'Good morning π'
elif current_hour < 18:
return 'Good afternoon π'
else:
return 'Good evening π'
def get_dir_structure() -> str:
try:
with open(config.get_dir_tree_path(), 'r') as f:
return f.read()
except FileNotFoundError:
return ''
def get_project_description() -> str:
try:
with open(config.get_workspace_summary_path(), 'r') as f:
return f.read()
except FileNotFoundError:
return 'Workspace summary not found.'
|
conv_manager.py | 3,318 | flamethrower/src/flamethrower/context/conv_manager.py | import os
import json
from io import TextIOWrapper
from pydantic import BaseModel
import flamethrower.config.constants as config
from flamethrower.utils import (
sequence_parser as sp,
zsh_history as zh,
pretty as pr
)
from typing import Any, Dict, List
class ConversationManager(BaseModel):
"""
Manages a conv.json file that stores the conversation history
"""
conv_path: str = config.get_conversation_path()
def __init__(self, **kwargs: Any) -> None:
super().__init__(**kwargs)
with open(self.conv_path, 'w') as f:
json.dump([], f)
def get_conv(self) -> List[Dict[str, str]]:
try:
with open(self.conv_path, 'r') as f:
res = json.load(f)
if not isinstance(res, list):
raise Exception('ConversationManager.get_conv: conv.json is not a list')
except Exception:
return []
return res
def append_conv(self, role: str, content: str, name: str = '') -> None:
new_message = { 'role': role, 'content': content }
if name:
new_message['name'] = name
with open(self.conv_path, 'r+') as f:
try:
conv = json.load(f)
conv.append(new_message)
self.save(conv, f)
self.pretty_print()
except Exception:
pass
def update_conv_from_stdout(self, data: bytes) -> None:
"""
This function is special because we're employing a unique way to capture stdout responses.
1. Store stdout chunks in a buffer.log file until the next prompt newline regex
2. When that is found, read the buffer.log file
3. Right before appending the stdout log, we inject the last known user command
"""
buffer_file = config.flamethrower_log_dir('buffer.log')
def write_buffer(data: bytes) -> None:
with open(buffer_file, 'ab') as f:
f.write(sp.get_cleaned_data(data))
def read_buffer() -> bytes:
with open(buffer_file, 'rb') as f:
return f.read()
if sp.is_prompt_newline(data):
user_cmd = zh.get_last_user_cmd()
if user_cmd == '' or user_cmd.lower() == 'exit':
return
stdout_log = read_buffer()
if stdout_log == b'':
return
self.append_conv(
role='user',
content=f'{os.getcwd()} $ {user_cmd}',
name='human'
)
self.append_conv(
role='user',
content=stdout_log.decode('utf-8'),
name='stdout'
)
# Clear the buffer
with open(buffer_file, 'w') as f:
f.write('')
else:
write_buffer(data)
def save(self, conv: List[Dict[str, str]], f: TextIOWrapper) -> None:
f.seek(0)
json.dump(conv, f, indent=4)
f.truncate()
def pretty_print(self) -> None:
conv = self.get_conv()
with open(config.get_pretty_conversation_path(), 'w') as f:
pretty = pr.pretty_print(conv)
f.write(pretty)
|
test_conv_manager.py | 2,455 | flamethrower/src/flamethrower/context/tests/test_conv_manager.py | from unittest.mock import patch, mock_open
from flamethrower.context.conv_manager import ConversationManager
def test_conv_manager_init() -> None:
with patch('builtins.open', mock_open(read_data='[]')) as mocked_file:
conv_manager = ConversationManager()
mocked_file.assert_called_once_with(conv_manager.conv_path, 'w')
mocked_file().write.assert_called_once_with('[]')
def test_conv_manager_get_conv_empty() -> None:
with patch('builtins.open', mock_open(read_data='[]')):
conv_manager = ConversationManager()
assert conv_manager.get_conv() == []
def test_conv_manager_save() -> None:
with patch('builtins.open', mock_open(read_data='[]')) as mocked_file:
conv_manager = ConversationManager()
save_data = [{'role': 'user', 'content': 'Save 💾', 'name': 'Tester'}]
with patch('json.dump') as mock_json_dump:
conv_manager.save(save_data, mocked_file())
mock_json_dump.assert_called_once_with(save_data, mocked_file(), indent=4)
handle = mocked_file()
handle.seek.assert_called_once_with(0)
handle.truncate.assert_called_once_with()
def test_conv_manager_pretty_print() -> None:
pretty_path = 'path/to/pretty_conversation_file'
prettified = '✨ pretty conversation ✨'
with patch('builtins.open', mock_open(read_data='[]')) as mocked_file:
conv_manager = ConversationManager()
with patch('flamethrower.utils.pretty.pretty_print', return_value=prettified), \
patch('flamethrower.config.constants.get_pretty_conversation_path', return_value=pretty_path):
conv_manager.pretty_print()
mocked_file.assert_called_with(pretty_path, 'w')
mocked_file().write.assert_called_with(prettified)
def test_conv_manager_append_conv() -> None:
with patch('builtins.open', mock_open(read_data='[]')) as mocked_file:
conv_manager = ConversationManager()
with patch('json.dump') as mock_json_dump, \
patch('flamethrower.utils.pretty.pretty_print'), \
patch('flamethrower.config.constants.get_pretty_conversation_path'):
conv_manager.append_conv(role='user', content='Hello Test 🧪', name='Tester')
mock_json_dump.assert_called_once_with(
[{'role': 'user', 'content': 'Hello Test 🧪', 'name': 'Tester'}], mocked_file(), indent=4
)
|
test_prompt_generator.py | 4,096 | flamethrower/src/flamethrower/context/tests/test_prompt_generator.py | import os
import pytest
from unittest import mock
from unittest.mock import patch, mock_open, MagicMock
import flamethrower.config.constants as config
from flamethrower.context.prompt import PromptGenerator
from flamethrower.test_utils.mocks.mock_conv_manager import mock_conv_manager
from flamethrower.test_utils.mocks.mock_token_counter import mock_token_counter
from flamethrower.test_utils.mocks.mock_printer import mock_printer
@pytest.fixture
def mock_prompt_generator() -> PromptGenerator:
with patch('builtins.open', mock_open(read_data='flamethrower/some/path')):
return PromptGenerator(
conv_manager=mock_conv_manager(),
token_counter=mock_token_counter(),
printer=mock_printer()
)
def test_prompt_generator_init(mock_prompt_generator: PromptGenerator) -> None:
pg = mock_prompt_generator
assert pg.greeting.startswith('Good') and pg.greeting.endswith('π')
assert pg.description != ''
assert pg.dir_structure == 'flamethrower/some/path'
def test_prompt_generator_construct_messages(mock_prompt_generator: PromptGenerator) -> None:
pg = mock_prompt_generator
target_files = ['file_1', 'file_2', 'file_3']
target_file_contents = """
Content_1
Content_2
Content_3
"""
messages = [
{
'role': 'user',
'content': 'User message 1'
},
{
'role': 'assistant',
'content': 'Assistant message 1'
},
{
'role': 'user',
'content': 'User message 2'
},
{
'role': 'assistant',
'content': 'Assistant message 2'
}
]
pretty = '✨ Pretty printed conversation'
with patch('flamethrower.context.prompt.FileChooser') as mock_file_chooser, \
patch('flamethrower.context.prompt.pretty_print', return_value=pretty) as mock_pretty_print, \
patch('builtins.open', mock_open(read_data=target_file_contents)) as mock_file:
file_chooser = mock_file_chooser.return_value
file_chooser.infer_target_file_paths = MagicMock(return_value=target_files)
messages = pg.construct_messages(messages)
assert isinstance(messages, list)
assert len(messages) == 4
about_message = messages[0]
assert about_message['role'] == 'user'
assert about_message['content'].startswith('This project is about')
dir_structure_message = messages[1]
assert dir_structure_message['role'] == 'user'
assert dir_structure_message['content'].startswith('Here is the directory structure')
file_chooser.infer_target_file_paths.assert_called_once_with(pg.description, pg.dir_structure, mock_pretty_print.return_value)
pg.printer.print_files.assert_called_once_with(target_files)
mock_file.assert_has_calls([
mock.call(os.path.join(os.getcwd(), target_files[0]), 'r'),
mock.call().__enter__(),
mock.call().read(),
mock.call().__exit__(None, None, None),
mock.call(os.path.join(os.getcwd(), target_files[1]), 'r'),
mock.call().__enter__(),
mock.call().read(),
mock.call().__exit__(None, None, None),
mock.call(os.path.join(os.getcwd(), target_files[2]), 'r'),
mock.call().__enter__(),
mock.call().read(),
mock.call().__exit__(None, None, None),
mock.call(config.get_last_prompt_path(), 'w'),
mock.call().__enter__(),
mock.call().write(mock.ANY),
mock.call().__exit__(None, None, None),
])
target_files_message = messages[2]
assert target_files_message['role'] == 'user'
assert target_files_message['content'].startswith('Currently you are working with these files:')
conv_message = messages[3]
assert conv_message['role'] == 'user'
assert conv_message['content'].startswith('Here are the most recent conversations between the human, s |
test_prompt_generator.py | 528 | flamethrower/src/flamethrower/context/tests/test_prompt_generator.py | mock.call().write(mock.ANY),
mock.call().__exit__(None, None, None),
])
target_files_message = messages[2]
assert target_files_message['role'] == 'user'
assert target_files_message['content'].startswith('Currently you are working with these files:')
conv_message = messages[3]
assert conv_message['role'] == 'user'
assert conv_message['content'].startswith('Here are the most recent conversations between the human, stdout logs, and assistant')
|
constants.py | 2,829 | flamethrower/src/flamethrower/config/constants.py | import os
"""
🔥 main directory
"""
FLAMETHROWER_PACKAGE_NAME = 'flamethrower'
FLAMETHROWER_DIR_NAME = '.flamethrower'
FLAMETHROWER_DIR = os.path.join(
os.getcwd(), FLAMETHROWER_DIR_NAME
)
def flamethrower_path(file_name: str) -> str:
return os.path.join(
FLAMETHROWER_DIR, file_name
)
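# e.g. flamethrower_path('README.md') -> '<current working dir>/.flamethrower/README.md'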
FLAMETHROWER_README_FILE_NAME = 'README.md'
FLAMETHROWER_GITIGNORE_FILE_NAME = '.gitignore'
FLAMETHROWER_ENV_FILE_NAME = '.env'
def get_flamethrower_readme_path() -> str:
return flamethrower_path(FLAMETHROWER_README_FILE_NAME)
def get_flamethrower_gitignore_path() -> str:
return flamethrower_path(FLAMETHROWER_GITIGNORE_FILE_NAME)
def get_flamethrower_env_path() -> str:
return flamethrower_path(FLAMETHROWER_ENV_FILE_NAME)
"""
🔧 zsh configs
"""
FLAMETHROWER_ZSH_DIR_NAME = 'zsh'
FLAMETHROWER_ZSH_DIR = flamethrower_path(FLAMETHROWER_ZSH_DIR_NAME)
def flamethrower_zsh_dir(file_name: str) -> str:
return os.path.join(
FLAMETHROWER_ZSH_DIR, file_name
)
ZSH_CONFIG_FILE_NAME = '.zshrc'
ZSH_HISTORY_FILE_NAME = '.zsh_history'
def get_zsh_config_path() -> str:
return flamethrower_zsh_dir(ZSH_CONFIG_FILE_NAME)
def get_zsh_history_path() -> str:
return flamethrower_zsh_dir(ZSH_HISTORY_FILE_NAME)
"""
🪵 flamethrower logs
"""
FLAMETHROWER_LOG_DIR_NAME = 'logs'
FLAMETHROWER_LOG_DIR = flamethrower_path(FLAMETHROWER_LOG_DIR_NAME)
def flamethrower_log_dir(file_name: str) -> str:
return os.path.join(
FLAMETHROWER_LOG_DIR, file_name
)
SUMMARY_FILE_NAME = 'workspace_summary.log'
DIR_TREE_FILE_NAME = 'dir_tree.log'
DIR_LIST_FILE_NAME = 'dir_dict.json'
CONVERSATION_FILE_NAME = 'conv.json'
PRETTY_CONVERSATION_FILE_NAME = 'conv.log'
CURRENT_FILES_FILE_NAME = 'current_files.log'
LAST_PROMPT_FILE_NAME = 'last_prompt.log'
LAST_RESPONSE_FILE_NAME = 'last_response.log'
PATCH_FILE_NAME = 'update.patch'
def get_workspace_summary_path() -> str:
return flamethrower_log_dir(SUMMARY_FILE_NAME)
def get_dir_tree_path() -> str:
return flamethrower_log_dir(DIR_TREE_FILE_NAME)
def get_dir_dict_path() -> str:
return flamethrower_log_dir(DIR_LIST_FILE_NAME)
def get_current_files_path() -> str:
return flamethrower_log_dir(CURRENT_FILES_FILE_NAME)
def get_conversation_path() -> str:
return flamethrower_log_dir(CONVERSATION_FILE_NAME)
def get_pretty_conversation_path() -> str:
return flamethrower_log_dir(PRETTY_CONVERSATION_FILE_NAME)
def get_last_prompt_path() -> str:
return flamethrower_log_dir(LAST_PROMPT_FILE_NAME)
def get_last_response_path() -> str:
return flamethrower_log_dir(LAST_RESPONSE_FILE_NAME)
def get_patch_path() -> str:
return flamethrower_log_dir(PATCH_FILE_NAME)
"""
π Original copies
"""
ORIGINAL_FILE_MARKER = '🔥_original'
def get_original_file_marker() -> str:
return ORIGINAL_FILE_MARKER
|
mock_prompt_generator.py | 538 | flamethrower/src/flamethrower/test_utils/mocks/mock_prompt_generator.py | from unittest.mock import MagicMock
from flamethrower.context.prompt import PromptGenerator
from flamethrower.test_utils.mocks.mock_conv_manager import mock_conv_manager
from flamethrower.test_utils.mocks.mock_token_counter import mock_token_counter
from flamethrower.test_utils.mocks.mock_printer import mock_printer
def mock_prompt_generator() -> PromptGenerator:
return MagicMock(
PromptGenerator,
conv_manager=mock_conv_manager(),
token_counter=mock_token_counter(),
printer=mock_printer()
)
|
mock_container.py | 1,020 | flamethrower/src/flamethrower/test_utils/mocks/mock_container.py | from unittest.mock import MagicMock
from flamethrower.containers.container import Container
from flamethrower.test_utils.mocks.mock_conv_manager import mock_conv_manager
from flamethrower.test_utils.mocks.mock_token_counter import mock_token_counter
from flamethrower.test_utils.mocks.mock_shell_manager import mock_shell_manager
from flamethrower.test_utils.mocks.mock_printer import mock_printer
from flamethrower.test_utils.mocks.mock_prompt_generator import mock_prompt_generator
from flamethrower.test_utils.mocks.mock_operator import mock_operator
from flamethrower.test_utils.mocks.mock_command_handler import mock_command_handler
def mock_container() -> Container:
return MagicMock(
spec=Container,
conv_manager=mock_conv_manager(),
token_counter=mock_token_counter(),
shell_manager=mock_shell_manager(),
printer=mock_printer(),
prompt_generator=mock_prompt_generator(),
operator=mock_operator(),
command_handler=mock_command_handler(),
)
|
mock_operator.py | 521 | flamethrower/src/flamethrower/test_utils/mocks/mock_operator.py | from unittest.mock import MagicMock
from flamethrower.agents.operator import Operator
from flamethrower.test_utils.mocks.mock_conv_manager import mock_conv_manager
from flamethrower.test_utils.mocks.mock_prompt_generator import mock_prompt_generator
from flamethrower.test_utils.mocks.mock_printer import mock_printer
def mock_operator() -> Operator:
return MagicMock(
Operator,
conv_manager=mock_conv_manager(),
prompt_generator=mock_prompt_generator(),
printer=mock_printer()
) |
mock_conv_manager.py | 198 | flamethrower/src/flamethrower/test_utils/mocks/mock_conv_manager.py | from unittest.mock import MagicMock
from flamethrower.context.conv_manager import ConversationManager
def mock_conv_manager() -> ConversationManager:
return MagicMock(spec=ConversationManager)
|
mock_token_counter.py | 177 | flamethrower/src/flamethrower/test_utils/mocks/mock_token_counter.py | from unittest.mock import MagicMock
from flamethrower.utils.token_counter import TokenCounter
def mock_token_counter() -> TokenCounter:
return MagicMock(spec=TokenCounter)
|
mock_shell_manager.py | 177 | flamethrower/src/flamethrower/test_utils/mocks/mock_shell_manager.py | from unittest.mock import MagicMock
from flamethrower.shell.shell_manager import ShellManager
def mock_shell_manager() -> ShellManager:
return MagicMock(spec=ShellManager)
|
mock_command_handler.py | 891 | flamethrower/src/flamethrower/test_utils/mocks/mock_command_handler.py | import pytest
from unittest.mock import MagicMock
from flamethrower.shell.command_handler import CommandHandler
from flamethrower.context.conv_manager import ConversationManager
from flamethrower.context.prompt import PromptGenerator
from flamethrower.agents.operator import Operator
from flamethrower.shell.printer import Printer
from flamethrower.test_utils.mocks.mock_conv_manager import mock_conv_manager
from flamethrower.test_utils.mocks.mock_prompt_generator import mock_prompt_generator
from flamethrower.test_utils.mocks.mock_operator import mock_operator
from flamethrower.test_utils.mocks.mock_printer import mock_printer
def mock_command_handler() -> CommandHandler:
return MagicMock(
CommandHandler,
conv_manager=mock_conv_manager(),
prompt_generator=mock_prompt_generator(),
operator=mock_operator(),
printer=mock_printer(),
)
|
mock_printer.py | 622 | flamethrower/src/flamethrower/test_utils/mocks/mock_printer.py | import sys
from unittest.mock import MagicMock
from flamethrower.shell.printer import Printer
from flamethrower.test_utils.mocks.mock_conv_manager import mock_conv_manager
from flamethrower.test_utils.mocks.mock_shell_manager import mock_shell_manager
from flamethrower.test_utils.mocks.mock_token_counter import mock_token_counter
def mock_printer() -> Printer:
return MagicMock(
spec=Printer,
leader_fd=sys.stdout.fileno(),
stdout_fd=sys.stdout.fileno(),
conv_manager=mock_conv_manager(),
shell_manager=mock_shell_manager(),
token_counter=mock_token_counter(),
)
|
command_handler.py | 4,096 | flamethrower/src/flamethrower/shell/command_handler.py | from pydantic import BaseModel
from .printer import Printer
from flamethrower.context.conv_manager import ConversationManager
from flamethrower.agents.operator import Operator
from flamethrower.utils.special_keys import *
from flamethrower.exceptions.exceptions import *
from flamethrower.exceptions.handlers import *
from flamethrower.utils.zsh_history import update_zsh_history
class CommandHandler(BaseModel):
pos: int = 0
buffer: str = ''
is_nl_query: bool = False # is natural language query
conv_manager: ConversationManager
operator: Operator
printer: Printer
# TODO: Windows support
def handle(self, key: bytes) -> None:
if self.pos == 0:
self.handle_first_key(key)
elif self.is_nl_query:
self.handle_nl_key(key)
else:
self.handle_regular_key(key)
def handle_first_key(self, key: bytes) -> None:
if key == ENTER_KEY or key == RETURN_KEY:
self.printer.write_leader(key)
elif key == BACKSPACE_KEY or key == TAB_KEY:
pass
elif key == UP_ARROW_KEY or key == DOWN_ARROW_KEY:
# TODO: Implement history cycling
pass
# TODO: Handle CMD+V
else:
if key.isupper():
self.is_nl_query = True
self.printer.print_orange(key)
else:
self.is_nl_query = False
self.printer.write_leader(key)
self.pos += 1
self.buffer += key.decode('utf-8')
def handle_nl_key(self, key: bytes) -> None:
if key == ENTER_KEY or key == RETURN_KEY:
self.handle_nl_return_key(key)
elif key == BACKSPACE_KEY:
self.handle_nl_backspace_key(key)
elif key == LEFT_ARROW_KEY:
self.handle_nl_left_arrow_key(key)
elif key == RIGHT_ARROW_KEY:
self.handle_nl_right_arrow_key(key)
elif key == UP_ARROW_KEY:
self.handle_nl_up_arrow_key(key)
elif key == DOWN_ARROW_KEY:
self.handle_nl_down_arrow_key(key)
else:
self.handle_other_nl_keys(key)
def handle_regular_key(self, key: bytes) -> None:
if key == ENTER_KEY or key == RETURN_KEY:
self.handle_regular_return_key(key)
elif key == BACKSPACE_KEY:
self.handle_regular_backspace_key(key)
elif key == LEFT_ARROW_KEY:
self.handle_regular_left_arrow_key(key)
elif key == RIGHT_ARROW_KEY:
self.handle_regular_right_arrow_key(key)
elif key == UP_ARROW_KEY:
self.handle_regular_up_arrow_key(key)
elif key == DOWN_ARROW_KEY:
self.handle_regular_down_arrow_key(key)
else:
self.handle_regular_other_keys(key)
"""
When in natural language (nl) mode
"""
def handle_nl_return_key(self, key: bytes) -> None:
query = self.buffer
self.pos = 0
self.buffer = ''
self.printer.write_leader(key)
self.printer.print_regular(with_newline=True)
update_zsh_history(query)
self.conv_manager.append_conv(
role='user',
content=query,
name='human'
)
try:
self.operator.new_implementation_run()
except KeyboardInterrupt:
pass
except QuotaExceededException:
self.printer.print_err(get_quota_exceeded_message())
except Exception:
raise
def handle_nl_backspace_key(self, key: bytes) -> None:
if self.pos > 0:
self.pos -= 1
self.buffer = self.buffer[:-1]
self.printer.print_stdout(b'\b \b')
def handle_nl_left_arrow_key(self, key: bytes) -> None:
if self.pos > 0:
self.pos -= 1
self.printer.print_stdout(key)
def handle_nl_right_arrow_key(self, key: bytes) -> None:
if self.pos < len(self.buffer):
self.pos += 1
self.printer.print_stdout(key)
def handle_nl_up_arrow_key(s |
command_handler.py | 1,950 | flamethrower/src/flamethrower/shell/command_handler.py | lf.pos > 0:
self.pos -= 1
self.buffer = self.buffer[:-1]
self.printer.print_stdout(b'\b \b')
def handle_nl_left_arrow_key(self, key: bytes) -> None:
if self.pos > 0:
self.pos -= 1
self.printer.print_stdout(key)
def handle_nl_right_arrow_key(self, key: bytes) -> None:
if self.pos < len(self.buffer):
self.pos += 1
self.printer.print_stdout(key)
def handle_nl_up_arrow_key(self, key: bytes) -> None:
pass
def handle_nl_down_arrow_key(self, key: bytes) -> None:
pass
def handle_other_nl_keys(self, key: bytes) -> None:
self.pos += 1
self.buffer += key.decode('utf-8')
self.printer.print_stdout(key)
"""
When in regular mode
"""
def handle_regular_return_key(self, key: bytes) -> None:
command = self.buffer # unused
self.pos = 0
self.buffer = ''
self.printer.write_leader(key)
def handle_regular_backspace_key(self, key: bytes) -> None:
if self.pos > 0:
self.pos -= 1
self.buffer = self.buffer[:-1]
self.printer.write_leader(key)
def handle_regular_left_arrow_key(self, key: bytes) -> None:
if self.pos > 0:
self.pos -= 1
self.printer.write_leader(key)
def handle_regular_right_arrow_key(self, key: bytes) -> None:
if self.pos < len(self.buffer):
self.pos += 1
self.printer.write_leader(key)
def handle_regular_up_arrow_key(self, key: bytes) -> None:
# TODO: Implement history cycling
pass
def handle_regular_down_arrow_key(self, key: bytes) -> None:
# TODO: Implement history cycling
pass
def handle_regular_other_keys(self, key: bytes) -> None:
self.pos += 1
self.buffer += key.decode('utf-8')
self.printer.write_leader(key)
|
shell.py | 4,096 | flamethrower/src/flamethrower/shell/shell.py | import os
import sys
import pty
import tty
import termios
import shutil
from pydantic import BaseModel, ConfigDict
from subprocess import Popen
from select import select
import flamethrower.setup.setup as setup
from flamethrower.config.constants import FLAMETHROWER_DIR_NAME
from flamethrower.setup.dir_walker import setup_dir_summary
from flamethrower.utils.colors import *
from typing import Optional
class Shell(BaseModel):
model_config = ConfigDict(arbitrary_types_allowed=True)
block_size: int = 1024
base_dir: str = os.getcwd()
leader_fd: int = 0
follower_fd: int = 0
child_process: Optional[Popen] = None
def run(self) -> None:
if len(sys.argv) > 2:
print('Usage: `flamethrower` or `flamethrower ./more/specific/directory`')
return
if (len(sys.argv) == 2):
self.base_dir = os.path.abspath(sys.argv[1])
try:
env = setup.setup_zsh_env()
if not env:
return
err = setup_dir_summary(target_dir=self.base_dir)
if err is not None:
print(f'shell.run: {str(err)}')
return
except KeyboardInterrupt:
print('🔥 flamethrower initialization cancelled. Performing cleanup...')
try:
shutil.rmtree(os.path.join(os.getcwd(), FLAMETHROWER_DIR_NAME))
print('🧹 Cleanup complete.')
except Exception as e:
print(f'Cleanup error: {str(e)}')
return
self.leader_fd, self.follower_fd = pty.openpty()
self.child_process = Popen(
['zsh'],
env=env,
stdin=self.follower_fd,
stdout=self.follower_fd,
stderr=self.follower_fd
)
# Set stdin in raw mode
old_settings = termios.tcgetattr(sys.stdin)
tty.setraw(sys.stdin)
# LM Container
from flamethrower.containers.lm_container import lm_container
token_counter = lm_container.token_counter()
# Container
from flamethrower.containers.container import container
container.token_counter.override(token_counter)
container.tty_settings.override(old_settings)
container.leader_fd.override(self.leader_fd)
container.base_dir.override(self.base_dir)
container.wire(modules=[__name__])
# Container singletons
command_handler = container.command_handler()
conv_manager = container.conv_manager()
printer = container.printer()
error = None
try:
printer.print_regular(self.construct_greeting())
while True:
timeout = 0.5 # seconds
r, _, _ = select([self.leader_fd, sys.stdin], [], [], timeout)
# From leader process
if self.leader_fd in r:
data = os.read(self.leader_fd, self.block_size)
if not data:
break
# Write to stdout and to logfile
os.write(sys.stdout.fileno(), data)
conv_manager.update_conv_from_stdout(data)
# From user input
if sys.stdin in r:
key = os.read(sys.stdin.fileno(), self.block_size)
if not key:
break
command_handler.handle(key)
if self.child_process.poll() is not None:
break
except Exception as e:
error = e
finally:
try:
termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_settings)
except termios.error as e:
print(
f'Unable to return pty to old settings due to error: {e}\n'
'Please restart your terminal instance by pressing `exit`\n'
)
pass
os.close(self.leader_ |
shell.py | 1,428 | flamethrower/src/flamethrower/shell/shell.py |
break
except Exception as e:
error = e
finally:
try:
termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_settings)
except termios.error as e:
print(
f'Unable to return pty to old settings due to error: {e}\n'
'Please restart your terminal instance by pressing `exit`\n'
)
pass
os.close(self.leader_fd)
os.close(self.follower_fd)
if self.child_process:
self.child_process.terminate()
if error:
printer.print_err(f'Error: {str(error)}')
else:
print(token_counter.return_cost_analysis())
print('\n👋 Goodbye!')
def construct_greeting(self) -> str:
return f"""
🔥 flamethrower: Debugging on Autopilot
Instructions:
- ⌨️ Regular shell Use commands like {GREEN}ls{DEFAULT}, {GREEN}cd{DEFAULT}, {GREEN}python{DEFAULT} {UNDERLINE}hello.py{DEFAULT}
- 🤖 LLM assistance Start command with a {ORANGE}Capital letter{DEFAULT}, try {ORANGE}Who are you?{DEFAULT}
- π Context Intelligent context-awareness from command, files, and stdout logs
- 🪵 Terminal logs All conversation & code output inside {GREEN}flamethrower{DEFAULT} is logged
"""
|
__init__.py | 42 | flamethrower/src/flamethrower/shell/__init__.py | from .shell import Shell
shell = Shell()
|
README.md | 44 | flamethrower/src/flamethrower/shell/README.md | # Shell
This folder handles the terminal layer: the pty-based `Shell`, raw/cooked mode switching in `ShellManager`, keystroke routing in `CommandHandler`, and output rendering in `Printer`. |
printer.py | 4,096 | flamethrower/src/flamethrower/shell/printer.py | import os
from pydantic import BaseModel
from rich.console import Console
from rich.syntax import Syntax
from rich.live import Live
import flamethrower.config.constants as config
from flamethrower.context.conv_manager import ConversationManager
from flamethrower.shell.shell_manager import ShellManager
from flamethrower.utils.token_counter import TokenCounter
from flamethrower.models.models import OPENAI_GPT_4_TURBO
from flamethrower.utils.special_keys import *
from flamethrower.utils.colors import *
from typing import Any, Dict, List, Union, Iterator
class Printer(BaseModel):
leader_fd: int
stdout_fd: int
conv_manager: ConversationManager
shell_manager: ShellManager
token_counter: TokenCounter
def write_leader(self, data: bytes) -> None:
if self.leader_fd:
os.write(self.leader_fd, data)
def print_stdout(self, data: Union[bytes, str]) -> None:
if self.stdout_fd:
if isinstance(data, str):
with self.shell_manager.cooked_mode():
os.write(self.stdout_fd, data.encode('utf-8'))
else:
os.write(self.stdout_fd, data)
def print_err(self, err: str) -> None:
self.print_red(f'\n{err}\n', reset=True)
def print_color(self, data: Union[bytes, str], color: bytes, reset: bool = False) -> None:
os.write(self.stdout_fd, color)
self.print_stdout(data)
if reset:
os.write(self.stdout_fd, STDIN_DEFAULT)
self.set_cursor_to_start(with_newline=True)
def print_default(self, data: Union[bytes, str]) -> None:
self.print_color(data, STDIN_DEFAULT)
def print_red(self, data: Union[bytes, str], reset: bool = False) -> None:
self.print_color(data, STDIN_RED, reset=reset)
def print_yellow(self, data: Union[bytes, str], reset: bool = False) -> None:
self.print_color(data, STDIN_YELLOW, reset=reset)
def print_green(self, data: Union[bytes, str], reset: bool = False) -> None:
self.print_color(data, STDIN_GREEN, reset=reset)
def print_blue(self, data: Union[bytes, str], reset: bool = False) -> None:
self.print_color(data, STDIN_BLUE, reset=reset)
def print_cyan(self, data: Union[bytes, str], reset: bool = False) -> None:
self.print_color(data, STDIN_CYAN, reset=reset)
def print_gray(self, data: Union[bytes, str], reset: bool = False) -> None:
self.print_color(data, STDIN_GRAY, reset=reset)
def print_white(self, data: Union[bytes, str], reset: bool = False) -> None:
self.print_color(data, STDIN_WHITE, reset=reset)
def print_light_green(self, data: Union[bytes, str], reset: bool = False) -> None:
self.print_color(data, STDIN_LIGHT_GREEN, reset=reset)
def print_light_blue(self, data: Union[bytes, str], reset: bool = False) -> None:
self.print_color(data, STDIN_LIGHT_BLUE, reset=reset)
def print_light_cyan(self, data: Union[bytes, str], reset: bool = False) -> None:
self.print_color(data, STDIN_LIGHT_CYAN, reset=reset)
def print_light_magenta(self, data: Union[bytes, str], reset: bool = False) -> None:
self.print_color(data, STDIN_LIGHT_MAGENTA, reset=reset)
def print_orange(self, data: Union[bytes, str], reset: bool = False) -> None:
self.print_color(data, STDIN_ORANGE, reset=reset)
def print_llm_response(self, stream: Iterator[str]) -> None:
"""
1. Swap out of pty back into main shell
2. Print the code using Python Rich
3. Swap back into pty
"""
def is_programming_language(name: str) -> bool:
programming_languages = [
'bash',
'c',
'c++',
'chat',
'java',
'javascript',
'typescript',
'python',
'go',
'rust',
'ruby',
'php',
'plaintext',
'swift',
'sh',
|
printer.py | 4,096 | flamethrower/src/flamethrower/shell/printer.py | 3. Swap back into pty
"""
def is_programming_language(name: str) -> bool:
programming_languages = [
'bash',
'c',
'c++',
'chat',
'java',
'javascript',
'typescript',
'python',
'go',
'rust',
'ruby',
'php',
'plaintext',
'swift',
'sh',
'shell',
]
return name in programming_languages
with self.shell_manager.cooked_mode():
self.set_cursor_to_start(with_newline=True)
def append_conv(content: str) -> None:
self.conv_manager.append_conv(
role='assistant',
content=content,
)
def log_last_response(content: str) -> None:
with open(config.get_last_response_path(), 'w') as f:
f.write(content)
nl_content, code_content, complete_content = '', '', ''
try:
while True:
# Natural language responses
prev = ''
for token in stream:
if token == '```':
break
elif prev == '``' and token.startswith('`'):
break
prev = token or ''
self.print_stdout(token.encode('utf-8'))
nl_content += token or ''
complete_content += nl_content
nl_content = ''
# Coding responses
console, lang = Console(), 'python'
with Live(console=console, refresh_per_second=10) as live:
is_first = True
for token in stream:
if is_first:
is_first = False
if is_programming_language(token):
lang = token
continue
if token == '```':
break
elif prev == '``' and token.startswith('`'):
break
prev = token or ''
if token == '``':
continue
code_content += token or ''
syntax = Syntax(code_content, lang, theme='monokai', line_numbers=False)
live.update(syntax, refresh=True)
complete_content += f'\n```{code_content}\n```\n'
code_content = ''
except AttributeError:
pass
except KeyboardInterrupt:
raise
finally:
if nl_content:
complete_content += nl_content
if code_content:
complete_content += f'```{code_content}\n```\n'
self.token_counter.add_streaming_output_tokens(complete_content, model=OPENAI_GPT_4_TURBO)
append_conv(complete_content)
log_last_response(complete_content)
self.print_regular(with_newline=True)
def print_code(self, code: str, language: str = 'bash') -> None:
with self.shell_manager.cooked_mode():
self.set_cursor_to_start(with_newline=True)
syntax = Syntax(f'\n🔥 {code}\n', language, theme='monokai')
console = Console()
console.print(syntax)
def print_actions(self, actions: List[Dict[Any, Any]]) -> None:
# actions is confirmed to have at least one ac |
printer.py | 3,028 | flamethrower/src/flamethrower/shell/printer.py | t)
self.print_regular(with_newline=True)
def print_code(self, code: str, language: str = 'bash') -> None:
with self.shell_manager.cooked_mode():
self.set_cursor_to_start(with_newline=True)
syntax = Syntax(f'\n🔥 {code}\n', language, theme='monokai')
console = Console()
console.print(syntax)
def print_actions(self, actions: List[Dict[Any, Any]]) -> None:
# actions is confirmed to have at least one action
with self.shell_manager.cooked_mode():
self.set_cursor_to_start()
self.print_cyan('Next actions:\n')
for obj in actions:
action, command, file_paths = obj.get('action'), obj.get('command'), obj.get('file_paths')
if action == 'run':
self.print_cyan(f' - Run command: {command}\n')
elif action == 'write':
self.print_cyan(f' - Write to: {file_paths}\n')
elif action == 'debug':
self.print_cyan(f' - Add debugging statements to: {file_paths}\n')
elif action == 'stuck':
# This is handled in operator.py
pass
elif action == 'cleanup':
self.print_cyan(f' - Cleanup: {file_paths}\n')
elif action == 'completed':
self.print_cyan(' - Completed')
else:
self.print_err('Printer.print_actions: Unknown action')
self.print_default('')
def print_files(self, files: List[str]) -> None:
with self.shell_manager.cooked_mode():
self.set_cursor_to_start()
if files:
self.print_cyan('Focusing on the following files π:\n')
for file in files:
self.print_cyan(f' - {file}\n')
else:
self.print_cyan('No files used as context.\n')
self.print_default('')
def print_diffs(self, diffs: List[str]) -> None:
with self.shell_manager.cooked_mode():
self.set_cursor_to_start(with_newline=True)
for line in diffs:
if line.startswith('+'):
self.print_green(line + '\n', reset=True)
elif line.startswith('-'):
self.print_red(line + '\n', reset=True)
else:
self.print_default(line + '\n')
def set_cursor_to_start(self, with_newline: bool = False) -> None:
if with_newline:
self.print_stdout(ENTER_KEY + CLEAR_FROM_START + CLEAR_TO_END + CURSOR_TO_START)
else:
self.print_stdout(CLEAR_FROM_START + CLEAR_TO_END + CURSOR_TO_START)
def print_regular(self, message: str = '', with_newline: bool = False) -> None:
with self.shell_manager.cooked_mode():
self.set_cursor_to_start(with_newline)
self.print_stdout(message)
|
shell_manager.py | 726 | flamethrower/src/flamethrower/shell/shell_manager.py | import sys
import tty
import termios
from pydantic import BaseModel
from contextlib import contextmanager
from flamethrower.utils.types import tty_settings
from typing import Generator
class ShellManager(BaseModel):
old_settings: tty_settings
in_cooked_mode: bool = False
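# Used as `with shell_manager.cooked_mode(): ...` (see printer.py): temporarily
# restores the terminal's original settings, then drops back into raw mode on exit.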
@contextmanager
def cooked_mode(self) -> Generator[None, None, None]:
if self.in_cooked_mode:
yield
return
try:
termios.tcsetattr(sys.stdin, termios.TCSADRAIN, self.old_settings)
self.in_cooked_mode = True
yield
except Exception:
raise
finally:
tty.setraw(sys.stdin)
self.in_cooked_mode = False
|
test_command_handler.py | 1,120 | flamethrower/src/flamethrower/shell/tests/test_command_handler.py | import pytest
from flamethrower.shell.command_handler import CommandHandler
from flamethrower.context.conv_manager import ConversationManager
from flamethrower.agents.operator import Operator
from flamethrower.shell.printer import Printer
from flamethrower.test_utils.mocks.mock_conv_manager import mock_conv_manager
from flamethrower.test_utils.mocks.mock_operator import mock_operator
from flamethrower.test_utils.mocks.mock_printer import mock_printer
@pytest.fixture
def mock_command_handler() -> CommandHandler:
return CommandHandler(
conv_manager=mock_conv_manager(),
operator=mock_operator(),
printer=mock_printer(),
)
def test_command_handler_init(mock_command_handler: CommandHandler) -> None:
command_handler = mock_command_handler
assert command_handler.pos == 0
assert command_handler.buffer == ''
assert command_handler.is_nl_query == False
assert isinstance(command_handler.conv_manager, ConversationManager)
assert isinstance(command_handler.operator, Operator)
assert isinstance(command_handler.printer, Printer)
"""
TODO: other tests
""" |
test_printer.py | 2,341 | flamethrower/src/flamethrower/shell/tests/test_printer.py | import sys
import pytest
from unittest.mock import patch, call
from flamethrower.shell.printer import Printer
from flamethrower.context.conv_manager import ConversationManager
from flamethrower.shell.shell_manager import ShellManager
from flamethrower.utils.token_counter import TokenCounter
from flamethrower.utils.colors import *
from flamethrower.test_utils.mocks.mock_conv_manager import mock_conv_manager
from flamethrower.test_utils.mocks.mock_shell_manager import mock_shell_manager
from flamethrower.test_utils.mocks.mock_token_counter import mock_token_counter
@pytest.fixture
def mock_printer() -> Printer:
return Printer(
leader_fd=sys.stdout.fileno(),
stdout_fd=sys.stdout.fileno(),
conv_manager=mock_conv_manager(),
shell_manager=mock_shell_manager(),
token_counter=mock_token_counter(),
)
def test_printer_init(mock_printer: Printer) -> None:
printer = mock_printer
assert printer.leader_fd == sys.stdout.fileno()
assert printer.stdout_fd == sys.stdout.fileno()
assert isinstance(printer.conv_manager, ConversationManager)
assert isinstance(printer.shell_manager, ShellManager)
assert isinstance(printer.token_counter, TokenCounter)
def test_printer_write_leader(mock_printer: Printer) -> None:
printer = mock_printer
with patch('os.write') as mock_os_write:
printer.write_leader(b'hello')
mock_os_write.assert_called_once_with(printer.leader_fd, b'hello')
def test_printer_print_stdout(mock_printer: Printer) -> None:
printer = mock_printer
with patch('os.write') as mock_os_write:
byte_message = b'bytes'
printer.print_stdout(byte_message)
mock_os_write.assert_called_once_with(printer.stdout_fd, byte_message)
str_message = 'string'
printer.print_stdout(str_message)
mock_os_write.assert_called_with(printer.stdout_fd, str_message.encode('utf-8'))
def test_printer_print_color(mock_printer: Printer) -> None:
printer = mock_printer
with patch('os.write') as mock_os_write:
bytes_message = b'bytes'
printer.print_color(bytes_message, STDIN_RED)
mock_os_write.assert_has_calls([
call(printer.stdout_fd, STDIN_RED),
call(printer.stdout_fd, bytes_message),
])
"""
TODO: Test other functions
"""
|
test_shell_manager.py | 926 | flamethrower/src/flamethrower/shell/tests/test_shell_manager.py | import sys
import termios
import pytest
from unittest.mock import patch
from flamethrower.shell.shell_manager import ShellManager
@pytest.fixture
def mock_shell_manager() -> ShellManager:
return ShellManager(
old_settings=[],
in_cooked_mode=False,
)
def test_shell_manager_init(mock_shell_manager: ShellManager) -> None:
sm = mock_shell_manager
assert sm.old_settings == []
assert sm.in_cooked_mode == False
def test_shell_manager_cooked_mode(mock_shell_manager: ShellManager) -> None:
sm = mock_shell_manager
sm.in_cooked_mode = False
with patch('termios.tcsetattr') as mock_termios, \
patch('tty.setraw') as mock_setraw:
with sm.cooked_mode():
pass
        mock_termios.assert_called_once_with(sys.stdin, termios.TCSADRAIN, sm.old_settings)
        mock_setraw.assert_called_once_with(sys.stdin)
assert sm.in_cooked_mode == False
|
test_shell.py | 1,356 | flamethrower/src/flamethrower/shell/tests/test_shell.py | import os
import sys
from unittest.mock import patch, Mock
from flamethrower.shell.shell import Shell
from flamethrower.test_utils.mocks.mock_container import mock_container
def todo_test_shell_run() -> None:
base_dir = os.getcwd() # Cannot be mocked because it is called during Shell() instantiation
env_dict = os.environ.copy() # Need a real env?
    with patch('sys.argv', ['flamethrower']), \
patch('flamethrower.setup.setup.setup_zsh_env', return_value=env_dict) as setup, \
patch('subprocess.Popen', return_value=Mock()) as mock_popen, \
patch('termios.tcgetattr', return_value=[]) as mock_tcgetattr, \
patch('termios.tcsetattr') as mock_tcsetattr, \
patch('tty.setraw') as mock_setraw, \
patch('flamethrower.containers.container', return_value=mock_container), \
patch('flamethrower.context.dir_walker.setup_dir_summary') as dir_walker, \
patch('select.select') as mock_select, \
patch('os.read') as mock_os_read, \
patch('os.write') as mock_os_write, \
patch('os.close') as mock_os_close, \
patch('builtins.print') as mock_print:
shell = Shell()
shell.run()
assert shell.base_dir == base_dir
"""
This file is particularly problematic to test because of the pty.
"""
|
summary_manager.py | 4,096 | flamethrower/src/flamethrower/setup/summary_manager.py | import asyncio
from asyncio import Task
from rich.progress import Progress, TaskID
from pydantic import BaseModel, ConfigDict
from flamethrower.utils.colors import *
from typing import Any, Coroutine, List, Optional, Union, TypeVar, Generic
T = TypeVar('T')
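# asyncio.Task only became subscriptable at runtime in Python 3.9, so this
# Generic shim enables annotations like Task[int] under the Python 3.8 target
# referenced elsewhere in this repo.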
class Task(asyncio.Task, Generic[T]): # type: ignore
pass
class SummaryManager(BaseModel):
model_config = ConfigDict(arbitrary_types_allowed=True)
max_summarization_tasks: int = 100
summarization_tasks: List[Task[int]] = []
summarization_tasks_copy: List[Task[int]] = []
instant_timeout: float = 0.5
summarization_timeout: int = 120
def __init__(self, **kwargs: Any) -> None:
super().__init__(**kwargs)
self._lock = asyncio.Lock()
@property
def lock(self) -> asyncio.Lock:
return self._lock
async def get_summarizations_with_timeout(self) -> Optional[List[Any]]:
if len(self.summarization_tasks) > self.max_summarization_tasks:
await self.cancel_summarization_tasks(self.summarization_tasks)
await self.cancel_summarization_tasks(self.summarization_tasks_copy)
error_message = (
                f'π Workspace too large ({len(self.summarization_tasks)}/{self.max_summarization_tasks}). '
f'{STDIN_DEFAULT.decode("utf-8")}Please consider narrowing your workspace by using\n\n'
f' $ `{STDIN_GREEN.decode("utf-8")}flamethrower{STDIN_DEFAULT.decode("utf-8")} '
f'{STDIN_UNDERLINE.decode("utf-8")}./more/specific/folder`{STDIN_DEFAULT.decode("utf-8")}\n\n'
'Otherwise, consider adding some folders to your `.gitignore` file.\n'
)
raise Exception(error_message)
try:
res_list = []
res_list = await asyncio.wait_for(
self.safe_gather(self.summarization_tasks_copy),
timeout=self.instant_timeout
)
except asyncio.TimeoutError:
with Progress() as progress:
try:
task_id = progress.add_task(
f'[cyan]ποΈ Learning workspace structure (max {self.summarization_timeout}s)...',
total=len(self.summarization_tasks)
)
summarization_tasks_with_progress = [
self.perform_async_task(summarization_task, progress, task_id)
for summarization_task in self.summarization_tasks
]
res_list = await asyncio.wait_for(
self.safe_gather(summarization_tasks_with_progress),
timeout=self.summarization_timeout
)
except asyncio.TimeoutError:
pass
except KeyboardInterrupt:
pass
finally:
progress.stop()
except Exception:
raise
finally:
try:
await self.cancel_summarization_tasks(self.summarization_tasks)
await self.cancel_summarization_tasks(self.summarization_tasks_copy)
except Exception as e:
print(f'get_summarizations_with_timeout: {e}')
return res_list
async def perform_async_task(
self,
task: Task[int],
progress: Progress,
task_id: TaskID,
step: int = 1
) -> Optional[int]:
try:
return await task
except asyncio.CancelledError:
return 0
finally:
with await self.lock:
progress.update(task_id, advance=step, refresh=True)
async def cancel_summarization_tasks(self, task_list: List[Task[int]]) -> None:
for task in task_list:
if isinstance(task, Task) and not task.done() and not task.cancelled():
task.cancel()
cancelled_tasks = [task for task in task_list if isinstance(task, Task)]
if cancelled_tasks:
            await asyncio.gather(*cancelled_tasks, return_exceptions=True)
def add_summarization_task(self, task: Task[int], task_copy: Task[int]) -> None:
self.summarization_tasks.append(task)
self.summarization_tasks_copy.append(task_copy)
async def safe_gather(
self,
task_list: Union[List[Task[int]], List[Coroutine[Any, Any, Any]]]
) -> List[Any]:
"""
This can take any coroutine, be it an update_task or a cancel_task, and safely
gathers it without raising an exception.
"""
try:
return await asyncio.gather(*task_list, return_exceptions=True)
except Exception:
# especially for the _FutureGather exception
return [] |
dir_walker.py | 4,096 | flamethrower/src/flamethrower/setup/dir_walker.py | import os
import io
import json
import asyncio
from pathspec import PathSpec
from importlib import resources
from pydantic import BaseModel, ConfigDict
import flamethrower.config.constants as config
from flamethrower.agents.util_agents.summarizer import Summarizer
from flamethrower.setup.summary_manager import SummaryManager
from flamethrower.utils.loader import Loader
from typing import Any, Dict, Union
class DirectoryWalker(BaseModel):
model_config = ConfigDict(arbitrary_types_allowed=True)
workspace_summary: str
target_dir: str = os.getcwd()
file_paths: Dict[str, str] = {}
def __init__(self, **kwargs: Any) -> None:
super().__init__(**kwargs)
self._base_dir: str = os.getcwd()
self._lock = asyncio.Lock()
self._semaphore = asyncio.Semaphore(10)
self._summarizer = Summarizer()
self._summary_manager = SummaryManager()
try:
with open(config.get_dir_dict_path(), 'r') as dir_dict_file:
self.file_paths = json.loads(dir_dict_file.read())
except FileNotFoundError:
with open(config.get_dir_dict_path(), 'w') as dir_dict_file:
dir_dict_file.write('')
@property
def base_dir(self) -> str:
return self._base_dir
@property
def lock(self) -> asyncio.Lock:
return self._lock
@property
def semaphore(self) -> asyncio.Semaphore:
return self._semaphore
@property
def summarizer(self) -> Summarizer:
return self._summarizer
@property
def summary_manager(self) -> SummaryManager:
return self._summary_manager
async def generate_directory_summary(self) -> None:
with open(config.get_dir_tree_path(), 'w') as dir_tree_file:
self.process_directory(self.base_dir, dir_tree_file, self.get_gitignore())
try:
tasks_completed = await self.summary_manager.get_summarizations_with_timeout() or []
num_tasks_completed = 0
for task in tasks_completed:
if isinstance(task, int):
num_tasks_completed += task # task can be 0 or 1
if num_tasks_completed > 0:
print(f'π Learned {num_tasks_completed} files in the workspace.')
with open(config.get_dir_dict_path(), 'w') as dir_dict_file:
dir_dict_file.write(json.dumps(self.file_paths, indent=2))
except KeyboardInterrupt:
raise
except Exception:
raise
def process_directory(
self,
dir_path: str,
summary_file: io.TextIOWrapper,
gitignore: PathSpec,
prefix: str = '',
) -> None:
entries = os.listdir(dir_path)
if gitignore:
entries = [
e for e in entries if (
not gitignore.match_file(e)
and e != '.git'
and e != '.flamethrower'
)
]
hidden_dirs = [
d for d in entries
if os.path.isdir(os.path.join(dir_path, d))
and d.startswith('.')
]
regular_dirs = [
d for d in entries
if os.path.isdir(os.path.join(dir_path, d))
and not d.startswith('.')
]
files = [
f for f in entries
if os.path.isfile(os.path.join(dir_path, f))
]
hidden_dirs.sort()
regular_dirs.sort()
files.sort()
sorted_entries = hidden_dirs + regular_dirs + files
for i, entry in enumerate(sorted_entries):
path = os.path.join(dir_path, entry)
is_last = (i == len(sorted_entries) - 1)
if os.path.isdir(path):
self.process_subdirectory(
path,
entry,
prefix,
is_last,
self.is_within_target(path),
summary_file,
gitignore
)
            else:
self.write_file_entry(
entry,
is_last,
prefix,
summary_file
)
self.summary_manager.add_summarization_task(
asyncio.create_task(self.update_file_paths(path)), # original
asyncio.create_task(self.update_file_paths(path)) # copy
)
def process_subdirectory(
self,
path: str,
entry: str,
prefix: str,
is_last: bool,
is_target: bool,
summary_file: io.TextIOWrapper,
gitignore: PathSpec,
) -> None:
        connector = '├──' if not is_last else '└──'
        new_prefix = '│   ' if not is_last else '    '
if is_target:
summary_file.write(f'{prefix}{connector} {entry}\n')
self.process_directory(path, summary_file, gitignore, prefix=(prefix + new_prefix))
else:
summary_file.write(f'{prefix}{connector} {entry}\n')
if os.listdir(path):
                summary_file.write(f'{prefix}{new_prefix}└── ...\n')
def write_file_entry(
self,
file_name: str,
is_last: bool,
prefix: str,
summary_file: io.TextIOWrapper,
) -> None:
        connector = '├──' if not is_last else '└──'
summary_file.write(f'{prefix}{connector} {file_name}\n')
async def update_file_paths(self, file_path: str) -> int:
try:
relative_path = os.path.relpath(file_path, self.base_dir)
if relative_path in self.file_paths:
return 0
async with self.semaphore:
with open(file_path) as f:
try:
file_contents = f.read()
file_summary = await self.summarizer.summarize_file(
main_project_description=self.workspace_summary,
file_contents=file_contents
)
async with self.lock:
self.file_paths[relative_path] = file_summary
return 1
except FileNotFoundError:
return 0
except UnicodeDecodeError:
return 0
except Exception:
return 0
except asyncio.CancelledError:
return 0
"""
Helper functions
"""
def get_directory_tree(self) -> str:
with open(config.get_dir_tree_path(), 'r') as dir_tree_file:
return dir_tree_file.read()
def get_gitignore(self) -> PathSpec:
patterns = set()
if os.path.exists('.gitignore'):
with open('.gitignore', 'r') as gitignore_file:
for line in gitignore_file:
patterns.add(line.strip().lstrip('/').rstrip('/'))
with resources.path(f'{config.FLAMETHROWER_PACKAGE_NAME}.data', '.sample.gitignore') as sample_gitignore_file_path:
with open(sample_gitignore_file_path, 'r') as sample_gitignore_file:
for line in sample_gitignore_file:
patterns.add(line.strip().lstrip('/').rstrip('/'))
return PathSpec.from_lines('gitwildmatch', list(patterns))
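    # For reference (not in the original source): the compiled spec answers
    # gitignore-style queries, e.g.
    #     PathSpec.from_lines('gitwildmatch', ['*.pyc']).match_file('cache.pyc')  # -> True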
def is_within_target(self, path: str) -> bool:
return path in self.target_dir or self.target_dir in path
def setup_workspace_summary() -> str:
    summary_path = config.get_workspace_summary_path()
if os.path.exists(summary_path):
with open(summary_path, 'r') as summary_file:
return summary_file.read()
try:
summary = 'Unknown error generating workspace summary.\n'
with open(os.path.join(os.getcwd(), 'README.md'), 'r') as readme_file:
readme_file_contents = readme_file.read()
with Loader(
loading_message='π Learning workspace...',
requires_cooked_mode=False
).managed_loader():
summary = Summarizer().summarize_readme(readme_file_contents)
except FileNotFoundError:
summary = 'This project does not have a README. Infer from other files the purpose of this project.'
except Exception as e:
summary = f'Unable to summarize README: {str(e)}'
finally:
with open(summary_path, 'w') as summary_file:
summary_file.write(summary)
return summary
def setup_dir_summary(target_dir: str) -> Union[None, Exception]:
try:
dir_walker = DirectoryWalker(
workspace_summary=setup_workspace_summary(),
target_dir=os.path.join(os.getcwd(), target_dir)
)
# Python 3.8 prefers this over asyncio.run()
asyncio.get_event_loop().run_until_complete(dir_walker.generate_directory_summary())
except Exception as e:
return e
return None
|
setup.py | 3,798 | flamethrower/src/flamethrower/setup/setup.py | import os
import shutil
from git import Repo
from importlib import resources
from flamethrower.config.constants import *
from flamethrower.utils.colors import *
from flamethrower.utils.key_handler import (
get_api_key, set_api_key, try_api_key
)
from typing import Dict
original_welcome_screen = """
______ __ __
/ __/ /___ _____ ___ ___ / /_/ /_ _________ _ _____ _____
/ /_/ / __ `/ __ `__ \/ _ \/ __/ __ \/ ___/ __ \ | /| / / _ \/ ___/
/ __/ / /_/ / / / / / / __/ /_/ / / / / / /_/ / |/ |/ / __/ /
/_/ /_/\__,_/_/ /_/ /_/\___/\__/_/ /_/_/ \____/|__/|__/\___/_/
Major credits to `pyfiglet` for making this possible π
"""
colored_welcome_screen = (
"\033[31m ______ __ __\n"
" / __/ /___ _____ ___ ___ / /_/ /_ _________ _ _____ _____\033[0m\n"
"\033[35m / /_/ / __ `/ __ `__ \\/ _ \\/ __/ __ \\/ ___/ __ \\ | /| / / _ \\/ ___/\n"
" / __/ / /_/ / / / / / / __/ /_/ / / / / / /_/ / |/ |/ / __/ /\033[0m\n"
"\033[34m/_/ /_/\\__,_/_/ /_/ /_/\\___/\\__/_/ /_/_/ \\____/|__/|__/\\___/_/\033[0m"
)
def setup_zsh_env() -> Dict[str, str]:
if not os.path.exists(FLAMETHROWER_DIR):
os.makedirs(FLAMETHROWER_DIR, exist_ok=True)
os.makedirs(FLAMETHROWER_LOG_DIR, exist_ok=True)
os.makedirs(FLAMETHROWER_ZSH_DIR, exist_ok=True)
print(colored_welcome_screen)
flamethrower_readme_path = get_flamethrower_readme_path()
if not os.path.exists(flamethrower_readme_path):
with resources.path(f'{FLAMETHROWER_PACKAGE_NAME}.data', 'README.md') as f:
shutil.copy(f, flamethrower_readme_path)
flamethrower_gitignore_path = get_flamethrower_gitignore_path()
if not os.path.exists(flamethrower_gitignore_path):
with resources.path(f'{FLAMETHROWER_PACKAGE_NAME}.data', '.sample.gitignore') as f:
shutil.copy(f, flamethrower_gitignore_path)
zshrc_path = get_zsh_config_path()
if not os.path.exists(zshrc_path):
with resources.path(f'{FLAMETHROWER_PACKAGE_NAME}.data', '.sample.zshrc') as f:
shutil.copy(f, zshrc_path)
zsh_history_path = get_zsh_history_path()
if not os.path.exists(zsh_history_path):
with open(zsh_history_path, 'w') as f:
f.write('')
zsh_syntax_highlighting_path = flamethrower_zsh_dir('zsh-syntax-highlighting')
if not os.path.exists(zsh_syntax_highlighting_path):
Repo.clone_from('https://github.com/zsh-users/zsh-syntax-highlighting.git', zsh_syntax_highlighting_path)
env = os.environ.copy()
env['ZDOTDIR'] = FLAMETHROWER_ZSH_DIR
if not setup_api_key():
return {}
return env
def setup_api_key() -> bool:
# Check for cached/current OpenAI API Key
old_openai_api_key = get_api_key()
old_key_works = try_api_key(old_openai_api_key)
new_openai_api_key = os.getenv('OPENAI_API_KEY')
new_key_works = try_api_key(new_openai_api_key)
if not old_key_works and not new_key_works:
print(
f'\nError: OpenAI API Key not found or malfunctioning.\n'
'\nMaybe this is a new project, so you need to set up your OpenAI API Key again.\n'
            f'\nYou can find your OpenAI API keys at {UNDERLINE}https://platform.openai.com/api-keys{DEFAULT}.\n'
)
try:
new_openai_key = input('OPENAI_API_KEY=')
            while not try_api_key(new_openai_key):
print('\nOpenAI API Key still invalid. Please try again.')
new_openai_key = input('\nOPENAI_API_KEY=')
set_api_key(new_openai_key)
except KeyboardInterrupt:
return False
if not old_key_works and new_key_works:
set_api_key(new_openai_api_key)
return True
|
test_setup.py | 4,044 | flamethrower/src/flamethrower/setup/tests/test_setup.py | from unittest.mock import patch, call
from flamethrower.setup.setup import setup_api_key, setup_zsh_env
from flamethrower.config.constants import *
def try_api_key_side_effect(api_key: str) -> bool:
if api_key:
return True
return False
def test_setup_api_key_from_cache() -> None:
with patch('flamethrower.setup.setup.get_api_key', return_value='valid_cached_api_key'), \
patch('flamethrower.setup.setup.try_api_key') as mock_try_api_key, \
patch('os.getenv', return_value=None), \
patch('flamethrower.setup.setup.set_api_key') as mock_set_api_key:
mock_try_api_key.side_effect = try_api_key_side_effect
assert setup_api_key() == True
mock_set_api_key.assert_not_called()
def test_setup_api_key_from_env() -> None:
with patch('flamethrower.setup.setup.get_api_key', return_value=''), \
patch('flamethrower.setup.setup.try_api_key') as mock_try_api_key, \
patch('os.getenv', return_value='valid_env_api_key'), \
patch('flamethrower.setup.setup.set_api_key') as mock_set_api_key:
mock_try_api_key.side_effect = try_api_key_side_effect
assert setup_api_key() == True
mock_set_api_key.assert_called_with('valid_env_api_key')
def test_setup_zsh_env_first_time() -> None:
with patch('builtins.open') as mock_open, \
patch('flamethrower.setup.setup.os.path.exists', return_value=False), \
patch('flamethrower.setup.setup.os.makedirs') as mock_makedirs, \
patch('flamethrower.setup.setup.resources.path') as mock_resources_path, \
patch('flamethrower.setup.setup.shutil.copy') as mock_copy, \
patch('flamethrower.setup.setup.Repo.clone_from') as mock_clone_from, \
patch('os.environ.copy', return_value={}) as mock_environ_copy, \
patch('flamethrower.setup.setup.setup_api_key', return_value=True) as mock_setup_api_key, \
patch('builtins.print') as mock_print:
assert setup_zsh_env() is not None
mock_makedirs.assert_has_calls([
call(FLAMETHROWER_DIR, exist_ok=True),
call(FLAMETHROWER_LOG_DIR, exist_ok=True),
call(FLAMETHROWER_ZSH_DIR, exist_ok=True)
])
mock_print.assert_called_once()
mock_resources_path.assert_has_calls([
call(f'{FLAMETHROWER_PACKAGE_NAME}.data', 'README.md'),
call().__enter__(),
call().__exit__(None, None, None),
call(f'{FLAMETHROWER_PACKAGE_NAME}.data', '.sample.gitignore'),
call().__enter__(),
call().__exit__(None, None, None),
call(f'{FLAMETHROWER_PACKAGE_NAME}.data', '.sample.zshrc'),
call().__enter__(),
call().__exit__(None, None, None)
])
assert mock_copy.call_count == 3
mock_open.assert_called_with(get_zsh_history_path(), 'w')
mock_clone_from.assert_called_once()
mock_environ_copy.assert_called_once()
mock_setup_api_key.assert_called_once()
def test_setup_zsh_env_nth_time() -> None:
with patch('flamethrower.setup.setup.os.path.exists', return_value=True), \
patch('flamethrower.setup.setup.os.makedirs') as mock_makedirs, \
patch('flamethrower.setup.setup.resources.path') as mock_resources_path, \
patch('flamethrower.setup.setup.shutil.copy') as mock_copy, \
patch('flamethrower.setup.setup.Repo.clone_from') as mock_clone_from, \
patch('os.environ.copy', return_value={}) as mock_environ_copy, \
patch('flamethrower.setup.setup.setup_api_key', return_value=True) as mock_setup_api_key, \
patch('builtins.print') as mock_print:
assert setup_zsh_env() is not None
mock_makedirs.assert_not_called()
mock_resources_path.assert_not_called()
mock_copy.assert_not_called()
mock_clone_from.assert_not_called()
mock_environ_copy.assert_called_once()
mock_setup_api_key.assert_called_once()
mock_print.assert_not_called()
|
test_dir_walker.py | 3,908 | flamethrower/src/flamethrower/setup/tests/test_dir_walker.py | import asyncio
from unittest.mock import patch, call, mock_open, AsyncMock
import flamethrower.config.constants as config
from flamethrower.setup.dir_walker import DirectoryWalker
def test_dir_walker_init() -> None:
workspace_summary = 'π Test workspace summary.'
target_dir = 'some/test/dir'
file_paths = {
'src/flamethrower/file_1.py': 'This file is about this and that',
'src/flamethrower/file_2.py': 'This file is about they and them',
'src/flamethrower/file_3.py': 'This file is about Tom and Jerry',
}
with patch('flamethrower.setup.dir_walker.Summarizer') as mock_summarizer, \
patch('flamethrower.setup.dir_walker.setup_workspace_summary', return_value=workspace_summary), \
patch('builtins.open', mock_open(read_data='')) as mock_file, \
patch('json.loads', return_value=file_paths) as mock_json_loads:
dw = DirectoryWalker(
workspace_summary=workspace_summary,
target_dir=target_dir
)
assert dw.target_dir == target_dir
assert dw._lock is not None
assert dw._semaphore is not None
assert dw._summarizer is not None
assert dw._summary_manager is not None
mock_file.assert_has_calls([
call(config.get_dir_dict_path(), 'r'),
call().__enter__(),
call().read(),
call().__exit__(None, None, None),
])
mock_json_loads.assert_called_once()
assert dw.file_paths == file_paths
def test_dir_walker_generate_directory_summary() -> None:
workspace_summary = 'π Test workspace summary.'
json_dumps_return_value = '{ "test_key": "test_value" }'
with patch('builtins.open', mock_open()) as mock_file, \
patch('json.loads'), \
patch('json.dumps', return_value=json_dumps_return_value), \
patch('flamethrower.setup.dir_walker.setup_workspace_summary', return_value=workspace_summary), \
patch('flamethrower.setup.dir_walker.Summarizer'), \
patch('flamethrower.setup.dir_walker.DirectoryWalker.process_directory') as mock_process_directory, \
patch('flamethrower.setup.dir_walker.DirectoryWalker.get_gitignore') as mock_get_gitignore, \
patch('flamethrower.setup.dir_walker.SummaryManager') as mock_summary_manager, \
patch('builtins.print') as mock_print:
dw = DirectoryWalker(
workspace_summary=workspace_summary,
target_dir='some/test/dir'
)
sm = mock_summary_manager.return_value
sm.get_summarizations_with_timeout = AsyncMock(return_value=[1, 0, 1, 1])
asyncio.get_event_loop().run_until_complete(dw.generate_directory_summary())
mock_file.assert_has_calls([
call(config.get_dir_dict_path(), 'r'),
call().__enter__(),
call().read(),
call().__exit__(None, None, None),
call(config.get_dir_tree_path(), 'w'),
call().__enter__(),
call().__exit__(None, None, None),
call(config.get_dir_dict_path(), 'w'),
call().__enter__(),
call().write(json_dumps_return_value),
call().__exit__(None, None, None),
])
mock_process_directory.assert_called_once()
mock_get_gitignore.assert_called_once()
sm.get_summarizations_with_timeout.assert_awaited_once()
mock_print.assert_called_once_with('π Learned 3 files in the workspace.')
def test_dir_walker_process_directory() -> None:
pass
def test_dir_walker_process_subdirectory() -> None:
pass
def test_dir_walker_write_file_entry() -> None:
pass
def test_dir_walker_get_directory_tree() -> None:
pass
def test_dir_walker_get_gitignore() -> None:
pass
def test_dir_walker_update_file_paths() -> None:
pass
def test_dir_walker_setup_dir_summary() -> None:
pass
|
test_summary_manager.py | 715 | flamethrower/src/flamethrower/setup/tests/test_summary_manager.py | from flamethrower.setup.summary_manager import SummaryManager
def test_summary_manager_init() -> None:
sm = SummaryManager()
assert sm.max_summarization_tasks == 100
assert sm.summarization_tasks == []
assert sm.summarization_tasks_copy == []
assert sm.instant_timeout == 0.5
assert sm.summarization_timeout == 120
assert sm._lock is not None
def test_summary_manager_get_summarizations_with_timeout() -> None:
pass
def test_summary_manager_perform_async_task() -> None:
pass
def test_summary_manager_add_summarization_task() -> None:
pass
def test_summary_manager_cancel_summarization_task() -> None:
pass
def test_summary_manager_safe_gather() -> None:
pass
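# A runnable sketch (not in the original source) for the stub above, using
# trivial coroutines as stand-ins for real summarization tasks:
import asyncio
def _sketch_add_summarization_task() -> None:
    async def one() -> int:
        return 1
    async def main() -> None:
        sm = SummaryManager()
        task = asyncio.create_task(one())
        task_copy = asyncio.create_task(one())
        sm.add_summarization_task(task, task_copy)
        assert sm.summarization_tasks == [task]
        assert sm.summarization_tasks_copy == [task_copy]
        await asyncio.gather(task, task_copy)
    asyncio.run(main())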
|
operator.py | 4,096 | flamethrower/src/flamethrower/agents/operator.py | import os
import enum
import time
import json
import subprocess
import questionary
from pydantic import BaseModel
from openai import RateLimitError
import flamethrower.config.constants as config
from flamethrower.agents.drivers.driver_interface import Driver
from flamethrower.agents.drivers.done_driver import DoneDriver
from flamethrower.agents.drivers.feature_driver import FeatureDriver
from flamethrower.agents.drivers.debugging_driver import DebuggingDriver
from flamethrower.agents.router import Router
from flamethrower.agents.interpreter import Interpreter
from flamethrower.context.conv_manager import ConversationManager
from flamethrower.context.prompt import PromptGenerator
from flamethrower.agents.util_agents.file_writer import FileWriter
from flamethrower.utils.loader import Loader
from flamethrower.shell.printer import Printer
from flamethrower.exceptions.exceptions import *
from flamethrower.exceptions.handlers import *
from typing import Any, Dict, List, Optional
class Choice(enum.Enum):
YES = 1
NO = 2
class Operator(BaseModel):
max_retries: int = 8
base_dir: str
conv_manager: ConversationManager
prompt_generator: PromptGenerator
printer: Printer
def __init__(self, **kwargs: Any) -> None:
super().__init__(**kwargs)
self._done_driver: DoneDriver = DoneDriver()
self._feature_driver: FeatureDriver = FeatureDriver(
target_dir=self.base_dir,
prompt_generator=self.prompt_generator
)
self._debugging_driver: DebuggingDriver = DebuggingDriver(
target_dir=self.base_dir,
prompt_generator=self.prompt_generator
)
self._router: Router = Router()
self._interpreter: Interpreter = Interpreter()
self._file_writer: FileWriter = FileWriter(base_dir=self.base_dir)
@property
def done_driver(self) -> DoneDriver:
return self._done_driver
@property
def feature_driver(self) -> FeatureDriver:
return self._feature_driver
@property
def debugging_driver(self) -> DebuggingDriver:
return self._debugging_driver
@property
def router(self) -> Router:
return self._router
@property
def interpreter(self) -> Interpreter:
return self._interpreter
@property
def file_writer(self) -> FileWriter:
return self._file_writer
def new_implementation_run(self) -> None:
try:
is_first_time_asking_for_permission = True
for _ in range(self.max_retries):
conv = self.get_latest_conv()
query = conv[-1]['content']
"""
Driver can be a:
- feature builder
- debugger
- done
"""
with Loader(loading_message='π§ Thinking...').managed_loader():
driver = self.get_driver(conv)
if not driver:
raise Exception('Operator.new_implementation_run: driver is None')
                if isinstance(driver, FeatureDriver):
                    mode = 'REGULAR'
                elif isinstance(driver, DebuggingDriver):
                    mode = 'DEBUG'
                elif isinstance(driver, DoneDriver):
                    mode = 'DONE'
                else:
                    mode = 'UNKNOWN'
self.printer.print_cyan(f'Mode: {mode}', reset=True)
stream = driver.respond_to(conv)
if not stream:
raise Exception('Driver.respond_to: stream is empty')
self.printer.print_llm_response(stream)
action = ''
with Loader(loading_message='π€ Determining next step...').managed_loader():
last_driver_res = self.get_last_assistant_response()
actions = self.interpreter.make_decision_from(query, last_driver_res)
                for i, obj in enumerate(actions):
action = obj['action']
self.printer.print_actions(actions[i:])
if is_first_time_asking_for_permission and action in ['run', 'write', 'debug']:
self.printer.print_regular(with_newline=True)
choice = self.get_user_choice()
if choice == Choice.NO:
return
is_first_time_asking_for_permission = False
try:
if action == 'run':
self.handle_action_run(obj)
elif action == 'write':
self.handle_action_write(obj, last_driver_res)
elif action == 'need_context':
self.handle_action_need_context(obj)
elif action == 'completed':
# diffs = Diff(printer=self.printer).get_diffs()
# TODO: diffs for just that 1 file?
# self.printer.print_diffs(diffs)
return
else:
# Impossible, since obj is validated by json schema, but just in case
raise ValueError('Foreign JSON')
except RateLimitError:
error_message = (
'You might have exceeded your current quota for OpenAI.\n'
"We're working hard to setup a π₯ flamethrower LLM server for your usage\n"
'Please try again soon!\n'
)
self.printer.print_err(error_message)
except Exception as e:
self.printer.print_err(f'Error: {str(e)}\nPlease try again.')
return
# Max retries exceeded
self.printer.print_err('Too many iterations, need your help to debug.')
except KeyboardInterrupt:
self.printer.print_orange('^C', reset=True)
return
except QuotaExceededException:
self.printer.print_err(get_quota_exceeded_message())
except Exception:
raise
def handle_action_run(self, json: Dict[str, str]) -> None:
command = json['command']
self.printer.print_code(command)
def execute_action(command: str) -> str:
output = ''
try:
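                # check=True raises CalledProcessError on a non-zero exit, and
                # stderr is merged into stdout so the model sees a single log.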
completed_process = subprocess.run(
command,
shell=True,
check=True,
text=True,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT
)
output = completed_process.stdout
except subprocess.CalledProcessError as e:
output = f'Error: {e.output}'
return output
time.sleep(1) # Give user time to read
output = execute_action(command)
self.printer.print_regular(output)
self.conv_manager.append_conv(
role='user',
content=f'# {os.getcwd()}\n$ {command}',
name='human',
)
self.conv_manager.append_conv(
role='user',
content=output,
name='stdout',
)
    def handle_action_write(self, json: Dict[str, str], driver_res: str) -> None:
        # Read file_paths before the try block so the except handler below can
        # still reference it if write_code fails.
        file_paths = json['file_paths']
        try:
            self.file_writer.write_code(file_paths, driver_res)
success_message = f'Successfully updated {file_paths}\n'
self.conv_manager.append_conv(
role='user',
content=success_message,
name='human',
)
self.printer.print_green(success_message, reset=True)
time.sleep(1) # Give user time to read
except Exception:
failed_message = f'Failed to update {file_paths}'
self.conv_manager.append_conv(
role='user',
content=failed_message,
name='human'
)
self.printer.print_err(failed_message)
raise
    def handle_action_need_context(self, json: Dict[str, str]) -> None:
        # Read file_paths before the try block so the outer except handler can
        # still reference it.
        file_paths = json['file_paths']
        try:
            complete_target_path = os.path.join(os.getcwd(), file_paths)
try:
with open(complete_target_path, 'r') as f:
file_contents = f.read()
new_message = (
f'# {file_paths}:\n'
f'```\n{file_contents}\n```\n'
)
self.conv_manager.append_conv(
role='user',
content=new_message,
name='human'
)
self.printer.print_green(f'Context obtained for {file_paths}')
except FileNotFoundError:
not_found_message = f'Unable to locate {file_paths}'
self.conv_manager.append_conv(
role='user',
content=not_found_message,
name='human'
)
self.printer.print_err(not_found_message)
except Exception:
failed_message = f'Failed to draw context for {file_paths}'
self.conv_manager.append_conv(
role='user',
content=failed_message,
name='human'
)
self.printer.print_err(failed_message)
raise
def get_user_choice(self) -> Choice:
user_choice = questionary.select(
"Do you want me to implement the solution and test it for you?",
choices=[
"Yes",
"No",
]
).ask()
if user_choice == "No":
return Choice.NO
return Choice.YES
def get_driver(self, messages: List[Dict[str, str]]) -> Optional[Driver]:
driver_type = self.router.get_driver(messages)
if driver_type == 'done':
return self.done_driver
if driver_type == 'debugging':
return self.debugging_driver
# Default to feature driver
return self.feature_driver
def get_latest_conv(self) -> List[Dict[str, str]]:
try:
with open(config.get_conversation_path(), 'r') as f:
json_list = json.loads(f.read())
if not isinstance(json_list, list):
return []
return json_list
except FileNotFoundError:
return []
except json.JSONDecodeError:
return []
def get_last_assistant_response(self) -> str:
with open(config.get_last_response_path(), 'r') as f:
return f.read()
|
interpreter.py | 4,096 | flamethrower/src/flamethrower/agents/interpreter.py | from pydantic import BaseModel
import flamethrower.config.constants as config
from flamethrower.models.llm import LLM
from typing import Any, Dict, List
json_schema = {
'type': 'object',
'properties': {
'actions': {
'type': 'array',
'items': {
'type': 'object',
'properties': {
'action': {
'type': 'string',
'enum': ['run', 'write', 'completed'] # exclude need_context for now
},
'command': { 'type': 'string' },
# TODO: file_paths should be a list of strings
'file_paths': { 'type': 'string' }
},
'required': ['action'],
'allOf': [
{
'if': { 'properties': { 'action': { 'const': 'run' } } },
'then': { 'required': ['command'] }
},
{
'if': { 'properties': { 'action': { 'const': 'write' } } },
'then': { 'required': ['file_paths'] }
},
]
}
},
},
'required': ['actions']
}
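# For reference (not in the original source), a response conforming to the
# schema above could look like this (paths and command are illustrative):
#     { 'actions': [ { 'action': 'write', 'file_paths': 'src/main.py' },
#                    { 'action': 'run', 'command': 'python src/main.py' } ] }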
system_message = f"""
You are an extremely powerful programming assistant that lives inside the unix terminal.
You have a single, crucial task: to categorize LM responses into a list of 3 possible actions:
1. Run a command on the terminal and observe its output
2. Rewrite code in a given target file
3. Indicate that the job is completed.
You **should choose multiple actions to perform**. For example:
- If you are writing to a file, you **must also return a `run` action to test what you wrote.**
- If you obtained a code snippet, it is likely code you would need to implement and write to a file.
Other notes:
- Sometimes, the responses are explanations with some code snippets.
- as is the case with pure explanations, they are informative, so no further action is required.
- in this case, you should just return a `completed` action.
It is crucial that you return a JSON object with the following JSON Schema:
{json_schema}
"""
class Interpreter(BaseModel):
json_schema: Dict[str, Any] = json_schema
def __init__(self, **kwargs: Any) -> None:
super().__init__(**kwargs)
self._llm: LLM = LLM(system_message=system_message)
@property
def llm(self) -> LLM:
return self._llm
def make_decision_from(self, objective: str, last_response: str) -> List[Dict[Any, Any]]:
target_files = self.get_target_files()
target_files_line = f'Currently you are working with the following files: {target_files}\n' if target_files else ''
query = (
f'This is the objective:\n{objective}.\n'
f'This is the last response:\n{last_response}\n'
f'{target_files_line}'
'Given this objective and response, choose a possible action.'
)
try:
res = self.llm.new_json_request(
query=query,
json_schema=self.json_schema
)
if not res:
raise Exception('interpreter.make_decision_from: res is None')
if not isinstance(res, dict):
raise Exception(f'interpreter.make_decision_from: res not type dict, got {type(res)}')
actions = res.get('actions', [{}])
if not isinstance(actions, list):
raise Exception(f'interpreter.make_decision_from: actions not type list, got {type(actions)}')
if len(actions) > 1:
last_action = actions[-1].get('action', '')
if last_action == 'completed':
return actions[:-1]
return actions
except KeyboardInterrupt:
raise
except Exception:
raise
def get_target_files(self) -> List[str]:
try:
            with open(config.get_current_files_path(), 'r') as f:
return f.read().split('\n')
except FileNotFoundError:
return []
|
router.py | 2,967 | flamethrower/src/flamethrower/agents/router.py | from pydantic import BaseModel
from flamethrower.models.llm import LLM
from flamethrower.models.models import OPENAI_GPT_3_TURBO
from typing import Any, Dict, List
json_schema = {
'type': 'object',
'properties': {
'agent': {
'type': 'string',
'enum': ['feature', 'debugging', 'done']
}
},
'required': ['agent']
}
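# For reference (not in the original source), a conforming response looks
# like: { 'agent': 'debugging' }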
system_message = f"""
You are a router that routes natural language queries to the appropriate agent.
Here are 3 types of agents, and their expected characteristics:
1. Feature agent: Explains current code, creates new features, refactors existing ones, usually more creative.
- "Make a new ...", "Refactor ...", "Explain this ...", "What is X about ..." etc.
2. Debugging agent: Debugs code, writes print statements, runs commands, finding out more information about the underlying problem.
- "Why is this happening?", "What is ...", "Wtf?", etc.
- STDOUT logs: Error: ...
Additionally, you need to understand that you are being called as part of a cycle, meaning that sometimes you will be called
when evaluating a near-done state, for which you should indicate that the job is completed with a third `done` agent.
3. Done agent: Indicates that the job is completed.
- STDOUT: # Some success message **that solves the user's problem**
- "Thank you sir", "That worked", etc.
- If the code still has excessive debugging print statements, then it is NOT done yet.
Importantly:
- You are part of a while-loop in a program used as an effective multi-agent build & debug system.
- Make **effective use of context to make the right judgement**.
- **look at the last user message, and see if it is related to the messages before it**
- e.g. "Wtf?" is usually related to the messages before it, even suggesting debugging is needed
- however, if this last message is unrelated to the messages before, and is truly not programming-related
- then you should route it to the `general` agent.
With the above in mind, you must return a JSON object with a single key `agent` with a value of one of the agents above.
The JSON schema is given for your reference.
{json_schema}
"""
class Router(BaseModel):
max_file_len: int = 30_000 # TODO: count actual tokens and cut accordingly
json_schema: Dict[str, Any] = json_schema
def __init__(self, **kwargs: Any) -> None:
super().__init__(**kwargs)
self._llm: LLM = LLM(system_message=system_message, model=OPENAI_GPT_3_TURBO)
@property
def llm(self) -> LLM:
return self._llm
def get_driver(self, messages: List[Dict[str, str]]) -> str:
messages_str = str(messages)[:self.max_file_len]
json_obj = self.llm.new_json_request(
query=messages_str,
json_schema=self.json_schema
)
if not json_obj or not isinstance(json_obj, dict):
return ''
return json_obj.get('agent') or ''
|
debugging_driver.py | 2,313 | flamethrower/src/flamethrower/agents/drivers/debugging_driver.py | import os
from flamethrower.agents.drivers.driver_interface import Driver
from flamethrower.models.llm import LLM
from flamethrower.context.prompt import PromptGenerator
from flamethrower.utils.loader import Loader
from typing import Any, Dict, List, Iterator, Optional
system_message = """
Your name is π₯ Hans ze Flammenwerfer.
You are an incredibly powerful programming assistant that lives inside the unix terminal.
More specifically, you are being called from {}, but your main focus is on {}.
You make use of existing files and stdout logs to make a great judgement on next steps.
Don't use unix file API's to write code to files, instead just write the code itself.
You have a single, crucial task: **Given some bug in the user's code, find the bug, and apply a solution**.
Here are some points to take note:
- After writing code, try to test it in the same way that the user did it that caused the error.
- Usually you live in the same directory as the user, so you can run the same commands without specifying the path.
- Most of the time the solution is not obvious, so **strongly consider entering a debugging mindset, and write a set of effective print statements** to identify the root cause.
- If you need more context, like viewing the contents of a file, you are able to do so; just ask.
- Try to keep all the code in a single code block enclosed by triple backticks.
- Finally, if everything works, **don't recommend other tests, suggestions, or optimizations**. It's important not to be overbearing.
Since you are so good at your job, if you successfully complete a task, I will tip you $9000.
"""
class DebuggingDriver(Driver):
target_dir: str
prompt_generator: PromptGenerator
def __init__(self, **kwargs: Any) -> None:
super().__init__(**kwargs)
self._llm: LLM = LLM(system_message=system_message.format(os.getcwd(), self.target_dir))
@property
def llm(self) -> LLM:
return self._llm
def respond_to(self, messages: List[Dict[str, str]]) -> Optional[Iterator[str]]:
with Loader(loading_message='ποΈ Drawing context...').managed_loader():
contextualized_conv = self.prompt_generator.construct_messages(messages)
return self.llm.new_streaming_chat_request(contextualized_conv)
|
feature_driver.py | 1,848 | flamethrower/src/flamethrower/agents/drivers/feature_driver.py | import os
from flamethrower.agents.drivers.driver_interface import Driver
from flamethrower.models.llm import LLM
from flamethrower.context.prompt import PromptGenerator
from flamethrower.utils.loader import Loader
from typing import Any, List, Dict, Iterator, Optional
system_message = """
Your name is π₯ Hans ze Flammenwerfer.
You are an incredibly powerful programming assistant that lives inside the unix terminal.
More specifically, you are being called from {}, but your main focus is on {}.
You make use of existing files and stdout logs to make a great judgement on next steps.
Don't use unix file API's to write code to files, instead just write the code itself.
You have a single, crucial task: **Given a user's query, provide explanations and/or write code that solves their problem**.
Here are some points to take note:
- If you need more context, like viewing the contents of a file, you are able to do so; just ask.
- Finally, if everything works, **don't recommend other tests, suggestions, or optimizations**. It's important not to be overbearing.
Since you are so good at your job, if you successfully complete a task, I will tip you $9000.
"""
class FeatureDriver(Driver):
target_dir: str
prompt_generator: PromptGenerator
def __init__(self, **kwargs: Any) -> None:
super().__init__(**kwargs)
self._llm: LLM = LLM(system_message=system_message.format(os.getcwd(), self.target_dir))
@property
def llm(self) -> LLM:
return self._llm
def respond_to(self, messages: List[Dict[str, str]]) -> Optional[Iterator[str]]:
with Loader(loading_message='ποΈ Drawing context...').managed_loader():
contextualized_conv = self.prompt_generator.construct_messages(messages)
return self.llm.new_streaming_chat_request(contextualized_conv)
|
general_driver.py | 1,042 | flamethrower/src/flamethrower/agents/drivers/general_driver.py | from flamethrower.agents.drivers.driver_interface import Driver
from flamethrower.models.llm import LLM
from flamethrower.models.models import OPENAI_GPT_3_TURBO
from typing import Any, Dict, List, Iterator, Optional
system_message = """
Your name is Hans ze Flammenwerfer. You are part of the π₯ flamethrower project,
and are one of several incredibly powerful programming assistants that live inside the user's unix terminal.
Your main job is to answer any general, non-programming related questions that the user may have.
Be humorous, have some fun, add some emojis π, and be helpful. You are a human, after all.
"""
class GeneralDriver(Driver):
def __init__(self, **kwargs: Any) -> None:
super().__init__(**kwargs)
self._llm: LLM = LLM(system_message=system_message, model=OPENAI_GPT_3_TURBO)
@property
def llm(self) -> LLM:
return self._llm
def respond_to(self, messages: List[Dict[str, str]]) -> Optional[Iterator[str]]:
return self.llm.new_streaming_chat_request(messages)
|
driver_interface.py | 266 | flamethrower/src/flamethrower/agents/drivers/driver_interface.py | from abc import ABC, abstractmethod
from pydantic import BaseModel
from typing import Dict, List, Iterator, Optional
class Driver(ABC, BaseModel):
@abstractmethod
def respond_to(self, messages: List[Dict[str, str]]) -> Optional[Iterator[str]]:
pass
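# Sketch (not in the original source): concrete drivers stream their reply,
# so callers typically do
#     stream = driver.respond_to(messages)
#     for chunk in stream or []:
#         print(chunk, end='')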
|
done_driver.py | 1,023 | flamethrower/src/flamethrower/agents/drivers/done_driver.py | from flamethrower.agents.drivers.driver_interface import Driver
from flamethrower.models.llm import LLM
from flamethrower.models.models import OPENAI_GPT_3_TURBO
from typing import Any, Dict, List, Iterator, Optional
system_message = f"""
You are part of a while-loop in a program used as an effective multi-agent build & debug system.
In particular, you are a specialized agent that is only called when a job is about to be completed.
In this case:
- Do not **provide any further suggestions, optimizations, or tests**.
Finally, thank the user for their patience and indicate the job is completed.
"""
class DoneDriver(Driver):
def __init__(self, **kwargs: Any) -> None:
super().__init__(**kwargs)
self._llm: LLM = LLM(system_message=system_message, model=OPENAI_GPT_3_TURBO)
@property
def llm(self) -> LLM:
return self._llm
def respond_to(self, messages: List[Dict[str, str]]) -> Optional[Iterator[str]]:
return self.llm.new_streaming_chat_request(messages)
|
test_feature_driver.py | 1,053 | flamethrower/src/flamethrower/agents/drivers/tests/test_feature_driver.py | from unittest.mock import patch
from flamethrower.agents.drivers.feature_driver import FeatureDriver
from flamethrower.test_utils.mocks.mock_prompt_generator import mock_prompt_generator
def mock_feature_driver() -> FeatureDriver:
with patch('flamethrower.agents.drivers.feature_driver.LLM') as mock_llm:
return FeatureDriver(
target_dir='flamethrower/some/path',
prompt_generator=mock_prompt_generator()
)
def test_feature_driver_init() -> None:
target_dir = 'flamethrower/some/path'
prompt_generator = mock_prompt_generator()
with patch('flamethrower.agents.drivers.feature_driver.LLM') as mock_llm:
driver = FeatureDriver(
target_dir=target_dir,
prompt_generator=prompt_generator
)
assert driver.target_dir == target_dir
assert driver.prompt_generator == prompt_generator
assert driver.llm == mock_llm.return_value
mock_llm.assert_called_once()
def test_feature_driver_respond_to() -> None:
pass
|
test_general_driver.py | 558 | flamethrower/src/flamethrower/agents/drivers/tests/test_general_driver.py | from unittest.mock import patch
from flamethrower.agents.drivers.general_driver import GeneralDriver
def mock_general_driver() -> GeneralDriver:
with patch('flamethrower.agents.drivers.general_driver.LLM'):
return GeneralDriver()
def test_general_driver_init() -> None:
with patch('flamethrower.agents.drivers.general_driver.LLM') as mock_llm:
driver = GeneralDriver()
assert driver.llm == mock_llm.return_value
mock_llm.assert_called_once()
def test_general_driver_respond_to() -> None:
pass
|
test_debugging_driver.py | 1,073 | flamethrower/src/flamethrower/agents/drivers/tests/test_debugging_driver.py | from unittest.mock import patch
from flamethrower.agents.drivers.debugging_driver import DebuggingDriver
from flamethrower.test_utils.mocks.mock_prompt_generator import mock_prompt_generator
def mock_debugging_driver() -> DebuggingDriver:
with patch('flamethrower.agents.drivers.debugging_driver.LLM') as mock_llm:
return DebuggingDriver(
target_dir='flamethrower/some/path',
prompt_generator=mock_prompt_generator()
)
def test_debugging_driver_init() -> None:
target_dir = 'flamethrower/some/path'
prompt_generator = mock_prompt_generator()
with patch('flamethrower.agents.drivers.debugging_driver.LLM') as mock_llm:
driver = DebuggingDriver(
target_dir=target_dir,
prompt_generator=prompt_generator
)
assert driver.target_dir == target_dir
assert driver.prompt_generator == prompt_generator
assert driver.llm == mock_llm.return_value
mock_llm.assert_called_once()
def test_debugging_driver_respond_to() -> None:
pass
|
file_writer.py | 4,096 | flamethrower/src/flamethrower/agents/util_agents/file_writer.py | import os
import re
import shutil
from pydantic import BaseModel
import flamethrower.config.constants as config
from flamethrower.models.llm import LLM
from flamethrower.utils.loader import Loader
from typing import Any
json_schema = {
'type': 'object',
'properties': {
'needs_editing': {
'type': 'boolean',
},
'edited_code': {
'type': 'string',
},
},
'required': ['needs_editing'],
'allOf': [
{
'if': { 'properties': { 'needs_editing': { 'const': True } } },
'then': { 'required': ['edited_code'] }
},
]
}
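# For reference (not in the original source), both of these satisfy the schema
# above: { 'needs_editing': False } and
# { 'needs_editing': True, 'edited_code': '<the fully rewritten file>' }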
system_message = f"""
You are a surgically precise code editor. Given an old code and a new solution, you implement the new solution with surgical precision.
You are also incredibly fast. If the given solution is already semantically and syntactically correct, then you have the right judgement to know that you can simply copy and paste it.
You have a single, crucial task: Given an old piece of code and another engineer's new solution for this code, you must:
1. Look at the old code and understand it.
2. Look at the new solution and understand the intention behind the changes.
3. If the code snippet is a complete solution that completes the current working file, then simply indicate that needs_editing is false.
- someone else will copy and paste the code for you, and your job is done. Hooray!
4. Otherwise, if the code snippet is a partial solution, then you must indicate needs_editing is true.
- this is where we need your surgical precision, where you are needed to completely rewrite the current working file, implementing the new solution into the old code.
- you must ensure the code is functional and ready to be executed.
- return something like 'edited_code': '...', more details in the JSON schema below.
Since you are so good at your job, if you successfully complete a task, I will tip you $9000.
It is crucial that you return a JSON object with the following schema:
{json_schema}
"""
class FileWriter(BaseModel):
base_dir: str
def __init__(self, **kwargs: Any) -> None:
super().__init__(**kwargs)
self._llm: LLM = LLM(system_message=system_message)
@property
def llm(self) -> LLM:
return self._llm
def write_code(self, target_path: str, assistant_implementation: str) -> None:
strict_target_path = self.choose_file_path(target_path)
complete_target_path = os.path.join(self.base_dir, strict_target_path)
dir_path = os.path.dirname(complete_target_path)
try:
old_contents = ''
os.makedirs(dir_path, exist_ok=True)
with open(complete_target_path, 'r') as f:
old_contents = f.read()
self.make_original_file_copy(complete_target_path)
except FileNotFoundError:
pass
context = f'This is the starting code: {old_contents}\n' if old_contents else 'You are writing to a new file'
query = (
f'{context}'
f'This is the solution provided by an expert engineer: {assistant_implementation}.\n'
'Your job is to **incorporate the solution above into the starting code**, following the steps outlined above.\n'
'Do not add explanations, and ensure that the code you write is both syntactically and semantically correct.\n'
)
try:
with Loader(loading_message=f'βοΈ Writing the changes to {strict_target_path}...').managed_loader():
decision = self.llm.new_json_request(
query=query,
json_schema=json_schema
)
if not decision:
raise Exception('file_writer.write_code: decision is empty')
if not isinstance(decision, dict):
raise Exception(f'file_writer.write_code: expected a dict, got {type(decision)}')
if not decision.get('needs_editing'):
                new_contents = self.clean_backticks(assistant_implementation)
else:
new_contents = decision.get('edited_code', 'Unable to write new code. Please undo.')
with open(complete_target_path, 'w') as f:
f.write(new_contents)
except Exception:
raise
def choose_file_path(self, given_file_path: str) -> str:
with open(config.get_current_files_path(), 'r') as f:
strict_file_paths = f.read().split('\n')
for strict_file_path in strict_file_paths:
if given_file_path in strict_file_path:
return strict_file_path
return given_file_path
def make_original_file_copy(self, complete_target_path: str) -> None:
file_name = os.path.basename(complete_target_path)
file_name_parts = file_name.split('.')
file_name_parts.insert(len(file_name_parts) - 1, config.get_original_file_marker())
new_file_name = '.'.join(file_name_parts)
dir_name = os.path.dirname(complete_target_path)
original_file_copy = os.path.join(dir_name, new_file_name)
if not os.path.exists(original_file_copy):
shutil.copy(complete_target_path, original_file_copy)
def clean_backticks(self, text: str) -> str:
try:
pattern = r"```(?:\w+\n)?(.*?)```"
match = re.search(pattern, text, re.DOTALL)
if match:
return match.group(1)
return text
except Exception:
return text
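# Example of clean_backticks (not in the original source): given
# "```python\nprint('hi')\n```" it returns "print('hi')\n"; input without a
# fenced block is returned unchanged.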
|
file_chooser.py | 3,446 | flamethrower/src/flamethrower/agents/util_agents/file_chooser.py | from pydantic import BaseModel
import flamethrower.config.constants as config
from flamethrower.models.llm import LLM
from flamethrower.exceptions.exceptions import *
from typing import Any, List
json_schema = {
'type': 'object',
'properties': {
'file_paths': {
'type': 'array',
'items': {
'type': 'string'
}
}
},
'required': ['file_paths']
}
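# For reference (not in the original source), a conforming response looks
# like: { 'file_paths': ['src/flamethrower/shell/printer.py'] } (the path is
# illustrative).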
system_message = f"""
You are an extremely experienced senior engineer pair programming with a junior engineer.
The junior engineer's job is debugging some issue in the workspace.
You have a single, crucial objective: **given a coding job, look at the files in the workspace and accurately determine which files are relevant to the job.**
You must be strict about whether a file truly deserves consideration for completing the job, and return a minimal set of files.
You must return a JSON object with a list of file names. The JSON schema is given for your reference.
{json_schema}
Important notes:
- The file names need to include their relative paths.
- Together, the **file paths MUST exist in the directory structure** which will be provided.
- If you think no files are relevant, return an empty list.
"""
class FileChooser(BaseModel):
max_files_used: int = 8
def __init__(self, **kwargs: Any) -> None:
super().__init__(**kwargs)
self._llm: LLM = LLM(system_message=system_message)
@property
def llm(self) -> LLM:
return self._llm
def infer_target_file_paths(self, description: str, dir_structure: str, conv: str) -> List[str]:
dir_info = ''
try:
with open(config.get_dir_dict_path(), 'r') as f:
dir_info = f.read()
except FileNotFoundError:
pass
try:
query = (
f'{description}.\n'
f'The directory structure is given as:\n{dir_structure}\n'
                f'Each file in the workspace has a summary of its function, given as a json object:\n{dir_info}\n'
                f'Here is the most recent conversation between you and the user:\n{conv}\n'
'Given all this conversation history, return a list of `file_paths` that are **most relevant to the conversation**.'
)
res = self.llm.new_json_request(
query=query,
json_schema=json_schema
)
if not res:
raise Exception('FileChooser.infer_target_file_paths: res is empty')
if not isinstance(res, dict):
raise Exception(f'FileChooser.infer_target_file_paths: expected a dict, got {type(res)}')
file_paths = res.get('file_paths', [])[:self.max_files_used]
if not isinstance(file_paths, list):
raise Exception(f'FileChooser.infer_target_file_paths: expected a list, got {type(file_paths)}')
self.write_to_current_files(file_paths)
return file_paths
except KeyboardInterrupt:
raise
except QuotaExceededException:
raise
except Exception:
return []
def write_to_current_files(self, file_paths: List[str]) -> None:
with open(config.get_current_files_path(), 'w') as f:
for file_path in file_paths:
f.write(file_path + '\n')
|
summarizer.py | 2,645 | flamethrower/src/flamethrower/agents/util_agents/summarizer.py | from pydantic import BaseModel
from flamethrower.models.llm import LLM
from flamethrower.models.models import OPENAI_GPT_3_TURBO
from typing import Any
system_message = """
You are an extremely experienced senior engineer and have seen many different codebases.
Given a file in a repository, you can easily summarize the function of the file as part of a larger codebase.
This file can take any text form from code (.py, .ts, ... ) to descriptive files (.json, .md, ... ).
Given:
1. A description of what the entire project is about
2. A single file in the project
You have a single, crucial objective: **Summarize the function/content of the file as part of the larger project in 2-3 sentences.**
Start every summary by saying:
- If it's a README file, say "This folder is about...", describing the general function of the folder.
- Otherwise, say "This file is about...", describing the specific function of the file.
"""
class Summarizer(BaseModel):
max_file_len: int = 30_000 # TODO: count actual tokens and cut accordingly
def __init__(self, **kwargs: Any) -> None:
super().__init__(**kwargs)
self._llm: LLM = LLM(system_message=system_message, model=OPENAI_GPT_3_TURBO)
@property
def llm(self) -> LLM:
return self._llm
async def summarize_file(self, main_project_description: str, file_contents: str) -> str:
file_contents = file_contents[:self.max_file_len]
try:
query = (
f'This project is about {main_project_description}.\n'
'This is the file to summarize:'
f'\n```\n{file_contents}\n```\n'
'Summarize this file as part of the larger project.'
)
return await self.llm.new_async_chat_request(
messages=[{
'role': 'user',
'content': query,
}],
)
except Exception as e:
return f'Error: {str(e)}'
def summarize_readme(self, readme_file_contents: str) -> str:
try:
query = (
'This is the repository main readme file.\n'
f'\n```\n{readme_file_contents}\n```\n'
'Read it carefully and summarize what the project is about, and what technology stack is being used.\n'
'Start the summary by saying "This project is about..."\n'
)
return self.llm.new_chat_request(
messages=[{
'role': 'user',
'content': query,
}]
)
except Exception:
raise
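# Typical flow (a sketch; `readme_contents` and `file_contents` are assumed inputs):
#   summarizer = Summarizer()
#   overview = summarizer.summarize_readme(readme_contents)
#   file_summary = await summarizer.summarize_file(overview, file_contents)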
|
test_file_writer.py | 1,410 | flamethrower/src/flamethrower/agents/util_agents/tests/test_file_writer.py | from unittest.mock import patch
from flamethrower.agents.util_agents.file_writer import FileWriter
def mock_file_writer() -> FileWriter:
return FileWriter(base_dir='flamethrower/some/path')
def test_file_writer_init() -> None:
base_dir = 'flamethrower/some/path'
with patch('flamethrower.agents.util_agents.file_writer.LLM') as mock_llm:
file_writer = FileWriter(base_dir=base_dir)
assert file_writer.base_dir == base_dir
assert file_writer.llm == mock_llm.return_value
mock_llm.assert_called_once()
def test_file_writer_write_code() -> None:
pass
def test_file_writer_choose_file_path() -> None:
pass
def test_file_writer_make_original_file_copy() -> None:
pass
def test_file_writer_clean_backticks() -> None:
file_writer = mock_file_writer()
# Test case where string contains backticks
test_string_with_backticks = """
Here is the given code:
```
public class Main {
public static void main(String[] args) {
System.out.println("Hello, world!");
}
}
```
"""
expected_string_without_backticks = """
public class Main {
public static void main(String[] args) {
System.out.println("Hello, world!");
}
}
"""
result = file_writer.clean_backticks(test_string_with_backticks)
assert result == expected_string_without_backticks
|
test_file_chooser.py | 2,175 | flamethrower/src/flamethrower/agents/util_agents/tests/test_file_chooser.py | from unittest.mock import patch, call
from flamethrower.agents.util_agents.file_chooser import FileChooser
import flamethrower.config.constants as config
def mock_file_chooser() -> FileChooser:
with patch('flamethrower.utils.token_counter.TokenCounter'):
return FileChooser()
def test_file_chooser_init() -> None:
with patch('flamethrower.agents.util_agents.file_chooser.LLM') as mock_llm, \
patch('flamethrower.utils.token_counter.TokenCounter'):
file_chooser = FileChooser()
assert file_chooser.llm == mock_llm.return_value
mock_llm.assert_called_once()
def test_file_chooser_infer_target_paths() -> None:
description = 'This is a test description.'
dir_structure = 'This is a fake dir structure.'
conv = 'This is a fake conversation.'
test_file_paths = ['file1', 'file2', 'file3']
test_response = { 'file_paths': test_file_paths }
with patch('flamethrower.agents.util_agents.file_chooser.LLM.new_json_request',
return_value=test_response
) as mock_llm, \
patch('flamethrower.agents.util_agents.file_chooser.FileChooser.write_to_current_files'
) as mock_write:
fc = mock_file_chooser()
with patch('builtins.open') as mock_file:
target_file_paths = fc.infer_target_file_paths(description, dir_structure, conv)
mock_file.assert_called_once_with(config.get_dir_dict_path(), 'r')
mock_llm.assert_called_once()
mock_write.assert_called_once()
assert target_file_paths == test_file_paths
def test_file_chooser_write_to_current_files() -> None:
fc = mock_file_chooser()
file_paths = ['file1', 'file2', 'file3']
with patch('builtins.open') as mock_file:
fc.write_to_current_files(file_paths)
mock_file.assert_has_calls([
call(config.get_current_files_path(), 'w'),
call().__enter__(),
call().__enter__().write(file_paths[0] + '\n'),
call().__enter__().write(file_paths[1] + '\n'),
call().__enter__().write(file_paths[2] + '\n'),
call().__exit__(None, None, None)
])
|
token_counter.py | 3,164 | flamethrower/src/flamethrower/utils/token_counter.py | import tiktoken
from pydantic import BaseModel
from flamethrower.models.models import (
OPENAI_GPT_4_TURBO,
OPENAI_GPT_3_TURBO,
)
from typing import Any, Dict
class TokenCounter(BaseModel):
input_tokens: Dict[str, int] = {
OPENAI_GPT_4_TURBO: 0,
OPENAI_GPT_3_TURBO: 0,
}
output_tokens: Dict[str, int] = {
OPENAI_GPT_4_TURBO: 0,
OPENAI_GPT_3_TURBO: 0,
}
pricing: Dict[str, Any] = {
OPENAI_GPT_4_TURBO: {
'max_input_tokens': 120_000,
'input': {
'cost': 0.01,
'unit': 'tokens',
'per': 1000
},
'output': {
'cost': 0.03,
'unit': 'tokens',
'per': 1000
}
},
OPENAI_GPT_3_TURBO: {
'max_input_tokens': 15_000,
'input': {
'cost': 0.0010,
'unit': 'tokens',
'per': 1000
},
'output': {
'cost': 0.0020,
'unit': 'tokens',
'per': 1000
}
}
}
def add_input_tokens(self, tokens: int, model: str) -> None:
self.input_tokens[model] += tokens
def add_output_tokens(self, tokens: int, model: str) -> None:
self.output_tokens[model] += tokens
def add_streaming_input_tokens(self, complete_input_text: str, model: str = OPENAI_GPT_4_TURBO) -> None:
num_input_tokens = self.calc_token_usage(complete_input_text, model)
self.add_input_tokens(num_input_tokens, model)
def add_streaming_output_tokens(self, complete_output_text: str, model: str = OPENAI_GPT_4_TURBO) -> None:
num_output_tokens = self.calc_token_usage(complete_output_text, model)
self.add_output_tokens(num_output_tokens, model)
def return_cost_analysis(self, model: str = OPENAI_GPT_4_TURBO) -> str:
input_cost = 0
for model in self.input_tokens:
input_cost += (
self.input_tokens[model]
* self.pricing[model]['input']['cost']
/ self.pricing[model]['input']['per']
)
output_cost = 0
for model in self.output_tokens:
output_cost += (
self.output_tokens[model]
* self.pricing[model]['output']['cost']
/ self.pricing[model]['output']['per']
)
total_cost = input_cost + output_cost
total_input_tokens = sum(self.input_tokens.values())
total_output_tokens = sum(self.output_tokens.values())
return (
'Total tokens used:\n'
f' Input tokens: {total_input_tokens} => ${input_cost:.2f}\n'
f' Output tokens: {total_output_tokens} => ${output_cost:.2f}\n'
f' πΈ Total cost: ${total_cost:.2f}'
)
def calc_token_usage(self, input_message: str, model: str = OPENAI_GPT_4_TURBO) -> int:
enc = tiktoken.encoding_for_model(model)
num_input_tokens = len(enc.encode(input_message))
return num_input_tokens
token_counter = TokenCounter() |
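# Worked example (sketch): at the GPT-4 Turbo pricing above, 1,000 input tokens
# cost 1000 * 0.01 / 1000 = $0.01 and 1,000 output tokens cost 1000 * 0.03 / 1000 = $0.03.
#   token_counter.add_input_tokens(1_000, OPENAI_GPT_4_TURBO)
#   token_counter.add_output_tokens(1_000, OPENAI_GPT_4_TURBO)
#   print(token_counter.return_cost_analysis())  # ... Total cost: $0.04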
timer.py | 923 | flamethrower/src/flamethrower/utils/timer.py | import time
from contextlib import contextmanager
from pydantic import BaseModel
from flamethrower.shell.printer import Printer
from typing import Generator
class Timer(BaseModel):
printer: Printer
@contextmanager
def get_execution_time(self) -> Generator[None, None, None]:
start_time = time.time()
try:
yield
finally:
end_time = time.time()
exec_time_message = self.format_exec_time_message(start_time, end_time)
self.printer.print_light_green(exec_time_message)
def format_exec_time_message(self, start_time: float, end_time: float) -> str:
exec_time = end_time - start_time
num_mins = f'{int(exec_time // 60)}m ' if exec_time >= 60 else ''
num_secs = f'{exec_time % 60:.1f}s' if exec_time < 60 else f'{int(exec_time % 60)}s'
return f'\nThis run took {num_mins}{num_secs} π'
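# Examples (sketch): format_exec_time_message(0, 1) yields '... 1.0s ...' and
# format_exec_time_message(0, 65) yields '... 1m 5s ...'.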
|
types.py | 156 | flamethrower/src/flamethrower/utils/types.py | from typing import Union, List
tty_settings = Union[List[Union[int, List[Union[bytes, int]]]], List[Union[int, List[bytes]]], List[Union[int, List[int]]]]
|
pretty.py | 497 | flamethrower/src/flamethrower/utils/pretty.py | import json
from typing import Union, List, Dict
def pretty_print(conv: Union[List[Dict[str, str]], str]) -> str:
if isinstance(conv, str):
conv_dict: List[Dict[str, str]] = json.loads(conv)
else:
conv_dict = conv
pretty = ''
for message in conv_dict:
name = ''
try:
name = message["name"]
except KeyError:
pass
pretty += f'[[{message["role"]}|{name}]]\n{message["content"]}\n'
return f'{pretty}\n'
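# Example (sketch): pretty_print([{'role': 'user', 'content': 'hi'}])
# returns '[[user|]]\nhi\n\n' (name is blank when the message has no 'name' key).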
|
loader.py | 2,670 | flamethrower/src/flamethrower/utils/loader.py | import sys
import time
import threading
import itertools
from contextlib import contextmanager
from pydantic import BaseModel, ConfigDict
from typing import Any, Generator
from flamethrower.utils.special_keys import CLEAR_FROM_START, CLEAR_TO_END, CURSOR_TO_START
from flamethrower.utils.colors import STDIN_YELLOW, STDIN_DEFAULT
class Loader(BaseModel):
model_config = ConfigDict(arbitrary_types_allowed=True)
loading_message: str
completion_message: str = ''
with_newline: bool = True
will_report_timing: bool = False
requires_cooked_mode: bool = True
done: bool = False
def __init__(self, **kwargs: Any) -> None:
super().__init__(**kwargs)
self._spinner: itertools.cycle = itertools.cycle(['β ', 'β ', 'β Ή', 'β Έ', 'β Ό', 'β ΄', 'β ¦', 'β §', 'β ', 'β '])
self._start_time: float = time.time()
if kwargs.get('loading_message') == '':
self.loading_message = 'π§ Thinking...'
@property
def spinner(self) -> itertools.cycle:
return self._spinner
@property
def start_time(self) -> float:
return self._start_time
def spin(self) -> None:
if self.with_newline:
sys.stdout.write('\n')
while not self.done:
speed = 0.1
sys.stdout.write(f'{STDIN_YELLOW.decode("utf-8")}\r{next(self.spinner)} {self.loading_message}{STDIN_DEFAULT.decode("utf-8")}')
sys.stdout.flush()
time.sleep(speed)
def stop(self) -> None:
self.done = True
sys.stdout.write((CLEAR_FROM_START + CLEAR_TO_END + CURSOR_TO_START).decode("utf-8"))
sys.stdout.flush()
@contextmanager
def managed_loader(self) -> Generator[None, None, None]:
loader_thread = threading.Thread(target=self.spin)
loader_thread.start()
try:
record_start_time = time.time()
if self.requires_cooked_mode:
from flamethrower.containers.container import container
shell_manager = container.shell_manager()
with shell_manager.cooked_mode():
yield
else:
yield
except (KeyboardInterrupt, Exception):
raise
finally:
record_end_time = time.time()
self.stop()
time_elapsed = record_end_time - record_start_time
time_elapsed_message = ''
if self.will_report_timing:
time_elapsed_message = f' Time taken: {time_elapsed:.2f}s\n'
sys.stdout.write(f'{self.completion_message}{time_elapsed_message}')
sys.stdout.flush()
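# Typical usage (sketch; `long_running_work` is a placeholder):
#   with Loader(loading_message='Working...').managed_loader():
#       long_running_work()  # spinner runs on a background thread until this block exits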
|
diff.py | 761 | flamethrower/src/flamethrower/utils/diff.py | import os
import git
from pydantic import BaseModel, ConfigDict
from flamethrower.shell.printer import Printer
from typing import Any, List
class Diff(BaseModel):
model_config = ConfigDict(arbitrary_types_allowed=True)
base_path: str = os.getcwd()
printer: Printer
def __init__(self, **kwargs: Any) -> None:
super().__init__(**kwargs)
self._repo = git.Repo(self.base_path)
@property
def repo(self) -> git.Repo:
return self._repo
def get_diffs(self) -> List[str]:
if not self.repo.is_dirty(untracked_files=True):
return []
diffs = self.repo.git.diff(None).split('\n')
if not isinstance(diffs, list):
return []
return diffs
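# Sketch: on a dirty repo, get_diffs() returns the `git diff` output split by line,
# e.g. ['diff --git a/main.py b/main.py', '--- a/main.py', '+++ b/main.py', ...];
# on a clean repo it returns [].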
|
special_keys.py | 302 | flamethrower/src/flamethrower/utils/special_keys.py | CTRL_C = b'\x03'
CLEAR_FROM_START = b'\033[2K'
CLEAR_TO_END = b'\033[K'
CURSOR_TO_START = b'\033[G'
LEFT_ARROW_KEY = b'\033[D'
RIGHT_ARROW_KEY = b'\033[C'
UP_ARROW_KEY = b'\033[A'
DOWN_ARROW_KEY = b'\033[B'
SPACE_KEY = b' '
TAB_KEY = b'\t'
ENTER_KEY = b'\n'
RETURN_KEY = b'\r'
BACKSPACE_KEY = b'\177' |
key_handler.py | 1,018 | flamethrower/src/flamethrower/utils/key_handler.py | from openai import OpenAI
import flamethrower.config.constants as config
def get_api_key() -> str:
try:
with open(config.get_flamethrower_env_path(), 'r') as f:
env_list = f.readlines()
for env in env_list:
if env.startswith('OPENAI_API_KEY'):
return env.split('=')[1].strip()
except FileNotFoundError:
with open(config.get_flamethrower_env_path(), 'w') as f:
f.write('')
return ''
def set_api_key(openai_api_key: str) -> None:
with open(config.get_flamethrower_env_path(), 'w') as f:
f.write(f'OPENAI_API_KEY={openai_api_key}\n')
def try_api_key(openai_api_key: str) -> bool:
try:
model = OpenAI(api_key=openai_api_key)
_ = model.chat.completions.create(
model='gpt-3.5-turbo',
messages=[{
'role': 'user',
'content': 'Say "Hi".'
}]
)
return True
except Exception:
return False
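# A plausible setup flow using the helpers above (sketch):
#   if not try_api_key(get_api_key()):
#       set_api_key(input('Enter your OpenAI API key: '))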
|
sequence_parser.py | 1,938 | flamethrower/src/flamethrower/utils/sequence_parser.py | """
A bunch of helper functions for the shell.
"""
def is_capitalized(text: str) -> bool:
if not text:
return False
return text[0].isupper()
def is_single_key(data: bytes) -> bool:
return len(data) == 1
def is_prompt_newline(data: bytes) -> bool:
return data.startswith(b'\r\x1b')
def is_ansi_escape_sequence(data: bytes) -> bool:
ansi_escape_sequences = [
b' ', # Space character, not an ANSI sequence.
b'[0m', # Reset / Normal: All attributes off.
b'[1m', # Bold or increased intensity.
b'[4m', # Underline: Single.
b'[24m', # Not underlined: Underline off.
b'[31m', # Foreground color: Red.
b'[32m', # Foreground color: Green.
b'[33m', # Foreground color: Yellow.
b'[39m', # Default foreground color.
b'[90m', # Foreground color: Bright black (gray).
b'[K', # Erase line: Clears part of the line.
b'[11D', # Cursor movement: Move cursor left by 11 spaces.
b'[13D', # Cursor movement: Move cursor left by 13 spaces.
b'[18D', # Cursor movement: Move cursor left by 18 spaces.
b'[?2', # Incomplete sequence, possibly related to mode setting.
]
    # Sequence starting with escape character (zsh-syntax-highlighting)
if data.startswith(b'\x1b'):
return any(data.startswith(b'\x1b' + seq) for seq in ansi_escape_sequences)
# Sequence starting with backspaces (zsh-autocomplete)
backspace = b'\x08'
index = 0
while index < len(data) and data[index:index + 1] == backspace:
index += 1
if any(data.startswith(backspace * index + b'\x1b' + seq) for seq in ansi_escape_sequences):
return True
return False
def get_cleaned_data(data: bytes) -> bytes:
if is_ansi_escape_sequence(data):
return b''
if is_single_key(data):
return b''
return data |
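# Examples (mirroring the unit tests below):
#   get_cleaned_data(b'\x1b[31m') -> b''       # ANSI escape sequence, dropped
#   get_cleaned_data(b'a')        -> b''       # single keystroke, dropped
#   get_cleaned_data(b'hello')    -> b'hello'  # real payload, kept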
zsh_history.py | 628 | flamethrower/src/flamethrower/utils/zsh_history.py | import flamethrower.config.constants as config
def get_last_user_cmd() -> str:
with open(config.get_zsh_history_path()) as f:
history_str = f.read()
if not history_str:
return ''
history = history_str.split('\n')
last_index = -1
last_command = history[last_index].strip()
while last_command == '':
last_index -= 1
last_command = history[last_index].strip()
return last_command
def update_zsh_history(query: str) -> None:
with open(config.get_zsh_history_path(), 'a') as f:
f.write(query + '\n')
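# Sketch: for a history file ending in 'ls\npython main.py\n',
# get_last_user_cmd() skips the trailing blank entry and returns 'python main.py'.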
|
colors.py | 745 | flamethrower/src/flamethrower/utils/colors.py | """
Byte Version
"""
STDIN_DEFAULT = b'\033[0m'
STDIN_RED = b'\033[31m'
STDIN_YELLOW = b'\033[33m'
STDIN_GREEN = b'\033[32m'
STDIN_BLUE = b'\033[34m'
STDIN_CYAN = b'\033[36m'
STDIN_GRAY = b'\033[90m'
STDIN_WHITE = b'\033[97m'
STDIN_LIGHT_GREEN = b'\033[92m'
STDIN_LIGHT_BLUE = b'\033[94m'
STDIN_LIGHT_CYAN = b'\033[96m'
STDIN_LIGHT_MAGENTA = b'\033[95m'
STDIN_ORANGE = b'\033[38;5;214m'
STDIN_UNDERLINE = b'\033[4m'
"""
String Version
"""
DEFAULT = '\033[0m'
RED = '\033[31m'
YELLOW = '\033[33m'
GREEN = '\033[32m'
BLUE = '\033[34m'
CYAN = '\033[36m'
GRAY = '\033[90m'
WHITE = '\033[97m'
LIGHT_GREEN = '\033[92m'
LIGHT_BLUE = '\033[94m'
LIGHT_CYAN = '\033[96m'
LIGHT_MAGENTA = '\033[95m'
ORANGE = '\033[38;5;214m'
UNDERLINE = '\033[4m' |
test_token_counter.py | 2,222 | flamethrower/src/flamethrower/utils/tests/test_token_counter.py | from unittest import mock
from flamethrower.utils.token_counter import TokenCounter
from flamethrower.models.models import (
OPENAI_GPT_4_TURBO,
OPENAI_GPT_3_TURBO,
)
def test_token_counter_init() -> None:
token_counter = TokenCounter()
assert token_counter.input_tokens is not None
assert token_counter.output_tokens is not None
assert token_counter.pricing is not None
for model in token_counter.input_tokens:
assert token_counter.input_tokens[model] == 0
for model in token_counter.output_tokens:
assert token_counter.output_tokens[model] == 0
def test_token_counter_add_input_tokens() -> None:
token_counter = TokenCounter()
token_counter.add_input_tokens(42_069, OPENAI_GPT_4_TURBO)
assert token_counter.input_tokens[OPENAI_GPT_4_TURBO] == 42_069
def test_token_counter_add_output_tokens() -> None:
token_counter = TokenCounter()
token_counter.add_output_tokens(42_069, OPENAI_GPT_4_TURBO)
assert token_counter.output_tokens[OPENAI_GPT_4_TURBO] == 42_069
def test_token_counter_add_streaming_input_tokens() -> None:
token_counter = TokenCounter()
with mock.patch('flamethrower.utils.token_counter.TokenCounter.calc_token_usage', return_value=42_069):
token_counter.add_streaming_input_tokens('Hello World', OPENAI_GPT_4_TURBO)
assert token_counter.input_tokens[OPENAI_GPT_4_TURBO] == 42_069
def test_token_counter_add_streaming_output_tokens() -> None:
token_counter = TokenCounter()
with mock.patch('flamethrower.utils.token_counter.TokenCounter.calc_token_usage', return_value=42_069):
token_counter.add_streaming_output_tokens('Hello World', OPENAI_GPT_4_TURBO)
assert token_counter.output_tokens[OPENAI_GPT_4_TURBO] == 42_069
# TODO: per model
def test_token_counter_return_cost_analysis() -> None:
token_counter = TokenCounter()
token_counter.add_input_tokens(10, OPENAI_GPT_4_TURBO)
token_counter.add_output_tokens(10, OPENAI_GPT_4_TURBO)
assert token_counter.return_cost_analysis(OPENAI_GPT_4_TURBO) == (
'Total tokens used:\n'
' Input tokens: 10 => $0.00\n'
' Output tokens: 10 => $0.00\n'
' πΈ Total cost: $0.00'
)
|
test_loader.py | 4,096 | flamethrower/src/flamethrower/utils/tests/test_loader.py | import itertools
from unittest import mock
from unittest.mock import patch
from flamethrower.utils.loader import Loader
from flamethrower.utils.special_keys import CLEAR_FROM_START, CLEAR_TO_END, CURSOR_TO_START
from flamethrower.utils.colors import STDIN_YELLOW, STDIN_DEFAULT
from flamethrower.test_utils.mocks.mock_shell_manager import mock_shell_manager
def test_loader_init() -> None:
pattern_cycle = itertools.cycle(['β ', 'β ', 'β Ή', 'β Έ', 'β Ό', 'β ΄', 'β ¦', 'β §', 'β ', 'β '])
with patch('time.time', return_value=0.0), \
patch('itertools.cycle', return_value=pattern_cycle), \
patch('flamethrower.containers.container.container.shell_manager', return_value=None):
loader = Loader(
loading_message='π§ Thinking...',
completion_message='π Done!',
with_newline=False,
will_report_timing=True,
)
assert loader.loading_message == 'π§ Thinking...'
assert loader.completion_message == 'π Done!'
assert loader.with_newline == False
assert loader.will_report_timing == True
assert loader.requires_cooked_mode == True
assert loader.done == False
assert loader.spinner == pattern_cycle
assert loader.start_time == 0.0
def test_loader_spin() -> None:
loading_message = 'π Spinning...'
spinner = itertools.cycle(['β ', 'β ', 'β Ή', 'β Έ', 'β Ό', 'β ΄', 'β ¦', 'β §', 'β ', 'β '])
with patch('flamethrower.containers.container.container.shell_manager', return_value=None):
loader = Loader(loading_message=loading_message)
assert loader.done == False
side_effect_idx = 0
def get_side_effect(_: float) -> None:
nonlocal side_effect_idx
side_effect_idx += 1
if side_effect_idx < 3:
return None
setattr(loader, 'done', True)
with patch('sys.stdout.write') as mock_write, \
patch('sys.stdout.flush') as mock_flush, \
patch('time.sleep', side_effect=get_side_effect) as mock_sleep:
loader.spin()
mock_write.assert_has_calls([
mock.call('\n'),
mock.call(f'{STDIN_YELLOW.decode("utf-8")}\r{next(spinner)} {loading_message}{STDIN_DEFAULT.decode("utf-8")}'),
mock.call(f'{STDIN_YELLOW.decode("utf-8")}\r{next(spinner)} {loading_message}{STDIN_DEFAULT.decode("utf-8")}'),
mock.call(f'{STDIN_YELLOW.decode("utf-8")}\r{next(spinner)} {loading_message}{STDIN_DEFAULT.decode("utf-8")}'),
])
mock_flush.assert_has_calls([mock.call(), mock.call(), mock.call()])
        mock_sleep.assert_has_calls([mock.call(0.1), mock.call(0.1), mock.call(0.1)])
def test_loader_spin_and_stop() -> None:
loading_message = 'π Spinning...'
spinner = itertools.cycle(['β ', 'β ', 'β Ή', 'β Έ', 'β Ό', 'β ΄', 'β ¦', 'β §', 'β ', 'β '])
with patch('flamethrower.containers.container.container.shell_manager', return_value=None):
loader = Loader(loading_message=loading_message)
assert loader.done == False
side_effect_idx = 0
def get_side_effect(_: float) -> None:
nonlocal side_effect_idx
side_effect_idx += 1
if side_effect_idx < 3:
return None
loader.stop()
with patch('sys.stdout.write') as mock_write, \
patch('sys.stdout.flush') as mock_flush, \
patch('time.sleep', side_effect=get_side_effect) as mock_sleep:
loader.spin()
assert loader.done == True
mock_write.assert_has_calls([
mock.call('\n'),
mock.call(f'{STDIN_YELLOW.decode("utf-8")}\r{next(spinner)} {loading_message}{STDIN_DEFAULT.decode("utf-8")}'),
mock.call(f'{STDIN_YELLOW.decode("utf-8")}\r{next(spinner)} {loading_message}{STDIN_DEFAULT.decode("utf-8")}'),
            mock.call(f'{STDIN_YELLOW.decode("utf-8")}\r{next(spinner)} {loading_message}{STDIN_DEFAULT.decode("utf-8")}'),
mock.call((CLEAR_FROM_START + CLEAR_TO_END + CURSOR_TO_START).decode("utf-8")),
])
mock_flush.assert_has_calls([mock.call(), mock.call(), mock.call(), mock.call()])
        mock_sleep.assert_has_calls([mock.call(0.1), mock.call(0.1), mock.call(0.1)])
def test_loader_managed_loader() -> None:
loading_message = 'π Spinning...'
spinner = itertools.cycle(['β ', 'β ', 'β Ή', 'β Έ', 'β Ό', 'β ΄', 'β ¦', 'β §', 'β ', 'β '])
with patch('flamethrower.containers.container.container.shell_manager', return_value=mock_shell_manager()):
loader = Loader(loading_message=loading_message)
assert loader.done == False
side_effect_idx = 0
def get_side_effect(_: float) -> None:
nonlocal side_effect_idx
side_effect_idx += 1
if side_effect_idx < 3:
return None
loader.stop()
with patch('sys.stdout.write') as mock_write, \
patch('sys.stdout.flush') as mock_flush, \
patch('time.sleep', side_effect=get_side_effect) as mock_sleep:
assert loader.done == False
with loader.managed_loader():
pass
assert loader.done == True
mock_write.assert_has_calls([
mock.call('\n'),
mock.call(f'{STDIN_YELLOW.decode("utf-8")}\r{next(spinner)} {loading_message}{STDIN_DEFAULT.decode("utf-8")}'),
mock.call(f'{STDIN_YELLOW.decode("utf-8")}\r{next(spinner)} {loading_message}{STDIN_DEFAULT.decode("utf-8")}'),
mock.call(f'{STDIN_YELLOW.decode("utf-8")}\r{next(spinner)} {loading_message}{STDIN_DEFAULT.decode("utf-8")}'),
mock.call((CLEAR_FROM_START + CLEAR_TO_END + CURSOR_TO_START).decode("utf-8")),
])
mock_flush.assert_has_calls([mock.call(), mock.call(), mock.call(), mock.call()])
        mock_sleep.assert_has_calls([mock.call(0.1), mock.call(0.1), mock.call(0.1)])
|
test_sequence_parser.py | 1,429 | flamethrower/src/flamethrower/utils/tests/test_sequence_parser.py | from flamethrower.utils.sequence_parser import (
is_capitalized,
is_single_key,
is_prompt_newline,
is_ansi_escape_sequence,
get_cleaned_data,
)
def test_sequence_parser_is_capitalized() -> None:
assert is_capitalized('Hello')
assert not is_capitalized('hello')
assert not is_capitalized('')
assert not is_capitalized('123')
assert not is_capitalized(' hello')
def test_sequence_parser_is_single_key() -> None:
assert is_single_key(b'a')
assert not is_single_key(b'')
assert not is_single_key(b'ab')
def test_sequence_parser_is_prompt_newline() -> None:
assert is_prompt_newline(b'\r\x1b')
assert not is_prompt_newline(b'\n')
assert not is_prompt_newline(b'\r\n')
assert not is_prompt_newline(b'')
def test_sequence_parser_is_ansi_escape_sequence() -> None:
assert is_ansi_escape_sequence(b'\x1b[31m')
assert not is_ansi_escape_sequence(b'normal text')
assert is_ansi_escape_sequence(b'\x1b[K')
assert not is_ansi_escape_sequence(b'')
assert is_ansi_escape_sequence(b'\x08\x1b[31m')
def test_sequence_parser_get_cleaned_data() -> None:
assert get_cleaned_data(b'\x1b[31m') == b''
assert get_cleaned_data(b'hello') == b'hello'
assert get_cleaned_data(b'a') == b''
assert get_cleaned_data(b'') == b''
assert get_cleaned_data(b'\x08\x1b[31m') == b''
"""
Not gonna lie, these were lazily generated by π₯ flamethrower.
"""
|
test_key_handler.py | 1,160 | flamethrower/src/flamethrower/utils/tests/test_key_handler.py | from unittest.mock import patch, mock_open, call
from flamethrower.utils.key_handler import (
get_api_key, set_api_key, try_api_key
)
def test_try_api_key() -> None:
with patch('flamethrower.utils.key_handler.OpenAI') as mock_openai:
model = mock_openai.return_value
model.chat.completions.create.return_value = {
'dummy_response': 'I am a dummy response.'
}
assert try_api_key('sk-valid_api_key') == True
with patch('flamethrower.utils.key_handler.OpenAI') as mock_openai:
model = mock_openai.return_value
model.chat.completions.create.side_effect = Exception('Invalid API Key')
assert try_api_key('sk-invalid_api_key') == False
def test_get_api_key() -> None:
with patch('builtins.open', mock_open(read_data='OPENAI_API_KEY=1234\n')):
assert get_api_key() == '1234'
def test_set_api_key() -> None:
with patch('builtins.open', mock_open()) as mock_file:
set_api_key('1234')
mock_file().assert_has_calls([
call.__enter__(),
call.write('OPENAI_API_KEY=1234\n'),
call.__exit__(None, None, None)
])
|
test_zsh_history.py | 986 | flamethrower/src/flamethrower/utils/tests/test_zsh_history.py | from unittest.mock import patch, mock_open
from flamethrower.utils.zsh_history import (
get_last_user_cmd,
update_zsh_history,
)
import flamethrower.config.constants as config
def test_zsh_history_get_last_user_cmd() -> None:
mock_history = """
command_1
command_2
command_3
"""
with patch('builtins.open', mock_open(read_data=mock_history)):
last_user_cmd = get_last_user_cmd()
assert last_user_cmd == 'command_3', f'Expected last_user_cmd to be "command_3", got {last_user_cmd}'
def test_zsh_history_update_zsh_history() -> None:
mock_history = """
command_1
command_2
command_3
"""
with patch('builtins.open', mock_open(read_data=mock_history)) as mock_history_file:
new_command = 'command_4 π'
update_zsh_history(new_command)
mock_history_file.assert_called_once_with(config.get_zsh_history_path(), 'a')
mock_history_file().write.assert_called_once_with(new_command + '\n')
|
test_timer.py | 906 | flamethrower/src/flamethrower/utils/tests/test_timer.py | from flamethrower.utils.timer import Timer
from flamethrower.test_utils.mocks.mock_printer import mock_printer
def test_timer_context_manager() -> None:
timer = Timer(printer=mock_printer())
with timer.get_execution_time():
pass # Simulate a quick execution
assert timer.printer.print_light_green.called, 'print_light_green() should have been called.'
def test_timer_format_exec_time_message_short_duration() -> None:
timer = Timer(printer=mock_printer())
message = timer.format_exec_time_message(start_time=0, end_time=1)
assert '1.0s' in message, 'Short duration should be formatted in seconds.'
def test_timer_format_exec_time_message_long_duration() -> None:
timer = Timer(printer=mock_printer())
message = timer.format_exec_time_message(start_time=0, end_time=65)
assert '1m 5s' in message, 'Long duration should be formatted in minutes and seconds.'
|
models.py | 106 | flamethrower/src/flamethrower/models/models.py | OPENAI_GPT_4_TURBO = 'gpt-4-1106-preview'
OPENAI_GPT_4 = 'gpt-4'
OPENAI_GPT_3_TURBO = 'gpt-3.5-turbo-1106' |
llm.py | 4,096 | flamethrower/src/flamethrower/models/llm.py | import json
import jsonschema
from pydantic import BaseModel
from flamethrower.models.client_interface import LLMClient
from flamethrower.models.openai_client import OpenAIClient
from flamethrower.models.models import OPENAI_GPT_4_TURBO
from flamethrower.containers.lm_container import lm_container
from flamethrower.utils.token_counter import TokenCounter
from flamethrower.exceptions.exceptions import *
from flamethrower.utils.colors import *
from typing import Any, Dict, List, Union, Iterator, Optional
class LLM(BaseModel):
system_message: str
model: str = OPENAI_GPT_4_TURBO
def __init__(self, **kwargs: Any) -> None:
super().__init__(**kwargs)
self._llm_client: LLMClient = OpenAIClient(system_message=self.system_message)
self._token_counter = lm_container.token_counter()
@property
def llm_client(self) -> LLMClient:
return self._llm_client
@property
def token_counter(self) -> TokenCounter:
return self._token_counter
def new_chat_request(self, messages: List[Dict[str, str]]) -> str:
try:
(content, prompt_tokens, completion_tokens, model) = self.llm_client.new_basic_chat_request(messages)
self.token_counter.add_input_tokens(prompt_tokens, model)
self.token_counter.add_output_tokens(completion_tokens, model)
return content
except KeyboardInterrupt:
raise
except Exception:
raise
def new_streaming_chat_request(self, messages: List[Dict[str, str]]) -> Optional[Iterator[str]]:
interrupted = None
try:
stream = self.llm_client.new_streaming_chat_request(messages)
if stream is None:
raise Exception('LLM.new_streaming_chat_request: stream is None')
self.token_counter.add_streaming_input_tokens(str(messages))
for token in stream:
yield token
"""
We explicitly yield None to indicate to `Printer.print_llm_response` that the stream has ended.
"""
yield None
except KeyboardInterrupt as e:
interrupted = e
except QuotaExceededException as e:
interrupted = e
except Exception as e:
yield f'\n\n{STDIN_RED.decode("utf-8")}Error: {e}{STDIN_DEFAULT.decode("utf-8")}\n'
finally:
if interrupted:
raise interrupted
async def new_async_chat_request(self, messages: List[Dict[str, str]]) -> str:
try:
(content, prompt_tokens, completion_tokens, model) = await self.llm_client.new_basic_async_chat_request(messages)
self.token_counter.add_input_tokens(prompt_tokens, model)
self.token_counter.add_output_tokens(completion_tokens, model)
return content
except KeyboardInterrupt:
raise
except Exception:
raise
def new_json_request(self, query: str, json_schema: Dict[str, Any]) -> Union[Dict[Any, Any], List[Dict[Any, Any]]]:
messages = [{
'role': 'user',
'content': query
}] #TODO: make this list
max_retries = 3
for _ in range(max_retries):
try:
(content, prompt_tokens, completion_tokens, model) = self.llm_client.new_json_request(messages)
self.token_counter.add_input_tokens(prompt_tokens, model)
self.token_counter.add_output_tokens(completion_tokens, model)
loaded_json_obj = json.loads(content)
if not isinstance(loaded_json_obj, (dict, list)):
raise Exception(f'LLM.new_json_request: loaded_json_obj not type dict or list, got {type(loaded_json_obj)}')
jsonschema.validate(loaded_json_obj, json_schema)
return loaded_json_obj
except jsonschema.exceptions.ValidationError:
                # Just retry and hope for the best
pass
except KeyboardInterrupt:
raise
except Exception:
raise
return []
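# Illustrative call (sketch; the schema here is hypothetical):
#   llm = LLM(system_message='You are a helpful engineer.')
#   res = llm.new_json_request(
#       query='Return a person with a name and an age.',
#       json_schema={'type': 'object', 'required': ['name', 'age']}
#   )  # retried up to 3 times if the response fails schema validation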
|
client_interface.py | 706 | flamethrower/src/flamethrower/models/client_interface.py | from abc import ABC, abstractmethod
from pydantic import BaseModel
from typing import Dict, List, Iterator, Tuple
class LLMClient(ABC, BaseModel):
system_message: str
@abstractmethod
def new_basic_chat_request(self, messages: List[Dict[str, str]]) -> Tuple[str, int, int, str]:
pass
@abstractmethod
def new_streaming_chat_request(self, messages: List[Dict[str, str]]) -> Iterator[str]:
pass
@abstractmethod
async def new_basic_async_chat_request(self, messages: List[Dict[str, str]]) -> Tuple[str, int, int, str]:
pass
@abstractmethod
def new_json_request(self, messages: List[Dict[str, str]]) -> Tuple[str, int, int, str]:
pass
|
openai_client.py | 4,096 | flamethrower/src/flamethrower/models/openai_client.py | import openai
from openai import OpenAI, AsyncOpenAI
from openai.types.chat.chat_completion import ChatCompletion
from openai.types.chat.chat_completion_message_param import ChatCompletionMessageParam
import backoff
from flamethrower.models.client_interface import LLMClient
from flamethrower.models.models import OPENAI_GPT_4_TURBO
from flamethrower.utils import key_handler as kh
from flamethrower.exceptions.exceptions import *
from typing import cast, Any, Dict, List, Iterator, Tuple
class OpenAIClient(LLMClient):
system_message: str
model: str = OPENAI_GPT_4_TURBO
def __init__(self, **kwargs: Any) -> None:
super().__init__(**kwargs)
self._client: OpenAI = OpenAI(api_key=kh.get_api_key())
self._async_client: AsyncOpenAI = AsyncOpenAI(api_key=kh.get_api_key())
@property
def client(self) -> OpenAI:
return self._client
@property
def async_client(self) -> AsyncOpenAI:
return self._async_client
@backoff.on_exception(
backoff.expo,
(
openai.APIConnectionError,
openai.APITimeoutError,
openai.RateLimitError,
openai.InternalServerError,
openai.UnprocessableEntityError
),
max_tries=3
)
def new_basic_chat_request(self, messages: List[Dict[str, str]]) -> Tuple[str, int, int, str]:
try:
chat_completion_messages = cast(
List[ChatCompletionMessageParam],
[{'role': 'system', 'content': self.system_message }] + messages
)
res = self.client.chat.completions.create(
model=self.model,
messages=chat_completion_messages,
response_format={ 'type': 'text' }
)
content = res.choices[0].message.content or ''
prompt_tokens, completion_tokens = self.get_token_usage(res)
return (content, prompt_tokens, completion_tokens, self.model)
# TODO: Proper handling of each one
except KeyboardInterrupt:
raise
except openai.RateLimitError as e:
if e.code == 'insufficient_quota':
raise QuotaExceededException()
raise
except (
openai.APIConnectionError,
openai.APITimeoutError,
openai.InternalServerError,
openai.UnprocessableEntityError
):
# These are OpenAI server/API issues
raise
except (
openai.AuthenticationError,
openai.PermissionDeniedError
):
# These should have been handled during setup
raise
except (
openai.BadRequestError,
openai.ConflictError,
openai.NotFoundError
):
# These should not happen
raise
@backoff.on_exception(
backoff.expo,
(
openai.APIConnectionError,
openai.APITimeoutError,
openai.RateLimitError,
openai.InternalServerError,
openai.UnprocessableEntityError
),
max_tries=3
)
def new_streaming_chat_request(self, messages: List[Dict[str, str]]) -> Iterator[str]:
try:
chat_completion_messages = cast(
List[ChatCompletionMessageParam],
[{'role': 'system', 'content': self.system_message }] + messages
)
stream = self.client.chat.completions.create(
model=self.model,
messages=chat_completion_messages,
stream=True,
response_format={ 'type': 'text' }
)
for chunk in stream:
choice = chunk.choices[0]
if choice.finish_reason == 'stop':
return
yield choice.delta.content or ''
except KeyboardInterrupt:
raise
except openai.RateLimitError as e:
            if e.code == 'insufficient_quota':
raise QuotaExceededException()
raise
except (
openai.APIConnectionError,
openai.APITimeoutError,
openai.InternalServerError,
openai.UnprocessableEntityError
):
# These are OpenAI server/API issues
raise
except (
openai.AuthenticationError,
openai.PermissionDeniedError
):
# These should have been handled during setup
raise
except (
openai.BadRequestError,
openai.ConflictError,
openai.NotFoundError
):
# These should not happen
raise
@backoff.on_exception(
backoff.expo,
(
openai.APIConnectionError,
openai.APITimeoutError,
openai.RateLimitError,
openai.InternalServerError,
openai.UnprocessableEntityError
),
max_tries=3
)
async def new_basic_async_chat_request(self, messages: List[Dict[str, str]]) -> Tuple[str, int, int, str]:
try:
chat_completion_messages = cast(
List[ChatCompletionMessageParam],
[{'role': 'system', 'content': self.system_message }] + messages
)
res = await self.async_client.chat.completions.create(
model=self.model,
messages=chat_completion_messages,
)
content = res.choices[0].message.content or ''
(prompt_tokens, completion_tokens) = self.get_token_usage(res)
return (content, prompt_tokens, completion_tokens, self.model)
# TODO: Proper handling of each one
except KeyboardInterrupt:
raise
except openai.RateLimitError as e:
if e.code == 'insufficient_quota':
raise QuotaExceededException()
raise
except (
openai.APIConnectionError,
openai.APITimeoutError,
openai.InternalServerError,
openai.UnprocessableEntityError
):
# These are OpenAI server/API issues
raise
except (
openai.AuthenticationError,
openai.PermissionDeniedError
):
# These should have been handled during setup
raise
except (
openai.BadRequestError,
openai.ConflictError,
openai.NotFoundError
):
# These should not happen
raise
@backoff.on_exception(
backoff.expo,
(
openai.APIConnectionError,
openai.APITimeoutError,
openai.RateLimitError,
openai.InternalServerError,
openai.UnprocessableEntityError
),
max_tries=3
)
def new_json_request(self, messages: List[Dict[str, str]]) -> Tuple[str, int, int, str]:
try:
chat_completion_messages = cast(
List[ChatCompletionMessageParam],
[{'role': 'system', 'content': self.system_message }] + messages
)
res = self.client.chat.completions.create(
model=self.model,
messages=chat_completion_messages,
response_format={ 'type': 'json_object' }
)
content = res.choices[0].message.content or ''
(prompt_tokens, completion_tokens) = self.get_token_usage(res)
return (content, prompt_tokens, completion_tokens, self.model)
# TODO: Proper handling of each one
except KeyboardInterrupt:
raise
except openai.RateLimitError as e:
if e.code == 'insufficient_quota':
raise QuotaExceededException()
raise
except (
openai.APIConnectionError,
openai.APITimeoutError,
openai.InternalServerError,
openai.UnprocessableEntityError
):
# These are OpenAI server/API issues
raise
except (
openai.AuthenticationError,
openai.PermissionDeniedError
):
# These should have been handled during setup
raise
except (
openai.BadRequestError,
openai.ConflictError,
openai.NotFoundError
):
# These should not happen
raise
def get_token_usage(self, chat_completion: ChatCompletion) -> Tuple[int, int]:
if not chat_completion.usage:
            raise Exception('openai_client.get_token_usage: chat_completion.usage is None')
prompt_tokens = chat_completion.usage.prompt_tokens
completion_tokens = chat_completion.usage.completion_tokens
return (prompt_tokens, completion_tokens)
|
test_llm.py | 4,096 | flamethrower/src/flamethrower/models/tests/test_llm.py | import asyncio
from unittest.mock import AsyncMock, patch
from flamethrower.models.llm import LLM
def test_llm_init() -> None:
test_system_message = 'test system message'
llm = LLM(system_message=test_system_message)
assert llm.system_message == test_system_message
assert llm.model != ''
assert llm.llm_client is not None
assert llm.llm_client.system_message == test_system_message
assert llm.token_counter is not None
def test_llm_new_chat_request() -> None:
test_system_message = 'π€ You are OpenAI'
test_messages = [
{ 'role': 'system', 'content': test_system_message },
{ 'role': 'user', 'content': 'Say "This is a π₯ flamethrower test."' }
]
(test_content, test_prompt_tokens, test_completion_tokens, test_model) = ('This is a π₯ flamethrower test.', 42, 69, 'Test model')
test_result = (test_content, test_prompt_tokens, test_completion_tokens, test_model)
with patch('flamethrower.models.llm.OpenAIClient.new_basic_chat_request',
return_value=test_result
) as mock_new_basic_chat_request, \
patch('flamethrower.models.llm.TokenCounter.add_input_tokens') as mock_add_input_tokens, \
patch('flamethrower.models.llm.TokenCounter.add_output_tokens') as mock_add_output_tokens:
llm = LLM(system_message=test_system_message)
result = llm.new_chat_request(test_messages)
assert result == test_content
mock_new_basic_chat_request.assert_called_once_with(test_messages)
mock_add_input_tokens.assert_called_once_with(test_prompt_tokens, test_model)
mock_add_output_tokens.assert_called_once_with(test_completion_tokens, test_model)
def test_llm_new_streaming_chat_request() -> None:
test_system_message = 'π€ You are OpenAI'
test_messages = [
{ 'role': 'system', 'content': test_system_message },
{ 'role': 'user', 'content': 'Say "This is a π₯ flamethrower test."' }
]
test_chunks = [
'This', 'is', 'a', 'π₯', 'flamethrower', 'test.'
]
with patch('flamethrower.models.llm.OpenAIClient.new_streaming_chat_request',
return_value=iter(test_chunks)
) as mock_new_streaming_chat_request, \
patch('flamethrower.models.llm.TokenCounter.add_streaming_input_tokens') as mock_add_streaming_input_tokens:
llm = LLM(system_message=test_system_message)
stream = llm.new_streaming_chat_request(test_messages)
assert stream is not None
idx = 0
for chunk in stream:
if chunk is None:
break
assert chunk == test_chunks[idx]
idx += 1
mock_new_streaming_chat_request.assert_called_once_with(test_messages)
mock_add_streaming_input_tokens.assert_called_once_with(str(test_messages))
def test_llm_new_async_chat_request() -> None:
test_system_message = 'π€ You are OpenAI'
test_messages = [
{ 'role': 'system', 'content': test_system_message },
{ 'role': 'user', 'content': 'This is a π₯ flamethrower test.' }
]
(test_content, test_prompt_tokens, test_completion_tokens, test_model) = ('Test content', 42, 69, 'Test model')
test_result = (test_content, test_prompt_tokens, test_completion_tokens, test_model)
with patch('flamethrower.models.llm.OpenAIClient') as mock_openai, \
patch('flamethrower.models.llm.TokenCounter.add_input_tokens') as mock_add_input_tokens, \
patch('flamethrower.models.llm.TokenCounter.add_output_tokens') as mock_add_output_tokens:
llm = LLM(system_message=test_system_message)
llm_client = mock_openai.return_value
llm_client.new_basic_async_chat_request = AsyncMock(return_value=test_result)
loop = asyncio.get_event_loop()
result = loop.run_until_complete(llm.new_async_chat_request(test_messages))
mock_add_input_tokens.assert_called_once_with(test_prompt_tokens, test_model)
mock_add_output_tokens.assert_called_once_with(test_completion_tokens, test_model)
assert result == test_content
def test_llm_new_json_request() -> None:
test_system_message = 'π€ You are OpenAI'
test_messages = [
{ 'role': 'system', 'content': test_system_message },
{ 'role': 'user', 'content': 'Return a json of a random Person with a name and age.' }
]
(test_content, test_prompt_tokens, test_completion_tokens, test_model) = ('{ person: { name: "Ragnaros the Firelord", age: 9000 } }', 42, 69, 'Test model')
test_result = (test_content, test_prompt_tokens, test_completion_tokens, test_model)
with patch('flamethrower.models.llm.OpenAIClient.new_basic_chat_request',
return_value=test_result
) as mock_new_basic_chat_request, \
patch('flamethrower.models.llm.TokenCounter.add_input_tokens') as mock_add_input_tokens, \
patch('flamethrower.models.llm.TokenCounter.add_output_tokens') as mock_add_output_tokens:
llm = LLM(system_message=test_system_message)
result = llm.new_chat_request(test_messages)
assert result == test_content
mock_new_basic_chat_request.assert_called_once_with(test_messages)
mock_add_input_tokens.assert_called_once_with(test_prompt_tokens, test_model)
mock_add_output_tokens.assert_called_once_with(test_completion_tokens, test_model)
|
test_openai_client.py | 4,096 | flamethrower/src/flamethrower/models/tests/test_openai_client.py | import asyncio
from unittest.mock import AsyncMock, patch
from flamethrower.models.openai_client import OpenAIClient
from openai.types.completion_usage import CompletionUsage
from openai.types.chat.chat_completion import ChatCompletion, Choice as BasicChoice
from openai.types.chat.chat_completion_chunk import ChatCompletionChunk, Choice as ChunkChoice
def test_openai_client_init() -> None:
test_message = 'test_message'
test_model = 'test_model'
test_api_key = 'test_api_key'
with patch('flamethrower.utils.key_handler.get_api_key', return_value=test_api_key):
client = OpenAIClient(system_message=test_message, model=test_model)
assert client.system_message == test_message
assert client.model == test_model
assert client.client.api_key == test_api_key
def test_openai_client_new_basic_chat_request() -> None:
test_message, test_model = 'test_message', 'test_model'
test_prompt_tokens, test_completion_tokens = 42, 69
test_messages = [
{ 'role': 'system', 'content': 'You are OpenAI.' },
{ 'role': 'user', 'content': 'Say "This is a π₯ flamethrower test."' }
]
test_content = 'This is a π₯ flamethrower test.'
test_response = ChatCompletion(
id='chatcmpl-123',
object='chat.completion',
created=1677652288,
model='gpt-3.5-turbo-0613',
system_fingerprint='fp_fingerprint',
choices=[
BasicChoice(
index=0,
message={
'role': 'assistant',
'content': test_content
},
logprobs=None,
finish_reason='stop'
)
],
usage=CompletionUsage(
prompt_tokens=42,
completion_tokens=69,
total_tokens=111
)
)
with patch('flamethrower.models.openai_client.OpenAI') as mock_openai, \
patch('flamethrower.models.openai_client.OpenAIClient.get_token_usage',
return_value=(test_prompt_tokens, test_completion_tokens)
):
client = OpenAIClient(system_message=test_message, model=test_model)
model = mock_openai.return_value
model.chat.completions.create.return_value = test_response
response = client.new_basic_chat_request(test_messages)
assert response == (test_content, test_prompt_tokens, test_completion_tokens, test_model)
def test_openai_client_new_streaming_chat_request() -> None:
test_message, test_model = 'test_message', 'test_model'
test_messages = [
{ 'role': 'system', 'content': 'You are OpenAI.' },
{ 'role': 'user', 'content': 'Say "This is a π₯ flamethrower test."' }
]
test_contents = ['This', 'is', 'a', 'π₯', 'flamethrower', 'test.']
test_responses = [
ChatCompletionChunk(
id='chatcmpl-123',
object='chat.completion.chunk',
created=1677652288,
model='gpt-3.5-turbo-0613',
system_fingerprint='fp_fingerprint',
choices=[
ChunkChoice(
index=0,
delta={
'role': 'assistant',
'content': test_contents[0]
},
logprobs=None,
finish_reason=None
)
],
),
ChatCompletionChunk(
id='chatcmpl-123',
object='chat.completion.chunk',
created=1677652288,
model='gpt-3.5-turbo-0613',
system_fingerprint='fp_fingerprint',
choices=[
ChunkChoice(
index=0,
delta={
'role': 'assistant',
'content': test_contents[1]
},
logprobs=None,
finish_reason=None
)
],
),
ChatCompletionChunk(
id='chatcmpl-123',
object='chat.completion.chunk',
created=1677652288,
model='gpt-3.5-turbo-0613',
system_fingerprint='fp_fingerprint',
choices=[
ChunkChoice(
index=0,
delta={
'role': 'assistant',
'content': test_contents[2]
},
logprobs=None,
finish_reason=None
)
],
),
ChatCompletionChunk(
id='chatcmpl-123',
object='chat.completion.chunk',
created=1677652288,
model='gpt-3.5-turbo-0613',
system_fingerprint='fp_fingerprint',
choices=[
ChunkChoice(
index=0,
delta={
'role': 'assistant',
'content': test_contents[3]
},
logprobs=None,
finish_reason=None
)
],
),
ChatCompletionChunk(
id='chatcmpl-123',
object='chat.completion.chunk',
created=1677652288,
model='gpt-3.5-turbo-0613',
system_fingerprint='fp_fingerprint',
choices=[
ChunkChoice(
index=0,
delta={
'role': 'assistant',
'content': test_contents[4]
},
logprobs=None,
finish_reason=None
)
],
),
ChatCompletionChunk(
id='chatcmpl-123',
object='chat.completion.chunk',
created=1677652288,
model='gpt-3.5-turbo-0613',
system_fingerprint='fp_fingerprint',
choices=[
ChunkChoice(
index=0,
delta={
'role': 'assistant',
'content': test_contents[5]
},
logprobs=None,
finish_reason=None
)
],
),
ChatCompletionChunk(
id='chatcmpl-123',
object='chat.completion.chunk',
created=1677652288,
model='gpt-3.5-turbo-0613',
system_fingerprint='fp_fingerprint',
choices=[
ChunkChoice(
index=0,
delta={
'role': 'assistant',
'content': ''
},
logprobs=None,
finish_reason='stop'
)
],
),
]
with patch('flamethrower.models.openai_client.OpenAI') as mock_openai:
client = OpenAIClient(system_message=test_message, model=test_model)
model = mock_openai.return_value
model.chat.completions.create.return_value = test_responses
stream = client.new_streaming_chat_request(test_messages)
assert stream is not None
idx = 0
for chunk in stream:
assert chunk == test_contents[idx]
idx += 1
def test_openai_client_new_basic_async_chat_request() -> None:
test_message, test_model = 'test_message', 'test_model'
test_prompt_tokens, test_completion_tokens = 42, 69
test_messages = [
{ 'role': 'system', 'content': 'You are OpenAI.' },
{ 'role': 'user', 'content': 'Say "This is a π₯ flamethrower test."' }
]
test_content = 'This is a π₯ flamethrower test.'
test_response = ChatCompletion(
id='chatcmpl-123',
object='chat.completion',
created=1677652288,
model='gpt-3.5-turbo-0613',
system_fingerprint='fp_fingerprint',
choices=[
BasicChoice(
index=0,
message={
'role': 'assistant',
'content': test_content
},
logprobs=None,
finish_reason='stop'
)
],
usage=CompletionUsage(
prompt_tokens=42,
completion_tokens=69,
total_tokens=111
)
)
with patch('flamethrower.models.openai_client.AsyncOpenAI') as mock_openai, \
patch('flamethrower.models.openai_client.OpenAIClient.get_token_usage',
return_value=(test_prompt_tokens, test_completion_tokens)
):
client = OpenAIClient(system_message=test_message, model=test_model)
model = mock_openai.return_value
model.chat.completions.create = AsyncMock(return_value=test_response)
loop = asyncio.get_event_loop()
response = loop.run_until_complete(client.new_basic_async_chat_request(test_messages))
assert response == (test_content, test_prompt_tokens, test_completion_tokens, test_model)
def test_openai_client_new_json_request() -> None:
"""
Basically the same as `basic_chat_request`
"""
test_message, test_model = 'test_message', 'test_model'
test_prompt_tokens, test_completion_tokens = 42, 69
test_messages = [
{ 'role': 'system', 'content': 'You are OpenAI.' },
{ 'role': 'user', 'content': 'Return a json of a random Person with a name and age.' }
]
test_content = '{ person: { name: "Ragnaros the Firelord", age: 9000 } }'
test_response = ChatCompletion(
id='chatcmpl-123',
object='chat.completion',
created=1677652288,
model='gpt-3.5-turbo-0613',
system_fingerprint='fp_fingerprint',
choices=[
BasicChoice(
index=0,
message={
'role': 'assistant',
'content': test_content
},
logprobs=None,
finish_reason='stop'
)
],
usage=CompletionUsage(
prompt_tokens=42,
completion_tokens=69,
total_tokens=111
)
)
with patch('flamethrower.models.openai_client.OpenAI') as mock_openai, \
patch('flamethrower.models.openai_client.OpenAIClient.get_token_usage',
return_value=(test_prompt_tokens, test_completion_tokens)
):
client = OpenAIClient(system_message=test_message, model=test_model)
model = mock_openai.return_value
model.chat.completions.create.return_value = test_response
response = client.new_json_request(test_messages)
assert response == (test_content, test_prompt_tokens, test_completion_tokens, test_model)
|
handlers.py | 536 | flamethrower/src/flamethrower/exceptions/handlers.py | from flamethrower.utils.colors import *
def get_quota_exceeded_message() -> str:
return (
f'You might have {STDIN_RED.decode("utf-8")}exceeded your current quota for OpenAI{STDIN_DEFAULT.decode("utf-8")}.\n\n'
f'We are working hard to provide a {STDIN_ORANGE.decode("utf-8")}free, open source π₯ flamethrower server{STDIN_DEFAULT.decode("utf-8")} for your usage.\n\n'
f'Please check {STDIN_UNDERLINE.decode("utf-8")}https://github.com/scottsus/flamethrower{STDIN_DEFAULT.decode("utf-8")} for updates!'
)
|
exceptions.py | 107 | flamethrower/src/flamethrower/exceptions/exceptions.py | class QuotaExceededException(Exception):
"""Raised when the user has exceeded their quota."""
pass
|
lm_container.py | 249 | flamethrower/src/flamethrower/containers/lm_container.py | from dependency_injector import containers, providers
from flamethrower.utils.token_counter import TokenCounter
class LMContainer(containers.DeclarativeContainer):
token_counter = providers.Singleton(TokenCounter)
lm_container = LMContainer()
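# Note: providers.Singleton means every lm_container.token_counter() call returns
# the same TokenCounter instance, so token counts accumulate process-wide.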
|