seekerj committed on
Commit 4e8e742
1 Parent(s): e1b5033

update: Update the MetaGPT version

README.md CHANGED
@@ -7,3 +7,254 @@ sdk: docker
  app_file: app.py
  pinned: false
  ---
+
+ # MetaGPT: The Multi-Agent Framework
+
+ <p align="center">
+ <a href=""><img src="docs/resources/MetaGPT-logo.jpeg" alt="MetaGPT logo: Enable GPT to work in a software company, collaborating to tackle more complex tasks." width="150px"></a>
+ </p>
+
+ <p align="center">
+ <b>Assign different roles to GPTs to form a collaborative software entity for complex tasks.</b>
+ </p>
+
+ <p align="center">
+ <a href="docs/README_CN.md"><img src="https://img.shields.io/badge/文档-中文版-blue.svg" alt="CN doc"></a>
+ <a href="README.md"><img src="https://img.shields.io/badge/document-English-blue.svg" alt="EN doc"></a>
+ <a href="docs/README_JA.md"><img src="https://img.shields.io/badge/ドキュメント-日本語-blue.svg" alt="JA doc"></a>
+ <a href="https://discord.gg/wCp6Q3fsAk"><img src="https://dcbadge.vercel.app/api/server/wCp6Q3fsAk?compact=true&style=flat" alt="Discord Follow"></a>
+ <a href="https://opensource.org/licenses/MIT"><img src="https://img.shields.io/badge/License-MIT-yellow.svg" alt="License: MIT"></a>
+ <a href="docs/ROADMAP.md"><img src="https://img.shields.io/badge/ROADMAP-路线图-blue" alt="roadmap"></a>
+ <a href="docs/resources/MetaGPT-WeChat-Personal.jpeg"><img src="https://img.shields.io/badge/WeChat-微信-blue" alt="WeChat"></a>
+ <a href="https://twitter.com/DeepWisdom2019"><img src="https://img.shields.io/twitter/follow/MetaGPT?style=social" alt="Twitter Follow"></a>
+ </p>
+
+ <p align="center">
+ <a href="https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/geekan/MetaGPT"><img src="https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode" alt="Open in Dev Containers"></a>
+ <a href="https://codespaces.new/geekan/MetaGPT"><img src="https://img.shields.io/badge/Github_Codespace-Open-blue?logo=github" alt="Open in GitHub Codespaces"></a>
+ </p>
+
+ 1. MetaGPT takes a **one-line requirement** as input and outputs **user stories / competitive analysis / requirements / data structures / APIs / documents, etc.**
+ 2. Internally, MetaGPT includes **product managers / architects / project managers / engineers.** It provides the entire process of a **software company, along with carefully orchestrated SOPs.**
+    1. `Code = SOP(Team)` is the core philosophy. We materialize SOPs and apply them to teams composed of LLMs.
+
+ ![A software company consists of LLM-based roles](docs/resources/software_company_cd.jpeg)
+
+ <p align="center">Software Company Multi-Role Schematic (Gradually Implementing)</p>
+
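The `Code = SOP(Team)` idea above can be sketched in a few lines of plain Python: an SOP is just an ordered list of role handlers, and running the team pipes each role's artifact into the next. The role names mirror the README; the handler bodies are toy stand-ins for illustration, not MetaGPT's actual implementation.

```python
# Toy sketch of Code = SOP(Team): the SOP fixes the hand-off order between roles.
def product_manager(requirement: str) -> str:
    return f"PRD for: {requirement}"        # stand-in for a real PRD

def architect(prd: str) -> str:
    return f"Design based on ({prd})"       # stand-in for a real design doc

def engineer(design: str) -> str:
    return f"Code implementing ({design})"  # stand-in for generated code

def run_sop(requirement: str, team) -> str:
    artifact = requirement
    for role in team:                       # each role consumes the previous artifact
        artifact = role(artifact)
    return artifact

print(run_sop("Design a RecSys like Toutiao",
              [product_manager, architect, engineer]))
```

Swapping the list passed as `team` changes the SOP without touching any role, which is the point of materializing the procedure as data.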
+ ## Examples (fully generated by GPT-4)
+
+ For example, if you type `python startup.py "Design a RecSys like Toutiao"`, you will get many outputs, one of which is the data & API design:
+
+ ![Jinri Toutiao Recsys Data & API Design](docs/resources/workspace/content_rec_sys/resources/data_api_design.png)
+
+ It costs approximately **$0.2** (in GPT-4 API fees) to generate one example with analysis and design, and around **$2.0** for a full project.
+
+ ## Installation
+
+ ### Installation Video Guide
+
+ - [Matthew Berman: How To Install MetaGPT - Build A Startup With One Prompt!!](https://youtu.be/uT75J_KG_aY)
+
+ ### Traditional Installation
+
+ ```bash
+ # Step 1: Ensure that NPM is installed on your system. Then install mermaid-js.
+ npm --version
+ sudo npm install -g @mermaid-js/mermaid-cli
+
+ # Step 2: Ensure that Python 3.9+ is installed on your system. You can check this by using:
+ python --version
+
+ # Step 3: Clone the repository to your local machine, and install it.
+ git clone https://github.com/geekan/metagpt
+ cd metagpt
+ python setup.py install
+ ```
+
+ **Note:**
+
+ - If you already have Chrome, Chromium, or MS Edge installed, you can skip downloading Chromium by setting the environment variable `PUPPETEER_SKIP_CHROMIUM_DOWNLOAD` to `true`.
+
+ - Some people have [had issues](https://github.com/mermaidjs/mermaid.cli/issues/15) installing this tool globally. Installing it locally is an alternative solution:
+
+ ```bash
+ npm install @mermaid-js/mermaid-cli
+ ```
+
+ - Don't forget to add the configuration for `mmdc` to `config.yml`:
+
+ ```yml
+ PUPPETEER_CONFIG: "./config/puppeteer-config.json"
+ MMDC: "./node_modules/.bin/mmdc"
+ ```
+
+ - If `python setup.py install` fails with the error `[Errno 13] Permission denied: '/usr/local/lib/python3.11/dist-packages/test-easy-install-13129.write-test'`, try running `python setup.py install --user` instead.
+
+ ### Installation by Docker
+
+ ```bash
+ # Step 1: Download the official metagpt image and prepare config.yaml
+ docker pull metagpt/metagpt:v0.3.1
+ mkdir -p /opt/metagpt/{config,workspace}
+ docker run --rm metagpt/metagpt:v0.3.1 cat /app/metagpt/config/config.yaml > /opt/metagpt/config/key.yaml
+ vim /opt/metagpt/config/key.yaml # Change the config
+
+ # Step 2: Run the metagpt demo with a container
+ docker run --rm \
+     --privileged \
+     -v /opt/metagpt/config/key.yaml:/app/metagpt/config/key.yaml \
+     -v /opt/metagpt/workspace:/app/metagpt/workspace \
+     metagpt/metagpt:v0.3.1 \
+     python startup.py "Write a cli snake game"
+
+ # You can also start a container and execute commands in it
+ docker run --name metagpt -d \
+     --privileged \
+     -v /opt/metagpt/config/key.yaml:/app/metagpt/config/key.yaml \
+     -v /opt/metagpt/workspace:/app/metagpt/workspace \
+     metagpt/metagpt:v0.3.1
+
+ docker exec -it metagpt /bin/bash
+ $ python startup.py "Write a cli snake game"
+ ```
+
+ The command `docker run ...` does the following:
+
+ - Runs in privileged mode to have permission to run the browser
+ - Maps the host directory `/opt/metagpt/config` to the container directory `/app/metagpt/config`
+ - Maps the host directory `/opt/metagpt/workspace` to the container directory `/app/metagpt/workspace`
+ - Executes the demo command `python startup.py "Write a cli snake game"`
+
+ ### Build the image yourself
+
+ ```bash
+ # You can also build the metagpt image yourself.
+ git clone https://github.com/geekan/MetaGPT.git
+ cd MetaGPT && docker build -t metagpt:custom .
+ ```
+
+ ## Configuration
+
+ - Configure your `OPENAI_API_KEY` in any of `config/key.yaml`, `config/config.yaml`, or your environment variables.
+ - Priority order: `config/key.yaml > config/config.yaml > env`
+
+ ```bash
+ # Copy the configuration file and make the necessary modifications.
+ cp config/config.yaml config/key.yaml
+ ```
+
+ | Variable Name                              | config/key.yaml                           | env                                             |
+ | ------------------------------------------ | ----------------------------------------- | ----------------------------------------------- |
+ | OPENAI_API_KEY # Replace with your own key | OPENAI_API_KEY: "sk-..."                  | export OPENAI_API_KEY="sk-..."                  |
+ | OPENAI_API_BASE # Optional                 | OPENAI_API_BASE: "https://<YOUR_SITE>/v1" | export OPENAI_API_BASE="https://<YOUR_SITE>/v1" |
+
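The priority order above can be expressed as a small lookup sketch. The helper name `get_setting` and the sample values are hypothetical; only the `key.yaml > config.yaml > env` precedence comes from the README.

```python
import os

# Hypothetical helper illustrating the documented precedence:
# config/key.yaml > config/config.yaml > environment variables.
def get_setting(name, key_yaml, config_yaml):
    for source in (key_yaml, config_yaml, os.environ):
        value = source.get(name)
        if value:                       # first non-empty source wins
            return value
    return None

key_yaml = {}                           # nothing set in config/key.yaml
config_yaml = {"OPENAI_API_KEY": "sk-from-config"}
os.environ["OPENAI_API_KEY"] = "sk-from-env"

# config.yaml wins here because key.yaml has no value and env is checked last
print(get_setting("OPENAI_API_KEY", key_yaml, config_yaml))
```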
+ ## Tutorial: Initiating a startup
+
+ ```shell
+ # Run the script
+ python startup.py "Write a cli snake game"
+ # Do not hire an engineer to implement the project
+ python startup.py "Write a cli snake game" --implement False
+ # Hire an engineer and perform code reviews
+ python startup.py "Write a cli snake game" --code_review True
+ ```
+
+ After running the script, you can find your new project in the `workspace/` directory.
+
+ ### Preference of Platform or Tool
+
+ You can specify which platform or tool you want to use when stating your requirements.
+
+ ```shell
+ python startup.py "Write a cli snake game based on pygame"
+ ```
+
+ ### Usage
+
+ ```
+ NAME
+     startup.py - We are a software startup comprised of AI. By investing in us, you are empowering a future filled with limitless possibilities.
+
+ SYNOPSIS
+     startup.py IDEA <flags>
+
+ DESCRIPTION
+     We are a software startup comprised of AI. By investing in us, you are empowering a future filled with limitless possibilities.
+
+ POSITIONAL ARGUMENTS
+     IDEA
+         Type: str
+         Your innovative idea, such as "Creating a snake game."
+
+ FLAGS
+     --investment=INVESTMENT
+         Type: float
+         Default: 3.0
+         As an investor, you have the opportunity to contribute a certain dollar amount to this AI company.
+     --n_round=N_ROUND
+         Type: int
+         Default: 5
+
+ NOTES
+     You can also use flags syntax for POSITIONAL ARGUMENTS
+ ```
+
+ ### Code walkthrough
+
+ ```python
+ from metagpt.software_company import SoftwareCompany
+ from metagpt.roles import ProjectManager, ProductManager, Architect, Engineer
+
+ async def startup(idea: str, investment: float = 3.0, n_round: int = 5):
+     """Run a startup. Be a boss."""
+     company = SoftwareCompany()
+     company.hire([ProductManager(), Architect(), ProjectManager(), Engineer()])
+     company.invest(investment)
+     company.start_project(idea)
+     await company.run(n_round=n_round)
+ ```
+
+ See `examples` for more details on single-role (with knowledge base) and LLM-only examples.
+
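To see the hire → invest → start_project → run call order of the walkthrough without installing anything, it can be mimicked with stdlib-only stubs. Every class below is a hypothetical stand-in for the real `metagpt` classes, not their implementation.

```python
import asyncio

# Stubs mirroring the walkthrough's call order; the real classes live in metagpt.
class Role:
    def __init__(self, name: str):
        self.name = name

class SoftwareCompany:
    def __init__(self):
        self.roles, self.budget, self.idea = [], 0.0, None
    def hire(self, roles):
        self.roles.extend(roles)
    def invest(self, investment: float):
        self.budget = investment
    def start_project(self, idea: str):
        self.idea = idea
    async def run(self, n_round: int = 5) -> str:
        # Summarize what the team would work on instead of calling an LLM.
        return f"{len(self.roles)} roles, ${self.budget}, {n_round} rounds: {self.idea}"

async def startup(idea: str, investment: float = 3.0, n_round: int = 5) -> str:
    company = SoftwareCompany()
    company.hire([Role("ProductManager"), Role("Architect"),
                  Role("ProjectManager"), Role("Engineer")])
    company.invest(investment)
    company.start_project(idea)
    return await company.run(n_round=n_round)

print(asyncio.run(startup("Write a cli snake game")))
```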
+ ## QuickStart
+
+ Installing and configuring a local environment can be difficult for some users. The following tutorial lets you quickly experience the charm of MetaGPT.
+
+ - [MetaGPT quickstart](https://deepwisdom.feishu.cn/wiki/CyY9wdJc4iNqArku3Lncl4v8n2b)
+
+ ## Citation
+
+ For now, cite the [arXiv paper](https://arxiv.org/abs/2308.00352):
+
+ ```bibtex
+ @misc{hong2023metagpt,
+       title={MetaGPT: Meta Programming for Multi-Agent Collaborative Framework},
+       author={Sirui Hong and Xiawu Zheng and Jonathan Chen and Yuheng Cheng and Jinlin Wang and Ceyao Zhang and Zili Wang and Steven Ka Shing Yau and Zijuan Lin and Liyang Zhou and Chenyu Ran and Lingfeng Xiao and Chenglin Wu},
+       year={2023},
+       eprint={2308.00352},
+       archivePrefix={arXiv},
+       primaryClass={cs.AI}
+ }
+ ```
+
+ ## Contact Information
+
+ If you have any questions or feedback about this project, please feel free to contact us. We highly appreciate your suggestions!
+
+ - **Email:** alexanderwu@fuzhi.ai
+ - **GitHub Issues:** For more technical inquiries, you can also create a new issue in our [GitHub repository](https://github.com/geekan/metagpt/issues).
+
+ We will respond to all questions within 2-3 business days.
+
+ ## Demo
+
+ https://github.com/geekan/MetaGPT/assets/2707039/5e8c1062-8c35-440f-bb20-2b0320f8d27d
+
+ ## Join us
+
+ 📢 Join our Discord channel: https://discord.gg/ZRHeExS6xv
+
+ Looking forward to seeing you there! 🎉
metagpt/.gitattributes DELETED
@@ -1 +0,0 @@
- *.mp4 filter=lfs diff=lfs merge=lfs -text
metagpt/actions/talk_action.py CHANGED
@@ -45,6 +45,20 @@ class TalkAction(Action):
         )
         return prompt
 
+     @property
+     def formation_prompt(self):
+         kvs = {
+             "{role}": CONFIG.agent_description or "",
+             "{history}": self._history_summary or "",
+             "{knowledge}": self._knowledge or "",
+             "{language}": CONFIG.language or DEFAULT_LANGUAGE,
+             "{ask}": self._talk,
+         }
+         prompt = TalkAction.__FORMATION_LOOSE__
+         for k, v in kvs.items():
+             prompt = prompt.replace(k, v)
+         return prompt
+
     async def run(self, *args, **kwargs) -> ActionOutput:
         prompt = self.prompt
         logger.info(prompt)
@@ -52,3 +66,60 @@
         logger.info(rsp)
         self._rsp = ActionOutput(content=rsp)
         return self._rsp
+
+     __FORMATION__ = """Formation: "Capacity and role" defines the role you are currently playing;
+ "[HISTORY_BEGIN]" and "[HISTORY_END]" tags enclose the historical conversation;
+ "[KNOWLEDGE_BEGIN]" and "[KNOWLEDGE_END]" tags enclose knowledge that may help with your responses;
+ "Statement" defines the work detail you need to complete at this stage;
+ "[ASK_BEGIN]" and "[ASK_END]" tags enclose the requirements for you to respond to;
+ "Constraint" defines the conditions that your responses must comply with.
+
+ Capacity and role: {role}
+ Statement: Your responses should align with the role-play agreement, maintaining the
+ character's persona and habits. When faced with unrelated questions, playfully decline to answer without revealing
+ your AI nature to preserve the character's image.
+
+ [HISTORY_BEGIN]
+ {history}
+ [HISTORY_END]
+
+ [KNOWLEDGE_BEGIN]
+ {knowledge}
+ [KNOWLEDGE_END]
+
+ Statement: If the information is insufficient, you can search the historical conversation or the knowledge above.
+ Statement: Answer the following questions in {language}, and the answers must follow Markdown format,
+ excluding any tags like "[HISTORY_BEGIN]", "[HISTORY_END]", "[KNOWLEDGE_BEGIN]", "[KNOWLEDGE_END]", "[ASK_BEGIN]",
+ "[ASK_END]".
+
+ [ASK_BEGIN]
+ {ask}
+ [ASK_END]"""
+
+     __FORMATION_LOOSE__ = """Formation: "Capacity and role" defines the role you are currently playing;
+ "[HISTORY_BEGIN]" and "[HISTORY_END]" tags enclose the historical conversation;
+ "[KNOWLEDGE_BEGIN]" and "[KNOWLEDGE_END]" tags enclose knowledge that may help with your responses;
+ "Statement" defines the work detail you need to complete at this stage;
+ "[ASK_BEGIN]" and "[ASK_END]" tags enclose the requirements for you to respond to;
+ "Constraint" defines the conditions that your responses must comply with.
+
+ Capacity and role: {role}
+ Statement: Your responses should maintain the character's persona and habits. When faced with unrelated questions,
+ playfully decline to answer without revealing your AI nature to preserve the character's image.
+
+ [HISTORY_BEGIN]
+ {history}
+ [HISTORY_END]
+
+ [KNOWLEDGE_BEGIN]
+ {knowledge}
+ [KNOWLEDGE_END]
+
+ Statement: If the information is insufficient, you can search the historical conversation or the knowledge above.
+ Statement: Answer the following questions in {language}, and the answers must follow Markdown format,
+ excluding any tags like "[HISTORY_BEGIN]", "[HISTORY_END]", "[KNOWLEDGE_BEGIN]", "[KNOWLEDGE_END]", "[ASK_BEGIN]",
+ "[ASK_END]".
+
+ [ASK_BEGIN]
+ {ask}
+ [ASK_END]"""
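The `formation_prompt` property added above fills its template by literal `str.replace` over `{placeholder}` keys, which leaves any unmatched markers untouched. A standalone sketch of that loop, using an abbreviated hypothetical template (the full templates are the `__FORMATION__` strings in `TalkAction`):

```python
# Abbreviated, made-up template; filling is plain str.replace over "{key}" markers.
template = ("Capacity and role: {role}\n"
            "[ASK_BEGIN]\n"
            "{ask}\n"
            "[ASK_END]")

kvs = {"{role}": "history teacher", "{ask}": "Who built the Forbidden City?"}
prompt = template
for k, v in kvs.items():
    prompt = prompt.replace(k, v)

print(prompt)
```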
metagpt/document_store/lancedb_store.py ADDED
@@ -0,0 +1,90 @@
+ #!/usr/bin/env python
+ # -*- coding: utf-8 -*-
+ """
+ @Time    : 2023/8/9 15:42
+ @Author  : unkn-wn (Leon Yee)
+ @File    : lancedb_store.py
+ """
+ import os
+ import shutil
+
+ import lancedb
+
+
+ class LanceStore:
+     def __init__(self, name):
+         db = lancedb.connect('./data/lancedb')
+         self.db = db
+         self.name = name
+         self.table = None
+
+     def search(self, query, n_results=2, metric="L2", nprobes=20, **kwargs):
+         # This assumes query is a vector embedding.
+         # kwargs can be used for optional filtering:
+         # .select  - only searches the specified columns
+         # .where   - SQL-syntax filtering for metadata (e.g. where("price > 100"))
+         # .metric  - specifies the distance metric to use
+         # .nprobes - higher values yield better recall (more likely to find vectors if they exist) at the expense of latency
+         if self.table is None:
+             raise Exception("Table not created yet, please add data first.")
+
+         results = (
+             self.table
+             .search(query)
+             .limit(n_results)
+             .select(kwargs.get('select'))
+             .where(kwargs.get('where'))
+             .metric(metric)
+             .nprobes(nprobes)
+             .to_df()
+         )
+         return results
+
+     def persist(self):
+         raise NotImplementedError
+
+     def write(self, data, metadatas, ids):
+         # This function is similar to add(), but for more generalized updates.
+         # "data" is the list of embeddings.
+         # Inserts into the table by expanding metadatas into rows: [{'vector', 'id', 'meta', 'meta2'}, ...]
+         documents = []
+         for i in range(len(data)):
+             row = {
+                 'vector': data[i],
+                 'id': ids[i]
+             }
+             row.update(metadatas[i])
+             documents.append(row)
+
+         if self.table is not None:
+             self.table.add(documents)
+         else:
+             self.table = self.db.create_table(self.name, documents)
+
+     def add(self, data, metadata, _id):
+         # This function adds an individual document.
+         # It assumes you're passing in a single vector embedding, metadata, and id.
+         row = {
+             'vector': data,
+             'id': _id
+         }
+         row.update(metadata)
+
+         if self.table is not None:
+             self.table.add([row])
+         else:
+             self.table = self.db.create_table(self.name, [row])
+
+     def delete(self, _id):
+         # This function deletes a row by id.
+         # LanceDB delete uses SQL syntax, so you can use "in" or "=".
+         if self.table is None:
+             raise Exception("Table not created yet, please add data first.")
+
+         if isinstance(_id, str):
+             return self.table.delete(f"id = '{_id}'")
+         else:
+             return self.table.delete(f"id = {_id}")
+
+     def drop(self, name):
+         # This function drops a table, if it exists.
+         path = os.path.join(self.db.uri, name + '.lance')
+         if os.path.exists(path):
+             shutil.rmtree(path)
metagpt/management/skill_manager.py CHANGED
@@ -15,7 +15,9 @@ Skill = Action
 
 
 class SkillManager:
-     """用来管理所有技能"""
+     """用来管理所有技能
+     to manage all skills
+     """
 
     def __init__(self):
         self._store = ChromaStore('skill_manager')
@@ -24,7 +26,8 @@ class SkillManager:
     def add_skill(self, skill: Skill):
         """
         增加技能,将技能加入到技能池与可检索的存储中
-         :param skill: 技能
+         Add a skill, inserting it into the skill pool and the retrievable store
+         :param skill: 技能 Skill
         :return:
         """
         self._skills[skill.name] = skill
@@ -33,7 +36,8 @@ class SkillManager:
     def del_skill(self, skill_name: str):
         """
         删除技能,将技能从技能池与可检索的存储中移除
-         :param skill_name: 技能名
+         Delete a skill, removing it from the skill pool and the retrievable store
+         :param skill_name: 技能名 skill name
         :return:
         """
         self._skills.pop(skill_name)
@@ -42,30 +46,31 @@ class SkillManager:
     def get_skill(self, skill_name: str) -> Skill:
         """
         通过技能名获得精确的技能
-         :param skill_name: 技能名
-         :return: 技能
+         Get the exact skill by skill name
+         :param skill_name: 技能名 skill name
+         :return: 技能 Skill
         """
         return self._skills.get(skill_name)
 
     def retrieve_skill(self, desc: str, n_results: int = 2) -> list[Skill]:
         """
-         通过检索引擎获得技能
-         :param desc: 技能描述
-         :return: 技能(多个)
+         通过检索引擎获得技能 Retrieve skills through the search engine
+         :param desc: 技能描述 skill description
+         :return: 技能(多个)skill(s)
         """
         return self._store.search(desc, n_results=n_results)['ids'][0]
 
     def retrieve_skill_scored(self, desc: str, n_results: int = 2) -> dict:
         """
-         通过检索引擎获得技能
-         :param desc: 技能描述
-         :return: 技能与分数组成的字典
+         通过检索引擎获得技能 Retrieve skills through the search engine
+         :param desc: 技能描述 skill description
+         :return: 技能与分数组成的字典 A dictionary of skills and scores
         """
         return self._store.search(desc, n_results=n_results)
 
     def generate_skill_desc(self, skill: Skill) -> str:
         """
-         为每个技能生成对应的描述性文本
+         为每个技能生成对应的描述性文本 Generate corresponding descriptive text for each skill
         :param skill:
         :return:
         """
metagpt/provider/base_gpt_api.py CHANGED
@@ -38,13 +38,13 @@ class BaseGPTAPI(BaseChatbot):
         rsp = self.completion(message)
         return self.get_choice_text(rsp)
 
-     async def aask(self, msg: str, system_msgs: Optional[list[str]] = None, generator: bool = False) -> str:
+     async def aask(self, msg: str, system_msgs: Optional[list[str]] = None) -> str:
         if system_msgs:
             message = self._system_msgs(system_msgs) + [self._user_msg(msg)]
         else:
             message = [self._default_system_msg(), self._user_msg(msg)]
         try:
-             rsp = await self.acompletion_text(message, stream=True, generator=generator)
+             rsp = await self.acompletion_text(message, stream=True)
         except Exception as e:
             logger.exception(f"{e}")
             logger.info(f"ask:{msg}, error:{e}")
metagpt/provider/openai_api.py CHANGED
@@ -87,11 +87,22 @@ class OpenAIGPTAPI(BaseGPTAPI, RateLimiter):
         response = await self.async_retry_call(
             openai.ChatCompletion.acreate, **self._cons_kwargs(messages), stream=True
         )
+         # create variables to collect the stream of chunks
+         collected_chunks = []
+         collected_messages = []
         # iterate through the stream of events
         async for chunk in response:
+             collected_chunks.append(chunk)  # save the event response
             chunk_message = chunk["choices"][0]["delta"]  # extract the message
+             collected_messages.append(chunk_message)  # save the message
             if "content" in chunk_message:
-                 yield chunk_message["content"]
+                 print(chunk_message["content"], end="")
+         print()
+
+         full_reply_content = "".join([m.get("content", "") for m in collected_messages])
+         usage = self._calc_usage(messages, full_reply_content)
+         self._update_costs(usage)
+         return full_reply_content
 
     def _cons_kwargs(self, messages: list[dict]) -> dict:
         if CONFIG.openai_api_type == "azure":
@@ -146,23 +157,10 @@ class OpenAIGPTAPI(BaseGPTAPI, RateLimiter):
         retry=retry_if_exception_type(APIConnectionError),
         retry_error_callback=log_and_reraise,
     )
-     async def acompletion_text(self, messages: list[dict], stream=False, generator: bool = False) -> str:
+     async def acompletion_text(self, messages: list[dict], stream=False) -> str:
         """when streaming, print each token in place."""
         if stream:
-             resp = self._achat_completion_stream(messages)
-             if generator:
-                 return resp
-
-             collected_messages = []
-             async for i in resp:
-                 print(i, end="")
-                 collected_messages.append(i)
-
-             full_reply_content = "".join(collected_messages)
-             usage = self._calc_usage(messages, full_reply_content)
-             self._update_costs(usage)
-             return full_reply_content
-
+             return await self._achat_completion_stream(messages)
         rsp = await self._achat_completion(messages)
         return self.get_choice_text(rsp)
 
@@ -228,13 +226,13 @@ class OpenAIGPTAPI(BaseGPTAPI, RateLimiter):
         max_count = 100
         while max_count > 0:
             if len(text) < max_token_count:
-                 return await self._get_summary(text=text, max_words=max_words, keep_language=keep_language)
+                 return await self._get_summary(text=text, max_words=max_words,keep_language=keep_language)
 
             padding_size = 20 if max_token_count > 20 else 0
             text_windows = self.split_texts(text, window_size=max_token_count - padding_size)
             summaries = []
             for ws in text_windows:
-                 response = await self._get_summary(text=ws, max_words=max_words, keep_language=keep_language)
+                 response = await self._get_summary(text=ws, max_words=max_words,keep_language=keep_language)
                 summaries.append(response)
             if len(summaries) == 1:
                 return summaries[0]
startup.py CHANGED
@@ -4,38 +4,64 @@ import asyncio
 import platform
 import fire
 
- from metagpt.roles import Architect, Engineer, ProductManager, ProjectManager, QaEngineer
+ from metagpt.roles import Architect, Engineer, ProductManager
+ from metagpt.roles import ProjectManager, QaEngineer
 from metagpt.software_company import SoftwareCompany
 
 
- async def startup(idea: str, investment: float = 3.0, n_round: int = 5,
-                   code_review: bool = False, run_tests: bool = False):
+ async def startup(
+     idea: str,
+     investment: float = 3.0,
+     n_round: int = 5,
+     code_review: bool = False,
+     run_tests: bool = False,
+     implement: bool = True
+ ):
     """Run a startup. Be a boss."""
     company = SoftwareCompany()
-     company.hire([ProductManager(),
-                   Architect(),
-                   ProjectManager(),
-                   Engineer(n_borg=5, use_code_review=code_review)])
+     company.hire([
+         ProductManager(),
+         Architect(),
+         ProjectManager(),
+     ])
+
+     # if implement or code_review
+     if implement or code_review:
+         # developing features: implement the idea
+         company.hire([Engineer(n_borg=5, use_code_review=code_review)])
+
     if run_tests:
-         # developing features: run tests on the spot and identify bugs (bug fixing capability comes soon!)
+         # developing features: run tests on the spot and identify bugs
+         # (bug fixing capability comes soon!)
         company.hire([QaEngineer()])
+
     company.invest(investment)
     company.start_project(idea)
     await company.run(n_round=n_round)
 
 
- def main(idea: str, investment: float = 3.0, n_round: int = 5, code_review: bool = False, run_tests: bool = False):
+ def main(
+     idea: str,
+     investment: float = 3.0,
+     n_round: int = 5,
+     code_review: bool = False,
+     run_tests: bool = False,
+     implement: bool = False
+ ):
     """
-     We are a software startup comprised of AI. By investing in us, you are empowering a future filled with limitless possibilities.
+     We are a software startup comprised of AI. By investing in us,
+     you are empowering a future filled with limitless possibilities.
     :param idea: Your innovative idea, such as "Creating a snake game."
-     :param investment: As an investor, you have the opportunity to contribute a certain dollar amount to this AI company.
+     :param investment: As an investor, you have the opportunity to contribute
+         a certain dollar amount to this AI company.
     :param n_round:
     :param code_review: Whether to use code review.
     :return:
     """
     if platform.system() == "Windows":
         asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
-     asyncio.run(startup(idea, investment, n_round, code_review, run_tests))
+     asyncio.run(startup(idea, investment, n_round,
+                         code_review, run_tests, implement))
 
 
 if __name__ == '__main__':
tests/metagpt/document_store/test_lancedb_store.py ADDED
@@ -0,0 +1,31 @@
+ #!/usr/bin/env python
+ # -*- coding: utf-8 -*-
+ """
+ @Time    : 2023/8/9 15:42
+ @Author  : unkn-wn (Leon Yee)
+ @File    : test_lancedb_store.py
+ """
+ import random
+
+ from metagpt.document_store.lancedb_store import LanceStore
+
+
+ def test_lance_store():
+     # This simply establishes the connection to the database, so we can drop the table if it exists
+     store = LanceStore('test')
+
+     store.drop('test')
+
+     store.write(data=[[random.random() for _ in range(100)] for _ in range(2)],
+                 metadatas=[{"source": "google-docs"}, {"source": "notion"}],
+                 ids=["doc1", "doc2"])
+
+     store.add(data=[random.random() for _ in range(100)], metadata={"source": "notion"}, _id="doc3")
+
+     result = store.search([random.random() for _ in range(100)], n_results=3)
+     assert len(result) == 3
+
+     store.delete("doc2")
+     result = store.search([random.random() for _ in range(100)], n_results=3, where="source = 'notion'", metric='cosine')
+     assert len(result) == 1