Zack Zitting Bradshaw committed on
Commit
4962437
1 Parent(s): 4454baa

Upload folder using huggingface_hub

Browse files
This view is limited to 50 files because it contains too many changes. See raw diff
Files changed (50) hide show
  1. .DS_Store +0 -0
  2. .env.example +33 -0
  3. .gitattributes +5 -0
  4. .github/FUNDING.yml +13 -0
  5. .github/ISSUE_TEMPLATE/bug_report.md +27 -0
  6. .github/ISSUE_TEMPLATE/feature_request.md +20 -0
  7. .github/PULL_REQUEST_TEMPLATE.yml +25 -0
  8. .github/dependabot.yml +14 -0
  9. .github/workflows/label.yml +22 -0
  10. .github/workflows/pylint.yml +23 -0
  11. .github/workflows/python-publish.yml +32 -0
  12. .github/workflows/quality.yml +23 -0
  13. .github/workflows/ruff.yml +8 -0
  14. .github/workflows/run_test.yml +23 -0
  15. .github/workflows/stale.yml +27 -0
  16. .github/workflows/test.yml +49 -0
  17. .github/workflows/unit-test.yml +45 -0
  18. .github/workflows/update_space.yml +28 -0
  19. .gitignore +182 -0
  20. .readthedocs.yml +13 -0
  21. =0.3.0 +0 -0
  22. =3.38.0 +64 -0
  23. CONTRIBUTING.md +248 -0
  24. Dockerfile +48 -0
  25. LICENSE +201 -0
  26. README.md +306 -8
  27. api/__init__.py +0 -0
  28. api/app.py +47 -0
  29. api/olds/container.py +62 -0
  30. api/olds/main.py +130 -0
  31. api/olds/worker.py +44 -0
  32. apps/discord.py +38 -0
  33. docs/applications/customer_support.md +42 -0
  34. docs/applications/enterprise.md +0 -0
  35. docs/applications/marketing_agencies.md +64 -0
  36. docs/architecture.md +358 -0
  37. docs/assets/css/extra.css +7 -0
  38. docs/assets/img/SwarmsLogoIcon.png +0 -0
  39. docs/assets/img/swarmsbanner.png +0 -0
  40. docs/assets/img/tools/output.png +0 -0
  41. docs/assets/img/tools/poetry_setup.png +0 -0
  42. docs/assets/img/tools/toml.png +0 -0
  43. docs/bounties.md +86 -0
  44. docs/checklist.md +122 -0
  45. docs/contributing.md +123 -0
  46. docs/demos.md +9 -0
  47. docs/design.md +152 -0
  48. docs/distribution.md +469 -0
  49. docs/examples/count-tokens.md +29 -0
  50. docs/examples/index.md +3 -0
.DS_Store ADDED
Binary file (8.2 kB). View file
 
.env.example ADDED
@@ -0,0 +1,33 @@
+ OPENAI_API_KEY="your_openai_api_key_here"
+ GOOGLE_API_KEY=""
+ ANTHROPIC_API_KEY=""
+
+ WOLFRAM_ALPHA_APPID="your_wolfram_alpha_appid_here"
+ ZAPIER_NLA_API_KEY="your_zapier_nla_api_key_here"
+
+ EVAL_PORT=8000
+ MODEL_NAME="gpt-4"
+ CELERY_BROKER_URL="redis://localhost:6379"
+
+ SERVER="http://localhost:8000"
+ USE_GPU=True
+ PLAYGROUND_DIR="playground"
+
+ LOG_LEVEL="INFO"
+ BOT_NAME="Orca"
+
+ WINEDB_HOST="your_winedb_host_here"
+ WINEDB_PASSWORD="your_winedb_password_here"
+ BING_SEARCH_URL="your_bing_search_url_here"
+
+ BING_SUBSCRIPTION_KEY="your_bing_subscription_key_here"
+ SERPAPI_API_KEY="your_serpapi_api_key_here"
+ IFTTTKey="your_iftttkey_here"
+
+ BRAVE_API_KEY="your_brave_api_key_here"
+ SPOONACULAR_KEY="your_spoonacular_key_here"
+ HF_API_KEY="your_huggingface_api_key_here"
+
+
+ REDIS_HOST=
+ REDIS_PORT=
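The variables above are normally loaded into the process environment at startup (for example with `python-dotenv`). As a minimal sketch — not part of this diff — a `.env`-style block can be parsed with the standard library alone:

```python
def parse_env(text: str) -> dict:
    """Parse KEY=VALUE lines, skipping blanks and # comments, stripping surrounding quotes."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

sample = 'MODEL_NAME="gpt-4"\nEVAL_PORT=8000\n\nREDIS_HOST='
print(parse_env(sample))
```

Note that a key with no value (like `REDIS_HOST=`) parses to an empty string, which callers should treat as "unset".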
.gitattributes CHANGED
@@ -33,3 +33,8 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ images/Agora-Banner-blend.png filter=lfs diff=lfs merge=lfs -text
+ images/swarms_demo.mp4 filter=lfs diff=lfs merge=lfs -text
+ swarms/agents/models/segment_anything/assets/masks1.png filter=lfs diff=lfs merge=lfs -text
+ swarms/agents/models/segment_anything/assets/minidemo.gif filter=lfs diff=lfs merge=lfs -text
+ swarms/agents/models/segment_anything/assets/notebook2.png filter=lfs diff=lfs merge=lfs -text
.github/FUNDING.yml ADDED
@@ -0,0 +1,13 @@
+ # These are supported funding model platforms
+
+ github: [kyegomez]
+ patreon: # Replace with a single Patreon username
+ open_collective: # Replace with a single Open Collective username
+ ko_fi: # Replace with a single Ko-fi username
+ tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
+ community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
+ liberapay: # Replace with a single Liberapay username
+ issuehunt: # Replace with a single IssueHunt username
+ otechie: # Replace with a single Otechie username
+ lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
+ custom: # Nothing
.github/ISSUE_TEMPLATE/bug_report.md ADDED
@@ -0,0 +1,27 @@
+ ---
+ name: Bug report
+ about: Create a report to help us improve
+ title: "[BUG] "
+ labels: bug
+ assignees: kyegomez
+
+ ---
+
+ **Describe the bug**
+ A clear and concise description of what the bug is.
+
+ **To Reproduce**
+ Steps to reproduce the behavior:
+ 1. Go to '...'
+ 2. Click on '....'
+ 3. Scroll down to '....'
+ 4. See error
+
+ **Expected behavior**
+ A clear and concise description of what you expected to happen.
+
+ **Screenshots**
+ If applicable, add screenshots to help explain your problem.
+
+ **Additional context**
+ Add any other context about the problem here.
.github/ISSUE_TEMPLATE/feature_request.md ADDED
@@ -0,0 +1,20 @@
+ ---
+ name: Feature request
+ about: Suggest an idea for this project
+ title: ''
+ labels: ''
+ assignees: 'kyegomez'
+
+ ---
+
+ **Is your feature request related to a problem? Please describe.**
+ A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
+
+ **Describe the solution you'd like**
+ A clear and concise description of what you want to happen.
+
+ **Describe alternatives you've considered**
+ A clear and concise description of any alternative solutions or features you've considered.
+
+ **Additional context**
+ Add any other context or screenshots about the feature request here.
.github/PULL_REQUEST_TEMPLATE.yml ADDED
@@ -0,0 +1,25 @@
+ <!-- Thank you for contributing to Swarms!
+
+ Replace this comment with:
+ - Description: a description of the change,
+ - Issue: the issue # it fixes (if applicable),
+ - Dependencies: any dependencies required for this change,
+ - Tag maintainer: for a quicker response, tag the relevant maintainer (see below),
+ - Twitter handle: we announce bigger features on Twitter. If your PR gets announced and you'd like a mention, we'll gladly shout you out!
+
+ If you're adding a new integration, please include:
+ 1. a test for the integration, preferably unit tests that do not rely on network access,
+ 2. an example notebook showing its use.
+
+ Maintainer responsibilities:
+ - General / Misc / if you don't know who to tag: kye@apac.ai
+ - DataLoaders / VectorStores / Retrievers: kye@apac.ai
+ - Models / Prompts: kye@apac.ai
+ - Memory: kye@apac.ai
+ - Agents / Tools / Toolkits: kye@apac.ai
+ - Tracing / Callbacks: kye@apac.ai
+ - Async: kye@apac.ai
+
+ If no one reviews your PR within a few days, feel free to email kye@apac.ai
+
+ See contribution guidelines for more information on how to write/run tests, lint, etc: https://github.com/kyegomez/swarms
.github/dependabot.yml ADDED
@@ -0,0 +1,14 @@
+ # https://docs.github.com/en/code-security/supply-chain-security/keeping-your-dependencies-updated-automatically/configuration-options-for-dependency-updates
+
+ version: 2
+ updates:
+   - package-ecosystem: "github-actions"
+     directory: "/"
+     schedule:
+       interval: "weekly"
+
+   - package-ecosystem: "pip"
+     directory: "/"
+     schedule:
+       interval: "weekly"
+
.github/workflows/label.yml ADDED
@@ -0,0 +1,22 @@
+ # This workflow will triage pull requests and apply a label based on the
+ # paths that are modified in the pull request.
+ #
+ # To use this workflow, you will need to set up a .github/labeler.yml
+ # file with configuration. For more information, see:
+ # https://github.com/actions/labeler
+
+ name: Labeler
+ on: [pull_request_target]
+
+ jobs:
+   label:
+
+     runs-on: ubuntu-latest
+     permissions:
+       contents: read
+       pull-requests: write
+
+     steps:
+       - uses: actions/labeler@v4
+         with:
+           repo-token: "${{ secrets.GITHUB_TOKEN }}"
.github/workflows/pylint.yml ADDED
@@ -0,0 +1,23 @@
+ name: Pylint
+
+ on: [push]
+
+ jobs:
+   build:
+     runs-on: ubuntu-latest
+     strategy:
+       matrix:
+         python-version: ["3.8", "3.9", "3.10"]
+     steps:
+       - uses: actions/checkout@v4
+       - name: Set up Python ${{ matrix.python-version }}
+         uses: actions/setup-python@v4
+         with:
+           python-version: ${{ matrix.python-version }}
+       - name: Install dependencies
+         run: |
+           python -m pip install --upgrade pip
+           pip install pylint
+       - name: Analysing the code with pylint
+         run: |
+           pylint $(git ls-files '*.py')
.github/workflows/python-publish.yml ADDED
@@ -0,0 +1,32 @@
+
+ name: Upload Python Package
+
+ on:
+   release:
+     types: [published]
+
+ permissions:
+   contents: read
+
+ jobs:
+   deploy:
+
+     runs-on: ubuntu-latest
+
+     steps:
+       - uses: actions/checkout@v4
+       - name: Set up Python
+         uses: actions/setup-python@v4
+         with:
+           python-version: '3.x'
+       - name: Install dependencies
+         run: |
+           python -m pip install --upgrade pip
+           pip install build
+       - name: Build package
+         run: python -m build
+       - name: Publish package
+         uses: pypa/gh-action-pypi-publish@f8c70e705ffc13c3b4d1221169b84f12a75d6ca8
+         with:
+           user: __token__
+           password: ${{ secrets.PYPI_API_TOKEN }}
.github/workflows/quality.yml ADDED
@@ -0,0 +1,23 @@
+ name: Quality
+
+ on:
+   push:
+     branches: [ "main" ]
+   pull_request:
+     branches: [ "main" ]
+
+ jobs:
+   lint:
+     runs-on: ubuntu-latest
+     strategy:
+       fail-fast: false
+     steps:
+       - name: Checkout actions
+         uses: actions/checkout@v4
+         with:
+           fetch-depth: 0
+       - name: Init environment
+         uses: ./.github/actions/init-environment
+       - name: Run linter
+         run: |
+           pylint `git diff --name-only --diff-filter=d origin/main HEAD | grep -E '\.py$' | tr '\n' ' '`
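The `Run linter` step above lints only the Python files changed relative to `main` (the `git diff … | grep -E '\.py$'` pipeline). As a rough sketch of that selection logic — file names here are hypothetical examples, not part of the diff:

```python
def changed_python_files(changed: list) -> list:
    r"""Mirror the workflow's `grep -E '\.py$'` filter: keep only .py paths."""
    return [path for path in changed if path.endswith(".py")]

# A hypothetical `git diff --name-only` result:
print(changed_python_files(["api/app.py", "README.md", "apps/discord.py"]))
```

`--diff-filter=d` additionally drops deleted files, so pylint never receives a path that no longer exists.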
.github/workflows/ruff.yml ADDED
@@ -0,0 +1,8 @@
+ name: Ruff
+ on: [ push, pull_request ]
+ jobs:
+   ruff:
+     runs-on: ubuntu-latest
+     steps:
+       - uses: actions/checkout@v4
+       - uses: chartboost/ruff-action@v1
.github/workflows/run_test.yml ADDED
@@ -0,0 +1,23 @@
+ name: Python application test
+
+ on: [push]
+
+ jobs:
+   build:
+
+     runs-on: ubuntu-latest
+
+     steps:
+       - uses: actions/checkout@v4
+       - name: Set up Python 3.8
+         uses: actions/setup-python@v4
+         with:
+           python-version: 3.8
+       - name: Install dependencies
+         run: |
+           python -m pip install --upgrade pip
+           pip install pytest
+           if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
+       - name: Run tests with pytest
+         run: |
+           pytest tests/
.github/workflows/stale.yml ADDED
@@ -0,0 +1,27 @@
+ # This workflow warns and then closes issues and PRs that have had no activity for a specified amount of time.
+ #
+ # You can adjust the behavior by modifying this file.
+ # For more information, see:
+ # https://github.com/actions/stale
+ name: Mark stale issues and pull requests
+
+ on:
+   schedule:
+     - cron: '26 12 * * *'
+
+ jobs:
+   stale:
+
+     runs-on: ubuntu-latest
+     permissions:
+       issues: write
+       pull-requests: write
+
+     steps:
+       - uses: actions/stale@v8
+         with:
+           repo-token: ${{ secrets.GITHUB_TOKEN }}
+           stale-issue-message: 'Stale issue message'
+           stale-pr-message: 'Stale pull request message'
+           stale-issue-label: 'no-issue-activity'
+           stale-pr-label: 'no-pr-activity'
.github/workflows/test.yml ADDED
@@ -0,0 +1,49 @@
+ name: test
+
+ on:
+   push:
+     branches: [master]
+   pull_request:
+   workflow_dispatch:
+
+ env:
+   POETRY_VERSION: "1.4.2"
+
+ jobs:
+   build:
+     runs-on: ubuntu-latest
+     strategy:
+       matrix:
+         python-version:
+           - "3.8"
+           - "3.9"
+           - "3.10"
+           - "3.11"
+         test_type:
+           - "core"
+           - "extended"
+     name: Python ${{ matrix.python-version }} ${{ matrix.test_type }}
+     steps:
+       - uses: actions/checkout@v4
+       - name: Set up Python ${{ matrix.python-version }}
+         uses: "./.github/actions/poetry_setup"
+         with:
+           python-version: ${{ matrix.python-version }}
+           poetry-version: "1.4.2"
+           cache-key: ${{ matrix.test_type }}
+           install-command: |
+             if [ "${{ matrix.test_type }}" == "core" ]; then
+               echo "Running core tests, installing dependencies with poetry..."
+               poetry install
+             else
+               echo "Running extended tests, installing dependencies with poetry..."
+               poetry install -E extended_testing
+             fi
+       - name: Run ${{matrix.test_type}} tests
+         run: |
+           if [ "${{ matrix.test_type }}" == "core" ]; then
+             make test
+           else
+             make extended_tests
+           fi
+         shell: bash
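The `strategy.matrix` above fans out into one CI job per combination of Python version and test type. A quick sketch of that expansion:

```python
from itertools import product

# The two matrix axes declared in the workflow above.
versions = ["3.8", "3.9", "3.10", "3.11"]
test_types = ["core", "extended"]

# Each (version, type) pair becomes one job, named like the workflow's `name:` field.
jobs = [f"Python {v} {t}" for v, t in product(versions, test_types)]
print(len(jobs))
```

Four versions times two test types yields eight jobs per push, which is worth keeping in mind when `make extended_tests` is slow.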
.github/workflows/unit-test.yml ADDED
@@ -0,0 +1,45 @@
+ name: build
+
+ on:
+   push:
+     branches: [ main ]
+   pull_request:
+     branches: [ main ]
+
+ jobs:
+
+   build:
+
+     runs-on: ubuntu-latest
+
+     steps:
+       - uses: actions/checkout@v4
+
+       - name: Setup Python
+         uses: actions/setup-python@v4
+         with:
+           python-version: '3.10'
+
+       - name: Install dependencies
+         run: pip install -r requirements.txt
+
+       - name: Run Python unit tests
+         run: python3 -m unittest tests/swarms
+
+       - name: Verify that the Docker image for the action builds
+         run: docker build . --file Dockerfile
+
+       - name: Integration test 1
+         uses: ./
+         with:
+           input-one: something
+           input-two: true
+
+       - name: Integration test 2
+         uses: ./
+         with:
+           input-one: something else
+           input-two: false
+
+       - name: Verify integration test results
+         run: python3 -m unittest unittesting/swarms
.github/workflows/update_space.yml ADDED
@@ -0,0 +1,28 @@
+ name: Run Python script
+
+ on:
+   push:
+     branches:
+       - discord-bot
+
+ jobs:
+   build:
+     runs-on: ubuntu-latest
+
+     steps:
+       - name: Checkout
+         uses: actions/checkout@v2
+
+       - name: Set up Python
+         uses: actions/setup-python@v2
+         with:
+           python-version: '3.9'
+
+       - name: Install Gradio
+         run: python -m pip install gradio
+
+       - name: Log in to Hugging Face
+         run: python -c 'import huggingface_hub; huggingface_hub.login(token="${{ secrets.hf_token }}")'
+
+       - name: Deploy to Spaces
+         run: gradio deploy
.gitignore ADDED
@@ -0,0 +1,182 @@
+ __pycache__/
+ .venv/
+
+ .env
+
+ image/
+ audio/
+ video/
+ dataframe/
+
+ static/generated
+ swarms/__pycache__
+ venv
+ .DS_Store
+
+ .DS_STORE
+ swarms/agents/.DS_Store
+
+ _build
+
+
+ .DS_STORE
+ # Byte-compiled / optimized / DLL files
+ __pycache__/
+ *.py[cod]
+ *$py.class
+
+ # C extensions
+ *.so
+
+ # Distribution / packaging
+ .Python
+ build/
+ develop-eggs/
+ dist/
+ downloads/
+ eggs/
+ .eggs/
+ lib/
+ lib64/
+ parts/
+ sdist/
+ var/
+ wheels/
+ share/python-wheels/
+ *.egg-info/
+ .installed.cfg
+ *.egg
+ MANIFEST
+
+ # PyInstaller
+ # Usually these files are written by a python script from a template
+ # before PyInstaller builds the exe, so as to inject date/other infos into it.
+ *.manifest
+ *.spec
+
+ # Installer logs
+ pip-log.txt
+ pip-delete-this-directory.txt
+
+ # Unit test / coverage reports
+ htmlcov/
+ .tox/
+ .nox/
+ .coverage
+ .coverage.*
+ .cache
+ nosetests.xml
+ coverage.xml
+ *.cover
+ *.py,cover
+ .hypothesis/
+ .pytest_cache/
+ cover/
+
+ # Translations
+ *.mo
+ *.pot
+
+ # Django stuff:
+ *.log
+ local_settings.py
+ db.sqlite3
+ db.sqlite3-journal
+
+ # Flask stuff:
+ instance/
+ .webassets-cache
+
+ # Scrapy stuff:
+ .scrapy
+
+ # Sphinx documentation
+ docs/_build/
+
+ # PyBuilder
+ .pybuilder/
+ target/
+
+ # Jupyter Notebook
+ .ipynb_checkpoints
+
+ # IPython
+ profile_default/
+ ipython_config.py
+ .DS_Store
+ # pyenv
+ # For a library or package, you might want to ignore these files since the code is
+ # intended to run in multiple environments; otherwise, check them in:
+ # .python-version
+
+ # pipenv
+ # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+ # However, in case of collaboration, if having platform-specific dependencies or dependencies
+ # having no cross-platform support, pipenv may install dependencies that don't work, or not
+ # install all needed dependencies.
+ #Pipfile.lock
+
+ # poetry
+ # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
+ # This is especially recommended for binary packages to ensure reproducibility, and is more
+ # commonly ignored for libraries.
+ # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
+ #poetry.lock
+
+ # pdm
+ # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
+ #pdm.lock
+ # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
+ # in version control.
+ # https://pdm.fming.dev/#use-with-ide
+ .pdm.toml
+
+ # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
+ __pypackages__/
+
+ # Celery stuff
+ celerybeat-schedule
+ celerybeat.pid
+
+ # SageMath parsed files
+ *.sage.py
+
+ # Environments
+ .env
+ .venv
+ env/
+ venv/
+ ENV/
+ env.bak/
+ venv.bak/
+
+ # Spyder project settings
+ .spyderproject
+ .spyproject
+
+ # Rope project settings
+ .ropeproject
+
+ # mkdocs documentation
+ /site
+
+ # mypy
+ .mypy_cache/
+ .dmypy.json
+ dmypy.json
+
+ # Pyre type checker
+ .pyre/
+
+ # pytype static type analyzer
+ .pytype/
+
+ # Cython debug symbols
+ cython_debug/
+
+ # PyCharm
+ # JetBrains specific template is maintained in a separate JetBrains.gitignore that can
+ # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
+ # and can be added to the global gitignore or merged into this file. For a more nuclear
+ # option (not recommended) you can uncomment the following to ignore the entire idea folder.
+ #.idea/
.readthedocs.yml ADDED
@@ -0,0 +1,13 @@
+ version: 2
+
+ build:
+   os: ubuntu-22.04
+   tools:
+     python: "3.11"
+
+ mkdocs:
+   configuration: mkdocs.yml
+
+ python:
+   install:
+     - requirements: requirements.txt
=0.3.0 ADDED
File without changes
=3.38.0 ADDED
@@ -0,0 +1,64 @@
+ Defaulting to user installation because normal site-packages is not writeable
+ Requirement already satisfied: gradio_client in /home/zack/.local/lib/python3.10/site-packages (0.2.5)
+ Requirement already satisfied: gradio in /home/zack/.local/lib/python3.10/site-packages (3.33.1)
+ Requirement already satisfied: fsspec in /home/zack/.local/lib/python3.10/site-packages (from gradio_client) (2023.5.0)
+ Requirement already satisfied: httpx in /home/zack/.local/lib/python3.10/site-packages (from gradio_client) (0.24.1)
+ Requirement already satisfied: huggingface-hub>=0.13.0 in /home/zack/.local/lib/python3.10/site-packages (from gradio_client) (0.16.4)
+ Requirement already satisfied: packaging in /home/zack/.local/lib/python3.10/site-packages (from gradio_client) (23.2)
+ Requirement already satisfied: requests in /home/zack/.local/lib/python3.10/site-packages (from gradio_client) (2.27.1)
+ Requirement already satisfied: typing-extensions in /home/zack/.local/lib/python3.10/site-packages (from gradio_client) (4.8.0)
+ Requirement already satisfied: websockets in /home/zack/.local/lib/python3.10/site-packages (from gradio_client) (11.0.3)
+ Requirement already satisfied: aiofiles in /home/zack/.local/lib/python3.10/site-packages (from gradio) (23.1.0)
+ Requirement already satisfied: aiohttp in /home/zack/.local/lib/python3.10/site-packages (from gradio) (3.8.4)
+ Requirement already satisfied: altair>=4.2.0 in /home/zack/.local/lib/python3.10/site-packages (from gradio) (4.2.2)
+ Requirement already satisfied: fastapi in /home/zack/.local/lib/python3.10/site-packages (from gradio) (0.100.1)
+ Requirement already satisfied: ffmpy in /home/zack/.local/lib/python3.10/site-packages (from gradio) (0.3.0)
+ Requirement already satisfied: jinja2 in /home/zack/.local/lib/python3.10/site-packages (from gradio) (3.1.2)
+ Requirement already satisfied: markdown-it-py[linkify]>=2.0.0 in /home/zack/.local/lib/python3.10/site-packages (from gradio) (2.2.0)
+ Requirement already satisfied: markupsafe in /home/zack/.local/lib/python3.10/site-packages (from gradio) (2.1.3)
+ Requirement already satisfied: matplotlib in /home/zack/.local/lib/python3.10/site-packages (from gradio) (3.1.3)
+ Requirement already satisfied: mdit-py-plugins<=0.3.3 in /home/zack/.local/lib/python3.10/site-packages (from gradio) (0.3.3)
+ Requirement already satisfied: numpy in /home/zack/.local/lib/python3.10/site-packages (from gradio) (1.22.4)
+ Requirement already satisfied: orjson in /home/zack/.local/lib/python3.10/site-packages (from gradio) (3.9.7)
+ Requirement already satisfied: pandas in /home/zack/.local/lib/python3.10/site-packages (from gradio) (1.4.2)
+ Requirement already satisfied: pillow in /home/zack/.local/lib/python3.10/site-packages (from gradio) (9.5.0)
+ Requirement already satisfied: pydantic in /home/zack/.local/lib/python3.10/site-packages (from gradio) (1.8.2)
+ Requirement already satisfied: pydub in /home/zack/.local/lib/python3.10/site-packages (from gradio) (0.25.1)
+ Requirement already satisfied: pygments>=2.12.0 in /home/zack/.local/lib/python3.10/site-packages (from gradio) (2.16.1)
+ Requirement already satisfied: python-multipart in /home/zack/.local/lib/python3.10/site-packages (from gradio) (0.0.6)
+ Requirement already satisfied: pyyaml in /home/zack/.local/lib/python3.10/site-packages (from gradio) (6.0)
+ Requirement already satisfied: semantic-version in /home/zack/.local/lib/python3.10/site-packages (from gradio) (2.10.0)
+ Requirement already satisfied: uvicorn>=0.14.0 in /home/zack/.local/lib/python3.10/site-packages (from gradio) (0.18.3)
+ Requirement already satisfied: entrypoints in /home/zack/.local/lib/python3.10/site-packages (from altair>=4.2.0->gradio) (0.4)
+ Requirement already satisfied: jsonschema>=3.0 in /home/zack/.local/lib/python3.10/site-packages (from altair>=4.2.0->gradio) (4.19.1)
+ Requirement already satisfied: toolz in /home/zack/.local/lib/python3.10/site-packages (from altair>=4.2.0->gradio) (0.12.0)
+ Requirement already satisfied: filelock in /home/zack/.local/lib/python3.10/site-packages (from huggingface-hub>=0.13.0->gradio_client) (3.12.4)
+ Requirement already satisfied: tqdm>=4.42.1 in /home/zack/.local/lib/python3.10/site-packages (from huggingface-hub>=0.13.0->gradio_client) (4.64.0)
+ Requirement already satisfied: mdurl~=0.1 in /home/zack/.local/lib/python3.10/site-packages (from markdown-it-py[linkify]>=2.0.0->gradio) (0.1.2)
+ Requirement already satisfied: linkify-it-py<3,>=1 in /home/zack/.local/lib/python3.10/site-packages (from markdown-it-py[linkify]>=2.0.0->gradio) (2.0.2)
+ Requirement already satisfied: python-dateutil>=2.8.1 in /home/zack/.local/lib/python3.10/site-packages (from pandas->gradio) (2.8.2)
+ Requirement already satisfied: pytz>=2020.1 in /usr/lib/python3/dist-packages (from pandas->gradio) (2022.1)
+ Requirement already satisfied: click>=7.0 in /home/zack/.local/lib/python3.10/site-packages (from uvicorn>=0.14.0->gradio) (8.1.6)
+ Requirement already satisfied: h11>=0.8 in /home/zack/.local/lib/python3.10/site-packages (from uvicorn>=0.14.0->gradio) (0.14.0)
+ Requirement already satisfied: attrs>=17.3.0 in /home/zack/.local/lib/python3.10/site-packages (from aiohttp->gradio) (23.1.0)
+ Requirement already satisfied: charset-normalizer<4.0,>=2.0 in /home/zack/.local/lib/python3.10/site-packages (from aiohttp->gradio) (2.0.12)
+ Requirement already satisfied: multidict<7.0,>=4.5 in /home/zack/.local/lib/python3.10/site-packages (from aiohttp->gradio) (6.0.4)
+ Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /home/zack/.local/lib/python3.10/site-packages (from aiohttp->gradio) (4.0.2)
+ Requirement already satisfied: yarl<2.0,>=1.0 in /home/zack/.local/lib/python3.10/site-packages (from aiohttp->gradio) (1.9.2)
+ Requirement already satisfied: frozenlist>=1.1.1 in /home/zack/.local/lib/python3.10/site-packages (from aiohttp->gradio) (1.3.3)
+ Requirement already satisfied: aiosignal>=1.1.2 in /home/zack/.local/lib/python3.10/site-packages (from aiohttp->gradio) (1.3.1)
+ Requirement already satisfied: starlette<0.28.0,>=0.27.0 in /home/zack/.local/lib/python3.10/site-packages (from fastapi->gradio) (0.27.0)
+ Requirement already satisfied: certifi in /home/zack/.local/lib/python3.10/site-packages (from httpx->gradio_client) (2023.5.7)
+ Requirement already satisfied: httpcore<0.18.0,>=0.15.0 in /home/zack/.local/lib/python3.10/site-packages (from httpx->gradio_client) (0.17.0)
+ Requirement already satisfied: idna in /home/zack/.local/lib/python3.10/site-packages (from httpx->gradio_client) (3.4)
+ Requirement already satisfied: sniffio in /home/zack/.local/lib/python3.10/site-packages (from httpx->gradio_client) (1.3.0)
+ Requirement already satisfied: cycler>=0.10 in /usr/lib/python3/dist-packages (from matplotlib->gradio) (0.11.0)
+ Requirement already satisfied: kiwisolver>=1.0.1 in /usr/lib/python3/dist-packages (from matplotlib->gradio) (1.3.2)
+ Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/lib/python3/dist-packages (from matplotlib->gradio) (2.4.7)
+ Requirement already satisfied: urllib3<1.27,>=1.21.1 in /home/zack/.local/lib/python3.10/site-packages (from requests->gradio_client) (1.26.17)
+ Requirement already satisfied: anyio<5.0,>=3.0 in /home/zack/.local/lib/python3.10/site-packages (from httpcore<0.18.0,>=0.15.0->httpx->gradio_client) (3.6.2)
+ Requirement already satisfied: jsonschema-specifications>=2023.03.6 in /home/zack/.local/lib/python3.10/site-packages (from jsonschema>=3.0->altair>=4.2.0->gradio) (2023.7.1)
+ Requirement already satisfied: referencing>=0.28.4 in /home/zack/.local/lib/python3.10/site-packages (from jsonschema>=3.0->altair>=4.2.0->gradio) (0.30.2)
+ Requirement already satisfied: rpds-py>=0.7.1 in /home/zack/.local/lib/python3.10/site-packages (from jsonschema>=3.0->altair>=4.2.0->gradio) (0.10.3)
+ Requirement already satisfied: uc-micro-py in /home/zack/.local/lib/python3.10/site-packages (from linkify-it-py<3,>=1->markdown-it-py[linkify]>=2.0.0->gradio) (1.0.2)
+ Requirement already satisfied: six>=1.5 in /usr/lib/python3/dist-packages (from python-dateutil>=2.8.1->pandas->gradio) (1.16.0)
CONTRIBUTING.md ADDED
@@ -0,0 +1,248 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Contributing to Swarms
2
+
3
+ Hi there! Thank you for even being interested in contributing to Swarms.
4
+ As an open source project in a rapidly developing field, we are extremely open
5
+ to contributions, whether they be in the form of new features, improved infra, better documentation, or bug fixes.
6
+
7
+ ## 🗺️ Guidelines
8
+
9
+ ### 👩‍💻 Contributing Code
10
+
11
+ To contribute to this project, please follow a ["fork and pull request"](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) workflow.
12
+ Please do not try to push directly to this repo unless you are maintainer.
13
+
14
+ Please follow the checked-in pull request template when opening pull requests. Note related issues and tag relevant
15
+ maintainers.
16
+
17
+ Pull requests cannot land without passing the formatting, linting and testing checks first. See
18
+ [Common Tasks](#-common-tasks) for how to run these checks locally.
19
+
20
+ It's essential that we maintain great documentation and testing. If you:
21
+ - Fix a bug
22
+ - Add a relevant unit or integration test when possible. These live in `tests/unit_tests` and `tests/integration_tests`.
23
+ - Make an improvement
24
+ - Update any affected example notebooks and documentation. These lives in `docs`.
25
+ - Update unit and integration tests when relevant.
26
+ - Add a feature
27
+ - Add a demo notebook in `docs/modules`.
28
+ - Add unit and integration tests.
29
+
30
+ We're a small, building-oriented team. If there's something you'd like to add or change, opening a pull request is the
31
+ best way to get our attention.
32
+
33
+ ### 🚩GitHub Issues
34
+
35
+ Our [issues](https://github.com/kyegomez/Swarms/issues) page is kept up to date
36
+ with bugs, improvements, and feature requests.
37
+
38
+ There is a taxonomy of labels to help with sorting and discovery of issues of interest. Please use these to help
39
+ organize issues.
40
+
41
+ If you start working on an issue, please assign it to yourself.
42
+
43
+ If you are adding an issue, please try to keep it focused on a single, modular bug/improvement/feature.
44
+ If two issues are related, or blocking, please link them rather than combining them.
45
+
46
+ We will try to keep these issues as up to date as possible, though
47
+ with the rapid rate of develop in this field some may get out of date.
48
+ If you notice this happening, please let us know.
49
+
50
+ ### 🙋Getting Help
51
+
52
+ Our goal is to have the simplest developer setup possible. Should you experience any difficulty getting set up, please
53
+ contact a maintainer! Not only do we want to help get you unblocked, but we also want to make sure that the process is
54
+ smooth for future contributors.
55
+
56
+ In a similar vein, we do enforce certain linting, formatting, and documentation standards in the codebase.
57
+ If you are finding these difficult (or even just annoying) to work with, feel free to contact a maintainer for help -
58
+ we do not want these to get in the way of getting good code into the codebase.
59
+
60
+ ## 🚀 Quick Start
61
+
62
+ > **Note:** You can run this repository locally (which is described below) or in a [development container](https://containers.dev/) (which is described in the [.devcontainer folder](https://github.com/hwchase17/Swarms/tree/master/.devcontainer)).
63
+
64
+ This project uses [Poetry](https://python-poetry.org/) as a dependency manager. Check out Poetry's [documentation on how to install it](https://python-poetry.org/docs/#installation) on your system before proceeding.
65
+
66
+ ❗Note: If you use `Conda` or `Pyenv` as your environment / package manager, avoid dependency conflicts by doing the following first:
67
+ 1. *Before installing Poetry*, create and activate a new Conda env (e.g. `conda create -n Swarms python=3.9`)
68
+ 2. Install Poetry (see above)
69
+ 3. Tell Poetry to use the virtualenv python environment (`poetry config virtualenvs.prefer-active-python true`)
70
+ 4. Continue with the following steps.
71
+
72
+ To install requirements:
73
+
74
+ ```bash
75
+ poetry install -E all
76
+ ```
77
+
78
+ This will install all requirements for running the package, examples, linting, formatting, tests, and coverage. Note the `-E all` flag will install all optional dependencies necessary for integration testing.
79
+
80
+ ❗Note: If you're running Poetry 1.4.1 and receive a `WheelFileValidationError` for `debugpy` during installation, you can try either downgrading to Poetry 1.4.0 or disabling "modern installation" (`poetry config installer.modern-installation false`) and re-installing requirements. See [this `debugpy` issue](https://github.com/microsoft/debugpy/issues/1246) for more details.
81
+
82
+ Now, you should be able to run the common tasks in the following section. To double-check, run `make test`; all tests should pass. If they don't, you may need to `pip install` additional dependencies, such as `numexpr` and `openapi_schema_pydantic`.
83
+
84
+ ## ✅ Common Tasks
85
+
86
+ Type `make` for a list of common tasks.
87
+
88
+ ### Code Formatting
89
+
90
+ Formatting for this project is done via a combination of [Black](https://black.readthedocs.io/en/stable/) and [isort](https://pycqa.github.io/isort/).
91
+
92
+ To run formatting for this project:
93
+
94
+ ```bash
95
+ make format
96
+ ```
97
+
98
+ ### Linting
99
+
100
+ Linting for this project is done via a combination of [Black](https://black.readthedocs.io/en/stable/), [isort](https://pycqa.github.io/isort/), [flake8](https://flake8.pycqa.org/en/latest/), and [mypy](http://mypy-lang.org/).
101
+
102
+ To run linting for this project:
103
+
104
+ ```bash
105
+ make lint
106
+ ```
107
+
108
+ We recognize linting can be annoying - if you do not want to do it, please contact a project maintainer, and they can help you with it. We do not want this to be a blocker for good code getting contributed.
109
+
110
+ ### Coverage
111
+
112
+ Code coverage (i.e. the amount of code that is covered by unit tests) helps identify areas of the code that are potentially more or less brittle.
113
+
114
+ To get a report of current coverage, run the following:
115
+
116
+ ```bash
117
+ make coverage
118
+ ```
119
+
120
+ ### Working with Optional Dependencies
121
+
122
+ Swarms relies heavily on optional dependencies to keep the Swarms package lightweight.
123
+
124
+ If you're adding a new dependency to Swarms, assume that it will be an optional dependency, and
125
+ that most users won't have it installed.
126
+
127
+ Users that do not have the dependency installed should be able to **import** your code without
128
+ any side effects (no warnings, no errors, no exceptions).
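A common way to satisfy this is to guard the import at module load time and raise a helpful error only when the optional feature is actually used. A minimal sketch (the `faiss` dependency and `nearest_neighbors` helper are hypothetical examples, not part of the codebase):

```python
# Guard the optional import so that importing this module stays side-effect free.
try:
    import faiss  # hypothetical optional dependency
except ImportError:
    faiss = None


def nearest_neighbors(query, corpus):
    """Raise a helpful error only when the optional feature is used."""
    if faiss is None:
        raise ImportError(
            "nearest_neighbors requires the optional dependency `faiss`. "
            "Install it with `poetry install -E all`."
        )
    # ... real faiss-backed logic would go here ...
```

With this pattern, users without the dependency can still import and use the rest of the package; only calling the optional feature fails, and with an actionable message.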
129
+
130
+ To introduce the dependency to the pyproject.toml file correctly, please do the following:
131
+
132
+ 1. Add the dependency to the main group as an optional dependency
133
+ ```bash
134
+ poetry add --optional [package_name]
135
+ ```
136
+ 2. Open pyproject.toml and add the dependency to the `extended_testing` extra
137
+ 3. Relock the poetry file to update the extra.
138
+ ```bash
139
+ poetry lock --no-update
140
+ ```
141
+ 4. Add a unit test that at the very least attempts to import the new code. Ideally the unit
142
+ test makes use of lightweight fixtures to test the logic of the code.
143
+ 5. Please use the `@pytest.mark.requires(package_name)` decorator for any tests that require the dependency.
144
+
145
+ ### Testing
146
+
147
+ See the section above on working with optional dependencies.
148
+
149
+ #### Unit Tests
150
+
151
+ Unit tests cover modular logic that does not require calls to outside APIs.
152
+
153
+ To run unit tests:
154
+
155
+ ```bash
156
+ make test
157
+ ```
158
+
159
+ To run unit tests in Docker:
160
+
161
+ ```bash
162
+ make docker_tests
163
+ ```
164
+
165
+ If you add new logic, please add a unit test.
166
+
167
+
168
+
169
+ #### Integration Tests
170
+
171
+ Integration tests cover logic that requires making calls to outside APIs (often integration with other services).
172
+
173
+ **Warning:** Almost no tests should be integration tests.
174
+
175
+ Tests that require making network connections make it difficult for other
176
+ developers to test the code.
177
+
178
+ Instead, favor the `responses` library and/or `unittest.mock.patch` to mock
179
+ requests using small fixtures.
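For example, a helper that would normally hit the network can be tested entirely offline with the standard library's `unittest.mock` (the URL and `fetch_winning_time` helper below are hypothetical):

```python
import urllib.request
from unittest.mock import MagicMock, patch


def fetch_winning_time() -> str:
    """Hypothetical helper that would normally make a network call."""
    with urllib.request.urlopen("https://example.com/marathon") as resp:
        return resp.read().decode()


def test_fetch_winning_time_mocked() -> None:
    # Small in-memory fixture standing in for the network response.
    fake = MagicMock()
    fake.read.return_value = b"2:06:51"
    fake.__enter__.return_value = fake
    # Patch the network call so the test runs offline and deterministically.
    with patch("urllib.request.urlopen", return_value=fake):
        assert fetch_winning_time() == "2:06:51"
```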
180
+
181
+ To run integration tests:
182
+
183
+ ```bash
184
+ make integration_tests
185
+ ```
186
+
187
+ If you add support for a new external API, please add a new integration test.
188
+
189
+ ### Adding a Jupyter Notebook
190
+
191
+ If you are adding a Jupyter notebook example, you'll want to install the optional `dev` dependencies.
192
+
193
+ To install dev dependencies:
194
+
195
+ ```bash
196
+ poetry install --with dev
197
+ ```
198
+
199
+ Launch a notebook:
200
+
201
+ ```bash
202
+ poetry run jupyter notebook
203
+ ```
204
+
205
+ When you run `poetry install`, the `Swarms` package is installed as editable in the virtualenv, so your new logic can be imported into the notebook.
206
+
207
+ ## Documentation
208
+
209
+ ### Contribute Documentation
210
+
211
+ Docs are largely autogenerated by [sphinx](https://www.sphinx-doc.org/en/master/) from the code.
212
+
213
+ For that reason, we ask that you add good documentation to all classes and methods.
214
+
215
+ Similar to linting, we recognize documentation can be annoying. If you do not want to do it, please contact a project maintainer, and they can help you with it. We do not want this to be a blocker for good code getting contributed.
216
+
217
+ ### Build Documentation Locally
218
+
219
+ Before building the documentation, it is always a good idea to clean the build directory:
220
+
221
+ ```bash
222
+ make docs_clean
223
+ ```
224
+
225
+ Next, you can run the linkchecker to make sure all links are valid:
226
+
227
+ ```bash
228
+ make docs_linkcheck
229
+ ```
230
+
231
+ Finally, you can build the documentation as outlined below:
232
+
233
+ ```bash
234
+ make docs_build
235
+ ```
236
+
237
+ ## 🏭 Release Process
238
+
239
+ As of now, Swarms has an ad hoc release process: releases are cut with high frequency by
240
+ a developer and published to [PyPI](https://pypi.org/project/Swarms/).
241
+
242
+ Swarms follows the [semver](https://semver.org/) versioning standard. However, as pre-1.0 software,
243
+ even patch releases may contain [non-backwards-compatible changes](https://semver.org/#spec-item-4).
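Because even a patch bump can break callers, downstream projects may want to fail fast when the installed version differs from the one they tested against. A minimal stdlib-only sketch (the pinned version below is a hypothetical placeholder):

```python
from importlib import metadata

TESTED_VERSION = "2.4.1"  # hypothetical: the release your project was tested against


def check_pinned(package: str = "Swarms", expected: str = TESTED_VERSION) -> None:
    """Fail fast if the installed version differs from the tested one."""
    try:
        installed = metadata.version(package)
    except metadata.PackageNotFoundError:
        raise RuntimeError(f"{package} is not installed")
    if installed != expected:
        raise RuntimeError(
            f"{package} {installed} is installed, but this project was tested "
            f"against {expected}; pre-1.0 releases may break between versions."
        )
```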
244
+
245
+ ### 🌟 Recognition
246
+
247
+ If your contribution has made its way into a release, we will want to give you credit on Twitter (only if you want though)!
248
+ If you have a Twitter account you would like us to mention, please let us know in the PR or in another manner.
Dockerfile ADDED
@@ -0,0 +1,48 @@
1
+ # This is a Dockerfile for running unit tests
2
+
3
+ ARG POETRY_HOME=/opt/poetry
4
+
5
+ # Use the Python base image
6
+ FROM python:3.11.2-bullseye AS builder
7
+
8
+ # Define the version of Poetry to install (default is 1.4.2)
9
+ ARG POETRY_VERSION=1.4.2
10
+
11
+ # Define the directory to install Poetry to (default is /opt/poetry)
12
+ ARG POETRY_HOME
13
+
14
+ # Create a Python virtual environment for Poetry and install it
15
+ RUN python3 -m venv ${POETRY_HOME} && \
16
+ $POETRY_HOME/bin/pip install --upgrade pip && \
17
+ $POETRY_HOME/bin/pip install poetry==${POETRY_VERSION}
18
+
19
+ # Test if Poetry is installed in the expected path
20
+ RUN echo "Poetry version:" && $POETRY_HOME/bin/poetry --version
21
+
22
+ # Set the working directory for the app
23
+ WORKDIR /app
24
+
25
+ # Use a multi-stage build to install dependencies
26
+ FROM builder AS dependencies
27
+
28
+ ARG POETRY_HOME
29
+
30
+ # Copy only the dependency files for installation
31
+ COPY pyproject.toml poetry.lock poetry.toml ./
32
+
33
+ # Install the Poetry dependencies (this layer will be cached as long as the dependencies don't change)
34
+ RUN $POETRY_HOME/bin/poetry install --no-interaction --no-ansi --with test
35
+
36
+ # Use a multi-stage build to run tests
37
+ FROM dependencies AS tests
38
+
39
+ # Copy the rest of the app source code (this layer will be invalidated and rebuilt whenever the source code changes)
40
+ COPY . .
41
+
42
+ RUN /opt/poetry/bin/poetry install --no-interaction --no-ansi --with test
43
+
44
+ # Set the entrypoint to run tests using Poetry
45
+ ENTRYPOINT ["/opt/poetry/bin/poetry", "run", "pytest"]
46
+
47
+ # Set the default command to run all unit tests
48
+ CMD ["tests/"]
LICENSE ADDED
@@ -0,0 +1,201 @@
1
+ Apache License
2
+ Version 2.0, January 2004
3
+ http://www.apache.org/licenses/
4
+
5
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6
+
7
+ 1. Definitions.
8
+
9
+ "License" shall mean the terms and conditions for use, reproduction,
10
+ and distribution as defined by Sections 1 through 9 of this document.
11
+
12
+ "Licensor" shall mean the copyright owner or entity authorized by
13
+ the copyright owner that is granting the License.
14
+
15
+ "Legal Entity" shall mean the union of the acting entity and all
16
+ other entities that control, are controlled by, or are under common
17
+ control with that entity. For the purposes of this definition,
18
+ "control" means (i) the power, direct or indirect, to cause the
19
+ direction or management of such entity, whether by contract or
20
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
21
+ outstanding shares, or (iii) beneficial ownership of such entity.
22
+
23
+ "You" (or "Your") shall mean an individual or Legal Entity
24
+ exercising permissions granted by this License.
25
+
26
+ "Source" form shall mean the preferred form for making modifications,
27
+ including but not limited to software source code, documentation
28
+ source, and configuration files.
29
+
30
+ "Object" form shall mean any form resulting from mechanical
31
+ transformation or translation of a Source form, including but
32
+ not limited to compiled object code, generated documentation,
33
+ and conversions to other media types.
34
+
35
+ "Work" shall mean the work of authorship, whether in Source or
36
+ Object form, made available under the License, as indicated by a
37
+ copyright notice that is included in or attached to the work
38
+ (an example is provided in the Appendix below).
39
+
40
+ "Derivative Works" shall mean any work, whether in Source or Object
41
+ form, that is based on (or derived from) the Work and for which the
42
+ editorial revisions, annotations, elaborations, or other modifications
43
+ represent, as a whole, an original work of authorship. For the purposes
44
+ of this License, Derivative Works shall not include works that remain
45
+ separable from, or merely link (or bind by name) to the interfaces of,
46
+ the Work and Derivative Works thereof.
47
+
48
+ "Contribution" shall mean any work of authorship, including
49
+ the original version of the Work and any modifications or additions
50
+ to that Work or Derivative Works thereof, that is intentionally
51
+ submitted to Licensor for inclusion in the Work by the copyright owner
52
+ or by an individual or Legal Entity authorized to submit on behalf of
53
+ the copyright owner. For the purposes of this definition, "submitted"
54
+ means any form of electronic, verbal, or written communication sent
55
+ to the Licensor or its representatives, including but not limited to
56
+ communication on electronic mailing lists, source code control systems,
57
+ and issue tracking systems that are managed by, or on behalf of, the
58
+ Licensor for the purpose of discussing and improving the Work, but
59
+ excluding communication that is conspicuously marked or otherwise
60
+ designated in writing by the copyright owner as "Not a Contribution."
61
+
62
+ "Contributor" shall mean Licensor and any individual or Legal Entity
63
+ on behalf of whom a Contribution has been received by Licensor and
64
+ subsequently incorporated within the Work.
65
+
66
+ 2. Grant of Copyright License. Subject to the terms and conditions of
67
+ this License, each Contributor hereby grants to You a perpetual,
68
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69
+ copyright license to reproduce, prepare Derivative Works of,
70
+ publicly display, publicly perform, sublicense, and distribute the
71
+ Work and such Derivative Works in Source or Object form.
72
+
73
+ 3. Grant of Patent License. Subject to the terms and conditions of
74
+ this License, each Contributor hereby grants to You a perpetual,
75
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76
+ (except as stated in this section) patent license to make, have made,
77
+ use, offer to sell, sell, import, and otherwise transfer the Work,
78
+ where such license applies only to those patent claims licensable
79
+ by such Contributor that are necessarily infringed by their
80
+ Contribution(s) alone or by combination of their Contribution(s)
81
+ with the Work to which such Contribution(s) was submitted. If You
82
+ institute patent litigation against any entity (including a
83
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
84
+ or a Contribution incorporated within the Work constitutes direct
85
+ or contributory patent infringement, then any patent licenses
86
+ granted to You under this License for that Work shall terminate
87
+ as of the date such litigation is filed.
88
+
89
+ 4. Redistribution. You may reproduce and distribute copies of the
90
+ Work or Derivative Works thereof in any medium, with or without
91
+ modifications, and in Source or Object form, provided that You
92
+ meet the following conditions:
93
+
94
+ (a) You must give any other recipients of the Work or
95
+ Derivative Works a copy of this License; and
96
+
97
+ (b) You must cause any modified files to carry prominent notices
98
+ stating that You changed the files; and
99
+
100
+ (c) You must retain, in the Source form of any Derivative Works
101
+ that You distribute, all copyright, patent, trademark, and
102
+ attribution notices from the Source form of the Work,
103
+ excluding those notices that do not pertain to any part of
104
+ the Derivative Works; and
105
+
106
+ (d) If the Work includes a "NOTICE" text file as part of its
107
+ distribution, then any Derivative Works that You distribute must
108
+ include a readable copy of the attribution notices contained
109
+ within such NOTICE file, excluding those notices that do not
110
+ pertain to any part of the Derivative Works, in at least one
111
+ of the following places: within a NOTICE text file distributed
112
+ as part of the Derivative Works; within the Source form or
113
+ documentation, if provided along with the Derivative Works; or,
114
+ within a display generated by the Derivative Works, if and
115
+ wherever such third-party notices normally appear. The contents
116
+ of the NOTICE file are for informational purposes only and
117
+ do not modify the License. You may add Your own attribution
118
+ notices within Derivative Works that You distribute, alongside
119
+ or as an addendum to the NOTICE text from the Work, provided
120
+ that such additional attribution notices cannot be construed
121
+ as modifying the License.
122
+
123
+ You may add Your own copyright statement to Your modifications and
124
+ may provide additional or different license terms and conditions
125
+ for use, reproduction, or distribution of Your modifications, or
126
+ for any such Derivative Works as a whole, provided Your use,
127
+ reproduction, and distribution of the Work otherwise complies with
128
+ the conditions stated in this License.
129
+
130
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
131
+ any Contribution intentionally submitted for inclusion in the Work
132
+ by You to the Licensor shall be under the terms and conditions of
133
+ this License, without any additional terms or conditions.
134
+ Notwithstanding the above, nothing herein shall supersede or modify
135
+ the terms of any separate license agreement you may have executed
136
+ with Licensor regarding such Contributions.
137
+
138
+ 6. Trademarks. This License does not grant permission to use the trade
139
+ names, trademarks, service marks, or product names of the Licensor,
140
+ except as required for reasonable and customary use in describing the
141
+ origin of the Work and reproducing the content of the NOTICE file.
142
+
143
+ 7. Disclaimer of Warranty. Unless required by applicable law or
144
+ agreed to in writing, Licensor provides the Work (and each
145
+ Contributor provides its Contributions) on an "AS IS" BASIS,
146
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147
+ implied, including, without limitation, any warranties or conditions
148
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149
+ PARTICULAR PURPOSE. You are solely responsible for determining the
150
+ appropriateness of using or redistributing the Work and assume any
151
+ risks associated with Your exercise of permissions under this License.
152
+
153
+ 8. Limitation of Liability. In no event and under no legal theory,
154
+ whether in tort (including negligence), contract, or otherwise,
155
+ unless required by applicable law (such as deliberate and grossly
156
+ negligent acts) or agreed to in writing, shall any Contributor be
157
+ liable to You for damages, including any direct, indirect, special,
158
+ incidental, or consequential damages of any character arising as a
159
+ result of this License or out of the use or inability to use the
160
+ Work (including but not limited to damages for loss of goodwill,
161
+ work stoppage, computer failure or malfunction, or any and all
162
+ other commercial damages or losses), even if such Contributor
163
+ has been advised of the possibility of such damages.
164
+
165
+ 9. Accepting Warranty or Additional Liability. While redistributing
166
+ the Work or Derivative Works thereof, You may choose to offer,
167
+ and charge a fee for, acceptance of support, warranty, indemnity,
168
+ or other liability obligations and/or rights consistent with this
169
+ License. However, in accepting such obligations, You may act only
170
+ on Your own behalf and on Your sole responsibility, not on behalf
171
+ of any other Contributor, and only if You agree to indemnify,
172
+ defend, and hold each Contributor harmless for any liability
173
+ incurred by, or claims asserted against, such Contributor by reason
174
+ of your accepting any such warranty or additional liability.
175
+
176
+ END OF TERMS AND CONDITIONS
177
+
178
+ APPENDIX: How to apply the Apache License to your work.
179
+
180
+ To apply the Apache License to your work, attach the following
181
+ boilerplate notice, with the fields enclosed by brackets "[]"
182
+ replaced with your own identifying information. (Don't include
183
+ the brackets!) The text should be enclosed in the appropriate
184
+ comment syntax for the file format. We also recommend that a
185
+ file or class name and description of purpose be included on the
186
+ same "printed page" as the copyright notice for easier
187
+ identification within third-party archives.
188
+
189
+ Copyright [yyyy] [name of copyright owner]
190
+
191
+ Licensed under the Apache License, Version 2.0 (the "License");
192
+ you may not use this file except in compliance with the License.
193
+ You may obtain a copy of the License at
194
+
195
+ http://www.apache.org/licenses/LICENSE-2.0
196
+
197
+ Unless required by applicable law or agreed to in writing, software
198
+ distributed under the License is distributed on an "AS IS" BASIS,
199
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200
+ See the License for the specific language governing permissions and
201
+ limitations under the License.
README.md CHANGED
@@ -1,12 +1,310 @@
1
  ---
2
- title: Omni Bot
3
- emoji: 🌍
4
- colorFrom: pink
5
- colorTo: indigo
6
  sdk: gradio
7
- sdk_version: 3.47.1
8
- app_file: app.py
9
- pinned: false
10
  ---
 
11
 
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
1
  ---
2
+ title: omni_bot
3
+ app_file: omni_ui.py
 
 
4
  sdk: gradio
5
+ sdk_version: 3.33.1
 
 
6
  ---
7
+ ![Swarming banner icon](images/swarmsbanner.png)
8
 
9
+ <div align="center">
10
+
11
+ Swarms is a modular framework that enables reliable and useful multi-agent collaboration at scale to automate real-world tasks.
12
+
13
+
14
+ [![GitHub issues](https://img.shields.io/github/issues/kyegomez/swarms)](https://github.com/kyegomez/swarms/issues) [![GitHub forks](https://img.shields.io/github/forks/kyegomez/swarms)](https://github.com/kyegomez/swarms/network) [![GitHub stars](https://img.shields.io/github/stars/kyegomez/swarms)](https://github.com/kyegomez/swarms/stargazers) [![GitHub license](https://img.shields.io/github/license/kyegomez/swarms)](https://github.com/kyegomez/swarms/blob/main/LICENSE)[![GitHub star chart](https://img.shields.io/github/stars/kyegomez/swarms?style=social)](https://star-history.com/#kyegomez/swarms)[![Dependency Status](https://img.shields.io/librariesio/github/kyegomez/swarms)](https://libraries.io/github/kyegomez/swarms) [![Downloads](https://static.pepy.tech/badge/swarms/month)](https://pepy.tech/project/swarms)
15
+
16
+
17
+ ### Share on Social Media
18
+
19
+ [![Join the Agora discord](https://img.shields.io/discord/1110910277110743103?label=Discord&logo=discord&logoColor=white&style=plastic&color=d7b023)![Share on Twitter](https://img.shields.io/twitter/url/https/twitter.com/cloudposse.svg?style=social&label=Share%20%40kyegomez/swarms)](https://twitter.com/intent/tweet?text=Check%20out%20this%20amazing%20AI%20project:%20&url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms) [![Share on Facebook](https://img.shields.io/badge/Share-%20facebook-blue)](https://www.facebook.com/sharer/sharer.php?u=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms) [![Share on LinkedIn](https://img.shields.io/badge/Share-%20linkedin-blue)](https://www.linkedin.com/shareArticle?mini=true&url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms&title=&summary=&source=)
20
+
21
+ [![Share on Reddit](https://img.shields.io/badge/-Share%20on%20Reddit-orange)](https://www.reddit.com/submit?url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms&title=Swarms%20-%20the%20future%20of%20AI) [![Share on Hacker News](https://img.shields.io/badge/-Share%20on%20Hacker%20News-orange)](https://news.ycombinator.com/submitlink?u=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms&t=Swarms%20-%20the%20future%20of%20AI) [![Share on Pinterest](https://img.shields.io/badge/-Share%20on%20Pinterest-red)](https://pinterest.com/pin/create/button/?url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms&media=https%3A%2F%2Fexample.com%2Fimage.jpg&description=Swarms%20-%20the%20future%20of%20AI) [![Share on WhatsApp](https://img.shields.io/badge/-Share%20on%20WhatsApp-green)](https://api.whatsapp.com/send?text=Check%20out%20Swarms%20-%20the%20future%20of%20AI%20%23swarms%20%23AI%0A%0Ahttps%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms)
22
+
23
+ </div>
24
+
25
+
26
+ ## Purpose
27
+ At Swarms, we're transforming the landscape of AI from siloed AI agents to a unified 'swarm' of intelligence. Through relentless iteration and the power of collective insight from our 1500+ Agora researchers, we're developing a groundbreaking framework for AI collaboration. Our mission is to catalyze a paradigm shift, advancing Humanity with the power of unified autonomous AI agent swarms.
28
+
29
+ -----
30
+
31
+ # 🤝 Schedule a 1-on-1 Session
32
+ Book a [1-on-1 Session with Kye](https://calendly.com/apacai/agora), the Creator, to discuss any issues, provide feedback, or explore how we can improve Swarms for you.
33
+
34
+
35
+ ## Hiring
36
+ We're hiring: engineers, researchers, interns, and sales professionals to work on democratizing swarms. Email your story to `kye@apac.ai`.
37
+
38
+ ----------
39
+
40
+ ## Installation
41
+ * `pip3 install swarms`
42
+
43
+ ---
44
+
45
+ ## Usage
46
+ We have a small gallery of examples to run below; [for more, check out the docs to build your own agents and swarms!](https://docs.apac.ai)
47
+
48
+ ### `MultiAgentDebate`
49
+
50
+ - `MultiAgentDebate` is a simple class that enables multi-agent collaboration.
51
+
52
+ ```python
53
+ from swarms.workers import Worker
54
+ from swarms.swarms import MultiAgentDebate, select_speaker
55
+ from langchain.llms import OpenAIChat
56
+
57
+ llm = OpenAIChat(
58
+ model_name='gpt-4',
59
+ openai_api_key="api-key",
60
+ temperature=0.5
61
+ )
62
+
63
+ node = Worker(
64
+ llm=llm,
65
+ ai_name="Optimus Prime",
66
+ ai_role="Worker in a swarm",
67
+ external_tools = None,
68
+ human_in_the_loop = False,
69
+ temperature = 0.5,
70
+ )
71
+
72
+ node2 = Worker(
73
+ llm=llm,
74
+ ai_name="Bumble Bee",
75
+ ai_role="Worker in a swarm",
76
+ external_tools = None,
77
+ human_in_the_loop = False,
78
+ temperature = 0.5,
79
+ )
80
+
81
+ node3 = Worker(
82
+ llm=llm,
83
+ ai_name="Megatron",
84
+ ai_role="Worker in a swarm",
85
+ external_tools = None,
86
+ human_in_the_loop = False,
87
+ temperature = 0.5,
88
+ )
89
+
90
+ agents = [
91
+ node,
92
+ node2,
93
+ node3
94
+ ]
95
+
96
+ # Initialize multi-agent debate with the selection function
97
+ debate = MultiAgentDebate(agents, select_speaker)
98
+
99
+ # Run task
100
+ task = "What were the winning boston marathon times for the past 5 years (ending in 2022)? Generate a table of the year, name, country of origin, and times."
101
+ results = debate.run(task, max_iters=4)
102
+
103
+ # Print results
104
+ for result in results:
105
+ print(f"Agent {result['agent']} responded: {result['response']}")
106
+ ```
107
+
108
+ ----
109
+
110
+ ### `Worker`
111
+ - The `Worker` is a fully feature-complete agent with an LLM, tools, and a vector store for long-term memory!
112
+
113
+ ```python
114
+ from langchain.chat_models import ChatOpenAI
115
+ from swarms import Worker
116
+
117
+ llm = ChatOpenAI(
118
+ model_name='gpt-4',
119
+ openai_api_key="api-key",
120
+ temperature=0.5
121
+ )
122
+
123
+ node = Worker(
124
+ llm=llm,
125
+ ai_name="Optimus Prime",
126
+ ai_role="Worker in a swarm",
127
+ external_tools = None,
128
+ human_in_the_loop = False,
129
+ temperature = 0.5,
130
+ )
131
+
132
+ task = "What were the winning boston marathon times for the past 5 years (ending in 2022)? Generate a table of the year, name, country of origin, and times."
133
+ response = node.run(task)
134
+ print(response)
135
+
136
+ ```
137
+
138
+ ------
139
+
140
+ ### `OmniModalAgent`
141
+ - OmniModal Agent is an LLM agent with access to 10+ multi-modal encoders and diffusers! It can generate images, videos, speech, music, and much more. Get started with:
142
+
143
+ ```python
144
+ from langchain.llms import OpenAIChat
145
+ from swarms.agents import OmniModalAgent
146
+
147
+
148
+ llm = OpenAIChat(model_name="gpt-4")
149
+
150
+ agent = OmniModalAgent(llm)
151
+
152
+ agent.run("Create a video of a swarm of fish")
153
+
154
+ ```
155
+
156
+ - OmniModal Agent has a UI in the repo root; launch it with `python3 omni_ui.py`
157
+
158
+ ---
159
+
160
+ # Documentation
161
+ For documentation, see [swarms.apac.ai](https://swarms.apac.ai)
162
+
163
+ **NOTE: We need help building the documentation**
164
+
165
+ -----
166
+
167
+ # Docker Setup
168
+ The Dockerfiles are located in the `infra/Docker` folder; [click here and navigate there in your environment](/infra/Docker)
169
+
170
+ * Build the Docker image
171
+
172
+ * You can build the Docker image using the provided Dockerfile. Navigate to the infra/Docker directory where the Dockerfiles are located.
173
+
174
+ * For the CPU version, use:
175
+
176
+ ```bash
177
+ docker build -t swarms-api:latest -f Dockerfile.cpu .
178
+ ```
179
+ * For the GPU version, use:
180
+
181
+ ```bash
182
+ docker build -t swarms-api:gpu -f Dockerfile.gpu .
183
+ ```
184
+ ### Run the Docker container
185
+
186
+ After building the Docker image, you can run the Swarms API in a Docker container. Replace your_redis_host and your_redis_port with your actual Redis host and port.
187
+
188
+ For the CPU version:
189
+
190
+ ```bash
191
+ docker run -p 8000:8000 -e REDIS_HOST=your_redis_host -e REDIS_PORT=your_redis_port swarms-api:latest
192
+ ```
193
+
194
+ For the GPU version:
195
+ ```bash
196
+ docker run --gpus all -p 8000:8000 -e REDIS_HOST=your_redis_host -e REDIS_PORT=your_redis_port swarms-api:gpu
197
+ ```
198
+
199
+ ### Access the Swarms API
200
+
201
+ * The Swarms API will be accessible at http://localhost:8000. You can use tools like curl or Postman to send requests to the API.
202
+
203
+ Here's an example curl command to send a POST request to the /chat endpoint:
204
+
205
+ ```bash
206
+ curl -X POST -H "Content-Type: application/json" -d '{"api_key": "your_openai_api_key", "objective": "your_objective"}' http://localhost:8000/chat
207
+ ```
208
+ Replace your_openai_api_key and your_objective with your actual OpenAI API key and objective.
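The same request can be made from Python with only the standard library; a sketch mirroring the curl command above (the request fields are taken from that example, and the endpoint assumes a locally running API):

```python
import json
import urllib.request


def build_chat_request(api_key: str, objective: str,
                       base_url: str = "http://localhost:8000") -> urllib.request.Request:
    """Build the POST request that mirrors the curl example above."""
    payload = json.dumps({"api_key": api_key, "objective": objective}).encode()
    return urllib.request.Request(
        f"{base_url}/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
    )


def chat(api_key: str, objective: str) -> dict:
    """Send the request and decode the JSON response (requires a running API)."""
    with urllib.request.urlopen(build_chat_request(api_key, objective)) as resp:
        return json.loads(resp.read())
```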
209
+
210
+ ----
211
+
212
+
213
+ # ✨ Features
214
+ * Easy-to-use base LLMs: `OpenAI`, `Palm`, `Anthropic`, `HuggingFace`
215
+ * Enterprise Grade, Production Ready with robust Error Handling
216
+ * Multi-Modality Native with Multi-Modal LLMs as tools
217
+ * Infinite Memory Processing: Store endless sequences of Multi-Modal data, including text, images, videos, and audio
218
+ * Usability: Extreme emphasis on usability; the code is kept at its theoretical minimum of complexity
219
+ * Reliability: Outputs that accomplish tasks and activities you wish to execute.
220
+ * Fluidity: A seamless all-around experience to build production grade workflows
221
+ * Speed: Lower the time to automate tasks by 90%.
222
+ * Simplicity: Swarms is extremely simple to use, if not the simplest agent framework of all time
223
+ * Powerful: Swarms is capable of building entire software apps, performing large-scale data analysis, and handling chaotic situations
224
+
225
+
226
+ -----
227
+
228
+ ## Contribute
229
+ We're always looking for contributors to help us improve and expand this project. If you're interested, please check out our [Contributing Guidelines](CONTRIBUTING.md).
230
+
231
+ Thank you for being a part of our project!
232
+
233
+ ---
234
+ # Roadmap
235
+
236
+ Please check out our [Roadmap](DOCS/ROADMAP.md) and consider contributing to make the dream of Swarms real and to advance humanity.
237
+
238
+ ## Optimization Priorities
239
+
240
+ 1. **Reliability**: Increase the reliability of the swarm - obtaining the desired output with a basic and un-detailed input.
241
+
242
+ 2. **Speed**: Reduce the time it takes for the swarm to accomplish tasks by improving the communication layer, critiquing, and self-alignment with meta prompting.
243
+
244
+ 3. **Scalability**: Ensure that the system is asynchronous, concurrent, and self-healing to support scalability.
245
+
246
+ Our goal is to continuously improve Swarms by following this roadmap, while also being adaptable to new needs and opportunities as they arise.
247
+
248
+ ---
249
+
250
+ # Bounty Program
251
+
252
+ Our bounty program is an exciting opportunity for contributors to help us build the future of Swarms. By participating, you can earn rewards while contributing to a project that aims to revolutionize digital activity.
253
+
254
+ Here's how it works:
255
+
256
+ 1. **Check out our Roadmap**: We've shared our roadmap detailing our short and long-term goals. These are the areas where we're seeking contributions.
257
+
258
+ 2. **Pick a Task**: Choose a task from the roadmap that aligns with your skills and interests. If you're unsure, you can reach out to our team for guidance.
259
+
260
+ 3. **Get to Work**: Once you've chosen a task, start working on it. Remember, quality is key. We're looking for contributions that truly make a difference.
261
+
262
+ 4. **Submit your Contribution**: Once your work is complete, submit it for review. We'll evaluate your contribution based on its quality, relevance, and the value it brings to Swarms.
263
+
264
+ 5. **Earn Rewards**: If your contribution is approved, you'll earn a bounty. The amount of the bounty depends on the complexity of the task, the quality of your work, and the value it brings to Swarms.
265
+
266
+ ---
267
+
268
+ ## The Plan
269
+
270
+ ### Phase 1: Building the Foundation
271
+ In the first phase, our focus is on building the basic infrastructure of Swarms. This includes developing key components like the Swarms class, integrating essential tools, and establishing task completion and evaluation logic. We'll also start developing our testing and evaluation framework during this phase. If you're interested in foundational work and have a knack for building robust, scalable systems, this phase is for you.
272
+
273
+ ### Phase 2: Optimizing the System
274
+ In the second phase, we'll focus on optimizing Swarms by integrating more advanced features, improving the system's efficiency, and refining our testing and evaluation framework. This phase involves more complex tasks, so if you enjoy tackling challenging problems and contributing to the development of innovative features, this is the phase for you.
275
+
276
+ ### Phase 3: Towards Super-Intelligence
277
+ The third phase of our bounty program is the most exciting - this is where we aim to achieve super-intelligence. In this phase, we'll be working on improving the swarm's capabilities, expanding its skills, and fine-tuning the system based on real-world testing and feedback. If you're excited about the future of AI and want to contribute to a project that could potentially transform the digital world, this is the phase for you.
278
+
279
+ Remember, our roadmap is a guide, and we encourage you to bring your own ideas and creativity to the table. We believe that every contribution, no matter how small, can make a difference. So join us on this exciting journey and help us create the future of Swarms.
280
+
281
+ <!-- **To participate in our bounty program, visit the [Swarms Bounty Program Page](https://swarms.ai/bounty).** Let's build the future together! -->
282
+
283
+
284
+
285
+ ---
286
+
287
+ # Ecosystem
288
+
289
+ * [The-Compiler, compile natural language into serene, reliable, and secure programs](https://github.com/kyegomez/the-compiler)
290
+
291
+ * [The Replicator, an autonomous swarm that conducts Multi-Modal AI research by creating new underlying mathematical operations and models](https://github.com/kyegomez/The-Replicator)
292
+
293
+ * Make a swarm that checks arXiv for papers -> checks if there is a GitHub link -> then implements and verifies them
294
+
295
+ * [SwarmLogic, where a swarm is your API, database, and backend!](https://github.com/kyegomez/SwarmLogic)
296
+
297
+ ---
298
+
299
+ # Demos
300
+
301
+ ![Swarms Demo](images/Screenshot_48.png)
302
+
303
+ ## Swarm Video Demo (click for more)
304
+
305
+ [![Watch the swarm video](https://img.youtube.com/vi/Br62cDMYXgc/maxresdefault.jpg)](https://youtu.be/Br62cDMYXgc)
306
+
307
+ ---
308
+
309
+ # Contact
310
+ For enterprise and production-ready deployments, let us learn more about you and your story: [book a call with us here](https://www.apac.ai/Setup-Call)
api/__init__.py ADDED
File without changes
api/app.py ADDED
@@ -0,0 +1,47 @@
1
+ import logging
2
+ import os
3
+ from fastapi import FastAPI, HTTPException, Depends
4
+ from fastapi_cache.decorator import cache
5
+ from fastapi_cache.coder import JsonCoder
6
+
7
+ from fastapi_cache import FastAPICache
8
+ from fastapi_cache.backends.redis import RedisBackend
9
+ from aioredis import Redis
10
+
11
+ from pydantic import BaseModel
12
+ from swarms.swarms.swarms import swarm
13
+ from fastapi_limiter import FastAPILimiter
14
+
15
+ from fastapi_limiter.depends import RateLimiter
16
+ from dotenv import load_dotenv
17
+
18
+ load_dotenv()
19
+
20
+ class SwarmInput(BaseModel):
21
+ api_key: str
22
+ objective: str
23
+
24
+ app = FastAPI()
25
+
26
+ @app.on_event("startup")
27
+ async def startup():
28
+ redis_host = os.getenv("REDIS_HOST", "localhost")
29
+ redis_port = int(os.getenv("REDIS_PORT", 6379))
30
+ redis = Redis.from_url(f"redis://{redis_host}:{redis_port}")  # Redis.create does not exist; from_url is the aioredis 2.x constructor
31
+ FastAPICache.init(RedisBackend(redis), prefix="fastapi-cache", coder=JsonCoder())
32
+ await FastAPILimiter.init(f"redis://{redis_host}:{redis_port}")
33
+
34
+ @app.post("/chat", dependencies=[Depends(RateLimiter(times=2, minutes=1))])
35
+ @cache(expire=60) # Cache results for 1 minute
36
+ async def run(swarm_input: SwarmInput):
37
+ try:
38
+ results = swarm(swarm_input.api_key, swarm_input.objective)
39
+ if not results:
40
+ raise HTTPException(status_code=500, detail="Failed to run swarms")
41
+ return {"results": results}
42
+ except ValueError as ve:
43
+ logging.error("A ValueError occurred", exc_info=True)
44
+ raise HTTPException(status_code=400, detail=str(ve))
45
+ except Exception:
46
+ logging.error("An error occurred", exc_info=True)
47
+ raise HTTPException(status_code=500, detail="An unexpected error occurred")
api/olds/container.py ADDED
@@ -0,0 +1,62 @@
1
+ import os
2
+ from pathlib import Path
3
+ from typing import Dict, List
4
+
5
+ from fastapi.templating import Jinja2Templates
6
+
7
+ from swarms.agents.utils.agent_creator import AgentManager
8
+ from swarms.utils.main import BaseHandler, FileHandler, FileType
9
+ from swarms.tools.main import ExitConversation, RequestsGet, CodeEditor, Terminal
10
+
11
+ from swarms.utils.main import CsvToDataframe
12
+
13
+ from swarms.tools.main import BaseToolSet
14
+
15
+ from swarms.utils.main import StaticUploader
16
+
17
+ BASE_DIR = Path(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
18
+ os.chdir(BASE_DIR / os.environ["PLAYGROUND_DIR"])
19
+
20
+ #
21
+ toolsets: List[BaseToolSet] = [
22
+ Terminal(),
23
+ CodeEditor(),
24
+ RequestsGet(),
25
+ ExitConversation(),
26
+ ]
27
+ handlers: Dict[FileType, BaseHandler] = {FileType.DATAFRAME: CsvToDataframe()}
28
+
29
+ if os.environ.get("USE_GPU", "").lower() in ("1", "true"):  # a plain truthiness check would treat "False" as enabled
30
+ import torch
31
+
32
+ # from core.handlers.image import ImageCaptioning
33
+ from swarms.tools.main import ImageCaptioning
34
+ from swarms.tools.main import (
35
+ ImageEditing,
36
+ InstructPix2Pix,
37
+ Text2Image,
38
+ VisualQuestionAnswering,
39
+ )
40
+
41
+ if torch.cuda.is_available():
42
+ toolsets.extend(
43
+ [
44
+ Text2Image("cuda"),
45
+ ImageEditing("cuda"),
46
+ InstructPix2Pix("cuda"),
47
+ VisualQuestionAnswering("cuda"),
48
+ ]
49
+ )
50
+ handlers[FileType.IMAGE] = ImageCaptioning("cuda")
51
+
52
+ agent_manager = AgentManager.create(toolsets=toolsets)
53
+
54
+ file_handler = FileHandler(handlers=handlers, path=BASE_DIR)
55
+
56
+ templates = Jinja2Templates(directory=BASE_DIR / "api" / "templates")
57
+
58
+ uploader = StaticUploader.from_settings(
59
+ path=BASE_DIR / "static", endpoint="static"
60
+ )
61
+
62
+ reload_dirs = [BASE_DIR / "core", BASE_DIR / "api"]
api/olds/main.py ADDED
@@ -0,0 +1,130 @@
1
+ import os
2
+
3
+ import re
4
+ from multiprocessing import Process
5
+ from tempfile import NamedTemporaryFile
6
+ from typing import List, TypedDict
7
+
8
+ import uvicorn
9
+ from fastapi import FastAPI, Request, UploadFile
10
+ from fastapi.responses import HTMLResponse
11
+ from fastapi.staticfiles import StaticFiles
12
+ from pydantic import BaseModel
13
+
14
+ from api.olds.container import agent_manager, file_handler, reload_dirs, templates, uploader
15
+ from api.olds.worker import get_task_result, start_worker, task_execute
16
+ # from env import settings
17
+
18
+ app = FastAPI()
19
+
20
+ app.mount("/static", StaticFiles(directory=uploader.path), name="static")
21
+
22
+
23
+ class ExecuteRequest(BaseModel):
24
+ session: str
25
+ prompt: str
26
+ files: List[str]
27
+
28
+
29
+ class ExecuteResponse(TypedDict):
30
+ answer: str
31
+ files: List[str]
32
+
33
+
34
+ @app.get("/", response_class=HTMLResponse)
35
+ async def index(request: Request):
36
+ return templates.TemplateResponse("index.html", {"request": request})
37
+
38
+
39
+ @app.get("/dashboard", response_class=HTMLResponse)
40
+ async def dashboard(request: Request):
41
+ return templates.TemplateResponse("dashboard.html", {"request": request})
42
+
43
+
44
+ @app.post("/upload")
45
+ async def create_upload_file(files: List[UploadFile]):
46
+ urls = []
47
+ for file in files:
48
+ extension = "." + file.filename.split(".")[-1]
49
+ with NamedTemporaryFile(suffix=extension) as tmp_file:
50
+ tmp_file.write(file.file.read())
51
+ tmp_file.flush()
52
+ urls.append(uploader.upload(tmp_file.name))
53
+ return {"urls": urls}
54
+
55
+
56
+ @app.post("/api/execute")
57
+ async def execute(request: ExecuteRequest) -> ExecuteResponse:
58
+ query = request.prompt
59
+ files = request.files
60
+ session = request.session
61
+
62
+ executor = agent_manager.create_executor(session)
63
+
64
+ promptedQuery = "\n".join([file_handler.handle(file) for file in files])
65
+ promptedQuery += query
66
+
67
+ try:
68
+ res = executor({"input": promptedQuery})
69
+ except Exception as e:
70
+ return {"answer": str(e), "files": []}
71
+
72
+ files = re.findall(r"\[file://\S*\]", res["output"])
73
+ files = [file[1:-1].split("file://")[1] for file in files]
74
+
75
+ return {
76
+ "answer": res["output"],
77
+ "files": [uploader.upload(file) for file in files],
78
+ }
79
+
80
+
81
+ @app.post("/api/execute/async")
82
+ async def execute_async(request: ExecuteRequest):
83
+ query = request.prompt
84
+ files = request.files
85
+ session = request.session
86
+
87
+ promptedQuery = "\n".join([file_handler.handle(file) for file in files])
88
+ promptedQuery += query
89
+
90
+ execution = task_execute.delay(session, promptedQuery)
91
+ return {"id": execution.id}
92
+
93
+
94
+ @app.get("/api/execute/async/{execution_id}")
95
+ async def execute_async_result(execution_id: str):  # renamed: reusing `execute_async` would shadow the POST handler above
96
+ execution = get_task_result(execution_id)
97
+
98
+ result = {}
99
+ if execution.status == "SUCCESS" and execution.result:
100
+ output = execution.result.get("output", "")
101
+ files = re.findall(r"\[file://\S*\]", output)
102
+ files = [file[1:-1].split("file://")[1] for file in files]
103
+ result = {
104
+ "answer": output,
105
+ "files": [uploader.upload(file) for file in files],
106
+ }
107
+
108
+ return {
109
+ "status": execution.status,
110
+ "info": execution.info,
111
+ "result": result,
112
+ }
113
+
114
+
115
+ def serve():
116
+ p = Process(target=start_worker, args=[])
117
+ p.start()
118
+ uvicorn.run("api.main:app", host="0.0.0.0", port=int(os.environ["EVAL_PORT"]))  # uvicorn expects an int port
119
+
120
+
121
+ def dev():
122
+ p = Process(target=start_worker, args=[])
123
+ p.start()
124
+ uvicorn.run(
125
+ "api.main:app",
126
+ host="0.0.0.0",
127
+ port=int(os.environ["EVAL_PORT"]),
128
+ reload=True,
129
+ reload_dirs=reload_dirs,
130
+ )
api/olds/worker.py ADDED
@@ -0,0 +1,44 @@
1
+ import os
2
+
3
+ from celery import Celery
4
+ from celery.result import AsyncResult
5
+
6
+ from api.olds.container import agent_manager
7
+
8
+
9
+ celery_app = Celery(__name__)
10
+ celery_app.conf.broker_url = os.environ["CELERY_BROKER_URL"]
11
+ celery_app.conf.result_backend = os.environ["CELERY_BROKER_URL"]
12
+ celery_app.conf.update(
13
+ task_track_started=True,
14
+ task_serializer="json",
15
+ accept_content=["json"], # Ignore other content
16
+ result_serializer="json",
17
+ enable_utc=True,
18
+ )
19
+
20
+
21
+ @celery_app.task(name="task_execute", bind=True)
22
+ def task_execute(self, session: str, prompt: str):
23
+ executor = agent_manager.create_executor(session, self)
24
+ response = executor({"input": prompt})
25
+ result = {"output": response["output"]}
26
+
27
+ previous = AsyncResult(self.request.id)
28
+ if previous and previous.info:
29
+ result.update(previous.info)
30
+
31
+ return result
32
+
33
+
34
+ def get_task_result(task_id):
35
+ return AsyncResult(task_id)
36
+
37
+
38
+ def start_worker():
39
+ celery_app.worker_main(
40
+ [
41
+ "worker",
42
+ "--loglevel=INFO",
43
+ ]
44
+ )
apps/discord.py ADDED
@@ -0,0 +1,38 @@
1
+ from discord.ext import commands
2
+ from langchain.llms import OpenAIChat
3
+ from swarms.agents import OmniModalAgent
4
+
5
+ # Setup
6
+ TOKEN = 'YOUR_DISCORD_BOT_TOKEN'
7
+ bot = commands.Bot(command_prefix='!')  # discord.py 2.x also requires an `intents=` argument
8
+
9
+ # Initialize the OmniModalAgent
10
+ llm = OpenAIChat(model_name="gpt-4")
11
+ agent = OmniModalAgent(llm)
12
+
13
+ @bot.event
14
+ async def on_ready():
15
+ print(f'We have logged in as {bot.user}')
16
+
17
+ @bot.command()
18
+ async def greet(ctx):
19
+ """Greets the user."""
20
+ await ctx.send(f'Hello, {ctx.author.name}!')
21
+
22
+ @bot.command()
23
+ async def run(ctx, *, description: str):
24
+ """Generates a video based on the given description."""
25
+ response = agent.run(description) # Assuming the response provides information or a link to the generated video
26
+ await ctx.send(response)
27
+
28
+ @bot.command()
29
+ async def help_me(ctx):
30
+ """Provides a list of commands and their descriptions."""
31
+ help_text = """
32
+ - `!greet`: Greets you.
33
+ - `!run [description]`: Generates a video based on the given description.
34
+ - `!help_me`: Provides this list of commands and their descriptions.
35
+ """
36
+ await ctx.send(help_text)
37
+
38
+ bot.run(TOKEN)
docs/applications/customer_support.md ADDED
@@ -0,0 +1,42 @@
1
+ ## **Applications of Swarms: Revolutionizing Customer Support**
2
+
3
+ ---
4
+
5
+ **Introduction**:
6
+ In today's fast-paced digital world, responsive and efficient customer support is a linchpin for business success. The introduction of AI-driven swarms in the customer support domain can transform the way businesses interact with and assist their customers. By leveraging the combined power of multiple AI agents working in concert, businesses can achieve unprecedented levels of efficiency, customer satisfaction, and operational cost savings.
7
+
8
+ ---
9
+
10
+ ### **The Benefits of Using Swarms for Customer Support:**
11
+
12
+ 1. **24/7 Availability**: Swarms never sleep. Customers receive instantaneous support at any hour, ensuring constant satisfaction and loyalty.
13
+
14
+ 2. **Infinite Scalability**: Whether it's ten inquiries or ten thousand, swarms can handle fluctuating volumes with ease, eliminating the need for vast human teams and minimizing response times.
15
+
16
+ 3. **Adaptive Intelligence**: Swarms learn collectively, meaning that a solution found for one customer can be instantly applied to benefit all. This leads to constantly improving support experiences, evolving with every interaction.
17
+
18
+ ---
19
+
20
+ ### **Features - Reinventing Customer Support**:
21
+
22
+ - **AI Inbox Monitor**: Continuously scans email inboxes, identifying and categorizing support requests for swift responses.
23
+
24
+ - **Intelligent Debugging**: Proactively helps customers by diagnosing and troubleshooting underlying issues.
25
+
26
+ - **Automated Refunds & Coupons**: Seamless integration with payment systems like Stripe allows for instant issuance of refunds or coupons if a problem remains unresolved.
27
+
28
+ - **Full System Integration**: Holistically connects with CRM, email systems, and payment portals, ensuring a cohesive and unified support experience.
29
+
30
+ - **Conversational Excellence**: With advanced LLMs (Language Model Transformers), the swarm agents can engage in natural, human-like conversations, enhancing customer comfort and trust.
31
+
32
+ - **Rule-based Operation**: By working with rule engines, swarms ensure that all actions adhere to company guidelines, ensuring consistent, error-free support.
33
+
34
+ - **Turing Test Ready**: Crafted to meet and exceed the Turing Test standards, ensuring that every customer interaction feels genuine and personal.
35
+
36
+ ---
37
+
38
+ **Conclusion**:
39
+ Swarms are not just another technological advancement; they represent the future of customer support. Their ability to provide round-the-clock, scalable, and continuously improving support can redefine customer experience standards. By adopting swarms, businesses can stay ahead of the curve, ensuring unparalleled customer loyalty and satisfaction.
40
+
41
+ **Experience the future of customer support. Dive into the swarm revolution.**
42
+
docs/applications/enterprise.md ADDED
File without changes
docs/applications/marketing_agencies.md ADDED
@@ -0,0 +1,64 @@
1
+ ## **Swarms in Marketing Agencies: A New Era of Automated Media Strategy**
2
+
3
+ ---
4
+
5
+ ### **Introduction**:
6
+ - Brief background on marketing agencies and their role in driving brand narratives and sales.
7
+ - Current challenges and pain points faced in media planning, placements, and budgeting.
8
+ - Introduction to the transformative potential of swarms in reshaping the marketing industry.
9
+
10
+ ---
11
+
12
+ ### **1. Fundamental Problem: Media Plan Creation**:
13
+ - **Definition**: The challenge of creating an effective media plan that resonates with a target audience and aligns with brand objectives.
14
+
15
+ - **Traditional Solutions and Their Shortcomings**: Manual brainstorming sessions, over-reliance on past strategies, and long turnaround times leading to inefficiency.
16
+
17
+ - **How Swarms Address This Problem**:
18
+ - **Benefit 1**: Automated Media Plan Generation – Swarms ingest branding summaries, objectives, and marketing strategies to generate media plans, eliminating guesswork and human error.
19
+ - **Real-world Application of Swarms**: The automation of media plans based on client briefs, including platform selections, audience targeting, and creative versions.
20
+
21
+ ---
22
+
23
+ ### **2. Fundamental Problem: Media Placements**:
24
+ - **Definition**: The tedious task of determining where ads will be placed, considering demographics, platform specifics, and more.
25
+
26
+ - **Traditional Solutions and Their Shortcomings**: Manual placement leading to possible misalignment with target audiences and brand objectives.
27
+
28
+ - **How Swarms Address This Problem**:
29
+ - **Benefit 2**: Precision Media Placements – Swarms analyze audience data and demographics to suggest the best placements, optimizing for conversions and brand reach.
30
+ - **Real-world Application of Swarms**: Automated selection of ad placements across platforms like Facebook, Google, and DSPs based on media plans.
31
+
32
+ ---
33
+
34
+ ### **3. Fundamental Problem: Budgeting**:
35
+ - **Definition**: Efficiently allocating and managing advertising budgets across multiple campaigns, platforms, and timeframes.
36
+
37
+ - **Traditional Solutions and Their Shortcomings**: Manual budgeting using tools like Excel, prone to errors, and inefficient shifts in allocations.
38
+
39
+ - **How Swarms Address This Problem**:
40
+ - **Benefit 3**: Intelligent Media Budgeting – Swarms enable dynamic budget allocation based on performance analytics, maximizing ROI.
41
+ - **Real-world Application of Swarms**: Real-time adjustments in budget allocations based on campaign performance, eliminating long waiting periods and manual recalculations.
42
+
43
+ ---
44
+
45
+ ### **Features**:
46
+ 1. Automated Media Plan Generator: Input your objectives and receive a comprehensive media plan.
47
+ 2. Precision Media Placement Tool: Ensure your ads appear in the right places to the right people.
48
+ 3. Dynamic Budget Allocation: Maximize ROI with real-time budget adjustments.
49
+ 4. Integration with Common Tools: Seamless integration with tools like Excel and APIs for exporting placements.
50
+ 5. Conversational Platform: A suite of tools built for modern marketing agencies, bringing all tasks under one umbrella.
51
+
52
+ ---
53
+
54
+ ### **Testimonials**:
55
+ - "Swarms have completely revolutionized our media planning process. What used to take weeks now takes mere hours." - *Senior Media Strategist, Top-tier Marketing Agency*
56
+ - "The precision with which we can place ads now is unprecedented. It's like having a crystal ball for marketing!" - *Campaign Manager, Global Advertising Firm*
57
+
58
+ ---
59
+
60
+ ### **Conclusion**:
61
+ - Reiterate the immense potential of swarms in revolutionizing media planning, placements, and budgeting for marketing agencies.
62
+ - Call to action: For marketing agencies looking to step into the future and leave manual inefficiencies behind, swarms are the answer.
63
+
64
+ ---
docs/architecture.md ADDED
@@ -0,0 +1,358 @@
1
+ # Architecture
2
+
3
+ ## **1. Introduction**
4
+
5
+ In today's rapidly evolving digital world, harnessing the collaborative power of multiple computational agents is more crucial than ever. 'Swarms' represents a bold stride in this direction—a scalable and dynamic framework designed to enable swarms of agents to function in harmony and tackle complex tasks. This document serves as a comprehensive guide, elucidating the underlying architecture and strategies pivotal to realizing the Swarms vision.
6
+
7
+ ---
8
+
9
+ ## **2. The Vision**
10
+
11
+ At its heart, the Swarms framework seeks to emulate the collaborative efficiency witnessed in natural systems, like ant colonies or bird flocks. These entities, though individually simple, achieve remarkable outcomes through collaboration. Similarly, Swarms will unleash the collective potential of numerous agents, operating cohesively.
12
+
13
+ ---
14
+
15
+ ## **3. Architecture Overview**
16
+
17
+ ### **3.1 Agent Level**
18
+ The base level that serves as the building block for all further complexity.
19
+
20
+ #### Mechanics:
21
+ * **Model**: At its core, each agent harnesses a powerful model like OpenAI's GPT.
22
+ * **Vectorstore**: A memory structure allowing agents to store and retrieve information.
23
+ * **Tools**: Utilities and functionalities that aid in the agent's task execution.
24
+
25
+ #### Interaction:
26
+ Agents interact with the external world through their model and tools. The Vectorstore aids in retaining knowledge and facilitating inter-agent communication.
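The agent anatomy described above (model + Vectorstore + tools) can be sketched as a small data structure. This is an illustrative sketch only; the class and field names are hypothetical and not the actual Swarms API:

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List


@dataclass
class Agent:
    """Illustrative agent: a model, a memory store, and a set of tools."""
    model: Callable[[str], str]                       # e.g. a call into an LLM
    memory: List[str] = field(default_factory=list)   # stand-in for a Vectorstore
    tools: Dict[str, Callable[..., Any]] = field(default_factory=dict)

    def run(self, task: str) -> str:
        self.memory.append(task)   # retain the task for later retrieval
        return self.model(task)    # delegate execution to the underlying model
```

The key point is composition: the model does the reasoning, while memory and tools extend it into a persistent, capable unit.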
27
+
28
+ ### **3.2 Worker Infrastructure Level**
29
+ Building on the agent foundation, enhancing capability and readiness for swarm integration.
30
+
31
+ #### Mechanics:
32
+ * **Human Input Integration**: Enables agents to accept and understand human-provided instructions.
33
+ * **Unique Identifiers**: Assigns each agent a unique ID to facilitate tracking and communication.
34
+ * **Asynchronous Tools**: Bolsters agents' capability to multitask and interact in real-time.
35
+
36
+ #### Interaction:
37
+ Each worker is an enhanced agent, capable of operating independently or in sync with its peers, allowing for dynamic, scalable operations.
38
+
39
+ ### **3.3 Swarm Level**
40
+ Multiple Worker Nodes orchestrated into a synchronized, collaborative entity.
41
+
42
+ #### Mechanics:
43
+ * **Orchestrator**: The maestro, responsible for directing the swarm, task allocation, and communication.
44
+ * **Scalable Communication Layer**: Facilitates interactions among nodes and between nodes and the orchestrator.
45
+ * **Task Assignment & Completion Protocols**: Structured procedures ensuring tasks are efficiently distributed and concluded.
46
+
47
+ #### Interaction:
48
+ Nodes collaborate under the orchestrator's guidance, ensuring tasks are partitioned appropriately, executed, and results consolidated.
49
+
50
+ ### **3.4 Hivemind Level**
51
+ Envisioned as a 'Swarm of Swarms'. An upper echelon of collaboration.
52
+
53
+ #### Mechanics:
54
+ * **Hivemind Orchestrator**: Oversees multiple swarm orchestrators, ensuring harmony on a grand scale.
55
+ * **Inter-Swarm Communication Protocols**: Dictates how swarms interact, exchange information, and co-execute tasks.
56
+
57
+ #### Interaction:
58
+ Multiple swarms, each a formidable force, combine their prowess under the Hivemind. This level tackles monumental tasks by dividing them among swarms.
59
+
60
+ ---
61
+
62
+ ## **4. Building the Framework: A Task Checklist**
63
+
64
+ ### **4.1 Foundations: Agent Level**
65
+ * Define and standardize agent properties.
66
+ * Integrate desired model (e.g., OpenAI's GPT) with agent.
67
+ * Implement Vectorstore mechanisms: storage, retrieval, and communication protocols.
68
+ * Incorporate essential tools and utilities.
69
+ * Conduct preliminary testing: Ensure agents can execute basic tasks and utilize the Vectorstore.
70
+
71
+ ### **4.2 Enhancements: Worker Infrastructure Level**
72
+ * Interface agents with human input mechanisms.
73
+ * Assign and manage unique identifiers for each worker.
74
+ * Integrate asynchronous capabilities: Ensure real-time response and multitasking.
75
+ * Test worker nodes for both solitary and collaborative tasks.
76
+
77
+ ### **4.3 Cohesion: Swarm Level**
78
+ * Design and develop the orchestrator: Ensure it can manage multiple worker nodes.
79
+ * Establish a scalable and efficient communication layer.
80
+ * Implement task distribution and retrieval protocols.
81
+ * Test swarms for efficiency, scalability, and robustness.
82
+
83
+ ### **4.4 Apex Collaboration: Hivemind Level**
84
+ * Build the Hivemind Orchestrator: Ensure it can oversee multiple swarms.
85
+ * Define inter-swarm communication, prioritization, and task-sharing protocols.
86
+ * Develop mechanisms to balance loads and optimize resource utilization across swarms.
87
+ * Thoroughly test the Hivemind level for macro-task execution.
88
+
89
+ ---
90
+
91
+ ## **5. Integration and Communication Mechanisms**
92
+
93
+ ### **5.1 Vectorstore as the Universal Communication Layer**
94
+ Serving as the memory and communication backbone, the Vectorstore must:
95
+ * Facilitate rapid storage and retrieval of high-dimensional vectors.
96
+ * Enable similarity-based lookups: Crucial for recognizing patterns or finding similar outputs.
97
+ * Scale seamlessly as agent count grows.
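The similarity-based lookup requirement can be illustrated with a toy in-memory store using cosine similarity. This is a teaching sketch under the assumption of plain Python lists as vectors; a production Vectorstore would use an approximate-nearest-neighbor index instead of a linear scan:

```python
import math


class ToyVectorStore:
    """Minimal in-memory vector store: exact cosine-similarity lookup."""

    def __init__(self):
        self._items = []  # list of (key, vector) pairs

    def add(self, key, vector):
        self._items.append((key, vector))

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def most_similar(self, query, k=1):
        """Return the keys of the k stored vectors closest to `query`."""
        scored = sorted(self._items, key=lambda kv: self._cosine(query, kv[1]), reverse=True)
        return [key for key, _ in scored[:k]]
```

The linear scan here is O(n) per query; the "scale seamlessly" requirement is exactly why real deployments swap this for an ANN index while keeping the same add/query interface.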
98
+
99
+ ### **5.2 Orchestrator-Driven Communication**
100
+ * Orchestrators, both at the swarm and hivemind level, should employ adaptive algorithms to optimally distribute tasks.
101
+ * Ensure real-time monitoring of task execution and worker node health.
102
+ * Integrate feedback loops: Allow for dynamic task reassignment in case of node failures or inefficiencies.
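The feedback-loop idea above can be sketched as a tiny task distributor: tasks go to workers round-robin, and a task whose execution fails is re-queued so another worker can retry it. This is illustrative only, with a hypothetical `run(worker, task)` callable standing in for the real orchestrator's dispatch:

```python
from collections import deque


def distribute_with_retry(tasks, workers, run, max_attempts=3):
    """Round-robin task distribution with reassignment on failure.

    `run(worker, task)` is assumed to return a result or raise on failure;
    failed tasks are re-queued until `max_attempts` is exhausted.
    """
    queue = deque(tasks)
    attempts = {task: 0 for task in tasks}
    results = {}
    turn = 0
    while queue:
        task = queue.popleft()
        worker = workers[turn % len(workers)]
        turn += 1
        try:
            results[task] = run(worker, task)
        except Exception:
            attempts[task] += 1
            if attempts[task] < max_attempts:
                queue.append(task)  # feedback loop: another worker retries
    return results
```

The `max_attempts` cap is the simplest guard against livelock when every node is down; a real orchestrator would also track node health and back off.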
103
+
104
+ ---
105
+
106
+ ## **6. Conclusion & Forward Path**
107
+
108
+ The Swarms framework, once realized, will usher in a new era of computational efficiency and collaboration. While the roadmap ahead is intricate, with diligent planning, development, and testing, Swarms will redefine the boundaries of collaborative computing.
109
+
110
+ --------
111
+
112
+
113
+ # Overview
114
+
115
+ ### 1. Model
116
+
117
+ **Overview:**
118
+ The foundational level where a trained model (e.g., OpenAI GPT model) is initialized. It's the base on which further abstraction levels build upon. It provides the core capabilities to perform tasks, answer queries, etc.
119
+
120
+ **Diagram:**
121
+ ```
122
+ [ Model (openai) ]
123
+ ```
124
+
125
+ ### 2. Agent Level
126
+
127
+ **Overview:**
128
+ At the agent level, the raw model is coupled with tools and a vector store, allowing it to be more than just a model. The agent can now remember, use tools, and become a more versatile entity ready for integration into larger systems.
129
+
130
+ **Diagram:**
131
+ ```
132
+ +-----------+
133
+ | Agent |
134
+ | +-------+ |
135
+ | | Model | |
136
+ | +-------+ |
137
+ | +-----------+ |
138
+ | | VectorStore | |
139
+ | +-----------+ |
140
+ | +-------+ |
141
+ | | Tools | |
142
+ | +-------+ |
143
+ +-----------+
144
+ ```
145
+
146
+ ### 3. Worker Infrastructure Level
147
+
148
+ **Overview:**
149
+ The worker infrastructure is a step above individual agents. Here, an agent is paired with additional utilities like human input and other tools, making it a more advanced, responsive unit capable of complex tasks.
150
+
151
+ **Diagram:**
152
+ ```
153
+ +----------------+
154
+ | WorkerNode |
155
+ | +-----------+ |
156
+ | | Agent | |
157
+ | | +-------+ | |
158
+ | | | Model | | |
159
+ | | +-------+ | |
160
+ | | +-------+ | |
161
+ | | | Tools | | |
162
+ | | +-------+ | |
163
+ | +-----------+ |
164
+ | |
165
+ | +-----------+ |
166
+ | |Human Input| |
167
+ | +-----------+ |
168
+ | |
169
+ | +-------+ |
170
+ | | Tools | |
171
+ | +-------+ |
172
+ +----------------+
173
+ ```
174
+
175
+ ### 4. Swarm Level
176
+
177
+ **Overview:**
178
+ At the swarm level, the orchestrator is central. It's responsible for assigning tasks to worker nodes, monitoring their completion, and handling the communication layer (for example, through a vector store or another universal communication mechanism) between worker nodes.
179
+
180
+ **Diagram:**
181
+ ```
182
+ +------------+
183
+ |Orchestrator|
184
+ +------------+
185
+ |
186
+ +---------------------------+
187
+ | |
188
+ | Swarm-level Communication|
189
+ | Layer (e.g. |
190
+ | Vector Store) |
191
+ +---------------------------+
192
+ / | \
193
+ +---------------+ +---------------+ +---------------+
194
+ |WorkerNode 1 | |WorkerNode 2 | |WorkerNode n |
195
+ | | | | | |
196
+ +---------------+ +---------------+ +---------------+
197
+ | Task Assigned | Task Completed | Communication |
198
+ ```
199
+
200
+ ### 5. Hivemind Level
201
+
202
+ **Overview:**
203
+ The Hivemind level is a multi-swarm setup: an upper-layer orchestrator manages multiple swarm-level orchestrators. The Hivemind orchestrator is responsible for broader tasks such as assigning macro-tasks to swarms, handling inter-swarm communication, and ensuring the overall system functions smoothly.
204
+
205
+ **Diagram:**
206
+ ```
207
+ +--------+
208
+ |Hivemind|
209
+ +--------+
210
+ |
211
+ +--------------+
212
+ |Hivemind |
213
+ |Orchestrator |
214
+ +--------------+
215
+ / | \
216
+ +------------+ +------------+ +------------+
217
+ |Orchestrator| |Orchestrator| |Orchestrator|
218
+ +------------+ +------------+ +------------+
219
+ | | |
220
+ +--------------+ +--------------+ +--------------+
221
+ | Swarm-level| | Swarm-level| | Swarm-level|
222
+ |Communication| |Communication| |Communication|
223
+ | Layer | | Layer | | Layer |
224
+ +--------------+ +--------------+ +--------------+
225
+ / \ / \ / \
226
+ +-------+ +-------+ +-------+ +-------+ +-------+
227
+ |Worker | |Worker | |Worker | |Worker | |Worker |
228
+ | Node | | Node | | Node | | Node | | Node |
229
+ +-------+ +-------+ +-------+ +-------+ +-------+
230
+ ```
231
+
232
+ This setup allows the Hivemind level to operate at a grander scale, with the capability to manage hundreds or even thousands of worker nodes across multiple swarms efficiently.
233
+
234
+
235
+
236
+ -------
237
+ # **Swarms Framework Development Strategy Checklist**
238
+
239
+ ## **Introduction**
240
+
241
+ The development of the Swarms framework requires a systematic and granular approach to ensure that each component is robust and that the overall framework is efficient and scalable. This checklist will serve as a guide to building Swarms from the ground up, breaking down tasks into small, manageable pieces.
242
+
243
+ ---
244
+
245
+ ## **1. Agent Level Development**
246
+
247
+ ### **1.1 Model Integration**
248
+ - [ ] Research the most suitable models (e.g., OpenAI's GPT).
249
+ - [ ] Design an API for the agent to call the model.
250
+ - [ ] Implement error handling when model calls fail.
251
+ - [ ] Test the model with sample data for accuracy and speed.
252
+
253
+ ### **1.2 Vectorstore Implementation**
254
+ - [ ] Design the schema for the vector storage system.
255
+ - [ ] Implement storage methods to add, delete, and update vectors.
256
+ - [ ] Develop retrieval methods with optimization for speed.
257
+ - [ ] Create protocols for vector-based communication between agents.
258
+ - [ ] Conduct stress tests to ascertain storage and retrieval speed.
259
+
260
+ ### **1.3 Tools & Utilities Integration**
261
+ - [ ] List out essential tools required for agent functionality.
262
+ - [ ] Develop or integrate APIs for each tool.
263
+ - [ ] Implement error handling and logging for tool interactions.
264
+ - [ ] Validate tool integration with unit tests.
265
+
266
+ ---
267
+
268
+ ## **2. Worker Infrastructure Level Development**
269
+
270
+ ### **2.1 Human Input Integration**
271
+ - [ ] Design a UI/UX for human interaction with worker nodes.
272
+ - [ ] Create APIs for input collection.
273
+ - [ ] Implement input validation and error handling.
274
+ - [ ] Test human input methods for clarity and ease of use.
275
+
276
+ ### **2.2 Unique Identifier System**
277
+ - [ ] Research optimal formats for unique ID generation.
278
+ - [ ] Develop methods for generating and assigning IDs to agents.
279
+ - [ ] Implement a tracking system to manage and monitor agents via IDs.
280
+ - [ ] Validate the uniqueness and reliability of the ID system.
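A common starting point for the unique identifier research above is UUID4, which is collision-resistant without central coordination. The `assign_ids` helper is a hypothetical sketch, not part of the framework:

```python
import uuid

def assign_ids(agents):
    """Attach a collision-resistant unique ID to each agent record."""
    return {str(uuid.uuid4()): agent for agent in agents}

registry = assign_ids(["agent-a", "agent-b", "agent-c"])
```

Tracking and monitoring then reduce to dictionary lookups keyed by these IDs.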
281
+
282
+ ### **2.3 Asynchronous Operation Tools**
283
+ - [ ] Incorporate libraries/frameworks to enable asynchrony.
284
+ - [ ] Ensure tasks within an agent can run in parallel without conflict.
285
+ - [ ] Test asynchronous operations for efficiency improvements.
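One candidate library for the asynchrony items above is Python's built-in `asyncio`; `run_tool` below is a hypothetical stand-in for an I/O-bound tool call. `asyncio.gather` runs the calls concurrently while preserving argument order:

```python
import asyncio

async def run_tool(name: str, delay: float) -> str:
    # Each tool call yields control while "waiting" on I/O.
    await asyncio.sleep(delay)
    return f"{name} done"

async def main() -> list:
    # gather() runs the calls concurrently instead of one after another.
    return await asyncio.gather(
        run_tool("search", 0.02),
        run_tool("summarize", 0.01),
    )

results = asyncio.run(main())
```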
286
+
287
+ ---
288
+
289
+ ## **3. Swarm Level Development**
290
+
291
+ ### **3.1 Orchestrator Design & Development**
292
+ - [ ] Draft a blueprint of orchestrator functionalities.
293
+ - [ ] Implement methods for task distribution among worker nodes.
294
+ - [ ] Develop communication protocols for the orchestrator to monitor workers.
295
+ - [ ] Create feedback systems to detect and address worker node failures.
296
+ - [ ] Test orchestrator with a mock swarm to ensure efficient task allocation.
297
+
298
+ ### **3.2 Communication Layer Development**
299
+ - [ ] Select a suitable communication protocol/framework (e.g., gRPC, WebSockets).
300
+ - [ ] Design the architecture for scalable, low-latency communication.
301
+ - [ ] Implement methods for sending, receiving, and broadcasting messages.
302
+ - [ ] Test communication layer for reliability, speed, and error handling.
303
+
304
+ ### **3.3 Task Management Protocols**
305
+ - [ ] Develop a system to queue, prioritize, and allocate tasks.
306
+ - [ ] Implement methods for real-time task status tracking.
307
+ - [ ] Create a feedback loop for completed tasks.
308
+ - [ ] Test task distribution, execution, and feedback systems for efficiency.
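The queue/prioritize/allocate item above maps naturally onto a heap-backed priority queue. This `TaskQueue` is an illustrative sketch under the assumption that a lower number means higher urgency:

```python
import heapq
import itertools

class TaskQueue:
    """Queue tasks with priorities; a lower number is more urgent."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker keeps FIFO order within a priority

    def push(self, task: str, priority: int) -> None:
        heapq.heappush(self._heap, (priority, next(self._counter), task))

    def pop(self) -> str:
        return heapq.heappop(self._heap)[2]

q = TaskQueue()
q.push("index docs", priority=5)
q.push("answer user", priority=1)
q.push("cleanup", priority=5)
order = [q.pop() for _ in range(3)]
```

The urgent task jumps the queue, while equal-priority tasks keep their submission order.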
309
+
310
+ ---
311
+
312
+ ## **4. Hivemind Level Development**
313
+
314
+ ### **4.1 Hivemind Orchestrator Development**
315
+ - [ ] Extend swarm orchestrator functionalities to manage multiple swarms.
316
+ - [ ] Create inter-swarm communication protocols.
317
+ - [ ] Implement load balancing mechanisms to distribute tasks across swarms.
318
+ - [ ] Validate hivemind orchestrator functionalities with multi-swarm setups.
319
+
320
+ ### **4.2 Inter-Swarm Communication Protocols**
321
+ - [ ] Design methods for swarms to exchange data.
322
+ - [ ] Implement data reconciliation methods for swarms working on shared tasks.
323
+ - [ ] Test inter-swarm communication for efficiency and data integrity.
324
+
325
+ ---
326
+
327
+ ## **5. Scalability & Performance Testing**
328
+
329
+ - [ ] Simulate heavy loads to test the limits of the framework.
330
+ - [ ] Identify and address bottlenecks in both communication and computation.
331
+ - [ ] Conduct speed tests under different conditions.
332
+ - [ ] Test the system's responsiveness under various levels of stress.
333
+
334
+ ---
335
+
336
+ ## **6. Documentation & User Guide**
337
+
338
+ - [ ] Develop detailed documentation covering architecture, setup, and usage.
339
+ - [ ] Create user guides with step-by-step instructions.
340
+ - [ ] Incorporate visual aids, diagrams, and flowcharts for clarity.
341
+ - [ ] Update documentation regularly with new features and improvements.
342
+
343
+ ---
344
+
345
+ ## **7. Continuous Integration & Deployment**
346
+
347
+ - [ ] Set up CI/CD pipelines for automated testing and deployment.
348
+ - [ ] Ensure automatic rollback in case of deployment failures.
349
+ - [ ] Integrate code quality and security checks in the pipeline.
350
+ - [ ] Document deployment strategies and best practices.
351
+
352
+ ---
353
+
354
+ ## **Conclusion**
355
+
356
+ The Swarms framework represents a monumental leap in agent-based computation. This checklist provides a thorough roadmap for the framework's development, ensuring that every facet is addressed in depth. Through diligent adherence to this guide, the Swarms vision can be realized as a powerful, scalable, and robust system ready to tackle the challenges of tomorrow.
357
+
358
docs/assets/css/extra.css ADDED
@@ -0,0 +1,7 @@
1
+ .md-typeset__table {
2
+ min-width: 100%;
3
+ }
4
+
5
+ .md-typeset table:not([class]) {
6
+ display: table;
7
+ }
docs/assets/img/SwarmsLogoIcon.png ADDED
docs/assets/img/swarmsbanner.png ADDED
docs/assets/img/tools/output.png ADDED
docs/assets/img/tools/poetry_setup.png ADDED
docs/assets/img/tools/toml.png ADDED
docs/bounties.md ADDED
@@ -0,0 +1,86 @@
1
+ # Bounty Program
2
+
3
+ Our bounty program is an exciting opportunity for contributors to help us build the future of Swarms. By participating, you can earn rewards while contributing to a project that aims to revolutionize digital activity.
4
+
5
+ Here's how it works:
6
+
7
+ 1. **Check out our Roadmap**: We've shared our roadmap detailing our short and long-term goals. These are the areas where we're seeking contributions.
8
+
9
+ 2. **Pick a Task**: Choose a task from the roadmap that aligns with your skills and interests. If you're unsure, you can reach out to our team for guidance.
10
+
11
+ 3. **Get to Work**: Once you've chosen a task, start working on it. Remember, quality is key. We're looking for contributions that truly make a difference.
12
+
13
+ 4. **Submit your Contribution**: Once your work is complete, submit it for review. We'll evaluate your contribution based on its quality, relevance, and the value it brings to Swarms.
14
+
15
+ 5. **Earn Rewards**: If your contribution is approved, you'll earn a bounty. The amount of the bounty depends on the complexity of the task, the quality of your work, and the value it brings to Swarms.
16
+
17
+ ## The Three Phases of Our Bounty Program
18
+
19
+ ### Phase 1: Building the Foundation
20
+ In the first phase, our focus is on building the basic infrastructure of Swarms. This includes developing key components like the Swarms class, integrating essential tools, and establishing task completion and evaluation logic. We'll also start developing our testing and evaluation framework during this phase. If you're interested in foundational work and have a knack for building robust, scalable systems, this phase is for you.
21
+
22
+ ### Phase 2: Enhancing the System
23
+ In the second phase, we'll focus on enhancing Swarms by integrating more advanced features, improving the system's efficiency, and refining our testing and evaluation framework. This phase involves more complex tasks, so if you enjoy tackling challenging problems and contributing to the development of innovative features, this is the phase for you.
24
+
25
+ ### Phase 3: Towards Super-Intelligence
26
+ The third phase of our bounty program is the most exciting - this is where we aim to achieve super-intelligence. In this phase, we'll be working on improving the swarm's capabilities, expanding its skills, and fine-tuning the system based on real-world testing and feedback. If you're excited about the future of AI and want to contribute to a project that could potentially transform the digital world, this is the phase for you.
27
+
28
+ Remember, our roadmap is a guide, and we encourage you to bring your own ideas and creativity to the table. We believe that every contribution, no matter how small, can make a difference. So join us on this exciting journey and help us create the future of Swarms.
29
+
30
+ **To participate in our bounty program, visit the [Swarms Bounty Program Page](https://swarms.ai/bounty).** Let's build the future together!
31
+
32
+
33
+
34
+
35
+
36
+ ## Bounties for Roadmap Items
37
+
38
+ To accelerate the development of Swarms and to encourage more contributors to join our journey towards automating every digital activity in existence, we are announcing a Bounty Program for specific roadmap items. Each bounty will be rewarded based on the complexity and importance of the task. Below are the items available for bounty:
39
+
40
+ 1. **Multi-Agent Debate Integration**: $2000
41
+ 2. **Meta Prompting Integration**: $1500
42
+ 3. **Swarms Class**: $1500
43
+ 4. **Integration of Additional Tools**: $1000
44
+ 5. **Task Completion and Evaluation Logic**: $2000
45
+ 6. **Ocean Integration**: $2500
46
+ 7. **Improved Communication**: $2000
47
+ 8. **Testing and Evaluation**: $1500
48
+ 9. **Worker Swarm Class**: $2000
49
+ 10. **Documentation**: $500
50
+
51
+ For each bounty task, there will be a strict evaluation process to ensure the quality of the contribution. This process includes a thorough review of the code and extensive testing to ensure it meets our standards.
52
+
53
+ # 3-Phase Testing Framework
54
+
55
+ To ensure the quality and efficiency of the Swarm, we will introduce a 3-phase testing framework which will also serve as our evaluation criteria for each of the bounty tasks.
56
+
57
+ ## Phase 1: Unit Testing
58
+ In this phase, individual modules will be tested to ensure that they work correctly in isolation. Unit tests will be designed for all functions and methods, with an emphasis on edge cases.
59
+
60
+ ## Phase 2: Integration Testing
61
+ After passing unit tests, we will test the integration of different modules to ensure they work correctly together. This phase will also test the interoperability of the Swarm with external systems and libraries.
62
+
63
+ ## Phase 3: Benchmarking & Stress Testing
64
+ In the final phase, we will perform benchmarking and stress tests. We'll push the limits of the Swarm under extreme conditions to ensure it performs well in real-world scenarios. This phase will measure the performance, speed, and scalability of the Swarm under high load conditions.
65
+
66
+ By following this 3-phase testing framework, we aim to develop a reliable, high-performing, and scalable Swarm that can automate all digital activities.
67
+
68
+ # Reverse Engineering to Reach Phase 3
69
+
70
+ To reach the Phase 3 level, we need to reverse engineer the tasks we need to complete. Here's an example of what this might look like:
71
+
72
+ 1. **Set Clear Expectations**: Define what success looks like for each task. Be clear about the outputs and outcomes we expect. This will guide our testing and development efforts.
73
+
74
+ 2. **Develop Testing Scenarios**: Create a comprehensive list of testing scenarios that cover both common and edge cases. This will help us ensure that our Swarm can handle a wide range of situations.
75
+
76
+ 3. **Write Test Cases**: For each scenario, write detailed test cases that outline the exact steps to be followed, the inputs to be used, and the expected outputs.
77
+
78
+ 4. **Execute the Tests**: Run the test cases on our Swarm, making note of any issues or bugs that arise.
79
+
80
+ 5. **Iterate and Improve**: Based on the results of our tests, iterate and improve our Swarm. This may involve fixing bugs, optimizing code, or redesigning parts of our system.
81
+
82
+ 6. **Repeat**: Repeat this process until our Swarm meets our expectations and passes all test cases.
83
+
84
+ By following these steps, we will systematically build, test, and improve our Swarm until it reaches the Phase 3 level. This methodical approach will help us ensure that we create a reliable, high-performing, and scalable Swarm that can truly automate all digital activities.
85
+
86
+ Let's shape the future of digital automation together!
docs/checklist.md ADDED
@@ -0,0 +1,122 @@
1
+ # **Swarms Framework Development Strategy Checklist**
2
+
3
+ ## **Introduction**
4
+
5
+ The development of the Swarms framework requires a systematic and granular approach to ensure that each component is robust and that the overall framework is efficient and scalable. This checklist will serve as a guide to building Swarms from the ground up, breaking down tasks into small, manageable pieces.
6
+
7
+ ---
8
+
9
+ ## **1. Agent Level Development**
10
+
11
+ ### **1.1 Model Integration**
12
+ - [ ] Research the most suitable models (e.g., OpenAI's GPT).
13
+ - [ ] Design an API for the agent to call the model.
14
+ - [ ] Implement error handling when model calls fail.
15
+ - [ ] Test the model with sample data for accuracy and speed.
16
+
17
+ ### **1.2 Vectorstore Implementation**
18
+ - [ ] Design the schema for the vector storage system.
19
+ - [ ] Implement storage methods to add, delete, and update vectors.
20
+ - [ ] Develop retrieval methods with optimization for speed.
21
+ - [ ] Create protocols for vector-based communication between agents.
22
+ - [ ] Conduct stress tests to ascertain storage and retrieval speed.
23
+
24
+ ### **1.3 Tools & Utilities Integration**
25
+ - [ ] List out essential tools required for agent functionality.
26
+ - [ ] Develop or integrate APIs for each tool.
27
+ - [ ] Implement error handling and logging for tool interactions.
28
+ - [ ] Validate tool integration with unit tests.
29
+
30
+ ---
31
+
32
+ ## **2. Worker Infrastructure Level Development**
33
+
34
+ ### **2.1 Human Input Integration**
35
+ - [ ] Design a UI/UX for human interaction with worker nodes.
36
+ - [ ] Create APIs for input collection.
37
+ - [ ] Implement input validation and error handling.
38
+ - [ ] Test human input methods for clarity and ease of use.
39
+
40
+ ### **2.2 Unique Identifier System**
41
+ - [ ] Research optimal formats for unique ID generation.
42
+ - [ ] Develop methods for generating and assigning IDs to agents.
43
+ - [ ] Implement a tracking system to manage and monitor agents via IDs.
44
+ - [ ] Validate the uniqueness and reliability of the ID system.
45
+
46
+ ### **2.3 Asynchronous Operation Tools**
47
+ - [ ] Incorporate libraries/frameworks to enable asynchrony.
48
+ - [ ] Ensure tasks within an agent can run in parallel without conflict.
49
+ - [ ] Test asynchronous operations for efficiency improvements.
50
+
51
+ ---
52
+
53
+ ## **3. Swarm Level Development**
54
+
55
+ ### **3.1 Orchestrator Design & Development**
56
+ - [ ] Draft a blueprint of orchestrator functionalities.
57
+ - [ ] Implement methods for task distribution among worker nodes.
58
+ - [ ] Develop communication protocols for the orchestrator to monitor workers.
59
+ - [ ] Create feedback systems to detect and address worker node failures.
60
+ - [ ] Test orchestrator with a mock swarm to ensure efficient task allocation.
61
+
62
+ ### **3.2 Communication Layer Development**
63
+ - [ ] Select a suitable communication protocol/framework (e.g., gRPC, WebSockets).
64
+ - [ ] Design the architecture for scalable, low-latency communication.
65
+ - [ ] Implement methods for sending, receiving, and broadcasting messages.
66
+ - [ ] Test communication layer for reliability, speed, and error handling.
67
+
68
+ ### **3.3 Task Management Protocols**
69
+ - [ ] Develop a system to queue, prioritize, and allocate tasks.
70
+ - [ ] Implement methods for real-time task status tracking.
71
+ - [ ] Create a feedback loop for completed tasks.
72
+ - [ ] Test task distribution, execution, and feedback systems for efficiency.
73
+
74
+ ---
75
+
76
+ ## **4. Hivemind Level Development**
77
+
78
+ ### **4.1 Hivemind Orchestrator Development**
79
+ - [ ] Extend swarm orchestrator functionalities to manage multiple swarms.
80
+ - [ ] Create inter-swarm communication protocols.
81
+ - [ ] Implement load balancing mechanisms to distribute tasks across swarms.
82
+ - [ ] Validate hivemind orchestrator functionalities with multi-swarm setups.
83
+
84
+ ### **4.2 Inter-Swarm Communication Protocols**
85
+ - [ ] Design methods for swarms to exchange data.
86
+ - [ ] Implement data reconciliation methods for swarms working on shared tasks.
87
+ - [ ] Test inter-swarm communication for efficiency and data integrity.
88
+
89
+ ---
90
+
91
+ ## **5. Scalability & Performance Testing**
92
+
93
+ - [ ] Simulate heavy loads to test the limits of the framework.
94
+ - [ ] Identify and address bottlenecks in both communication and computation.
95
+ - [ ] Conduct speed tests under different conditions.
96
+ - [ ] Test the system's responsiveness under various levels of stress.
97
+
98
+ ---
99
+
100
+ ## **6. Documentation & User Guide**
101
+
102
+ - [ ] Develop detailed documentation covering architecture, setup, and usage.
103
+ - [ ] Create user guides with step-by-step instructions.
104
+ - [ ] Incorporate visual aids, diagrams, and flowcharts for clarity.
105
+ - [ ] Update documentation regularly with new features and improvements.
106
+
107
+ ---
108
+
109
+ ## **7. Continuous Integration & Deployment**
110
+
111
+ - [ ] Setup CI/CD pipelines for automated testing and deployment.
112
+ - [ ] Ensure automatic rollback in case of deployment failures.
113
+ - [ ] Integrate code quality and security checks in the pipeline.
114
+ - [ ] Document deployment strategies and best practices.
115
+
116
+ ---
117
+
118
+ ## **Conclusion**
119
+
120
+ The Swarms framework represents a monumental leap in agent-based computation. This checklist provides a thorough roadmap for the framework's development, ensuring that every facet is addressed in depth. Through diligent adherence to this guide, the Swarms vision can be realized as a powerful, scalable, and robust system ready to tackle the challenges of tomorrow.
121
+
122
docs/contributing.md ADDED
@@ -0,0 +1,123 @@
1
+ # Contributing
2
+
3
+ Thank you for your interest in contributing to Swarms! We welcome contributions from the community to help improve usability and readability. By contributing, you can be a part of creating a dynamic and interactive AI system.
4
+
5
+ To get started, please follow the guidelines below.
6
+
7
+
8
+ ## Optimization Priorities
9
+
10
+ To continuously improve Swarms, we prioritize the following design objectives:
11
+
12
+ 1. **Usability**: Increase the ease of use and user-friendliness of the swarm system to facilitate adoption and interaction with basic input.
13
+
14
+ 2. **Reliability**: Improve the swarm's ability to obtain the desired output even with basic and un-detailed input.
15
+
16
+ 3. **Speed**: Reduce the time it takes for the swarm to accomplish tasks by improving the communication layer, critiquing, and self-alignment with meta prompting.
17
+
18
+ 4. **Scalability**: Ensure that the system is asynchronous, concurrent, and self-healing to support scalability.
19
+
20
+ Our goal is to continuously improve Swarms by following this roadmap while also being adaptable to new needs and opportunities as they arise.
21
+
22
+ ## Join the Swarms Community
23
+
24
+ Join the Swarms community on Discord to connect with other contributors, coordinate work, and receive support.
25
+
26
+ - [Join the Swarms Discord Server](https://discord.gg/qUtxnK2NMf)
27
+
28
+
29
+ ## Report an Issue
30
+ The easiest way to contribute to our docs is through our public [issue tracker](https://github.com/kyegomez/swarms-docs/issues). Feel free to submit bugs, request features or changes, or contribute to the project directly.
31
+
32
+ ## Pull Requests
33
+
34
+ Swarms docs are built using [MkDocs](https://squidfunk.github.io/mkdocs-material/getting-started/).
35
+
36
+ To directly contribute to Swarms documentation, first fork the [swarms-docs](https://github.com/kyegomez/swarms-docs) repository to your GitHub account. Then clone your repository to your local machine.
37
+
38
+ From inside the directory run:
39
+
40
+ ```pip install -r requirements.txt```
41
+
42
+ To serve `swarms-docs` locally, run:
43
+
44
+ ```mkdocs serve```
45
+
46
+ You should see something similar to the following:
47
+
48
+ ```
49
+ INFO - Building documentation...
50
+ INFO - Cleaning site directory
51
+ INFO - Documentation built in 0.19 seconds
52
+ INFO - [09:28:33] Watching paths for changes: 'docs', 'mkdocs.yml'
53
+ INFO - [09:28:33] Serving on http://127.0.0.1:8000/
54
+ INFO - [09:28:37] Browser connected: http://127.0.0.1:8000/
55
+ ```
56
+
57
+ Follow the typical PR process to contribute changes.
58
+
59
+ * Create a feature branch.
60
+ * Commit changes.
61
+ * Submit a PR.
62
+
63
+
64
+ -------
65
+ ---
66
+
67
+ ## Taking on Tasks
68
+
69
+ We have a growing list of tasks and issues that you can contribute to. To get started, follow these steps:
70
+
71
+ 1. Visit the [Swarms GitHub repository](https://github.com/kyegomez/swarms) and browse through the existing issues.
72
+
73
+ 2. Find an issue that interests you and make a comment stating that you would like to work on it. Include a brief description of how you plan to solve the problem and any questions you may have.
74
+
75
+ 3. Once a project coordinator assigns the issue to you, you can start working on it.
76
+
77
+ If you come across an issue that is unclear but still interests you, please post in the Discord server mentioned above. Someone from the community will be able to help clarify the issue in more detail.
78
+
79
+ We also welcome contributions to documentation, such as updating markdown files, adding docstrings, creating system architecture diagrams, and other related tasks.
80
+
81
+ ## Submitting Your Work
82
+
83
+ To contribute your changes to Swarms, please follow these steps:
84
+
85
+ 1. Fork the Swarms repository to your GitHub account. You can do this by clicking on the "Fork" button on the repository page.
86
+
87
+ 2. Clone the forked repository to your local machine using the `git clone` command.
88
+
89
+ 3. Before making any changes, make sure to sync your forked repository with the original repository to keep it up to date. You can do this by following the instructions [here](https://docs.github.com/en/github/collaborating-with-pull-requests/syncing-a-fork).
90
+
91
+ 4. Create a new branch for your changes. This branch should have a descriptive name that reflects the task or issue you are working on.
92
+
93
+ 5. Make your changes in the branch, focusing on a small, focused change that only affects a few files.
94
+
95
+ 6. Run any necessary formatting or linting tools to ensure that your changes adhere to the project's coding standards.
96
+
97
+ 7. Once your changes are ready, commit them to your branch with descriptive commit messages.
98
+
99
+ 8. Push the branch to your forked repository.
100
+
101
+ 9. Create a pull request (PR) from your branch to the main Swarms repository. Provide a clear and concise description of your changes in the PR.
102
+
103
+ 10. Request a review from the project maintainers. They will review your changes, provide feedback, and suggest any necessary improvements.
104
+
105
+ 11. Make any required updates or address any feedback provided during the review process.
106
+
107
+ 12. Once your changes have been reviewed and approved, they will be merged into the main branch of the Swarms repository.
108
+
109
+ 13. Congratulations! You have successfully contributed to Swarms.
110
+
111
+ Please note that during the review process, you may be asked to make changes or address certain issues. It is important to engage in open and constructive communication with the project maintainers to ensure the quality of your contributions.
112
+
113
+ ## Developer Setup
114
+
115
+ If you are interested in setting up the Swarms development environment, please follow the instructions provided in the [developer setup guide](docs/developer-setup.md). This guide provides an overview of the different tools and technologies used in the project.
116
+
117
+ ## Join the Agora Community
118
+
119
+ Swarms is brought to you by Agora, the open-source AI research organization. Join the Agora community to connect with other researchers and developers working on AI projects.
120
+
121
+ - [Join the Agora Discord Server](https://discord.gg/qUtxnK2NMf)
122
+
123
+ Thank you for your contributions and for being a part of the Swarms and Agora community! Together, we can advance Humanity through the power of AI.
docs/demos.md ADDED
@@ -0,0 +1,9 @@
1
+ # Demo Ideas
2
+
3
+ * We could also try to create an AI influencer run by a swarm, let it create a whole identity and generate images, memes, and other content for Twitter, Reddit, etc.
4
+
5
+ * Build a more general version of this (or a swarm, or both) that connects the calendars, events, and initiatives of the AI communities: LangChain, LAION, EleutherAI, LessWrong, Gato, Rob Miles, ChatGPT hackers, and others.
6
+
7
+ * Swarm of AI influencers to spread marketing
8
+
9
+ * Delegation system to better organize teams: start with a team of passionate humans and let them self-report their skills and strengths, so the agent knows whom to delegate to. Then feed the agent a large task list (like the bullet list a few messages above) that it breaks down into actionable steps, prompting specific team members to complete tasks. It could even suggest breakout teams of a few people with complementary skills for more complex tasks. A live board that updates each time a team member completes something would encourage momentum and keep track of progress.
docs/design.md ADDED
@@ -0,0 +1,152 @@
# Design Philosophy Document for Swarms

## Usable

### Objective

Our goal is to ensure that Swarms is intuitive and easy to use for all users, regardless of their level of technical expertise. This includes the developers who implement Swarms in their applications, as well as the end users who interact with the resulting systems.

### Tactics

- Clear and Comprehensive Documentation: We will provide well-written, easily accessible documentation that guides users through using and understanding Swarms.
- User-Friendly APIs: We will design clean, self-explanatory APIs whose purpose developers can grasp quickly.
- Prompt and Effective Support: We will ensure that support is readily available to assist users when they encounter problems or need help with Swarms.

## Reliable

### Objective

Swarms should be dependable and trustworthy. Users should be able to count on Swarms to perform consistently, without error or failure.

### Tactics

- Robust Error Handling: We will focus on error prevention, detection, and recovery to minimize failures in Swarms.
- Comprehensive Testing: We will apply testing methodologies such as unit testing, integration testing, and stress testing to validate the reliability of our software.
- Continuous Integration/Continuous Delivery (CI/CD): We will use CI/CD pipelines to ensure that all changes are tested and validated before they are merged into the main branch.

## Fast

### Objective

Swarms should offer high performance and rapid response times. The system should handle requests and tasks swiftly.

### Tactics

- Efficient Algorithms: We will optimize our algorithms and data structures to ensure they run as quickly as possible.
- Caching: Where appropriate, we will use caching techniques to speed up response times.
- Profiling and Performance Monitoring: We will regularly analyze the performance of Swarms to identify bottlenecks and opportunities for improvement.

## Scalable

### Objective

Swarms should be able to grow in capacity and complexity without compromising performance or reliability, handling increased workloads gracefully.

### Tactics

- Modular Architecture: We will design Swarms with a modular architecture that allows for easy scaling and modification.
- Load Balancing: We will distribute tasks evenly across available resources to prevent overload and maximize throughput.
- Horizontal and Vertical Scaling: We will design Swarms to support both horizontal scaling (adding more machines) and vertical scaling (adding more power to an existing machine).

### Philosophy

Swarms is designed with a philosophy of simplicity and reliability. We believe that software should be a tool that empowers users, not a hurdle they need to overcome. Our focus is therefore on usability, reliability, speed, and scalability: we want users to find Swarms intuitive, dependable, fast, and adaptable to their needs. This philosophy guides all of our design and development decisions.

# Swarm Architecture Design Document

## Overview

The goal of the Swarm Architecture is to provide a flexible and scalable system for building swarm intelligence models that can solve complex problems. This document details the proposed design of a plug-and-play system that makes it easy to create custom swarms, and that provides pre-configured swarms with multi-modal agents.

## Design Principles

- **Modularity**: The system will be built in a modular fashion, allowing components to be easily swapped or upgraded.
- **Interoperability**: Different swarm classes and components should work together seamlessly.
- **Scalability**: The design should support growth of the system by adding more components or swarms.
- **Ease of Use**: Users should be able to create their own swarms, or use pre-configured ones, with minimal configuration.

## Design Components

### AbstractSwarm

The AbstractSwarm is an abstract base class that defines the basic structure of a swarm and the methods that need to be implemented. Any new swarm should inherit from this class and implement the required methods.
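As a rough illustration, such a base class might be sketched as below. This is a minimal sketch modeled on the usage example later in this document (`run_swarms` appears there); the constructor signature and the `EchoSwarm` subclass are hypothetical, added only to show the inheritance pattern.

```python
from abc import ABC, abstractmethod


class AbstractSwarm(ABC):
    """Base class defining the contract every swarm must fulfill."""

    def __init__(self, openai_api_key: str):
        self.openai_api_key = openai_api_key

    @abstractmethod
    def run_swarms(self, objective: str) -> str:
        """Break down the objective, dispatch it to agents, return a result."""


class EchoSwarm(AbstractSwarm):
    """Trivial concrete swarm used only to demonstrate subclassing."""

    def run_swarms(self, objective: str) -> str:
        # A real swarm would coordinate agents here.
        return f"completed: {objective}"


swarm = EchoSwarm(openai_api_key="sk-...")
print(swarm.run_swarms("summarize the report"))
```

Because `run_swarms` is abstract, instantiating `AbstractSwarm` directly raises a `TypeError`, which enforces the contract on every new swarm class.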
### Swarm Classes

Various swarm classes can be implemented by inheriting from the AbstractSwarm class. Each swarm class should implement the required methods for initializing its components, worker nodes, and boss node, and for running the swarm.

Pre-configured swarm classes with multi-modal agents can be provided for ease of use. These classes come with a default configuration of tools and agents that can be used out of the box.

### Tools and Agents

Tools and agents are the components that provide the actual functionality to the swarms. They can be language models, AI assistants, vector stores, or any other components that help with problem solving.

To make the system plug-and-play, a standard interface should be defined for these components. Any new tool or agent should implement this interface so that it can be easily plugged into the system.
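One way to express such a standard interface is a small structural protocol that every tool or agent satisfies. The `Tool` name, its `name`/`run` members, and `UppercaseTool` below are illustrative assumptions for this sketch, not an existing Swarms API.

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class Tool(Protocol):
    """Plug-and-play contract: anything with a name and run() is a tool."""

    name: str

    def run(self, task: str) -> str:
        ...


class UppercaseTool:
    """Toy tool satisfying the interface without inheriting from anything."""

    name = "uppercase"

    def run(self, task: str) -> str:
        return task.upper()


def dispatch(tool: Tool, task: str) -> str:
    # The swarm depends only on the interface, so tools are interchangeable.
    return tool.run(task)


print(dispatch(UppercaseTool(), "hello swarm"))
```

Using a structural protocol rather than a required base class keeps third-party tools pluggable: any object with the right members can be dropped into a swarm.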
## Usage

Users can either use pre-configured swarms or create their own custom swarms.

To use a pre-configured swarm, simply instantiate the corresponding swarm class and call the run method with the required objective.

To create a custom swarm:

1. Define a new swarm class inheriting from AbstractSwarm.
2. Implement the required methods for the new swarm class.
3. Instantiate the swarm class and call the run method.

### Example

```python
# Using a pre-configured swarm
swarm = PreConfiguredSwarm(openai_api_key)
swarm.run_swarms(objective)


# Creating a custom swarm
class CustomSwarm(AbstractSwarm):
    def run_swarms(self, objective):
        # Implement the required methods here
        ...


swarm = CustomSwarm(openai_api_key)
swarm.run_swarms(objective)
```

## Conclusion

This Swarm Architecture design provides a scalable and flexible system for building swarm intelligence models. The plug-and-play design allows users to easily use pre-configured swarms or create their own custom swarms.

# Swarming Architectures

Below are ten different swarm architectures, each with its base requirements; all of them can be driven by an abstract class that processes these components:

1. **Hierarchical Swarm**: This architecture is characterized by a boss/worker relationship. The boss node makes high-level decisions and delegates tasks to the worker nodes. The worker nodes perform the tasks and report back to the boss node.
    - Requirements: A boss node (can be a large language model), worker nodes (can be smaller language models), and a task queue for task management.

2. **Homogeneous Swarm**: In this architecture, all nodes in the swarm are identical and contribute equally to problem-solving. Each node has the same capabilities.
    - Requirements: Homogeneous nodes (can be language models of the same size) and a communication protocol for nodes to share information.

3. **Heterogeneous Swarm**: This architecture contains different types of nodes, each with specific capabilities. This diversity can lead to more robust problem-solving.
    - Requirements: Different types of nodes (can be different types and sizes of language models), a communication protocol, and a mechanism to delegate tasks based on node capabilities.

4. **Competitive Swarm**: In this architecture, nodes compete with each other to find the best solution. The system may use a selection process to choose the best solutions.
    - Requirements: Nodes (can be language models), a scoring mechanism to evaluate node performance, and a selection mechanism.

5. **Cooperative Swarm**: In this architecture, nodes work together and share information to find solutions. The focus is on cooperation rather than competition.
    - Requirements: Nodes (can be language models), a communication protocol, and a consensus mechanism to agree on solutions.

6. **Grid-based Swarm**: This architecture positions agents on a grid, where they can only interact with their neighbors. This is useful for simulations, especially in fields like ecology or epidemiology.
    - Requirements: Agents (can be language models), a grid structure, and a neighborhood definition (i.e., how to identify neighboring agents).

7. **Particle Swarm Optimization (PSO) Swarm**: In this architecture, each agent represents a potential solution to an optimization problem. Agents move through the solution space based on their own and their neighbors' past performance. PSO is especially useful for continuous numerical optimization problems.
    - Requirements: Agents (each representing a solution), a definition of the solution space, an evaluation function to rate the solutions, and a mechanism to adjust agent positions based on performance.

8. **Ant Colony Optimization (ACO) Swarm**: Inspired by ant behavior, this architecture has agents leave a pheromone trail that other agents follow, reinforcing the best paths. It is useful for problems like the traveling salesperson problem.
    - Requirements: Agents (can be language models), a representation of the problem space, and a pheromone updating mechanism.

9. **Genetic Algorithm (GA) Swarm**: In this architecture, agents represent potential solutions to a problem. They can 'breed' to create new solutions and can undergo 'mutations'. GA swarms are good for search and optimization problems.
    - Requirements: Agents (each representing a potential solution), a fitness function to evaluate solutions, a crossover mechanism to breed solutions, and a mutation mechanism.

10. **Stigmergy-based Swarm**: In this architecture, agents communicate indirectly by modifying the environment, and other agents react to those modifications. It is a decentralized method of coordinating tasks.
    - Requirements: Agents (can be language models), an environment that agents can modify, and a mechanism for agents to perceive environment changes.

These architectures all have unique features and requirements, but they share the need for agents (often implemented as language models) and a mechanism for agents to communicate or interact, whether directly through messages, indirectly through the environment, or implicitly through a shared solution space. Some also require specific data structures, such as a grid or problem space, and specific algorithms, for example to evaluate solutions or update agent positions.
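To make the PSO variant above concrete, here is a minimal, self-contained one-dimensional sketch that minimizes a simple function. It is a textbook toy with illustrative inertia and attraction coefficients, not Swarms library code.

```python
import random


def pso_minimize(f, lo, hi, n_particles=20, iters=100, seed=0):
    """Minimize f over [lo, hi] with a basic 1-D particle swarm."""
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    best = list(pos)             # each particle's personal best position
    gbest = min(pos, key=f)      # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            # velocity = inertia + pull toward personal best + pull toward global best
            vel[i] = (0.5 * vel[i]
                      + 1.5 * rng.random() * (best[i] - pos[i])
                      + 1.5 * rng.random() * (gbest - pos[i]))
            pos[i] += vel[i]
            if f(pos[i]) < f(best[i]):
                best[i] = pos[i]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i]
    return gbest


# Minimize (x - 3)^2; the swarm should converge near x = 3.
print(pso_minimize(lambda x: (x - 3) ** 2, -10, 10))
```

The same skeleton generalizes to multi-dimensional problems by making each position and velocity a vector; the requirements listed above (solution space, evaluation function, position-update mechanism) map directly onto `lo`/`hi`, `f`, and the velocity update.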
docs/distribution.md ADDED
@@ -0,0 +1,469 @@
# Swarms Monetization Strategy

This strategy covers a variety of business models, potential revenue streams, cashflow structures, and methods for identifying customers. Let's explore these further.

## Business Models

1. **Platform as a Service (PaaS):** Provide the Swarms AI platform on a subscription basis, charged monthly or annually. This could be tiered based on usage and access to premium features.

2. **API Usage-based Pricing:** Charge customers based on their usage of the Swarms API. The more requests made, the higher the fee.

3. **Managed Services:** Offer complete end-to-end solutions where you manage the entire AI infrastructure for clients. This could be on a contract basis with a recurring fee.

4. **Training and Certification:** Provide Swarms AI training and certification programs for interested developers and businesses. These could be monetized as separate courses or subscription-based access.

5. **Partnerships:** Collaborate with large enterprises and offer them dedicated Swarm AI services. These could be performance-based contracts, ensuring a mutually beneficial relationship.

6. **Data as a Service (DaaS):** Leverage the data generated by Swarms for insights and analytics, providing valuable business intelligence to clients.

## Potential Revenue Streams

1. **Subscription Fees:** This would be the main revenue stream from providing the Swarms platform as a service.

2. **Usage Fees:** Additional revenue can come from usage fees for businesses with high demand for the Swarms API.

3. **Contract Fees:** From offering managed services and bespoke solutions to businesses.

4. **Training Fees:** Revenue from providing training and certification programs to developers and businesses.

5. **Partnership Contracts:** Large-scale projects with enterprises, involving dedicated Swarm AI services, could provide substantial income.

6. **Data Insights:** Revenue from selling valuable business intelligence derived from Swarms' aggregated and anonymized data.

## Potential Customers

1. **Businesses Across Sectors:** Any business seeking to leverage AI for automation, efficiency, and data insights could be a potential customer. This includes sectors like finance, eCommerce, logistics, healthcare, and more.

2. **Developers:** Both freelancers and those working in organizations could use Swarms to enhance their projects and services.

3. **Enterprises:** Large enterprises looking to automate and optimize their operations could greatly benefit from Swarms.

4. **Educational Institutions:** Universities and research institutions could leverage Swarms for research and teaching purposes.

## Roadmap

1. **Landing Page Creation:** Develop a dedicated product page on apac.ai for Swarms.

2. **Hosted Swarms API:** Launch a cloud-based Swarms API service. It should be highly reliable, with robust documentation to attract daily users.

3. **Consumer and Enterprise Subscription Service:** Launch a comprehensive subscription service on The Domain. This would provide users with access to a wide array of APIs and data streams.

4. **Dedicated Capacity Deals:** Partner with large enterprises to offer them dedicated Swarm AI solutions for automating their operations.

5. **Enterprise Partnerships:** Develop partnerships with large enterprises for extensive contract-based projects.

6. **Integration with Collaboration Platforms:** Develop Swarms bots for platforms like Discord and Slack, charging users a subscription fee for access.

7. **Personal Data Instances:** Offer users dedicated instances of all their data that the Swarm can query as needed.

8. **Browser Extension:** Develop a browser extension that integrates with the Swarms platform, offering users a more seamless experience.

Remember, customer satisfaction and a value-centric approach are at the core of any successful monetization strategy. It's essential to continuously iterate on and improve the product based on customer feedback and evolving market needs.

----

# Other ideas

1. **Platform as a Service (PaaS):** Create a cloud-based platform that allows users to build, run, and manage applications without the complexity of maintaining the infrastructure. You could charge users a subscription fee for access to the platform and provide different pricing tiers based on usage levels. This could be an attractive solution for businesses that do not have the capacity to build or maintain their own swarm intelligence solutions.

2. **Professional Services:** Offer consultancy and implementation services to businesses looking to utilize the Swarm technology. This could include assisting with integration into existing systems, offering custom development services, or helping customers build specific solutions using the framework.

3. **Education and Training:** Create a certification program for developers or companies looking to become proficient with the Swarms framework. This could be sold as standalone courses or bundled with other services.

4. **Managed Services:** Some companies may prefer to outsource the management of their Swarm-based systems. A managed services solution could take care of all the technical aspects, from hosting the solution to ensuring it runs smoothly, allowing the customer to focus on their core business.

5. **Data Analysis and Insights:** Swarm intelligence can generate valuable data and insights. By anonymizing and aggregating this data, you could provide industry reports, trend analysis, and other valuable insights to businesses.

As for the type of platform, Swarms can be offered as a cloud-based solution given its scalability and flexibility. This would also allow you to apply a SaaS/PaaS monetization model, which provides recurring revenue.

Potential customers could range from small to large enterprises in various sectors such as logistics, eCommerce, finance, and technology, who are interested in leveraging artificial intelligence and machine learning for complex problem solving, optimization, and decision-making.

**Product Brief Monetization Strategy:**

Product Name: Swarms.AI Platform

Product Description: A cloud-based AI and ML platform harnessing the power of swarm intelligence.

1. **Platform as a Service (PaaS):** Offer tiered subscription plans (Basic, Premium, Enterprise) to accommodate different usage levels and business sizes.

2. **Professional Services:** Offer consultancy and custom development services to tailor the Swarms solution to the specific needs of the business.

3. **Education and Training:** Launch an online Swarms.AI Academy with courses and certifications for developers and businesses.

4. **Managed Services:** Provide a premium, fully managed service offering that includes hosting, maintenance, and 24/7 support.

5. **Data Analysis and Insights:** Offer industry reports and customized insights generated from aggregated and anonymized Swarm data.

Potential Customers: Enterprises in sectors such as logistics, eCommerce, finance, and technology. This can be sold globally, provided there's an internet connection.

Marketing Channels: Online marketing (SEO, content marketing, social media), partnerships with tech companies, and direct sales to enterprises.

This strategy is designed to provide multiple revenue streams while ensuring the Swarms.AI platform is accessible and useful to a range of potential customers.

1. **AI Solution as a Service:** By offering the Swarms framework as a service, businesses can access and utilize the power of multiple LLM agents without needing to maintain the infrastructure themselves. Subscriptions can be tiered based on usage and additional features.

2. **Integration and Custom Development:** Offer integration services to businesses wanting to incorporate the Swarms framework into their existing systems. You could also provide custom development for businesses with specific needs not met by the standard framework.

3. **Training and Certification:** Develop an educational platform offering courses, webinars, and certifications on using the Swarms framework. This can serve both developers seeking to broaden their skills and businesses aiming to train their in-house teams.

4. **Managed Swarms Solutions:** For businesses that prefer to outsource their AI needs, provide a complete solution that includes the development, maintenance, and continuous improvement of swarms-based applications.

5. **Data Analytics Services:** Leveraging the aggregated insights from the AI swarms, you could offer data analytics services. Businesses can use these insights to make informed decisions and predictions.

**Type of Platform:**

A cloud-based platform or Software as a Service (SaaS) would be a suitable model. It offers accessibility, scalability, and ease of updates.

**Target Customers:**

The technology can benefit businesses across sectors like eCommerce, technology, logistics, finance, healthcare, and education, among others.

**Product Brief Monetization Strategy:**

Product Name: Swarms.AI

1. **AI Solution as a Service:** Offer tiered subscriptions (Standard, Premium, and Enterprise), each with varying levels of usage and features.

2. **Integration and Custom Development:** Offer custom development and integration services, priced based on the scope and complexity of the project.

3. **Training and Certification:** Launch the Swarms.AI Academy with courses and certifications, available for a fee.

4. **Managed Swarms Solutions:** Offer fully managed solutions tailored to business needs, priced based on scope and service-level agreements.

5. **Data Analytics Services:** Provide insightful reports and data analyses, which can be purchased on a one-off basis or through a subscription.

By offering a variety of services and payment models, Swarms.AI will be able to cater to a diverse range of business needs, from small start-ups to large enterprises. Marketing channels would include digital marketing, partnerships with technology companies, presence at tech events, and direct sales to targeted industries.

# Roadmap

* Create a landing page for Swarms at apac.ai/product/swarms

* Create a hosted Swarms API that anybody can use without needing massive GPU infrastructure, with usage-based pricing. Prerequisites for success: Swarms has to be extremely reliable, and we need world-class documentation and many daily users. How do we get many daily users? We provide a seamless and fluid experience. How do we create that experience? We write good code that is modular, gives the user feedback in times of distress, and ultimately accomplishes the user's tasks.

* Hosted consumer and enterprise subscription as a service on The Domain, where users can interact with thousands of APIs and ingest thousands of different data streams.

* Hosted dedicated-capacity deals with mega enterprises, automating many of their operations with Swarms, for a monthly subscription of $300,000+.

* Partnerships with enterprises: massive contracts with performance-based fees.

* Discord and/or Slack bots with users' personal data, charged as a subscription, plus a browser extension.

* Each user gets a dedicated ocean instance of all their data, so the swarm can query it as needed.

---

# Swarms Monetization Strategy: A Revolutionary AI-powered Future

Swarms is a powerful AI platform leveraging the transformative potential of swarm intelligence. Our ambition is to monetize this groundbreaking technology in ways that generate significant cashflow while providing extraordinary value to our customers.

Here we outline our strategic monetization pathways and provide a roadmap that plots our course to future success.

---

## I. Business Models

1. **Platform as a Service (PaaS):** We provide the Swarms platform as a service, billed on a monthly or annual basis. Subscriptions can range from $50 for basic access to $500+ for premium features and extensive usage.

2. **API Usage-based Pricing:** Customers are billed according to their use of the Swarms API. Starting at $0.01 per request, this creates a cashflow model that rewards extensive platform usage.

3. **Managed Services:** We offer end-to-end solutions, managing clients' entire AI infrastructure. Contract fees start from $100,000 per month, offering both a sustainable cashflow and considerable savings for our clients.

4. **Training and Certification:** A Swarms AI training and certification program is available for developers and businesses. Course costs range from $200 to $2,000, depending on course complexity and duration.

5. **Partnerships:** We forge collaborations with large enterprises, offering dedicated Swarm AI services. These performance-based contracts start from $1,000,000, creating a potentially lucrative cashflow stream.

6. **Data as a Service (DaaS):** Swarms-generated data is mined for insights and analytics, with business intelligence reports offered from $500 each.

---

## II. Potential Revenue Streams

1. **Subscription Fees:** From $50 to $500+ per month for platform access.

2. **Usage Fees:** From $0.01 per API request, generating income from high platform usage.

3. **Contract Fees:** Starting from $100,000 per month for managed services.

4. **Training Fees:** From $200 to $2,000 for individual courses or subscription access.

5. **Partnership Contracts:** Contracts starting from $1,000,000, offering major income potential.

6. **Data Insights:** Business intelligence reports starting from $500.

---

## III. Potential Customers

1. **Businesses Across Sectors:** Our offerings cater to businesses across finance, eCommerce, logistics, healthcare, and more.

2. **Developers:** Both freelancers and organization-based developers can leverage Swarms for their projects.

3. **Enterprises:** Swarms offers large enterprises solutions for optimizing their operations.

4. **Educational Institutions:** Universities and research institutions can use Swarms for research and teaching.

---

## IV. Roadmap

1. **Landing Page Creation:** Develop a dedicated Swarms product page on apac.ai.

2. **Hosted Swarms API:** Launch a reliable, well-documented cloud-based Swarms API service.

3. **Consumer and Enterprise Subscription Service:** Launch an extensive subscription service on The Domain, providing wide-ranging access to APIs and data streams.

4. **Dedicated Capacity Deals:** Offer large enterprises dedicated Swarm AI solutions, starting from a $300,000 monthly subscription.

5. **Enterprise Partnerships:** Develop performance-based contracts with large enterprises.

6. **Integration with Collaboration Platforms:** Develop Swarms bots for platforms like Discord and Slack, charging a subscription fee for access.

7. **Personal Data Instances:** Offer users dedicated data instances that the Swarm can query as needed.

8. **Browser Extension:** Develop a browser extension that integrates with the Swarms platform for a seamless user experience.

---

Our North Star remains customer satisfaction and value provision. As we embark on this journey, we will continuously refine our product based on customer feedback and evolving market needs, ensuring we lead in the age of AI-driven solutions.

## **Platform Distribution Strategy for Swarms**

*Note: This strategy aims to diversify the presence of Swarms across various platforms and mediums while focusing on monetization and value creation for its users.*

---

### **1. Framework:**

#### **Objective:**
To offer Swarms as an integrated solution within popular frameworks, ensuring that developers and businesses can seamlessly incorporate its functionalities.

#### **Strategy:**

* **Language/Framework Integration:**
    * Target popular frameworks like Django and Flask for Python, Express.js for Node, etc.
    * Create SDKs or plugins for easy integration.

* **Monetization:**
    * Freemium Model: Offer basic integration for free, and charge for additional features or advanced integrations.
    * Licensing: Allow businesses to purchase licenses for enterprise-level integrations.

* **Promotion:**
    * Partner with popular online learning platforms like Udemy and Coursera, offering courses and tutorials on integrating Swarms.
    * Host webinars and write technical blog posts to promote the integration benefits.

---

### **2. Paid API:**

#### **Objective:**
To provide a scalable solution for developers and businesses that want direct access to Swarms' functionalities without integrating the entire framework.

#### **Strategy:**

* **API Endpoints:**
    * Offer various endpoints catering to different functionalities.
    * Maintain robust documentation to ensure ease of use.

* **Monetization:**
    * Usage-based Pricing: Charge based on the number of API calls.
    * Subscription Tiers: Provide tiered packages based on usage limits and advanced features.

* **Promotion:**
    * List on API marketplaces like RapidAPI.
    * Engage in SEO to make the API documentation discoverable.

---

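As a sketch of how usage-based API pricing could be metered, the helper below bills per call above a free tier. The $0.01-per-request figure comes from this document; the free-tier threshold and function name are illustrative assumptions, not published Swarms pricing.

```python
def api_bill(requests_made: int, price_per_request: float = 0.01,
             included_free: int = 1000) -> float:
    """Monthly bill: the first `included_free` calls cost nothing,
    the remainder are billed per request (illustrative numbers only)."""
    billable = max(0, requests_made - included_free)
    return round(billable * price_per_request, 2)


print(api_bill(500))     # entirely inside the free tier
print(api_bill(25_000))  # 24,000 billable calls at $0.01 each
```

Subscription tiers would then map to different `price_per_request` and `included_free` values per package.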
287
+ ### **3. Domain Hosted:**
288
+
289
+ #### **Objective:**
290
+ To provide a centralized web platform where users can directly access and engage with Swarms' offerings.
291
+
292
+ #### **Strategy:**
293
+
294
+ * **User-Friendly Interface:**
295
+ * Ensure a seamless user experience with intuitive design.
296
+ * Incorporate features like real-time chat support, tutorials, and an FAQ section.
297
+
298
+ * **Monetization:**
299
+ * Subscription Model: Offer monthly/annual subscriptions for premium features.
300
+ * Affiliate Marketing: Partner with related tech products/services and earn through referrals.
301
+
302
+ * **Promotion:**
303
+ * Invest in PPC advertising on platforms like Google Ads.
304
+ * Engage in content marketing, targeting keywords related to Swarms' offerings.
305
+
306
+ ---
307
+
308
+ ### **4. Build Your Own (No-Code Platform):**
309
+
310
+ #### **Objective:**
311
+ To cater to the non-developer audience, allowing them to leverage Swarms' features without any coding expertise.
312
+
313
+ #### **Strategy:**
314
+
315
+ * **Drag-and-Drop Interface:**
316
+ * Offer customizable templates.
317
+ * Ensure integration with popular platforms and apps.
318
+
319
+ * **Monetization:**
320
+ * Freemium Model: Offer basic features for free, and charge for advanced functionalities.
321
+ * Marketplace for Plugins: Allow third-party developers to sell their plugins/extensions on the platform.
322
+
323
+ * **Promotion:**
324
+ * Partner with no-code communities and influencers.
325
+ * Offer promotions and discounts to early adopters.
326
+
327
+ ---
328
+
329
+ ### **5. Marketplace for the No-Code Platform:**
330
+
331
+ #### **Objective:**
332
+ To create an ecosystem where third-party developers can contribute, and users can enhance their Swarms experience.
333
+
334
+ #### **Strategy:**
335
+
336
+ * **Open API for Development:**
337
+ * Offer robust documentation and developer support.
338
+ * Ensure a strict quality check for marketplace additions.
339
+
340
+ * **Monetization:**
341
+ * Revenue Sharing: Take a percentage cut from third-party sales.
342
+ * Featured Listings: Charge developers for premium listings.
343
+
344
+ * **Promotion:**
345
+ * Host hackathons and competitions to boost developer engagement.
346
+ * Promote top plugins/extensions through email marketing and on the main platform.
347
+
348
+ ---
349
+
350
+ ### **Future Outlook & Expansion:**
351
+
352
+ * **Hosted Dedicated Capacity:** Hosted dedicated-capacity deals for enterprises, starting at $399,999.
+ * **Decentralized Free Peer-to-Peer Endpoint on The Grid:** A hosted endpoint by the people, for the people.
+ * **Browser Extension:** Athena browser extension for deep browser automation, with subscription and usage-based pricing.
+ * **Mobile Application:** Develop a mobile app version of Swarms to tap into the vast mobile user base.
+ * **Global Expansion:** Localize the platform for non-English-speaking regions to tap into global markets.
+ * **Continuous Learning:** Regularly collect user feedback and iterate on product features.
+
+ ---
+
+ ### **50 Creative Distribution Platforms for Swarms**
+
+
+ 1. **E-commerce Integrations:** Platforms like Shopify, WooCommerce, where Swarms can add value to sellers.
+
+ 2. **Web Browser Extensions:** Chrome, Firefox, and Edge extensions that bring Swarms features directly to users.
+
+ 3. **Podcasting Platforms:** Swarms-themed content on platforms like Spotify, Apple Podcasts to reach aural learners.
+
+ 4. **Virtual Reality (VR) Platforms:** Integration with VR experiences on Oculus or Viveport.
+
+ 5. **Gaming Platforms:** Tools or plugins for game developers on Steam, Epic Games.
+
+ 6. **Decentralized Platforms:** Using blockchain, create decentralized app (DApp) versions of Swarms.
+
+ 7. **Chat Applications:** Integrate with popular messaging platforms like WhatsApp, Telegram, Slack.
+
+ 8. **AI Assistants:** Integration with Siri, Alexa, Google Assistant to provide Swarms functionalities via voice commands.
+
+ 9. **Freelancing Websites:** Offer tools or services for freelancers on platforms like Upwork, Fiverr.
+
+ 10. **Online Forums:** Platforms like Reddit, Quora, where users can discuss or access Swarms.
+
+ 11. **Educational Platforms:** Sites like Khan Academy, Udacity where Swarms can enhance learning experiences.
+
+ 12. **Digital Art Platforms:** Integrate with platforms like DeviantArt, Behance.
+
+ 13. **Open-source Repositories:** Hosting Swarms on GitHub, GitLab, Bitbucket with open-source plugins.
+
+ 14. **Augmented Reality (AR) Apps:** Create AR experiences powered by Swarms.
+
+ 15. **Smart Home Devices:** Integrate Swarms' functionalities into smart home devices.
+
+ 16. **Newsletters:** Platforms like Substack, where Swarms insights can be shared.
+
+ 17. **Interactive Kiosks:** In malls, airports, and other public places.
+
+ 18. **IoT Devices:** Incorporate Swarms in devices like smart fridges, smartwatches.
+
+ 19. **Collaboration Tools:** Platforms like Trello, Notion, offering Swarms-enhanced productivity.
+
+ 20. **Dating Apps:** An AI-enhanced matching algorithm powered by Swarms.
+
+ 21. **Music Platforms:** Integrate with Spotify, SoundCloud for music-related AI functionalities.
+
+ 22. **Recipe Websites:** Platforms like AllRecipes, Tasty with AI-recommended recipes.
+
+ 23. **Travel & Hospitality:** Integrate with platforms like Airbnb, Tripadvisor for AI-based recommendations.
+
+ 24. **Language Learning Apps:** Duolingo, Rosetta Stone integrations.
+
+ 25. **Virtual Events Platforms:** Websites like Hopin, Zoom where Swarms can enhance the virtual event experience.
+
+ 26. **Social Media Management:** Tools like Buffer, Hootsuite with AI insights by Swarms.
+
+ 27. **Fitness Apps:** Platforms like MyFitnessPal, Strava with AI fitness insights.
+
+ 28. **Mental Health Apps:** Integration into apps like Calm, Headspace for AI-driven wellness.
+
+ 29. **E-books Platforms:** Amazon Kindle, Audible with AI-enhanced reading experiences.
+
+ 30. **Sports Analysis Tools:** Websites like ESPN, Sky Sports where Swarms can provide insights.
+
+ 31. **Financial Tools:** Integration into platforms like Mint, Robinhood for AI-driven financial advice.
+
+ 32. **Public Libraries:** Digital platforms of public libraries for enhanced reading experiences.
+
+ 33. **3D Printing Platforms:** Websites like Thingiverse, Shapeways with AI customization.
+
+ 34. **Meme Platforms:** Websites like Memedroid, 9GAG where Swarms can suggest memes.
+
+ 35. **Astronomy Apps:** Platforms like Star Walk, NASA's Eyes with AI-driven space insights.
+
+ 36. **Weather Apps:** Integration into Weather.com, AccuWeather for predictive analysis.
+
+ 37. **Sustainability Platforms:** Websites like Ecosia, GoodGuide with AI-driven eco-tips.
+
+ 38. **Fashion Apps:** Platforms like ASOS, Zara with AI-based style recommendations.
+
+ 39. **Pet Care Apps:** Integration into PetSmart, Chewy for AI-driven pet care tips.
+
+ 40. **Real Estate Platforms:** Websites like Zillow, Realtor with AI-enhanced property insights.
+
+ 41. **DIY Platforms:** Websites like Instructables, DIY.org with AI project suggestions.
+
+ 42. **Genealogy Platforms:** Ancestry, MyHeritage with AI-driven family tree insights.
+
+ 43. **Car Rental & Sale Platforms:** Integration into AutoTrader, Turo for AI-driven vehicle suggestions.
+
+ 44. **Wedding Planning Websites:** Platforms like Zola, The Knot with AI-driven planning.
+
+ 45. **Craft Platforms:** Websites like Etsy, Craftsy with AI-driven craft suggestions.
+
+ 46. **Gift Recommendation Platforms:** AI-driven gift suggestions for websites like Gifts.com.
+
+ 47. **Study & Revision Platforms:** Websites like Chegg, Quizlet with AI-driven study guides.
+
+ 48. **Local Business Directories:** Yelp, Yellow Pages with AI-enhanced reviews.
+
+ 49. **Networking Platforms:** LinkedIn, Meetup with AI-driven connection suggestions.
+
+ 50. **Lifestyle Magazines' Digital Platforms:** Websites like Vogue, GQ with AI-curated fashion and lifestyle insights.
+
+ ---
+
+ *Endnote: Leveraging these diverse platforms ensures that Swarms becomes an integral part of multiple ecosystems, enhancing its visibility and user engagement.*
docs/examples/count-tokens.md ADDED
@@ -0,0 +1,29 @@
+ To count tokens, you can use Swarms events and the `TokenCounter` util:
+
+ ```python
+ from swarms import utils
+ from swarms.events import (
+     StartPromptEvent, FinishPromptEvent,
+ )
+ from swarms.structures import Agent
+
+
+ token_counter = utils.TokenCounter()
+
+ # Accumulate the token count reported by each prompt event.
+ agent = Agent(
+     event_listeners={
+         StartPromptEvent: [
+             lambda e: token_counter.add_tokens(e.token_count)
+         ],
+         FinishPromptEvent: [
+             lambda e: token_counter.add_tokens(e.token_count)
+         ],
+     }
+ )
+
+ agent.run("tell me about large language models")
+ agent.run("tell me about GPT")
+
+ print(f"total tokens: {token_counter.tokens}")
+ ```
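For intuition, a counter along these lines is little more than a running sum keyed off the `token_count` carried by each event. The class below is a minimal standalone sketch, not the actual Swarms `utils.TokenCounter` implementation:

```python
class TokenCounter:
    """Minimal sketch of a token-counting utility (hypothetical;
    the real Swarms `utils.TokenCounter` may differ)."""

    def __init__(self):
        self.tokens = 0  # running total across all events

    def add_tokens(self, token_count: int) -> int:
        # Accumulate the tokens reported by one event; return the new total.
        self.tokens += token_count
        return self.tokens


counter = TokenCounter()
counter.add_tokens(12)  # e.g. from a StartPromptEvent
counter.add_tokens(30)  # e.g. from a FinishPromptEvent
print(counter.tokens)  # → 42
```

Because both the start and finish events add to the same counter, the total reflects tokens across every prompt the agent ran.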
docs/examples/index.md ADDED
@@ -0,0 +1,3 @@
+ This section of the documentation is dedicated to examples highlighting Swarms functionality.
+
+ We try to keep all examples up to date, but if you think there is a bug, please [submit a pull request](https://github.com/kyegomez/swarms-docs/tree/main/docs/examples). We are also more than happy to include new examples :)