Update README.md
README.md
@@ -62,3 +62,58 @@ In addition, the sequence-to-sequence architecture of the model makes it prone t
We anticipate that Moonshine models’ transcription capabilities may be used for improving accessibility tools, especially for real-time transcription. The real value of beneficial applications built on top of Moonshine models suggests that the disparate performance of these models may have real economic implications.

There are also potential dual-use concerns that come with releasing Moonshine. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects.

## Setup

* Install `uv` for Python environment management

  - Follow instructions [here](https://github.com/astral-sh/uv)

* Create and activate virtual environment

```shell
uv venv env_moonshine
source env_moonshine/bin/activate
```

* Install the `useful-moonshine` package from this GitHub repo

```shell
uv pip install useful-moonshine@git+https://github.com/usefulsensors/moonshine.git
```

The `moonshine` inference code is written in Keras and can run with any of the
backends that Keras supports. The above command installs Moonshine with the
PyTorch backend. To run the provided inference code, you have to instruct Keras
to use the PyTorch backend by setting an environment variable:

```shell
export KERAS_BACKEND=torch
```
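
Keras resolves `KERAS_BACKEND` when it is first imported, so you can also select the backend from Python rather than from the shell. The snippet below is a minimal sketch of that approach (not part of the upstream instructions); the only requirement is that the variable is set before `keras`/`moonshine` are imported.

```python
import os

# Must be set before Keras (and therefore moonshine) is imported,
# otherwise Keras falls back to its default backend.
os.environ["KERAS_BACKEND"] = "torch"

import moonshine  # picks up the backend selected above
```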

To run with the TensorFlow backend, run the following to install Moonshine and set the backend:

```shell
uv pip install useful-moonshine[tensorflow]@git+https://github.com/usefulsensors/moonshine.git
export KERAS_BACKEND=tensorflow
```

To run with the JAX backend, run the following:

```shell
uv pip install useful-moonshine[jax]@git+https://github.com/usefulsensors/moonshine.git
export KERAS_BACKEND=jax
# Use useful-moonshine[jax-cuda] for JAX on GPU
```
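
Whichever backend you choose, a quick way to confirm that Keras picked it up is to print the active backend before running inference (a small sanity check, not part of the upstream instructions):

```python
import keras

# Prints the backend Keras resolved from KERAS_BACKEND,
# e.g. "torch", "tensorflow", or "jax".
print(keras.backend.backend())
```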

* Test transcribing an audio file

```shell
python
>>> import moonshine
>>> moonshine.transcribe(moonshine.ASSETS_DIR / 'beckett.wav', 'moonshine/tiny')
['Ever tried ever failed, no matter try again, fail again, fail better.']
```

* The first argument is the filename of an audio file, and the second is the name of a Moonshine model. `moonshine/tiny` and `moonshine/base` are the currently available models.
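
The same call works from a script as well as the interactive interpreter. The sketch below is illustrative only: `my_recording.wav` is a placeholder for your own audio file, and it assumes `KERAS_BACKEND` has already been set as described above.

```python
from pathlib import Path

import moonshine

# transcribe() returned a list of strings in the example above,
# so print the whole result.
result = moonshine.transcribe(Path("my_recording.wav"), "moonshine/base")
print(result)
```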