carlosug committed on
Commit
00e1cf9
1 Parent(s): 6afe078

Update golden-alpaca.csv

Files changed (1)
  1. golden-alpaca.csv +886 -87
golden-alpaca.csv CHANGED
@@ -1,5 +1,5 @@
1
- software,repo_name,readme_url,portal,stars,selection,categories,,date_collection,date_submission,content,method,steps,seq_order,optional_steps,optional_instructions,seq_order,extra_info_optional,"available_method (1=source, 2=packagemanager, 3=container, 4=binary)"
2
- vcr-video-representation-for-contextual,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/oronnir/VCR/main/README.md,machine_learning,1,latest,,,13/02/2024,12/2/24,,,,,,,,,
3
  ensuring-trustworthy-and-ethical-behaviour-in,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/AAAI-DISIM-UnivAQ/DALI/master/README.md,machine_learning,15,latest,,,13/02/2024,12/2/24,"## Installation
4
 
5
  **OS X & Linux:**
@@ -23,8 +23,8 @@ bash startmas.sh
23
       You will see different windows opening:
24
  *      Prolog LINDA server (active_server_wi.pl)
25
  *      Prolog FIPA client (active_user_wi.pl)
26
- *      1 instance of DALI metaintepreter for each agent (active_dali_wi.pl)",binary,"1. To download and install SICStus Prolog (it is needed), follow the instructions at https://sicstus.sics.se/download4.html.
27
- 2. Then, you can download DALI and test it by running an example DALI MAS:
28
  ```sh
29
  git clone https://github.com/AAAI-DISIM-UnivAQ/DALI.git
30
  cd DALI/Examples/advanced
@@ -32,25 +32,22 @@ bash startmas.sh
32
  ```","1,2","**Windows:**
33
  1. To download and install SICStus Prolog (it is needed), follow the instructions at https://sicstus.sics.se/download4.html.
34
  2. Then, you can download DALI from https://github.com/AAAI-DISIM-UnivAQ/DALI.git.
35
- 3. Unzip the repository, go to the folder ""DALI/Examples/basic"", and test if DALI works by duble clicking ""startmas.bat"" file (this will launch an example DALI MAS). \","**Windows:**
36
- 1. To download and install SICStus Prolog (it is needed), follow the instructions at https://sicstus.sics.se/download4.html.
37
- 2. Then, you can download DALI from https://github.com/AAAI-DISIM-UnivAQ/DALI.git.
38
- 3. Unzip the repository, go to the folder ""DALI/Examples/basic"", and test if DALI works by duble clicking ""startmas.bat"" file (this will launch an example DALI MAS). \","1,2","You will see different windows opening:
39
  Prolog LINDA server (active_server_wi.pl)
40
  Prolog FIPA client (active_user_wi.pl)
41
- 1 instance of DALI metaintepreter for each agent (active_dali_wi.pl)",1
42
- synthesizing-sentiment-controlled-feedback,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/MIntelligence-Group/CMFeed/main/README.md,machine_learning,0,latest,,,13/02/2024,12/2/24,,,,,,,,,
43
  only-the-curve-shape-matters-training,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/cfeng783/GTT/main/README.md,machine_learning,0,latest,,,13/02/2024,12/2/24,"## Getting Started
44
 
45
  #### Install dependencies (with python 3.10)
46
 
47
  ```shell
48
  pip install -r requirements.txt
49
- ```",source,,,,,,,
50
- from-uncertainty-to-precision-enhancing,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/fer-agathe/calibration_binary_classifier/main/README.md,machine_learning,0,latest,,,13/02/2024,12/2/24,,,,,,,,,
51
- stochastic-gradient-flow-dynamics-of-test,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/rodsveiga/sgf_dyn/main/README.md,machine_learning,,,,,13/02/2024,12/2/24,,,,,,,,,
52
- accuracy-of-textfooler-black-box-adversarial,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/zero-one-loss/wordcnn01/main/LICENSE*,machine_learning,0,,,,13/02/2024,12/2/24,,,,,,,,,
53
- differentially-private-decentralized-learning-1,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/totilas/DPrandomwalk/main/README.md,machine_learning,,,,,13/02/2024,12/2/24,,,,,,,,,
54
  aydiv-adaptable-yielding-3d-object-detection,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/sanjay-810/AYDIV2/main/README.md,machine_learning,1,,,,13/02/2024,12/2/24,"### **Installation**
55
  1. Prepare for the running environment.
56
 
@@ -94,16 +91,15 @@ aydiv-adaptable-yielding-3d-object-detection,https://paperwithcode.com/paper/,ht
94
  cd pcdet/ops/iou3d/cuda_op
95
  python setup.py develop
96
  cd ../../../..
97
- ```","source,docker","1. Prepare for the running environment:
98
- You can use the docker image provided by [`OpenPCDet`](https://github.com/open-mmlab/OpenPCDet)
99
- 2. Prepare for the data:
100
- ```
101
  cd Aydiv
102
  python depth_to_lidar.py
103
- ```","1,2",,,,,
 
104
  cartesian-atomic-cluster-expansion-for,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/BingqingCheng/cace/main/README.md,machine_learning,4,latest,,,13/02/2024,12/2/24,"## Installation
105
 
106
- Please refer to the `setup.py` file for installation instructions.",,,,,,,,
107
  teller-a-trustworthy-framework-for,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/less-and-less-bugs/Trust_TELLER/main/README.md,machine_learning,1,latest,,,13/02/2024,12/2/24,"## Getting Started
108
 
109
  Step 1: Download the dataset folder from onedrive by [data.zip](https://portland-my.sharepoint.com/:u:/g/personal/liuhui3-c_my_cityu_edu_hk/EfApQlFP3PhFjUW4527STo0BALMdP16zs-HPMNgwQVFWsA?e=zoHlW2). Unzip this folder into the project directory. You can find four original datasets, pre-processed datasets (i.e., val.jsonl, test.jsonl, train.jsonl in each dataset folder) and the files including questions and answers
@@ -112,12 +108,12 @@ Step 2: Place you OpenAI key into the file named api_key.txt.
112
 
113
  ```
114
  openai.api_key = """"
115
- ```",binary,"1: Download the dataset folder from onedrive by https://portland-my.sharepoint.com/:u:/g/personal/liuhui3-c_my_cityu_edu_hk/EfApQlFP3PhFjUW4527STo0BALMdP16zs-HPMNgwQVFWsA?e=zoHlW2.
116
- 2. Unzip this folder into the project directory.
117
- 3. Place you OpenAI key into the file named api_key.txt.
118
  ```
119
  openai.api_key = """"
120
- ```","1,2,3",,,,,
121
  continuous-time-radar-inertial-and-lidar,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/utiasASRL/steam_icp/master/README.md,computer_science,77,latest,robotics,,13/02/2024,9/2/24,"## Installation
122
 
123
  Clone this repository and its submodules.
@@ -149,8 +145,8 @@ docker run -it --name steam_icp \
149
 
150
  ```bash
151
  bash build.sh
152
- ```",,"1. Clone this repository and its submodules.
153
- 2. We use docker to install dependencies The recommended way to build the docker image is
154
  ```bash
155
  docker build -t steam_icp \
156
  --build-arg USERID=$(id -u) \
@@ -158,7 +154,7 @@ docker build -t steam_icp \
158
  --build-arg USERNAME=$(whoami) \
159
  --build-arg HOMEDIR=${HOME} .
160
  ```
161
- 3. When starting a container, remember to mount the code, dataset, and output directories to proper locations in the container.
162
  An example command to start a docker container with the image is
163
 
164
  ```bash
@@ -170,12 +166,12 @@ docker run -it --name steam_icp \
170
  -v ${HOME}:${HOME}:rw \
171
  steam_icp
172
 
173
- 4.(Inside Container) Go to the root directory of this repository and build STEAM-ICP
174
 
175
  ```bash
176
  bash build.sh
177
- ```","1,2,3,4",,,,,3
178
- towards-a-thermodynamical-deep-learning,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/fedezocco/ThermoVisMedRob/main/README.md,computer_science,0,latest,robotics,,13/02/2024,8/2/24,,,,,,,,,
179
  robust-parameter-fitting-to-realistic-network,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/PFischbeck/parameter-fitting-experiments/main/Readme.md,computer_science,0,latest,Social and Information Networks Data Structures and Algorithms,,13/02/2024,8/2/24,"# Installation
180
 
181
  - Make sure you have Python, Pip and R installed.
@@ -195,25 +191,25 @@ R -e 'install.packages(c(""ggplot2"", ""reshape2"", ""plyr"", ""dplyr"", ""scale
195
  ```
196
 
197
  - Download the file `konect-data.zip` from [Zenodo](https://doi.org/10.5281/zenodo.10629451) and extract its contents into the folder `input_data/konect`
198
- - Optional: Download the file `output-data.zip` from [Zenodo](https://doi.org/10.5281/zenodo.10629451) and extract its contents into the folder `output_data`. This way, you can access all experiment results without running them yourself.",source,"1. Make sure you have Python, Pip and R installed.
199
- 2. Checkout this repository
200
- 3. Install the python dependencies with
201
  ```
202
  pip3 install -r requirements.txt
203
  ```
204
- 4. Install the `pygirgs` package at https://github.com/PFischbeck/pygirgs
205
- 5. Install the R dependencies (used for plots) with
206
  ```
207
  R -e 'install.packages(c(""ggplot2"", ""reshape2"", ""plyr"", ""dplyr"", ""scales""), repos=""https://cloud.r-project.org/"")'
208
  ```
209
- 6. Download the file `konect-data.zip` from [Zenodo](https://doi.org/10.5281/zenodo.10629451) and extract its contents into the folder `input_data/konect`
210
- 7. Optional: Download the file `output-data.zip` from [Zenodo](https://doi.org/10.5281/zenodo.10629451) and extract its contents into the folder `output_data`. This way, you can access all experiment results without running them yourself.","1,2,3,4,5,6,7","
211
- 7. Optional: Download the file `output-data.zip` from [Zenodo](https://doi.org/10.5281/zenodo.10629451) and extract its contents into the folder `output_data`. This way, you can access all experiment results without running them yourself.",,,,
212
  get-tok-a-genai-enriched-multimodal-tiktok,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/gabbypinto/GET-Tok-Peru/main/README.md,computer_science,1,latest,Social and Information Networks Computers and Society Human-Computer Interaction ,,13/02/2024,8/2/24,"## Installation
213
  pip install -r requirements.txt
214
 
215
  *Note: I did not use a virtual environment so the packages in the requirements.txt file are probably not reflective of all the packages used in this project. If some issues pop up please don't hesitate to email me at: gpinto@usc.edu*",,,,,,,,
216
- a-longitudinal-study-of-italian-and-french,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/orsoFra/LS_FRIT_UKR/main/README.md,computer_science,0,latest,Social and Information Networks Computers and Society,,13/02/2024,7/2/24,,,,,,,,,
 
217
  geometric-slosh-free-tracking-for-robotic,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/jonarriza96/gsft/main/README.md,computer_science,1,latest,robotics,,13/02/2024,7/2/24,"## Installation
218
 
219
  ### Dependencies
@@ -240,31 +236,31 @@ To ensure that Python paths are properly defined, update the `~/.bashrc` by addi
240
  ```
241
  export GSFT_PATH=/path_to_gsfc
242
  export PYTHONPATH=$PYTHONPATH:/$GSFT_PATH
243
- ```",source,"1. Check dependencies
244
- 2. Initialize git submodules with
245
  ```
246
  git submodule init
247
  git submodule update
248
  ```
249
- 3. Install the specific versions of every package from `requirements.txt` in a new conda environment:
250
  ```
251
  conda create --name gsft python=3.9
252
  conda activate gsft
253
  pip install -r requirements.txt
254
  ```
255
- 4. Create variables to ensure that Python paths are properly defined, update the `~/.bashrc` by adding the following lines
256
  ```
257
  export GSFT_PATH=/path_to_gsfc
258
  export PYTHONPATH=$PYTHONPATH:/$GSFT_PATH
259
- ```","1,2,3,4",,,,,
260
  real-time-line-based-room-segmentation-and,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/EricssonResearch/Line-Based-Room-Segmentation-and-EDF/release/README.md,computer_science,0,latest,robotics,,13/02/2024,7/2/24,"## Installation
261
  The project can be installed by running the following command in your terminal:
262
  ```bash
263
  pip install -r requirements.txt
264
- ```",source,"1. Run the command in your terminal:
265
  ```
266
  pip install -r requirements.txt
267
- ```",1,,,,,
268
  viga,https://bio.tools/,https://raw.githubusercontent.com/viralInformatics/VIGA/master/README.md,,7,https://bio.tools/t?sort=citationDate&ord=desc,,command_line,13/02/2024,last week,"## Installation
269
 
270
  ### Step1: Download VIGA
@@ -321,41 +317,40 @@ Base on python 3.6.8
321
  ```
322
  pip install pandas=1.1.5 numpy=1.19.5 matplotlib=3.3.4 biopython=1.79
323
  ```
324
- ",source,"1. Download VIGA with Git from GitHub:
325
  ```
326
  git clone https://github.com/viralInformatics/VIGA.git
327
  ```
328
  or Download ZIP to local
329
- 2. Download Database:
330
- download taxdmp.zip: https://ftp.ncbi.nlm.nih.gov/pub/taxonomy/ and unzip taxdmp.zip and put it in ./db/
331
- 2.2 download ""prot.accession2taxid"" file from https://ftp.ncbi.nlm.nih.gov/pub/taxonomy/accession2taxid/
332
- 2.3. download ""RefSeqVirusProtein"" file from
333
  wget -c ftp.ncbi.nlm.nih.gov/refseq/release/viral/viral.1.protein.faa.gz
334
  gzip -d viral.1.protein.faa.gz
335
  mv viral.1.protein.faa RefSeqVirusProtein
336
- 2.4. download ""nr"" file from
337
  wget -c ftp://ftp.ncbi.nlm.nih.gov/blast/db/FASTA/nr.gz
338
  or ascp -T -i asperaweb_id_dsa.openssh --host=ftp.ncbi.nih.gov --user=anonftp --mode=recv /blast/db/FASTA/nr.gz ./
339
  gzip -d nr.gz
340
- 2.5. Use Diamond v2.0.11.149 to create two separate databases as the indexing libraries in the current version are incompatible with each other.
341
- 2.6. In order to set up a reference database for DIAMOND, the makedb command needs to be executed with the following command line:
342
  diamond makedb --in YourPath/RefSeqVirusProtein -d Diamond_RefSeqVirusProtein --taxonmap YourPath/prot.accession2taxid --taxonnodes YourPath/nodes.dmp
343
  diamond makedb --in nr -d Dimond_nr --taxonmap YourPath/prot.accession2taxid --taxonnodes YourPath/nodes.dmp
344
  ```
345
- ### Step 3: Installation of dependent software
346
- 3.1. Installing Some Software Using Conda
347
  ```
348
  conda install fastp=0.12.4 trinity=2.8.5 diamond=2.0.11.149 ragtag=2.1.0 quast=5.0.2
349
  ```
350
- 3.2. Manual Installation of MetaCompass
351
  https://github.com/marbl/MetaCompass
352
-
353
- ### Step 4: Python Dependencies
354
- 4.1.Base on python 3.6.8
355
  ```
356
  pip install pandas=1.1.5 numpy=1.19.5 matplotlib=3.3.4 biopython=1.79
357
- ```","1,2,3,4,5,6",,,,,"1,4"
358
- lncrtpred,https://bio.tools/,https://raw.githubusercontent.com/zglabDIB/LncRTPred/main/README.md,,,https://bio.tools/t?sort=citationDate&ord=desc,,command_line,13/02/2024,8 months,,,,,,,,,
359
  nrn-ez,https://bio.tools/,https://raw.githubusercontent.com/scimemia/NRN-EZ/master/README.md,,,,,Script,13/02/2024,last week,"**INSTALLATION FOR VERSION 1.1.6**
360
 
361
  NRN-EZ was built with PyInstaller 3.6, and requires the following languages and libraries:
@@ -370,13 +365,13 @@ Installation instructions for Linux (Ubuntu and Pop!_OS): download the Linux zip
370
 
371
  Installation instructions for Mac OS: download the Mac zip file and copy the NRN-EZ app to the Applications folder.
372
 
373
- Installation instructions for Windows: download the Win zip file and run the installation wizard.",binary,"1. install requirements:
374
  Python 3.6.9 and higher (currently up to 3.10)
375
  PyQt 5.10.1
376
  PyQtGraph 0.11.0
377
- 2. for linux:download the Linux zip file and, from the command window, run a bash command for the install.sh file, in the corresponding installation folder.
378
  2. for Mac OS: download the Mac zip file and copy the NRN-EZ app to the Applications folder.
379
- 2. for Windows: download the Win zip file and run the installation wizard.",,,,,,
380
  causnet,https://bio.tools/,https://raw.githubusercontent.com/nand1155/CausNet/main/README.md,,0,https://bio.tools/t?sort=citationDate&ord=desc,,Library,13/02/2024,two years,"## Installation
381
 
382
  You can install the development version from GitHub with:
@@ -384,12 +379,12 @@ You can install the development version from GitHub with:
384
  ``` r
385
  require(""devtools"")
386
  install_github(""https://github.com/nand1155/CausNet"")
387
- ```",source,"install the development version from GitHub with:
388
  ``` r
389
  require(""devtools"")
390
  install_github(""https://github.com/nand1155/CausNet"")
391
- ```",,,,,,
392
- viralcc,https://bio.tools/,https://raw.githubusercontent.com/dyxstat/Reproduce_ViralCC/main/README.md,,0,https://bio.tools/t?sort=citationDate&ord=desc,,command_line,13/02/2024,8 months,"# Instruction of reproducing results in ViralCC paper
393
  We take the cow fecal datasets for example. The other two datasets were processed following the same procedure.
394
 
395
  Scripts to process the intermediate data and plot figures are available in the folder [Scripts](https://github.com/dyxstat/Reproduce_ViralCC/tree/main/Scripts).
@@ -468,7 +463,57 @@ python ./viralcc.py pipeline -v final.contigs.fa COW_MAP_SORTED.bam viral.txt ou
468
  ```
469
  python concatenation.py -p out_cow/VIRAL_BIN -o viralCC_cow_bins.fa
470
  checkv end_to_end viralCC_cow_bins.fa output_checkv_viralcc_cow -t 16 -d /panfs/qcb-panasas/yuxuandu/checkv-db-v1.0
471
- ```",,,,,,,,
472
  DRaW,https://bio.tools/,https://raw.githubusercontent.com/BioinformaticsIASBS/DRaW/main/README.md,,0,https://bio.tools/t?sort=citationDate&ord=desc,,,,,"# Running DRaW on COVID-19 datasets
473
  The DRaW has been applied on three COVID-19 datasets, DS1, DS2, and DS3. There are three subdirectories, ""DS1_repur"", ""DS2_repur"", and ""DS3_repur"", in the ""Drug-Repurposing"" directory. Each subdirectory has been assigned to one of the mentioned datasets. We put the Draw implementation file for each dataset in each subdirectory separately. This is to keep the corresponding hyperparameters of each dataset.
474
  We use Adam as the optimizer with a learning rate equal to 0.001, beta1 = 0.9, beta2 = 0.999, and epsilon = 1e_7. The dropout rate is set to 0.5. The batch size is chosen by the number of samples per dataset. This hyperparameter for DS1 is equal to 8, and those for DS2 and DS3 are set to 32.
@@ -487,7 +532,12 @@ The input parameter ""dataset_name"" is one of the following five datasets' nam
487
  ```bash
488
  cd Performance_analysis
489
  python main.py dataset_name
490
- ```",,,,,,,,
491
  NRN-EZ,https://bio.tools/,https://raw.githubusercontent.com/scimemia/NRN-EZ/master/README.md,,,,,,,6 months,"**INSTALLATION FOR VERSION 1.1.6**
492
 
493
  NRN-EZ was built with PyInstaller 3.6, and requires the following languages and libraries:
@@ -502,7 +552,12 @@ Installation instructions for Linux (Ubuntu and Pop!_OS): download the Linux zip
502
 
503
  Installation instructions for Mac OS: download the Mac zip file and copy the NRN-EZ app to the Applications folder.
504
 
505
- Installation instructions for Windows: download the Win zip file and run the installation wizard.",,,,,,,,
506
  guiding-instruction-based-image-editing-via,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/apple/ml-mgie/main/README.md,computer_science,3089,top,,,13/02/2024,29/09/2023,"## Requirements
507
  ```
508
  conda create -n mgie python=3.10 -y
@@ -529,34 +584,778 @@ pip install -U pydrive2 gdown wget
529
  cd ..
530
  cp mgie_llava.py LLaVA/llava/model/llava.py
531
  cp mgie_train.py LLaVA/llava/train/train.py
532
- ```",,"1. create conda environment
533
  conda create -n mgie python=3.10 -y
534
  conda activate mgie
535
  conda update -n base -c defaults conda setuptools -y
536
  conda install -c conda-forge git git-lfs ffmpeg vim htop ninja gpustat -y
537
- conda clean -a -y
538
- 2. install dependencies
539
  pip install -U pip cmake cython==0.29.36 pydantic==1.10 numpy
540
  pip install -U gdown pydrive2 wget jupyter jupyterlab jupyterthemes ipython
541
  pip install -U sentencepiece transformers diffusers tokenizers datasets gradio==3.37 accelerate evaluate git+https://github.com/openai/CLIP.git
542
  pip install -U https://download.pytorch.org/whl/cu113/torch-1.12.0%2Bcu113-cp310-cp310-linux_x86_64.whl https://download.pytorch.org/whl/cu113/torchvision-0.13.0%2Bcu113-cp310-cp310-linux_x86_64.whl https://download.pytorch.org/whl/cu113/torchaudio-0.12.0%2Bcu113-cp310-cp310-linux_x86_64.whl
543
- pip install -U deepspeed
544
-
545
- 3. git clone this repo
546
  cd ml-mgie
547
  git submodule update --init --recursive
548
- cd LLaVA
549
- 4. install module
550
  pip install -e .
551
  pip install -U https://download.pytorch.org/whl/cu113/torch-1.12.0%2Bcu113-cp310-cp310-linux_x86_64.whl https://download.pytorch.org/whl/cu113/torchvision-0.13.0%2Bcu113-cp310-cp310-linux_x86_64.whl https://download.pytorch.org/whl/cu113/torchaudio-0.12.0%2Bcu113-cp310-cp310-linux_x86_64.whl
552
  pip install -U ninja flash-attn==1.0.2
553
  pip install -U pydrive2 gdown wget
554
-
555
  cd ..
556
  cp mgie_llava.py LLaVA/llava/model/llava.py
557
  cp mgie_train.py LLaVA/llava/train/train.py
558
- ```","1,2,3,4",,,,,1
559
- self-play-fine-tuning-converts-weak-language,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/uclaml/SPIN/main/README.md,computer_science,430,top,,,13/02/2024,2/1/24,,,,,,,,,
560
- genegpt-teaching-large-language-models-to-use,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/ncbi/GeneGPT/main/README.md,computer_science,214,top,,,13/02/2024,19/04/2023,,,,,,,,,
561
- the-boundary-of-neural-network-trainability,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/Sohl-Dickstein/fractal/main/README.md,computer_science,41,top,,,13/02/2024,9/2/24,,,,,,,,,
562
- learning-to-fly-in-seconds,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/arplaboratory/learning-to-fly/master/README.MD,computer_science,201,top,,,13/02/2024,22/11/2023,,,,,,,,,
1
+ software,repo_name,readme_url,portal,stars,selection,categories,,date_collection,date_submission,content,plan,steps,seq_order,optional_steps,extra_info_optional
2
+ vcr-video-representation-for-contextual,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/oronnir/VCR/main/README.md,machine_learning,1,latest,,,13/02/2024,12/2/24,,,,,,
3
  ensuring-trustworthy-and-ethical-behaviour-in,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/AAAI-DISIM-UnivAQ/DALI/master/README.md,machine_learning,15,latest,,,13/02/2024,12/2/24,"## Installation
4
 
5
  **OS X & Linux:**
 
23
       You will see different windows opening:
24
  *      Prolog LINDA server (active_server_wi.pl)
25
  *      Prolog FIPA client (active_user_wi.pl)
26
+ *      1 instance of DALI metainterpreter for each agent (active_dali_wi.pl)","binary, source","[Binary] 1. To download and install SICStus Prolog (it is needed), follow the instructions at https://sicstus.sics.se/download4.html.
27
+ [Source] 2. Then, you can download DALI and test it by running an example DALI MAS:
28
  ```sh
29
  git clone https://github.com/AAAI-DISIM-UnivAQ/DALI.git
30
  cd DALI/Examples/advanced
 
32
  ```","1,2","**Windows:**
33
  1. To download and install SICStus Prolog (it is needed), follow the instructions at https://sicstus.sics.se/download4.html.
34
  2. Then, you can download DALI from https://github.com/AAAI-DISIM-UnivAQ/DALI.git.
35
+ 3. Unzip the repository, go to the folder ""DALI/Examples/basic"", and test if DALI works by double-clicking the ""startmas.bat"" file (this will launch an example DALI MAS). \","You will see different windows opening:
 
 
 
36
  Prolog LINDA server (active_server_wi.pl)
37
  Prolog FIPA client (active_user_wi.pl)
38
+ 1 instance of DALI metainterpreter for each agent (active_dali_wi.pl)"
39
+ synthesizing-sentiment-controlled-feedback,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/MIntelligence-Group/CMFeed/main/README.md,machine_learning,0,latest,,,13/02/2024,12/2/24,,,,,,
40
  only-the-curve-shape-matters-training,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/cfeng783/GTT/main/README.md,machine_learning,0,latest,,,13/02/2024,12/2/24,"## Getting Started
41
 
42
  #### Install dependencies (with python 3.10)
43
 
44
  ```shell
45
  pip install -r requirements.txt
46
+ ```",source,1. Install dependencies with pip install -r requirements.txt,,,
47
+ from-uncertainty-to-precision-enhancing,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/fer-agathe/calibration_binary_classifier/main/README.md,machine_learning,0,latest,,,13/02/2024,12/2/24,,,,,,
48
+ stochastic-gradient-flow-dynamics-of-test,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/rodsveiga/sgf_dyn/main/README.md,machine_learning,,,,,13/02/2024,12/2/24,,,,,,
49
+ accuracy-of-textfooler-black-box-adversarial,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/zero-one-loss/wordcnn01/main/LICENSE*,machine_learning,0,,,,13/02/2024,12/2/24,,,,,,
50
+ differentially-private-decentralized-learning-1,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/totilas/DPrandomwalk/main/README.md,machine_learning,,,,,13/02/2024,12/2/24,,,,,,
51
  aydiv-adaptable-yielding-3d-object-detection,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/sanjay-810/AYDIV2/main/README.md,machine_learning,1,,,,13/02/2024,12/2/24,"### **Installation**
52
  1. Prepare for the running environment.
53
 
 
91
  cd pcdet/ops/iou3d/cuda_op
92
  python setup.py develop
93
  cd ../../../..
94
+ ```","source,docker","[source]: step1. Prepare for the running environment.
95
+ step2. prepare for the data:```
 
 
96
  cd Aydiv
97
  python depth_to_lidar.py
98
+ ```
99
+ [docker]: step1. You can use the docker image provided by [`OpenPCDet`](https://github.com/open-mmlab/OpenPCDet)","1,2",,
100
  cartesian-atomic-cluster-expansion-for,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/BingqingCheng/cace/main/README.md,machine_learning,4,latest,,,13/02/2024,12/2/24,"## Installation
101
 
102
+ Please refer to the `setup.py` file for installation instructions.",source,[source] step1. please refer to the `setup.py` file for installation instructions.,,,
103
  teller-a-trustworthy-framework-for,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/less-and-less-bugs/Trust_TELLER/main/README.md,machine_learning,1,latest,,,13/02/2024,12/2/24,"## Getting Started
104
 
105
  Step 1: Download the dataset folder from onedrive by [data.zip](https://portland-my.sharepoint.com/:u:/g/personal/liuhui3-c_my_cityu_edu_hk/EfApQlFP3PhFjUW4527STo0BALMdP16zs-HPMNgwQVFWsA?e=zoHlW2). Unzip this folder into the project directory. You can find four original datasets, pre-processed datasets (i.e., val.jsonl, test.jsonl, train.jsonl in each dataset folder) and the files including questions and answers
 
108
 
109
  ```
110
  openai.api_key = """"
111
+ ```",binary,"[binary] step1: Download the dataset folder from onedrive by https://portland-my.sharepoint.com/:u:/g/personal/liuhui3-c_my_cityu_edu_hk/EfApQlFP3PhFjUW4527STo0BALMdP16zs-HPMNgwQVFWsA?e=zoHlW2.
112
+ step2. Unzip this folder into the project directory.
113
+ step3. Place your OpenAI key into the file named api_key.txt.
114
  ```
115
  openai.api_key = """"
116
+ ```","1,2,3",,
117
  continuous-time-radar-inertial-and-lidar,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/utiasASRL/steam_icp/master/README.md,computer_science,77,latest,robotics,,13/02/2024,9/2/24,"## Installation
118
 
119
  Clone this repository and its submodules.
 
145
 
146
  ```bash
147
  bash build.sh
148
+ ```",source,"[source] step1. Clone this repository and its submodules.
149
+ step2. We use docker to install dependencies. The recommended way to build the docker image is
150
  ```bash
151
  docker build -t steam_icp \
152
  --build-arg USERID=$(id -u) \
 
154
  --build-arg USERNAME=$(whoami) \
155
  --build-arg HOMEDIR=${HOME} .
156
  ```
157
+ step3. When starting a container, remember to mount the code, dataset, and output directories to proper locations in the container.
158
  An example command to start a docker container with the image is
159
 
160
  ```bash
 
166
  -v ${HOME}:${HOME}:rw \
167
  steam_icp
168
 
169
+ step4.(Inside Container) Go to the root directory of this repository and build STEAM-ICP
170
 
171
  ```bash
172
  bash build.sh
173
+ ```","1,2,3,4",,
174
+ towards-a-thermodynamical-deep-learning,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/fedezocco/ThermoVisMedRob/main/README.md,computer_science,0,latest,robotics,,13/02/2024,8/2/24,,,,,,
175
  robust-parameter-fitting-to-realistic-network,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/PFischbeck/parameter-fitting-experiments/main/Readme.md,computer_science,0,latest,Social and Information Networks Data Structures and Algorithms,,13/02/2024,8/2/24,"# Installation
176
 
177
  - Make sure you have Python, Pip and R installed.
 
191
  ```
192
 
193
  - Download the file `konect-data.zip` from [Zenodo](https://doi.org/10.5281/zenodo.10629451) and extract its contents into the folder `input_data/konect`
194
+ - Optional: Download the file `output-data.zip` from [Zenodo](https://doi.org/10.5281/zenodo.10629451) and extract its contents into the folder `output_data`. This way, you can access all experiment results without running them yourself.",source,"[source] step1. Make sure you have Python, Pip and R installed.
195
+ step2. Checkout this repository
196
+ step3. Install the python dependencies with
197
  ```
198
  pip3 install -r requirements.txt
199
  ```
200
+ step4. Install the `pygirgs` package at https://github.com/PFischbeck/pygirgs
201
+ step5. Install the R dependencies (used for plots) with
202
  ```
203
  R -e 'install.packages(c(""ggplot2"", ""reshape2"", ""plyr"", ""dplyr"", ""scales""), repos=""https://cloud.r-project.org/"")'
204
  ```
205
+ step6. Download the file `konect-data.zip` from [Zenodo](https://doi.org/10.5281/zenodo.10629451) and extract its contents into the folder `input_data/konect`
206
+ step7. Optional: Download the file `output-data.zip` from [Zenodo](https://doi.org/10.5281/zenodo.10629451) and extract its contents into the folder `output_data`. This way, you can access all experiment results without running them yourself.","1,2,3,4,5,6,7","step7. Optional: Download the file `output-data.zip` from [Zenodo](https://doi.org/10.5281/zenodo.10629451) and extract its contents into the folder `output_data`. This way, you can access all experiment results without running them yourself.",
 
207
  get-tok-a-genai-enriched-multimodal-tiktok,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/gabbypinto/GET-Tok-Peru/main/README.md,computer_science,1,latest,Social and Information Networks Computers and Society Human-Computer Interaction ,,13/02/2024,8/2/24,"## Installation
208
  pip install -r requirements.txt
209
 
210
+ *Note: I did not use a virtual environment so the packages in the requirements.txt file are probably not reflective of all the packages used in this project. If some issues pop up please don't hesitate to email me at: gpinto@usc.edu*",packagemanager,"step1.
211
+ pip install -r requirements.txt ",,,*Note: I did not use a virtual environment so the packages in the requirements.txt file are probably not reflective of all the packages used in this project. If some issues pop up please don't hesitate to email me at: gpinto@usc.edu*
212
+ a-longitudinal-study-of-italian-and-french,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/orsoFra/LS_FRIT_UKR/main/README.md,computer_science,0,latest,Social and Information Networks Computers and Society,,13/02/2024,7/2/24,,,,,,
213
  geometric-slosh-free-tracking-for-robotic,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/jonarriza96/gsft/main/README.md,computer_science,1,latest,robotics,,13/02/2024,7/2/24,"## Installation
214
 
215
  ### Dependencies
 
236
  ```
237
  export GSFT_PATH=/path_to_gsfc
238
  export PYTHONPATH=$PYTHONPATH:/$GSFT_PATH
239
+ ```",source,"[source] step1. Check dependencies
240
+ step2. Initialize git submodules with
241
  ```
242
  git submodule init
243
  git submodule update
244
  ```
245
+ step3. Install the specific versions of every package from `requirements.txt` in a new conda environment:
246
  ```
247
  conda create --name gsft python=3.9
248
  conda activate gsft
249
  pip install -r requirements.txt
250
  ```
251
+ step4. Create variables to ensure that Python paths are properly defined, update the `~/.bashrc` by adding the following lines
252
  ```
253
  export GSFT_PATH=/path_to_gsfc
254
  export PYTHONPATH=$PYTHONPATH:/$GSFT_PATH
255
+ ```","1,2,3,4",,
256
  real-time-line-based-room-segmentation-and,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/EricssonResearch/Line-Based-Room-Segmentation-and-EDF/release/README.md,computer_science,0,latest,robotics,,13/02/2024,7/2/24,"## Installation
257
  The project can be installed by running the following command in your terminal:
258
  ```bash
259
  pip install -r requirements.txt
260
+ ```",source,"[source] step1. Run the command in your terminal:
261
  ```
262
  pip install -r requirements.txt
263
+ ```",1,,
264
  viga,https://bio.tools/,https://raw.githubusercontent.com/viralInformatics/VIGA/master/README.md,,7,https://bio.tools/t?sort=citationDate&ord=desc,,command_line,13/02/2024,last week,"## Installation
265
 
266
  ### Step1: Download VIGA
 
317
  ```
318
  pip install pandas=1.1.5 numpy=1.19.5 matplotlib=3.3.4 biopython=1.79
319
  ```
320
+ ",source,"[source] step1. Download VIGA with Git from GitHub:
321
  ```
322
  git clone https://github.com/viralInformatics/VIGA.git
323
  ```
324
  or Download ZIP to local
325
+ step2.download Database:
326
+ step2.1.download taxdmp.zip: https://ftp.ncbi.nlm.nih.gov/pub/taxonomy/ and unzip taxdmp.zip and put it in ./db/
327
+ step2.2.download ""prot.accession2taxid"" file from https://ftp.ncbi.nlm.nih.gov/pub/taxonomy/accession2taxid/
328
+ step2.3.download ""RefSeqVirusProtein"" file from
329
  wget -c ftp.ncbi.nlm.nih.gov/refseq/release/viral/viral.1.protein.faa.gz
330
  gzip -d viral.1.protein.faa.gz
331
  mv viral.1.protein.faa RefSeqVirusProtein
332
+ step2.4. download ""nr"" file from
333
  wget -c ftp://ftp.ncbi.nlm.nih.gov/blast/db/FASTA/nr.gz
334
  or ascp -T -i asperaweb_id_dsa.openssh --host=ftp.ncbi.nih.gov --user=anonftp --mode=recv /blast/db/FASTA/nr.gz ./
335
  gzip -d nr.gz
336
+ step2.5.use Diamond v2.0.11.149 to create two separate databases as the indexing libraries in the current version are incompatible with each other.
337
+ step2.6.In order to set up a reference database for DIAMOND, the makedb command needs to be executed with the following command line:
338
  diamond makedb --in YourPath/RefSeqVirusProtein -d Diamond_RefSeqVirusProtein --taxonmap YourPath/prot.accession2taxid --taxonnodes YourPath/nodes.dmp
339
  diamond makedb --in nr -d Dimond_nr --taxonmap YourPath/prot.accession2taxid --taxonnodes YourPath/nodes.dmp
340
  ```
341
+ step3. installation of dependent software
342
+ step3.1. installing Some Software Using Conda
343
  ```
344
  conda install fastp=0.12.4 trinity=2.8.5 diamond=2.0.11.149 ragtag=2.1.0 quast=5.0.2
345
  ```
346
+ step3.2. manual Installation of MetaCompass
347
  https://github.com/marbl/MetaCompass
348
+ step4: install Python Dependencies
349
+ step4.1. based on python 3.6.8
 
350
  ```
351
  pip install pandas=1.1.5 numpy=1.19.5 matplotlib=3.3.4 biopython=1.79
352
+ ```","1,2,3,4,5,6",,
353
+ lncrtpred,https://bio.tools/,https://raw.githubusercontent.com/zglabDIB/LncRTPred/main/README.md,,,https://bio.tools/t?sort=citationDate&ord=desc,,command_line,13/02/2024,8 months,,,,,,
354
  nrn-ez,https://bio.tools/,https://raw.githubusercontent.com/scimemia/NRN-EZ/master/README.md,,,,,Script,13/02/2024,last week,"**INSTALLATION FOR VERSION 1.1.6**
355
 
356
  NRN-EZ was built with PyInstaller 3.6, and requires the following languages and libraries:
 
365
 
366
  Installation instructions for Mac OS: download the Mac zip file and copy the NRN-EZ app to the Applications folder.
367
 
368
+ Installation instructions for Windows: download the Win zip file and run the installation wizard.",binary,"[binary] step1. install requirements:
369
  Python 3.6.9 and higher (currently up to 3.10)
370
  PyQt 5.10.1
371
  PyQtGraph 0.11.0
372
+ step2. for Linux: download the Linux zip file and, from the command window, run a bash command for the install.sh file, in the corresponding installation folder. ",,"2. for Linux: download the Linux zip file and, from the command window, run a bash command for the install.sh file, in the corresponding installation folder.
373
  2. for Mac OS: download the Mac zip file and copy the NRN-EZ app to the Applications folder.
374
+ 2. for Windows: download the Win zip file and run the installation wizard.",
375
  causnet,https://bio.tools/,https://raw.githubusercontent.com/nand1155/CausNet/main/README.md,,0,https://bio.tools/t?sort=citationDate&ord=desc,,Library,13/02/2024,two years,"## Installation
376
 
377
  You can install the development version from GitHub with:
 
379
  ``` r
380
  require(""devtools"")
381
  install_github(""https://github.com/nand1155/CausNet"")
382
+ ```",source,"[source]: step1.install the development version from GitHub with:
383
  ``` r
384
  require(""devtools"")
385
  install_github(""https://github.com/nand1155/CausNet"")
386
+ ```",,,
387
+ viralcc,https://bio.tools/,https://raw.githubusercontent.com/dyxstat/Reproduce_ViralCC/main/README.md,,0,https://bio.tools/t?sort=citationDate&ord=desc,,command_line,13/02/2024,8 months,"# Instruction of reproducing results in ViralCC paper
388
  We take the cow fecal datasets for example. The other two datasets were processed following the same procedure.
389
 
390
  Scripts to process the intermediate data and plot figures are available in the folder [Scripts](https://github.com/dyxstat/Reproduce_ViralCC/tree/main/Scripts).
 
463
  ```
464
  python concatenation.py -p out_cow/VIRAL_BIN -o viralCC_cow_bins.fa
465
  checkv end_to_end viralCC_cow_bins.fa output_checkv_viralcc_cow -t 16 -d /panfs/qcb-panasas/yuxuandu/checkv-db-v1.0
466
+ ```",source,"[source]step1.download and preprocess the raw data
467
+ Note: NCBI may update its links for downloading the database. Please check the latest link at [NCBI](https://www.ncbi.nlm.nih.gov/) if you meet the download error.
468
+ ```
469
+ wget https://sra-downloadb.be-md.ncbi.nlm.nih.gov/sos2/sra-pub-run-13/ERR2282092/ERR2282092.1
470
+ wget https://sra-downloadb.be-md.ncbi.nlm.nih.gov/sos2/sra-pub-run-13/ERR2530126/ERR2530126.1
471
+ wget https://sra-downloadb.be-md.ncbi.nlm.nih.gov/sos2/sra-pub-run-13/ERR2530127/ERR2530127.1
472
+
473
+ fastq-dump --split-files --gzip ERR2282092.1
474
+ fastq-dump --split-files --gzip ERR2530126.1
475
+ fastq-dump --split-files --gzip ERR2530127.1
476
+
477
+ bbduk.sh in1=ERR2282092.1_1.fastq.gz in2=ERR2282092.1_2.fastq.gz out1=COWSG1_AQ.fastq.gz out2=COWSG2_AQ.fastq.gz ref=/home1/yuxuandu/cmb/SOFTWARE/bbmap/resources/adapters.fa ktrim=r k=23 mink=11 hdist=1 minlen=50 tpe tbo
478
+ bbduk.sh in1=ERR2530126.1_1.fastq.gz in2=ERR2530126.1_2.fastq.gz out1=S3HIC1_AQ.fastq.gz out2=S3HIC2_AQ.fastq.gz ref=/home1/yuxuandu/cmb/SOFTWARE/bbmap/resources/adapters.fa ktrim=r k=23 mink=11 hdist=1 minlen=50 tpe tbo
479
+ bbduk.sh in1=ERR2530127.1_1.fastq.gz in2=ERR2530127.1_2.fastq.gz out1=M1HIC1_AQ.fastq.gz out2=M1HIC2_AQ.fastq.gz ref=/home1/yuxuandu/cmb/SOFTWARE/bbmap/resources/adapters.fa ktrim=r k=23 mink=11 hdist=1 minlen=50 tpe tbo
480
+
481
+ bbduk.sh in1=S3HIC1_AQ.fastq.gz in2=S3HIC2_AQ.fastq.gz out1=S3HIC1_CL.fastq.gz out2=S3HIC2_CL.fastq.gz trimq=10 qtrim=r ftm=5 minlen=50
482
+ bbduk.sh in1=M1HIC1_AQ.fastq.gz in2=M1HIC2_AQ.fastq.gz out1=M1HIC1_CL.fastq.gz out2=M1HIC2_CL.fastq.gz trimq=10 qtrim=r ftm=5 minlen=50
483
+ bbduk.sh in1=COWSG1_AQ.fastq.gz in2=COWSG2_AQ.fastq.gz out1=COWSG1_CL.fastq.gz out2=COWSG2_CL.fastq.gz trimq=10 qtrim=r ftm=5 minlen=50
484
+
485
+ bbduk.sh in1=S3HIC1_CL.fastq.gz in2=S3HIC2_CL.fastq.gz out1=S3HIC1_trim.fastq.gz out2=S3HIC2_trim.fastq.gz ftl=10
486
+ bbduk.sh in1=M1HIC1_CL.fastq.gz in2=M1HIC2_CL.fastq.gz out1=M1HIC1_trim.fastq.gz out2=M1HIC2_trim.fastq.gz ftl=10
487
+
488
+ clumpify.sh in1=S3HIC1_trim.fastq.gz in2=S3HIC2_trim.fastq.gz out1=S3HIC1_dedup.fastq.gz out2=S3HIC2_dedup.fastq.gz dedupe
489
+ clumpify.sh in1=M1HIC1_trim.fastq.gz in2=M1HIC2_trim.fastq.gz out1=M1HIC1_dedup.fastq.gz out2=M1HIC2_dedup.fastq.gz dedupe
490
+ cat S3HIC1_dedup.fastq.gz M1HIC1_dedup.fastq.gz > HIC1.fastq.gz
491
+ cat S3HIC2_dedup.fastq.gz M1HIC2_dedup.fastq.gz > HIC2.fastq.gz
492
+ ```
493
+ step2.assemble contigs and align processed Hi-C reads to contigs
494
+ ```
495
+ megahit -1 COWSG1_CL.fastq.gz -2 COWSG2_CL.fastq.gz -o COW_ASSEMBLY --min-contig-len 1000 --k-min 21 --k-max 141 --k-step 12 --merge-level 20,0.95
496
+
497
+ bwa index final.contigs.fa
498
+ bwa mem -5SP final.contigs.fa HIC1.fastq.gz HIC2.fastq.gz > COW_MAP.sam
499
+ samtools view -F 0x904 -bS COW_MAP.sam > COW_MAP_UNSORTED.bam
500
+ samtools sort -n COW_MAP_UNSORTED.bam -o COW_MAP_SORTED.bam
501
+ ```
502
+ step3. identify viral contigs from assembled contigs
503
+ ```
504
+ perl removesmalls.pl 3000 final.contigs.fa > cow_3000.fa
505
+ wrapper_phage_contigs_sorter_iPlant.pl -f cow_3000.fa --db 1 --wdir output_directory --ncpu 16 --data-dir /panfs/qcb-panasas/yuxuandu/virsorter-data
506
+ Rscript find_viral_contig.R
507
+ ```
508
+ step4. run ViralCC
509
+ ```
510
+ python ./viralcc.py pipeline -v final.contigs.fa COW_MAP_SORTED.bam viral.txt out_cow
511
+ ```
512
+ step5. evaluate draft viral genomes using CheckV
513
+ ```
514
+ python concatenation.py -p out_cow/VIRAL_BIN -o viralCC_cow_bins.fa
515
+ checkv end_to_end viralCC_cow_bins.fa output_checkv_viralcc_cow -t 16 -d /panfs/qcb-panasas/yuxuandu/checkv-db-v1.0
516
+ ```",,,
517
  DRaW,https://bio.tools/,https://raw.githubusercontent.com/BioinformaticsIASBS/DRaW/main/README.md,,0,https://bio.tools/t?sort=citationDate&ord=desc,,,,,"# Running DRaW on COVID-19 datasets
518
  The DRaW has been applied on three COVID-19 datasets, DS1, DS2, and DS3. There are three subdirectories, ""DS1_repur"", ""DS2_repur"", and ""DS3_repur"", in the ""Drug-Repurposing"" directory. Each subdirectory has been assigned to one of the mentioned datasets. We put the Draw implementation file for each dataset in each subdirectory separately. This is to keep the corresponding hyperparameters of each dataset.
519
  We use Adam as the optimizer with a learning rate equal to 0.001, beta1 = 0.9, beta2 = 0.999, and epsilon = 1e_7. The dropout rate is set to 0.5. The batch size is chosen by the number of samples per dataset. This hyperparameter for DS1 is equal to 8, and those for DS2 and DS3 are set to 32.
 
532
  ```bash
533
  cd Performance_analysis
534
  python main.py dataset_name
535
+ ```",source,"[source]step1.execute the ""Drug-Repurposing.py"" script in the command line. step2. after that, execute ""score.py"". The repurposed drugs will be stored in the ""meanScore.csv"" spreadsheet. It contains the average of each drug ranking. The lower, the better. For example, to run the DRaW on DS1:
536
+ ```bash
537
+ cd Drug-Repurposing\DS1_repur
538
+ python Drug-Repurposing.py
539
+ python score.py
540
+ ```",,,
541
  NRN-EZ,https://bio.tools/,https://raw.githubusercontent.com/scimemia/NRN-EZ/master/README.md,,,,,,,6 months,"**INSTALLATION FOR VERSION 1.1.6**
542
 
543
  NRN-EZ was built with PyInstaller 3.6, and requires the following languages and libraries:
 
552
 
553
  Installation instructions for Mac OS: download the Mac zip file and copy the NRN-EZ app to the Applications folder.
554
 
555
+ Installation instructions for Windows: download the Win zip file and run the installation wizard.",source,"[source] step1. install the requirements: Python 3.6.9 and higher (currently up to 3.10), PyQt 5.10.1, PyQtGraph 0.11.0
556
+ step2. for Linux: download the Linux zip file and, from the command window, run a bash command for the install.sh file, in the corresponding installation folder.
557
+ step2. for Mac OS: download the Mac zip file and copy the NRN-EZ app to the Applications folder.
558
+ step2. for Windows: download the Win zip file and run the installation wizard.",,"step2. for Linux: download the Linux zip file and, from the command window, run a bash command for the install.sh file, in the corresponding installation folder.
559
+ step2. for Mac OS: download the Mac zip file and copy the NRN-EZ app to the Applications folder.
560
+ step2. for Windows: download the Win zip file and run the installation wizard.",
561
  guiding-instruction-based-image-editing-via,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/apple/ml-mgie/main/README.md,computer_science,3089,top,,,13/02/2024,29/09/2023,"## Requirements
562
  ```
563
  conda create -n mgie python=3.10 -y
 
584
  cd ..
585
  cp mgie_llava.py LLaVA/llava/model/llava.py
586
  cp mgie_train.py LLaVA/llava/train/train.py
587
+ ```",source,"[source] step1. create conda environment ```
588
  conda create -n mgie python=3.10 -y
589
  conda activate mgie
590
  conda update -n base -c defaults conda setuptools -y
591
  conda install -c conda-forge git git-lfs ffmpeg vim htop ninja gpustat -y
592
+ conda clean -a -y ```
593
+ step2. install dependencies ```
594
  pip install -U pip cmake cython==0.29.36 pydantic==1.10 numpy
595
  pip install -U gdown pydrive2 wget jupyter jupyterlab jupyterthemes ipython
596
  pip install -U sentencepiece transformers diffusers tokenizers datasets gradio==3.37 accelerate evaluate git+https://github.com/openai/CLIP.git
597
  pip install -U https://download.pytorch.org/whl/cu113/torch-1.12.0%2Bcu113-cp310-cp310-linux_x86_64.whl https://download.pytorch.org/whl/cu113/torchvision-0.13.0%2Bcu113-cp310-cp310-linux_x86_64.whl https://download.pytorch.org/whl/cu113/torchaudio-0.12.0%2Bcu113-cp310-cp310-linux_x86_64.whl
598
+ pip install -U deepspeed ```
599
+ step3. git clone this repo ```
 
600
  cd ml-mgie
601
  git submodule update --init --recursive
602
+ cd LLaVA ```
603
+ step4. install module ```
604
  pip install -e .
605
  pip install -U https://download.pytorch.org/whl/cu113/torch-1.12.0%2Bcu113-cp310-cp310-linux_x86_64.whl https://download.pytorch.org/whl/cu113/torchvision-0.13.0%2Bcu113-cp310-cp310-linux_x86_64.whl https://download.pytorch.org/whl/cu113/torchaudio-0.12.0%2Bcu113-cp310-cp310-linux_x86_64.whl
606
  pip install -U ninja flash-attn==1.0.2
607
  pip install -U pydrive2 gdown wget
 
608
  cd ..
609
  cp mgie_llava.py LLaVA/llava/model/llava.py
610
  cp mgie_train.py LLaVA/llava/train/train.py
611
+ ```","1,2,3,4",,
612
+ self-play-fine-tuning-converts-weak-language,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/uclaml/SPIN/main/README.md,computer_science,430,top,,,13/02/2024,2/1/24,"## Setup
613
+ The following steps provide the necessary setup to run our codes.
614
+ 1. Create a Python virtual environment with Conda:
615
+ ```
616
+ conda create -n myenv python=3.10
617
+ conda activate myenv
618
+ ```
619
+ 2. Install PyTorch `v2.1.0` with compatible cuda version, following instructions from [PyTorch Installation Page](https://pytorch.org/get-started/locally/). For example with cuda 11:
620
+ ```
621
+ pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu118
622
+ ```
623
+ 3. Install the following Python dependencies to run the codes.
624
+ ```
625
+ python -m pip install .
626
+ python -m pip install flash-attn --no-build-isolation
627
+ ```
628
+ 4. Login to your huggingface account for downloading models
629
+ ```
630
+ huggingface-cli login --token ""${your_access_token}""
631
+ ```",source,"step1.create a Python virtual environment with Conda:
632
+ ```
633
+ conda create -n myenv python=3.10
634
+ conda activate myenv
635
+ ```
636
+ step2.install PyTorch `v2.1.0` with compatible cuda version, following instructions from [PyTorch Installation Page](https://pytorch.org/get-started/locally/). For example with cuda 11:
637
+ ```
638
+ pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu118
639
+ ```
640
+ step3.install the following Python dependencies to run the codes.
641
+ ```
642
+ python -m pip install .
643
+ python -m pip install flash-attn --no-build-isolation
644
+ ```
645
+ step4.login to your huggingface account for downloading models
646
+ ```
647
+ huggingface-cli login --token ""${your_access_token}""
648
+ ```",,,
649
+ genegpt-teaching-large-language-models-to-use,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/ncbi/GeneGPT/main/README.md,computer_science,214,top,,,13/02/2024,19/04/2023,"# Requirements
650
+
651
+ The code has been tested with Python 3.9.13. Please first install the required packages by:
652
+ ```bash
653
+ pip install -r requirements.txt
654
+ ```
655
+
656
+ You also need an OpenAI API key to run GeneGPT with Codex. Replace the placeholder with your key in `config.py`:
657
+ ```bash
658
+ $ cat config.py
659
+ API_KEY = 'YOUR_OPENAI_API_KEY'
660
+ ```
661
+
662
+ ## Using GeneGPT
663
+
664
+ After setting up the environment, one can run GeneGPT on GeneTuring by:
665
+ ```bash
666
+ python main.py 111111
667
+ ```
668
+ where `111111` denotes that all Documentations (Dc.1-2) and Demonstrations (Dm.1-4) are used.
669
+
670
+ To run GeneGPT-slim, simply use:
671
+ ```bash
672
+ python main.py 001001
673
+ ```
674
+ which will only use the Dm.1 and Dm.4 for in-context learning.",source,"step1.install requirements:
675
+ The code has been tested with Python 3.9.13. Please first install the required packages by:
676
+ ```bash
677
+ pip install -r requirements.txt
678
+ ```
679
+ step2.set OpenAI API key to run GeneGPT with Codex. Replace the placeholder with your key in `config.py`:
680
+ ```bash
681
+ $ cat config.py
682
+ API_KEY = 'YOUR_OPENAI_API_KEY'
683
+ ```
684
+ step3. execute GeneGPT
685
+ After setting up the environment, one can run GeneGPT on GeneTuring by:
686
+ ```bash
687
+ python main.py 111111
688
+ ```
689
+ where `111111` denotes that all Documentations (Dc.1-2) and Demonstrations (Dm.1-4) are used.
690
+ To run GeneGPT-slim, simply use:
691
+ ```bash
692
+ python main.py 001001
693
+ ```
694
+ which will only use the Dm.1 and Dm.4 for in-context learning.",,,
695
+ the-boundary-of-neural-network-trainability,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/Sohl-Dickstein/fractal/main/README.md,computer_science,41,top,,,13/02/2024,9/2/24,,,,,,
696
+ learning-to-fly-in-seconds,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/arplaboratory/learning-to-fly/master/README.MD,computer_science,201,top,,,13/02/2024,22/11/2023,"## Instructions to run the code
697
+ ### Docker (isolated)
698
+ We provide a pre-built Docker image with a simple web interface that can be executed using a single command (given that Docker is already installed on your machine):
699
+ ```
700
+ docker run -it --rm -p 8000:8000 arpllab/learning_to_fly
701
+ ```
702
+ After the container is running, navigate to [https://0.0.0.0:8000](https://0.0.0.0:8000) and you should see something like (after starting the training):
703
+
704
+ <div align=""center"">
705
+ <img src=""https://github.com/arplaboratory/learning_to_fly_media/blob/master/simulator_screenshot.png"" />
706
+ </div>
707
+
708
+ Note that to make this Docker image compatible with a broad range of CPUs, some optimizations have been turned off. For full speed we recommend a [Native installation](#Native-installation).
709
+ ### Docker installation (isolated)
710
+ With the following instructions you can also easily build the Docker image yourself. If you want to run the code on bare metal jump [Native installation](#Native-installation).
711
+
712
+ First, install Docker on your machine. Then move to the original directory `learning_to_fly` and build the Docker image:
713
+ ```
714
+ docker build -t arpllab/learning_to_fly .
715
+ ```
716
+ If desired you can also build the container for building the firmware:
717
+ ```
718
+ docker build -t arpllab/learning_to_fly_build_firmware -f Dockerfile_build_firmware .
719
+ ```
720
+ After that you can run it using e.g.:
721
+ ```
722
+ docker run -it --rm -p 8000:8000 arpllab/learning_to_fly
723
+ ```
724
+ This will open the port `8000` for the UI of the training program and run it inside the container.
725
+
726
+ Navigate to [https://0.0.0.0:8000](https://0.0.0.0:8000) with your browser, and you should see something like in the screenshot above (after starting the training).
727
+
728
+ The training UI configuration does not log data by default. If you want to inspect the training data run:
729
+ ```
730
+ docker run -it --rm -p 6006:6006 arpllab/learning_to_fly training_headless
731
+ ```
732
+ Navigate to [https://0.0.0.0:6006](https://0.0.0.0:6006) with your browser to investigate the Tensorboard logs.
733
+
734
+ If you would like to benchmark the training speed you can use:
735
+ ```
736
+ docker run -it --rm arpllab/learning_to_fly training_benchmark
737
+ ```
738
+ This is the fastest configuration, without logging, UI, checkpointing etc.
739
+ ### Native installation
740
+ Clone this repository:
741
+ ```
742
+ git clone https://github.com/arplaboratory/learning-to-fly learning_to_fly
743
+ cd learning_to_fly
744
+ ```
745
+ Then instantiate the `RLtools` submodule:
746
+ ```
747
+ git submodule update --init -- external/rl_tools
748
+ cd external/rl_tools
749
+ ```
750
+
751
+ Then instantiate some dependencies of `RLtools` (for conveniences like checkpointing, Tensorboard logging, testing, etc.):
752
+ ```
753
+ git submodule update --init -- external/cli11 external/highfive external/json/ external/tensorboard tests/lib/googletest/
754
+ ```
755
+
756
+ #### Install dependencies on Ubuntu
757
+ ```
758
+ sudo apt update && sudo apt install libhdf5-dev libopenblas-dev protobuf-compiler libprotobuf-dev libboost-all-dev
759
+ ```
760
+ As an alternative to openblas you can also install [Intel MKL](https://www.intel.com/content/www/us/en/developer/tools/oneapi/onemkl-download.html) which in our experience is significantly faster than OpenBLAS.
761
+ #### Install dependencies on macOS
762
+ ```
763
+ brew install hdf5 protobuf boost
764
+ ```
765
+ Please make sure that `brew` links the libraries correctly. If not you might have to link e.g. `protobuf` manually using `brew link protobuf`.
766
+
767
+
768
+
769
+
770
+ Going back to the main directory (`learning_to_fly`), we can now configure the build of the code:
771
+ ```
772
+ cd ../../
773
+ mkdir build
774
+ cd build
775
+ ```
776
+ - Ubuntu + OpenBLAS: `cmake .. -DCMAKE_BUILD_TYPE=Release -DRL_TOOLS_BACKEND_ENABLE_OPENBLAS:BOOL=ON`
777
+ - Ubuntu + MKL: `cmake .. -DCMAKE_BUILD_TYPE=Release -DRL_TOOLS_BACKEND_ENABLE_MKL:BOOL=ON`
778
+ - macOS (tested on Sonoma): `cmake .. -DCMAKE_BUILD_TYPE=Release`
779
+
780
+ Finally, we can build the targets:
781
+ ```
782
+ cmake --build . -j8
783
+ ```
784
+
785
+ After successfully building the targets, we can run the code (in the original directory `learning_to_fly`):
786
+ ```
787
+ cd ..
788
+ ./build/src/training_headless
789
+ ```
790
+ While this is running, you should be able to see training metrics using Tensorboard
791
+
792
+ If not already installed:
793
+ ```
794
+ python3 -m pip install tensorboard
795
+ ```
796
+ Then from the original directory `learning_to_fly`:
797
+ ```
798
+ tensorboard --logdir=logs
799
+ ```
800
+
801
+ To run the training with the UI, we download the JavaScript dependencies in the form of the two files `three.module.js` and `OrbitControls.js`:
802
+ ```
803
+ cd src/ui
804
+ ./get_dependencies.sh
805
+ ```
806
+
807
+ After that we can execute the UI binary from the root folder:
808
+ ```
809
+ cd ../../
810
+ ./build/src/ui 0.0.0.0 8000
811
+ ```
812
+ Now you should be able to navigate to [http://0.0.0.0:8000](http://0.0.0.0:8000) in your browser and start the training.
813
+
814
+ To run the benchmark (with UI, checkpointing and Tensorboard logging turned off):
815
+ ```
816
+ sudo nice -n -20 ./build/src/training_benchmark
817
+ ```
818
+
819
+ ## Deploying trained policies on a Crazyflie
820
+ Train a policy, e.g. using the Docker image with the UI:
821
+ ```
822
+ docker run -it --rm -p 8000:8000 -v $(pwd)/checkpoints:/learning_to_fly/checkpoints arpllab/learning_to_fly
823
+ ```
824
+ The checkpoints are placed in the current working directory's `checkpoints` folder. Inspect the logs of the container to find the path of the final checkpoint, e.g., `checkpoints/multirotor_td3/2023_11_16_14_46_38_d+o+a+r+h+c+f+w+e+_002/actor_000000000300000.h`.
825
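+ For example, one hedged way to list the most recent actor checkpoint header (the run directory name above is illustrative and will differ on your machine):
+ ```
+ # print the newest actor_*.h checkpoint under the checkpoints folder
+ ls -t checkpoints/multirotor_td3/*/actor_*.h | head -n 1
+ ```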
+ We can mount this file into the container `arpllab/learning_to_fly_build_firmware` for building the firmware, e.g.:
826
+ ```
827
+ docker run -it --rm -v $(pwd)/checkpoints/multirotor_td3/2023_11_16_14_46_38_d+o+a+r+h+c+f+w+e+_002/actor_000000000300000.h:/controller/data/actor.h:ro -v $(pwd)/build_firmware:/output arpllab/learning_to_fly_build_firmware
828
+ ```
829
+ This should build the firmware using the newly trained policy and output the binary to `build_firmware/cf2.bin`. After that we can use the `cfclient` package to flash the firmware (find the installation instructions [here](https://www.bitcraze.io/documentation/repository/crazyflie-clients-python/master/installation/install/))
830
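+ One common way to install it, assuming a recent Python environment (see the linked instructions for platform-specific details), is via pip:
+ ```
+ pip install cfclient
+ ```
+ With `cfclient` (and its `cfloader` tool) available, the firmware can be flashed: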
+ ```
831
+ cfloader flash build_firmware/cf2.bin stm32-fw -w radio://0/80/2M
832
+ ```","source,docker","[plan1. Docker (isolated)]
833
+ step1: Execute a single command (given that Docker is already installed on your machine):
834
+ ```
835
+ docker run -it --rm -p 8000:8000 arpllab/learning_to_fly
836
+ ```
837
+ step2. The container is now running. step3. Navigate to [https://0.0.0.0:8000](https://0.0.0.0:8000). step4. You should see something like the following (after starting the training):
838
+ <div align=""center"">
839
+ <img src=""https://github.com/arplaboratory/learning_to_fly_media/blob/master/simulator_screenshot.png"" />
840
+ </div>
841
+ Note that to make this Docker image compatible with a broad range of CPUs, some optimizations have been turned off. For full speed we recommend a [Native installation](#Native-installation).
842
+ [Docker installation (isolated)]
843
+ step1. Install Docker on your machine. step2. Move to the original directory `learning_to_fly`. step3. Build the Docker image:
844
+ ```
845
+ docker build -t arpllab/learning_to_fly .
846
+ ```
847
+ [optional] If desired you can also build the container for building the firmware:
848
+ ```
849
+ docker build -t arpllab/learning_to_fly_build_firmware -f Dockerfile_build_firmware .
850
+ ```
851
+ step4. After that you can run it using e.g.:
852
+ ```
853
+ docker run -it --rm -p 8000:8000 arpllab/learning_to_fly
854
+ ```
855
+ Context. This will open the port `8000` for the UI of the training program and run it inside the container.
856
+ step5. Navigate to [https://0.0.0.0:8000](https://0.0.0.0:8000) with your browser, and you should see something like in the screenshot above (after starting the training).
857
+ The training UI configuration does not log data by default. If you want to inspect the training data run:
858
+ ```
859
+ docker run -it --rm -p 6006:6006 arpllab/learning_to_fly training_headless
860
+ ```
861
+ Navigate to [https://0.0.0.0:6006](https://0.0.0.0:6006) with your browser to investigate the Tensorboard logs.
862
+
863
+ [plan2]. Native installation
864
+ step1. clone this repository:
865
+ ```
866
+ git clone https://github.com/arplaboratory/learning-to-fly learning_to_fly
867
+ cd learning_to_fly
868
+ ```
869
+ step2. Instantiate the `RLtools` submodule:
870
+ ```
871
+ git submodule update --init -- external/rl_tools
872
+ cd external/rl_tools
873
+ ```
874
+ step3. Instantiate some dependencies of `RLtools` (for conveniences like checkpointing, Tensorboard logging, testing, etc.):
875
+ ```
876
+ git submodule update --init -- external/cli11 external/highfive external/json/ external/tensorboard tests/lib/googletest/
877
+ ```
878
+ step4. Install dependencies on Ubuntu
879
+ ```
880
+ sudo apt update && sudo apt install libhdf5-dev libopenblas-dev protobuf-compiler libprotobuf-dev libboost-all-dev
881
+ ```
882
+ optional. As an alternative to OpenBLAS, you can also install [Intel MKL](https://www.intel.com/content/www/us/en/developer/tools/oneapi/onemkl-download.html), which in our experience is significantly faster than OpenBLAS.
883
+ #### Install dependencies on macOS
884
+ ```
885
+ brew install hdf5 protobuf boost
886
+ ```
887
+ Please make sure that `brew` links the libraries correctly. If not, you might have to link e.g. `protobuf` manually using `brew link protobuf`.
888
+
889
+ Going back to the main directory (`learning_to_fly`), we can now configure the build of the code:
890
+ ```
891
+ cd ../../
892
+ mkdir build
893
+ cd build
894
+ ```
895
+ - Ubuntu + OpenBLAS: `cmake .. -DCMAKE_BUILD_TYPE=Release -DRL_TOOLS_BACKEND_ENABLE_OPENBLAS:BOOL=ON`
896
+ - Ubuntu + MKL: `cmake .. -DCMAKE_BUILD_TYPE=Release -DRL_TOOLS_BACKEND_ENABLE_MKL:BOOL=ON`
897
+ - macOS (tested on Sonoma): `cmake .. -DCMAKE_BUILD_TYPE=Release`
898
+
899
+ Finally, we can build the targets:
900
+ ```
901
+ cmake --build . -j8
902
+ ```
903
+
904
+ After successfully building the targets, we can run the code (in the original directory `learning_to_fly`):
905
+ ```
906
+ cd ..
907
+ ./build/src/training_headless
908
+ ```
909
+ While this is running, you should be able to see training metrics using Tensorboard
910
+
911
+ If not already installed:
912
+ ```
913
+ python3 -m pip install tensorboard
914
+ ```
915
+ Then from the original directory `learning_to_fly`:
916
+ ```
917
+ tensorboard --logdir=logs
918
+ ```
919
+
920
+ To run the training with the UI, we download the JavaScript dependencies in the form of the two files `three.module.js` and `OrbitControls.js`:
921
+ ```
922
+ cd src/ui
923
+ ./get_dependencies.sh
924
+ ```
925
+
926
+ After that we can execute the UI binary from the root folder:
927
+ ```
928
+ cd ../../
929
+ ./build/src/ui 0.0.0.0 8000
930
+ ```
931
+ Now you should be able to navigate to [http://0.0.0.0:8000](http://0.0.0.0:8000) in your browser and start the training.
932
+
933
+ To run the benchmark (with UI, checkpointing and Tensorboard logging turned off):
934
+ ```
935
+ sudo nice -n -20 ./build/src/training_benchmark
936
+ ```
937
+
938
+ ## Deploying trained policies on a Crazyflie
939
+ Train a policy, e.g. using the Docker image with the UI:
940
+ ```
941
+ docker run -it --rm -p 8000:8000 -v $(pwd)/checkpoints:/learning_to_fly/checkpoints arpllab/learning_to_fly
942
+ ```
943
+ The checkpoints are placed in the current working directory's `checkpoints` folder. Inspect the logs of the container to find the path of the final checkpoint, e.g., `checkpoints/multirotor_td3/2023_11_16_14_46_38_d+o+a+r+h+c+f+w+e+_002/actor_000000000300000.h`.
944
+ We can mount this file into the container `arpllab/learning_to_fly_build_firmware` for building the firmware, e.g.:
945
+ ```
946
+ docker run -it --rm -v $(pwd)/checkpoints/multirotor_td3/2023_11_16_14_46_38_d+o+a+r+h+c+f+w+e+_002/actor_000000000300000.h:/controller/data/actor.h:ro -v $(pwd)/build_firmware:/output arpllab/learning_to_fly_build_firmware
947
+ ```
948
+ This should build the firmware using the newly trained policy and output the binary to `build_firmware/cf2.bin`. After that we can use the `cfclient` package to flash the firmware (find the installation instructions [here](https://www.bitcraze.io/documentation/repository/crazyflie-clients-python/master/installation/install/))
949
+ ```
950
+ cfloader flash build_firmware/cf2.bin stm32-fw -w radio://0/80/2M
951
+ ```",,,
952
+ /LargeWorldModel/LWM,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/LargeWorldModel/LWM/main/README.md,,2098,top,,,13/02/2024,,"## Setup
953
+ Install the requirements with:
954
+ ```
955
+ conda create -n lwm python=3.10
956
+ pip install -U ""jax[cuda12_pip]==0.4.23"" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
957
+ pip install -r requirements.txt
958
+ ```
959
+ or set up TPU VM with:
960
+ ```
961
+ sh tpu_requirements.sh
962
+ ```","packagemanager, source","step1.install the requirements with:
963
+ ```
964
+ conda create -n lwm python=3.10
965
+ pip install -U ""jax[cuda12_pip]==0.4.23"" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
966
+ pip install -r requirements.txt
967
+ ```
968
+ optional. set up TPU VM with:
969
+ ```
970
+ sh tpu_requirements.sh
971
+ ```",,,
972
+ ,,https://raw.githubusercontent.com/microsoft/UFO/main/README.md,,830,top,,,,,"### Step 1: Installation
973
+ UFO requires **Python >= 3.10** running on **Windows OS >= 10**. It can be installed by running the following command:
974
+ ```bash
975
+ # [optional to create conda environment]
976
+ # conda create -n ufo python=3.10
977
+ # conda activate ufo
978
+
979
+ # clone the repository
980
+ git clone https://github.com/microsoft/UFO.git
981
+ cd UFO
982
+ # install the requirements
983
+ pip install -r requirements.txt
984
+ ```
985
+
986
+ ### Step 2: Configure the LLMs
987
+ Before running UFO, you need to provide your LLM configurations. Taking OpenAI as an example, you can configure `ufo/config/config.yaml` file as follows.
988
+
989
+ #### OpenAI
990
+ ```
991
+ API_TYPE: ""openai""
992
+ OPENAI_API_BASE: ""https://api.openai.com/v1/chat/completions"" # The base URL for the OpenAI API
993
+ OPENAI_API_KEY: ""YOUR_API_KEY"" # Set the value to the openai key for the llm model
994
+ OPENAI_API_MODEL: ""GPTV_MODEL_NAME"" # The only OpenAI model by now that accepts visual input
995
+ ```
996
+
997
+ #### Azure OpenAI (AOAI)
998
+ ```
999
+ API_TYPE: ""aoai""
1000
+ OPENAI_API_BASE: ""YOUR_ENDPOINT"" # The AOAI API address. Format: https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/completions?api-version={api-version}
1001
+ OPENAI_API_KEY: ""YOUR_API_KEY"" # Set the value to the openai key for the llm model
1002
+ OPENAI_API_MODEL: ""GPTV_MODEL_NAME"" # The only OpenAI model by now that accepts visual input
1003
+ ```
1004
+
1005
+
1006
+ ### Step 3: Start UFO
1007
+
1008
+ #### You can execute the following on your Windows Command Line (CLI):
1009
+
1010
+ ```bash
1011
+ # assume you are in the cloned UFO folder
1012
+ python -m ufo --task <your_task_name>
1013
+ ```
1014
+
1015
+ This will start the UFO process and you can interact with it through the command line interface.
1016
+ If everything goes well, you will see the following message:
1017
+
1018
+ ```bash
1019
+ Welcome to use UFO, A UI-focused Agent for Windows OS Interaction.
1020
+ _ _ _____ ___
1021
+ | | | || ___| / _ \
1022
+ | | | || |_ | | | |
1023
+ | |_| || _| | |_| |
1024
+ \___/ |_| \___/
1025
+ Please enter your request to be completed:
1026
+ ```
1027
+ #### Reminder: ####
1028
+ - Before UFO executes your request, please make sure the targeted applications are active on the system.
1029
+ - The GPT-V accepts screenshots of your desktop and application GUI as input. Please ensure that no sensitive or confidential information is visible or captured during the execution process. For further information, refer to [DISCLAIMER.md](./DISCLAIMER.md).
1030
+
1031
+
1032
+ ### Step 4: Execution Logs
1033
+
1034
+ You can find the screenshots taken and request & response logs in the following folder:
1035
+ ```
1036
+ ./ufo/logs/<your_task_name>/
1037
+ ```
1038
+ You may use them to debug, replay, or analyze the agent output.",source,"step1: Installation
1039
+ UFO requires **Python >= 3.10** running on **Windows OS >= 10**. It can be installed by running the following command:
1040
+ ```bash
1041
+ # [optional to create conda environment]
1042
+ # conda create -n ufo python=3.10
1043
+ # conda activate ufo
1044
+ # clone the repository
1045
+ git clone https://github.com/microsoft/UFO.git
1046
+ cd UFO
1047
+ # install the requirements
1048
+ pip install -r requirements.txt
1049
+ ```
1050
+ Step 2: Configure the LLMs
1051
+ Before running UFO, you need to provide your LLM configurations. Taking OpenAI as an example, you can configure `ufo/config/config.yaml` file as follows.
1052
+ #### OpenAI
1053
+ ```
1054
+ API_TYPE: ""openai""
1055
+ OPENAI_API_BASE: ""https://api.openai.com/v1/chat/completions"" # The base URL for the OpenAI API
1056
+ OPENAI_API_KEY: ""YOUR_API_KEY"" # Set the value to the openai key for the llm model
1057
+ OPENAI_API_MODEL: ""GPTV_MODEL_NAME"" # The only OpenAI model by now that accepts visual input
1058
+ ```
1059
+
1060
+ #### Azure OpenAI (AOAI)
1061
+ ```
1062
+ API_TYPE: ""aoai""
1063
+ OPENAI_API_BASE: ""YOUR_ENDPOINT"" # The AOAI API address. Format: https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/completions?api-version={api-version}
1064
+ OPENAI_API_KEY: ""YOUR_API_KEY"" # Set the value to the openai key for the llm model
1065
+ OPENAI_API_MODEL: ""GPTV_MODEL_NAME"" # The only OpenAI model by now that accepts visual input
1066
+ ```
1067
+
1068
+
1069
+ ### Step 3: Start UFO
1070
+
1071
+ #### You can execute the following on your Windows Command Line (CLI):
1072
+
1073
+ ```bash
1074
+ # assume you are in the cloned UFO folder
1075
+ python -m ufo --task <your_task_name>
1076
+ ```
1077
+
1078
+ This will start the UFO process and you can interact with it through the command line interface.
1079
+ If everything goes well, you will see the following message:
1080
+
1081
+ ```bash
1082
+ Welcome to use UFO, A UI-focused Agent for Windows OS Interaction.
1083
+ _ _ _____ ___
1084
+ | | | || ___| / _ \
1085
+ | | | || |_ | | | |
1086
+ | |_| || _| | |_| |
1087
+ \___/ |_| \___/
1088
+ Please enter your request to be completed:
1089
+ ```
1090
+ #### Reminder: ####
1091
+ - Before UFO executes your request, please make sure the targeted applications are active on the system.
1092
+ - The GPT-V accepts screenshots of your desktop and application GUI as input. Please ensure that no sensitive or confidential information is visible or captured during the execution process. For further information, refer to [DISCLAIMER.md](./DISCLAIMER.md).
1093
+
1094
+
1095
+ ### Step 4: Execution Logs
1096
+
1097
+ You can find the screenshots taken and request & response logs in the following folder:
1098
+ ```
1099
+ ./ufo/logs/<your_task_name>/
1100
+ ```
1101
+ You may use them to debug, replay, or analyze the agent output.",,,
1102
+ ,,https://raw.githubusercontent.com/catid/dora/main/README.md,,135,top,,,,,"## Demo
1103
+
1104
+ Install conda: https://docs.conda.io/projects/miniconda/en/latest/index.html
1105
+
1106
+ ```bash
1107
+ git clone https://github.com/catid/dora.git
1108
+ cd dora
1109
+
1110
+ conda create -n dora python=3.10 -y && conda activate dora
1111
+
1112
+ pip install -U -r requirements.txt
1113
+
1114
+ python dora.py
1115
+ ```",source,,,,
1116
+ ,,https://raw.githubusercontent.com/AILab-CVC/YOLO-World/master/README.md,,,,,,,,"### 1. Installation
1117
+
1118
+ YOLO-World is developed based on `torch==1.11.0` `mmyolo==0.6.0` and `mmdetection==3.0.0`.
1119
+
1120
+ #### Clone Project
1121
+
1122
+ ```bash
1123
+ git clone --recursive https://github.com/AILab-CVC/YOLO-World.git
1124
+ ```
1125
+ #### Install
1126
+
1127
+ ```bash
1128
+ pip install torch wheel -q
1129
+ pip install -e .
1130
+ ```",source,,,,
1131
+ ,,https://raw.githubusercontent.com/FasterDecoding/BitDelta/main/README.md,,63,top,,,,,"## Install
1132
+
1133
+ 1. Clone the repo and navigate to BitDelta:
1134
+
1135
+ ```
1136
+ git clone https://github.com/FasterDecoding/BitDelta
1137
+ cd BitDelta
1138
+ ```
1139
+
1140
+ 2. Set up environment:
1141
+
1142
+ ```bash
1143
+ conda create -yn bitdelta python=3.9
1144
+ conda activate bitdelta
1145
+
1146
+ pip install -e .
1147
+ ```",source,"step1.clone the repo and navigate to BitDelta:
1148
+ ```
1149
+ git clone https://github.com/FasterDecoding/BitDelta
1150
+ cd BitDelta
1151
+ ```
1152
+ step2. Set up environment:
1153
+ ```bash
1154
+ conda create -yn bitdelta python=3.9
1155
+ conda activate bitdelta
1156
+ pip install -e .
1157
+ ```",,,
1158
+ ,,https://raw.githubusercontent.com/tensorflow/tensorflow/master/README.md,,180724,greatest,,,,,"## Install
1159
+
1160
+ See the [TensorFlow install guide](https://www.tensorflow.org/install) for the
1161
+ [pip package](https://www.tensorflow.org/install/pip), to
1162
+ [enable GPU support](https://www.tensorflow.org/install/gpu), use a
1163
+ [Docker container](https://www.tensorflow.org/install/docker), and
1164
+ [build from source](https://www.tensorflow.org/install/source).
1165
+
1166
+ To install the current release, which includes support for
1167
+ [CUDA-enabled GPU cards](https://www.tensorflow.org/install/gpu) *(Ubuntu and
1168
+ Windows)*:
1169
+
1170
+ ```
1171
+ $ pip install tensorflow
1172
+ ```
1173
+
1174
+ Other devices (DirectX and MacOS-metal) are supported using
1175
+ [Device plugins](https://www.tensorflow.org/install/gpu_plugins#available_devices).
1176
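+ As an illustration (assuming an Apple-silicon Mac and that Apple's plugin package name is unchanged), the Metal plugin is installed as a separate pip package alongside TensorFlow:
+ ```
+ $ pip install tensorflow tensorflow-metal
+ ```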
+
1177
+ A smaller CPU-only package is also available:
1178
+
1179
+ ```
1180
+ $ pip install tensorflow-cpu
1181
+ ```
1182
+
1183
+ To update TensorFlow to the latest version, add `--upgrade` flag to the above
1184
+ commands.
1185
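+ For example, upgrading the default package looks like:
+ ```
+ $ pip install --upgrade tensorflow
+ ```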
+
1186
+ *Nightly binaries are available for testing using the
1187
+ [tf-nightly](https://pypi.python.org/pypi/tf-nightly) and
1188
+ [tf-nightly-cpu](https://pypi.python.org/pypi/tf-nightly-cpu) packages on PyPi.*",packagemanager,"step1. To install the current release, which includes support for
1189
+ [CUDA-enabled GPU cards](https://www.tensorflow.org/install/gpu) *(Ubuntu and
1190
+ Windows)*:
1191
+ ```
1192
+ $ pip install tensorflow
1193
+ ```
1194
+ step2. optional. A smaller CPU-only package is also available:
1195
+ ```
1196
+ $ pip install tensorflow-cpu
1197
+ ```
1198
+ step3. optional.
1199
+ To update TensorFlow to the latest version, add `--upgrade` flag to the above
1200
+ commands.
1201
+
1202
+ *Nightly binaries are available for testing using the
1203
+ [tf-nightly](https://pypi.python.org/pypi/tf-nightly) and
1204
+ [tf-nightly-cpu](https://pypi.python.org/pypi/tf-nightly-cpu) packages on PyPi.*",,,
1205
+ ,,https://raw.githubusercontent.com/huggingface/transformers/main/README.md,,120272,greatest,,,,,"## Installation
1206
+
1207
+ ### With pip
1208
+
1209
+ This repository is tested on Python 3.8+, Flax 0.4.1+, PyTorch 1.11+, and TensorFlow 2.6+.
1210
+
1211
+ You should install Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
1212
+
1213
+ First, create a virtual environment with the version of Python you're going to use and activate it.
1214
+
1215
+ Then, you will need to install at least one of Flax, PyTorch, or TensorFlow.
1216
+ Please refer to [TensorFlow installation page](https://www.tensorflow.org/install/), [PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally) and/or [Flax](https://github.com/google/flax#quick-install) and [Jax](https://github.com/google/jax#installation) installation pages regarding the specific installation command for your platform.
1217
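+ A minimal sketch of these two steps on Linux/macOS, assuming the CPU build of PyTorch is an acceptable backend for your platform:
+ ```bash
+ # create and activate a virtual environment
+ python -m venv .env
+ source .env/bin/activate
+ 
+ # install one backend, here PyTorch
+ pip install torch
+ ```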
+
1218
+ When one of those backends has been installed, Transformers can be installed using pip as follows:
1219
+
1220
+ ```bash
1221
+ pip install transformers
1222
+ ```
1223
+
1224
+ If you'd like to play with the examples or need the bleeding edge of the code and can't wait for a new release, you must [install the library from source](https://huggingface.co/docs/transformers/installation#installing-from-source).
1225
+
1226
+ ### With conda
1227
+
1228
+ Transformers can be installed using conda as follows:
1229
+
1230
+ ```shell script
1231
+ conda install conda-forge::transformers
1232
+ ```
1233
+
1234
+ > **_NOTE:_** Installing `transformers` from the `huggingface` channel is deprecated.
1235
+
1236
+ Follow the installation pages of Flax, PyTorch or TensorFlow to see how to install them with conda.
1237
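+ As a hedged example (check the PyTorch installation page for the currently recommended channel), installing the PyTorch backend with conda might look like:
+ ```shell script
+ conda install pytorch -c pytorch
+ ```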
+
1238
+ > **_NOTE:_** On Windows, you may be prompted to activate Developer Mode in order to benefit from caching. If this is not an option for you, please let us know in [this issue](https://github.com/huggingface/huggingface_hub/issues/1062).",packagemanager,"Plan1. With pip
1239
+ requirements >> This repository is tested on Python 3.8+, Flax 0.4.1+, PyTorch 1.11+, and TensorFlow 2.6+.
1240
+ step1. install __ Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html).(extra information) If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
1241
+ step2. create a virtual environment with the version of Python you're going to use and activate it.
1242
+ step3. install at least one of Flax, PyTorch, or TensorFlow.
1243
+ extrainformation. Please refer to [TensorFlow installation page](https://www.tensorflow.org/install/), [PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally) and/or [Flax](https://github.com/google/flax#quick-install) and [Jax](https://github.com/google/jax#installation) installation pages regarding the specific installation command for your platform.
1244
+ step4. When one of those backends has been installed, __ Transformers can be installed using pip as follows:
1245
+ ```bash
1246
+ pip install transformers
1247
+ ```
1248
+ extrainformation. If you'd like to play with the examples or need the bleeding edge of the code and can't wait for a new release, you must [install the library from source](https://huggingface.co/docs/transformers/installation#installing-from-source).
1249
+ plan2. With conda
1250
+ step1.
1251
+ ```shell script
1252
+ conda install conda-forge::transformers
1253
+ ```
1254
+ > **_NOTE:_** Installing `transformers` from the `huggingface` channel is deprecated.
1255
+ Follow the installation pages of Flax, PyTorch or TensorFlow to see how to install them with conda.
1256
+ > **_NOTE:_** On Windows, you may be prompted to activate Developer Mode in order to benefit from caching. If this is not an option for you, please let us know in [this issue](https://github.com/huggingface/huggingface_hub/issues/1062).",,,
1257
+ ,,https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md,,120270,greatest,,,,,"## Quick Install
1258
+
1259
+ With pip:
1260
+ ```bash
1261
+ pip install langchain
1262
+ ```
1263
+
1264
+ With conda:
1265
+ ```bash
1266
+ conda install langchain -c conda-forge
1267
+ ```",packagemanager,"Plan1. With pip. Step1:
1268
+ ```bash
1269
+ pip install langchain
1270
+ ```
1271
+ Plan2. With conda. Step1:
1272
+ ```bash
1273
+ conda install langchain -c conda-forge
1274
+ ```",,,
1275
+ ,,https://raw.githubusercontent.com/divelab/DIG/dig-stable/README.md,,1912,topic,drug discovery,task libraries,,,"## Installation
1276
+
1277
+ ### Install from pip
1278
+ The key dependencies of DIG: Dive into Graphs are PyTorch (>=1.10.0), PyTorch Geometric (>=2.0.0), and RDKit.
1279
+
1280
+ 1. Install [PyTorch](https://pytorch.org/get-started/locally/) (>=1.10.0)
1281
+
1282
+ ```shell script
1283
+ $ python -c ""import torch; print(torch.__version__)""
1284
+ >>> 1.10.0
1285
+ ```
1286
+
1287
+
1288
+
1289
+
1290
+ 2. Install [PyG](https://pytorch-geometric.readthedocs.io/en/latest/notes/installation.html#) (>=2.0.0)
1291
+
1292
+ ```shell script
1293
+ $ python -c ""import torch_geometric; print(torch_geometric.__version__)""
1294
+ >>> 2.0.0
1295
+ ```
1296
+
1297
+ 3. Install DIG: Dive into Graphs.
1298
+
1299
+ ```shell script
1300
+ pip install dive-into-graphs
1301
+ ```
1302
+
1303
+
1304
+ After installation, you can check the version. You have successfully installed DIG: Dive into Graphs if no error occurs.
1305
+
1306
+ ``` shell script
1307
+ $ python
1308
+ >>> from dig.version import __version__
1309
+ >>> print(__version__)
1310
+ ```
1311
+
1312
+ ### Install from source
1313
+ If you want to try the latest features that have not been released yet, you can install dig from source.
1314
+
1315
+ ```shell script
1316
+ git clone https://github.com/divelab/DIG.git
1317
+ cd DIG
1318
+ pip install .
1319
+ ```",packagemanager,"step 1. Install from pip
1320
+ The key dependencies of DIG: Dive into Graphs are PyTorch (>=1.10.0), PyTorch Geometric (>=2.0.0), and RDKit.
1321
+
1322
+ 1. Install [PyTorch](https://pytorch.org/get-started/locally/) (>=1.10.0)
1323
+
1324
+ ```shell script
1325
+ $ python -c ""import torch; print(torch.__version__)""
1326
+ >>> 1.10.0
1327
+ ```
1328
+
1329
+
1330
+
1331
+
1332
+ 2. Install [PyG](https://pytorch-geometric.readthedocs.io/en/latest/notes/installation.html#) (>=2.0.0)
1333
+
1334
+ ```shell script
1335
+ $ python -c ""import torch_geometric; print(torch_geometric.__version__)""
1336
+ >>> 2.0.0
1337
+ ```
1338
+
1339
+ 3. Install DIG: Dive into Graphs.
1340
+
1341
+ ```shell script
1342
+ pip install dive-into-graphs
1343
+ ```
1344
+
1345
+
1346
+ After installation, you can check the version. You have successfully installed DIG: Dive into Graphs if no error occurs.
1347
+
1348
+ ``` shell script
1349
+ $ python
1350
+ >>> from dig.version import __version__
1351
+ >>> print(__version__)
1352
+ ```
1353
+
1354
+ ### Install from source
1355
+ If you want to try the latest features that have not been released yet, you can install dig from source.
1356
+
1357
+ ```shell script
1358
+ git clone https://github.com/divelab/DIG.git
1359
+ cd DIG
1360
+ pip install .
1361
+ ```",,,