carlosug committed on
Commit
d377668
1 Parent(s): 43de084

Update golden_last.csv

Files changed (1)
  1. golden_last.csv +177 -253
golden_last.csv CHANGED
@@ -1,6 +1,6 @@
1
- software,repo_name,readme_url,portal,stars,selection,categories,,date_collection,date_submission,content,plan,steps,seq_order,optional_steps,extra_info_optional
2
- vcr-video-representation-for-contextual,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/oronnir/VCR/main/README.md,machine_learning,1,latest,,,13/02/2024,12/2/24,,,,,,
3
- ensuring-trustworthy-and-ethical-behaviour-in,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/AAAI-DISIM-UnivAQ/DALI/master/README.md,machine_learning,15,latest,,,13/02/2024,12/2/24,"## Installation
4
 
5
  **OS X & Linux:**
6
  1. To download and install SICStus Prolog (it is needed), follow the instructions at https://sicstus.sics.se/download4.html.
@@ -23,32 +23,32 @@ bash startmas.sh
23
       You will see different windows opening:
24
  *      Prolog LINDA server (active_server_wi.pl)
25
  *      Prolog FIPA client (active_user_wi.pl)
26
- *      1 instance of DALI metainterpreter for each agent (active_dali_wi.pl)","binary, source","[Binary] 1. To download and install SICStus Prolog (it is needed), follow the instructions at https://sicstus.sics.se/download4.html.
27
- [Source] 2. Then, you can download DALI and test it by running an example DALI MAS:
28
  ```sh
29
  git clone https://github.com/AAAI-DISIM-UnivAQ/DALI.git
30
  cd DALI/Examples/advanced
31
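  # startmas.sh launches the example DALI multi-agent system (MAS)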
  bash startmas.sh
32
- ```","1,2","**Windows:**
33
  1. To download and install SICStus Prolog (it is needed), follow the instructions at https://sicstus.sics.se/download4.html.
34
  2. Then, you can download DALI from https://github.com/AAAI-DISIM-UnivAQ/DALI.git.
35
  3. Unzip the repository, go to the folder ""DALI/Examples/basic"", and test if DALI works by double-clicking the ""startmas.bat"" file (this will launch an example DALI MAS). \","You will see different windows opening:
36
  Prolog LINDA server (active_server_wi.pl)
37
  Prolog FIPA client (active_user_wi.pl)
38
  1 instance of DALI metainterpreter for each agent (active_dali_wi.pl)"
39
- synthesizing-sentiment-controlled-feedback,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/MIntelligence-Group/CMFeed/main/README.md,machine_learning,0,latest,,,13/02/2024,12/2/24,,,,,,
40
- only-the-curve-shape-matters-training,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/cfeng783/GTT/main/README.md,machine_learning,0,latest,,,13/02/2024,12/2/24,"## Getting Started
41
 
42
  #### Install dependencies (with python 3.10)
43
 
44
  ```shell
45
  pip install -r requirements.txt
46
- ```",source,1. Install dependencies with pip install -r requirements.txt,,,
47
- from-uncertainty-to-precision-enhancing,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/fer-agathe/calibration_binary_classifier/main/README.md,machine_learning,0,latest,,,13/02/2024,12/2/24,,,,,,
48
- stochastic-gradient-flow-dynamics-of-test,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/rodsveiga/sgf_dyn/main/README.md,machine_learning,,,,,13/02/2024,12/2/24,,,,,,
49
- accuracy-of-textfooler-black-box-adversarial,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/zero-one-loss/wordcnn01/main/LICENSE*,machine_learning,0,,,,13/02/2024,12/2/24,,,,,,
50
- differentially-private-decentralized-learning-1,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/totilas/DPrandomwalk/main/README.md,machine_learning,,,,,13/02/2024,12/2/24,,,,,,
51
- aydiv-adaptable-yielding-3d-object-detection,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/sanjay-810/AYDIV2/main/README.md,machine_learning,1,,,,13/02/2024,12/2/24,"### **Installation**
52
  1. Prepare for the running environment.
53
 
54
  You can use the docker image provided by [`OpenPCDet`](https://github.com/open-mmlab/OpenPCDet). Our experiments are based on the
@@ -91,16 +91,12 @@ aydiv-adaptable-yielding-3d-object-detection,https://paperwithcode.com/paper/,ht
91
  cd pcdet/ops/iou3d/cuda_op
92
  python setup.py develop
93
  cd ../../../..
94
- ```","source,docker","[source]: step1. Prepare for the running environment.
95
- step2. prepare for the data:```
96
- cd Aydiv
97
- python depth_to_lidar.py
98
- ```
99
- [docker]: step1. You can use the docker image provided by [`OpenPCDet`](https://github.com/open-mmlab/OpenPCDet)","1,2",,
100
- cartesian-atomic-cluster-expansion-for,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/BingqingCheng/cace/main/README.md,machine_learning,4,latest,,,13/02/2024,12/2/24,"## Installation
101
 
102
- Please refer to the `setup.py` file for installation instructions.",source,[source] step1. please refer to the `setup.py` file for installation instructions.,,,
103
- teller-a-trustworthy-framework-for,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/less-and-less-bugs/Trust_TELLER/main/README.md,machine_learning,1,latest,,,13/02/2024,12/2/24,"## Getting Started
104
 
105
  Step 1: Download the dataset folder from onedrive by [data.zip](https://portland-my.sharepoint.com/:u:/g/personal/liuhui3-c_my_cityu_edu_hk/EfApQlFP3PhFjUW4527STo0BALMdP16zs-HPMNgwQVFWsA?e=zoHlW2). Unzip this folder into the project directory. You can find four original datasets, pre-processed datasets (i.e., val.jsonl, test.jsonl, train.jsonl in each dataset folder) and the files including questions and answers
106
 
@@ -108,13 +104,13 @@ Step 2: Place you OpenAI key into the file named api_key.txt.
108
 
109
  ```
110
  openai.api_key = """"
111
- ```",binary,"[binary] step1: Download the dataset folder from onedrive by https://portland-my.sharepoint.com/:u:/g/personal/liuhui3-c_my_cityu_edu_hk/EfApQlFP3PhFjUW4527STo0BALMdP16zs-HPMNgwQVFWsA?e=zoHlW2.
112
  step2. Unzip this folder into the project directory.
113
  step3. Place your OpenAI key into the file named api_key.txt.
114
  ```
115
  openai.api_key = """"
116
- ```","1,2,3",,
117
- continuous-time-radar-inertial-and-lidar,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/utiasASRL/steam_icp/master/README.md,computer_science,77,latest,robotics,,13/02/2024,9/2/24,"## Installation
118
 
119
  Clone this repository and its submodules.
120
 
@@ -145,19 +141,14 @@ docker run -it --name steam_icp \
145
 
146
  ```bash
147
  bash build.sh
148
- ```",source,"[source] step1. Clone this repository and its submodules.
149
- step2. We use docker to install dependencies. The recommended way to build the docker image is
150
- ```bash
151
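- # the build args pass your UID/GID/username so files created in the container match your host user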
- docker build -t steam_icp \
152
  --build-arg USERID=$(id -u) \
153
  --build-arg GROUPID=$(id -g) \
154
  --build-arg USERNAME=$(whoami) \
155
  --build-arg HOMEDIR=${HOME} .
 
 
156
  ```
157
- step3. When starting a container, remember to mount the code, dataset, and output directories to proper locations in the container.
158
- An example command to start a docker container with the image is
159
-
160
- ```bash
161
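  # example run: --privileged and --network=host expose devices and host networking; the -v flags mount X11 and your home directory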
  docker run -it --name steam_icp \
162
  --privileged \
163
  --network=host \
@@ -165,14 +156,12 @@ docker run -it --name steam_icp \
165
  -v /tmp/.X11-unix:/tmp/.X11-unix \
166
  -v ${HOME}:${HOME}:rw \
167
  steam_icp
168
-
169
  step4.(Inside Container) Go to the root directory of this repository and build STEAM-ICP
170
-
171
  ```bash
172
  bash build.sh
173
- ```","1,2,3,4",,
174
- towards-a-thermodynamical-deep-learning,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/fedezocco/ThermoVisMedRob/main/README.md,computer_science,0,latest,robotics,,13/02/2024,8/2/24,,,,,,
175
- robust-parameter-fitting-to-realistic-network,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/PFischbeck/parameter-fitting-experiments/main/Readme.md,computer_science,0,latest,Social and Information Networks Data Structures and Algorithms,,13/02/2024,8/2/24,"# Installation
176
 
177
  - Make sure you have Python, Pip and R installed.
178
  - Checkout this repository
@@ -191,7 +180,7 @@ R -e 'install.packages(c(""ggplot2"", ""reshape2"", ""plyr"", ""dplyr"", ""scale
191
  ```
192
 
193
  - Download the file `konect-data.zip` from [Zenodo](https://doi.org/10.5281/zenodo.10629451) and extract its contents into the folder `input_data/konect`
194
- - Optional: Download the file `output-data.zip` from [Zenodo](https://doi.org/10.5281/zenodo.10629451) and extract its contents into the folder `output_data`. This way, you can access all experiment results without running them yourself.",source,"[source] step1. Make sure you have Python, Pip and R installed.
195
  step2. Checkout this repository
196
  step3. Install the python dependencies with
197
  ```
@@ -203,14 +192,13 @@ step5. Install the R dependencies (used for plots) with
203
  R -e 'install.packages(c(""ggplot2"", ""reshape2"", ""plyr"", ""dplyr"", ""scales""), repos=""https://cloud.r-project.org/"")'
204
  ```
205
  step6. Download the file `konect-data.zip` from [Zenodo](https://doi.org/10.5281/zenodo.10629451) and extract its contents into the folder `input_data/konect`
206
- step7. Optional: Download the file `output-data.zip` from [Zenodo](https://doi.org/10.5281/zenodo.10629451) and extract its contents into the folder `output_data`. This way, you can access all experiment results without running them yourself.","1,2,3,4,5,6,7","step7. Optional: Download the file `output-data.zip` from [Zenodo](https://doi.org/10.5281/zenodo.10629451) and extract its contents into the folder `output_data`. This way, you can access all experiment results without running them yourself.",
207
- get-tok-a-genai-enriched-multimodal-tiktok,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/gabbypinto/GET-Tok-Peru/main/README.md,computer_science,1,latest,Social and Information Networks Computers and Society Human-Computer Interaction ,,13/02/2024,8/2/24,"## Installation
208
  pip install -r requirements.txt
209
 
210
- *Note: I did not use a virtual environment so the packages in the requirements.txt file are probably not reflective of all the packages used in this project. If some issues pop up please don't hesitate to email me at: gpinto@usc.edu*",packagemanager,"step1.
211
- pip install -r requirements.txt ",,,*Note: I did not use a virtual environment so the packages in the requirements.txt file are probably not reflective of all the packages used in this project. If some issues pop up please don't hesitate to email me at: gpinto@usc.edu*
212
- a-longitudinal-study-of-italian-and-french,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/orsoFra/LS_FRIT_UKR/main/README.md,computer_science,0,latest,Social and Information Networks Computers and Society,,13/02/2024,7/2/24,,,,,,
213
- geometric-slosh-free-tracking-for-robotic,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/jonarriza96/gsft/main/README.md,computer_science,1,latest,robotics,,13/02/2024,7/2/24,"## Installation
214
 
215
  ### Dependencies
216
 
@@ -236,13 +224,12 @@ To ensure that Python paths are properly defined, update the `~/.bashrc` by addi
236
  ```
237
  export GSFT_PATH=/path_to_gsfc
238
  export PYTHONPATH=$PYTHONPATH:/$GSFT_PATH
239
- ```",source,"[source] step1. Check dependencies
240
- step2. Initialize git submodules with
241
  ```
242
  git submodule init
243
  git submodule update
244
  ```
245
- step3. Install the specific versions of every package from `requirements.txt` in a new conda environment:
246
  ```
247
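  # create a fresh conda environment pinned to Python 3.9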
  conda create --name gsft python=3.9
248
  conda activate gsft
@@ -252,16 +239,16 @@ step4. Create variables to ensure that Python paths are properly defined, update
252
  ```
253
  export GSFT_PATH=/path_to_gsfc
254
  export PYTHONPATH=$PYTHONPATH:/$GSFT_PATH
255
- ```","1,2,3,4",,
256
- real-time-line-based-room-segmentation-and,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/EricssonResearch/Line-Based-Room-Segmentation-and-EDF/release/README.md,computer_science,0,latest,robotics,,13/02/2024,7/2/24,"## Installation
257
  The project can be installed by running the following command in your terminal:
258
  ```bash
259
  pip install -r requirements.txt
260
- ```",source,"[source] step1. Run the command in your terminal:
261
  ```
262
  pip install -r requirements.txt
263
- ```",1,,
264
- viga,https://bio.tools/,https://raw.githubusercontent.com/viralInformatics/VIGA/master/README.md,,7,https://bio.tools/t?sort=citationDate&ord=desc,,command_line,13/02/2024,last week,"## Installation
265
 
266
  ### Step1: Download VIGA
267
 
@@ -317,74 +304,70 @@ Base on python 3.6.8
317
  ```
318
  pip install pandas=1.1.5 numpy=1.19.5 matplotlib=3.3.4 biopython=1.79
319
  ```
320
- ",source,"[source] step1. Download VIGA with Git from GitHub:
321
  ```
322
  git clone https://github.com/viralInformatics/VIGA.git
323
- ```
324
- or Download ZIP to local
325
  step2.download Database:
326
  step2.1.download taxdmp.zip: https://ftp.ncbi.nlm.nih.gov/pub/taxonomy/ and unzip taxdmp.zip and put it in ./db/
327
  step2.2.download ""prot.accession2taxid"" file from https://ftp.ncbi.nlm.nih.gov/pub/taxonomy/accession2taxid/
328
  step2.3.download ""RefSeqVirusProtein"" file from
329
- wget -c ftp.ncbi.nlm.nih.gov/refseq/release/viral/viral.1.protein.faa.gz
330
  gzip -d viral.1.protein.faa.gz
331
- mv viral.1.protein.faa RefSeqVirusProtein
332
- step2.4. download ""nr"" file from
333
  wget -c ftp://ftp.ncbi.nlm.nih.gov/blast/db/FASTA/nr.gz
334
  or ascp -T -i asperaweb_id_dsa.openssh --host=ftp.ncbi.nih.gov --user=anonftp --mode=recv /blast/db/FASTA/nr.gz ./
335
- gzip -d nr.gz
336
- step2.5.use Diamond v2.0.11.149 to create two separate databases as the indexing libraries in the current version are incompatible with each other.
337
  step2.6.In order to set up a reference database for DIAMOND, the makedb command needs to be executed with the following command line:
338
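  # --taxonmap and --taxonnodes attach NCBI taxonomy information to the DIAMOND databases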
  diamond makedb --in YourPath/RefSeqVirusProtein -d Diamond_RefSeqVirusProtein --taxonmap YourPath/prot.accession2taxid --taxonnodes YourPath/nodes.dmp
339
  diamond makedb --in nr -d Dimond_nr --taxonmap YourPath/prot.accession2taxid --taxonnodes YourPath/nodes.dmp
340
  ```
341
- step3. installation of dependent software
342
- step3.1. installing Some Software Using Conda
343
  ```
344
  conda install fastp=0.12.4 trinity=2.8.5 diamond=2.0.11.149 ragtag=2.1.0 quast=5.0.2
345
  ```
346
- step3.2. manual Installation of MetaCompass
347
- https://github.com/marbl/MetaCompass
348
- step4: install Python Dependencies
349
- step4.1. based on Python 3.6.8
350
  ```
351
  pip install pandas=1.1.5 numpy=1.19.5 matplotlib=3.3.4 biopython=1.79
352
- ```","1,2,3,4,5,6",,
353
- lncrtpred,https://bio.tools/,https://raw.githubusercontent.com/zglabDIB/LncRTPred/main/README.md,,,https://bio.tools/t?sort=citationDate&ord=desc,,command_line,13/02/2024,8 months,,,,,,
354
- nrn-ez,https://bio.tools/,https://raw.githubusercontent.com/scimemia/NRN-EZ/master/README.md,,,,,Script,13/02/2024,last week,"**INSTALLATION FOR VERSION 1.1.6**
 
355
 
356
  NRN-EZ was built with PyInstaller 3.6, and requires the following languages and libraries:
357
 
358
- Python 3.6.9 and higher (currently up to 3.10)
359
 
360
- PyQt 5.10.1
361
 
362
- PyQtGraph 0.11.0
363
 
364
  Installation instructions for Linux (Ubuntu and Pop!_OS): download the Linux zip file and, from the command window, run a bash command for the install.sh file, in the corresponding installation folder.
365
 
366
  Installation instructions for Mac OS: download the Mac zip file and copy the NRN-EZ app to the Applications folder.
367
 
368
- Installation instructions for Windows: download the Win zip file and run the installation wizard.",binary,"[binary] step1. install requirements:
369
  Python 3.6.9 and higher (currently up to 3.10)
370
  PyQt 5.10.1
371
  PyQtGraph 0.11.0
372
- step2. for Linux: download the Linux zip file and, from the command window, run a bash command for the install.sh file, in the corresponding installation folder. ",,"2. for Linux: download the Linux zip file and, from the command window, run a bash command for the install.sh file, in the corresponding installation folder.
373
  2. for Mac OS: download the Mac zip file and copy the NRN-EZ app to the Applications folder.
374
- 2. for Windows: download the Win zip file and run the installation wizard.",
375
- causnet,https://bio.tools/,https://raw.githubusercontent.com/nand1155/CausNet/main/README.md,,0,https://bio.tools/t?sort=citationDate&ord=desc,,Library,13/02/2024,two years,"## Installation
376
 
377
  You can install the development version from GitHub with:
378
 
379
  ``` r
380
  require(""devtools"")
381
  install_github(""https://github.com/nand1155/CausNet"")
382
- ```",source,"[source]: step1.install the development version from GitHub with:
383
  ``` r
384
  require(""devtools"")
385
  install_github(""https://github.com/nand1155/CausNet"")
386
- ```",,,
387
- viralcc,https://bio.tools/,https://raw.githubusercontent.com/dyxstat/Reproduce_ViralCC/main/README.md,,0,https://bio.tools/t?sort=citationDate&ord=desc,,command_line,13/02/2024,8 months,"""# Instruction of reproducing results in ViralCC paper
388
  We take the cow fecal datasets for example. The other two datasets were processed following the same procedure.
389
 
390
  Scripts to process the intermediate data and plot figures are available in the folder [Scripts](https://github.com/dyxstat/Reproduce_ViralCC/tree/main/Scripts).
@@ -463,59 +446,52 @@ python ./viralcc.py pipeline -v final.contigs.fa COW_MAP_SORTED.bam viral.txt ou
463
  ```
464
  python concatenation.py -p out_cow/VIRAL_BIN -o viralCC_cow_bins.fa
465
  checkv end_to_end viralCC_cow_bins.fa output_checkv_viralcc_cow -t 16 -d /panfs/qcb-panasas/yuxuandu/checkv-db-v1.0
466
- ```""",source,"[source] step1. download and preprocess the raw data
467
- Note: NCBI may update its links for downloading the database. Please check the latest link at [NCBI](https://www.ncbi.nlm.nih.gov/) if you encounter a download error.
468
  ```
469
  wget https://sra-downloadb.be-md.ncbi.nlm.nih.gov/sos2/sra-pub-run-13/ERR2282092/ERR2282092.1
470
  wget https://sra-downloadb.be-md.ncbi.nlm.nih.gov/sos2/sra-pub-run-13/ERR2530126/ERR2530126.1
471
  wget https://sra-downloadb.be-md.ncbi.nlm.nih.gov/sos2/sra-pub-run-13/ERR2530127/ERR2530127.1
472
-
473
  fastq-dump --split-files --gzip ERR2282092.1
474
  fastq-dump --split-files --gzip ERR2530126.1
475
  fastq-dump --split-files --gzip ERR2530127.1
476
-
477
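  # bbduk.sh trims adapters (ref=adapters.fa, ktrim=r) and drops reads shorter than 50 bp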
  bbduk.sh in1=ERR2282092.1_1.fastq.gz in2=ERR2282092.1_2.fastq.gz out1=COWSG1_AQ.fastq.gz out2=COWSG2_AQ.fastq.gz ref=/home1/yuxuandu/cmb/SOFTWARE/bbmap/resources/adapters.fa ktrim=r k=23 mink=11 hdist=1 minlen=50 tpe tbo
478
  bbduk.sh in1=ERR2530126.1_1.fastq.gz in2=ERR2530126.1_2.fastq.gz out1=S3HIC1_AQ.fastq.gz out2=S3HIC2_AQ.fastq.gz ref=/home1/yuxuandu/cmb/SOFTWARE/bbmap/resources/adapters.fa ktrim=r k=23 mink=11 hdist=1 minlen=50 tpe tbo
479
  bbduk.sh in1=ERR2530127.1_1.fastq.gz in2=ERR2530127.1_2.fastq.gz out1=M1HIC1_AQ.fastq.gz out2=M1HIC2_AQ.fastq.gz ref=/home1/yuxuandu/cmb/SOFTWARE/bbmap/resources/adapters.fa ktrim=r k=23 mink=11 hdist=1 minlen=50 tpe tbo
480
-
481
  bbduk.sh in1=S3HIC1_AQ.fastq.gz in2=S3HIC2_AQ.fastq.gz out1=S3HIC1_CL.fastq.gz out2=S3HIC2_CL.fastq.gz trimq=10 qtrim=r ftm=5 minlen=50
482
  bbduk.sh in1=M1HIC1_AQ.fastq.gz in2=M1HIC2_AQ.fastq.gz out1=M1HIC1_CL.fastq.gz out2=M1HIC2_CL.fastq.gz trimq=10 qtrim=r ftm=5 minlen=50
483
  bbduk.sh in1=COWSG1_AQ.fastq.gz in2=COWSG2_AQ.fastq.gz out1=COWSG1_CL.fastq.gz out2=COWSG2_CL.fastq.gz trimq=10 qtrim=r ftm=5 minlen=50
484
-
485
  bbduk.sh in1=S3HIC1_CL.fastq.gz in2=S3HIC2_CL.fastq.gz out1=S3HIC1_trim.fastq.gz out2=S3HIC2_trim.fastq.gz ftl=10
486
  bbduk.sh in1=M1HIC1_CL.fastq.gz in2=M1HIC2_CL.fastq.gz out1=M1HIC1_trim.fastq.gz out2=M1HIC2_trim.fastq.gz ftl=10
487
-
488
  clumpify.sh in1=S3HIC1_trim.fastq.gz in2=S3HIC2_trim.fastq.gz out1=S3HIC1_dedup.fastq.gz out2=S3HIC2_dedup.fastq.gz dedupe
489
  clumpify.sh in1=M1HIC1_trim.fastq.gz in2=M1HIC2_trim.fastq.gz out1=M1HIC1_dedup.fastq.gz out2=M1HIC2_dedup.fastq.gz dedupe
490
  cat S3HIC1_dedup.fastq.gz M1HIC1_dedup.fastq.gz > HIC1.fastq.gz
491
  cat S3HIC2_dedup.fastq.gz M1HIC2_dedup.fastq.gz > HIC2.fastq.gz
492
  ```
493
- step2. assemble contigs and align processed Hi-C reads to contigs
494
  ```
495
  megahit -1 COWSG1_CL.fastq.gz -2 COWSG2_CL.fastq.gz -o COW_ASSEMBLY --min-contig-len 1000 --k-min 21 --k-max 141 --k-step 12 --merge-level 20,0.95
496
-
497
  bwa index final.contigs.fa
498
  bwa mem -5SP final.contigs.fa HIC1.fastq.gz HIC2.fastq.gz > COW_MAP.sam
499
  samtools view -F 0x904 -bS COW_MAP.sam > COW_MAP_UNSORTED.bam
500
  samtools sort -n COW_MAP_UNSORTED.bam -o COW_MAP_SORTED.bam
501
  ```
502
- step3. identify viral contigs from assembled contigs
503
  ```
504
  perl removesmalls.pl 3000 final.contigs.fa > cow_3000.fa
505
  wrapper_phage_contigs_sorter_iPlant.pl -f cow_3000.fa --db 1 --wdir output_directory --ncpu 16 --data-dir /panfs/qcb-panasas/yuxuandu/virsorter-data
506
  Rscript find_viral_contig.R
507
  ```
508
- step4. run ViralCC
509
  ```
510
  python ./viralcc.py pipeline -v final.contigs.fa COW_MAP_SORTED.bam viral.txt out_cow
511
  ```
512
- step5. evaluate draft viral genomes using CheckV
513
  ```
514
  python concatenation.py -p out_cow/VIRAL_BIN -o viralCC_cow_bins.fa
515
  checkv end_to_end viralCC_cow_bins.fa output_checkv_viralcc_cow -t 16 -d /panfs/qcb-panasas/yuxuandu/checkv-db-v1.0
516
- ```",,,
517
- DRaW,https://bio.tools/,https://raw.githubusercontent.com/BioinformaticsIASBS/DRaW/main/README.md,,0,https://bio.tools/t?sort=citationDate&ord=desc,,,,,"# Running DRaW on COVID-19 datasets
518
- The DRaW has been applied on three COVID-19 datasets, DS1, DS2, and DS3. There are three subdirectories, DS1_repur, DS2_repur, and DS3_repur, in the Drug-Repurposing directory. Each subdirectory has been assigned to one of the mentioned datasets. We put the DRaW implementation file for each dataset in each subdirectory separately, to keep the corresponding hyperparameters of each dataset.
519
  We use Adam as the optimizer with a learning rate equal to 0.001, beta1 = 0.9, beta2 = 0.999, and epsilon = 1e_7. The dropout rate is set to 0.5. The batch size is chosen by the number of samples per dataset. This hyperparameter for DS1 is equal to 8, and those for DS2 and DS3 are set to 32.
520
  To run the model, it is enough to execute the ""Drug-Repurposing.py"" script in the command line. After that, execute ""score.py"". The repurposed drugs will be stored in the ""meanScore.csv"" spreadsheet. It contains the average ranking of each drug. The lower, the better. For example, to run the DRaW on DS1:
521
  ```bash
@@ -525,40 +501,40 @@ python score.py
525
  ```
526
  Same goes for other datasets. Just change the directory path.
527
  # Performance analysis
528
- In order to analyze the performance, there is one extra directory in the root, Performance_analysis. By running the following command, the model is trained on a given dataset and returns its performance metrics: AUC-ROC, AUPR, F1 score, etc.
529
- The input parameter dataset_name is one of the following five dataset names. The first one is COVID-19 DS3 and the other four are golden benchmarks.
530
  'DS3','ic','nr','gpcr','e'
531
 
532
  ```bash
533
  cd Performance_analysis
534
  python main.py dataset_name
535
- ```",source,"[source]step1. execute the ""Drug-Repurposing.py"" script in the command line. step2. after that, execute ""score.py"". The repurposed drugs will be stored in the ""meanScore.csv"" spreadsheet. It contains the average ranking of each drug. The lower, the better. For example, to run the DRaW on DS1:
536
  ```bash
537
  cd Drug-Repurposing\DS1_repur
538
  python Drug-Repurposing.py
539
  python score.py
540
- ```",,,
541
- NRN-EZ,https://bio.tools/,https://raw.githubusercontent.com/scimemia/NRN-EZ/master/README.md,,,,,,,6 months,"**INSTALLATION FOR VERSION 1.1.6**
542
 
543
  NRN-EZ was built with PyInstaller 3.6, and requires the following languages and libraries:
544
 
545
- Python 3.6.9 and higher (currently up to 3.10)
546
 
547
- PyQt 5.10.1
548
 
549
- PyQtGraph 0.11.0
550
 
551
  Installation instructions for Linux (Ubuntu and Pop!_OS): download the Linux zip file and, from the command window, run a bash command for the install.sh file, in the corresponding installation folder.
552
 
553
  Installation instructions for Mac OS: download the Mac zip file and copy the NRN-EZ app to the Applications folder.
554
 
555
- Installation instructions for Windows: download the Win zip file and run the installation wizard.",source,"[source] step1. install the requirements: Python 3.6.9 and higher (currently up to 3.10), PyQt 5.10.1, PyQtGraph 0.11.0
556
  step2. for Linux: download the Linux zip file and, from the command window, run a bash command for the install.sh file, in the corresponding installation folder.
557
  step2. for Mac OS: download the Mac zip file and copy the NRN-EZ app to the Applications folder.
558
- step2. for Windows: download the Win zip file and run the installation wizard.",,"step2. for Linux: download the Linux zip file and, from the command window, run a bash command for the install.sh file, in the corresponding installation folder.
559
  step2. for Mac OS: download the Mac zip file and copy the NRN-EZ app to the Applications folder.
560
  step2. for Windows: download the Win zip file and run the installation wizard.",
561
- guiding-instruction-based-image-editing-via,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/apple/ml-mgie/main/README.md,computer_science,3089,top,,,13/02/2024,29/09/2023,"## Requirements
562
  ```
563
  conda create -n mgie python=3.10 -y
564
  conda activate mgie
@@ -584,7 +560,7 @@ pip install -U pydrive2 gdown wget
584
  cd ..
585
  cp mgie_llava.py LLaVA/llava/model/llava.py
586
  cp mgie_train.py LLaVA/llava/train/train.py
587
- ```",source,"[source] step1. create conda environment ```
588
  conda create -n mgie python=3.10 -y
589
  conda activate mgie
590
  conda update -n base -c defaults conda setuptools -y
@@ -608,8 +584,8 @@ pip install -U pydrive2 gdown wget
608
  cd ..
609
  cp mgie_llava.py LLaVA/llava/model/llava.py
610
  cp mgie_train.py LLaVA/llava/train/train.py
611
- ```","1,2,3,4",,
612
- self-play-fine-tuning-converts-weak-language,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/uclaml/SPIN/main/README.md,computer_science,430,top,,,13/02/2024,2/1/24,"## Setup
613
  The following steps provide the necessary setup to run our codes.
614
  1. Create a Python virtual environment with Conda:
615
  ```
@@ -628,7 +604,7 @@ python -m pip install flash-attn --no-build-isolation
628
  4. Login to your huggingface account for downloading models
629
  ```
630
  huggingface-cli login --token ""${your_access_token}""
631
- ```",source,"step1.create a Python virtual environment with Conda:
632
  ```
633
  conda create -n myenv python=3.10
634
  conda activate myenv
@@ -645,8 +621,8 @@ python -m pip install flash-attn --no-build-isolation
645
  step4.login to your huggingface account for downloading models
646
  ```
647
  huggingface-cli login --token ""${your_access_token}""
648
- ```",,,
649
- genegpt-teaching-large-language-models-to-use,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/ncbi/GeneGPT/main/README.md,computer_science,214,top,,,13/02/2024,19/04/2023,"# Requirements
650
 
651
  The code has been tested with Python 3.9.13. Please first install the required packages by:
652
  ```bash
@@ -671,12 +647,11 @@ To run GeneGPT-slim, simply use:
671
  ```bash
672
  python main.py 001001
673
  ```
674
- which will only use the Dm.1 and Dm.4 for in-context learning.",source,"step1.install requirements:
675
- The code has been tested with Python 3.9.13. Please first install the required packages by:
676
  ```bash
677
  pip install -r requirements.txt
678
  ```
679
- step2.set OpenAI API key to run GeneGPT with Codex. Replace the placeholder with your key in `config.py`:
680
  ```bash
681
  $ cat config.py
682
  API_KEY = 'YOUR_OPENAI_API_KEY'
@@ -687,13 +662,12 @@ After setting up the environment, one can run GeneGPT on GeneTuring by:
687
  python main.py 111111
688
  ```
689
  where `111111` denotes that all Documentations (Dc.1-2) and Demonstrations (Dm.1-4) are used.
690
- To run GeneGPT-slim, simply use:
691
  ```bash
692
  python main.py 001001
693
- ```
694
- which will only use the Dm.1 and Dm.4 for in-context learning.",,,
695
- the-boundary-of-neural-network-trainability,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/Sohl-Dickstein/fractal/main/README.md,computer_science,41,top,,,13/02/2024,9/2/24,,,,,,
696
- learning-to-fly-in-seconds,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/arplaboratory/learning-to-fly/master/README.MD,computer_science,201,top,,,13/02/2024,22/11/2023,"## Instructions to run the code
697
  ### Docker (isolated)
698
  We provide a pre-built Docker image with a simple web interface that can be executed using a single command (given that Docker is already installed on your machine):
699
  ```
@@ -829,17 +803,17 @@ docker run -it --rm -v $(pwd)/checkpoints/multirotor_td3/2023_11_16_14_46_38_d+o
829
  This should build the firmware using the newly trained policy and output the binary to `build_firmware/cf2.bin`. After that we can use the `cfclient` package to flash the firmware (find the installation instructions [here](https://www.bitcraze.io/documentation/repository/crazyflie-clients-python/master/installation/install/))
830
  ```
831
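  # flash the firmware binary to the Crazyflie over the radio link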
  cfloader flash build_firmware/cf2.bin stm32-fw -w radio://0/80/2M
832
- ```","source,docker","[plan1. Docker (isolated)]
833
  step1: Execute a single command (given that Docker is already installed on your machine):
834
  ```
835
  docker run -it --rm -p 8000:8000 arpllab/learning_to_fly
836
  ```
837
- step2. the container is now running. step3. navigate to [https://0.0.0.0:8000](https://0.0.0.0:8000). step4. you should see something like the following (after starting the training):
838
  <div align=""center"">
839
  <img src=""https://github.com/arplaboratory/learning_to_fly_media/blob/master/simulator_screenshot.png"" />
840
  </div>
841
  Note that to make this Docker image compatible with a broad range of CPUs, some optimizations have been turned off. For full speed we recommend a [Native installation](#Native-installation).
842
- [Docker installation (isolated)]
843
  step1. install Docker on your machine. step2. then move to the original directory `learning_to_fly`. step3. build the Docker image:
844
  ```
845
  docker build -t arpllab/learning_to_fly .
@@ -860,7 +834,7 @@ docker run -it --rm -p 6006:6006 arpllab/learning_to_fly training_headless
860
  ```
861
  Navigate to [https://0.0.0.0:6006](https://0.0.0.0:6006) with your browser to investigate the Tensorboard logs.
862
 
863
- [plan2]. Native installation
864
  step1. clone this repository:
865
  ```
866
  git clone https://github.com/arplaboratory/learning-to-fly learning_to_fly
@@ -884,9 +858,7 @@ optional. As an alternative to openblas you can also install [Intel MKL](https:/
884
  ```
885
  brew install hdf5 protobuf boost
886
  ```
887
- Please make sure that `brew` links the libraries correctly. If not you might have to link e.g. `protobuf` manually using `brew link protobuf`.
888
-
889
- Going back to the main directory (`learning_to_fly`), we can now configure the build of the code:
890
  ```
891
  cd ../../
892
  mkdir build
@@ -895,13 +867,11 @@ cd build
895
  - Ubuntu + OpenBLAS: `cmake .. -DCMAKE_BUILD_TYPE=Release -DRL_TOOLS_BACKEND_ENABLE_OPENBLAS:BOOL=ON`
896
  - Ubuntu + MKL: `cmake .. -DCMAKE_BUILD_TYPE=Release -DRL_TOOLS_BACKEND_ENABLE_MKL:BOOL=ON`
897
  - macOS (tested on Sonoma): `cmake .. -DCMAKE_BUILD_TYPE=Release`
898
-
899
- Finally, we can build the targets:
900
  ```
901
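  # compile all targets with 8 parallel jobs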
  cmake --build . -j8
902
  ```
903
-
904
- After successfully building the targets, we can run the code (in the original directory `learning_to_fly`):
905
  ```
906
  cd ..
907
  ./build/src/training_headless
@@ -922,34 +892,17 @@ To run the training with the UI, we download the JavaScript dependencies in the
922
  cd src/ui
923
  ./get_dependencies.sh
924
  ```
925
-
926
- After that we can execute the UI binary from the root folder:
927
  ```
928
  cd ../../
929
  ./build/src/ui 0.0.0.0 8000
930
  ```
931
- Now you should be able to navigate to [http://0.0.0.0:8000](http://0.0.0.0:8000) in your browser and start the training.
932
-
933
- To run the benchmark (with UI, checkpointing and Tensorboard logging turned off):
934
  ```
935
  sudo nice -n -20 ./build/src/training_benchmark
936
- ```
937
-
938
- ## Deploying trained policies on a Crazyflie
939
- Train a policy, e.g. using the Docker image with the UI:
940
- ```
941
- docker run -it --rm -p 8000:8000 -v $(pwd)/checkpoints:/learning_to_fly/checkpoints arpllab/learning_to_fly
942
- ```
943
- The checkpoints are placed in the current working directory's `checkpoints` folder. Inspect the logs of the container to find the path of the final log, e.g., `checkpoints/multirotor_td3/2023_11_16_14_46_38_d+o+a+r+h+c+f+w+e+_002/actor_000000000300000.h`.
944
- We can mount this file into the container `arpllab/learning_to_fly_build_firmware` for building the firmware, e.g.:
945
- ```
946
- docker run -it --rm -v $(pwd)/checkpoints/multirotor_td3/2023_11_16_14_46_38_d+o+a+r+h+c+f+w+e+_002/actor_000000000300000.h:/controller/data/actor.h:ro -v $(pwd)/build_firmware:/output arpllab/learning_to_fly_build_firmware
947
- ```
948
- This should build the firmware using the newly trained policy and output the binary to `build_firmware/cf2.bin`. After that we can use the `cfclient` package to flash the firmware (find the installation instructions [here](https://www.bitcraze.io/documentation/repository/crazyflie-clients-python/master/installation/install/))
949
- ```
950
- cfloader flash build_firmware/cf2.bin stm32-fw -w radio://0/80/2M
951
- ```",,,
952
- /LargeWorldModel/LWM,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/LargeWorldModel/LWM/main/README.md,,2098,top,,,13/02/2024,,"## Setup
953
  Install the requirements with:
954
  ```
955
  conda create -n lwm python=3.10
@@ -959,7 +912,7 @@ pip install -r requirements.txt
959
  or set up TPU VM with:
960
  ```
961
  sh tpu_requirements.sh
962
- ```","packagemanager, source","step1.install the requirements with:
963
  ```
964
  conda create -n lwm python=3.10
965
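  # -f points pip at Google's JAX wheel index for the CUDA 12 builds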
  pip install -U ""jax[cuda12_pip]==0.4.23"" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
@@ -968,8 +921,11 @@ pip install -r requirements.txt
968
  optional. set up TPU VM with:
969
  ```
970
  sh tpu_requirements.sh
971
- ```",,,
972
- ,,https://raw.githubusercontent.com/microsoft/UFO/main/README.md,,830,top,,,,,"### Step 1: Installation
 
 
 
973
  UFO requires **Python >= 3.10** running on **Windows OS >= 10**. It can be installed by running the following command:
974
  ```bash
975
  # [optional to create conda environment]
@@ -1035,71 +991,49 @@ You can find the screenshots taken and request & response logs in the following
1035
  ```
1036
  ./ufo/logs/<your_task_name>/
1037
  ```
1038
- You may use them to debug, replay, or analyze the agent output.",source,"step1: Installation
1039
- UFO requires **Python >= 3.10** running on **Windows OS >= 10**. It can be installed by running the following command:
1040
- ```bash
1041
- # [optional to create conda environment]
1042
- # conda create -n ufo python=3.10
1043
- # conda activate ufo
1044
- # clone the repository
1045
  git clone https://github.com/microsoft/UFO.git
1046
- cd UFO
1047
- # install the requirements
1048
- pip install -r requirements.txt
1049
  ```
1050
- Step 2: Configure the LLMs
1051
- Before running UFO, you need to provide your LLM configurations. Taking OpenAI as an example, you can configure `ufo/config/config.yaml` file as follows.
1052
- #### OpenAI
1053
  ```
1054
  API_TYPE: ""openai""
1055
  OPENAI_API_BASE: ""https://api.openai.com/v1/chat/completions"" # The base URL for the OpenAI API
1056
  OPENAI_API_KEY: ""YOUR_API_KEY"" # Set the value to the openai key for the llm model
1057
  OPENAI_API_MODEL: ""GPTV_MODEL_NAME"" # The only OpenAI model by now that accepts visual input
1058
  ```
1059
-
1060
- #### Azure OpenAI (AOAI)
1061
  ```
1062
  API_TYPE: ""aoai""
1063
  OPENAI_API_BASE: ""YOUR_ENDPOINT"" # The AOAI API address. Format: https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/completions?api-version={api-version}
1064
  OPENAI_API_KEY: ""YOUR_API_KEY"" # Set the value to the openai key for the llm model
1065
  OPENAI_API_MODEL: ""GPTV_MODEL_NAME"" # The only OpenAI model by now that accepts visual input
1066
  ```
1067
-
1068
-
1069
- ### Step 3: Start UFO
1070
-
1071
- #### You can execute the following on your Windows command line (CLI):
1072
-
1073
- ```bash
1074
  # assume you are in the cloned UFO folder
1075
  python -m ufo --task <your_task_name>
1076
  ```
1077
-
1078
- This will start the UFO process and you can interact with it through the command line interface.
1079
- If everything goes well, you will see the following message:
1080
-
1081
  ```bash
1082
  Welcome to use UFO, A UI-focused Agent for Windows OS Interaction.
1083
- (UFO ASCII-art logo)
1088
- Please enter your request to be completed:
1089
- ```
1090
- #### Reminder: ####
1091
  - Before UFO executes your request, please make sure the targeted applications are active on the system.
1092
  - The GPT-V accepts screenshots of your desktop and application GUI as input. Please ensure that no sensitive or confidential information is visible or captured during the execution process. For further information, refer to [DISCLAIMER.md](./DISCLAIMER.md).
1093
-
1094
-
1095
  ### Step 4: Execution Logs
1096
-
1097
  You can find the screenshots taken and request & response logs in the following folder:
1098
  ```
1099
  ./ufo/logs/<your_task_name>/
1100
  ```
1101
- You may use them to debug, replay, or analyze the agent output.",,,
1102
- ,,https://raw.githubusercontent.com/catid/dora/main/README.md,,135,top,,,,,"## Demo
1103
 
1104
  Install conda: https://docs.conda.io/projects/miniconda/en/latest/index.html
1105
 
@@ -1112,8 +1046,16 @@ conda create -n dora python=3.10 -y && conda activate dora
1112
  pip install -U -r requirements.txt
1113
 
1114
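  # run the DoRA demo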
  python dora.py
1115
- ```",source,,,,
1116
- ,,https://raw.githubusercontent.com/AILab-CVC/YOLO-World/master/README.md,,,,,,,,"### 1. Installation
 
 
 
 
 
 
 
 
1117
 
1118
  YOLO-World is developed based on `torch==1.11.0` `mmyolo==0.6.0` and `mmdetection==3.0.0`.
1119
 
@@ -1127,8 +1069,15 @@ git clone --recursive https://github.com/AILab-CVC/YOLO-World.git
1127
  ```bash
1128
  pip install torch wheel -q
1129
  pip install -e .
1130
- ```",source,,,,
1131
- ,,https://raw.githubusercontent.com/FasterDecoding/BitDelta/main/README.md,,63,top,,,,,"## Install
 
 
 
 
 
 
 
1132
 
1133
  1. Clone the repo and navigate to BitDelta:
1134
 
@@ -1144,7 +1093,7 @@ conda create -yn bitdelta python=3.9
1144
  conda activate bitdelta
1145
 
1146
  pip install -e .
1147
- ```",source,"step1.clone the repo and navigate to BitDelta:
1148
  ```
1149
  git clone https://github.com/FasterDecoding/BitDelta
1150
  cd BitDelta
@@ -1154,8 +1103,8 @@ step2.set up environment:
1154
  conda create -yn bitdelta python=3.9
1155
  conda activate bitdelta
1156
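  # install BitDelta in editable mode from the cloned repo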
  pip install -e .
1157
- ```",,,
1158
- ,,https://raw.githubusercontent.com/tensorflow/tensorflow/master/README.md,,180724,greatest,,,,,"## Install
1159
 
1160
  See the [TensorFlow install guide](https://www.tensorflow.org/install) for the
1161
  [pip package](https://www.tensorflow.org/install/pip), to
@@ -1185,9 +1134,7 @@ commands.
1185
 
1186
  *Nightly binaries are available for testing using the
1187
  [tf-nightly](https://pypi.python.org/pypi/tf-nightly) and
1188
- [tf-nightly-cpu](https://pypi.python.org/pypi/tf-nightly-cpu) packages on PyPi.*",packagemanager,"step1. To install the current release, which includes support for
1189
- [CUDA-enabled GPU cards](https://www.tensorflow.org/install/gpu) *(Ubuntu and
1190
- Windows)*:
1191
  ```
1192
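  # current stable release, with GPU support on Ubuntu and Windows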
  $ pip install tensorflow
1193
  ```
@@ -1198,11 +1145,10 @@ $ pip install tensorflow-cpu
1198
  step3. optional.
1199
  To update TensorFlow to the latest version, add `--upgrade` flag to the above
1200
  commands.
1201
-
1202
- *Nightly binaries are available for testing using the
1203
  [tf-nightly](https://pypi.python.org/pypi/tf-nightly) and
1204
- [tf-nightly-cpu](https://pypi.python.org/pypi/tf-nightly-cpu) packages on PyPi.*",,,
1205
- ,,https://raw.githubusercontent.com/huggingface/transformers/main/README.md,,120272,greatest,,,,,"## Installation
1206
 
1207
  ### With pip
1208
 
@@ -1235,8 +1181,7 @@ conda install conda-forge::transformers
1235
 
1236
  Follow the installation pages of Flax, PyTorch or TensorFlow to see how to install them with conda.
1237
 
1238
- > **_NOTE:_** On Windows, you may be prompted to activate Developer Mode in order to benefit from caching. If this is not an option for you, please let us know in [this issue](https://github.com/huggingface/huggingface_hub/issues/1062).",packagemanager,"Plan1. With pip
1239
- requirements >> This repository is tested on Python 3.8+, Flax 0.4.1+, PyTorch 1.11+, and TensorFlow 2.6+.
1240
  step1. install Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). (extra information) If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
1241
  step2. create a virtual environment with the version of Python you're going to use and activate it.
1242
  step3. install at least one of Flax, PyTorch, or TensorFlow.
@@ -1246,15 +1191,15 @@ step4. When one of those backends has been installed, __ Transformers can be ins
1246
  pip install transformers
1247
  ```
1248
  extrainformation. If you'd like to play with the examples or need the bleeding edge of the code and can't wait for a new release, you must [install the library from source](https://huggingface.co/docs/transformers/installation#installing-from-source).
1249
- plan1.2. With conda
1250
  step1.
1251
  ```shell script
1252
  conda install conda-forge::transformers
1253
  ```
1254
  > **_NOTE:_** Installing `transformers` from the `huggingface` channel is deprecated.
1255
  Follow the installation pages of Flax, PyTorch or TensorFlow to see how to install them with conda.
1256
- > **_NOTE:_** On Windows, you may be prompted to activate Developer Mode in order to benefit from caching. If this is not an option for you, please let us know in [this issue](https://github.com/huggingface/huggingface_hub/issues/1062).",,,
1257
- ,,https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md,,120270,greatest,,,,,"## Quick Install
1258
 
1259
  With pip:
1260
  ```bash
@@ -1264,15 +1209,15 @@ pip install langchain
1264
  With conda:
1265
  ```bash
1266
  conda install langchain -c conda-forge
1267
- ```",packagemanager,"Plan1. With pip. Step1:
1268
  ```bash
1269
  pip install langchain
1270
  ```
1271
- Plan2. Step1: With conda:
1272
  ```bash
1273
  conda install langchain -c conda-forge
1274
- ```",,,
1275
- ,,https://raw.githubusercontent.com/divelab/DIG/dig-stable/README.md,,1912,topic,drug discovery,task libraries,,,"## Installation
1276
 
1277
  ### Install from pip
1278
  The key dependencies of DIG: Dive into Graphs are PyTorch (>=1.10.0), PyTorch Geometric (>=2.0.0), and RDKit.
@@ -1316,46 +1261,25 @@ If you want to try the latest features that have not been released yet, you can
1316
  git clone https://github.com/divelab/DIG.git
1317
  cd DIG
1318
  pip install .
1319
- ```",packagemanager,"step 1. Install from pip
1320
- The key dependencies of DIG: Dive into Graphs are PyTorch (>=1.10.0), PyTorch Geometric (>=2.0.0), and RDKit.
1321
-
1322
- 1. Install [PyTorch](https://pytorch.org/get-started/locally/) (>=1.10.0)
1323
-
1324
- ```shell script
1325
- $ python -c ""import torch; print(torch.__version__)""
1326
- >>> 1.10.0
1327
  ```
1328
-
1329
-
1330
-
1331
-
1332
- 2. Install [PyG](https://pytorch-geometric.readthedocs.io/en/latest/notes/installation.html#) (>=2.0.0)
1333
-
1334
- ```shell script
1335
  $ python -c ""import torch_geometric; print(torch_geometric.__version__)""
1336
- >>> 2.0.0
 
1337
  ```
1338
-
1339
- 3. Install DIG: Dive into Graphs.
1340
-
1341
- ```shell script
1342
  pip install dive-into-graphs
1343
  ```
1344
-
1345
-
1346
- After installation, you can check the version. You have successfully installed DIG: Dive into Graphs if no error occurs.
1347
-
1348
- ``` shell script
1349
- $ python
1350
  >>> from dig.version import __version__
1351
  >>> print(__version__)
1352
  ```
1353
-
1354
- ### Install from source
1355
- If you want to try the latest features that have not been released yet, you can install dig from source.
1356
-
1357
- ```shell script
1358
  git clone https://github.com/divelab/DIG.git
1359
- cd DIG
1360
- pip install .
1361
- ```",,,
 
1
+ software,repo_name,readme_url,content,plan,steps,optional_steps,extra_info_optional
2
+ vcr-video-representation-for-contextual,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/oronnir/VCR/main/README.md,,,,,
3
+ ensuring-trustworthy-and-ethical-behaviour-in,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/AAAI-DISIM-UnivAQ/DALI/master/README.md,"## Installation
4
 
5
  **OS X & Linux:**
6
  1. To download and install SICStus Prolog (it is needed), follow the instructions at https://sicstus.sics.se/download4.html.
 
23
  &nbsp;&nbsp;&nbsp;&nbsp; You will see different windows opening:
24
  * &nbsp;&nbsp;&nbsp;&nbsp; Prolog LINDA server (active_server_wi.pl)
25
  * &nbsp;&nbsp;&nbsp;&nbsp; Prolog FIPA client (active_user_wi.pl)
26
+ * &nbsp;&nbsp;&nbsp;&nbsp; 1 instance of DALI metainterpreter for each agent (active_dali_wi.pl)","binary, source","[plan binary]>>step1. follow the instructions at https://sicstus.sics.se/download4.html.
27
+ [plan source]>>step2. download DALI. step3. test it by running an example DALI MAS:
28
  ```sh
29
  git clone https://github.com/AAAI-DISIM-UnivAQ/DALI.git
30
  cd DALI/Examples/advanced
31
  bash startmas.sh
32
+ ```","**Windows:**
33
  1. To download and install SICStus Prolog (it is needed), follow the instructions at https://sicstus.sics.se/download4.html.
34
  2. Then, you can download DALI from https://github.com/AAAI-DISIM-UnivAQ/DALI.git.
35
  3. Unzip the repository, go to the folder ""DALI/Examples/basic"", and test if DALI works by duble clicking ""startmas.bat"" file (this will launch an example DALI MAS). \","You will see different windows opening:
36
  Prolog LINDA server (active_server_wi.pl)
37
  Prolog FIPA client (active_user_wi.pl)
38
  1 instance of DALI metainterpreter for each agent (active_dali_wi.pl)"
39
+ synthesizing-sentiment-controlled-feedback,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/MIntelligence-Group/CMFeed/main/README.md,,,,,
40
+ only-the-curve-shape-matters-training,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/cfeng783/GTT/main/README.md,"## Getting Started
41
 
42
  #### Install dependencies (with python 3.10)
43
 
44
  ```shell
45
  pip install -r requirements.txt
46
+ ```",source,[plan source]>> [INCOMPLETE] step1. Install dependencies with ```pip install -r requirements.txt```,,
47
+ from-uncertainty-to-precision-enhancing,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/fer-agathe/calibration_binary_classifier/main/README.md,,,,,
48
+ stochastic-gradient-flow-dynamics-of-test,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/rodsveiga/sgf_dyn/main/README.md,,,,,
49
+ accuracy-of-textfooler-black-box-adversarial,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/zero-one-loss/wordcnn01/main/LICENSE,,,,,
50
+ differentially-private-decentralized-learning-1,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/totilas/DPrandomwalk/main/README.md,,,,,
51
+ aydiv-adaptable-yielding-3d-object-detection,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/sanjay-810/AYDIV2/main/README.md,"### **Installation**
52
  1. Prepare for the running environment.
53
 
54
  You can use the docker image provided by [`OpenPCDet`](https://github.com/open-mmlab/OpenPCDet). Our experiments are based on the
 
91
  cd pcdet/ops/iou3d/cuda_op
92
  python setup.py develop
93
  cd ../../../..
94
+ ```","source,docker","[plan source]>> step1. Prepare for the running environment. step2. prepare for the data:```cd Aydiv python depth_to_lidar.py ```
95
+ [plan docker]>> step1. You can use the docker image provided by [`OpenPCDet`](https://github.com/open-mmlab/OpenPCDet)",,
96
+ cartesian-atomic-cluster-expansion-for,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/BingqingCheng/cace/main/README.md,"## Installation
 
 
 
 
97
 
98
+ Please refer to the `setup.py` file for installation instructions.",source,[plan source]>>[INCOMPLETE] step1. please refer to the `setup.py` file for installation instructions.,,
99
+ teller-a-trustworthy-framework-for,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/less-and-less-bugs/Trust_TELLER/main/README.md,"## Getting Started
100
 
101
  Step 1: Download the dataset folder from onedrive by [data.zip](https://portland-my.sharepoint.com/:u:/g/personal/liuhui3-c_my_cityu_edu_hk/EfApQlFP3PhFjUW4527STo0BALMdP16zs-HPMNgwQVFWsA?e=zoHlW2). Unzip this folder into the project directory. You can find four original datasets, pre-processed datasets (i.e., val.jsonl, test.jsonl, train.jsonl in each dataset folder) and the files including questions and answers
102
 
 
104
 
105
  ```
106
  openai.api_key = """"
107
+ ```",binary,"[plan binary]>> step1: Download the dataset folder from onedrive by https://portland-my.sharepoint.com/:u:/g/personal/liuhui3-c_my_cityu_edu_hk/EfApQlFP3PhFjUW4527STo0BALMdP16zs-HPMNgwQVFWsA?e=zoHlW2.
108
  step2. Unzip this folder into the project directory.
109
  step3. Place your OpenAI key into the file named api_key.txt.
110
  ```
111
  openai.api_key = """"
112
+ ```",,
113
+ continuous-time-radar-inertial-and-lidar,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/utiasASRL/steam_icp/master/README.md,"## Installation
114
 
115
  Clone this repository and its submodules.
116
 
 
141
 
142
  ```bash
143
  bash build.sh
144
+ ```",source,"[plan source]>> step1. clone this repository and its submodules. step2. use docker to install dependencies: ```docker build -t steam_icp \
 
 
 
145
  --build-arg USERID=$(id -u) \
146
  --build-arg GROUPID=$(id -g) \
147
  --build-arg USERNAME=$(whoami) \
148
  --build-arg HOMEDIR=${HOME} .
149
+ ```
150
+ step3. mount the code, dataset, and output directories to proper locations in the container.
151
  ```
 
 
 
 
152
  docker run -it --name steam_icp \
153
  --privileged \
154
  --network=host \
 
156
  -v /tmp/.X11-unix:/tmp/.X11-unix \
157
  -v ${HOME}:${HOME}:rw \
158
  steam_icp
 
159
  step4.(Inside Container) Go to the root directory of this repository and build STEAM-ICP
 
160
  ```bash
161
  bash build.sh
162
+ ```",,
163
+ towards-a-thermodynamical-deep-learning,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/fedezocco/ThermoVisMedRob/main/README.md,,,,,
164
+ robust-parameter-fitting-to-realistic-network,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/PFischbeck/parameter-fitting-experiments/main/Readme.md,"# Installation
165
 
166
  - Make sure you have Python, Pip and R installed.
167
  - Checkout this repository
 
180
  ```
181
 
182
  - Download the file `konect-data.zip` from [Zenodo](https://doi.org/10.5281/zenodo.10629451) and extract its contents into the folder `input_data/konect`
183
+ - Optional: Download the file `output-data.zip` from [Zenodo](https://doi.org/10.5281/zenodo.10629451) and extract its contents into the folder `output_data`. This way, you can access all experiment results without running them yourself.",source,"[plan source]>> step1. Make sure you have Python, Pip and R installed.
184
  step2. Checkout this repository
185
  step3. Install the python dependencies with
186
  ```
 
192
  R -e 'install.packages(c(""ggplot2"", ""reshape2"", ""plyr"", ""dplyr"", ""scales""), repos=""https://cloud.r-project.org/"")'
193
  ```
194
  step6. Download the file `konect-data.zip` from [Zenodo](https://doi.org/10.5281/zenodo.10629451) and extract its contents into the folder `input_data/konect`
195
+ step7. Optional: Download the file `output-data.zip` from [Zenodo](https://doi.org/10.5281/zenodo.10629451) and extract its contents into the folder `output_data`. This way, you can access all experiment results without running them yourself.","step7. Optional: Download the file `output-data.zip` from [Zenodo](https://doi.org/10.5281/zenodo.10629451) and extract its contents into the folder `output_data`. This way, you can access all experiment results without running them yourself.",
196
+ get-tok-a-genai-enriched-multimodal-tiktok,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/gabbypinto/GET-Tok-Peru/main/README.md,"## Installation
197
  pip install -r requirements.txt
198
 
199
+ *Note: I did not use a virtual environment so the packages in the requirements.txt file are probably not reflective of all the packages used in this project. If some issues pop up please don't hesitate to email me at: gpinto@usc.edu*",packagemanager,[plan packagemanager]>>step1. pip install -r requirements.txt ,,*Note: I did not use a virtual environment so the packages in the requirements.txt file are probably not reflective of all the packages used in this project. If some issues pop up please don't hesitate to email me at: gpinto@usc.edu*
200
+ a-longitudinal-study-of-italian-and-french,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/orsoFra/LS_FRIT_UKR/main/README.md,,,,,
201
+ geometric-slosh-free-tracking-for-robotic,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/jonarriza96/gsft/main/README.md,"## Installation
 
202
 
203
  ### Dependencies
204
 
 
224
  ```
225
  export GSFT_PATH=/path_to_gsfc
226
  export PYTHONPATH=$PYTHONPATH:/$GSFT_PATH
227
+ ```",source,"[plan source]>> step1. Check dependencies. step2. Initialize git submodules with
 
228
  ```
229
  git submodule init
230
  git submodule update
231
  ```
232
+ step3. Create conda environment and install requirements:
233
  ```
234
  conda create --name gsft python=3.9
235
  conda activate gsft
 
239
  ```
240
  export GSFT_PATH=/path_to_gsfc
241
  export PYTHONPATH=$PYTHONPATH:/$GSFT_PATH
242
+ ```",,
243
+ real-time-line-based-room-segmentation-and,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/EricssonResearch/Line-Based-Room-Segmentation-and-EDF/release/README.md,"## Installation
244
  The project can be installed by running the following command in your terminal:
245
  ```bash
246
  pip install -r requirements.txt
247
+ ```",source,"[plan source]>>[INCOMPLETE]step1. Run the command in your terminal:
248
  ```
249
  pip install -r requirements.txt
250
+ ```",,
251
+ viga,https://bio.tools/,https://raw.githubusercontent.com/viralInformatics/VIGA/master/README.md,"## Installation
252
 
253
  ### Step1: Download VIGA
254
 
 
304
  ```
305
  pip install pandas=1.1.5 numpy=1.19.5 matplotlib=3.3.4 biopython=1.79
306
  ```
307
+ ",source,"[plan source]>> step1. Download VIGA with Git from GitHub:
308
  ```
309
  git clone https://github.com/viralInformatics/VIGA.git
310
+ (stepOptional). or download the ZIP locally
 
311
  step2.download Database:
312
  step2.1.download taxdmp.zip: https://ftp.ncbi.nlm.nih.gov/pub/taxonomy/ and unzip taxdmp.zip and put it in ./db/
313
  step2.2.download ""prot.accession2taxid"" file from https://ftp.ncbi.nlm.nih.gov/pub/taxonomy/accession2taxid/
314
  step2.3.download ""RefSeqVirusProtein"" file from
315
+ ```wget -c ftp.ncbi.nlm.nih.gov/refseq/release/viral/viral.1.protein.faa.gz
316
  gzip -d viral.1.protein.faa.gz
317
+ mv viral.1.protein.faa RefSeqVirusProtein```
318
+ step2.4. download ""nr"" file from```
319
  wget -c ftp://ftp.ncbi.nlm.nih.gov/blast/db/FASTA/nr.gz
320
  or ascp -T -i asperaweb_id_dsa.openssh --host=ftp.ncbi.nih.gov --user=anonftp --mode=recv /blast/db/FASTA/nr.gz ./
321
+ gzip -d nr.gz```
322
+ step2.5. use Diamond v2.0.11.149 to create two separate databases.
323
  step2.6. In order to set up a reference database for DIAMOND, the makedb command needs to be executed with the following command line:
324
  ```
  diamond makedb --in YourPath/RefSeqVirusProtein -d Diamond_RefSeqVirusProtein --taxonmap YourPath/prot.accession2taxid --taxonnodes YourPath/nodes.dmp
325
  diamond makedb --in nr -d Dimond_nr --taxonmap YourPath/prot.accession2taxid --taxonnodes YourPath/nodes.dmp
326
  ```
327
+ step3. install requirements via conda:
 
328
  ```
329
  conda install fastp=0.12.4 trinity=2.8.5 diamond=2.0.11.149 ragtag=2.1.0 quast=5.0.2
330
  ```
331
+ step4. install Python dependencies based on Python 3.6.8:
332
  ```
333
  pip install pandas==1.1.5 numpy==1.19.5 matplotlib==3.3.4 biopython==1.79
334
+ ```",,"manual Installation of MetaCompass
335
+ https://github.com/marbl/MetaCompass"
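A quick sanity check (assuming the installed DIAMOND provides the `dbinfo` subcommand; this step is not in the README) that both reference databases were built:
```bash
# print metadata for each database produced by `diamond makedb`
diamond dbinfo -d Diamond_RefSeqVirusProtein.dmnd
diamond dbinfo -d Dimond_nr.dmnd
```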
336
+ lncrtpred,https://bio.tools/,https://raw.githubusercontent.com/zglabDIB/LncRTPred/main/README.md,,,,,
337
+ nrn-ez,https://bio.tools/,https://raw.githubusercontent.com/scimemia/NRN-EZ/master/README.md,"**INSTALLATION FOR VERSION 1.1.6**
338
 
339
  NRN-EZ was built with PyInstaller 3.6, and requires the following languages and libraries:
340
 
341
+ - Python 3.6.9 and higher (currently up to 3.10)
342
 
343
+ - PyQt 5.10.1
344
 
345
+ - PyQtGraph 0.11.0
346
 
347
  Installation instructions for Linux (Ubuntu and Pop!_OS): download the Linux zip file and, from the command window, run a bash command for the install.sh file, in the corresponding installation folder.
348
 
349
  Installation instructions for Mac OS: download the Mac zip file and copy the NRN-EZ app to the Applications folder.
350
 
351
+ Installation instructions for Windows: download the Win zip file and run the installation wizard.",binary,"[plan binary]>> step1. install requirements:
352
  Python 3.6.9 and higher (currently up to 3.10)
353
  PyQt 5.10.1
354
  PyQtGraph 0.11.0
355
+ step2. for Linux: download the Linux zip file. step3. from the command window, run a bash command for the install.sh file in the corresponding installation folder. ",,"2. for Linux: download the Linux zip file and, from the command window, run a bash command for the install.sh file, in the corresponding installation folder.
356
  2. for Mac OS: download the Mac zip file and copy the NRN-EZ app to the Applications folder.
357
+ 2. for Windows: download the Win zip file and run the installation wizard."
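A minimal sketch of the Linux flow described above (the archive name `NRN-EZ-Linux.zip` is a placeholder; use the actual file offered for download):
```bash
# unpack the downloaded archive and run the bundled installer
unzip NRN-EZ-Linux.zip -d nrn-ez
cd nrn-ez
bash install.sh
```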
358
+ causnet,https://bio.tools/,https://raw.githubusercontent.com/nand1155/CausNet/main/README.md,"## Installation
359
 
360
  You can install the development version from GitHub with:
361
 
362
  ``` r
363
  require(""devtools"")
364
  install_github(""https://github.com/nand1155/CausNet"")
365
+ ```",source,"[source]>>step1.install the development version from GitHub with:
366
  ``` r
367
  require(""devtools"")
368
  install_github(""https://github.com/nand1155/CausNet"")
369
+ ```",,
370
+ viralcc,https://bio.tools/,https://raw.githubusercontent.com/dyxstat/Reproduce_ViralCC/main/README.md,"""# Instruction of reproducing results in ViralCC paper
371
  We take the cow fecal datasets for example. The other two datasets were processed following the same procedure.
372
 
373
  Scripts to process the intermediate data and plot figures are available in the folder [Scripts](https://github.com/dyxstat/Reproduce_ViralCC/tree/main/Scripts).
 
446
  ```
447
  python concatenation.py -p out_cow/VIRAL_BIN -o viralCC_cow_bins.fa
448
  checkv end_to_end viralCC_cow_bins.fa output_checkv_viralcc_cow -t 16 -d /panfs/qcb-panasas/yuxuandu/checkv-db-v1.0
449
+ ```""",source,"[plan source]>>step1.download and preprocess the raw data.
 
450
  ```
451
  wget https://sra-downloadb.be-md.ncbi.nlm.nih.gov/sos2/sra-pub-run-13/ERR2282092/ERR2282092.1
452
  wget https://sra-downloadb.be-md.ncbi.nlm.nih.gov/sos2/sra-pub-run-13/ERR2530126/ERR2530126.1
453
  wget https://sra-downloadb.be-md.ncbi.nlm.nih.gov/sos2/sra-pub-run-13/ERR2530127/ERR2530127.1
 
454
  fastq-dump --split-files --gzip ERR2282092.1
455
  fastq-dump --split-files --gzip ERR2530126.1
456
  fastq-dump --split-files --gzip ERR2530127.1
 
457
  bbduk.sh in1=ERR2282092.1_1.fastq.gz in2=ERR2282092.1_2.fastq.gz out1=COWSG1_AQ.fastq.gz out2=COWSG2_AQ.fastq.gz ref=/home1/yuxuandu/cmb/SOFTWARE/bbmap/resources/adapters.fa ktrim=r k=23 mink=11 hdist=1 minlen=50 tpe tbo
458
  bbduk.sh in1=ERR2530126.1_1.fastq.gz in2=ERR2530126.1_2.fastq.gz out1=S3HIC1_AQ.fastq.gz out2=S3HIC2_AQ.fastq.gz ref=/home1/yuxuandu/cmb/SOFTWARE/bbmap/resources/adapters.fa ktrim=r k=23 mink=11 hdist=1 minlen=50 tpe tbo
459
  bbduk.sh in1=ERR2530127.1_1.fastq.gz in2=ERR2530127.1_2.fastq.gz out1=M1HIC1_AQ.fastq.gz out2=M1HIC2_AQ.fastq.gz ref=/home1/yuxuandu/cmb/SOFTWARE/bbmap/resources/adapters.fa ktrim=r k=23 mink=11 hdist=1 minlen=50 tpe tbo
 
460
  bbduk.sh in1=S3HIC1_AQ.fastq.gz in2=S3HIC2_AQ.fastq.gz out1=S3HIC1_CL.fastq.gz out2=S3HIC2_CL.fastq.gz trimq=10 qtrim=r ftm=5 minlen=50
461
  bbduk.sh in1=M1HIC1_AQ.fastq.gz in2=M1HIC2_AQ.fastq.gz out1=M1HIC1_CL.fastq.gz out2=M1HIC2_CL.fastq.gz trimq=10 qtrim=r ftm=5 minlen=50
462
  bbduk.sh in1=COWSG1_AQ.fastq.gz in2=COWSG2_AQ.fastq.gz out1=COWSG1_CL.fastq.gz out2=COWSG2_CL.fastq.gz trimq=10 qtrim=r ftm=5 minlen=50
 
463
  bbduk.sh in1=S3HIC1_CL.fastq.gz in2=S3HIC2_CL.fastq.gz out1=S3HIC1_trim.fastq.gz out2=S3HIC2_trim.fastq.gz ftl=10
464
  bbduk.sh in1=M1HIC1_CL.fastq.gz in2=M1HIC2_CL.fastq.gz out1=M1HIC1_trim.fastq.gz out2=M1HIC2_trim.fastq.gz ftl=10
 
465
  clumpify.sh in1=S3HIC1_trim.fastq.gz in2=S3HIC2_trim.fastq.gz out1=S3HIC1_dedup.fastq.gz out2=S3HIC2_dedup.fastq.gz dedupe
466
  clumpify.sh in1=M1HIC1_trim.fastq.gz in2=M1HIC2_trim.fastq.gz out1=M1HIC1_dedup.fastq.gz out2=M1HIC2_dedup.fastq.gz dedupe
467
  cat S3HIC1_dedup.fastq.gz M1HIC1_dedup.fastq.gz > HIC1.fastq.gz
468
  cat S3HIC2_dedup.fastq.gz M1HIC2_dedup.fastq.gz > HIC2.fastq.gz
469
  ```
470
+ step2. assemble contigs and step3. align processed Hi-C reads to contigs:
471
  ```
472
  megahit -1 COWSG1_CL.fastq.gz -2 COWSG2_CL.fastq.gz -o COW_ASSEMBLY --min-contig-len 1000 --k-min 21 --k-max 141 --k-step 12 --merge-level 20,0.95
 
473
  bwa index final.contigs.fa
474
  bwa mem -5SP final.contigs.fa HIC1.fastq.gz HIC2.fastq.gz > COW_MAP.sam
475
  samtools view -F 0x904 -bS COW_MAP.sam > COW_MAP_UNSORTED.bam
476
  samtools sort -n COW_MAP_UNSORTED.bam -o COW_MAP_SORTED.bam
477
  ```
478
+ step4. identify viral contigs from assembled contigs:
479
  ```
480
  perl removesmalls.pl 3000 final.contigs.fa > cow_3000.fa
481
  wrapper_phage_contigs_sorter_iPlant.pl -f cow_3000.fa --db 1 --wdir output_directory --ncpu 16 --data-dir /panfs/qcb-panasas/yuxuandu/virsorter-data
482
  Rscript find_viral_contig.R
483
  ```
484
+ step5. run ViralCC:
485
  ```
486
  python ./viralcc.py pipeline -v final.contigs.fa COW_MAP_SORTED.bam viral.txt out_cow
487
  ```
488
+ step6. evaluate draft viral genomes using CheckV:
489
  ```
490
  python concatenation.py -p out_cow/VIRAL_BIN -o viralCC_cow_bins.fa
491
  checkv end_to_end viralCC_cow_bins.fa output_checkv_viralcc_cow -t 16 -d /panfs/qcb-panasas/yuxuandu/checkv-db-v1.0
492
+ ```",, (extra comment: NCBI may update its links for downloading the database. Please check the latest link at [NCBI](https://www.ncbi.nlm.nih.gov/) if you meet the download error)
493
+ DRaW,https://bio.tools/,https://raw.githubusercontent.com/BioinformaticsIASBS/DRaW/main/README.md,"# Running DRaW on COVID-19 datasets
494
+ DRaW has been applied on three COVID-19 datasets: DS1, DS2, and DS3. There are three subdirectories, ""DS1_repur"", ""DS2_repur"", and ""DS3_repur"", in the ""Drug-Repurposing"" directory. Each subdirectory has been assigned to one of the mentioned datasets. We put the DRaW implementation file for each dataset in each subdirectory separately, in order to keep the corresponding hyperparameters of each dataset.
495
  We use Adam as the optimizer with a learning rate equal to 0.001, beta1 = 0.9, beta2 = 0.999, and epsilon = 1e_7. The dropout rate is set to 0.5. The batch size is chosen by the number of samples per dataset. This hyperparameter for DS1 is equal to 8, and those for DS2 and DS3 are set to 32.
496
  To run the model, it is enough to execute the ""Drug-Repurposing.py"" script in the command line. After that, execute ""score.py"". The repurposed drugs will be stored in the ""meanScore.csv"" spreadsheet. It contains the average of each drug's ranking. The lower, the better. For example, to run DRaW on DS1:
497
  ```bash
 
501
  ```
502
  Same goes for other datasets. Just change the directory path.
503
  # Performance analysis
504
+ In order to analyze the performance, there is one extra directory in the root, ""Performance_analysis"". By running the following command, the model is trained on a given dataset and returns its performance metrics: AUC-ROC, AUPR, F1 score, etc.
505
+ The input parameter ""dataset_name"" is one of the following five dataset names. The first one is COVID-19 DS3 and the other four are golden benchmarks.
506
  'DS3','ic','nr','gpcr','e'
507
 
508
  ```bash
509
  cd Performance_analysis
510
  python main.py dataset_name
511
+ ```",source,"[plan source]>>step1.execute ""Drug-Repurposing.py"" script in the command line. step2. after that, execute ""score.py"":
512
  ```bash
513
  cd Drug-Repurposing\DS1_repur
514
  python Drug-Repurposing.py
515
  python score.py
516
+ ```",," The repurposed drugs will be stored in the ""meanScore.csv"" spreadsheet. It contains the average of ach drug ranking. The lower, the better. For example, to run the DRaW on DS1"
517
+ NRN-EZ,https://bio.tools/,https://raw.githubusercontent.com/scimemia/NRN-EZ/master/README.md,"**INSTALLATION FOR VERSION 1.1.6**
518
 
519
  NRN-EZ was built with PyInstaller 3.6, and requires the following languages and libraries:
520
 
521
+ - Python 3.6.9 and higher (currently up to 3.10)
522
 
523
+ - PyQt 5.10.1
524
 
525
+ - PyQtGraph 0.11.0
526
 
527
  Installation instructions for Linux (Ubuntu and Pop!_OS): download the Linux zip file and, from the command window, run a bash command for the install.sh file, in the corresponding installation folder.
528
 
529
  Installation instructions for Mac OS: download the Mac zip file and copy the NRN-EZ app to the Applications folder.
530
 
531
+ Installation instructions for Windows: download the Win zip file and run the installation wizard.",source,"[plan source]>>step1. install the requirements: Python 3.6.9 and higher (currently up to 3.10), PyQt 5.10.1, PyQtGraph 0.11.0
532
  step2. for Linux: download the Linux zip file and, from the command window, run a bash command for the install.sh file, in the corresponding installation folder.
533
  step2. for Mac OS: download the Mac zip file and copy the NRN-EZ app to the Applications folder.
534
+ step2. for Windows: download the Win zip file and run the installation wizard.","step2. for Linux: download the Linux zip file and, from the command window, run a bash command for the install.sh file, in the corresponding installation folder.
535
  step2. for Mac OS: download the Mac zip file and copy the NRN-EZ app to the Applications folder.
536
  step2. for Windows: download the Win zip file and run the installation wizard.",
537
+ guiding-instruction-based-image-editing-via,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/apple/ml-mgie/main/README.md,"## Requirements
538
  ```
539
  conda create -n mgie python=3.10 -y
540
  conda activate mgie
 
560
  cd ..
561
  cp mgie_llava.py LLaVA/llava/model/llava.py
562
  cp mgie_train.py LLaVA/llava/train/train.py
563
+ ```",source,"[plan source]>> step1. create conda environment ```
564
  conda create -n mgie python=3.10 -y
565
  conda activate mgie
566
  conda update -n base -c defaults conda setuptools -y
 
584
  cd ..
585
  cp mgie_llava.py LLaVA/llava/model/llava.py
586
  cp mgie_train.py LLaVA/llava/train/train.py
587
+ ```",,
588
+ self-play-fine-tuning-converts-weak-language,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/uclaml/SPIN/main/README.md,"## Setup
589
  The following steps provide the necessary setup to run our codes.
590
  1. Create a Python virtual environment with Conda:
591
  ```
 
604
  4. Login to your huggingface account for downloading models
605
  ```
606
  huggingface-cli login --token ""${your_access_token}""
607
+ ```",source,"[plan source]>>step1.create a Python virtual environment with Conda:
608
  ```
609
  conda create -n myenv python=3.10
610
  conda activate myenv
 
621
  step4.login to your huggingface account for downloading models
622
  ```
623
  huggingface-cli login --token ""${your_access_token}""
624
+ ```",,
625
+ genegpt-teaching-large-language-models-to-use,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/ncbi/GeneGPT/main/README.md,"# Requirements
626
 
627
  The code has been tested with Python 3.9.13. Please first install the required packages by:
628
  ```bash
 
647
  ```bash
648
  python main.py 001001
649
  ```
650
+ which will only use the Dm.1 and Dm.4 for in-context learning.",source,"[plan source]>>step1.install requirements:
 
651
  ```bash
652
  pip install -r requirements.txt
653
  ```
654
+ step2. set the OpenAI API key to run GeneGPT with Codex. Replace the placeholder with your key in `config.py`:
655
  ```bash
656
  $ cat config.py
657
  API_KEY = 'YOUR_OPENAI_API_KEY'
 
662
  python main.py 111111
663
  ```
664
  where `111111` denotes that all Documentations (Dc.1-2) and Demonstrations (Dm.1-4) are used.
665
+ step4. To run GeneGPT-slim, simply use:
666
  ```bash
667
  python main.py 001001
668
+ ```",,The code has been tested with Python 3.9.13
669
+ the-boundary-of-neural-network-trainability,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/Sohl-Dickstein/fractal/main/README.md,,,,,
670
+ learning-to-fly-in-seconds,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/arplaboratory/learning-to-fly/master/README.MD,"## Instructions to run the code
 
671
  ### Docker (isolated)
672
  We provide a pre-built Docker image with a simple web interface that can be executed using a single command (given that Docker is already installed on your machine):
673
  ```
 
803
  This should build the firmware using the newly trained policy and output the binary to `build_firmware/cf2.bin`. After that we can use the `cfclient` package to flash the firmware (find the installation instructions [here](https://www.bitcraze.io/documentation/repository/crazyflie-clients-python/master/installation/install/))
804
  ```
805
  cfloader flash build_firmware/cf2.bin stm32-fw -w radio://0/80/2M
806
+ ```","source,docker","[plan1>>Docker (isolated)]
807
  step1: Execute a single command (given that Docker is already installed on your machine):
808
  ```
809
  docker run -it --rm -p 8000:8000 arpllab/learning_to_fly
810
  ```
811
+ step2. the container is now running. step3. navigate to [https://0.0.0.0:8000](https://0.0.0.0:8000). step4. you should see something like the following (after starting the training):
812
  <div align=""center"">
813
  <img src=""https://github.com/arplaboratory/learning_to_fly_media/blob/master/simulator_screenshot.png"" />
814
  </div>
815
  Note that to make this Docker image compatible with a broad range of CPUs, some optimizations have been turned off. For full speed we recommend a [Native installation](#Native-installation).
816
+ [plan>>Docker installation (isolated)]
817
  step1. install Docker on your machine. step2. Then move to the original directory `learning_to_fly` and step3. build the Docker image:
818
  ```
819
  docker build -t arpllab/learning_to_fly .
 
834
  ```
835
  Navigate to [https://0.0.0.0:6006](https://0.0.0.0:6006) with your browser to investigate the Tensorboard logs.
836
 
837
+ [plan>>Native installation]
838
  step1. clone this repository:
839
  ```
840
  git clone https://github.com/arplaboratory/learning-to-fly learning_to_fly
 
858
  ```
859
  brew install hdf5 protobuf boost
860
  ```
861
+ step5. Configure the build of the code:
 
 
862
  ```
863
  cd ../../
864
  mkdir build
 
867
  - Ubuntu + OpenBLAS: `cmake .. -DCMAKE_BUILD_TYPE=Release -DRL_TOOLS_BACKEND_ENABLE_OPENBLAS:BOOL=ON`
868
  - Ubuntu + MKL: `cmake .. -DCMAKE_BUILD_TYPE=Release -DRL_TOOLS_BACKEND_ENABLE_MKL:BOOL=ON`
869
  - macOS (tested on Sonoma): `cmake .. -DCMAKE_BUILD_TYPE=Release`
870
+ step6. build the targets:
 
871
  ```
872
  cmake --build . -j8
873
  ```
874
+ step7. run the code (in the original directory `learning_to_fly`):
 
875
  ```
876
  cd ..
877
  ./build/src/training_headless
 
892
  cd src/ui
893
  ./get_dependencies.sh
894
  ```
895
+ step8.execute the UI binary from the root folder:
 
896
  ```
897
  cd ../../
898
  ./build/src/ui 0.0.0.0 8000
899
  ```
900
+ step9. navigate to [http://0.0.0.0:8000](http://0.0.0.0:8000) in your browser and start the training.
901
+ step10. run the benchmark (with UI, checkpointing and Tensorboard logging turned off):
 
902
  ```
903
  sudo nice -n -20 ./build/src/training_benchmark
904
+ ```",,
905
+ /LargeWorldModel/LWM,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/LargeWorldModel/LWM/main/README.md,"## Setup
906
  Install the requirements with:
907
  ```
908
  conda create -n lwm python=3.10
 
912
  or set up TPU VM with:
913
  ```
914
  sh tpu_requirements.sh
915
+ ```","packagemanager, source","[plan packagemanager]>>step1.install the requirements with:
916
  ```
917
  conda create -n lwm python=3.10
918
  pip install -U ""jax[cuda12_pip]==0.4.23"" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
 
921
  optional. set up TPU VM with:
922
  ```
923
  sh tpu_requirements.sh
924
+ ```",,"optional. set up TPU VM with:
925
+ ```
926
+ sh tpu_requirements.sh
927
+ ```"
928
+ microsoft/UFO,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/microsoft/UFO/main/README.md,"### Step 1: Installation
929
  UFO requires **Python >= 3.10** running on **Windows OS >= 10**. It can be installed by running the following command:
930
  ```bash
931
  # [optional to create conda environment]
 
991
  ```
992
  ./ufo/logs/<your_task_name>/
993
  ```
994
+ You may use them to debug, replay, or analyze the agent output.",source,"[plan source]>>step1: Run the following command:
995
+ ```
996
+ conda create -n ufo python=3.10
997
+ conda activate ufo
998
+ # clone the repository
 
 
999
  git clone https://github.com/microsoft/UFO.git
1000
+ cd UFO
+ ```
1001
+ step2. install the requirements:
1002
+ ```
+ pip install -r requirements.txt
1003
  ```
1004
+ step3. configure the LLMs in the `ufo/config/config.yaml` file as follows:
1005
+ for OpenAI:
 
1006
  ```
1007
  API_TYPE: ""openai""
1008
  OPENAI_API_BASE: ""https://api.openai.com/v1/chat/completions"" # The base URL for the OpenAI API
1009
  OPENAI_API_KEY: ""YOUR_API_KEY"" # Set the value to the openai key for the llm model
1010
  OPENAI_API_MODEL: ""GPTV_MODEL_NAME"" # The only OpenAI model by now that accepts visual input
1011
  ```
1012
+ for Azure OpenAI (AOAI):
 
1013
  ```
1014
  API_TYPE: ""aoai""
1015
  OPENAI_API_BASE: ""YOUR_ENDPOINT"" # The AOAI API address. Format: https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/completions?api-version={api-version}
1016
  OPENAI_API_KEY: ""YOUR_API_KEY"" # Set the value to the openai key for the llm model
1017
  OPENAI_API_MODEL: ""GPTV_MODEL_NAME"" # The only OpenAI model by now that accepts visual input
1018
  ```
1019
+ step4. start UFO from the command line:
1020
+ ```
1021
  # assume you are in the cloned UFO folder
1022
  python -m ufo --task <your_task_name>
1023
  ```
1024
+ step5. check installation. If everything goes well, you will see the following message in the console:
1025
  ```bash
1026
  Welcome to use UFO, A UI-focused Agent for Windows OS Interaction.
1027
+ ```",,"#### __Reminder: ####
1028
  - Before UFO executes your request, please make sure the targeted applications are active on the system.
1029
  - The GPT-V accepts screenshots of your desktop and application GUI as input. Please ensure that no sensitive or confidential information is visible or captured during the execution process. For further information, refer to [DISCLAIMER.md](./DISCLAIMER.md).
 
 
1030
  ### Step 4: Execution Logs
 
1031
  You can find the screenshots taken and request & response logs in the following folder:
1032
  ```
1033
  ./ufo/logs/<your_task_name>/
1034
  ```
1035
+ You may use them to debug, replay, or analyze the agent output."
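A usage sketch (the task name `notepad_demo` is a made-up placeholder; any label works and names the log folder):
```bash
# run from the cloned UFO folder; logs land in ./ufo/logs/notepad_demo/
python -m ufo --task notepad_demo
```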
1036
+ /catid/dora,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/catid/dora/main/README.md,"## Demo
1037
 
1038
  Install conda: https://docs.conda.io/projects/miniconda/en/latest/index.html
1039
 
 
1046
  pip install -U -r requirements.txt
1047
 
1048
  python dora.py
1049
+ ```",source,"[plan source]>>step1. install conda:https://docs.conda.io/projects/miniconda/en/latest/index.html. step2. clone the repository and move to the folder:
1050
+ ```bash
1051
+ git clone https://github.com/catid/dora.git
1052
+ cd dora
+ ```
1053
+ step3. create conda environment:
+ ```bash
1054
+ conda create -n dora python=3.10 -y && conda activate dora
+ ```
1055
+ step4. install requirements:
+ ```bash
1056
+ pip install -U -r requirements.txt
+ ```
1057
+ step5. execute the script: `python dora.py`",,
1058
+ YOLO-World,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/AILab-CVC/YOLO-World/master/README.md,"### 1. Installation
1059
 
1060
  YOLO-World is developed based on `torch==1.11.0` `mmyolo==0.6.0` and `mmdetection==3.0.0`.
1061
 
 
1069
  ```bash
1070
  pip install torch wheel -q
1071
  pip install -e .
1072
+ ```",source,"[plan source]>>step1. clone repository:
1073
+ ```
1074
+ git clone --recursive https://github.com/AILab-CVC/YOLO-World.git
1075
+ ```
1076
+ step2. install module:
+ ```
1077
+ pip install torch wheel -q
1078
+ pip install -e .
1079
+ ```",,
1080
+ FasterDecoding/BitDelta,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/FasterDecoding/BitDelta/main/README.md,"## Install
1081
 
1082
  1. Clone the repo and navigate to BitDelta:
1083
 
 
1093
  conda activate bitdelta
1094
 
1095
  pip install -e .
1096
+ ```",source,"[plan source]>>step1.clone the repo and navigate to BitDelta:
1097
  ```
1098
  git clone https://github.com/FasterDecoding/BitDelta
1099
  cd BitDelta
 
1103
  conda create -yn bitdelta python=3.9
1104
  conda activate bitdelta
1105
  pip install -e .
1106
+ ```",,
1107
+ tensorflow,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/tensorflow/tensorflow/master/README.md,"## Install
1108
 
1109
  See the [TensorFlow install guide](https://www.tensorflow.org/install) for the
1110
  [pip package](https://www.tensorflow.org/install/pip), to
 
1134
 
1135
  *Nightly binaries are available for testing using the
1136
  [tf-nightly](https://pypi.python.org/pypi/tf-nightly) and
1137
+ [tf-nightly-cpu](https://pypi.python.org/pypi/tf-nightly-cpu) packages on PyPi.*",packagemanager,"[plan packagemanager]>>via pip. step1:
 
 
1138
  ```
1139
  $ pip install tensorflow
1140
  ```
 
1145
  step3. optional.
1146
  To update TensorFlow to the latest version, add `--upgrade` flag to the above
1147
  commands.
1148
+ [plan binary]>> nightly binaries are available for testing using the
 
1149
  [tf-nightly](https://pypi.python.org/pypi/tf-nightly) and
1150
+ [tf-nightly-cpu](https://pypi.python.org/pypi/tf-nightly-cpu) packages on PyPi.",,
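Common follow-ups to the pip route (standard pip usage; the version check mirrors the install guide rather than this excerpt):
```bash
# upgrade to the latest stable release, or try a nightly build
pip install --upgrade tensorflow
pip install tf-nightly
# confirm the installed package imports cleanly
python -c "import tensorflow as tf; print(tf.__version__)"
```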
1151
+ transformers,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/huggingface/transformers/main/README.md,"## Installation
1152
 
1153
  ### With pip
1154
 
 
1181
 
1182
  Follow the installation pages of Flax, PyTorch or TensorFlow to see how to install them with conda.
1183
 
1184
+ > **_NOTE:_** On Windows, you may be prompted to activate Developer Mode in order to benefit from caching. If this is not an option for you, please let us know in [this issue](https://github.com/huggingface/huggingface_hub/issues/1062).",packagemanager,"[plan packagemanager]>>via pip:
 
1185
  step1. install Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html).(extra information) If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
1186
  step2. create a virtual environment with the version of Python you're going to use and activate it.
1187
  step3. install at least one of Flax, PyTorch, or TensorFlow.
 
1191
  pip install transformers
1192
  ```
1193
  extra information. If you'd like to play with the examples or need the bleeding edge of the code and can't wait for a new release, you must [install the library from source](https://huggingface.co/docs/transformers/installation#installing-from-source).
1194
+ [plan packagemanager]>>via conda:
1195
  step1.
1196
  ```shell script
1197
  conda install conda-forge::transformers
1198
  ```
1199
  > **_NOTE:_** Installing `transformers` from the `huggingface` channel is deprecated.
1200
  Follow the installation pages of Flax, PyTorch or TensorFlow to see how to install them with conda.
1201
+ > **_NOTE:_** On Windows, you may be prompted to activate Developer Mode in order to benefit from caching. If this is not an option for you, please let us know in [this issue](https://github.com/huggingface/huggingface_hub/issues/1062).",,"requirements >> This repository is tested on Python 3.8+, Flax 0.4.1+, PyTorch 1.11+, and TensorFlow 2.6+."
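A minimal end-to-end sketch of the pip route (the `pipeline` smoke test assumes at least one backend, e.g. PyTorch, is already installed as required above):
```bash
# create and activate a virtual environment, install, and smoke-test
python -m venv .env
source .env/bin/activate
pip install transformers
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))"
```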
1202
+ langchain,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md,"## Quick Install
1203
 
1204
  With pip:
1205
  ```bash
 
1209
  With conda:
1210
  ```bash
1211
  conda install langchain -c conda-forge
1212
+ ```",packagemanager,"[plan packagemanager]>>step1: via pip
1213
  ```bash
1214
  pip install langchain
1215
  ```
1216
+ [plan packagemanager]>>step1: via conda:
1217
  ```bash
1218
  conda install langchain -c conda-forge
1219
+ ```",,
1220
+ DIG/dig-stable,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/divelab/DIG/dig-stable/README.md,"## Installation
1221
 
1222
  ### Install from pip
1223
  The key dependencies of DIG: Dive into Graphs are PyTorch (>=1.10.0), PyTorch Geometric (>=2.0.0), and RDKit.
 
1261
  git clone https://github.com/divelab/DIG.git
1262
  cd DIG
1263
  pip install .
1264
+ ```",packagemanager,"[plan packagemanager]>>step 1. Install [PyTorch](https://pytorch.org/get-started/locally/) (>=1.10.0)
1265
+ ```python -c ""import torch; print(torch.__version__)""
1266
+ ```
1267
+ step2. Install [PyG](https://pytorch-geometric.readthedocs.io/en/latest/notes/installation.html#) (>=2.0.0)
1268
  ```
1269
  $ python -c ""import torch_geometric; print(torch_geometric.__version__)""
1270
+ ```
1271
+ step3. Install DIG: Dive into Graphs.
1272
  ```
1273
  pip install dive-into-graphs
1274
  ```
1275
+ step4. check the version installed.
1276
+ ```
1277
+ python
1278
  >>> from dig.version import __version__
1279
  >>> print(__version__)
1280
  ```
1281
+ [plan source] step1. Clone repository:
+ ```
1282
  git clone https://github.com/divelab/DIG.git
1283
+ ```
1284
+ step2. move to the folder `cd DIG`
1285
+ step3. install the module `pip install .`",,