software,repo_name,readme_url,content,plan,steps,optional_steps,extra_info_optional vcr-video-representation-for-contextual,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/oronnir/VCR/main/README.md,,,,, ensuring-trustworthy-and-ethical-behaviour-in,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/AAAI-DISIM-UnivAQ/DALI/master/README.md,"## Installation **OS X & Linux:** 1. To download and install SICStus Prolog (it is needed), follow the instructions at https://sicstus.sics.se/download4.html. 2. Then, you can download DALI and test it by running an example DALI MAS: ```sh git clone https://github.com/AAAI-DISIM-UnivAQ/DALI.git cd DALI/Examples/advanced bash startmas.sh ``` You will see different windows opening: * Prolog LINDA server (active_server_wi.pl) * Prolog FIPA client (active_user_wi.pl) * 1 instance of the DALI metainterpreter for each agent (active_dali_wi.pl) **Windows:** 1. To download and install SICStus Prolog (it is needed), follow the instructions at https://sicstus.sics.se/download4.html. 2. Then, you can download DALI from https://github.com/AAAI-DISIM-UnivAQ/DALI.git. 3. Unzip the repository, go to the folder ""DALI/Examples/basic"", and test if DALI works by double-clicking the ""startmas.bat"" file (this will launch an example DALI MAS). \ \ You will see different windows opening: * Prolog LINDA server (active_server_wi.pl) * Prolog FIPA client (active_user_wi.pl) * 1 instance of the DALI metainterpreter for each agent (active_dali_wi.pl)","binary, source","[plan binary]>>step1. follow the instructions at https://sicstus.sics.se/download4.html. [plan source]>>step2. download DALI. step3. test it by running an example DALI MAS: ```sh git clone https://github.com/AAAI-DISIM-UnivAQ/DALI.git cd DALI/Examples/advanced bash startmas.sh ```","**Windows:** 1. To download and install SICStus Prolog (it is needed), follow the instructions at https://sicstus.sics.se/download4.html. 2. Then, you can download DALI from https://github.com/AAAI-DISIM-UnivAQ/DALI.git. 3. Unzip the repository, go to the folder ""DALI/Examples/basic"", and test if DALI works by double-clicking the ""startmas.bat"" file (this will launch an example DALI MAS). \","You will see different windows opening: Prolog LINDA server (active_server_wi.pl) Prolog FIPA client (active_user_wi.pl) 1 instance of the DALI metainterpreter for each agent (active_dali_wi.pl)" synthesizing-sentiment-controlled-feedback,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/MIntelligence-Group/CMFeed/main/README.md,,,,, only-the-curve-shape-matters-training,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/cfeng783/GTT/main/README.md,"## Getting Started #### Install dependencies (with python 3.10) ```shell pip install -r requirements.txt ```",source,[plan source]>> [INCOMPLETE] step1.
Install dependencies with ```pip install -r requirements.txt```,, from-uncertainty-to-precision-enhancing,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/fer-agathe/calibration_binary_classifier/main/README.md,,,,, stochastic-gradient-flow-dynamics-of-test,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/rodsveiga/sgf_dyn/main/README.md,,,,, accuracy-of-textfooler-black-box-adversarial,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/zero-one-loss/wordcnn01/main/LICENSE,,,,, differentially-private-decentralized-learning-1,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/totilas/DPrandomwalk/main/README.md,,,,, aydiv-adaptable-yielding-3d-object-detection,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/sanjay-810/AYDIV2/main/README.md,"### **Installation** 1. Prepare for the running environment. You can use the docker image provided by [`OpenPCDet`](https://github.com/open-mmlab/OpenPCDet). Our experiments are based on the docker provided by Voxel-R-CNN and we use an NVIDIA Tesla V100 to train our Aydiv. 2. Prepare for the data. Convert Argoverse 2 or the Waymo Open Dataset into KITTI format with [`converter`](https://github.com/sanjay-810/AYDIV_ICRA/tree/main/data_converter/convert). Please prepare the dataset as in [`OpenPCDet`](https://github.com/open-mmlab/OpenPCDet). To generate depth_pseudo_rgbseguv_twise yourself from depth_dense_twise, run: ``` cd Aydiv python depth_to_lidar.py ``` If you want to generate dense depth maps by yourself, it is recommended to use [`TWISE`](https://github.com/imransai/TWISE). The dense depth maps we provide are generated by TWISE. In any case, you should have your dataset organized as follows: ```
Aydiv
├── data
│   ├── waymo_aydiv_seguv_twise
│   │   ├── ImageSets
│   │   ├── training
│   │   │   ├── calib & velodyne & label_2 & image_2 & (optional: planes) & depth_dense_twise & depth_pseudo_rgbseguv_twise
│   │   ├── testing
│   │   │   ├── calib & velodyne & image_2 & depth_dense_twise & depth_pseudo_rgbseguv_twise
├── pcdet
├── tools
``` Each pseudo point in depth_pseudo_rgbseguv_twise has 9 attributes (x, y, z, r, g, b, seg, u, v). It should be noted that we do not use the seg attribute, because the image segmentation results cannot bring improvement to Aydiv in our experiments. Argoverse 2 data should be in the same format. 3. Setup. ``` cd Aydiv python setup.py develop cd pcdet/ops/iou3d/cuda_op python setup.py develop cd ../../../.. ```","source,docker","[plan source]>> step1. Prepare for the running environment. step2. prepare for the data: ```cd Aydiv python depth_to_lidar.py ``` [plan docker]>> step1. You can use the docker image provided by [`OpenPCDet`](https://github.com/open-mmlab/OpenPCDet)",, cartesian-atomic-cluster-expansion-for,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/BingqingCheng/cace/main/README.md,"## Installation Please refer to the `setup.py` file for installation instructions.",source,[plan source]>>[INCOMPLETE] step1. please refer to the `setup.py` file for installation instructions.,, teller-a-trustworthy-framework-for,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/less-and-less-bugs/Trust_TELLER/main/README.md,"## Getting Started Step 1: Download the dataset folder from onedrive by [data.zip](https://portland-my.sharepoint.com/:u:/g/personal/liuhui3-c_my_cityu_edu_hk/EfApQlFP3PhFjUW4527STo0BALMdP16zs-HPMNgwQVFWsA?e=zoHlW2). Unzip this folder into the project directory.
You can find four original datasets, pre-processed datasets (i.e., val.jsonl, test.jsonl, train.jsonl in each dataset folder) and the files including questions and answers. Step 2: Place your OpenAI key into the file named api_key.txt. ``` openai.api_key = """" ```",binary,"[plan binary]>> step1: Download the dataset folder from onedrive by https://portland-my.sharepoint.com/:u:/g/personal/liuhui3-c_my_cityu_edu_hk/EfApQlFP3PhFjUW4527STo0BALMdP16zs-HPMNgwQVFWsA?e=zoHlW2. step2. Unzip this folder into the project directory. step3. Place your OpenAI key into the file named api_key.txt. ``` openai.api_key = """" ```",, continuous-time-radar-inertial-and-lidar,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/utiasASRL/steam_icp/master/README.md,"## Installation Clone this repository and its submodules. We use docker to install dependencies. The recommended way to build the docker image is ```bash docker build -t steam_icp \ --build-arg USERID=$(id -u) \ --build-arg GROUPID=$(id -g) \ --build-arg USERNAME=$(whoami) \ --build-arg HOMEDIR=${HOME} . ``` When starting a container, remember to mount the code, dataset, and output directories to proper locations in the container. An example command to start a docker container with the image is ```bash docker run -it --name steam_icp \ --privileged \ --network=host \ -e DISPLAY=$DISPLAY \ -v /tmp/.X11-unix:/tmp/.X11-unix \ -v ${HOME}:${HOME}:rw \ steam_icp ``` (Inside Container) Go to the root directory of this repository and build STEAM-ICP ```bash bash build.sh ```",source,"[plan source]>> step1. clone this repository and its submodules. step2. Use docker to install dependencies ```docker build -t steam_icp \ --build-arg USERID=$(id -u) \ --build-arg GROUPID=$(id -g) \ --build-arg USERNAME=$(whoami) \ --build-arg HOMEDIR=${HOME} . ``` step3. mount the code, dataset, and output directories to proper locations in the container. ``` docker run -it --name steam_icp \ --privileged \ --network=host \ -e DISPLAY=$DISPLAY \ -v /tmp/.X11-unix:/tmp/.X11-unix \ -v ${HOME}:${HOME}:rw \ steam_icp ``` step4. (Inside Container) Go to the root directory of this repository and build STEAM-ICP ```bash bash build.sh ```",, towards-a-thermodynamical-deep-learning,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/fedezocco/ThermoVisMedRob/main/README.md,,,,, robust-parameter-fitting-to-realistic-network,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/PFischbeck/parameter-fitting-experiments/main/Readme.md,"# Installation - Make sure you have Python, Pip and R installed. - Check out this repository - Install the Python dependencies with ``` pip3 install -r requirements.txt ``` - Install the `pygirgs` package at https://github.com/PFischbeck/pygirgs - Install the R dependencies (used for plots) with ``` R -e 'install.packages(c(""ggplot2"", ""reshape2"", ""plyr"", ""dplyr"", ""scales""), repos=""https://cloud.r-project.org/"")' ``` - Download the file `konect-data.zip` from [Zenodo](https://doi.org/10.5281/zenodo.10629451) and extract its contents into the folder `input_data/konect` - Optional: Download the file `output-data.zip` from [Zenodo](https://doi.org/10.5281/zenodo.10629451) and extract its contents into the folder `output_data`. This way, you can access all experiment results without running them yourself.",source,"[plan source]>> step1. Make sure you have Python, Pip and R installed. step2. Check out this repository. step3. Install the Python dependencies with ``` pip3 install -r requirements.txt ``` step4.
Install the `pygirgs` package at https://github.com/PFischbeck/pygirgs step5. Install the R dependencies (used for plots) with ``` R -e 'install.packages(c(""ggplot2"", ""reshape2"", ""plyr"", ""dplyr"", ""scales""), repos=""https://cloud.r-project.org/"")' ``` step6. Download the file `konect-data.zip` from [Zenodo](https://doi.org/10.5281/zenodo.10629451) and extract its contents into the folder `input_data/konect` step7. Optional: Download the file `output-data.zip` from [Zenodo](https://doi.org/10.5281/zenodo.10629451) and extract its contents into the folder `output_data`. This way, you can access all experiment results without running them yourself.","step7. Optional: Download the file `output-data.zip` from [Zenodo](https://doi.org/10.5281/zenodo.10629451) and extract its contents into the folder `output_data`. This way, you can access all experiment results without running them yourself.", get-tok-a-genai-enriched-multimodal-tiktok,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/gabbypinto/GET-Tok-Peru/main/README.md,"## Installation pip install -r requirements.txt *Note: I did not use a virtual environment, so the packages in the requirements.txt file are probably not reflective of all the packages used in this project. If some issues pop up, please don't hesitate to email me at: gpinto@usc.edu*",packagemanager,[plan packagemanager]>>step1. pip install -r requirements.txt ,,*Note: I did not use a virtual environment, so the packages in the requirements.txt file are probably not reflective of all the packages used in this project. If some issues pop up, please don't hesitate to email me at: gpinto@usc.edu* a-longitudinal-study-of-italian-and-french,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/orsoFra/LS_FRIT_UKR/main/README.md,,,,, geometric-slosh-free-tracking-for-robotic,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/jonarriza96/gsft/main/README.md,"## Installation ### Dependencies Initialize git submodules with ``` git submodule init git submodule update ``` ### Python environment Install the specific versions of every package from `requirements.txt` in a new conda environment: ``` conda create --name gsft python=3.9 conda activate gsft pip install -r requirements.txt ``` To ensure that Python paths are properly defined, update the `~/.bashrc` by adding the following lines ``` export GSFT_PATH=/path_to_gsfc export PYTHONPATH=$PYTHONPATH:/$GSFT_PATH ```",source,"[plan source]>> step1. Check dependencies. step2. Initialize git submodules with ``` git submodule init git submodule update ``` step3. Create conda environment and install requirements: ``` conda create --name gsft python=3.9 conda activate gsft pip install -r requirements.txt ``` step4. To ensure that Python paths are properly defined, update the `~/.bashrc` by adding the following lines ``` export GSFT_PATH=/path_to_gsfc export PYTHONPATH=$PYTHONPATH:/$GSFT_PATH ```",, real-time-line-based-room-segmentation-and,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/EricssonResearch/Line-Based-Room-Segmentation-and-EDF/release/README.md,"## Installation The project can be installed by running the following command in your terminal: ```bash pip install -r requirements.txt ```",source,"[plan source]>>[INCOMPLETE]step1.
Run the command in your terminal: ``` pip install -r requirements.txt ```",, viga,https://bio.tools/,https://raw.githubusercontent.com/viralInformatics/VIGA/master/README.md,"## Installation ### Step 1: Download VIGA Download VIGA with Git from GitHub ``` git clone https://github.com/viralInformatics/VIGA.git ``` or Download ZIP to local ### Step 2: Download Database ``` 1. download taxdmp.zip [Index of /pub/taxonomy (nih.gov)](https://ftp.ncbi.nlm.nih.gov/pub/taxonomy/) and unzip taxdmp.zip and put it in ./db/ 2. download ""prot.accession2taxid"" file from https://ftp.ncbi.nlm.nih.gov/pub/taxonomy/accession2taxid/ 3. download ""RefSeqVirusProtein"" file from wget -c ftp.ncbi.nlm.nih.gov/refseq/release/viral/viral.1.protein.faa.gz gzip -d viral.1.protein.faa.gz mv viral.1.protein.faa RefSeqVirusProtein 4. download ""nr"" file from wget -c ftp://ftp.ncbi.nlm.nih.gov/blast/db/FASTA/nr.gz or ascp -T -i asperaweb_id_dsa.openssh --host=ftp.ncbi.nih.gov --user=anonftp --mode=recv /blast/db/FASTA/nr.gz ./ gzip -d nr.gz 5. Use Diamond v2.0.11.149 to create two separate databases as the indexing libraries in the current version are incompatible with each other. 6. In order to set up a reference database for DIAMOND, the makedb command needs to be executed with the following command line: diamond makedb --in YourPath/RefSeqVirusProtein -d Diamond_RefSeqVirusProtein --taxonmap YourPath/prot.accession2taxid --taxonnodes YourPath/nodes.dmp diamond makedb --in nr -d Dimond_nr --taxonmap YourPath/prot.accession2taxid --taxonnodes YourPath/nodes.dmp ``` ### Step 3: Installation of dependent software #### Installing Some Software Using Conda ``` conda install fastp=0.12.4 trinity=2.8.5 diamond=2.0.11.149 ragtag=2.1.0 quast=5.0.2 ``` #### Manual Installation of MetaCompass https://github.com/marbl/MetaCompass ### Step 4: Python Dependencies Based on Python 3.6.8 ``` pip install pandas==1.1.5 numpy==1.19.5 matplotlib==3.3.4 biopython==1.79 ``` ",source,"[plan source]>> step1. Download VIGA with Git from GitHub: ``` git clone https://github.com/viralInformatics/VIGA.git ``` (stepOptional). or Download ZIP to local step2. download Database: step2.1. download taxdmp.zip: https://ftp.ncbi.nlm.nih.gov/pub/taxonomy/ and unzip taxdmp.zip and put it in ./db/ step2.2. download ""prot.accession2taxid"" file from https://ftp.ncbi.nlm.nih.gov/pub/taxonomy/accession2taxid/ step2.3. download ""RefSeqVirusProtein"" file from ```wget -c ftp.ncbi.nlm.nih.gov/refseq/release/viral/viral.1.protein.faa.gz gzip -d viral.1.protein.faa.gz mv viral.1.protein.faa RefSeqVirusProtein``` step2.4. download ""nr"" file from ``` wget -c ftp://ftp.ncbi.nlm.nih.gov/blast/db/FASTA/nr.gz or ascp -T -i asperaweb_id_dsa.openssh --host=ftp.ncbi.nih.gov --user=anonftp --mode=recv /blast/db/FASTA/nr.gz ./ gzip -d nr.gz``` step2.5. use Diamond v2.0.11.149 to create two separate databases. step2.6. In order to set up a reference database for DIAMOND, the makedb command needs to be executed with the following command line: ``` diamond makedb --in YourPath/RefSeqVirusProtein -d Diamond_RefSeqVirusProtein --taxonmap YourPath/prot.accession2taxid --taxonnodes YourPath/nodes.dmp diamond makedb --in nr -d Dimond_nr --taxonmap YourPath/prot.accession2taxid --taxonnodes YourPath/nodes.dmp ``` step3.
install requirements via conda ``` conda install fastp=0.12.4 trinity=2.8.5 diamond=2.0.11.149 ragtag=2.1.0 quast=5.0.2 ``` step4: install Python dependencies based on Python 3.6.8 ``` pip install pandas==1.1.5 numpy==1.19.5 matplotlib==3.3.4 biopython==1.79 ```",,"Manual installation of MetaCompass: https://github.com/marbl/MetaCompass" lncrtpred,https://bio.tools/,https://raw.githubusercontent.com/zglabDIB/LncRTPred/main/README.md,,,,, nrn-ez,https://bio.tools/,https://raw.githubusercontent.com/scimemia/NRN-EZ/master/README.md,"**INSTALLATION FOR VERSION 1.1.6** NRN-EZ was built with PyInstaller 3.6, and requires the following languages and libraries: • Python 3.6.9 and higher (currently up to 3.10) • PyQt 5.10.1 • PyQtGraph 0.11.0 Installation instructions for Linux (Ubuntu and Pop!_OS): download the Linux zip file and, from the command window, run a bash command for the install.sh file, in the corresponding installation folder. Installation instructions for Mac OS: download the Mac zip file and copy the NRN-EZ app to the Applications folder. Installation instructions for Windows: download the Win zip file and run the installation wizard.",binary,"[plan binary]>> step1. install requirements: Python 3.6.9 and higher (currently up to 3.10), PyQt 5.10.1, PyQtGraph 0.11.0 step2. for Linux: download the Linux zip file. step3. from the command window, run a bash command for the install.sh file in the corresponding installation folder. ",,"2. for Linux: download the Linux zip file and, from the command window, run a bash command for the install.sh file, in the corresponding installation folder. 2. for Mac OS: download the Mac zip file and copy the NRN-EZ app to the Applications folder. 2. for Windows: download the Win zip file and run the installation wizard." causnet,https://bio.tools/,https://raw.githubusercontent.com/nand1155/CausNet/main/README.md,"## Installation You can install the development version from GitHub with: ``` r require(""devtools"") install_github(""https://github.com/nand1155/CausNet"") ```",source,"[plan source]>>step1. install the development version from GitHub with: ``` r require(""devtools"") install_github(""https://github.com/nand1155/CausNet"") ```",, viralcc,https://bio.tools/,https://raw.githubusercontent.com/dyxstat/Reproduce_ViralCC/main/README.md,"""# Instructions for reproducing results in the ViralCC paper We take the cow fecal datasets for example. The other two datasets were processed following the same procedure. Scripts to process the intermediate data and plot figures are available in the folder [Scripts](https://github.com/dyxstat/Reproduce_ViralCC/tree/main/Scripts). Source data of Figure 2 and 3 in the main text and Figure S1 in the supplementary materials are provided in the folder [Source Data](https://github.com/dyxstat/Reproduce_ViralCC/tree/main/Source%20Data). **Versions of the software used in the analyses** ``` fastq_dump command from Sratoolkit: v2.10.8 bbduk.sh and clumpify.sh command from BBTools suite: v37.25 megahit command from MEGAHIT: v1.2.9 bwa command from BWA MEM: v0.7.17 samtools command from Samtools: v1.15.1 wrapper_phage_contigs_sorter_iPlant.pl command from VirSorter: v1.0.6 checkv command from CheckV: 0.7.0 ``` **Step 1: Download and preprocess the raw data** Note: NCBI may update its links for downloading the database. Please check the latest link at [NCBI](https://www.ncbi.nlm.nih.gov/) if you encounter a download error.
``` wget https://sra-downloadb.be-md.ncbi.nlm.nih.gov/sos2/sra-pub-run-13/ERR2282092/ERR2282092.1 wget https://sra-downloadb.be-md.ncbi.nlm.nih.gov/sos2/sra-pub-run-13/ERR2530126/ERR2530126.1 wget https://sra-downloadb.be-md.ncbi.nlm.nih.gov/sos2/sra-pub-run-13/ERR2530127/ERR2530127.1 fastq-dump --split-files --gzip ERR2282092.1 fastq-dump --split-files --gzip ERR2530126.1 fastq-dump --split-files --gzip ERR2530127.1 bbduk.sh in1=ERR2282092.1_1.fastq.gz in2=ERR2282092.1_2.fastq.gz out1=COWSG1_AQ.fastq.gz out2=COWSG2_AQ.fastq.gz ref=/home1/yuxuandu/cmb/SOFTWARE/bbmap/resources/adapters.fa ktrim=r k=23 mink=11 hdist=1 minlen=50 tpe tbo bbduk.sh in1=ERR2530126.1_1.fastq.gz in2=ERR2530126.1_2.fastq.gz out1=S3HIC1_AQ.fastq.gz out2=S3HIC2_AQ.fastq.gz ref=/home1/yuxuandu/cmb/SOFTWARE/bbmap/resources/adapters.fa ktrim=r k=23 mink=11 hdist=1 minlen=50 tpe tbo bbduk.sh in1=ERR2530127.1_1.fastq.gz in2=ERR2530127.1_2.fastq.gz out1=M1HIC1_AQ.fastq.gz out2=M1HIC2_AQ.fastq.gz ref=/home1/yuxuandu/cmb/SOFTWARE/bbmap/resources/adapters.fa ktrim=r k=23 mink=11 hdist=1 minlen=50 tpe tbo bbduk.sh in1=S3HIC1_AQ.fastq.gz in2=S3HIC2_AQ.fastq.gz out1=S3HIC1_CL.fastq.gz out2=S3HIC2_CL.fastq.gz trimq=10 qtrim=r ftm=5 minlen=50 bbduk.sh in1=M1HIC1_AQ.fastq.gz in2=M1HIC2_AQ.fastq.gz out1=M1HIC1_CL.fastq.gz out2=M1HIC2_CL.fastq.gz trimq=10 qtrim=r ftm=5 minlen=50 bbduk.sh in1=COWSG1_AQ.fastq.gz in2=COWSG2_AQ.fastq.gz out1=COWSG1_CL.fastq.gz out2=COWSG2_CL.fastq.gz trimq=10 qtrim=r ftm=5 minlen=50 bbduk.sh in1=S3HIC1_CL.fastq.gz in2=S3HIC2_CL.fastq.gz out1=S3HIC1_trim.fastq.gz out2=S3HIC2_trim.fastq.gz ftl=10 bbduk.sh in1=M1HIC1_CL.fastq.gz in2=M1HIC2_CL.fastq.gz out1=M1HIC1_trim.fastq.gz out2=M1HIC2_trim.fastq.gz ftl=10 clumpify.sh in1=S3HIC1_trim.fastq.gz in2=S3HIC2_trim.fastq.gz out1=S3HIC1_dedup.fastq.gz out2=S3HIC2_dedup.fastq.gz dedupe clumpify.sh in1=M1HIC1_trim.fastq.gz in2=M1HIC2_trim.fastq.gz out1=M1HIC1_dedup.fastq.gz out2=M1HIC2_dedup.fastq.gz dedupe cat S3HIC1_dedup.fastq.gz M1HIC1_dedup.fastq.gz > HIC1.fastq.gz cat S3HIC2_dedup.fastq.gz M1HIC2_dedup.fastq.gz > HIC2.fastq.gz ``` **Step 2: Assemble contigs and align processed Hi-C reads to contigs** ``` megahit -1 COWSG1_CL.fastq.gz -2 COWSG2_CL.fastq.gz -o COW_ASSEMBLY --min-contig-len 1000 --k-min 21 --k-max 141 --k-step 12 --merge-level 20,0.95 bwa index final.contigs.fa bwa mem -5SP final.contigs.fa HIC1.fastq.gz HIC2.fastq.gz > COW_MAP.sam samtools view -F 0x904 -bS COW_MAP.sam > COW_MAP_UNSORTED.bam samtools sort -n COW_MAP_UNSORTED.bam -o COW_MAP_SORTED.bam ``` **Step 3: Identify viral contigs from assembled contigs** ``` perl removesmalls.pl 3000 final.contigs.fa > cow_3000.fa wrapper_phage_contigs_sorter_iPlant.pl -f cow_3000.fa --db 1 --wdir output_directory --ncpu 16 --data-dir /panfs/qcb-panasas/yuxuandu/virsorter-data Rscript find_viral_contig.R ``` **Step 4: Run ViralCC** ``` python ./viralcc.py pipeline -v final.contigs.fa COW_MAP_SORTED.bam viral.txt out_cow ``` **Step 5: Evaluate draft viral genomes using CheckV** ``` python concatenation.py -p out_cow/VIRAL_BIN -o viralCC_cow_bins.fa checkv end_to_end viralCC_cow_bins.fa output_checkv_viralcc_cow -t 16 -d /panfs/qcb-panasas/yuxuandu/checkv-db-v1.0 ```""",source,"[plan source]>>step1. download and preprocess the raw data.
``` wget https://sra-downloadb.be-md.ncbi.nlm.nih.gov/sos2/sra-pub-run-13/ERR2282092/ERR2282092.1 wget https://sra-downloadb.be-md.ncbi.nlm.nih.gov/sos2/sra-pub-run-13/ERR2530126/ERR2530126.1 wget https://sra-downloadb.be-md.ncbi.nlm.nih.gov/sos2/sra-pub-run-13/ERR2530127/ERR2530127.1 fastq-dump --split-files --gzip ERR2282092.1 fastq-dump --split-files --gzip ERR2530126.1 fastq-dump --split-files --gzip ERR2530127.1 bbduk.sh in1=ERR2282092.1_1.fastq.gz in2=ERR2282092.1_2.fastq.gz out1=COWSG1_AQ.fastq.gz out2=COWSG2_AQ.fastq.gz ref=/home1/yuxuandu/cmb/SOFTWARE/bbmap/resources/adapters.fa ktrim=r k=23 mink=11 hdist=1 minlen=50 tpe tbo bbduk.sh in1=ERR2530126.1_1.fastq.gz in2=ERR2530126.1_2.fastq.gz out1=S3HIC1_AQ.fastq.gz out2=S3HIC2_AQ.fastq.gz ref=/home1/yuxuandu/cmb/SOFTWARE/bbmap/resources/adapters.fa ktrim=r k=23 mink=11 hdist=1 minlen=50 tpe tbo bbduk.sh in1=ERR2530127.1_1.fastq.gz in2=ERR2530127.1_2.fastq.gz out1=M1HIC1_AQ.fastq.gz out2=M1HIC2_AQ.fastq.gz ref=/home1/yuxuandu/cmb/SOFTWARE/bbmap/resources/adapters.fa ktrim=r k=23 mink=11 hdist=1 minlen=50 tpe tbo bbduk.sh in1=S3HIC1_AQ.fastq.gz in2=S3HIC2_AQ.fastq.gz out1=S3HIC1_CL.fastq.gz out2=S3HIC2_CL.fastq.gz trimq=10 qtrim=r ftm=5 minlen=50 bbduk.sh in1=M1HIC1_AQ.fastq.gz in2=M1HIC2_AQ.fastq.gz out1=M1HIC1_CL.fastq.gz out2=M1HIC2_CL.fastq.gz trimq=10 qtrim=r ftm=5 minlen=50 bbduk.sh in1=COWSG1_AQ.fastq.gz in2=COWSG2_AQ.fastq.gz out1=COWSG1_CL.fastq.gz out2=COWSG2_CL.fastq.gz trimq=10 qtrim=r ftm=5 minlen=50 bbduk.sh in1=S3HIC1_CL.fastq.gz in2=S3HIC2_CL.fastq.gz out1=S3HIC1_trim.fastq.gz out2=S3HIC2_trim.fastq.gz ftl=10 bbduk.sh in1=M1HIC1_CL.fastq.gz in2=M1HIC2_CL.fastq.gz out1=M1HIC1_trim.fastq.gz out2=M1HIC2_trim.fastq.gz ftl=10 clumpify.sh in1=S3HIC1_trim.fastq.gz in2=S3HIC2_trim.fastq.gz out1=S3HIC1_dedup.fastq.gz out2=S3HIC2_dedup.fastq.gz dedupe clumpify.sh in1=M1HIC1_trim.fastq.gz in2=M1HIC2_trim.fastq.gz out1=M1HIC1_dedup.fastq.gz out2=M1HIC2_dedup.fastq.gz dedupe cat S3HIC1_dedup.fastq.gz M1HIC1_dedup.fastq.gz > HIC1.fastq.gz cat S3HIC2_dedup.fastq.gz M1HIC2_dedup.fastq.gz > HIC2.fastq.gz ``` step2. assemble contigs and step3. align processed Hi-C reads to contigs: ``` megahit -1 COWSG1_CL.fastq.gz -2 COWSG2_CL.fastq.gz -o COW_ASSEMBLY --min-contig-len 1000 --k-min 21 --k-max 141 --k-step 12 --merge-level 20,0.95 bwa index final.contigs.fa bwa mem -5SP final.contigs.fa HIC1.fastq.gz HIC2.fastq.gz > COW_MAP.sam samtools view -F 0x904 -bS COW_MAP.sam > COW_MAP_UNSORTED.bam samtools sort -n COW_MAP_UNSORTED.bam -o COW_MAP_SORTED.bam ``` step4. identify viral contigs from assembled contigs: ``` perl removesmalls.pl 3000 final.contigs.fa > cow_3000.fa wrapper_phage_contigs_sorter_iPlant.pl -f cow_3000.fa --db 1 --wdir output_directory --ncpu 16 --data-dir /panfs/qcb-panasas/yuxuandu/virsorter-data Rscript find_viral_contig.R ``` step5. run ViralCC: ``` python ./viralcc.py pipeline -v final.contigs.fa COW_MAP_SORTED.bam viral.txt out_cow ``` step6. evaluate draft viral genomes using CheckV: ``` python concatenation.py -p out_cow/VIRAL_BIN -o viralCC_cow_bins.fa checkv end_to_end viralCC_cow_bins.fa output_checkv_viralcc_cow -t 16 -d /panfs/qcb-panasas/yuxuandu/checkv-db-v1.0 ```",, (extra comment: NCBI may update its links for downloading the database.
Please check the latest link at [NCBI](https://www.ncbi.nlm.nih.gov/) if you encounter a download error) DRaW,https://bio.tools/,https://raw.githubusercontent.com/BioinformaticsIASBS/DRaW/main/README.md,"# Running DRaW on COVID-19 datasets DRaW has been applied to three COVID-19 datasets: DS1, DS2, and DS3. There are three subdirectories, ""DS1_repur"", ""DS2_repur"", and ""DS3_repur"", in the ""Drug-Repurposing"" directory. Each subdirectory has been assigned to one of the mentioned datasets. We put the DRaW implementation file for each dataset in each subdirectory separately. This is to keep the corresponding hyperparameters of each dataset. We use Adam as the optimizer with a learning rate equal to 0.001, beta1 = 0.9, beta2 = 0.999, and epsilon = 1e-7. The dropout rate is set to 0.5. The batch size is chosen by the number of samples per dataset. This hyperparameter for DS1 is equal to 8, and those for DS2 and DS3 are set to 32. To run the model, it is enough to execute the ""Drug-Repurposing.py"" script in the command line. After that, execute ""score.py"". The repurposed drugs will be stored in the ""meanScore.csv"" spreadsheet. It contains the average ranking of each drug. The lower, the better. For example, to run DRaW on DS1: ```bash cd Drug-Repurposing\DS1_repur python Drug-Repurposing.py python score.py ``` The same goes for the other datasets. Just change the directory path. # Performance analysis In order to analyze the performance, there is one extra directory in the root, ""Performance_analysis"". By running the following command, the model is trained on a given dataset and returns its performance metrics, AUC-ROC, AUPR, F1 score, etc. The input parameter ""dataset_name"" is one of the following five dataset names. The first one is COVID-19 DS3 and the other four are golden benchmarks: 'DS3','ic','nr','gpcr','e' ```bash cd Performance_analysis python main.py dataset_name ```",source,"[plan source]>>step1. execute the ""Drug-Repurposing.py"" script in the command line. step2. after that, execute ""score.py"": ```bash cd Drug-Repurposing\DS1_repur python Drug-Repurposing.py python score.py ```",," The repurposed drugs will be stored in the ""meanScore.csv"" spreadsheet. It contains the average ranking of each drug. The lower, the better. For example, to run DRaW on DS1" NRN-EZ,https://bio.tools/,https://raw.githubusercontent.com/scimemia/NRN-EZ/master/README.md,"**INSTALLATION FOR VERSION 1.1.6** NRN-EZ was built with PyInstaller 3.6, and requires the following languages and libraries: • Python 3.6.9 and higher (currently up to 3.10) • PyQt 5.10.1 • PyQtGraph 0.11.0 Installation instructions for Linux (Ubuntu and Pop!_OS): download the Linux zip file and, from the command window, run a bash command for the install.sh file, in the corresponding installation folder. Installation instructions for Mac OS: download the Mac zip file and copy the NRN-EZ app to the Applications folder. Installation instructions for Windows: download the Win zip file and run the installation wizard.",source,"[plan source]>>step1. install the requirements: Python 3.6.9 and higher (currently up to 3.10), PyQt 5.10.1, PyQtGraph 0.11.0 step2. for Linux: download the Linux zip file and, from the command window, run a bash command for the install.sh file, in the corresponding installation folder. step2. for Mac OS: download the Mac zip file and copy the NRN-EZ app to the Applications folder. step2. for Windows: download the Win zip file and run the installation wizard.","step2.
for Linux: download the Linux zip file and, from the command window, run a bash command for the install.sh file, in the corresponding installation folder. step2. for Mac OS: download the Mac zip file and copy the NRN-EZ app to the Applications folder. step2. for Windows: download the Win zip file and run the installation wizard.", guiding-instruction-based-image-editing-via,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/apple/ml-mgie/main/README.md,"## Requirements ``` conda create -n mgie python=3.10 -y conda activate mgie conda update -n base -c defaults conda setuptools -y conda install -c conda-forge git git-lfs ffmpeg vim htop ninja gpustat -y conda clean -a -y pip install -U pip cmake cython==0.29.36 pydantic==1.10 numpy pip install -U gdown pydrive2 wget jupyter jupyterlab jupyterthemes ipython pip install -U sentencepiece transformers diffusers tokenizers datasets gradio==3.37 accelerate evaluate git+https://github.com/openai/CLIP.git pip install -U https://download.pytorch.org/whl/cu113/torch-1.12.0%2Bcu113-cp310-cp310-linux_x86_64.whl https://download.pytorch.org/whl/cu113/torchvision-0.13.0%2Bcu113-cp310-cp310-linux_x86_64.whl https://download.pytorch.org/whl/cu113/torchaudio-0.12.0%2Bcu113-cp310-cp310-linux_x86_64.whl pip install -U deepspeed # git clone this repo cd ml-mgie git submodule update --init --recursive cd LLaVA pip install -e . pip install -U https://download.pytorch.org/whl/cu113/torch-1.12.0%2Bcu113-cp310-cp310-linux_x86_64.whl https://download.pytorch.org/whl/cu113/torchvision-0.13.0%2Bcu113-cp310-cp310-linux_x86_64.whl https://download.pytorch.org/whl/cu113/torchaudio-0.12.0%2Bcu113-cp310-cp310-linux_x86_64.whl pip install -U ninja flash-attn==1.0.2 pip install -U pydrive2 gdown wget cd .. cp mgie_llava.py LLaVA/llava/model/llava.py cp mgie_train.py LLaVA/llava/train/train.py ```",source,"[plan source]>> step1. create conda environment ``` conda create -n mgie python=3.10 -y conda activate mgie conda update -n base -c defaults conda setuptools -y conda install -c conda-forge git git-lfs ffmpeg vim htop ninja gpustat -y conda clean -a -y ``` step2. install dependencies ``` pip install -U pip cmake cython==0.29.36 pydantic==1.10 numpy pip install -U gdown pydrive2 wget jupyter jupyterlab jupyterthemes ipython pip install -U sentencepiece transformers diffusers tokenizers datasets gradio==3.37 accelerate evaluate git+https://github.com/openai/CLIP.git pip install -U https://download.pytorch.org/whl/cu113/torch-1.12.0%2Bcu113-cp310-cp310-linux_x86_64.whl https://download.pytorch.org/whl/cu113/torchvision-0.13.0%2Bcu113-cp310-cp310-linux_x86_64.whl https://download.pytorch.org/whl/cu113/torchaudio-0.12.0%2Bcu113-cp310-cp310-linux_x86_64.whl pip install -U deepspeed ``` step3. git clone this repo ``` cd ml-mgie git submodule update --init --recursive cd LLaVA ``` step4. install module ``` pip install -e . pip install -U https://download.pytorch.org/whl/cu113/torch-1.12.0%2Bcu113-cp310-cp310-linux_x86_64.whl https://download.pytorch.org/whl/cu113/torchvision-0.13.0%2Bcu113-cp310-cp310-linux_x86_64.whl https://download.pytorch.org/whl/cu113/torchaudio-0.12.0%2Bcu113-cp310-cp310-linux_x86_64.whl pip install -U ninja flash-attn==1.0.2 pip install -U pydrive2 gdown wget cd .. 
cp mgie_llava.py LLaVA/llava/model/llava.py cp mgie_train.py LLaVA/llava/train/train.py ```",, self-play-fine-tuning-converts-weak-language,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/uclaml/SPIN/main/README.md,"## Setup The following steps provide the necessary setup to run our codes. 1. Create a Python virtual environment with Conda: ``` conda create -n myenv python=3.10 conda activate myenv ``` 2. Install PyTorch `v2.1.0` with a compatible CUDA version, following instructions from [PyTorch Installation Page](https://pytorch.org/get-started/locally/). For example with cuda 11: ``` pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu118 ``` 3. Install the following Python dependencies to run the codes. ``` python -m pip install . python -m pip install flash-attn --no-build-isolation ``` 4. Log in to your huggingface account for downloading models ``` huggingface-cli login --token ""${your_access_token}"" ```",source,"[plan source]>>step1. create a Python virtual environment with Conda: ``` conda create -n myenv python=3.10 conda activate myenv ``` step2. install PyTorch `v2.1.0` with a compatible CUDA version, following instructions from [PyTorch Installation Page](https://pytorch.org/get-started/locally/). For example with cuda 11: ``` pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu118 ``` step3. install the following Python dependencies to run the codes. ``` python -m pip install . python -m pip install flash-attn --no-build-isolation ``` step4. log in to your huggingface account for downloading models ``` huggingface-cli login --token ""${your_access_token}"" ```",, genegpt-teaching-large-language-models-to-use,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/ncbi/GeneGPT/main/README.md,"# Requirements The code has been tested with Python 3.9.13. Please first install the required packages by: ```bash pip install -r requirements.txt ``` You also need an OpenAI API key to run GeneGPT with Codex. Replace the placeholder with your key in `config.py`: ```bash $ cat config.py API_KEY = 'YOUR_OPENAI_API_KEY' ``` ## Using GeneGPT After setting up the environment, one can run GeneGPT on GeneTuring by: ```bash python main.py 111111 ``` where `111111` denotes that all Documentations (Dc.1-2) and Demonstrations (Dm.1-4) are used. To run GeneGPT-slim, simply use: ```bash python main.py 001001 ``` which will only use the Dm.1 and Dm.4 for in-context learning.",source,"[plan source]>>step1. install requirements: ```bash pip install -r requirements.txt ``` step2. set the OpenAI API key to run GeneGPT with Codex; replace the placeholder with your key in `config.py`: ```bash $ cat config.py API_KEY = 'YOUR_OPENAI_API_KEY' ``` step3. execute GeneGPT: after setting up the environment, one can run GeneGPT on GeneTuring by: ```bash python main.py 111111 ``` where `111111` denotes that all Documentations (Dc.1-2) and Demonstrations (Dm.1-4) are used. step4.
To run GeneGPT-slim, simply use: ```bash python main.py 001001 ```",,The code has been tested with Python 3.9.13 the-boundary-of-neural-network-trainability,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/Sohl-Dickstein/fractal/main/README.md,,,,, learning-to-fly-in-seconds,https://paperwithcode.com/paper/,https://raw.githubusercontent.com/arplaboratory/learning-to-fly/master/README.MD,"## Instructions to run the code ### Docker (isolated) We provide a pre-built Docker image with a simple web interface that can be executed using a single command (given that Docker is already installed on your machine): ``` docker run -it --rm -p 8000:8000 arpllab/learning_to_fly ``` After the container is running, navigate to [https://0.0.0.0:8000](https://0.0.0.0:8000) and you should see something like (after starting the training):