Hi, I have a list of miRNA IDs from different organisms. How can I get the sequences of the mature miRNAs for all my IDs? For example: lgi-mir-34, mdo-mir-193, mdo-mir-21, mdo-mir-302d, mml-mir-18
Another way is miRDeep2's `fastaselect.pl`: `fastaselect.pl reads.fa reads_select.ids > reads_select.fa`. This script only prints out the FASTA entries that match an ID in the ID file.
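For a long ID list, a sketch of the miRBase route (the download URL and the one-ID-per-line `ids.txt` file are assumptions; matching is done case-insensitively because mature names in miRBase use "miR" while the list above uses "mir"):

    wget https://www.mirbase.org/ftp/CURRENT/mature.fa.gz && gunzip mature.fa.gz
    # keep a FASTA record when the first word of its header matches an ID in ids.txt
    awk 'NR==FNR {ids[tolower($1)]; next}
         /^>/    {keep = (tolower(substr($1,2)) in ids)}
         keep' ids.txt mature.fa > mature_selected.fa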
biostars
{"uid": 212904, "view_count": 1527, "vote_count": 1}
Hello friends, I recently analysed RNA-Seq data and obtained DEGs using DESeq2. Everything was fine until I noticed that with a P-value cutoff of 0.05 I am left with 1,700 DEGs, whereas with an adjusted P-value (padj) cutoff of 0.05 I am left with only 127 DEGs. Isn't this surprising? How can there be such a huge difference in the number of significant genes just from using the adjusted P-value? Thank you to anyone who can spare a little energy and time to guide me. I am stuck!
It's completely expected. Under classic multiple-testing correction you'd expect your adjusted P-values to be 20,000 times larger than your non-adjusted values (one factor per gene tested). Obviously it's not as bad as this because we don't use Bonferroni in RNA-seq, but the multiple-testing burden is still high. Another way of looking at it: under a standard hypothesis test we would expect 5% of tests to give a false positive, that's around 1,000 genes, so we would expect the number of genes passing a 5% threshold to be at least 1,000 lower for an adjusted P-value.
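A toy R illustration of why the drop is expected, using nothing but null p-values (i.e. no real signal at all):

    set.seed(1)
    p <- runif(20000)                        # 20,000 tests, none truly significant
    sum(p < 0.05)                            # ~1,000 raw "hits" by chance alone
    sum(p.adjust(p, method = "BH") < 0.05)   # typically 0 after Benjamini-Hochberg (DESeq2's default)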
biostars
{"uid": 487859, "view_count": 632, "vote_count": 2}
Hi all, I am very new in this field and really appreciate whoever can help me with this question. I was wondering how I can filter my bfiles (BIM, BED, FAM) for a subset of my SNPs. I have my BIM, BED and FAM files, but I do not need all the SNPs available, so I filtered my BIM file with AWK commands; however, I do not know how to match my BED file with the newly created BIM file. Please tell me how I should do it. Thank you very much in advance.
You should create a list of the SNPs you want to keep and use plink's filtering options. Note that `--remove`/`--keep` operate on samples; for variants you want `--extract` (keep the listed SNPs) or `--exclude` (drop them). It goes something like: `plink --bfile <yourfile> --extract snps.txt --make-bed --out <newset>`. This subsets the BED, BIM and FAM files together.
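For example, a sketch that reuses your awk-filtered BIM (file names are placeholders):

    awk '{print $2}' filtered.bim > keep_snps.txt   # column 2 of a BIM file holds the variant IDs
    plink --bfile original --extract keep_snps.txt --make-bed --out subset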
biostars
{"uid": 238359, "view_count": 3870, "vote_count": 1}
**Hello, when I run GATK, this error always occurs: "java.lang.IllegalArgumentException: samples cannot be empty". Is there a mistake in my input file? Thank you for your help!**

**My BAM file is as follows:**

    HWI-EAS418:3:37:1070:1462 83 chr20 46689301 255 50M = 46687222 -2129 CAGCTCCAGGCCGCTCAAGAAGCGGCTGCTCCGCTCCCGGGCTGCGGCCA 0:@2%,6.=:.4=,4+6;B79>=BB=2>7=:9=4(599@BB9BB>BB??A NH:i:1 HI:i:1 AS:i:99 nM:i:0

**The script is the following:**

    java -jar /home/H/mutation/gatk/gatk-package-4.1.4.1-local.jar HaplotypeCaller -R /home/H/mutation/ref/ctat_genome_lib_build_dir/ref_genome.fa \
        -I /home/H/mutation/ctat-mutations-master/testing/__misc_data/Aligned.sortedByCoord.out.GRCh38.bam \
        --recover-dangling-heads true \
        --dont-use-soft-clipped-bases \
        -stand-call-conf 20.0 -O test.vcf

**The logs are the following:**

    Runtime.totalMemory()=2260729856
    java.lang.IllegalArgumentException: samples cannot be empty
        at org.broadinstitute.hellbender.utils.Utils.validateArg(Utils.java:725)
        at org.broadinstitute.hellbender.tools.walkers.haplotypecaller.ReferenceConfidenceModel.<init>(ReferenceConfidenceModel.java:116)
        at org.broadinstitute.hellbender.tools.walkers.haplotypecaller.HaplotypeCallerEngine.initialize(HaplotypeCallerEngine.java:221)
        at org.broadinstitute.hellbender.tools.walkers.haplotypecaller.HaplotypeCallerEngine.<init>(HaplotypeCallerEngine.java:164)
        at org.broadinstitute.hellbender.tools.walkers.haplotypecaller.HaplotypeCaller.onTraversalStart(HaplotypeCaller.java:224)
        at org.broadinstitute.hellbender.engine.GATKTool.doWork(GATKTool.java:1046)
        at org.broadinstitute.hellbender.cmdline.CommandLineProgram.runTool(CommandLineProgram.java:139)
        at org.broadinstitute.hellbender.cmdline.CommandLineProgram.instanceMainPostParseArgs(CommandLineProgram.java:191)
        at org.broadinstitute.hellbender.cmdline.CommandLineProgram.instanceMain(CommandLineProgram.java:210)
        at org.broadinstitute.hellbender.Main.runCommandLineProgram(Main.java:163)
        at org.broadinstitute.hellbender.Main.mainEntry(Main.java:206)
        at org.broadinstitute.hellbender.Main.main(Main.java:292)
I had the same problem; I had to add the @RG read group tag to the .bam file using `samtools addreplacerg -r '@RG\tID:samplename\tSM:samplename' input.bam -o output.bam`
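You can confirm the missing read group before the fix and check it again afterwards (assuming samtools is available):

    samtools view -H input.bam  | grep '^@RG'   # no output means no read group
    samtools view -H output.bam | grep '^@RG'   # should now show the ID and SM fields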
biostars
{"uid": 424817, "view_count": 4493, "vote_count": 3}
I'm looking for source code that can take a multiple sequence alignment and return the sum-of-pairs or weighted sum-of-pairs score based on any scoring scheme. Can anyone help?
When I moved on from CLUSTAL, I found [MAFFT](http://mafft.cbrc.jp/alignment/software/) to be a great (and amazingly faster) alternative for multiple sequence alignment that even allows playing around with scoring matrices. I know it takes sum-of-pairs into account when setting up a run, but what I don't know is whether it provides the scores as output. The source is available and published, so I think it would be worth giving it a try.
biostars
{"uid": 12675, "view_count": 3991, "vote_count": 2}
Hi all, I have recently tried to estimate runs of homozygosity (ROH) from my VCF file using plink 1.9. I ran the following code to generate the binary files that plink requires:

    plink --vcf myfile.vcf --make-bed --out out_name --no-sex --no-parents --no-fid --no-pheno --allow-extra-chr

This VCF file contains only one individual and about 3 million SNPs. I used --allow-extra-chr because I mapped my sequences to a draft genome. Then I used the following code (with default parameters) to estimate ROH for my sample:

    plink -bfile out_name --homozyg --allow-extra-chr

The result gave me 0 ROH, and only the header was produced in the .hom file. I also tried different parameters with different SNP windows and criteria, such as:

    plink -bfile out_name --homozyg --homozyg-window-snp 50 --homozyg-snp 50 --homozyg-window-missing 3 --homozyg-kb 100 --homozyg-density 1000 --allow-extra-chr

However, all the results were the same:

    PLINK v1.90b6.12 64-bit (28 Oct 2019)   www.cog-genomics.org/plink/1.9/
    (C) 2005-2019 Shaun Purcell, Christopher Chang   GNU General Public License v3
    Logging to plink.log.
    Options in effect:
      --allow-extra-chr
      --bfile out_name
      --homozyg
      --homozyg-density 1000
      --homozyg-kb 100
      --homozyg-snp 50
      --homozyg-window-missing 3
      --homozyg-window-snp 50

    515905 MB RAM detected; reserving 257952 MB for main workspace.
    3708761 variants loaded from .bim file.
    1 person (0 males, 0 females, 1 ambiguous) loaded from .fam.
    Ambiguous sex ID written to plink.nosex .
    Using 1 thread (no multithreaded calculations invoked).
    Before main variant filters, 1 founder and 0 nonfounders present.
    Calculating allele frequencies... done.
    3708761 variants and 1 person pass filters and QC.
    Note: No phenotypes present.
    --homozyg: Scan complete, found 0 ROH.
    Results saved to plink.hom + plink.hom.indiv + plink.hom.summary .

Does anyone have an idea why this happened with my files? I would be grateful for any suggestions.
This is unsurprising if your VCF contains only positions where your sample *differs* from the reference genome; practically all the ROHs will span regions which are excluded by your VCF. You may need to reconstruct your VCF in a way that includes homozygous-REF calls in an unbiased manner.
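If you are calling the variants yourself, one way to get such a VCF is to emit non-variant sites as well; a bcftools sketch (the key point is dropping `-v`/`--variants-only` from the call step; file names are placeholders):

    bcftools mpileup -f reference.fa sample.bam | bcftools call -m -Oz -o allsites.vcf.gz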
biostars
{"uid": 411013, "view_count": 5346, "vote_count": 5}
Hi guys, I need some help because I am not confident with Linux. I have managed to install Ubuntu in VirtualBox on my Windows 7 machine, and now I want to install VCFtools and make the executable files. How can I do this? Thanks.
This is from the installation instructions in the README:

> Compiling:
> To compile and install VCFtools, you should type 'make' in the vcftools folder. The perl scripts and cpp executable will be installed in the /vcftools_(version_num)/bin/ folder. It is recommended that you add this folder to your PATH.

So, go to the folder where you have extracted vcftools and compile:

    cd [path-to-vcf-tools]
    make

Then, if you like, you can add it to your PATH (so that you can run the tools by typing their name on the command line rather than referencing them exactly) by following the instructions [here](http://askubuntu.com/questions/60218/how-to-add-a-directory-to-my-path).

Alternative way: if you have the Ubuntu universe repositories enabled you can install it from there (assuming, of course, that you are running Ubuntu). Just open a terminal and type:

    sudo apt-get install vcftools

Hope you get it working.
biostars
{"uid": 50107, "view_count": 18718, "vote_count": 1}
I am trying to extract maximum features for my machine-learning input, and I need PV4, BQ, G3 and NS to be added to my VCF files for all the SNP calling. I used samtools and bcftools as below to extract some of the information:

    samtools mpileup --skip-indels -m 1 -E --BCF --output-tags DP,AD,ADF,ADR,SP,INFO/AD,INFO/ADF,INFO/ADR -f test/reference/Aradu.fa test/bam/A72.bam | bcftools call -mv --skip-variants indels --multiallelic-caller --variants-only | bcftools +fill-tags > test/test/At.vcf

The result of my code contains lines like:

> Aradu.A01 1345 . T G 5.57134 . DP=6;ADF=0,5;ADR=1,0;AD=1,5;VDB=0.0340507;SGB=-0.590765;RPB=1;MQB=1;MQSB=1;BQB=1;MQ0F=0;AC=2;AN=2;DP4=0,1,5,0;MQ=9;NS=1;AF=1;MAF=0;AC_Het=0;AC_Hom=2;AC_Hemi=0;HWE=1 GT:PL:DP:SP:ADF:ADR:AD 1/1:31,10,0:6:0:0,5:1,0:1,5

How can I include any of PV4, BQ, G3 or NS in the file? Thanks
Hi Sahar, I get **PV4** (*P-values for strand bias, baseQ bias, mapQ bias and tail distance bias*) and **G3** (*ML estimate of genotype frequencies*) automatically when I align my data with `bwa mem` and then call variants with `samtools mpileup` piped into `bcftools call` (latest versions). **NS** (*Number of Samples With Data*) may be a tag that was used a lot in the past but has been more or less replaced; you can obtain similar information by looking at the **AC** (*allele count in genotypes, for each ALT allele, in the same order as listed*), **AF** (*allele frequency for each ALT allele in the same order as listed: use this when estimated from primary data, not called genotypes*) and **AN** (*total number of alleles in called genotypes*) tags. The same may be true for **BQ** (*RMS base quality at this position*). However, using `samtools mpileup`, you can ensure a minimal base quality on variant bases with the `--min-BQ` command-line parameter.
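For the cohort-level tags, a sketch with the fill-tags plugin (run `bcftools +fill-tags -- -h` to see which tags your bcftools version supports; recent releases include NS):

    bcftools +fill-tags calls.vcf -Ov -o calls.tagged.vcf -- -t AC,AN,AF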
biostars
{"uid": 289224, "view_count": 2267, "vote_count": 1}
Good afternoon, I'm working with GATK, specifically with the HaplotypeCaller tool, to create the GVCF files, and I've seen that it takes quite a lot of time. I'm a bit in a rush and I would like to speed up the process, but I did not find useful information about it. Here is my code:

    java -jar ~/softwares/GATKK/gatk/gatk-package-4.1.7.0-local.jar HaplotypeCaller --reference Pmuralis_1.0.fa --input run2_mergeandaligned.bam --output run2_4096_mergeandaligned.g.vcf -ERC GVCF

The only thing that seems to improve the process a bit is adding `-Xmx4096m` at the beginning, like this:

    java -Xmx4096m -jar ~/softwares/GATKK/gatk/gatk-package-4.1.7.0-local.jar HaplotypeCaller --reference Pmuralis_1.0.fa --input run2_mergeandaligned.bam --output run2_4096_mergeandaligned.g.vcf -ERC GVCF

Another thing I noticed is this message:

    20:58:54.759 INFO  IntelPairHmm - Available threads: 20
    20:58:54.759 INFO  IntelPairHmm - Requested threads: 4

My server has 20 cores but the process uses just 4. I think I solved that by adding `-native-pair-hmm-threads 20`, but it didn't speed up the process... Let's see if somebody knows about Java and can help me! Thank you very much!
> I'm a bit in a rush and I would like to speed up the process, but I did not find useful information about it.

Split by chromosome using the option `-L` and run the jobs in parallel.
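A sketch of that approach (chromosome names are read from the reference .fai index; the paths match the question, and the per-chromosome GVCFs still need to be combined afterwards, e.g. with GatherVcfs):

    for chr in $(cut -f1 Pmuralis_1.0.fa.fai); do
        java -jar ~/softwares/GATKK/gatk/gatk-package-4.1.7.0-local.jar HaplotypeCaller \
            --reference Pmuralis_1.0.fa --input run2_mergeandaligned.bam \
            -L "$chr" -ERC GVCF --output "run2.${chr}.g.vcf" &
    done
    wait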
biostars
{"uid": 437521, "view_count": 2674, "vote_count": 1}
Hi, I am fairly new to bioinformatics. I have the genome sequences of two closely related bacteria (or, more accurately, both genome sequences are publicly available). One bacterium grows much faster under a particular set of growth conditions than the other. I'm trying to determine what genes may be responsible for this improved growth. How do I compare these two genomes to find out what genes are different between the two? And is there a way to find out what the functions (e.g. GO terms) for the different genes are or will I just get a list of gene names as output? Thanks for any help you can offer.
A fast option is to look up databases that store information about microbial genomes, e.g. [IMG][2]. A more detailed analysis would require a bit more time and some command-line work. If your strains are related (i.e. same genus or species), I would say the first thing to do is to determine the genes these two bacteria share and the ones that are unique to each, which means identifying orthologous genes. There are plenty of programs out there to do that; you can take a look at these tools: https://omictools.com/pangenomics-category. They are intended for studying pan-genomes, but most of them will perfectly suit your use. I personally use GET_HOMOLOGUES, but I have read that [OrthoFinder][1] is really good and reliable. After this analysis you will have a list of genes that are shared between the two bacteria and a list of genes that are unique; you will also see which genes are present in duplicate (paralogues) in your genomes. However, I am not sure that this analysis alone will answer your question. Especially if the strains are extremely closely related, you might need a more detailed analysis considering gene variants, i.e. SNPs. You might then see that some shared genes show differences between the two strains, and that some of these differences affect the protein sequences as well (non-synonymous mutations), indicating that the two strains have different versions of shared proteins. Hope this helps.

[1]: http://www.stevekellylab.com/software/orthofinder
[2]: https://img.jgi.doe.gov/
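As a sketch, OrthoFinder's basic invocation just takes a directory containing one protein FASTA per genome (the directory name is a placeholder):

    orthofinder -f proteomes/    # proteomes/ holds one .faa file per bacterium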
biostars
{"uid": 213069, "view_count": 5677, "vote_count": 1}
Hi, I'm running bam-readcount (commit `6c3f3ae901`) on a few hundred BAM files against a single reference FASTA file (designated with `-f`). However, when I look at the output for any of the files, all of the reference bases, for any position, are N.

    NODE_14_length_46_cov_1.239130 52 N 1 =:0:0.00:0.00:0.00:0:0:0.00:0.00:0.00:0:0.00:0.00:0.00 A:1:2.00:36.00:2.00:1:0:0.00:0.02:0.00:0:0.00:0.00:0.00 C:0:0.00:0.00:0.00:0:0:0.00:0.00:0.00:0:0.00:0.00:0.00 G:0:0.00:0.00:0.00:0:0:0.00:0.00:0.00:0:0.00:0.00:0.00 T:0:0.00:0.00:0.00:0:0:0.00:0.00:0.00:0:0.00:0.00:0.00 N:0:0.00:0.00:0.00:0:0:0.00:0.00:0.00:0:0.00:0.00:0.00

Is this a bug? The base in the reference sequence at this position is A.

    >NODE_14_length_46_cov_1.239130
    AGCTAACTGAGTTTATCACACTCAGTTAATGTCCATTTCACTTCACACATAACCTTACAG
    ATCGGAAGATCTCGTA

Thanks!
I see this if I run on the entire BAM file without specifying a list of positions or a region. This is a bug. I pushed a very quick fix out that should solve this issue. It's 4b6479a42d002d855eda6a45bca097756d493cdb. Does this fix the issue?
biostars
{"uid": 98497, "view_count": 2896, "vote_count": 1}
I'm new to the bioinformatics field and am still learning how to use all the tools, but I've been stumped for about a week on something that I feel should be very easy. Basically, I want to compare the reads in a BAM file to a FASTA reference sequence and get the changes. I can easily view them with **samtools tview**, but I need to have it in an Excel file for manipulation/concatenation. I can't seem to figure out why **mpileup** won't give me a straight list of which reads are mutated, unless that's the wrong tool for the job. I'm pretty good at programming, so if the solution is something I need to code myself I could do it, but since the formats are different from what I'm used to, it's been a major headache. Thanks

![What I'm dealing with](http://i.imgur.com/XR3F7xc.png)
You need to use bcftools to generate a tabular VCF file:

* http://samtools.sourceforge.net/mpileup.shtml (old version)
* http://www.htslib.org/workflow/#mapping_to_variant (new version)

PS: no, you don't want to have it in an Excel file
PS2: no, you really don't want to have it in an Excel file
PS3: Every time you want to have it in an Excel file, God kills a kitten
PS4: ![](https://i.ibb.co/9qnB04y/image.png)
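A sketch of the modern workflow from those links, with a `bcftools query` step at the end if you truly need a flat table (file names are placeholders):

    bcftools mpileup -f reference.fa sample.bam | bcftools call -mv -Ov -o variants.vcf
    # flatten to tab-separated columns for downstream manipulation
    bcftools query -f '%CHROM\t%POS\t%REF\t%ALT[\t%GT]\n' variants.vcf > variants.tsv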
biostars
{"uid": 112364, "view_count": 4934, "vote_count": 1}
Hi, I have RNA-Seq data from patients, and I want to find novel transcripts that are differentially expressed in the treatment condition. For this, I followed the methodology given in this Galaxy tutorial: https://galaxyproject.org/tutorials/nt_rnaseq/. Here they use HISAT for mapping, followed by StringTie for transcript reconstruction (using StringTie allows them to find novel transcripts), and then use featureCounts for counting the number of reads per transcript. Then they use DESeq2 for differential expression analysis. My question: since StringTie itself also gives an abundance (in terms of coverage, FPKM and TPM) for each transcript it constructs, why should I use featureCounts on these constructed transcripts?
There is absolutely no reason to do that. Actually, you should rather use StringTie's quantification, since it is more accurate than featureCounts; you can read more about such considerations [here][1]. In fact, DESeq2 directly supports analysis of StringTie data via tximport::tximport() as described [here][2]; although they don't mention StringTie in the DESeq2 vignette, tximport does support StringTie as described [here][3].

[1]: https://bioconductor.org/packages/devel/bioc/vignettes/IsoformSwitchAnalyzeR/inst/doc/IsoformSwitchAnalyzeR.html#what-quantification-tools-should-i-use
[2]: https://bioconductor.org/packages/release/bioc/vignettes/DESeq2/inst/doc/DESeq2.html#transcript-abundance-files-and-tximport-input
[3]: http://bioconductor.org/packages/release/bioc/vignettes/tximport/inst/doc/tximport.html#stringtie
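A sketch of that route (assumes StringTie was run per sample with `-e -B` so each sample folder contains a `t_data.ctab`; sample names and paths here are placeholders):

    library(tximport)
    samples <- c("ctrl1", "ctrl2", "treat1", "treat2")
    files <- file.path("ballgown", samples, "t_data.ctab")
    # the ctab file itself provides the transcript-to-gene mapping
    tmp <- read.table(files[1], header = TRUE)
    tx2gene <- tmp[, c("t_name", "gene_name")]
    txi <- tximport(files, type = "stringtie", tx2gene = tx2gene)
    # then hand the counts to DESeq2:
    # dds <- DESeq2::DESeqDataSetFromTximport(txi, colData = coldata, design = ~ condition)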
biostars
{"uid": 394260, "view_count": 3556, "vote_count": 3}
I have a freebayes-genotyped multi-sample VCF file. I want to sort the names of the samples in alphabetical order to make my life a bit easier with post-variant-calling analysis. I have tried Picard SortVcf, which works fine on a small file but failed on a large file (~4 GB). If you could suggest ways to sort a large multi-sample file without distorting the variant information, it would be a great help.
Hey, try this:

    $ bcftools query -l input.vcf | sort > samples.txt
    $ bcftools view -S samples.txt input.vcf > output.vcf

If not already done, I would also suggest using `bcf` instead of `vcf` or `vcf.gz`. This really improves speed when working with `bcftools` on large datasets.

fin swimmer
biostars
{"uid": 410655, "view_count": 4007, "vote_count": 1}
Hello everyone, I want to explore differential expression of a relatively small set of genes (~30). I used this set as the reference for mapping reads: I used Bowtie2 to build an index of these gene sequences, then mapped the transcriptome reads to this reference and got the SAM files. In the SAM file, for each mapped read, there is the gene name to which the read mapped as RNAME (reference sequence name). Now I want to obtain the count matrix to see if there are differentially expressed genes. Looking at HTSeq, I noticed it wants a GFF or GTF file to build this matrix, but since I don't have the whole genome as reference, I cannot use it. Do you have a suggestion on how to build the matrix? Should I use samtools and build a BED file to get the coverage for each gene?
There are a couple of problems with this approach: 1. `bowtie2` is not a splice-aware aligner so reads spanning exon/intron junctions will likely go unmapped. 2. Alignment against a subset of the total genome or transcriptome is problematic as the aligner will always try to find best matches for every read. If the true origin is a gene that is not included, you will get false-positives (maybe even a lot of them). 3. Most differential analysis frameworks rely on normalization based on a large number of genes not changing between conditions, e.g. TMM from `edgeR`. With only 30 genes you will have a hard time doing proper normalization. Don't use RPKM or FPKM as they do not correct for library composition changes, e.g. where a few genes with high counts in one condition "dominate" the sequencing run (=take away reads from other genes leading to fewer counts even though these are not differentially expressed). 4. How do you identify differentially-expressed genes (DEG)? I am pretty sure established tools do not work on such small subsets. I hope you do not plan custom statistics such as pairwise rank sum tests or t-tests? I suggest you align against the entire genome with a splice-aware aligner such as `hisat2` or `star` or use a lightweight quantifier such as `salmon` or `kallisto`. Then use established DEG tools such as `DESeq2`, `edgeR` or `limma` to get all DE genes and check how many of the ones you are interested in are among them.
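For instance, a minimal salmon sketch (transcriptome, sample files and thread count are placeholders; index once, then quantify each sample):

    salmon index -t transcriptome.fa -i salmon_idx
    salmon quant -i salmon_idx -l A \
                 -1 sample1_R1.fastq.gz -2 sample1_R2.fastq.gz \
                 -p 8 -o quant/sample1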
biostars
{"uid": 386812, "view_count": 2548, "vote_count": 1}
I have a few VCF files which contain chromosome number, position, and the variant alleles (REF and ALT). How can I obtain the gene name, its RefSeq ID, and a list of all of its transcripts?

1. Is there any tool which gives this kind of result when you upload VCF data?
2. Is there any module in Biopython or Bioconductor, or a script, for this?
3. Is there any list that links gene names to RefSeq and transcript information?

Thank you for the valuable suggestions
Ensembl BioMart is probably your best bet. You may need to use/write a quick script or awk command to extract your chromosome number and position but you can enter a list of chromosomal positions in the filter menu under regions->multiple chromosomal positions. You can then choose which attributes you would like to download such as Ensembl Gene ID or (under the external references menu) Refseq id. If your list is too long you'll probably have to use the Perl API. Short tutorial available here: http://www.ensembl.org/info/data/biomart/biomart_perl_api.html
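If you would rather stay in R, a biomaRt sketch of the same idea (the `chromosomal_region` filter takes `chr:start:end` strings; the positions below are made up):

    library(biomaRt)
    ensembl <- useEnsembl(biomart = "ensembl", dataset = "hsapiens_gene_ensembl")
    getBM(attributes = c("hgnc_symbol", "refseq_mrna", "ensembl_transcript_id"),
          filters    = "chromosomal_region",
          values     = c("1:155000000:155100000", "2:9977000:9978000"),
          mart       = ensembl)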
biostars
{"uid": 179619, "view_count": 2937, "vote_count": 1}
Hi all, does anyone know how to make a BLAST database from a compressed fasta.gz file? I am using BLAST 2.2.29+. I looked at the makeblastdb tutorial and there is no option for dealing with fasta.gz files. Does that mean I have to unzip the file first and then build the database?
You should be able to uncompress the file and pipe the output to makeblastdb, without an intermediate file. The command would look something like:

    gunzip -c myfastafile.gz | makeblastdb -in stdin <other options>

or

    gunzip -c myfastafile.gz | makeblastdb -in - <other options>
biostars
{"uid": 97466, "view_count": 11049, "vote_count": 7}
Hi all, I am new to bioinformatics and quite a beginner in programming languages. Can anyone suggest some sources where I can learn at least 50% of scRNA-seq data analysis? I am familiar with the C language, and I know a little bit of molecular biology too.
Comprehensive collection of all things single cell (including tutorials) : https://github.com/seandavi/awesome-single-cell
biostars
{"uid": 373386, "view_count": 4322, "vote_count": 7}
Hello dear all, I have a .frq file (an allele-frequency file) generated from a .vcf file with VCFtools; it covers five populations (the VCF file was generated by the Stacks pipeline). The point is, I don't know how I can visualize the allele frequencies. I will be very grateful if anyone can help me solve this issue. Here is a part of my .frq file. Best regards.

    CHROM       POS     N_ALLELES  N_CHR  {ALLELE:FREQ}
    CM003279.1  116397  2          4      A:0.5       C:0.5
    CM003279.1  184945  2          4      C:0.5       G:0.5
    CM003279.1  185823  2          36     C:0.944444  T:0.0555556
    CM003279.1  408595  2          54     T:0.5       A:0.5
    CM003279.1  420946  2          10     A:0.5       T:0.5
    CM003279.1  420946  2          42     T:0.619048  A:0.380952
    CM003279.1  432468  2          4      A:0.5       T:0.5
    CM003279.1  432468  2          4      T:0.5       A:0.5
    CM003279.1  521247  2          6      A:0.5       C:0.5
    CM003279.1  521247  2          4      C:0.5       A:0.5
    CM003279.1  768668  2          46     T:0.978261  A:0.0217391
    CM003279.1  768678  2          50     A:0.84      G:0.16
    CM003279.1  768679  2          50     T:0.64      C:0.36
    CM003279.1  768685  2          50     G:0.84      C:0.16
    CM003279.1  768691  2          50     G:0.84      A:0.16
    CM003279.1  768705  2          50     C:0.84      T:0.16
    CM003279.1  768709  2          54     A:0.814815  T:0.185185
Perhaps you can use the online VEP (Variant Effect Predictor) server from Ensembl; just google it. I don't know whether your VCF file derives from a plant species, an animal species or a prokaryotic species like bacteria. Depending on the species, you can choose the correct VEP Ensembl server. There you can predict variant effects (with SIFT scores) for each SNP position in your input VCF file; you have to choose the correct species of interest when you submit your input VCF. For instance, here is the VEP Ensembl server for plants: https://plants.ensembl.org/Triticum_aestivum/Tools/VEP?db=core You can read about variant effect prediction and annotation on the VEP server here: https://plants.ensembl.org/Help/View?id=484 Hope it may help.
biostars
{"uid": 354626, "view_count": 1765, "vote_count": 1}
Hello, we're getting RNA-seq data from increasing numbers of "emerging" model species for which little is known. The data often wasn't meant to provide population-genetic estimates, but could perhaps be used to provide some.

Is it possible to estimate Ne from the SNP patterns that can be identified from:

- a single sequence pooled from many individuals from a single population?
- a single diploid genome?
- RNA-seq data where multiple individuals from a single population (siblings or not) were independently sequenced?

Cheers, yannick

A somewhat related question, but specific to pools, is [here](http://biostar.stackexchange.com/questions/3683/is-it-possible-to-infer-population-genetics-parameters-like-ne-using-de-novo-sequ).
It seems that people use RNA-seq or exome capture data for demographic inference by using synonymous mutations, thereby getting around the issue of non-synonymous mutations being non-neutral. Therefore, it should be similarly possible to use your RNA-seq data to estimate Ne (your third point above) if you use the synonymous mutations only? For example, see: > Fraïsse C, Roux C, Gagnaire P, Romiguier J, Faivre N, Welch JJ, Bierne N. (2018) The divergence history of European blue mussel species reconstructed from Approximate Bayesian Computation: the effects of sequencing techniques and sampling strategies. PeerJ 6:e5198 https://doi.org/10.7717/peerj.5198
biostars
{"uid": 9819, "view_count": 3204, "vote_count": 1}
I have 1000 FASTA files containing simulated reads, and I want to split each of these 1000 files into separate files (one per chromosome) as I need this for some further analysis. The header lines look like this: `>chr1:startpos:endpos`. I wrote this code (https://gist.github.com/ethanagbaker/6e40c58127b7ca8b9242) in Python, and it works, but it is very slow (it has been running for more than 24 hours on a cluster). This seems like it should be a really quick thing to do. Is there a better way to be doing this? Thanks!
    $ pip install pyfaidx
    $ faidx -x sequences.fa

Or you can use the `Fasta` class and write your own script to do the same thing. Your code is slow because it is opening a bunch of files in a loop, and then opening (the same files?) and reading them completely to get the sequences. Indexed file access will be orders of magnitude faster.
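For the script route, a sketch with the `Fasta` class (assumes Python 3 and headers of the form `chr1:startpos:endpos`, as in your files):

    from pyfaidx import Fasta

    fa = Fasta("sequences.fa")                  # builds or reuses the .fai index
    for name in fa.keys():
        chrom = name.split(":")[0]              # chr1 from chr1:startpos:endpos
        with open(f"{chrom}.fa", "a") as out:   # append so each chromosome file collects all records
            out.write(f">{name}\n{str(fa[name][:])}\n")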
biostars
{"uid": 173723, "view_count": 14900, "vote_count": 3}
Let's say I have a FASTA of a protein sequence:

    >albumin
    MKWVTFISLL FLFSSAYSRG

I want to split the sequence into all possible consecutive 8-mers of amino acids, only in one direction (amino -> carboxyl) and with no wrap-around (I don't know if "looping" is the right expression), i.e. nothing like GMKWVTFIS. I need:

    >fasta.albumin1
    MKWVTFIS
    >fasta.albumin2
    KWVTFISL
    >fasta.albumin3
    WVTFISLL
    ...
    >fasta.albumin13
    FSSAYSRG

And I want to do this for all known human protein sequences. How would I do it? I need the result as a FASTA file (or files), and the IDs of the resulting 8-mer sequences need to be unique.
Using [bioalcidaejdk][1]:

    $ java -jar dist/bioalcidaejdk.jar -e 'final int N=8;stream().filter(F->F.length()>=N).forEach(F->IntStream.range(0,F.length()-N+1).forEach(L->{out.println(">"+F.getName()+"."+(L+1)+"\n"+F.subSequence(L,L+N));}));' input.fasta

The `F.length()-N+1` upper bound ensures the final 8-mer is emitted as well.

[1]: http://lindenb.github.io/jvarkit/BioAlcidaeJdk.html
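If you'd rather avoid Java, an awk sketch of the same sliding window (handles multi-line FASTA and strips the spaces shown in your example):

    awk -v k=8 '
        function emit(  i) { for (i = 1; i <= length(seq) - k + 1; i++)
                                 print ">" name "." i "\n" substr(seq, i, k) }
        /^>/ { if (seq != "") emit(); name = substr($1, 2); seq = ""; next }
             { gsub(/[ \t]/, ""); seq = seq $0 }
        END  { if (seq != "") emit() }' proteins.fa > 8mers.fa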
biostars
{"uid": 297681, "view_count": 2907, "vote_count": 2}
I know that `blastp` has a `-remote` option, which allows you to run your query on NCBI servers (see [here](https://www.ncbi.nlm.nih.gov/books/NBK279680/)). Is there a way I can set up my own remote server that could be queried using `blastp` and would run queries against my own custom database? I am running lots of `blastp` queries on a cluster of machines, and I think it would be much faster to delegate this work to a single machine that could use a local BLAST database, instead of querying a database stored on a distributed file system.
Check the [documentation for the NCBI BLAST AMI][1]. While this talks about enabling remote BLAST on an AMI, the idea should be similar for a local implementation.

[1]: https://ncbi.github.io/blast-cloud/doc/setting-up-blast.html
biostars
{"uid": 281272, "view_count": 1970, "vote_count": 1}
I have read many explanations of LD but I'm still not comfortable with them. https://www.youtube.com/watch?v=iH8b-5BxtuY&list=WL&index=7 https://www.youtube.com/watch?v=DvrAuMyu4wU As you can see in these videos, most of them explain LD as if it were about the relation between the genotype frequency of a parental cell and the gametes made from it. But if so, I would need the genotype frequency of the parental cell and the frequencies of all the gametes made, which is impossible. Or is it the relation between the allele frequencies at each locus (such as 0.6 for A and 0.4 for a at one locus, and 0.6 for B and 0.4 for b at another) in the parental population and the combined haplotype frequencies in the offspring population (such as 0.36 for AB and 0.16 for ab)? If so, isn't it also impossible to get the frequencies of the parental population? I suppose I'm not getting it right; I'm so confused. For my situation, if I look at the VCF file I made, I have genotypes of 0|0, 1|0, 0|1 and 1|1 for each SNP. For simplicity, if there are two samples (A and B) and I want to see how linked two SNP positions 1 and 2 are, how do I find out? Let's say sample A has genotype 0|0 at SNP position 1 and 1|0 at position 2, and sample B has genotype 0|1 at position 1 and 1|1 at position 2. Is it possible to calculate the linkage relationship, or are other values required? Please help me.
In the beginning was the Gene. As originally envisaged, genes were atomic (i.e. indivisible) and **inherited independently**. That means if gene 1 has alleles A and a, and gene 2 has alleles B and b, then the allele you inherit for gene 1 should not depend on the allele you inherit for gene 2. But this isn't true, because in physical reality genes are linked to each other on chromosomes. In the case of a single cross, this clearly isn't true. Consider the following parents:

Mother:

    chromosome copy 1: ----A--------B-----
    chromosome copy 2: ----A--------B-----

Father:

    chromosome copy 1: ----a--------b-----
    chromosome copy 2: ----A--------B-----

Possible offspring:

    ----A--------B-----        ----A--------B-----
    ----A--------B-----   or   ----a--------b-----

Offspring from this cross will always inherit an A and a B from the mother, but from the father 50% will inherit A and 50% a, and 50% B and 50% b. Under independent inheritance, the genotypes AABB, AaBB, AABb and AaBb should be equally likely (you can do a Punnett square to check). But that isn't the case, because the offspring inherits either chromosome copy 1 or chromosome copy 2 from the father, so the only possible genotypes for the offspring are AABB (if the offspring inherits copy 2 from the father) and AaBb (if the offspring inherits copy 1 from the father). This is the phenomenon of *linkage*.

All of this assumes that recombination won't form a `------A-------b-----` chromosome in the father. While this is unlikely to happen in one cross, over a population and across evolutionary time the association of a with b and of A with B will break down, and you will get lots of `-----A--------b----` and `-----a------B----` chromosomes.

When the probability of being heterozygous at both loci is equal to the probability of being heterozygous at gene 1 multiplied by the probability of being heterozygous at gene 2, the loci are said to be in *linkage equilibrium*. If this is not the case, and the probability of having b rather than B at one locus depends on whether you have a rather than A at the other locus, then the loci are said to be in *linkage disequilibrium*. At its extreme, for two loci in complete LD, if you tell me the allele at locus 1, I can tell you the allele at locus 2.

EDIT, after rereading the question:

1. What I've drawn above are haplotypes, not genotypes. In your question you write 0|1, which suggests your data is phased. Unphased data would normally be written 1/0. So if your phased genotypes are 0|1, 0|1, then your haplotypes are 00 and 11.
2. LD is a property of a population, not of a single individual, or even two individuals; you can only calculate LD if you have the genotypes of a large population. However, linkage is not the same as linkage disequilibrium, and you can calculate linkage from a collection of parents and their offspring (how often do you see a haplotype in the offspring that is not present in the parents?). But no one really calculates linkage any more, since genome sequences made genetic mapping unnecessary.
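To make "a property of a population" concrete, here is a toy R calculation of D and r² from a set of phased haplotypes (each row is one chromosome copy observed in the population; alleles are coded 0/1 and the numbers are made up):

    hap <- matrix(c(0,0, 1,1, 0,0, 1,1, 0,1), ncol = 2, byrow = TRUE)
    pA  <- mean(hap[, 1])                        # frequency of allele 1 at locus 1
    pB  <- mean(hap[, 2])                        # frequency of allele 1 at locus 2
    pAB <- mean(hap[, 1] == 1 & hap[, 2] == 1)   # frequency of the 1-1 haplotype
    D   <- pAB - pA * pB                         # disequilibrium coefficient
    r2  <- D^2 / (pA * (1 - pA) * pB * (1 - pB)) # the usual r^2 measure of LD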
biostars
{"uid": 9488499, "view_count": 2185, "vote_count": 1}
I want to uninstall the *Bioconductor* [affycoretools package](https://www.bioconductor.org/packages/release/bioc/html/affycoretools.html) and reinstall it. Can anyone help me?
Go to the **Packages** tab in the bottom-right corner of RStudio, search for the package name and click on the adjacent **X** icon to remove it.

![RStudio Screenshot][1]

[1]: https://image.ibb.co/j4KZjG/test.png

Reinstall the package from Bioconductor/CRAN:

    ## Bioconductor
    source("https://bioconductor.org/biocLite.R")
    biocLite("affycoretools")
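A console-only sketch of the same remove-and-reinstall cycle; note that `biocLite` has since been superseded by BiocManager, so on a current R installation it would be:

    remove.packages("affycoretools")
    if (!requireNamespace("BiocManager", quietly = TRUE))
        install.packages("BiocManager")
    BiocManager::install("affycoretools")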
biostars
{"uid": 295628, "view_count": 178054, "vote_count": 2}
Hello everyone, today I spent 4 hours on a Python script for executing a shell command constructed from user inputs within the script, and it was still unsatisfactory compared to what I wanted. I achieved the same thing in R in under 30 minutes. I am under the impression that one of the scripting languages is very important. I use R, shell and terminal utilities for most of my tasks. So, the argument would be that I haven't yet come across a task in exploratory next-generation sequencing data analysis where you specifically need a scripting language like Perl/Python or Ruby. Thus, if one is not writing new tools from scratch, how important is it to learn any of the scripting languages? I used Perl at the start and was then influenced by Python co-workers telling me that Python is better and more extensive, and now I am lost in both. But I am confident in R. Could someone also elaborate on the part of the work where you cannot move without a scripting language? One can also argue about the audience (Perl, Python and R users at the consumer end) if you make software, but what if the results and tasks are for oneself, or it's just a matter of taste? Thanks for your input.
From a certain point of view you are right: you don't need to learn Python or Perl programming, and you can do a lot of work just in R. CRAN and Bioconductor contain a lot of libraries, and R allows you to create good plots, which is one of the most important skills for a bioinformatician. I know people who mostly use only R, and they are very good at their work, so you don't have to be afraid if you don't want to learn other programming languages. However, consider that learning at least the basics of other languages does not take too much time, adds skills to your curriculum, and lets you learn new approaches to programming. R is a language for data analysis, centered on the data.frame structure, and is suited mostly to analyzing data organized in tabular form. That's a good paradigm for analyzing data, but as soon as you face other problems R becomes clunkier and less useful. Web programming is an example, but data manipulation (converting one format to another, automating tasks on the shell, etc.) is also important. If you know the philosophy under which other programming languages were developed, and what tasks they are typically used for, your programming skills will be stronger, and you will be able to choose among a larger range of approaches when you have to solve a given programming problem.
biostars
{"uid": 42939, "view_count": 25731, "vote_count": 8}
Hello! I'm sorry if this question comes off as naive or ignorant, because I'm very new to bioinformatics. I'm trying to do an alignment with STAR and was wondering if I could access a pre-made STAR index for the mm10 genome. I was told I could get this from UCSC but have had no luck finding it there. So my question is: are there pre-made STAR index files for the mm10 genome that I could download? And if so, where and how? Thanks in advance for any help, and I'm sorry to ask such a trivial question! Let me know if there's any more detail I can give!
You can generate indexes yourself easily enough. Follow the directions here: https://www.biostars.org/p/198315/ . MM10 genome [from UCSC is here][1]. @Alex has some pre-made indexes available at [STAR Genomes site.][2] There does not appear to be a UCSC version of Mouse but there is [Gencode Mouse][3] which you can use. [1]: http://hgdownload.soe.ucsc.edu/goldenPath/mm10/bigZips/chromFa.tar.gz [2]: http://labshare.cshl.edu/shares/gingeraslab/www-data/dobin/STAR/STARgenomes/ [3]: http://labshare.cshl.edu/shares/gingeraslab/www-data/dobin/STAR/STARgenomes/GENCODE/GRCm38_GencodeM8/
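If you do build the index yourself, a sketch of the genomeGenerate step (FASTA/GTF paths and thread count are placeholders; `--sjdbOverhang` should be your read length minus 1):

    STAR --runMode genomeGenerate --genomeDir mm10_star_idx \
         --genomeFastaFiles mm10.fa \
         --sjdbGTFfile gencode.vM8.annotation.gtf \
         --sjdbOverhang 99 --runThreadN 8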
biostars
{"uid": 221781, "view_count": 22003, "vote_count": 2}
Hello, I am running GATK 3.8 on a Linux cluster. I am trying to run the tool VariantFiltration with the following commands:

    java -jar $GATK/GenomeAnalysisTK.jar \
        -T VariantFiltration \
        -R $ref \
        -V $wd/vcf/filtered/stick84_SNPs_GATK.vcf \
        --filterExpression "QD < 2.0" --filterName QD2 \
        --filterExpression "FS > 60.0" --filterName FS60 \
        --filterExpression "MQ < 40.0" --filterName MQ40 \
        --filterExpression "MQRankSum < -12.5" --filterName MQRS-12.5 \
        --filterExpression "ReadPosRankSum < -8.0" --filterName RPRS-8 \
        -o $wd/vcf/filtered/stick84_SNPs_filtered_GATK.vcf

However, it is throwing me this error:

    ERROR MESSAGE: For input string: "nan"

I don't know what this error means and how to fix it. All the potential help I find online sends me to ghost pages of GATK that I cannot access :( Please, any help will be greatly appreciated! Thank you and take care!
Ah, I've found out the reason for that here: https://github.com/broadinstitute/gatk/issues/5582. I hope this helps others as lost as me! I did not realize the INFO fields could have "NaN" values. Solution to fix is: bcftools view in.vcf.gz | sed 's/=nan/=NaN/g' | bgzip > fixed.vcf.gz
biostars
{"uid": 461995, "view_count": 935, "vote_count": 1}
There exists a lot of literature on distinguishing driver mutations from passengers. I am trying to build my own deep-learning model to do the same, but I am facing some potential issues. First, I have downloaded COSMIC mutation data and used the FATHMM labels to designate drivers (positive examples) in my dataset. I am sceptical about using passengers from COSMIC as they may be false negatives, so I turned to the 1000 Genomes Project to download SNVs (to construct my negative examples). I am unsure if this is correct; however, I have seen some papers do the same. Do I need to apply any filters to the 1000 Genomes SNV data to construct the final dataset? One such paper talks of using SNVs with a global minor allele frequency ≤ 1%.
You'll first need to clarify what exactly you mean by driver mutation vs passenger mutation. 1) Are you concerned with somatic mutations or germline mutations (most often it is somatic if you say "driver mutation")? 2) Are you concerned mostly with protein-coding mutations or non-coding mutations? 1) If you are interested in somatic driver mutations, I highly recommend that you do NOT use germline variants from sources such as the 1000 Genomes Project as "passengers". Germline variants are systematically different from somatic mutations because they undergo substantially more negative selection than somatic mutations do (PMID: 19654296). If you are interested in "pathogenic" germline variants that relate to cancer, then using high-allele-frequency germline variants as "passengers" is an OK but not perfect solution (see Kevin's comments). 2) If you are interested in protein-coding driver mutations, I highly recommend that you do not develop a method that also tries to predict non-coding driver mutations. The recent PCAWG papers have highlighted that most of the somatic driver mutations in cancer indeed happen in protein-coding regions. Moreover, non-coding prediction methods applied to protein-coding regions generally don't fare well in benchmarks (see PMID: 32079540). The top methods in that recent benchmark at predicting driver mutations were CHASM, CTAT-cancer (see my paper, PMID: 29625053), DEOGEN2 and PrimateAI. I recently released CHASMplus (PMID: 31202631), which is a substantial improvement over the top-performing CHASM at predicting driver mutations, and my training labels are available (https://chasmplus.readthedocs.io/en/latest/faq.html, see "Where can I obtain the training data for CHASMplus?").
biostars
{"uid": 425580, "view_count": 831, "vote_count": 1}
Hi all, can anyone recommend me a tool/package which allows me to create Venn diagrams of up to 11 sets? The packages which I have found so far support fewer sets only. Many thanks!
This is not an answer, and you may be upset :-) In my opinion, it is not practically useful to draw a Venn diagram for more than 4 or 5 sets. Beyond that, it is horribly difficult for the reader to look at it and understand what it conveys. It is also difficult for the authors themselves to explain the result. So, if possible, divide the sets into smaller groups. I beg you, as a reader.
biostars
{"uid": 143110, "view_count": 12415, "vote_count": 1}
I have a long list of complete bacterial organism names (more than 100,000, so it is impossible to search and download them line by line). The format is one name per line. I need to download the GCA FASTA files (it must be GCA instead of GCF) of the corresponding genomes from https://www.ncbi.nlm.nih.gov/genome/browse/ (with assembly level "Complete"). I have to achieve this through the command line. How can I do it efficiently? Thank you.
    cat species.txt
    Porphyromonas levii
    Porphyromonas somerae

    wget ftp://ftp.ncbi.nlm.nih.gov/genomes/genbank/bacteria/assembly_summary.txt

    IFS=$'\n'; for next in $(cat species.txt); do
        awk -v SPECIES=^"$next" 'BEGIN{FS="\t"}{if($8 ~ SPECIES && $12=="Complete Genome"){print $20}}' assembly_summary.txt \
        | awk 'BEGIN{OFS=FS="/"}{print "wget "$0,$NF"_genomic.fna.gz"}'; done \
        | sh

NOTE: only 8,413 bacterial genomes have "Complete Genome" assembly-level status (not even 10% of your list of names). For example, nothing will be downloaded in the example shown above. Do you really need to limit yourself to such a small subset?

     1577 Chromosome
     8413 Complete Genome
    52594 Contig
    54565 Scaffold
biostars
{"uid": 280165, "view_count": 3638, "vote_count": 1}
I need to do some statistical calculations on CNV data that are publicly available on the TCGA website. I am from a computer-science background and have no idea about the meaning of these data. I tried searching for the meaning of the files and how they are structured, but nothing was helpful. It would be nice if someone could give a short overview of the meaning of the data in each column. A sample of the data present in a CNV file:

    Sample                                                     Chromosome  Start   End      Num_Probes  Segment_Mean
    DEBUT_p_TCGAb45_81_wRedosSNP_N_GenomeWideSNP_6_G03_729530  1           61735   415164   28          -0.0504
    DEBUT_p_TCGAb45_81_wRedosSNP_N_GenomeWideSNP_6_G03_729530  1           462793  629241   4           1.822
    DEBUT_p_TCGAb45_81_wRedosSNP_N_GenomeWideSNP_6_G03_729530  1           668210  2138242  350         -0.0311

So I want to know what 'Sample' means and how samples are named; the sample name always seems to be long. I understand the columns Chromosome, Start and End, but I don't understand the meaning of Num_Probes and Segment_Mean.
Sample should be the unique sample identifier for this SNP-array experiment; it corresponds to a unique sample run by TCGA and can be a normal or a tumor sample. It will map to a unique ID like this: https://wiki.nci.nih.gov/display/TCGA/TCGA+barcode The rows correspond to contiguous chunks along the chromosome with the same DNA copy number. "Chromosome" is the chromosome, which can be 1-22, X or Y (see human genome). Start is the physical start location for the segment along said linear chromosome; End is the end coordinate. Num_Probes is the number of SNP-array probes falling within the segment (these were used to call copy numbers). Segment_Mean is the estimated copy number for that particular segment. So this sample has (log2) copy number -0.0504 on chromosome 1 from bases 61735 to 415164; at 462793 along the same chromosome the copy number changes to 1.822 (i.e. a genomic gain) and this segment ends at 629241. The next segment for which you have information is 668210 to 2138242, where the copy number has returned to a basically normal level. These segments should keep coming for chr1 until about 249M, after which the chr2 segments should start. The data should basically be log2( intensity of sample / intensity of reference with normal copy number ), which means that segments with a normal diploid copy number are around log2( 2/2 ) = 0, single-copy losses are at log2( 1/2 ) = -1, and homozygous deletions at log2( 0/2 ) = -Inf; correspondingly, gains go from log2( 3/2 ) upward, with experimental noise leading to deviations from these discrete levels. For some light reading about the concept of copy-number analysis, check out these old papers: https://www.ncbi.nlm.nih.gov/pubmed/16899659 and https://genomebiology.biomedcentral.com/articles/10.1186/gb-2008-9-9-r136 and maybe http://bmcgenomics.biomedcentral.com/articles/10.1186/1471-2164-8-382
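To turn a segment mean back into an absolute copy-number estimate, just invert the log2 ratio; an R sketch using the three rows above:

    seg_mean <- c(-0.0504, 1.822, -0.0311)
    copies   <- 2 * 2^seg_mean   # ~1.93, ~7.07 and ~1.96 copies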
biostars
{"uid": 244374, "view_count": 7897, "vote_count": 4}
Hello everyone! I am trying to install MUMmer 3.9.4 on Ubuntu, following the instructions given here: https://github.com/gmarcais/mummer/blob/master/INSTALL.md When I try to run nucmer, I get this error message: > nucmer: error while loading shared libraries: libumdmummer.so.0: > cannot open shared object file: No such file or directory I have tried to look it up online, but can't seem to find anything. Any help would be greatly appreciated! Nancy
@nancydong20 I had the same problem as you. These steps helped:

1. This is critical: uninstall the existing MUMmer with the Synaptic Package Manager. (If you install with `sudo apt-get install mummer` you get v3.22 and it is still called by any software which you use. This is why you cannot use the option *CPPFLAGS="-O3 -DSIXTYFOURBITS"* to install 64-bit MUMmer.)
2. Download the stable MUMmer distro 3.23 from https://github.com/gmarcais/mummer
3. Extract the package
4. Install MUMmer with this code:

        cd MUMmer3.23
        mkdir aux_bin
        make check
        make install CPPFLAGS="-O3 -DSIXTYFOURBITS"
        mummerpath=`pwd`
        cd ..
        export PATH=$mummerpath:$PATH

In the end you will get:

- (maximum reference length is 2305843009213693948)
- (maximum query length is 18446744073709551615)
- process 6420479 characters per dot

My OS is Ubuntu 14, 64-bit. Hope it helps.
biostars
{"uid": 239590, "view_count": 4421, "vote_count": 1}
Question is in the title. Imagine you have many bigWig tracks and you want to display the same scale for all of them, but you don't want to change viewLimits one by one. Is there a way to do that in one shot for all of them, maybe using session files or supertrack/multiWig/composite arguments?
You can check the "Oct. 11, 2019 New "group auto-scale" option" [here][1]. Example track file for group of tracks: [1]: http://genome.ucsc.edu/goldenPath/newsarch.html#101119 bigWig track myComposite compositeTrack on shortLabel test longLabel my_comp_testing type bigWig track myFirstSubtrack parent myComposite on shortLabel test_1_short longLabel tes_1_long type bigWig visibility full maxHeightPixels 70:70:32 configurable on autoScale off viewLimits 0.0:1.0 alwaysZero on priority 0.1 bigDataUrl file1.bw color 0,0,0 track mySecondSubtrack parent myComposite on shortLabel test_2_short longLabel tes_2_long type bigWig visibility full maxHeightPixels 70:70:32 configurable on autoScale off viewLimits 0.0:1.0 alwaysZero on priority 0.1 bigDataUrl file2.bw color 0,0,255 ```
biostars
{"uid": 222669, "view_count": 4306, "vote_count": 1}
So I'm trying to integrate two datasets together for 10 different samples (taken in different batches). To mitigate the batch effect I looked into Harmony, but I was not sure how to apply Harmony to the two datasets and still get fragment files for later analysis in ArchR. Any suggestions or pointers?
`ArchR` has built-in support for Harmony correction; ``` projHeme2 <- addHarmony( ArchRProj = projHeme2, reducedDims = "IterativeLSI", name = "Harmony", groupBy = "dataset" ) ``` https://www.archrproject.com/bookdown/batch-effect-correction-wtih-harmony.html
biostars
{"uid": 9534124, "view_count": 552, "vote_count": 1}
Two map/ped files refused to merge for some reason. I followed the method here (https://www.biostars.org/p/56640/ by [ff.cc.cc][1]) to reduce the datasets to the same list of sites. I used

    plink --bfile Deven --bmerge Harvard.fam Harvard.bim Harvard.bed --merge-mode 6 --out mismatchB

to identify the sites that had flipped genotypes, and I straightened them out using

    plink --bfile Harvard --flip mismatchB.missnp --recode --make-bed --out HarvardFlipped

I converted them to map/ped from the binary format, and when I tried to merge them I got an inexplicable MAP file error:

    ERROR: Problem with MAP file line:
    HGDP00001 HGDP00001 0 0 1 -9 G T A A C C G G A G G G A A C T G G A G T T T T G G C C C C A A T C C C C C G G G G T T T T G G A G T T C C A A A G C T A G C C G G A A G G C C T C C T A G A G G G G G A A T C T T T T G G G G A G G G C C T C G A G G G G C C T

I looked at the map and ped files and everything seemed fine. Any clue what is going on? I know that the Harvard dataset originally has coordinates using hg18 and the Deven dataset has coordinates using hg19 (but the rs IDs come from the same annotation file, which provided the conversion of Affy numbers to dbSNP numbers).

[1]: https://www.biostars.org/u/4466/
UPDATE: I merged them as binary files, and then converted them to map/ped, and it worked... You can all probably ignore this.
biostars
{"uid": 147781, "view_count": 4288, "vote_count": 2}
Hi, easy question here. I split up my VCF file by chromosome to try and save some time. I now want to combine the files to have one complete VCF file with all the chromosomes. Do I want to merge the files, or do I want to concatenate them? I ran the command `cat *.vcf > cat.chr1.chr2.vcf` on just two of the chromosomes, and when I looked at the output there was only data for the first chromosome and nothing for the second. Am I doing something wrong here? I could also try vcftools if this `cat` command won't work for what I am trying to do.
First alternative: use one VCF file to get the header, then concatenate all the VCFs without their headers (this was your error):

    grep '^#' chr1.vcf > merge.vcf
    grep -v '^#' chr1.vcf chr2.vcf chr3.vcf chr4.vcf >> merge.vcf

Second alternative: use Picard GatherVcfs (http://broadinstitute.github.io/picard/command-line-overview.html), which is going to check the headers, order, etc.:

    java -jar picard.jar GatherVcfs I=chr1.vcf I=chr2.vcf I=chr3.vcf I=chr4.vcf O=merged.vcf
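A third alternative, if bcftools is already in your toolkit, is `bcftools concat`, which is built for exactly this situation (the inputs must share the same samples and header):

    bcftools concat chr1.vcf chr2.vcf chr3.vcf chr4.vcf -Ov -o merged.vcf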
biostars
{"uid": 321396, "view_count": 11271, "vote_count": 2}
Good morning, I want to analyse a public Short Read Archive dataset from an RNA-seq experiment. I am trying to quantify it using Sailfish, and I need to provide the library type as an argument: `-l "<libtype>"`. But I only know that my data are paired-end (dataset GSE89063). How can I know what to write for this argument? Thanks a lot, have a good day.
Hi, you can also determine the strand specificity yourself by performing a few checks; please see the link below. http://onetipperday.sterding.com/2012/07/how-to-tell-which-library-type-to-use.html Hope it works for you.
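A common concrete check from that post is RSeQC's infer_experiment.py; a sketch, assuming you have an aligned BAM and a BED file of gene models for the same genome build:

    infer_experiment.py -i aligned.bam -r gene_models.bed
    # the reported strand fractions tell you whether the library is unstranded
    # or stranded, which maps onto the Sailfish/Salmon library-type codes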
biostars
{"uid": 324417, "view_count": 1815, "vote_count": 1}
Hello everyone, I've tried to work with the TCGA DNA methylation data, but I'm having trouble understanding these data. The old TCGA website no longer works for getting the biological data; the GDC portal (https://gdc-portal.nci.nih.gov/) has replaced it. I was able to get clinical, mRNA and miRNA data from that site; however, I didn't find the DNA methylation data. Is there no DNA methylation data in that portal? Fortunately, I found another site, the Cancer Genomics Browser (https://genome-cancer.ucsc.edu/proj/site/hgHeatmap/), where I was able to get the DNA methylation data for breast cancer (HumanMethylation450). There are multiple files in the DNA methylation bundle. The methylation levels are in the "genomicMatrix.txt" file, in which each sample has a beta-value per probe; the probe.txt file contains the correspondence between probes and genes. Here is a small example of the genomicMatrix and probe files.

genomicMatrix:

    sample      TCGA-OL-A66H-01  TCGA-3C-AALK-01  TCGA-AC-A5EH-01
    cg13332474  -0.4808          -0.2968          -0.1997
    cg00651829  -0.4821          -0.2110          -0.4108
    cg17027195  -0.4633          -0.4250          -0.4667
    cg09868354  -0.4345          -0.3630          -0.4230
    cg03050183  -0.4252          -0.3749           0.1269
    cg01989731  NA               NA               NA
    cg06819656   0.4028           0.3047           0.3755
    cg04244851   0.4398           0.3894           0.2533
    cg19669385  -0.1353           0.3650           0.0664
    cg04244855   0.4292           0.4008           0.2468
    cg17689707  -0.4842           0.0109          -0.2484
    cg04244857  -0.0918           0.2731          -0.0084
    cg02434381  -0.4443          -0.4273          -0.4175
    cg05777492  -0.4595          -0.4780          -0.4786
    cg23340034   0.0933           0.3611           0.4120
    cg26361545   0.4339           0.4389           0.4348
    cg10609310   0.2913           0.0337          -0.1307

When looking at the genomicMatrix.txt file I see several negative and NA values; I thought of disregarding them. Among the positive values, I do not find any value above 0.8, i.e. no hypermethylation values. Why?

Probe:

    id          gene                chrom  chromStart  chromEnd   strand
    cg00035864  TTTY18              chrY   8613009     8613010    .
    cg13275322  WAS                 chrX   48426764    48426765   .
    cg13798679                      chr1   36390157    36390158   .
    cg13799227                      chr1   226719204   226719205  .
    cg13799302  CYP2J2              chr1   60164980    60164981   .
    cg13799671  CD58                chr1   116881090   116881091  .
    cg13805052  MORN1,LOC100129534  chr1   2272923     2272924    .

Here I consider only genes that are not on chromosomes X and Y. I noticed that some probes are associated with more than one gene; in that case I thought of taking the median of the methylation values to obtain a final gene-level methylation value. I was thinking of converting these files into a single file with the following layout:

    gene | beta value | sample
    CD58 | 0.4        | TCGA-OL-A66H-01

Please, can someone help me with these questions? Thank you for your attention.
It turned out one can filter the legacy archive from GDC and get back the DNAme data, e.g., https://gdc-portal.nci.nih.gov/legacy-archive/search/f?filters=%7B%22op%22:%22and%22,%22content%22:%5B%7B%22op%22:%22in%22,%22content%22:%7B%22field%22:%22files.data_category%22,%22value%22:%5B%22DNA%20methylation%22%5D%7D%7D,%7B%22op%22:%22in%22,%22content%22:%7B%22field%22:%22files.data_type%22,%22value%22:%5B%22Methylation%20beta%20value%22%5D%7D%7D%5D%7D Beta values should be from 0 to 1. My guess is that the genomicMatrix file you are looking at is zero-centered. So the data is from -0.5 to 0.5 with the methylated and unmethylated peaks at around 0.4 and -0.4.
biostars
{"uid": 203805, "view_count": 3129, "vote_count": 2}
Dear all, I have FASTQ files, and at the start of my reads there is a 7-nucleotide tag. I would like to extract the reads carrying this specific tag: search within the first 15 nucleotides of each read and, if it matches, write that read to a new FASTQ file. Thank you for any ideas or help.
In order to stay old-school, you can use the FastX toolkit's [barcode splitter][1]. You need a text file with your tag (let it be myTag.txt) and then you can run (with N mismatches):

    cat sequence.fastq | /usr/local/bin/fastx_barcode_splitter.pl -Q33 --bcfile myTag.txt --bol --mismatches $N --prefix out --suffix .txt

To remove the tag sequence afterwards, you can use fastx_trimmer from the same toolkit. When using the FastX tools, do not forget to add the -Q33 option.

[1]: http://hannonlab.cshl.edu/fastx_toolkit/commandline.html#fastx_barcode_splitter_usage
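For reference, the `--bcfile` expects one barcode per line: an identifier, a tab, then the barcode sequence (lines starting with `#` are comments). A minimal myTag.txt for a single 7-nt tag might look like this - the name TAG1 and the sequence are placeholders, not values from the question:

    #name	barcode
    TAG1	ACGTACG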
biostars
{"uid": 141203, "view_count": 10662, "vote_count": 2}
I want to store a docker image with the input data, code, software, intermediate results and results. This will be a massive file. Is there something like the sequence archive for people who want to upload a docker image of their whole study? If not, what is the best/cheapest way to store such a file? Any related thoughts not directly answering the question are also welcome.
Here are a few options that come to my mind:

- Host the data on a public repository, for example [synapse][1], which has been created for this exact purpose. You can keep the data private during the publication process and then release it afterwards. Hosting is free of charge.
- As previously noted, the docker image should be small. In the docker image, download the data from synapse or wherever you host the data, and pull the code from github or bitbucket.
- If you really need to host large datasets, bitbucket advertises no hard space limits (although very large repositories may still run into practical limits).
- I remember some bioinformatics-related journals advertising that they have no limits on the size of posted supplementary materials. This is to encourage people to upload all the relevant data and facilitate reproducibility. However, right now I don't remember the names of the journals using this policy. Sorry :-)

[1]: https://www.synapse.org/
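As a sketch of the second point: the image stays small by fetching data and code when it runs. The repository URL and the Synapse ID below are placeholders; the `synapse` command comes from the synapseclient package:

```bash
# Install the Synapse command-line client (part of synapseclient)
pip install synapseclient

# Pull the code from a public repository (placeholder URL)
git clone https://github.com/user/project.git

# Download the hosted dataset by its Synapse ID (placeholder ID)
synapse get syn1234567
```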
biostars
{"uid": 206026, "view_count": 1988, "vote_count": 5}
Hello everyone! Today I was trying to check whether all SNPs in my list (thousands of SNPs, GRCh37) belong to a particular chromosome (let's say chr 19), but I had some issues with it. I am doing everything in R, and my strategy was this:

1. Prepare the list of SNPs as a file (all IDs start with rs...)
2. Download human chromosome 19 from dbSNP
3. Search for the SNPs in my list within the downloaded chromosome

However, the download of chr 19 has been running for many hours already, and the fastest download option doesn't give the proper information for the analysis. I would love to know whether there is another way to check, in R, whether a list of SNPs belongs to a particular chromosome. Thanks in advance!
Using *biomaRt*:

```r
# set up
library(biomaRt)
snpmart = useEnsembl(biomart = "snp", dataset = "hsapiens_snp")

# Other useful functions:
# listAttributes(snpmart)
# listFilters(snpmart)
```

Get the SNPs that match our list of IDs; here we have 2 SNPs from chr1 and 1 SNP from chr2:

```r
getBM(attributes = c("refsnp_id", "chr_name", "chrom_start", "chrom_end"),
      filters = c("snp_filter"),
      values = list(c("rs17599629", "rs1218582", "rs11902236")),
      mart = snpmart)
#    refsnp_id chr_name chrom_start chrom_end
# 1 rs11902236        2     9977740   9977740
# 2  rs1218582        1   154861707 154861707
# 3 rs17599629        1   150685811 150685811
```

Now, to return only the ones which are on chr2, we can add the `"chr_name"` filter:

```r
getBM(attributes = c("refsnp_id", "chr_name", "chrom_start", "chrom_end"),
      filters = c("chr_name", "snp_filter"),
      values = list(c(2), c("rs17599629", "rs1218582", "rs11902236")),
      mart = snpmart)
#    refsnp_id chr_name chrom_start chrom_end
# 1 rs11902236        2     9977740   9977740
```
biostars
{"uid": 470922, "view_count": 501, "vote_count": 1}
Say I have two VCFs with 100 samples in each file. Each VCF was joint-called separately, and now I want to merge the variant calls. Do I need to left-align the INDELs in the merged VCF? I've used `bcftools norm` in the past and got odd results; it seems that `vt` is a better tool for this. Is left-aligning only useful for common variants? If I'm interested in rare variants (<0.5% AF), would left-alignment actually matter? Thanks

Here's an example of bcftools norm.

Original VCF:

    chr7 157009949 . AGCGGCGGCGGCG AGCGGCGGCGGCGGCGGCG,A,AGCGGCGGCGGCGGCGGCGGCG,AGCGGCGGCG,AGCGGCGGCGGCGGCGGCGGCGGCG,AGCGGCGGCGGCGGCG

Left-aligned VCF (with multiallelics split into biallelic calls):

    chr7 157009949 . A AGCGGCGGCG
    chr7 157009949 . A AGCGGCGGCGGCG
    chr7 157009949 . A AGCGGCG
    chr7 157009949 . A AGCG
    chr7 157009949 . A AGCGGCGGCGGCGGCG
    chr7 157009949 . AGCG A
    chr7 157009949 . AGCGGCG A
    chr7 157009949 . AGCGGCGGCGGCG A
Suppose one of your VCF files has a non-left-aligned insertion, represented as REF=AG, ALT=AGT, starting at position 99999, and another file has an insertion represented as REF=G, ALT=GT, starting at position 100000. If you don't left-align, these may not be recognized as the same variant, and downstream analysis will suffer.
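For completeness, a typical normalization call looks like the sketch below. The file names are placeholders; `-f` supplies the reference FASTA used for left-alignment, and `-m -any` splits multiallelic records into biallelics:

```bash
bcftools norm -f hg19.fasta -m -any merged.vcf.gz -Oz -o merged.norm.vcf.gz
```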
biostars
{"uid": 320584, "view_count": 4279, "vote_count": 2}
Dear all, for a list of Structural Variants (including deletions, duplications, inversions, translocations), either in VCF or BEDPE format, we would like to have the gene annotations and the lists of the following sets of genes:

- fusions (if both breakpoints are in exons, introns, or UTRs)
- truncations (if only one breakpoint is in an exon, intron, or UTR, and the other breakpoint is in an intergenic area)
- the genes in the areas that are deleted, duplicated, or inverted

Although I wrote some scripts in Perl based on Annovar, I thought that we might be able to get all these annotations with a package that is already available? Thanks a lot, -- bogdan
SVs are problematic for many pipelines/software as, unlike SNVs and small indels, each event involves at least two genomic loci. Be aware that not all callers correctly classify events. Many callers will classify events purely on their break-end position and orientation. This results in deletion calls even when there is no copy number change to support the event (most callers), or an inversion calls even when only one of the two inversion breakpoints actually exist (e.g. DELLY). For simple germline analysis this is probably ok, and you can just ignore all large or inter-chromosomal events but for highly rearranged genomes (eg cancer), things are much more complicated. > thought that we could get all these annotations with a package that is already available What you're asking is really two separate processes: one for looking at the intervening sequence of simple events, and another for break-end overlap for fusions/interchromosomal/complex events. If you're familiar with BioConductor then you can do the first part relatively easily for a BEDPE: just convert to GRanges intervals and calculate overlaps against the BioConductor annotation package for your organism. For the second part you might be interested in my [StructuralVariantAnnotation][1] package. It's key feature is conversion of VCFs generated by a number of popular SV callers into a GRanges object containing break-end coordinates. Once in GRanges format, you can again use the BioConductor annotation packages to calculate feature overlap. [1]: https://github.com/PapenfussLab/StructuralVariantAnnotation
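As a sketch of that break-end overlap step - an outline under assumptions rather than a definitive recipe (the VCF file name and the choice of the hg19 TxDb annotation package are placeholders):

```r
library(StructuralVariantAnnotation)
library(VariantAnnotation)
library(TxDb.Hsapiens.UCSC.hg19.knownGene)

vcf <- readVcf("sv_calls.vcf", "hg19")        # placeholder file name
bnd <- breakpointRanges(vcf)                  # break-ends as a GRanges object
gn  <- genes(TxDb.Hsapiens.UCSC.hg19.knownGene)

# breakpoints falling inside annotated genes:
hits <- findOverlaps(bnd, gn, ignore.strand = TRUE)
bnd[queryHits(hits)]
```

From there, classifying an event as a fusion versus a truncation is a matter of asking whether both, or only one, of its partnered break-ends overlap a gene.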
biostars
{"uid": 244396, "view_count": 4891, "vote_count": 3}
Here's another beginner BioPython question from me... I'm running some genome assemblies for someone who has some new Illumina sequence data and also had done some sequencing a few years ago. They have some Sanger and 454 sequences (a couple thousand sequences with a couple thousand base pairs each) and everything is in FASTA format. They are trusted contigs and I'd like to use the data to help bridge assembly gaps in the Illumina sequencing data (from the same extracted DNA that was kept frozen for a few years -- same bacterial strain). There are no quality files for the older data, and if they ever existed they are long gone from the data directories. What I want to do is create FASTQ files for the old Sanger and 454 FASTA files with a phred score of 40 -- I think that would be an `I` under PHRED-33. (I know there are assemblers out there that take FASTA inputs along with FASTQ -- but my in-house makefile and pipeline does not, and I really would like to incorporate these sequences into our current workflow without a lot of changes.)

There are lots of resources for converting from FASTA to FASTQ *WITH* a qual file -- see [here](/p/16458/), [here](/p/85929/), [here](http://qiime.org/scripts/convert_fastaqual_fastq.html), [here](http://seqanswers.com/forums/showthread.php?t=16925), and [here](http://sequenceconversion.bugaco.com/converter/biology/sequences/fasta_to_fastq-solexa.php); but I've looked all over and can't find anything for converting without a qual file. [Here is a public perl script](https://code.google.com/p/fasta-to-fastq/) that does exactly what I want it to do -- but I can't get it to work for me and I keep getting an error. As an exercise, I quickly wrote the script below, but I can't get it to provide the quality string at the length of the sequence. The quality string needs to be the same length as the input sequence string. Anyone have any hints for me? Any help is appreciated. Here's my code so far:

```python
#!/usr/bin/env python

"""
Convert FASTA to FASTQ file with a static quality read

Usage:
$ ./fasta_to_fastq NAME.fasta NAME.fastq
"""

# import libraries
import sys, os
from Bio import SeqIO
from Bio.SeqIO.QualityIO import PairedFastaQualIterator

# Get inputs
fa_path = sys.argv[1]
fq_path = sys.argv[2]
phred_quality = "I"

# Check inputs
if not os.path.exists(fa_path):
    raise Exception("No file at %s." % fa_path)

# make fastq
with open(fq_path, "w") as handle:
    records = PairedFastaQualIterator(open(fa_path), print("phred_quality"))
    count = SeqIO.write(records, handle, "fastq")

# print fastq
print "Converted %i records" % count
```
Since Matt & Istvan have given you SeqRecord-based solutions using SeqIO, here's a lightweight solution using strings:

```python
from Bio.SeqIO.FastaIO import SimpleFastaParser

with open("input.fasta") as in_handle:
    with open("output.fastq", "w") as out_handle:
        for title, seq in SimpleFastaParser(in_handle):
            out_handle.write("@%s\n%s\n+\n%s\n"
                             % (title, seq, "I" * len(seq)))
```

Note that faking quality scores like this is probably not a good idea, and should be used with caution.
biostars
{"uid": 99886, "view_count": 19795, "vote_count": 5}
Hi, I am a complete newbie in the field of NGS/RNA-seq, but I have a set pipeline for RNA-seq analysis. To start my analysis I need to use the TCGA database, but I cannot find any FASTQ files or the matched normal samples for the respective dataset. Please guide me: how do I work with the TCGA database to download FASTQ files for a specific cancer type, with the respective normal samples, to do the RNA-seq analysis? I queried [Breast invasive carcinoma][1] [BRCA], which gave me 1094 mRNA cases, and the numbers of NT and TN samples are listed. But for RNA-seq level 1 no data are available, for level 2 only VCF files are available, and for level 3 all the outputs and analysis files are available. Thanks in advance

[1]: https://tcga-data.nci.nih.gov/tcga/tcgaCancerDetails.jsp?diseaseType=BRCA&diseaseName=Breast%20invasive%20carcinoma
The Level 1 data are controlled access for ethical reasons. See that "Controlled Access Data" section on this site for details and the process. https://tcga-data.nci.nih.gov/tcga/tcgaAccessTiers.jsp
biostars
{"uid": 142981, "view_count": 3569, "vote_count": 2}
So, I have a list of variants I want to annotate based on several databases in .bed format. The rest of the databases I want to include are from ENCODE, but despite my searches I cannot for the life of me download the ENCODE data by cell line. I always end up at [this page](http://genome.ucsc.edu/ENCODE/downloads.html), which provides me ALL the databases, but not by cell line. I found a search function [here](http://www.epigenomebrowser.org/cgi-bin/hgFileSearch?), but for some reason, whenever I run a search, it doesn't return any results. I am looking for .bed files listing sites of DNA methylation, DNase-seq, and ChIP-seq (TFBS and histone modifications) for each of the HepG2, K562, GM12878, and human skeletal muscle cell lines. I am basically asking: how can I get these off ENCODE? Thanks.
You go to the [Encode Data Matrix](http://genome.ucsc.edu/ENCODE/dataMatrix/encodeDataMatrixHuman.html) for hg19, then:

1. Click on the green boxes.
2. You will see a lot of tracks under the column named Track Name.
3. There is a small arrow at the end of each track which will expand it; click on it.
4. You will get the links to `narrowpeaks.bed.gz` or `broadpeaks.bed.gz`, which you can download directly.

You might not get every dataset that you need, though.
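Once you have a link, the files can also be fetched from the command line. The path below only illustrates the URL pattern on the UCSC ENCODE download server; the file name itself is a placeholder, so browse the directory listing first to find the real names:

```bash
# Placeholder file name; check the directory index for actual track files
wget http://hgdownload.cse.ucsc.edu/goldenPath/hg19/encodeDCC/wgEncodeAwgTfbsUniform/wgEncodeAwgTfbsExampleK562Peak.narrowPeak.gz
```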
biostars
{"uid": 117521, "view_count": 6433, "vote_count": 2}
I'm studying a species for which there is a reference genome that is assembled *only* at the scaffold level ("unplaced scaffolds"). See here: https://www.ncbi.nlm.nih.gov/genome/annotation_euk/Geospiza_fortis/101/. My questions are:

- Do people generally treat a scaffold-level reference genome as if each scaffold were a chromosome?
- Should scaffold-level and chromosome-level reference genomes be treated differently?
- What are the main challenges of using a reference genome that is only at the scaffold level?

Basically, I often read in population genetics textbooks that we have to study "chromosomes", but I have a hard time imagining, when I only have scaffolds, how the theory applies differently.
All bioinformatic applications, including mapping (to a FASTA), will proceed as usual. For annotation, you'll also be fine as long as the contig/scaffold names are the same as the annotation file you're using. This isn't a major problem - the human reference, for example, has a number of unlocalized scaffolds and patch scaffolds that are relevant for annotation but aren't in the set of more thoroughly characterized chromosomes. Contigs/scaffolds/chromosomes are often treated the same by most applications. So, to answer your questions: 1. More-or-less. 2. Not really, but realize that your analysis may be impacted if all the scaffolds can't be localized to true chromosomes (e.g., are two scaffolds in LD because they're next to each other in reality?). 3. Outlined above, but they're generally captured under issues resulting from 'assembly uncertainty' sensu lato.
biostars
{"uid": 393339, "view_count": 1172, "vote_count": 2}
Hi everyone, I have an RNA-seq library (paired-end) with a large number of multimappers, because it comes from a total RNA extract with partial ribodepletion (~50% of reads are from rRNAs). I tried mapping those reads with tophat while changing the --max-multihits parameter (1 and 10), and I was surprised to see that the number of reads mapped in a proper pair is **massively - from 38 to 87% -** affected by that option.

    samtools flagstat accepted_hits_1.bam   # tophat --max-multihits 1
    111707163 + 0 in total (QC-passed reads + QC-failed reads)
    0 + 0 secondary
    0 + 0 supplementary
    0 + 0 duplicates
    111707163 + 0 mapped (100.00% : N/A)
    111707163 + 0 paired in sequencing
    54154286 + 0 read1
    57552877 + 0 read2
    42710174 + 0 properly paired (38.23% : N/A)
    50502846 + 0 with itself and mate mapped
    61204317 + 0 singletons (54.79% : N/A)
    0 + 0 with mate mapped to a different chr

    samtools flagstat accepted_hits_2.bam   # tophat --max-multihits 10
    379040545 + 0 in total (QC-passed reads + QC-failed reads)
    267241809 + 0 secondary
    0 + 0 supplementary
    0 + 0 duplicates
    379040545 + 0 mapped (100.00% : N/A)
    111798736 + 0 paired in sequencing
    54212619 + 0 read1
    57586117 + 0 read2
    96903430 + 0 properly paired (86.68% : N/A)
    97210716 + 0 with itself and mate mapped
    14588020 + 0 singletons (13.05% : N/A)
    0 + 0 with mate mapped to a different chr

I first thought that this change was due to the secondary alignments, but if I filter them out (`samtools view -F 256`), I still get 87% of reads mapped in a proper pair. What exactly happened here? My intuition is that reads are mapped independently of their mate, up to 1 or 10 times; after that, tophat looks for a combination of positions from the paired reads that leads to proper pairing and reports that pair of positions. Can anyone confirm this or provide additional insight? Thank you, Carlo
Hi Carlo, The key to understanding what happens is in the TopHat help:

> Instructs TopHat to allow up to this many alignments to the reference for a given read, and choose the alignments based on their alignment scores if there are more than this number.

Alignment works with a seed search and a subsequent seed extension, applying bonuses and penalties depending on the sequence similarity, given a score matrix defined by the algorithm. This means that if you specify *--max-multihits 2*, for example, the program will search the best seed and extend it, then search the second-best seed and extend it, and it will report as primary alignment the best-scoring of the two while down-ranking the second-best to a secondary alignment.

> "I tried mapping those reads with tophat changing the --max-multihits parameter (1 and 10), and I was surprised to see that the number of reads mapped in a proper pair is massively - from 38 to 87%"

Given what I said before, what happens when you use --max-multihits 1 is that your command searches one seed, extends it, and says "wow, this is a primary alignment for this read!" without trying another one. When you use --max-multihits 10 instead, it tries 10 times before deciding which one is the best.

> "What exactly happened here ?"

It is the same as exploring an energy landscape with a conformation of a protein: just as you might not end up with the best fold for the protein, you might not end up with the best alignment for the read if you try only once (because of local minima). A *proper pair* is a pair of reads that map as primary alignments and within the insert size. The more alignments you try per read, the more proper pairs you get, because the reads that didn't find a good match on the first try eventually find it on the second, or n-th.
biostars
{"uid": 222782, "view_count": 2403, "vote_count": 2}
Hi, I would like to analyse the content of a GTF file. I am quite able with R and dplyr, so I would like to transform my GTF file into a data frame to facilitate my analysis. Does anybody know of any tool to do this? Thanks. Best, C.
try `gtf_df=as.data.frame(gtf)` after importing via import function from rtracklayer. The code would be: ```r gtf <- rtracklayer::import('celegans.gtf') gtf_df=as.data.frame(gtf) ```
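Since you mentioned dplyr, a quick follow-up sketch on the resulting data frame - the column names below (`type`, `gene_biotype`) depend on the attributes present in your particular GTF, so treat them as assumptions:

```r
library(dplyr)

gtf_df %>%
  filter(type == "gene") %>%   # keep only gene-level records
  count(gene_biotype)          # tally genes per biotype
```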
biostars
{"uid": 272889, "view_count": 29700, "vote_count": 9}
I am new to gene ontology and was wondering what the best tool is to test whether a gene set is enriched for certain GO terms in a non-model prokaryote.
Not a lot is out there regarding gene set enrichment in Prokaryotes. A few links that might be helpful and give you a push: - https://www.biostars.org/p/85909/ - https://www.biostars.org/p/50960/ An excerpt from **Bioinformatics enrichment tools: paths toward the comprehensive functional analysis of large gene lists** http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2615629/ > Several enrichment tools were specifically designed for these less popular species, such as WEGO for rice ([54][1]); easyGO for crops ([66][2]); FINA for prokaryotes ([58][3]); CLENCH for Arabidopsis ([21][4]); JProGo for prokaryotes ([48][5]); BayGo for *Xylella fastidiosa* ([52][6]). [Protein Localization Analysis of Essential Genes in Prokaryotes][7] HTH [1]: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2615629/#B54 [2]: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2615629/#B66 [3]: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2615629/#B58 [4]: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2615629/#B21 [5]: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2615629/#B48 [6]: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2615629/#B52 [7]: http://www.nature.com/srep/2014/140808/srep06001/full/srep06001.html
biostars
{"uid": 153475, "view_count": 3622, "vote_count": 3}
Hi, I found [this useful website](http://cbio.mskcc.org/tcga-generanker/index.jsp) based on multiple cancer gene lists. I searched, but I cannot find any information on how this tcga-generanker is made or which statistics it uses to compute the score. It also says "Gene Ranker: TCGA GBM 6000" -- is it only for GBM (Glioblastoma Multiforme)?
> and which statistics it used to compute the score?

If you look at their [TCGA generator sources](http://cbio.mskcc.org/tcga-generanker/sources.jsp) page, you can see that their list of candidate genes comes from a collection of lists from other sources. Each list has a specific weight, and from the way it looks, it seems that they attributed scores based on the confidence they had in each list. Then, each gene is ranked with the sum of the scores of all the lists to which it belongs.

> Is it only for GBM (Glioblastoma Multiforme)?

The database contains genes specific to other tissues (e.g. COL11A1, which is specific to lung cancer), so it seems the database is not only for GBM.
biostars
{"uid": 97886, "view_count": 2799, "vote_count": 1}
I am using the following code to generate a tetramer from a monomer, using a tetramer template in Modeller.

Template tetramer = 2jgs.pdb
Monomer = 2c4i.pdb

But my code only produces a monomer and not a tetramer.

```python
env = environ()
# read pdb1 and pdb2 and create an alignment between their sequences
aln = alignment(env)
mdl = model(env, file='2jgs.pdb')
aln.append_model(mdl=mdl, align_codes='2jgs', atom_files='2jgs')
mdl2 = model(env, file='2c4i.pdb')
aln.append_model(mdl=mdl2, align_codes='model1', atom_files='model1')
aln.write(file='2jgs_2jgs.ali')

z = (mdl.chains)
for i in z:
    print i.name
    print len(i.residues)
    # Generate and save structure=structure alignment
    atmsel = selection(mdl.chains[i.name]).only_atom_types('CA')
    r = atmsel.superpose(mdl2, aln,)
    rms = r.rms
    drms = r.drms
    mdl2.write(file='model1_fit22.pdb')
```
I found the solution to be very simple. The mdl2.write() method actually overwrites the previous structures, so all I had to do was change the output file name and join the files together.
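Concretely, that fix might look like the sketch below; the per-chain file naming is an assumption about how the original loop was changed, not the author's exact code:

```python
# Inside the loop over template chains: give each superposed copy its own file,
# instead of overwriting 'model1_fit22.pdb' on every iteration
mdl2.write(file='model1_fit_%s.pdb' % i.name)
```

Afterwards the per-chain PDB files can be concatenated, e.g. `cat model1_fit_*.pdb > tetramer.pdb` (chain IDs may need editing so they don't clash).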
biostars
{"uid": 13428, "view_count": 2030, "vote_count": 2}
Hello, I have what might be a simple question, but I would really like to get this right, and despite all my reading it is still not clear to me what the best option is. I am using DESeq2 to analyse an RNA-seq experiment. The design of the experiment is as follows (each sample is in triplicate):

- cell line A - untrt
- cell line A - trt
- cell line B - untrt
- cell line B - trt

The goal is to perform the following contrasts:

- cell line A - untrt **versus** cell line B - untrt
- cell line A - trt **versus** cell line B - trt
- cell line A - untrt **versus** cell line A - trt
- cell line B - untrt **versus** cell line B - trt

My question is: in order to do things "correctly", should I create a separate DESeq object for each contrast using only the samples involved in that contrast, or should I have just a single object from which I extract the contrasts that I need? For your information, the difference between these cell lines is that "cell line A" overexpresses gene X, while "cell line B" overexpresses genes X and Y. Any help would be very much appreciated...
To make the comparisons you want, your sample data should include a column like cellline.treatment, with values like A.treat, A.untreat. You make one dds object with cellline.treatment as the design with all the samples, and call results 4 times, each time specifying which cellline.treatment to compare to which.
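A minimal sketch of that, assuming the colData already has `cellline` and `treatment` columns (the level names below are placeholders matching the question):

```r
library(DESeq2)

# combined factor with one level per cell line / treatment combination
dds$group <- factor(paste(dds$cellline, dds$treatment, sep = "."))
design(dds) <- ~ group
dds <- DESeq(dds)

# e.g. cell line A vs cell line B, both untreated:
res <- results(dds, contrast = c("group", "A.untrt", "B.untrt"))
```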
biostars
{"uid": 371935, "view_count": 5159, "vote_count": 3}
Hi everybody, I am trying to use HTSeq to count the number of reads aligned to genomic features. I use the following command:

    htseq-count -s no -t exon -i gene_id --additional-attr=gene_name Alignment-files annotation-files

After running the command, the read-count table is printed on the terminal screen, but there is no spreadsheet saved in my current directory. Where can I find the output spreadsheet of the htseq-count command? Best
Hi Javad, you need to redirect the output to a file:

    htseq-count -s no -t exon -i gene_id --additional-attr=gene_name Alignment-files annotation-files > result_file.txt

Cheers, Michael
biostars
{"uid": 286659, "view_count": 11677, "vote_count": 1}
Hi, I am running CNVkit and I am new to it. I was running batch mode on two tumor samples with a flat reference:

    anaconda3/bin/cnvkit.py batch 1.bam 2.bam -n -t /refseq_exome_genesymbol_headless_CNVkit.csv -f hg19.fasta --access access-5k-mappable.hg19.bed --output-reference my_flat_reference.cnn -d output -p 6

I get

    [E::bgzf_read] Read block operation failed with error 4 after 0 of 4 bytes

but it still continues to run. Is it because my BAM files are not properly bgzipped? Could the results still be usable? Thanks a lot in advance!
That's odd. CNVkit reads the input BAM files with pysam, which internally uses htslib, the same as samtools. If samtools can read (e.g. index) your BAM files, then they should work fine with CNVkit. I haven't seen that error/warning myself, but if the process continues to run and the outputs from the CNVkit command look reasonable then maybe it's not a problem.
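To test that, a generic sanity check (a sketch, not CNVkit-specific advice; the file names are placeholders from the command above):

```bash
# Prints the names of any BAMs that fail basic integrity checks
samtools quickcheck -v 1.bam 2.bam

# Indexing also forces a full read of the file through htslib
samtools index 1.bam
```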
biostars
{"uid": 391165, "view_count": 1884, "vote_count": 1}
I'm very new to Python. I know the basic syntax, but I don't really know many commands. I'm learning, but I have to get this done faster than I can learn. I have a tab-delimited file that looks like this:

    A   R   R   R   A
    R   W   R   W   R
    R   M   S   W   K

etc., and I want to generate a new file where I replace certain letters under certain conditions, i.e.:

- if all cells in a row are A or R: replace R with G in that row only
- if all cells in a row are R and W: replace R with G and W with T in that row only
- (+20 other conditions)

so that it looks like:

    A   G   G   G   A
    G   T   G   T   G
    R   M   S   W   K

Nothing should happen if a row has 3 or more unique characters. The concept would be to read through the first row, see if it matches any conditions, and if it does, write the new output to a file; then iterate through all rows.

```python
file = open(file.txt)
num_columns=numcolumns() # I can manually put in the number of columns for each run, though I'm sure there's a simple command.
for line in file:
    if line.contains('A') and line.contains('R') and (count('A')+count('R')==num_columns()):
        write.to.newfile.replace('R') with ('G') # I obviously have no idea how to code this
    elif:
        new_condition # etc. etc. etc.
    else:
        write line to file as is
```

Thanks for any help, and sorry for the noob question!
I would probably do it like this. Seems like an odd thing to do, though... care to elaborate about the background?

```python
#!/usr/bin/env python
import sys
import csv
from collections import Counter

# script is called with: python scriptname name_of_input_file
# it will create a file named name_of_input_file.out
filename = sys.argv[1]

# 'with' is a safe way to open files for i/o
with open(filename) as fi, open(filename + '.out', 'w') as fo:
    # for processing tab-separated data, we use csv.reader with delimiter='\t'
    for row in csv.reader(fi, delimiter='\t'):
        # Counter objects count elements in a collection and store them in a
        # dictionary-like structure, e.g. {'A': 2, 'R': 3}
        symbolCounter = Counter(row)
        # this way we can easily determine the number of different characters
        # by checking the length of the Counter object
        if len(symbolCounter) == 2:
            # assuming all your conditions only apply when there are exactly
            # two different characters: build a string from the current row
            rowString = ''.join(row)
            # extract the keys from the Counter object; this gives the different
            # characters present. Sorting resolves ambiguities ('AR' == 'RA')
            keys = ''.join(sorted(symbolCounter))
            # check for your conditions
            if keys == 'AR':
                # perform the replace-operations on the string built from the row
                rowString = rowString.replace('R', 'G')
            elif keys == 'RW':
                # chaining replace-operations is also possible
                rowString = rowString.replace('W', 'T').replace('R', 'G')
            # add your other conditions here
            # turn the row string back into a list and assign to the original row
            row = list(rowString)
        # write the row (transformed if it contained exactly two different
        # characters; untransformed otherwise)
        fo.write('\t'.join(row) + '\n')
```
biostars
{"uid": 221680, "view_count": 2366, "vote_count": 1}
Hello, I've got a large number of FASTQ files generated from a paired-end single-cell RNA-seq experiment. I'm looking to align them to mm10 using HISAT2. I can do this if I run every pair individually, but is there a way to get HISAT2 to process them all, one after the other, with a single command? I'm relatively new to the Ubuntu command-line environment, so feel free to dumb it down. Thanks in advance
Try this instead and let us know:

```bash
for f in `ls -1 *_1.fastq.gz | sed 's/_1.fastq.gz//' `
do
    # 'echo' only prints each command so you can check it first;
    # delete the word 'echo' to actually run the alignments
    echo hisat2 -x mm10idx -1 ${f}_1.fastq.gz -2 ${f}_2.fastq.gz -S ${f}.sam
done
```

Note that `-S` writes SAM text, so the output is named `.sam` here rather than `.bam`.
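If you want sorted BAM files directly, a common variant pipes the SAM stream through samtools (a sketch; it assumes samtools is installed and that the index prefix `mm10idx` from above exists):

```bash
for f in `ls -1 *_1.fastq.gz | sed 's/_1.fastq.gz//' `
do
    hisat2 -x mm10idx -1 ${f}_1.fastq.gz -2 ${f}_2.fastq.gz \
        | samtools sort -o ${f}.bam -
done
```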
biostars
{"uid": 223404, "view_count": 10916, "vote_count": 5}
Does the Common Workflow Language have if statements and for or while loops? If it does, what is the syntax?
It does not. This feature is under active discussion at https://groups.google.com/d/topic/common-workflow-language/JU7PSEKr97M/discussion However, if you have several files that need to be processed (or an array that you want to iterate over for one or more other parameters) then you can use the scatter/gather feature http://www.commonwl.org/v1.0/Workflow.html#WorkflowStep
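For illustration, a scatter step in a workflow looks roughly like the sketch below; the step, tool, and port names are all placeholders, and a `ScatterFeatureRequirement` entry is also needed in the workflow's `requirements`:

```yaml
steps:
  process:
    run: tool.cwl             # placeholder tool description
    scatter: input_file       # run the step once per element of the array
    in:
      input_file: all_files   # a workflow input of type File[]
    out: [result]
```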
biostars
{"uid": 250486, "view_count": 3809, "vote_count": 4}
Does anyone know of a Bioconductor or other R package that can do GO and/or KEGG gene set enrichment analysis from counts analysed with DESeq2? I have a set of counts analysed via DESeq2, with some very nice differential expression results. I was thinking of redoing the DE analysis using edgeR, which also uses a negative binomial based model, as I could then use goana for the GO and KEGG analysis. Can that work with DESeq2, or is there a similar package that can use the counts and results generated in DESeq2?
GOseq is a suitable R package for GO and KEGG enrichment. Maybe this link on using it downstream of DESeq2 will help you: https://wikis.utexas.edu/display/bioiteam/GO+Enrichment+using+goseq
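A minimal goseq sketch starting from a DESeq2 results object `res`. The genome/ID codes `"hg19"`/`"geneSymbol"` are assumptions for illustration - pick the ones matching your organism and identifiers:

```r
library(goseq)

# 0/1 vector of DE status, named by gene
de <- as.integer(!is.na(res$padj) & res$padj < 0.05)
names(de) <- rownames(res)

pwf <- nullp(de, "hg19", "geneSymbol")   # fit the gene-length bias
go  <- goseq(pwf, "hg19", "geneSymbol",
             test.cats = c("GO:BP", "KEGG"))
head(go)
```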
biostars
{"uid": 450277, "view_count": 3852, "vote_count": 3}
As the title says. A SELEX experiment can detect strong binding sites of a TF, while a ChIP-seq experiment can detect both strong and weak binding sites (due to the cross-linking). So I wonder whether a PWM derived from SELEX still has the strong-binding-site property, and whether a PWM derived from ChIP-seq still has the strong-and-weak-binding-site property, or whether this property is lost when the two experiments are modelled as PWMs. Any suggestions or papers would help. Thanks
The TFBSs by ChIP-seq data are usually divided into different groups (strong and weak, sometimes with an intermediate group: moderate) based on their binding strength. A common assumption is that "ChIP-seq signal values are correlated with the affinity of the TF-DNA binding" [ [*][1] ] PWM alone helps us model the sequences at TFBSs but no such information about binding affinity, unless binding scores are also annotated. [1]: http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1003214
biostars
{"uid": 104419, "view_count": 3267, "vote_count": 2}
Hi, how do I create a ribosomal RNA interval list to be used with Picard CollectRnaSeqMetrics? I have mapped reads to the Drosophila melanogaster Ensembl genome with the Ensembl GTF. I need to identify the percentage of reads mapping to ribosomal RNA. Is there a way to create an interval list from a GTF file? Thanks!
[The documentation for the IntervalList format](http://samtools.github.io/htsjdk/javadoc/htsjdk/htsjdk/samtools/util/IntervalList.html) is somewhat hard to find. From the linked page:

*Represents a list of intervals against a reference sequence that can be written to and read from a file. The file format is relatively simple and reflects the SAM alignment format to a degree. A SAM style header must be present in the file which lists the sequence records against which the intervals are described. After the header the file then contains records one per line in text format with the following values tab-separated: Sequence name, Start position (1-based), End position (1-based, end inclusive), Strand (either + or -), Interval name (an, ideally unique, name for the interval).*

So the first thing you need to do is get the header from your SAM/BAM file:

    samtools view -H [your.bam] > intervalList.txt

If your GTF file is standard and we assume that it contains only ribosomal intervals, then we need the first, fourth, fifth, seventh, and ninth fields from the file. We can append them onto the same text file that already contains the header:

    cut -s -f 1,4,5,7,9 [your.gtf] >> intervalList.txt

This is a very basic approach and you'll probably want to modify it somewhat for your specific needs, but hopefully it's a good start.
biostars
{"uid": 120145, "view_count": 9986, "vote_count": 1}
Hi, I would like to filter some sequences out of a FASTA file using a specific header pattern. For example, I have this file:

```
>input1
UGAGGUAGUAGG
>input2
CUAUGCUUACC
>out1
UCCCUGAGACCGUGA
>out2
CUCCGGGUACC
>desc1
ACUUCCUUACAUGCCC
```

I already know how to extract all the FASTA sequences matching a specific pattern into a new file using awk. What I would like to do instead is remove all entries whose header matches a specific pattern from the original FASTA file and save the remaining entries to a new one. In my file above, for example, I would like to remove all sequences with the header pattern "out" and save only the others to a new file. Is there a tool somewhere for doing that, or is it possible in awk/sed or even grep? Thanks, Assa
> I would like for example to remove all sequences with the header pattern out and save only the others to a new file

    awk '/^>/ {P=index($0,"out")==0} {if(P) print} ' in.fasta > out.fasta

Here `P` is (re)set on every header line: `index($0,"out")==0` is true when the header does not contain "out". `P` then gates the printing of that header and all of the sequence lines that follow it, until the next header.
biostars
{"uid": 133245, "view_count": 7571, "vote_count": 2}
I cannot find templates for `phenofile.txt` and `my.phenotype` in the following [PLINK/SEQ code][1]:

    pseq data.vcf.gz v-assoc --phenotype phenofile.txt my.phenotype

Can anyone provide a link to the templates? For `phenofile.txt`, is it like the file [here][2]? Can I just have two columns, ID and phenotype? Thanks!

[1]: https://atgu.mgh.harvard.edu/plinkseq/assoc.shtml
[2]: https://groups.google.com/forum/#!topic/plinkseq-users/AglI4PWYB00
The example is like below. You need to load phenotype to your project first. ``` pseq proj load-pheno --file myphe.phe pseq proj v-assoc --phenotype myphe > output.txt ``` The file `myphe.phe` begins with ``` ##myphe #ID myphe 1 1 2 0 3 1 ```
biostars
{"uid": 124463, "view_count": 2930, "vote_count": 1}
I aligned many FASTQ files with HISAT2 to GRCh38. This proceeded without problems. But in the next step with StringTie, with which I am trying to find novel transcripts and their counts using the Gencode 27 GTF:

    stringtie Donor1_IL2OKT3ZA.HISAT2.sort.bam -G /illumina/runs/RNASeq/Gencode27/gencode.v27.annotation.gtf -A try.tab -p 4 > stringtie.out 2> stringtie.err

I get the error:

    WARNING: no reference transcripts were found for the genomic sequences where reads were mapped! Please make sure the -G annotation file uses the same naming convention for the genome sequences.

Why doesn't StringTie recognize the Gencode annotation? Do I have to do something to the Gencode data?

Update: STAR output works with this StringTie call, but HISAT2 output doesn't. Strange.

StringTie's output from `cut -f 3 try.tab | sort | uniq` looks like:

    703404669@ssxfisctimga004:~/RNASeq_benchmark/GSE96075/HISAT2$ cut -f 3 try.tab | sort | uniq
    1
    10
    11
    12
    13
    14
    15
    16
    17
    18
    19
    2
    20
    21
    22
    3
    4
    5
    6
    7
    8
    9
    GL000008.2
    GL000009.2
    GL000194.1
    GL000205.2
    GL000214.1
    GL000218.1
    GL000219.1
    GL000220.1
    GL000221.1
    GL000224.1
    KI270442.1
    KI270706.1
    KI270711.1
    KI270713.1
    KI270721.1
    KI270733.1
    KI270734.1
    KI270742.1
    KI270744.1
    KI270745.1
    MT
    Reference
    X
    Y
    chr1
    chr10
    chr11
    chr12
    chr13
    chr14
    chr15
    chr16
    chr17
    chr18
    chr19
    chr2
    chr20
    chr21
    chr22
    chr3
    chr4
    chr5
    chr6
    chr7
    chr8
    chr9
    chrM
    chrX
    chrY

UPDATE: I have found several other instances of this error, but no one ever addressed how to solve it:
https://github.com/gpertea/stringtie/issues/113
https://www.biostars.org/p/247954/
StringTie gave you advice there; did you follow up on it?

> Please make sure the -G annotation file uses the same naming convention for the genome sequences.

In case you don't understand that message: it's about the chromosome names. The Gencode annotation uses chromosome names like *chr1*, *chr2*, *chr3*, ... while the GRCh38 alignments have *1*, *2*, *3*, ... instead; hence that WARNING message. Sure, for you it might be obvious that "1" is the same as "chr1", but StringTie is made to work on any assembly/genome data, not just the human genome, and it's not going to second-guess your use of mismatching chromosome names like this.

I would suggest using the UCSC hg38 genome instead -- or, if you don't want to re-run HISAT2, find an annotation which uses the 1,2,3... naming convention for the chromosomes, or find a *reliable* way to convert the Gencode genomic sequence names to the GRCh38 naming convention. This might be as simple as removing the "chr" prefix from the chromosome names -- but that might only work for the main chromosome sequences; the naming convention for the additional/alternate contigs might be different.
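As an illustration of that last option - a sketch only (GNU sed assumed): it handles the main chromosomes and the chrM/MT mismatch, but alternate contigs such as the GL*/KI* scaffolds would still need checking by hand:

```bash
# Rename chrM to MT first, then strip the chr prefix from the remaining lines
sed -e 's/^chrM\t/MT\t/' -e 's/^chr//' \
    gencode.v27.annotation.gtf > gencode.v27.ensembl_names.gtf
```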
biostars
{"uid": 292650, "view_count": 5236, "vote_count": 1}
Hi, Is anyone aware of pubilcly available T Cell Receptor repertoire data sets from immuno-sequencing, particularly from tumor or PBMC in cancer patients or mouse. I am looking for a summarized form repertoire profile dataset that includes clonotype (AA and NT), number and frequency of clones, and V, D, J gene name for each clonotype. Thank you, - Pankaj
The paper provided by Denise is about software for T-cell repertoire post-analysis and thus doesn't contain cancer TCR repertoire datasets (as far as I remember, it uses the twin repertoire sequencing data from Zvyagin et al., PNAS 2014). We have quite a large collection of sequenced repertoires, yet those are mostly from healthy people and people with autoimmune diseases. Regarding cancer datasets, you can try looking in these papers:

* http://cancerres.aacrjournals.org/content/early/2012/05/09/0008-5472.CAN-12-0277.full.pdf
* http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4666197/
* http://www.nature.com/articles/srep13664

and check out the Immunoseq Analyzer page: http://ww2.adaptivebiotech.com/IMMUNOSEQ-ANALYZER/DEMO You'll have to register there and try to find cancer patient samples in the published samples section (they can be exported as a tabular file with the columns that you need).
biostars
{"uid": 192991, "view_count": 2567, "vote_count": 2}
I have a large PED file (ftp://ftp.cephb.fr/hgdp_supp10/Harvard_HGDP-CEPH/all_snp.ped.gz) and a large MAP file (ftp://ftp.cephb.fr/hgdp_supp10/Harvard_HGDP-CEPH/all_snp.map.gz) that I am trying to work with. The slightly modified map file I have has >600,000 rows ``` 1 rs3094315 0 742429 1 rs7419119 0 831876 1 rs13302957 0 880884 1 rs6696609 0 893289 1 rs8997 0 939517 1 rs9442372 0 1008567 1 rs147606383 0 1035194 1 rs4970405 0 1038818 1 rs11807848 0 1051029 1 rs4970421 0 1098500 1 rs1320571 0 1110294 1 rs2887286 0 1145994 1 rs79118541 0 1147410 1 rs3813199 0 1148140 1 rs113791678 0 1151643 1 rs78424188 0 1160450 1 rs12073590 0 1195018 1 rs6685064 0 1201155 1 rs61559999 0 1225655 1 rs60785581 0 1225708 ``` The slightly modified ped has 942 rows, where each row is an individual and each column is a genotype correspond to the associated row of map file ``` HGDP00001 HGDP00001 0 0 1 0 AG GT AA CC GG AG GG AA HGDP00003 HGDP00003 0 0 1 0 AA GT AA TT GG GG GG AA HGDP00005 HGDP00005 0 0 1 0 AA TT AA CC GG GG GG AA HGDP00007 HGDP00007 0 0 1 0 AA TT AA CC GG GG GG AA HGDP00011 HGDP00011 0 0 1 0 AG GT AA CT GG AG GG AA HGDP00013 HGDP00013 0 0 1 0 AG TT AA CC AG AG GG AA HGDP00015 HGDP00015 0 0 1 0 AG GT AA CT AG GG GG AA HGDP00017 HGDP00017 0 0 1 0 AG GT AA CC GG GG GG AA HGDP00019 HGDP00019 0 0 1 0 AA TT AG CT GG GG GG AA HGDP00021 HGDP00021 0 0 1 0 AA GT AA TT GG AA GG AA HGDP00023 HGDP00023 0 0 1 0 AA GT AA TT GG AG GG AA HGDP00025 HGDP00025 0 0 1 0 AA GT AA CT GG AG GG AA HGDP00027 HGDP00027 0 0 1 0 AG GT AA CT AG AG GG AA HGDP00029 HGDP00029 0 0 1 0 AA TT AA CC GG GG GG AA HGDP00031 HGDP00031 0 0 1 0 AG GT AG CT GG GG GG AA HGDP00033 HGDP00033 0 0 1 0 AA TT AG CC GG AA GG AG HGDP00035 HGDP00035 0 0 1 0 AG TT AG CT GG AG GG AA HGDP00037 HGDP00037 0 0 1 0 AG GT AA TT AG GG GG AG ``` I trying to get them into a single table with a similar formatting to some Affy array data I have (which has rows as SNP ids and columns as individuals). I was wondering if anyone could help figure out a Python or Bash scripting solution to transpose the ped file such that the 1st row of the ped file becomes the 5th column of the map file, and the 2nd row of the ped becomes the 6th of the map file, and so on... Basically, I want it to look like this (I presume subsequently taking out the 0/1 rows and location columns will be fairly simple?) ``` HGDP00001 HGDP00003 HGDP00005 HGDP00007 HGDP00011 HGDP00013 HGDP00015 HGDP00017 HGDP00019 HGDP00021 HGDP00023 HGDP00025 HGDP00027 HGDP00029 HGDP00031 HGDP00033 HGDP00035 HGDP00037 HGDP00039 HGDP00001 HGDP00003 HGDP00005 HGDP00007 HGDP00011 HGDP00013 HGDP00015 HGDP00017 HGDP00019 HGDP00021 HGDP00023 HGDP00025 HGDP00027 HGDP00029 HGDP00031 HGDP00033 HGDP00035 HGDP00037 HGDP00039 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 rs3094315 0 742429 AG AA AA AA AG AG AG AG AA AA AA AA AG AA AG AA AG AG AA 1 rs7419119 0 831876 GT GT TT TT GT TT GT GT TT GT GT GT GT TT GT TT TT GT TT 1 rs13302957 0 880884 AA AA AA AA AA AA AA AA AG AA AA AA AA AA AG AG AG AA AA 1 rs6696609 0 893289 CC TT CC CC CT CC CT CC CT TT TT CT CT CC CT CC CT TT CT 1 rs8997 0 939517 GG GG GG GG GG AG AG GG GG GG GG GG AG GG GG GG GG AG GG 1 rs9442372 0 1008567 AG GG GG GG AG AG GG GG GG AA AG AG AG GG GG AA AG GG GG 1 rs147606383 0 1035194 GG GG GG GG GG GG GG GG GG GG GG GG GG GG GG GG GG GG GG 1 rs4970405 0 1038818 AA AA AA AA AA AA AA AA AA AA AA AA AA AA AA AG AA AG AA ```
Here's my (quick and dirty) solution in R:

```r
## read files (assumes you have saved the map and ped data as CSV)
map_file <- read.csv('./mapfile.csv')
ped_file <- read.csv('./pedfile.csv')

## transform and merge data
# create a matrix of blanks: 6 rows to offset the 6 leading PED columns
matrix_of_blanks <- matrix(data = "", nrow = 6, ncol = 4)
# bind the matrix of blanks on top of the map file
tr_map <- rbind(matrix_of_blanks, as.matrix(map_file))
# combine the padded map file and the transposed ped file
result <- cbind(tr_map, t(ped_file))
# write the merged table out
write.table(result, './merged.txt', sep = "\t", quote = FALSE,
            row.names = FALSE, col.names = FALSE)
```
biostars
{"uid": 145971, "view_count": 5191, "vote_count": 1}
Hi, I'm trying to use an R package for the first time. Does anyone know how to save/export a plot (e.g. a heatmap) in DiffBind? For example, in their vignette (https://bioconductor.org/packages/release/bioc/vignettes/DiffBind/inst/doc/DiffBind.pdf) the heatmap is generated by `plot(tamoxifen)` or `dba.plotHeatmap(tamoxifen)`. What should I add in order to save the plot directly from the command? Thanks!!
The comment above is the correct answer to the question. Before plotting, call one of the functions that redirects the plot to a file, such as `jpeg()`, `pdf()`, or `png()`. After plotting, call `dev.off()` to close the file. You can do multiple plots before calling `dev.off()` and they will all end up in the same file.
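For example, a minimal sketch using the vignette's tamoxifen object (the output file name is a placeholder):

```r
pdf("tamoxifen_heatmap.pdf")   # open a PDF graphics device
dba.plotHeatmap(tamoxifen)     # the plot goes to the file instead of the screen
dev.off()                      # close the device to finish writing the file
```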
biostars
{"uid": 275846, "view_count": 3299, "vote_count": 1}
Is there a parameter in STAR for setting the minimum length of reads to be mapped? In the "Log.final.out" file there is a category, "% of reads unmapped: too short" -- how do I change the "too short" threshold?
I was able to answer my own question: the following parameters reduced my "too short" percentage from 10.8% to ~4%. Most of those reads went to multi-mapped or "other", but at least it worked.

    --seedSearchStartLmax 30         # increase overall mapping sensitivity
    --outFilterScoreMinOverLread 0
    --outFilterMatchNminOverLread 0
    --outFilterMatchNmin 50          # allow output of short alignments

There apparently isn't a minimum read length threshold in STAR.
biostars
{"uid": 169716, "view_count": 12556, "vote_count": 2}
Hello, I am analysing ChIP-seq data and I saw the following post by @Devon Ryan (made 1 year, 7 months ago): https://bioinformatics.stackexchange.com/questions/458/when-to-account-for-the-blacklisted-genomic-regions-in-chip-seq-data-analyses/459#459 saying that, nowadays, with the latest genome assemblies, one has fewer problems with blacklisted regions since they have been reduced. I want to know, then, what the current state of the art is. Should I remove them or not? (By the way, I was planning to use deepTools to do it, but if it's not really necessary anymore I won't.) Thank you!
It's still considered best-practice to remove these regions. For genomes like GRCh38, the blacklisted regions are largely comprised of things like major satellite repeats, which are primarily located in hard-masked telomeric and pericentromeric regions. Given that, these regions will still show aberrantly high signal in all of your samples (thereby skewing normalization and often adding meaningless peaks).
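In practice this is a single filtering step. Two common ways to do it, as sketches (blacklist.bed is a placeholder for the ENCODE blacklist file for your assembly):

```bash
# Drop reads overlapping blacklisted regions from a BAM
bedtools intersect -v -abam sample.bam -b blacklist.bed > sample.filtered.bam

# Or let deepTools skip those regions when computing coverage
bamCoverage -b sample.bam --blackListFileName blacklist.bed -o sample.bw
```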
biostars
{"uid": 361297, "view_count": 4335, "vote_count": 3}
When converting from base count to coverage in [this document][1], they divide by 2.85 billion nucleotides:

    POP  BASE COUNT     COVERAGE
    ACB  3182037349066  1116.50433300561

Does build 37 of the human reference genome still contain only 2.85 Gbp? It's the same figure as in [build 35][2]. In my reference sequence I count 3.1 Gbp across the autosomes and the sex chromosomes. Have I missed something obvious here?

[1]: ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/sequence_indices/20130502.sequence.index.low_coverage.stats
[2]: http://www.nature.com/nature/journal/v431/n7011/full/nature03001.html
I have computed this recently on GRCh37:

[Figure: human whole-genome mappability, http://i.imgur.com/D8QTz7j.png]

(Mappability was computed with the GEM program for reads of length 100 with 5 mismatches authorised.)

EDIT: And so dividing by 2.85 Gb gives you a more realistic estimate of your mean coverage, since N's will never be covered by definition.
biostars
{"uid": 128955, "view_count": 2186, "vote_count": 2}
I'm using samtools to extract specific regions from a sorted BAM file (with its .bam.bai index). E.g., if I want to extract the region from 329729 to 329840 of chromosome one, how should I proceed? The command that I'm using is:

    ./samtools view alignment.sorted.bam gnl|BGIA2|CA_chr1:329729-329840

and I'm getting the following errors:

    -bash: BGIA2: command not found
    -bash: CA_chr1:329729-329840: command not found
    [main_samview] region "gnl" specifies an unknown reference name. Continue anyway.

even though "gnl|BGIA2|CA_chr1" can be found in the BAM file when it is viewed with the head command. Furthermore, how can I extract multiple regions in the same format into a separate file using samtools or any other tool? Thanks
Hello, the problem is the `|` in the name. This is interpreted by the shell as a pipe to stream the output to the next program. Try escaping the symbol with `\` (didn't test it):

    ./samtools view alignment.sorted.bam gnl\|BGIA2\|CA_chr1:329729-329840

or simply put the whole region in quotes so the shell does not interpret it:

    ./samtools view alignment.sorted.bam "gnl|BGIA2|CA_chr1:329729-329840"

Another way could be to create a BED file with the coordinates and use the `-L` parameter.

fin swimmer
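For the multiple-regions part of the question, a sketch using `-L` (regions.bed is a placeholder BED file with one chrom/start/end line per region; `-b` keeps the output in BAM format):

```bash
samtools view -b -L regions.bed alignment.sorted.bam > subset.bam
```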
biostars
{"uid": 310999, "view_count": 2053, "vote_count": 1}
I'm using edgeR to test for DE genes in an expression matrix. For each sample if have a condition and a batch. I want to use the glm-functionality in edgeR to test for DE between conditions, while taking into account batches. Some example data to show my problem. Say if have a count matrix, EM, of 10 sample with the following labels: ``` EM = EM # Imagine matrix of counts here... conditions = c("con1", "con1", "con1", "con2", "con2", "con2", "con3", "con3", "con3", "con3") batches = c("batch1", "batch2", "batch3", "batch1", "batch2", "batch3", "batch1", "con2", "con3", "con3") ``` The pipeline could look like this: ``` dge = DGEList(counts=EM) # Create object dge = calcNormFactors(dge, method='TMM') # Normalize library sizes using TMM design = model.matrix(~0+conditions+batches) # Create design matrix for glm colnames(design) = c(levels(conditions), levels(batches)[2:length(levels(batches))]) # Set prettier column names dge = estimateGLMCommonDisp(dge, design) # Estimate common dispersion dge = estimateGLMTagwiseDisp(dge, design) # Estimate tagwise dispersion fit = glmFit(dge,design) # Fit glm pair_vector = sprintf("%s-%s", "con1", "con3") # Samples to be compared pair_contrast = makeContrasts(contrasts=pair_vector, levels=design) # Make contrast lrt = glmLRT(fit, contrast=pair_contrast) # Likelihood ratio test ``` My questions: 1. The design matrix: There is no baseline conditions, so I remove the intersect with the 0+. Is this necessary to do for batches as well, even though they are not directly used as contrasts? 2. Does the glmFit take into account norm-factors for library sizes?
There are two ways to go about this. Firstly, either allow an intercept and then just use a contrast for the cond3-vs-cond2 comparison, or don't allow an intercept and group things differently (e.g., group=sprintf("%s.%s", batch, condition)).

If we just allow an intercept then:

```
design <- model.matrix(~conditions+batches)
dge = estimateGLMCommonDisp(dge, design)
dge = estimateGLMTagwiseDisp(dge, design)
fit = glmFit(dge, design)
lrt2vs1 <- glmLRT(fit, coef=2)
lrt3vs1 <- glmLRT(fit, coef=3)
lrt3vs2 <- glmLRT(fit, contrast=c(0,-1,1,0,0))
```

BTW, have a read through the edgeR user's guide, particularly section 3.4.2, which has an almost identical example (it's a really well done user's guide).
biostars
{"uid": 110861, "view_count": 17582, "vote_count": 2}
Hello, I'd appreciate some feedback on an approach for an experiment. I am working on an RNA-Seq problem exploring data from samples that vary on three dimensions:

- Sex (M or F)
- Timepoint (1 or 2)
- Genotype (A or B)

My colleagues and I are curious about a number of different hypotheses, e.g.:

- (Q1.) What genes are DE because of Genotype? (like a "main effect" in an ANOVA)
- (Q2.) What genes are DE through an interaction of Genotype and Timepoint?
- (Q3.) What genes are DE through an interaction of Genotype, Timepoint, and Sex?
- (...and so on)

Additionally, these data likely have batch effects that I am trying to account for using SVA.

### I've gotten a variety of pieces of advice on how to approach this problem:

**[Approach 1]** Use limma (although be careful how you set up the design matrix)

**[Approach 2]** Use edgeR

**[Approach 3]** Use DESeq2, but set the design to something like this:

```
myDESeqObject$SV1 <- svseq$sv[,1] # add the sole surrogate variable detected by/calculated in SVA to the object
design(myDESeqObject) <- ~ SV1 + Sex + Timepoint + Genotype + Sex:Timepoint + Timepoint:Genotype + Sex:Genotype + Sex:Genotype:Timepoint # design formula
dds <- DESeq(myDESeqObject) # perform DESeq

# look for genes that are DE as a consequence of genotype alone, regardless of the influence of other factors (answering Q1)
results_genotype <- results(dds, contrast=c("Genotype", "A", "B"))

# look for genes that are DE through an interaction of Genotype and Timepoint (answering Q2)
# (not entirely sure what to put for the specific contrasts - what I put seems like it's just looking at the effect of Genotype wrt a single timepoint, so not quite answering Q2?)
results_genotype_timepoint <- results(dds, contrast=c("Timepoint:Genotype", "1.A", "1.B"))
```

**[Approach 4]** Use DESeq2, but create a new variable, 'group', with 8 different levels representing all possible combinations of the individual variables: M_1_A, F_1_A, M_1_B, F_2_B, and so forth. Run DESeq2 as normal, but use the contrast function to ask what genes are DE between two specific groups, e.g. M_1_A vs. F_1_A. In this way, you don't directly find the set of genes that satisfy Q1, Q2, or Q3... but to answer Q1, at least, you can look for the genes that are called in all of these analyses (i.e. an intersection of these DE gene lists):

    M.1.A vs. M.1.B ("genes affected by Genotype in Males at Timepoint 1")
    M.2.A vs. M.2.B ("genes affected by Genotype in Males at Timepoint 2")
    F.1.A vs. F.1.B ("genes affected by Genotype in Females at Timepoint 1")
    F.2.A vs. F.2.B ("genes affected by Genotype in Females at Timepoint 2")

**[Approach 5]** Use Approach 4 PLUS repeat the analysis in edgeR, ask what genes are in common between the two analyses, and focus on that subset when looking for pathway enrichments/biological functions/etc.

### The end goal of this RNA-Seq experiment is to identify potential biological targets for validation in a wet-lab environment, but I would like to do the best possible job with this bioinformatic portion.

**Which approach above do you think is the best, given the questions we would like to answer?** Are there any approaches that stand out as obviously wrong? Any feedback to that end is appreciated. Thanks for taking the time to read!
Any of approach 1-3 are fine and you'd use the same design matrix in all of them (you'd also get very similar results). Whether you want to use contrasts like in your example or instead use coefficients is up to you. Approaches 4 and 5 are obviously wrong given your goals.
biostars
{"uid": 332329, "view_count": 5717, "vote_count": 1}
Hi, I'm trying to use sciClone to infer the clonal evolution from some tumor data. I have a couple of questions about its usage (I hope I'm not repeating an older question; I checked carefully, but none seems satisfying):

1. I have vaf files (from the VarScan variant caller) containing both normal and tumor reads. Does that mean I can create 2 sciClone sample files (1 normal, 1 tumor) from them, or should I just use the tumor data?
2. What is the meaning of the five columns of the exclude-regions file? (I can guess the first three are chromosome, start, and end position; I am not quite sure about the other two.)
3. To create the exclude-regions file, I can use copy number data from the ICGC releases, where each variant (row) in the file is annotated as "copy neutral LOH", "gain", or "loss". I'm guessing I can populate the exclude-regions file with all the rows annotated as "copy neutral LOH" - is that correct?

I apologize if I'm asking silly questions; I don't have much experience in bioinformatics.
1) You should be using somatic calls, which means that you should only have one output file (in your example, created by comparing the tumor and normal using VarScan's somatic mode).

2) Only three columns are required for the exclude file (chr, start, stop), and that's all that sciClone will look at. One common usage is to call segmented regions of LOH, in which case there may be trailing columns giving info on those segments.

3) If you have copy-number data, you really should be using the segmented copy-number data as input if available, which gives the copy number for each segment (these may not be simple integers of gain or loss, but values like 2.15, 3.16, etc.). If all you have available is the gain/loss designations, then yeah, I guess you could put those into the exclude file. It's not ideal, though.
biostars
{"uid": 273385, "view_count": 2218, "vote_count": 1}
Hi Biostars, my question might be a duplicate, but what software would you recommend for simulating RNA-seq reads (with different parameters like read length, PE/SE, error rate, strandedness, etc.), using a reference genome and annotations as input? Google searches and omicstools give many results, but so far I haven't found any software fitting my needs. ART looks good (but it is not specific to RNA-seq), and I bet there would be something in BBTools :) I would appreciate any comments and suggestions. Thanks
`randomreads.sh` in the [BBMap suite][1] will do what you want. I assume you want to generate reads only from the transcriptome (i.e. provide a multi-fasta of transcripts; I can't think of any programs that take a reference genome plus a GTF file)? Otherwise there is nothing special about RNA-seq as far as read generation is concerned.

[1]: https://sourceforge.net/projects/bbmap/
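A minimal invocation might look like this - the flags are from memory, so treat them as assumptions and check the tool's built-in help for the exact names:

```bash
# Simulate 1M paired 100 bp reads from a transcript multi-fasta (placeholder names)
randomreads.sh ref=transcripts.fa out1=r1.fq out2=r2.fq paired=t length=100 reads=1000000
```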
biostars
{"uid": 288294, "view_count": 1608, "vote_count": 1}
I have installed GenomicRanges, the dependency, beforehand and then tried to install GenomicFeatures. Both appear to have been installed successfully; however, RStudio does not recognise either of them. Below are the commands and outputs I used:

    > ## try http:// if https:// URLs are not supported
    > source("https://bioconductor.org/biocLite.R")
    Bioconductor version 3.6 (BiocInstaller 1.28.0), ?biocLite for help
    > biocLite("GenomicRanges")
    BioC_mirror: https://bioconductor.org
    Using Bioconductor 3.6 (BiocInstaller 1.28.0), R 3.4.3 (2017-11-30).
    Installing package(s) ‘GenomicRanges’
    There is a binary version available but the source version is later:
                  binary source needs_compilation
    GenomicRanges 1.30.0 1.30.3 TRUE
    Binaries will be installed
    trying URL 'https://bioconductor.org/packages/3.6/bioc/bin/windows/contrib/3.4/GenomicRanges_1.30.0.zip'
    Content type 'application/zip' length 3223041 bytes (3.1 MB)
    downloaded 3.1 MB
    package ‘GenomicRanges’ successfully unpacked and MD5 sums checked
    Warning: cannot remove prior installation of package ‘GenomicRanges’
    The downloaded binary packages are in
    Z:\Users\Marjory\AppData\Local\Temp\Rtmp8CRT7c\downloaded_packages

Then, when attempting to install GenomicFeatures:

    > ## try http:// if https:// URLs are not supported
    > source("https://bioconductor.org/biocLite.R")
    Bioconductor version 3.6 (BiocInstaller 1.28.0), ?biocLite for help
    > biocLite("GenomicFeatures")
    BioC_mirror: https://bioconductor.org
    Using Bioconductor 3.6 (BiocInstaller 1.28.0), R 3.4.3 (2017-11-30).
    Installing package(s) ‘GenomicFeatures’
    also installing the dependency ‘GenomicRanges’
    There is a binary version available but the source version is later:
                  binary source needs_compilation
    GenomicRanges 1.30.0 1.30.3 TRUE
    Binaries will be installed
    trying URL 'https://bioconductor.org/packages/3.6/bioc/bin/windows/contrib/3.4/GenomicRanges_1.30.0.zip'
    Content type 'application/zip' length 3223041 bytes (3.1 MB)
    downloaded 3.1 MB
    trying URL 'https://bioconductor.org/packages/3.6/bioc/bin/windows/contrib/3.4/GenomicFeatures_1.30.3.zip'
    Content type 'application/zip' length 2737145 bytes (2.6 MB)
    downloaded 2.6 MB
    package ‘GenomicRanges’ successfully unpacked and MD5 sums checked
    Warning: cannot remove prior installation of package ‘GenomicRanges’
    package ‘GenomicFeatures’ successfully unpacked and MD5 sums checked
    The downloaded binary packages are in
    Z:\Users\Marjory\AppData\Local\Temp\Rtmp8CRT7c\downloaded_packages

I have tried installing through both RStudio and the RGui, tried opening and closing, and tried clearing the environment too, but it is still telling me that there is no package called GenomicRanges or GenomicFeatures etc.
Please take a look at the comment by Martin Morgan, here: https://support.bioconductor.org/p/77688/#77693 > I suggest that as administrator you start R and run the command > .libPaths(). Then quit R, remove the version of R that you have > installed, and remove the directories reported by .libPaths().
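In R, that amounts to something like the following, run in a fresh session started as administrator (the package names are just the two from the question):

    .libPaths()   # note every library directory reported here

    # remove the stale/partial copies before reinstalling; the
    # "cannot remove prior installation" warning usually means one
    # of these directories holds a locked leftover installation
    remove.packages(c("GenomicRanges", "GenomicFeatures"))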
biostars
{"uid": 302664, "view_count": 2732, "vote_count": 1}
Sorry if this may appear like a really dumb question, but I am new to RNA-seq and feeling lost about some of the terminology and calculations used in the literature. I understand the difference in principle between paired-end and single-end reads; however, I need to clarify how to calculate the number of reads for paired-end vs. single-end data. For single-end reads, I was told that I could calculate the number of reads by: wc -l <name of FastQ file>, divided by 4. For paired-end reads, because two files are generated (R1, R2), is it correct to calculate the number of reads by adding the read counts derived from R1 and R2? Thank you very much for your clarification.
In paired-end sequencing it's often less confusing to talk about fragment numbers rather than the more ambiguous read numbers. So count the number of reads in R1: that's the number of fragments sequenced (it's also the number of reads in R2). If you really prefer the word "reads", you could say "we sequenced XXX read pairs", so it's clear what's meant.
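In shell terms (assuming gzipped FASTQ files; the file name is a placeholder):

    # fragments sequenced = FASTQ records in R1 = line count / 4
    echo $(( $(zcat sample_R1.fastq.gz | wc -l) / 4 ))
    # total read count (R1 + R2) is simply twice this number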
biostars
{"uid": 191517, "view_count": 8772, "vote_count": 1}
**What is the code for male and female in plink covariate files? Is it (male = 1, female = 0), or ('1' = male, '2' = female, '0' = unknown)?** Is the coding for sex different between covariate and .fam files? I'm wondering about the inconsistent statements in the manual of plink 1.9, as shown below. Thank you in advance.

---

As mentioned in [the manual of plink 1.9 which discusses the coding for male/female of covariate files][1]: "By default, when at least one male and one female is present, sex (male = 1, female = 0) is automatically added as a covariate on X chromosome SNPs, and nowhere else."

As mentioned in [the manual of plink 1.9 which discusses the coding for male/female of .fam files][2]: Sex code ('1' = male, '2' = female, '0' = unknown).

[1]: https://www.cog-genomics.org/plink/1.9/assoc#linear
[2]: https://www.cog-genomics.org/plink/1.9/formats#fam
The expected coding in the .fam file is male='1', female='2'; this is then coded by plink 1.x as male=1, female=0 during the --linear/--logistic regression. In other words, you would get the same results (outside of chrX) with `--linear sex` as you would with `--linear` combined with a covariate file with a male=1, female=0 sex covariate added. Yes, this is counterintuitive, so I got rid of this discrepancy in plink 2.0. From its [--glm documentation][1]: "Note that PLINK 2.0 encodes the .fam/.psam sex covariate as male = 1, female = 2, to match the actual numbers in the input file. This is a minor change from PLINK 1.x." So with PLINK 2.0, if you use male=1, female=2 coding in both file types, you don't have to worry about the sign of the sex beta coefficient changing on you with .fam vs. `--covar`. (With that said, even with PLINK 1.x, you don't have to worry about any p-values being affected by the 1/2 vs. 1/0 coding.) [1]: https://www.cog-genomics.org/plink/2.0/assoc#glm
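To make the equivalence concrete, here is a hypothetical covariate file that reproduces by hand what PLINK 1.x's `--linear sex` does internally (male = 1, female = 0; passed via `--covar`):

    FID    IID    SEX
    fam1   ind1   1
    fam1   ind2   0

With PLINK 2.0 you would instead use male = 1, female = 2 in such a file, matching the .fam/.psam coding quoted above.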
biostars
{"uid": 9515914, "view_count": 854, "vote_count": 1}
Hello everyone! I use apache (v2.4.18) on a GNU/Linux (Ubuntu) machine. /var/www/ is configured as the root for the homepage; JBrowse is installed in /var/www/html/summitdb/jbrowse, however the data is too big to store there. My question is: how can I reach data with JBrowse outside /var/www/?

I have already tried using an alias, without success. I added the following to /etc/apache2/sites-available/000-default.conf

    Alias "/data3" "/data/projects/summitdb/jbrowse/data3/"
    <Directory "/data/projects/summitdb/jbrowse/data3/">
    Require all granted
    </Directory>

copied the trackList.json inside /var/www/html/summitdb/jbrowse/data, and modified it like so:

    "urlTemplate" : "/data3/tracks/100206/{refseq}/trackData.json",

The error message is this: SyntaxError: missing : after property id when parsing fsl/trackList.json. Using an alias is actually the recommended setup at http://gmod.org/wiki/JBrowse_Configuration_Guide#Configuring_track_locations_with_Apache. What am I missing?

I also tried simply creating a symbolic link to my data and reaching it by giving the link as the data attribute in the URL (www.myhompa.ge/summitdb/jbrowse/index.html?data=symlink), but that does not seem to work either, even though apache2 is set to allow symlinks. In this case the error message is this: RequestError: Unable to load http://myhompa.ge/summitdb/jbrowse/symlink/trackList.json status: 403.

This is my first time with both JBrowse and apache2, and I am very confused. Any help would be appreciated! Thanks, Levi
OK, it's working now. There were actually two things. There was a stray line in the JSON file that caused the syntax error (possibly my metadata-adding script is not working perfectly). The other one was solved by moving JBrowse out of /var/www/ to the folder where the data is, and making a symlink there. To me, it seems like Apache actually follows the symlink but JBrowse does not. Thanks to anyone who gave it a thought! Cheers, Levi
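For anyone hitting the same 403 via a symlink: a config sketch of what usually needs enabling on the Apache side (paths taken from the question; whether JBrowse itself then cooperates is a separate matter, as noted above):

    # in the vhost config: let Apache follow symlinks and grant access
    # to the symlink *target* directory as well
    <Directory "/data/projects/summitdb/jbrowse/data3/">
        Options +FollowSymLinks
        Require all granted
    </Directory>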
biostars
{"uid": 292924, "view_count": 1975, "vote_count": 2}
Hi, I have many large files with association results. Each file contains 8 columns (the 3rd one is the p-value). From each file I need to create a new one containing only the observations where the p-value is < 10^-5. How can I do this with bash code? Here is a small example from these files:

    SNP N P p2 or1 or2 q q1
    c10_pos5974849 2 0.1881 0.1881 1.1931 1.1931 0.5707 0.00
    c10_pos5975482 2 0.3225 0.3225 0.8670 0.8670 0.8840 0.00
    c11_pos68438345 2 0.6537 0.66 0.9705 0.9690 0.2856 12.29
    c11_pos107693921 2 0.8938 0.8558 1.0133 1.0250 0.1755 45.52
    c12_pos67499221 2 0.8351 0.8351 1.0236 1.0236 0.6413 0.00
    c14_pos67844869 2 0.1103 0.1915 0.7334 0.7229 0.2039 38.05
    c14_pos68073026 2 0.09954 0.1298 0.6383 0.6215 0.2662 19.11
    c14_pos68087872 2 0.3704 0.3704 1.2500 1.2500 0.7319 0.00

Thank you
    find . -type f -name "*.common.suffix" | while read F ; do
        awk '($3=="P" || $3 < 1e-5)' "$F" > "${F}.subset.txt"
    done

(In your example files the p-value sits in column 3, so the test must be on `$3`; the `$3=="P"` clause keeps the header line, and `*.common.suffix` is a placeholder for whatever suffix your files share.)
biostars
{"uid": 415549, "view_count": 429, "vote_count": 1}
Hi Biostars Users, I am a rookie in bioinformatics interested in undergoing NGS training to learn how to build an NGS pipeline for whole-exome sequencing analyses. Could you please guide me to a suitable training program? Best, Kizuna
Perhaps one of our trainings might be of interest to you!

Check out http://www.ecseq.com/training.html
biostars
{"uid": 99950, "view_count": 4361, "vote_count": 5}
Hello, I am currently focusing on identifying de novo mutations from my trio data (the parents are unaffected and the child is affected). I used PhaseByTransmission. However, I found that all de novo mutations (child heterozygous, both parents hom. ref.) were not phased (i.e. I am getting '/' instead of '|'). Do you think this is an error? If I search for autosomal recessive variants, they are phased correctly. What is the problem with my analysis? I am pasting the summary results provided by PhaseByTransmission below for your kind perusal. Please also comment on the summary results: do they look odd? Please help.

```
java -jar /gatk_3.3/GenomeAnalysisTK.jar -R /reference_sequence/human_g1k_v37.fasta -T PhaseByTransmission -V trio1.vcf -ped trio1.ped --DeNovoPrior 0.00001 -o trio_out.vcf --MendelianViolationsFile mendelian_violation.vcf

INFO 20:04:04,201 GenomeAnalysisEngine - Strictness is SILENT
INFO 20:04:04,341 GenomeAnalysisEngine - Downsampling Settings: Method: BY_SAMPLE, Target Coverage: 1000
INFO 20:04:04,453 PedReader - Reading PED file trio1.ped with missing fields: []
INFO 20:04:04,457 PedReader - Phenotype is other? false
INFO 20:04:04,510 GenomeAnalysisEngine - Preparing for traversal
INFO 20:04:04,530 GenomeAnalysisEngine - Done preparing for traversal
INFO 20:04:04,531 ProgressMeter - [INITIALIZATION COMPLETE; STARTING PROCESSING]
INFO 20:04:04,531 ProgressMeter - | processed | time | per 1M | | total | remaining
INFO 20:04:04,532 ProgressMeter - Location | sites | elapsed | sites | completed | runtime | runtime
INFO 20:04:34,824 ProgressMeter - 15:96876611 147844.0 30.0 s 3.4 m 77.5% 38.0 s 8.0 s
INFO 20:04:43,701 PhaseByTransmission - Number of complete trio-genotypes: 139299
INFO 20:04:43,702 PhaseByTransmission - Number of trio-genotypes containing no call(s): 0
INFO 20:04:43,703 PhaseByTransmission - Number of trio-genotypes phased: 124651
INFO 20:04:43,703 PhaseByTransmission - Number of resulting Het/Het/Het trios: 13391
INFO 20:04:43,704 PhaseByTransmission - Number of remaining single mendelian violations in trios: 937
INFO 20:04:43,704 PhaseByTransmission - Number of remaining double mendelian violations in trios: 12
INFO 20:04:43,704 PhaseByTransmission - Number of complete pair-genotypes: 0
INFO 20:04:43,705 PhaseByTransmission - Number of pair-genotypes containing no call(s): 0
INFO 20:04:43,705 PhaseByTransmission - Number of pair-genotypes phased: 0
INFO 20:04:43,705 PhaseByTransmission - Number of resulting Het/Het pairs: 0
INFO 20:04:43,706 PhaseByTransmission - Number of remaining mendelian violations in pairs: 0
INFO 20:04:43,706 PhaseByTransmission - Number of genotypes updated: 4395
INFO 20:04:45,481 ProgressMeter - done 201351.0 40.0 s 3.4 m 100.0% 40.0 s 0.0 s
INFO 20:04:45,482 ProgressMeter - Total runtime 40.95 secs, 0.68 min, 0.01 hours
INFO 20:04:47,002 GATKRunReport - Uploaded run statistics report to AWS S3
```
Those are your likely candidate de novo mutations. The tool uses `/` instead of `|` to indicate a violation of the Mendelian inheritance pattern, i.e. the presence of an un-inherited allele at these loci.
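Schematically, with made-up genotypes in father/mother/child order:

    phased by transmission (consistent inheritance):  0|0  0|1  0|1
    candidate de novo, left unphased by the tool:     0/0  0/0  0/1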
biostars
{"uid": 142884, "view_count": 3659, "vote_count": 3}
I was wondering which ways are available for getting metadata from accession numbers. I have seen tools such as Nextstrain make use of a so-called "metadata" file to describe the sequences used. The file looks something like this:

![Metadata for sequences][1]

It shows various data from NCBI for the accession numbers, such as virus strain, country, date, URL, etc. For me the most important ones are strain, country and date. Are there ways to download such data automatically when you have a list of accession numbers? Any help is appreciated.

[1]: https://imgur.com/a/uL3m7T5
Using [EntrezDirect][1] : $ esearch -db nuccore -query "KY317939" | esummary | xtract -pattern DocumentSummary -element SubName ZIKV/Homo_sapiens/Colombia/2016/ZC204Se|Homo sapiens|Colombia|serum|06-Jan-2016|Antibody Systems Inc Fields you are getting above are (separated by `|`) isolate|host|country|isolation_source|collection_date|collected_by [1]: http://bit.ly/entrez-direct
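For a whole list, a simple loop over the file works (assuming `accessions.txt` holds one accession per line; the `AccessionVersion` element in the extraction is an assumption on my part, so drop it if your docsums lack it):

    while read acc ; do
        esearch -db nuccore -query "$acc" |
            esummary |
            xtract -pattern DocumentSummary -element AccessionVersion SubName
    done < accessions.txt

For very long lists, `epost` with the whole ID list in one request is kinder to the NCBI servers than one query per accession.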
biostars
{"uid": 409082, "view_count": 1116, "vote_count": 2}
Dear Friends, I mostly script in shell/bash, as I am comfortable with it, and I also use a lot of Linux commands and Entrez utilities for various big-file analyses; however, since I started working on VCF files, dealing with millions of rows and columns, I have run into problems handling these files with bash. For instance, below is an example script to count the number of SNPs for each sample (TCGA...barcode...) in a VCF file:

    for i in {10..10000}
    do
    snp_count=$(zcat xxx.vcf.gz | grep -v "##" | awk "{print \$$i}" | grep -v "TCGA" | grep "0/1\|1/1" | wc -l)
    sample=$(zcat xxx.vcf.gz | grep -v "##" | awk "{print \$$i}" | grep "TCGA")
    echo -e "$sample\t$snp_count\n"
    done

Output:

    TCGA...barcode1 60123
    TCGA...barcode2 45245
    .
    .

The script works, but it has been running for 2 days now and will take more time. I would really appreciate it if you could share or suggest the best ways to deal with VCF files using bash/shell, or whether another programming language could make a difference. If possible, however, I would prefer bash scripting approaches to tackling VCF files. Thanks, DK
Here is a one-liner using bioalcidaejdk http://lindenb.github.io/jvarkit/BioAlcidaeJdk.html

    $ wget -O - "ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/release/20130502/ALL.chrY.phase3_integrated_v2a.20130502.genotypes.vcf.gz" |\
    gunzip -c |\
    java -jar dist/bioalcidaejdk.jar -e 'stream().flatMap(V->V.getGenotypes().stream()).filter(G->!(G.isHomRef()||G.isNoCall())).map(G->G.getSampleName()).collect(Collectors.groupingBy(Function.identity(),Collectors.counting())).forEach((k,v)->println(k+" "+v));' -F VCF

    (...)
    NA20852 441
    HG02489 1233
    HG02008 112
    NA11919 905
    HG02002 119
    HG01398 133
    HG03577 837
    HG02009 1217
    NA18549 819

In detail:

* `stream().` gets the stream of variants
* `flatMap(V->V.getGenotypes().stream()).` converts it to a stream of genotypes
* `filter(G->!(G.isHomRef()||G.isNoCall())).` removes the hom-ref and no-call genotypes
* `map(G->G.getSampleName()).` extracts the sample name from each genotype
* `collect(Collectors.groupingBy(Function.identity(),Collectors.counting())).` counts the occurrences of each distinct name
* `forEach((k,v)->println(k+" "+v));` prints it
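If you prefer to stay with standard command-line tools, `bcftools stats` reports the same information in its per-sample counts (PSC) section; the column positions below follow the PSC header of recent bcftools versions, so verify them against your own build:

    # $3 = sample, $5 = nNonRefHom, $6 = nHets, so $5+$6 = variant genotypes
    bcftools stats -s - xxx.vcf.gz | awk '$1=="PSC" {print $3, $5+$6}'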
biostars
{"uid": 326294, "view_count": 4718, "vote_count": 1}
I have a plink fileset (ped/map format) which I have subset to a list of SNPs, and now I want to know which genes these SNPs are related to. I did as the website suggested:

    plink --file mydata --assoc

and the `plink.assoc` file was created as it should be, but then when I run

    plink --noweb --gene-report plink.assoc --gene-list glist-hg18

the `plink.range.report` file, which should contain what I want, is empty. I have manually checked and confirmed that for at least one SNP there is one corresponding gene. When I run

    plink --noweb --gene-report plink.assoc --gene-list glist-hg18 --gene-report-empty

which is the opposite command, the output contains all the genes of the glist-hg18 file. At this point I am unable to think of a solution.
Can you check whether the latest PLINK 1.90 build has the same problem? (Note that it does not support the undocumented `--gene-report-empty` flag.) (Edit: oops, this should not have been an answer, but hopefully it will lead to one in short order.)
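In case it helps the comparison: the 1.9 syntax differs slightly from 1.07. As far as I recall, `--gene-report` there takes the report and the gene range file as two arguments (please confirm against the PLINK 1.9 documentation before relying on this):

    # hypothetical PLINK 1.9 invocation
    plink --gene-report plink.assoc glist-hg18 --out mydata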
biostars
{"uid": 151582, "view_count": 1676, "vote_count": 2}
Hi, I want to do a DE analysis using DESeq2. My experiment is a small RNA-seq experiment with 5 tissue samples, and I want to find the DE genes between the different tissues. I have the following matrix (as an example):

```
gene tissue1 tissue2 tissue3 tissue4 tissue5
gene1 233 91 17 593 93
gene2 1011 0 7 1 11
gene3 963 2 3 66 2
gene4 908 41 1 74 33
gene5 596 50 26 328 104
gene6 1 0 0 0 1111
gene7 202 187 35 425 277
gene8 985 24 10 76 33
gene9 523 87 32 286 203
gene10 822 82 23 120 87
```

My aim is to find the DE genes between each pair of columns, i.e. tissue1 vs tissue2, tissue1 vs tissue3, ..., tissue2 vs tissue3, ..., tissue4 vs tissue5. I don't fully understand the program; I tried following the vignette as follows:

```r
library("DESeq2")
CountTable = read.table("test.tsv", header=TRUE, row.names=1)
head(CountTable)
gene tissue1 tissue2 tissue3 tissue4
gene1 233 91 17 593
gene2 1011 0 7 1
gene3 963 2 3 66
gene4 908 41 1 74
gene5 596 50 26 328
gene6 1 0 0 0

colData = data.frame(
row.names= colnames(CountTable),
condition = c("tissue1", "tissue2", "tissue3", "tissue4", "tissue5"),
libType = c( "single-end", "single-end", "single-end", "single-end", "single-end"))

dds <- DESeqDataSetFromMatrix(
countData = CountTable,
colData = colData,
design = ~ condition)

dds <- DESeq(dds)
#estimating size factors
#estimating dispersions
#gene-wise dispersion estimates
#mean-dispersion relationship
#final dispersion estimates
#fitting model and testing
#Warning message:
#In checkForExperimentalReplicates(object, modelMatrix) :
#same number of samples and coefficients to fit,
#estimating dispersion by treating samples as replicates.
#read the ?DESeq section on 'Experiments without replicates'

res <- results(dds)
res
#log2 fold change (MAP): condition tissue5 vs tissue1
#Wald test p-value: condition tissue5 vs tissue1
#DataFrame with 927 rows and 6 columns
# baseMean log2FoldChange lfcSE stat pvalue padj
```

So in the results I am only getting the comparison between tissue5 and tissue1. What do I need to do to get the comparisons between each pair of tissues? Help is greatly appreciated.

P.S.: I am new to R and this is my first time using DESeq2.
Looking up 'results' in the DESeq2 manual at http://www.bioconductor.org/packages/release/bioc/manuals/DESeq2/man/DESeq2.pdf, I find the following information, which explains why you only get the first-vs-last condition comparison by default:

> The results table when printed will provide the information about the comparison, e.g. "log2 fold change (MAP): condition treated vs untreated", meaning that the estimates are of log2(treated/untreated), as would be returned by contrast=c("condition","treated","untreated"). **Multiple results can be returned for analyses beyond a simple two group comparison, so results takes arguments contrast and name to help the user pick out the comparisons of interest for printing a results table.** The use of the contrast argument is recommended for exact specification of the levels which should be compared and their order. **If results is run without specifying contrast or name, it will return the comparison of the last level of the last variable in the design formula over the first level of this variable.** For example, for a simple two-group comparison, this would return the log2 fold changes of the second group over the first group (the reference level). Please see examples below and in the vignette.

On a side note, did you look up the information in `?DESeq` on 'Experiments without replicates', as the warning message says? I copied a little bit from the manual:

> Experiments without replicates do not allow for estimation of the dispersion of counts around the expected value for each group, which is critical for differential expression analysis. If an experimental design is supplied which does not contain the necessary degrees of freedom for differential analysis, DESeq will provide a message to the user and follow the strategy outlined in Anders and Huber (2010) under the section 'Working without replicates', wherein all the samples are considered as replicates of a single group for the estimation of dispersion. As noted in the reference above: "Some overestimation of the variance may be expected, which will make that approach conservative." Furthermore, "while one may not want to draw strong conclusions from such an analysis, it may still be useful for exploration and hypothesis generation."
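Concretely, for the design in the question, any pairwise comparison can be pulled out with the `contrast` argument (the level names must match those in your `colData`):

    # each call returns log2 fold changes of the first listed level over the second
    res_1_vs_2 <- results(dds, contrast = c("condition", "tissue1", "tissue2"))
    res_1_vs_3 <- results(dds, contrast = c("condition", "tissue1", "tissue3"))
    res_4_vs_5 <- results(dds, contrast = c("condition", "tissue4", "tissue5"))
    # ...and so on for the remaining pairs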
biostars
{"uid": 145211, "view_count": 26532, "vote_count": 4}
Hi all, Usually get on well with Trimmomatic but am having issues removing adapters from some paired-end RNA-seq data. Here's my process so far: 1) Put together a PE adapter fasta for the adapters described by the sequencing company; ran trimmomatic; no reads dropped. 2) Ran Trim_galore! to automatically detect adapters. It detected the Illumina sequence `AGATCGGAAGAGC` in ~30% of reads. 3) I grepped for this sequence in the raw FASTQs (and used flag `-B 1` to also retrieve the metadata line). Sure enough, I could see this sequence in the grepped fastq entries, plus 20 bp following that were different for Read 1's vs Read 2's. (Images 1 and 3 respectively, see highlighted regions for an example.) 4) I searched the Illumina adapters PDF for these sequences, and they turn out to be a perfect match for the "IDT for Illumina TruSeq DNA and RNA UD Indexes" (see image 2). I then confirmed with the sequencing company that this was what they had in fact used - yes, the ones they initially sent me were wrong. Great, back to trimmomatic then! 5) Ran trimmomatic looking for these adapters (which I had confirmed with my own eyes were present in at least a handful of reads - hopefully you can confirm this for yourself using the screenshots provided). Zero reads dropped. I tried every combination of the adapter sequences I could think of, including complements with non-complements, read 2 adapters taking the place of read 1 adapters, etc, etc. Still zero reads dropped. (See below for a list of all pairs of adapter sequences I submitted to trimmomatic.) What am I doing wrong? I will likely just accept the results of trim_galore and move on, but this is driving me crazy as I can't see any reason why it isn't working. ADAPTER SEQUENCES TRIALLED: (formatting being mucked around by BioStars, the "Prefix.../1" was in fact on a separate line to the nucleotides and there were angle brackets (greater-than-signs) at the start of each "Prefix..." line.) >PrefixIllumina1/1 AGATCGGAAGAGCACACGTCTGAACTCCAGTCA >PrefixIllumina1/2 AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGT >PrefixIllumina2/1 AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGT >PrefixIllumina2/2 AGATCGGAAGAGCACACGTCTGAACTCCAGTCA >PrefixIllumina3/1 TCTAGCCTTCTCGTGTGCAGACTTGAGGTCAGT >PrefixIllumina3/2 TCTAGCCTTCTCGCAGCACATCCCTTTCTCACA >PrefixIllumina4/1 TCTAGCCTTCTCGCAGCACATCCCTTTCTCACA >PrefixIllumina4/2 TCTAGCCTTCTCGTGTGCAGACTTGAGGTCAGT >PrefixIlluminaOne/1 AGATCGGAAGAGCACACGTCTGAACTCCAGTCA >PrefixIlluminaOne/2 AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGT >PrefixIlluminaTwo/1 AGATCGGAAGAGCACACGTCTGAACTCCAGTCA >PrefixIlluminaTwo/2 TCTAGCCTTCTCGCAGCACATCCCTTTCTCACA >PrefixIlluminaThree/1 TCTAGCCTTCTCGTGTGCAGACTTGAGGTCAGT >PrefixIlluminaThree/2 TCTAGCCTTCTCGCAGCACATCCCTTTCTCACA >PrefixIlluminaFour/1 TCTAGCCTTCTCGTGTGCAGACTTGAGGTCAGT >PrefixIlluminaFour/2 AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGT` ![Screenshot of example read 1's][1] ![Illumina adapter PDF screenshot][2] ![Screenshot of example read 2's][3] [1]: /media/images/55c5783b-8f08-408b-9074-b0d9e60a [2]: /media/images/315841a2-8364-4929-bf81-4c5c62e4 [3]: /media/images/7d5390dd-a9d4-4306-b04a-0b8f5725
I think you are overcomplicating things; make a simple adapter file like so:

    >adapter
    AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC

then cut with that. Trimmomatic has an additional behavior called palindrome trimming, and I think naming your adapter entries with trailing /1 and /2 (as in your `Prefix.../1` headers) triggers that behavior.
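For instance, something like this (the jar path and read file names are placeholders; the `2:30:10` in ILLUMINACLIP are the commonly used seed-mismatch, palindrome and simple clip thresholds):

    java -jar trimmomatic-0.39.jar PE \
        sample_R1.fastq.gz sample_R2.fastq.gz \
        R1.paired.fq.gz R1.unpaired.fq.gz \
        R2.paired.fq.gz R2.unpaired.fq.gz \
        ILLUMINACLIP:adapter.fa:2:30:10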
biostars
{"uid": 9497948, "view_count": 2734, "vote_count": 1}
Hello, I was wondering what is the quickest way to get a listing of the human gene symbols from RefSeq, build 37. Thanks in advance for your suggestions. Fred
    # refGene.txt is tab-separated; field 13 is name2, the gene symbol
    curl -s "http://hgdownload.cse.ucsc.edu/goldenPath/hg19/database/refGene.txt.gz" |\
    gunzip -c | cut -f 13 |\
    sort -u
biostars
{"uid": 62532, "view_count": 6652, "vote_count": 2}
Hello there, I did a standard GSEA on my microarray expression data. As far as I can tell, the GSEA tool provides enrichment plots only for the top 20 enriched pathways. How can I produce plots for the other pathways which are also significantly enriched (adj. p-val < 0.05) in my dataset according to GSEA? I am trying the *fgsea* package in R. Could anyone please give an example of how the input should look for plotEnrichment()? What's wrong with my input? I want to plot the TGFb gene set, and this is my script:

`plotEnrichment(tgfb_gene, gene_list, gseaParam = 1, ticksSize = 0.2)`

So for the first argument (*examplePathway*), I used this list:

> tgfb_gene $HALLMARK_TGF_BETA_SIGNALING [1] "TGFBR1" "SMAD7" "TGFB1" "SMURF2" "SMURF1" "BMPR2" "SKIL" [8] "SKI" "ACVR1" "PMEPA1" "NCOR2" "SERPINE1" "JUNB" "SMAD1" [15] "SMAD6" "PPP1R15A" "TGIF1" "FURIN" "SMAD3" "FKBP1A" "MAP3K7" [22] "BMPR1A" "CTNNB1" "HIPK2" "KLF10" "BMP2" "ENG" "APC" [29] "PPM1A" "XIAP" "CDH1" "ID1" "LEFTY2" "CDKN1C" "TRIM33" [36] "RAB31" "TJP1" "SLC20A1" "CDK9" "ID3" "NOG" "ARID4B" [43] "IFNGR2" "ID2" "PPP1CA" "SPTBN1" "WWTR1" "BCAR3" "THBS1" [50] "FNTA" "HDAC1" "UBE2D3" "LTBP2" "RHOA"

and for the second argument (*exampleRanks*) I used a **vector** of the **logFC** values (output of the differential gene expression analysis), with the gene names as the names of the vector. But I got this error and warnings:

    Error: Aesthetics must be either length 1 or the same as the data (1): x, xend
    In addition: Warning messages:
    1: In max(tops) : no non-missing arguments to max; returning -Inf
    2: In min(bottoms) : no non-missing arguments to min; returning Inf
    3: In max(tops) : no non-missing arguments to max; returning -Inf
    4: In min(bottoms) : no non-missing arguments to min; returning Inf
    5: In max(tops) : no non-missing arguments to max; returning -Inf
    6: In min(bottoms) : no non-missing arguments to min; returning Inf

Is there any other way to produce or re-plot GSEA results? I really appreciate any help!!
I have been working on GSEA too and found two very interesting resources, though I am not that proficient with this kind of analysis myself, so I am still learning to interpret the plots. Here you can find a [very nice tutorial for R][1] that includes some plotting. There are also a [question][2] and another [tutorial][3] about how to interpret the graphical results. Finally, this is an [article][4] about GSEA. Hope this helps!

[1]: https://bioinformatics-core-shared-training.github.io/cruk-summer-school-2018/RNASeq2018/html/06_Gene_set_testing.nb.html#gsea-analysis
[2]: https://www.biostars.org/p/367191/
[3]: http://software.broadinstitute.org/gsea/doc/GSEAUserGuideFrame.html
[4]: https://www.pnas.org/content/102/43/15545
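On the error itself: `plotEnrichment()` expects a plain character vector of genes as its first argument and a named numeric vector as its second. Passing the whole list, as in the question, leaves no genes matching the rank names, which would produce exactly those `max(tops)`/`min(bottoms)` warnings. A sketch of the likely fix:

    library(fgsea)

    # gene_list: named numeric vector of logFCs, sorted in decreasing order
    gene_list <- sort(gene_list, decreasing = TRUE)

    # pass the character vector *inside* the list, not the list itself
    plotEnrichment(tgfb_gene[["HALLMARK_TGF_BETA_SIGNALING"]],
                   gene_list, gseaParam = 1, ticksSize = 0.2)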
biostars
{"uid": 408313, "view_count": 6411, "vote_count": 2}
Hello Biostars Community, I have some general questions, specifically regarding sequencing platforms, plus some related questions on sequencing itself.

1. Could using a newer/the newest computational genome/annotation (for example, presently, Ensembl 107 or the newest GENCODE version) adversely affect the actual truth of what was sequenced?
2. When sequencing is done on an Illumina machine or other big-name platforms, are those sequencing platforms *completely independent* of the genome or DNA/cDNA being sequenced?
3. What happens if, for example, "famous gene ABC" and "low-profile gene XYZ" are found by some new discovery to have different 3' and/or 5' ends? On Illumina, would adapters still ligate to them for the bridge-PCR reactions in the flow cell lanes, or should all the data published before on "famous gene ABC" and "low-profile gene XYZ" be revisited? Or is it like question #2, "completely independent": are even adapters independent of genes?

I was doing some reading, and I gather the gene sequence really only matters for probe-based assays (chips and arrays?). Hopefully this question could be a good resource for others? Thank you in advance. - Pratik
A newer genome annotation/curation is **independent** of, and would not affect, the actual nucleic acids sequenced by the machine (unless they were captured using a targeted approach **prior** to sequencing, e.g. whole-exome sequencing). Whole-genome sequencing will include ALL fragments present (regardless of genome/annotation), assuming they are physically "able to be sequenced" (see homopolymers, etc. in the comment above).

**Adapters are independent of genes.**

> I was doing some reading, and I gather the gene sequence really only matters for probe-based assays (chips and arrays?). Hopefully this question could be a good resource for others?

You are correct.
biostars
{"uid": 9539218, "view_count": 505, "vote_count": 2}
Hi, I am shuffling reads from a ChIP-seq experiment over some features in the genome, and since I do not know the underlying distributions, I am trying an observed-vs-expected calculation to see which features are enriched. Boiled down, this means I count the reads mapping to feature x, then shuffle the reads over the entire genome and count again, which gives the number of reads mapping to feature x by chance. Unfortunately, I am not quite sure how to get a p-value from this. I hope you can help me with this. Thank you very much!
This looks like a permutation test. Your p-value would be the number of permutations that give at least as many reads mapping to the feature under consideration as you observed, divided by the number of permutations. And if you're doing multiple tests (i.e. testing many features), you should correct for multiple testing. Bonferroni is fine but too conservative; you should prefer FDR (e.g. Benjamini-Hochberg).
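In R this is only a few lines. The counting function below is a placeholder for your own shuffle-and-count step, and the +1 in numerator and denominator is the usual correction so an empirical p-value is never exactly zero:

    # counts of reads in feature x under each of 10,000 genome-wide shuffles
    perm_counts <- replicate(10000, shuffle_and_count_feature_x())  # placeholder
    observed    <- 1234                                             # placeholder

    p <- (sum(perm_counts >= observed) + 1) / (length(perm_counts) + 1)

    # across many features, collect the p-values and then:
    # p.adjust(pvals, method = "BH")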
biostars
{"uid": 254121, "view_count": 3219, "vote_count": 1}
Hi all, hope someone can help. We can pass a region to [samtools](http://samtools.sourceforge.net/) view, such as 1:1000-1010. Is it possible to pass a list of regions to samtools view and have the output go into a single [SAM](http://samtools.sourceforge.net/SAM1.pdf) file, i.e. (something like):

    $ samtools view -h file.bam regionlist.txt > mysam.sam

What would the syntax for this be? Also, what format should the region list file be in? Thanks, a
Samtools has the `-L` option:

    samtools view -b -L ROI.bed file.bam > ROI_file.bam

BUT `-L` does not use the samtools index, so the search is slooow depending on how large your BAM file is. In my benchmarks, querying for 10, 100 or 1000 regions takes exactly the same time, so the BED size does not seem to matter much. You need to do a bit of benchmarking to decide whether you prefer `-L` or a loop (for a BAM with 23 million reads, `-L` took the same time as writing all the looping machinery ;-) ). YMMV.

    $ time samtools view -b -L 100_ROI.bed file.bam > ROI.bam

    real 0m38.831s
    user 0m37.666s
    sys 0m0.556s

    $ time (samtools view -H file.bam > roi_xargs.sam; \
    cat 100_ROI.bed | perl -lane 'print "$F[0]:$F[1]-$F[2]"' | xargs -n1 -t -I{} samtools view file.bam {} >> roi_xargs.sam; \
    samtools view -bSh roi_xargs.sam > roi_xargs.bam \
    )

    real 0m7.188s
    user 0m5.080s
    sys 0m1.304s

And if you feel adventurous and your disks use Isilon or Lustre, you can use GNU parallel, forget about same-file concurrency, and do it in parallel in a breeze. The only issue is that you would then need to merge the BAMs afterwards.
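A note for readers on newer samtools: if I remember correctly, version 1.9 added a `-M` flag that makes `-L` use the index through the multi-region iterator, removing most of the speed penalty; check `samtools view --help` on your build:

    samtools view -b -M -L ROI.bed file.bam > ROI_file.bam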
biostars
{"uid": 49306, "view_count": 30291, "vote_count": 7}
Hello, I am doing an RNA-seq analysis in R using the limma package. I have RNA-seq data from different mutant lines of a model organism, which show two different phenotypes in comparison with their siblings as controls. The samples are as follows:

- line 1, phenotype 1, mutant and sibling
- line 2, phenotype 1, mutant and sibling
- line 3, phenotype 2, mutant and sibling
- line 4, phenotype 2, mutant and sibling
- line 5, phenotype 2, mutant and sibling

That means lines 1 and 2 have the similar phenotype 1, and lines 3, 4 and 5 have the similar phenotype 2, which differs from phenotype 1. Let's say phenotype 1 has more cells than the control and phenotype 2 has fewer cells than the control. I did the RNA-seq analysis for each line, but now I want to compare the lines within one phenotype: I want to find the genes which differ in the lines with the same phenotype in comparison to the siblings. My makeContrasts command looks like this:

    cont.matrix <- makeContrasts(pheno1 = (line1.mut+line2.mut)-(line1.sib+line2.sib),
        pheno2 = (line3.mut+line4.mut+line5.mut)-(line3.sib+line4.sib+line5.sib),
        levels=design)

Finally, I want to find genes (e.g. genes involved in the cell cycle) which are up-regulated in phenotype 1 and down-regulated in phenotype 2, or the other way around.

    cont.matrix <- makeContrasts(pheno1vspheno2 = ((line1.mut+line2.mut)-(line1.sib+line2.sib)) - ((line3.mut+line4.mut+line5.mut)-(line3.sib+line4.sib+line5.sib)),
        levels=design)

I am not sure whether I can build a contrast matrix like this. Do you think it will give me the genes I am interested in?
Let's suppose there's a gene that is 2-fold upregulated in phenotype1 relative to its siblings and that is also 2-fold upregulated in phenotype2 relative to its siblings (regardless of the lines). So this is a gene where the phenotype-induced change in expression is identical. log2(2) = 1, so we expect line1.mut - line1.sib ~ 1, line2.mut - line2.sib ~ 1, line3.mut - line3.sib ~ 1 and so on.

Now your contrasts:

    pheno1 = (line1.mut+line2.mut)-(line1.sib+line2.sib)
           = (line1.mut - line1.sib) + (line2.mut - line2.sib)
           ~ 1 + 1 = 2

    pheno2 = (line3.mut+line4.mut+line5.mut)-(line3.sib+line4.sib+line5.sib)
           = (line3.mut - line3.sib) + (line4.mut - line4.sib) + (line5.mut - line5.sib)
           ~ 1 + 1 + 1 = 3

and as a result:

    pheno1vspheno2 = ((line1.mut+line2.mut)-(line1.sib+line2.sib)) - ((line3.mut+line4.mut+line5.mut)-(line3.sib+line4.sib+line5.sib))
                   = pheno1 - pheno2
                   ~ 2 - 3 = -1

So, although this gene was induced to the exact same extent in mutant vs sibling in each of your individual lines, there appears to be a two-fold-lower induction in phenotype1 relative to phenotype2. So something is wrong with the maths underlying your contrasts. To prevent this, you should normalise your contrasts by the number of lines:

    pheno1 = (line1.mut+line2.mut) / 2 - (line1.sib+line2.sib) / 2
    pheno2 = (line3.mut+line4.mut+line5.mut) / 3 - (line3.sib+line4.sib+line5.sib) / 3
    pheno1vspheno2 = ((line1.mut+line2.mut)-(line1.sib+line2.sib)) / 2 - ((line3.mut+line4.mut+line5.mut)-(line3.sib+line4.sib+line5.sib)) / 3
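Putting that back into makeContrasts form (the same coefficients as in the question, just with the per-phenotype averaging built in):

    cont.matrix <- makeContrasts(
      pheno1 = (line1.mut + line2.mut)/2 - (line1.sib + line2.sib)/2,
      pheno2 = (line3.mut + line4.mut + line5.mut)/3 -
               (line3.sib + line4.sib + line5.sib)/3,
      pheno1vspheno2 = ((line1.mut + line2.mut) - (line1.sib + line2.sib))/2 -
                       ((line3.mut + line4.mut + line5.mut) -
                        (line3.sib + line4.sib + line5.sib))/3,
      levels = design)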
biostars
{"uid": 435540, "view_count": 1314, "vote_count": 3}
Hi everyone, I am using the TCGA portal to get mRNA expression data for various cancer studies (e.g. lung, liver, thyroid etc.). I have two questions about the data:

1. Some cancer studies on TCGA have "mRNA expression RNASeq V2 RSEM" values and corresponding "z-scores". I am confused about what the "mRNA expression z-scores (RNA Seq V2 RSEM)" data consists of. How are the z-scores calculated and what do they represent?
2. We have been on the lookout for control datasets for the cancer studies on TCGA. Does anyone know of a good place to find control datasets for tissues like lung, liver, thyroid etc. (basically all the foregut tissues)? We are working with control data from [GTEx][1], but they have RPKM values while TCGA has RSEM / RSEM z-scored values, so we have to do a lot of scaling/normalization/transformation to compare these disparate datasets. We would like to know if there is any mRNA expression data (obtained via RNASeq V2 RSEM) for controls.

Thanks!

UPDATE: I have posted the second part as a separate question [here][2].

[1]: http://www.gtexportal.org/home/
[2]: https://www.biostars.org/p/107594/
I will attempt to answer part 1 of your question: a z-score for a sample indicates the number of standard deviations away from the mean expression in the reference. The formula is:

    z = (expression in tumor sample - mean expression in reference sample) / standard deviation of expression in reference sample

TCGA states:

> For mRNA and microRNA expression data, we typically compute the relative expression of an individual gene and tumor to the gene's expression distribution in a reference population. **That reference population is either all tumors that are diploid for the gene in question, or, when available, normal adjacent tissue.** (It is not always clear what the cell of origin of a tumor is, so the mRNA expression in normal adjacent tissue can sometimes be misleading, which is why expression is sometimes compared within the set of tumors only).

As for part 2, CPM (counts per million) data for each gene and sample would be ideal for cross-sample comparisons, but I am not sure where you could get such data. Maybe you should post this as a separate question?
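As a toy illustration of that formula in R (the numbers are made up; `ref_expr` would be the gene's expression across the diploid tumors or normal-adjacent samples):

    ref_expr   <- c(8.1, 7.9, 8.4, 8.0, 8.2)  # reference expression for one gene
    tumor_expr <- 9.6                          # same gene in the tumor of interest

    z <- (tumor_expr - mean(ref_expr)) / sd(ref_expr)
    z  # number of SDs above the reference mean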
biostars
{"uid": 107519, "view_count": 48457, "vote_count": 13}
How much does it cost to align 8 lanes of a HiSeq 2000 flowcell paired-end run, consisting of 6 billion 100 bp human genomic reads (600 Gbp of output), using Amazon EC2 and associated storage?

Assume an aligner such as BWA with default parameters, aligned against a human reference. Please compute the cost and time from upload of FASTQ files to download of BAM files, inclusive.

"It depends" is not a useful answer: if there is a factor such as instance type or EBS vs S3, please select a sensible option and provide a quote.
[Konrad Karczewski](https://twitter.com/konrad_jk) of [stormseq](http://stormseq.org) said in a recent tweet that "a full genome (30X coverage) is about $30 or so, exome (80X coverage) is around $2-3." I think this is from fastq->vcf using BWA and GATK.

Assuming an 80X exome is about 40M reads, a $2.50 exome comes out to about $6.25/hmmvr (hundred million mapped variant reads - my awesome new metric), or about $375 for a HiSeq 2k flowcell. Granted this is going all the way to VCF, but that number is in line with what I've heard elsewhere.

I think the BWA->GATK pipeline is commoditized enough that providers should be using these $/hmmvr on [scienceexchange](http://www.scienceexchange.com). Then labs will have something to compare it with, if they don't want to try AWS themselves.
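For what it's worth, the arithmetic behind those figures checks out:

    $2.50 per exome / (40M reads / 100M reads per hmmvr) = $6.25 per hmmvr
    6 billion reads per flowcell = 60 hmmvr; 60 x $6.25 = $375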
biostars
{"uid": 50759, "view_count": 8907, "vote_count": 12}
Hi. I made VCF files using UnifiedGenotyper in GATK and found that many variants had 1/2 in the genotype (GT) field. As I understand it, 0/0 = homozygous reference, 0/1 = heterozygous, 1/1 = homozygous alternate. Does anyone know what exactly 1/2 means? Thank you!
There are apparently two alternate alleles at this position; the numbers index the ALT column, so 1 is the first ALT allele and 2 the second. If the reference is `C` and the ALT field is `A,G`, then perhaps you have one `A` and one `G`.
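Schematically, on a made-up VCF line:

    #CHROM  POS   ID  REF  ALT  ...  FORMAT  sample1
    chr1    1000  .   C    A,G  ...  GT      1/2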
biostars
{"uid": 187068, "view_count": 7850, "vote_count": 2}
I have to calculate the Shannon entropy for a given list of sequences in a fasta file. Recently I came across a program called RNAentropy that solves my problem, but the issue is that I cannot run sequences in batch: in the documentation, the input is given as a single sequence string. The parameters in the documentation are as follows:

    ./RNAentropy "sequence"
        -s sequence
        -t temperature
        -e energyModel (default 2004)
        -d delta_Temp (compute structural entropy using <E> = RT² * d/dT ln(Z(T)) )
        -c (centered)
        -z energy_is_zero [0|1] (default 0)
        -v (verbose, extended output)
    ./RNAentropy -h (detailed help)

    Input parameters are:
    <sequence>         : If the FIRST argument is a valid nucleotide sequence it will be used as input
    -s <sequence>      : Alternative flag for input sequence
    -t <temperature>   : Temperature in ºC (default is 17ºC)
    -e <energy_model>  : Thermodynamic energy model used. Valid values for energy model are 1999, 2004 and 2007
                         (resp. Turner'99, Turner'04 and Andronescu '07) (default is 2004)
    -d <delta_T>       : Temperature variation used for estimating d/dT ln(Z(T))
                         NOTE: If this parameter is provided, H is computed estimating <E> = RT² * d/dT ln(Z(T))
    -c                 : (Use only in combination with -d) Use the centered version for estimating <E> = RT² * d/dT ln(Z(T))
    -v                 : Output includes the method for computing H and the names of the output parameters
    -z <energy_is_zero>: [0|1] If value is 1, energies are set to 0, output is structural entropy for the uniform case

Can anyone please suggest a way to do this for a multi-fasta? Is there a way to iterate in a shell script?
I would use a Perl (or Python) script to parse the multi-fasta and run your tool, like:

    #!/usr/bin/perl
    use strict;
    use warnings;

    $/="\n>"; # read fasta sequences in blocks
    while (<>) {
        s/>//g;
        my ($id, @seq) = split (/\n/, $_);
        my $seq = join "", @seq; # rebuild the sequence as one single line
        print "$id\n"; # indicate which sequence is being analyzed
        system("./RNAentropy -s $seq"); # run the command, add params if needed
    }

then you can run it as `perl analyzeSeq.pl < multifasta_in > output_file`
biostars
{"uid": 406352, "view_count": 1359, "vote_count": 2}
Hi, I am very new to microarray analysis. I have some cancer and control samples (IDAT files) from an Illumina BeadChip to analyze. I was following the limma tutorial ( https://www.bioconductor.org/packages/release/bioc/vignettes/limma/inst/doc/usersguide.pdf ), page 107. When I check the EListRaw object, I see lots of ERCC entries as well, which I did not expect!! (Code and results are below.)

1. Why do I have spike-ins?
2. How can I remove them? (Should I actually remove them?)

I ignored this and went through the downstream analysis, but the FDR values I get are very high (0.5 and above), which I think might be the result of me not removing the ERCCs.

    $ idatfiles = dir("path", pattern = "idat",full.names = TRUE)
    $ bgxfile <- "my.bgx"
    $ x = read.idat(idatfiles, bgxfile)
    $ x$other$Detection <- detectionPValues(x)
    $ table(x$genes$Status)

    > biotin cy3_hyb ERCC-00002-02 ERCC-00003-01
    2 6 1 1
    ERCC-00004-01 ERCC-00009-01 ERCC-00012-01 ERCC-00013-01
    1 1 1 1
    ERCC-00014-02 ERCC-00016-01 ERCC-00017-02 ERCC-00019-01
    1 1 1 1
    ERCC-00022-02 ERCC-00024-02 ERCC-00025-01 ERCC-00028-02
    1 1 1 1
    ERCC-00031-02 ERCC-00033-01 ERCC-00034-02 ERCC-00035-02
    1 1 1 1
    ERCC-00039-01 ERCC-00040-01 ERCC-00041-01 ERCC-00042-01
    1 1 1 1
    ERCC-00043-01 ERCC-00044-02 ERCC-00046-01 ERCC-00048-01
    1 1 1 1
    ERCC-00051-01 ERCC-00053-01 ERCC-00054-01 ERCC-00057-01
    1 1 1 1
    ERCC-00058-02 ERCC-00059-01 ERCC-00060-01 ERCC-00061-02
    1 1 1 1
    ERCC-00062-01 ERCC-00067-02 ERCC-00069-02 ERCC-00071-01
    1 1 1 1
    ERCC-00073-01 ERCC-00074-01 ERCC-00075-01 ERCC-00076-02
    1 1 1 1
    ERCC-00077-01 ERCC-00078-01 ERCC-00079-01 ERCC-00081-02
    1 1 1 1
    ERCC-00083-01 ERCC-00084-01 ERCC-00085-01 ERCC-00086-01
    1 1 1 1
    ERCC-00092-02 ERCC-00095-01 ERCC-00096-02 ERCC-00097-01
    1 1 1 1
    ERCC-00098-02 ERCC-00099-01 ERCC-00104-01 ERCC-00108-02
    1 1 1 1
    ERCC-00109-02 ERCC-00111-01 ERCC-00112-02 ERCC-00113-01
    1 1 1 1
    ERCC-00116-02 ERCC-00117-02 ERCC-00120-01 ERCC-00123-01
    1 1 1 1
    ERCC-00126-02 ERCC-00130-01 ERCC-00131-02 ERCC-00134-01
    1 1 1 1
    ERCC-00136-01 ERCC-00137-02 ERCC-00138-01 ERCC-00142-02
    1 1 1 1
    ERCC-00143-01 ERCC-00144-02 ERCC-00145-01 ERCC-00147-01
    1 1 1 1
    ERCC-00148-01 ERCC-00150-01 ERCC-00154-02 ERCC-00156-01
    1 1 1 1
    ERCC-00157-02 ERCC-00158-01 ERCC-00160-02 ERCC-00162-01
    1 1 1 1
    ERCC-00163-01 ERCC-00164-01 ERCC-00165-01 ERCC-00168-01
    1 1 1 1
    ERCC-00170-01 ERCC-00171-01 housekeeping labeling
    1 1 7 2
    low_stringency_hyb negative regular
    8 770 47231
You need to leave the control probes ***in*** the data for the purposes of background correction and normalisation. Then, if you perform background correction and normalisation via `neqc()`, these control probes should be automatically removed. After normalisation, you can do further filtering based on the detection p-values. Any other control probes that still remain in the data may be identified via `x$genes$Source`. Others that can be filtered out include those with no gene symbol (`x$genes$Symbol == ""`). Thus, the filtering that I perform post-normalisation looks like this:

    Control <- project.bgcorrect.norm$genes$Source=="ILMN_Controls"
    NoSymbol <- project.bgcorrect.norm$genes$Symbol == ""
    isexpr <- rowSums(project.bgcorrect.norm$other$Detection <= 0.05) >= 3

    project.bgcorrect.norm.filt <- project.bgcorrect.norm[!Control & !NoSymbol & isexpr, ]
    dim(project.bgcorrect.norm)
    dim(project.bgcorrect.norm.filt)

Kevin
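For completeness, the `neqc()` step producing the object used above would look something like this (the detection p-value column name follows the `detectionPValues()` call in the question; check `?neqc` for the exact arguments in your limma version):

    # background-correct and quantile-normalise; control probes are used
    # for the correction and then dropped from the returned object
    project.bgcorrect.norm <- neqc(x, detection.p = "Detection")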
biostars
{"uid": 413847, "view_count": 870, "vote_count": 1}
<p>Are there any programs available (Python, Perl, whatever), that can take a degenerate nucleotide sequence and translate it into its multiple possible oligos?</p> <p>Any help would be appreciated.</p>
This program should do the job:

```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h> /* needed for toupper */

/* write 'base' at position 'index' and recurse on the remaining positions */
#define RUN(base) copy[index]=base; recurs(seq,copy,index+1,len)

static void recurs(const char *seq,char* copy,int index,const int len)
    {
    if(index==len)
        {
        /* one complete oligo has been built: print it */
        fwrite(copy,sizeof(char),len,stdout);
        fputc('\n',stdout);
        }
    else
        {
        switch(toupper(seq[index]))
            {
            case 'A':case 'T': case 'G': case 'C':
                RUN(seq[index]);
                break;
            case 'N':RUN('A');RUN('T');RUN('G');RUN('C');break;
            case 'W':RUN('A');RUN('T');break;
            case 'S':RUN('G');RUN('C');break;
            case 'B':RUN('T');RUN('G');RUN('C');break;
            case 'D':RUN('A');RUN('T');RUN('G');break;
            case 'H':RUN('A');RUN('T');RUN('C');break;
            case 'V':RUN('A');RUN('G');RUN('C');break;
            case 'K':RUN('G');RUN('T');break;
            case 'M':RUN('A');RUN('C');break;
            case 'R':RUN('A');RUN('G');break;
            case 'Y':RUN('C');RUN('T');break;
            default:
                fprintf(stderr,"Bad base in %s (%c)\n",seq,seq[index]);
                exit(EXIT_FAILURE);break;
            }
        }
    }

int main(int argc,char** argv)
    {
    char* seq;
    int len;
    if(argc!=2)
        {
        fprintf(stderr,"Usage : %s <dna>\n",argv[0]);
        return EXIT_FAILURE;
        }
    seq=argv[1];
    len=strlen(seq);
    char* copy=malloc((len+1)*sizeof(char));
    if(copy==NULL)
        {
        fprintf(stderr,"Out of memory\n");
        exit(EXIT_FAILURE);
        }
    copy[len]='\0';
    recurs(seq,copy,0,len);
    free(copy);
    return 0;
    }
```

**Compilation**:

    gcc -O3 -o prg prg.c

**test**

```
> ./prg ATGCTGATCGAGCTANATCGATCGGACTACY
ATGCTGATCGAGCTAAATCGATCGGACTACC
ATGCTGATCGAGCTAAATCGATCGGACTACT
ATGCTGATCGAGCTATATCGATCGGACTACC
ATGCTGATCGAGCTATATCGATCGGACTACT
ATGCTGATCGAGCTAGATCGATCGGACTACC
ATGCTGATCGAGCTAGATCGATCGGACTACT
ATGCTGATCGAGCTACATCGATCGGACTACC
ATGCTGATCGAGCTACATCGATCGGACTACT
```
biostars
{"uid": 6219, "view_count": 9317, "vote_count": 2}