I simulated reads for a list of variant sites using the GATK SimulateReadsForVariants tool. From that, I get a bam file output. Next I create a pileup using samtools mpileup:

    samtools mpileup -t DP,DPR,DV,INFO/DPR -vuf GRCh37.fa -l snp.file.vcf simreads.bam > simreads.raw.vcf

The problem is that the output is not giving me correct counts at the variant sites. Here's an example of the first two SNP sites from the pileup.

```
#CHROM POS ID REF ALT QUAL FILTER INFO FORMAT NA12878
1 837214 . G <X> 0 . DP=20;I16=0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;QS=0,0;MQ0F=0;DPR=0,0 PL:DP:DV:DPR 0,0,0:0:0:0,0
1 851390 . G <X> 0 . DP=20;I16=0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;QS=0,0;MQ0F=0;DPR=0,0 PL:DP:DV:DPR 0,0,0:0:0:0,0
```

So why would all the counts (with the exception of the DP tag) be 0? I opened up the bam file in IGV to check the first couple of sites and the reads do pile up there as they should.
I figured out the solution myself. I had simulated more error in my reads than I initially thought. For anyone else experiencing a similar issue, check the `-Q` / `--min-BQ` mpileup flag. By default, samtools does not count reads toward the other tags unless they are "high quality" (i.e. have a minimum base quality of 13). Depending on your experiment, if your only interest is getting all the reads counted, then consider using `--min-BQ 1`.
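For example, a minimal sketch of the command from the question with the threshold lowered (same files and tags as above):

    samtools mpileup -t DP,DPR,DV,INFO/DPR --min-BQ 1 -vuf GRCh37.fa -l snp.file.vcf simreads.bam > simreads.raw.vcf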
biostars
{"uid": 153026, "view_count": 3328, "vote_count": 1}
Hi, I have some RNA-seq samples that I want to normalize and then output RPKM expression, but I am unsure how to do this. This is my pipeline so far: Normalise raw read counts with TMM in edgeR

    expr <- DGEList(counts=data, group=conditions)
    expr <- calcNormFactors(expr)

output:

    $samples
            group lib.size norm.factors
    Sample1     F 19770521    1.0462660
    Sample2     F 17970679    0.8794805
    Sample3     F 19184265    1.0573665

QUESTION: How do I get normalized raw read counts from this? Do I multiply the read counts by the norm.factors?

QUESTION: Ultimately, I want to end up with RPKM values for each gene in each sample. I know I can use the rpkm() function below in edgeR

    expr_norm <- rpkm(expr, log=FALSE, gene.length=vector)

but is expr the output from calcNormFactors or something else? Thanks for your help! A
The library-size-normalized counts are made by dividing the counts by the normalization factors (you'll note that the larger libraries have larger normalization factors, so if you multiplied instead you would just inflate the difference in sequencing depth). For the RPKMs, just do `rpkm(expr, gene.length=vector)`, since it can take your `DGEList` directly (this is explained in `help(rpkm)`).
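Putting the whole pipeline together, a minimal sketch in edgeR (using `data`, `conditions` and `vector` from your own code above):

```r
library(edgeR)

expr <- DGEList(counts = data, group = conditions)
expr <- calcNormFactors(expr)                  # adds TMM normalization factors

cpm_norm  <- cpm(expr)                         # library-size-normalized counts (per million)
rpkm_vals <- rpkm(expr, gene.length = vector)  # uses the norm factors stored in the DGEList
```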
biostars
{"uid": 99310, "view_count": 25909, "vote_count": 8}
Hi, My question may seem so simple. Could you tell me what is the difference between aligning and mapping the short reads to the reference genome? And also what is the difference between Pairwise alignment, Multiple sequence alignment and Short-Read Sequence Alignment? Thanks
For your second question, pairwise alignment (e.g., Smith and Waterman) is between two sequences and multiple sequence alignment between more than 2 sequences (e.g., clustalW). Short read aligners are usually pairwise!
biostars
{"uid": 180986, "view_count": 19615, "vote_count": 30}
This seems like simple maths, but I always get confused by the terminology of **"Fold change", "Log Fold Change" *(LFC)*, and "Percentage of Change"**. When I see some of the figure descriptions given for **volcano plots**, especially the X axis, I get confused, i.e. [Page 64][1] <a href="https://ibb.co/LtmQF2Q"><img src="https://i.ibb.co/zRp8M08/Capture.jpg" alt="Capture" border="0"></a> When I google, I found [this figure][2] easy to understand, but the figure description does not make sense to me. <a href="https://ibb.co/gFcsTDw"><img src="https://i.ibb.co/vvM2PJm/volcano-plot-chart-static.png" alt="volcano-plot-chart-static" border="0"></a> For e.g., if we take a two-fold change, isn't this the way we calculate LFC: LFC = Log2(FoldChange) = Log2(2) => 1. If the X axis is in log2 fold change, shouldn't a two-fold change be denoted at **±1**? Why do they denote the 2-fold change cutoff line at **±2**?

-----------------

**Below VP: Fold Change greater than 2 times and significant P value less than 0.05**

See this example: ![enter image description here][3] and this: ![enter image description here][4]

Could someone help me to understand this terminology (**"Fold change", "Log Fold Change" *(LFC)*, and "Percentage of Change"**) easily with an example?

[1]: http://www.nathalievialaneix.eu/doc/pdf/tutorial-rnaseq.pdf
[2]: https://documentation.partek.com/display/FLOWDOC/Volcano+Plot
[3]: https://cdn.xlstat.com/uploads/assets/Tutorials/DifferentialExpression/differentialExpression06.png
[4]: http://chenyuan.date/images/Volcano-plot-unnamed-chunk-4-1.png
All the figures you posted above show the same thing, which is Log2(FoldChange). Although the 1st figure's x axis is labeled "Fold Change", it is Log2(FoldChange). Without the log you cannot have negative values (which indicate down-regulated genes). One of the reasons to use log2(FoldChange) rather than FoldChange alone is to put UP- and DOWN-regulated genes on the same magnitude scale. See the example below. Let's say you have 2 genes, **GeneA** and **GeneB**. **GeneA** has an FPKM (or RPKM) value of **100 in control** and **200 in treatment**. `FoldChange for GeneA is: (treatment/control = 200 / 100 = 2)`. **GeneB** has an FPKM (or RPKM) value of **200 in control** and **100 in treatment**. `FoldChange for GeneB is: (treatment/control = 100 / 200 = 1/2)`. As you can see from the FoldChange values, **GeneA is 2-fold UP** while **GeneB is 1/2 (0.5) fold DOWN**. Comparing the values 2 and 0.5 is less interpretable for indicating UP and DOWN gene regulation. However, putting both UP and DOWN on the log2 scale makes the comparison easy. For e.g., log2FC for GeneA is 1 and for GeneB is -1, which makes the fold changes comparable and easy to plot (volcano, heatmap etc.). In most RNA-seq analyses people use a 1.5 or 2 FoldChange cutoff, which means a |log2FC| of 0.6 or 1 respectively. In short, log2FC >= 0.6 or <= -0.6 corresponds to a 1.5-fold cutoff, and log2FC >= 1 or <= -1 corresponds to a 2-fold cutoff.
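To make the numbers concrete, a small sketch in R using the GeneA/GeneB example above:

```r
# FPKM values from the example
control   <- c(GeneA = 100, GeneB = 200)
treatment <- c(GeneA = 200, GeneB = 100)

fc  <- treatment / control  # fold changes: 2 and 0.5
lfc <- log2(fc)             # log2 fold changes: 1 and -1, symmetric around 0
lfc
```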
biostars
{"uid": 405277, "view_count": 11194, "vote_count": 3}
Dear Biostars, could I please know if there is some way to view the reads in a BAM file according to a user-defined insert size range? I would like to visualize only the reads falling within a certain insert size range, not all the reads. Your ideas and resources would help me a lot. Thanks in advance. ~Prakki Rama.
This is a hack but it might be good enough? Parse the input bam to extract alignments within a given *template* size range, here between 0 and 200:

    samtools view -h read.pe.bam \
    | awk '{if (($0 ~ /^@/) || (sqrt($9^2) > 0 && sqrt($9^2) < 200)) {print $0}}' \
    | samtools view -Sb - > insert.bam &&
    samtools index insert.bam

Now load `insert.bam` in your favourite browser.
biostars
{"uid": 104919, "view_count": 3111, "vote_count": 1}
Really simple question but for some reason I cannot find a simple answer to this. We are moving from SGE to SLURM on our HPC. On SGE, we often submit a job with a range of threads, then inside the job, read the number of threads we got and use that to configure multithreaded programs. Example:

    $ cat test.sh
    #!/bin/bash
    echo "got $NSLOTS cpu slots"

    $ qsub -pe threaded 4-16 test.sh
    Your job 5597119 ("test.sh") has been submitted

    $ cat test.sh.o5597119
    got 15 cpu slots

What is the SLURM equivalent of this? I have tried a million variations of `sbatch test.sh --cpus-per-task x --ntasks y` across different queues, but nothing seems to work. It's also not clear how to request a range of threads, nor how to figure out how many you got; I've seen examples online that use `$SLURM_NTASKS` and `$SLURM_CPUS_PER_TASK` but neither of those variables exist inside any of my jobs. The SLURM [docs](https://slurm.schedmd.com/mc_support.html) have been pretty [unhelpful](https://slurm.schedmd.com/sbatch.html).
You can't use a range of threads; you must state how many you need. The program you run in the script only knows how many threads you've allocated if you tell it. It is completely possible to lie to slurm and say:

    #SBATCH -c 1
    bowtie2 -p 20 ...

Jobs won't be killed for using too much CPU (it's often impossible to predict exactly how many will get used), at least on our cluster; rather, what you specify with `SBATCH -c X` is removed from the list of allocatable resources. Make sure to specify `#SBATCH --ntasks-per-node=1` to prevent tasks from getting split across nodes (unless your processes can actually handle this...few can in bioinformatics).
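A minimal sketch of the SGE example translated to SLURM (note that `sbatch` options must come before the script name, and `$SLURM_CPUS_PER_TASK` is only set when `-c`/`--cpus-per-task` is given):

```bash
#!/bin/bash
#SBATCH -c 8                    # fixed request: 8 CPUs for the job
#SBATCH --ntasks-per-node=1     # keep the allocation on a single node
echo "got ${SLURM_CPUS_PER_TASK:-1} cpu slots"
bowtie2 -p "${SLURM_CPUS_PER_TASK:-1}" ...
```

Submitted with `sbatch test.sh`; there is no equivalent of SGE's range request.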
biostars
{"uid": 337120, "view_count": 8771, "vote_count": 2}
I'm having problems with **quality score recalibration** using **GATK** as most tutorials/examples use an old version of GATK which has different syntax and arguments to the current version. I have run this to get a recalibration table, which I think I need to get the recalibrated BAM files:

    gatk-4.1.2.0/gatk BaseRecalibrator \
        -reference human_genome/hg38_masked.fa \
        --known-sites human_genome/All_20180418.vcf.gz \
        --input S01_BAM_file.bam \
        --output recal.table

This works fine, and I could run this for all of my samples. But when I then do the next step to get the **recalibrated BAM files** I can't work out what to do. The old way of doing it was:

    java -jar GenomeAnalysisTK.jar -T PrintReads \
        -R human.fasta \
        -I realigned.bam \
        -BQSR recal.table \
        -o recal.bam

I wrote:

    gatk-4.1.2.0/gatk PrintReads \
        -R data_R3/human_genome/hg38_masked.fa \
        --input S01_BAM_file.bam \
        -BQSR recal.table \
        --output quality_score_recalibrated_S01.bam

But this gives an error that says: **"A USER ERROR has occurred: B is not a recognized option".** And I can't find an alternative argument for `-BQSR` when I look at the PrintReads help manual. **Does anyone know the current way to run this quality score recalibration to get recalibrated BAM files?**
you should use ApplyBQSR: https://software.broadinstitute.org/gatk/documentation/tooldocs/current/org_broadinstitute_hellbender_tools_walkers_bqsr_ApplyBQSR.php

    gatk BaseRecalibrator \
        -I input.bam \
        -R reference.fasta \
        --known-sites sites_of_variation.vcf \
        --known-sites another/optional/setOfSitesToMask.vcf \
        -O recal_data.table

then

    gatk ApplyBQSR \
        -R reference.fasta \
        -I input.bam \
        --bqsr-recal-file recal_data.table \
        -O output.bam
biostars
{"uid": 383555, "view_count": 4659, "vote_count": 1}
DESeq2 gives logFC values based on the fitted model. We can also get the normalized counts for each sample based on the raw counts and normalization factors, thus we can calculate the logFC ourselves using normalized counts. The question is how to report the final result. I prefer the logFC+pval/FDR from the fitted model while some colleagues prefer the self-calculated logFC and model-based pval/FDR (usually these logFC values are more dramatic than those from the fitted model). I feel it is a bit weird to combine different sources of results. What do you usually do? Will you combine FC values from one method and p values from another? Just curious to know which is the common practice. Thanks.
Please ignore your colleagues, they're doing it wrong. One of the benefits of DESeq2 is that it shrinks fold-changes (toward a Gaussian prior, if I remember correctly), which should increase their reliability. This is discussed and tested in [the preprint][1], which I advise everyone who uses DESeq2 to read (for a paper on a statistics package, it's surprisingly accessible). So just report the shrunken fold-changes and adjusted p-values reported by DESeq2 and politely hand your colleagues a copy of the preprint.

[1]: http://biorxiv.org/content/early/2014/02/19/002832
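For context, this answer dates from when the shrinkage was applied by default; in recent DESeq2 releases the shrunken fold-changes are requested explicitly. A minimal sketch, assuming a prepared `dds` object whose second coefficient is the contrast of interest:

```r
library(DESeq2)

dds <- DESeq(dds)                 # fit the model
res <- results(dds)               # Wald p-values and BH-adjusted p-values
# shrunken log2 fold-changes; type = "normal" is the Gaussian prior mentioned above
res_shrunk <- lfcShrink(dds, coef = 2, type = "normal")
```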
biostars
{"uid": 110060, "view_count": 5906, "vote_count": 4}
I always assumed the designations of "forward" and "reverse" strand on DNA were chosen arbitrarily (i.e. the strands could just as easily have been named the other way around).

However, a quick browse of the chromosome maps for Human on the NCBI map viewer seems to imply that in all cases, base numbering starts on the side closest to the centromere. Assuming that this numbering applies to the "forward" strands of the reference genome, does this mean that, for humans at least, there is a policy whereby the forward strand is always the one which contains the centromere at the lowest numerical index?

Also, if this is the case, does anybody know if this convention would apply outside humans?

Thanks for your time.
> ...does this mean that, for humans at least, there is a policy whereby the forward strand is always the one which contains the centromere at the lowest numerical index?

Yes, in humans the centromere is always closest to the start coordinate of each chromosome, since human chromosomes are oriented based on their p (for "petit") and q arms. The p and q designation comes from the sizes of the chromosome arms observed classically during mitotic cytogenetic preparations.

> Also, if this is the case, does anybody know if this convention would apply outside humans?

No, this convention does not apply generally outside of humans. For example, the yeast *Saccharomyces cerevisiae* has sixteen chromosomes, 9 of which have the centromere closer to the start of the chromosome than the end.

```
$ mysql --user=genome --host=genome-mysql.cse.ucsc.edu -A -D sacCer1 -e 'select sgdOther.chrom as "chrom.", size as "chrom. size", sgdOther.chromStart as "left arm", size-chromStart as "right arm", chromStart/(size-chromStart) as "left/right ratio" from sgdOther, chromInfo where name like "CEN%" and sgdOther.chrom=chromInfo.chrom order by size/(size-chromStart);'
+--------+-------------+----------+-----------+------------------+
| chrom. | chrom. size | left arm | right arm | left/right ratio |
+--------+-------------+----------+-----------+------------------+
| chr12  |     1078173 |   150826 |    927347 |           0.1626 |
| chr8   |      562639 |   105578 |    457061 |           0.2310 |
| chr5   |      576869 |   151959 |    424910 |           0.3576 |
| chr13  |      924430 |   268030 |    656400 |           0.4083 |
| chr2   |      813136 |   238167 |    574969 |           0.4142 |
| chr4   |     1531914 |   449706 |   1082208 |           0.4155 |
| chr15  |     1091285 |   326583 |    764702 |           0.4271 |
| chr3   |      316613 |   114378 |    202235 |           0.5656 |
| chr7   |     1090944 |   496920 |    594024 |           0.8365 |
| chr6   |      270148 |   148503 |    121645 |           1.2208 |
| chr10  |      745446 |   436001 |    309445 |           1.4090 |
| chr16  |      948060 |   555951 |    392109 |           1.4178 |
| chr1   |      230208 |   151456 |     78752 |           1.9232 |
| chr11  |      666445 |   439773 |    226672 |           1.9401 |
| chr14  |      784328 |   628756 |    155572 |           4.0416 |
| chr9   |      439885 |   355625 |     84260 |           4.2206 |
+--------+-------------+----------+-----------+------------------+
```
biostars
{"uid": 3908, "view_count": 12710, "vote_count": 18}
**Background** I have paired-end RNA-seq reads from a drug-treatment experiment, with < 15 million mapped reads in many samples (too few reads) and large variability in mapped reads across biological replicates. Differential expression and splicing analysis on these samples indicate that statistical power in my tests could be improved if I had better sequencing depth, and I have remaining RNA from these samples available to re-sequence. **The questions** Is it analytically and statistically tractable to re-sequence the same samples and control for potential artifacts in the combined data? What would be the best workflow for merging data from these two RNA-seq runs? I would guess that it's best to keep the runs separate until the counts have been summarized. Then I can carry out PCA to visually inspect the gross extent of artifact in the different runs before merging the counts for statistical analyses. Beyond gross visual inspection of PC's, what sorts of quality control steps could I take if I identify a strong batch effect between the different sequencing runs? Would software like svaseq or combat be appropriate here if I do identify a batch effect? If so, would it be best to remove the batch effect in the samples before combining the count data?
As you suggested (and confirmed by genomax2), it's probably best to check using PCA whether your two runs give approximately the same result. But as soon as you have determined it's okay, I would suggest merging your bam files and repeating the counting before you do your final analysis. That would minimize your chance of errors. Furthermore, in case you are using a two-step alignment (e.g. using STAR) it might be advantageous to merge the fastq files across runs and repeat the alignment.
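A minimal sketch of both options (filenames are placeholders):

```bash
# option 1: merge the aligned runs per sample, then repeat the counting
samtools merge sample1.merged.bam sample1.run1.bam sample1.run2.bam
samtools index sample1.merged.bam

# option 2: concatenate the FASTQs per sample and re-align (gzip files concatenate safely)
cat sample1.run1_R1.fq.gz sample1.run2_R1.fq.gz > sample1.merged_R1.fq.gz
cat sample1.run1_R2.fq.gz sample1.run2_R2.fq.gz > sample1.merged_R2.fq.gz
```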
biostars
{"uid": 252552, "view_count": 6911, "vote_count": 1}
I don't know how stupid of a question this is, so please bear with me. I have a certain number of EST cDNA sequences that I used for miRNA prediction. The results are cDNA sequences, so for the secondary structure prediction of the precursor miRNA, should I use the RNA folding form or the DNA folding form on Mfold? It gives both options and I am confused about which I should use: DNA folding because my sequences are cDNA sequences, or RNA folding because I am trying to predict pre-miRNA sequences after all. Please advise.
I think you should use RNA parameters because miRNAs are indeed RNA; that doesn't depend on how the sequence was acquired.
biostars
{"uid": 144783, "view_count": 3424, "vote_count": 2}
Hi all. I think the p-value is one of the best ways of measuring the significance of observed data. However, BLAST doesn't use a p-value but an E-value. Why does BLAST use the E-value for interpreting sequence data instead of the p-value? Is there any logical reason to use the E-value in BLAST? If so, could you tell me the detailed reason?
Quote from the BLAST help (http://www.ncbi.nlm.nih.gov/BLAST/tutorial/Altschul-1.html#head4):

*"The BLAST programs report E-value rather than P-values because it is easier to understand the difference between, for example, E-value of 5 and 10 than P-values of 0.993 and 0.99995. However, when E < 0.01, P-values and E-value are nearly identical."*

Important to note that the P-value of BLAST is not the same thing as the P-value of a t-test.
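For reference, the two quantities are related by P = 1 - e^(-E), which is why they converge for small values: E = 0.01 gives P ≈ 0.00995, while E = 5 and E = 10 give the hard-to-distinguish P-values 0.993 and 0.99995 quoted above.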
biostars
{"uid": 138612, "view_count": 13389, "vote_count": 1}
Hi, I was wondering if there is a way to generate a junctions.bed file from hisat2, just like the one from tophat. The hisat2 documentation says to use the extract_splice_sites.py python script, but that gets splice sites from a GTF file. Is there a way to know the splice sites and the number of reads associated with those sites using hisat2? Thanks
Easiest solution will be to use [featureCounts][1] and counts reads from exon-exon junctions by providing your aligned BAM files. Check manual [page 35][2] for detailed information. **-J (juncCounts)** Count the number of reads supporting each exon-exon junction. Junctions are identified from those exon-spanning reads (containing ‘N’ in CIGAR string) in input data. The output result includes names of primary and secondary genes that overlap at least one of the two splice sites of a junction. Only one primary gene is reported, but there might be more than one secondary gene reported. Secondary genes do not overlap more splice sites than the primary gene. When the primary and secondary genes overlap same number of splice sites, the gene with the smallest leftmost base position is selected as the primary gene. Also included in the output result are the position information for the left splice site (‘Site1’) and the right splice site (‘Site2’) of a junction. These include chromosome name, coordinate and strand of the splice site. In the last columns of the output, number of supporting reads is provided for each junction for each library [1]: http://bioinf.wehi.edu.au/featureCounts/ [2]: http://bioinf.wehi.edu.au/subread-package/SubreadUsersGuide.pdf
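If you go that route, a minimal sketch (annotation/BAM names are placeholders; with `-J`, the junction counts are written to `<output>.jcounts` alongside the gene counts):

```bash
featureCounts -J -a annotation.gtf -o counts.txt hisat2_aligned.bam
```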
biostars
{"uid": 240537, "view_count": 4318, "vote_count": 1}
Hello everyone, I have a bunch of TARGET samples but the associated annotation lacks the tissue information i.e. whether it is a Primary tumor or Recurrent etc. I know the value for a couple of codes, like `01` stands for Primary solid tumors, but I want to define the tissue type for every code for future reference. Does anybody know where I can find the TARGET code table to interpret the codes?

    TARGET_10 (Blood Derived Normal)
    TARGET_14 (Bone Marrow Normal)
    TARGET_11 (Solid Tissue Normal)
    TARGET_01 (Primary Solid Tumor)
    TARGET_09 (Primary Blood Derived Cancer - Bone Marrow)
    TARGET_03 (Primary Blood Derived Cancer - Peripheral Blood)
    TARGET_04 (Recurrent Blood Derived Cancer - Bone Marrow)
    TARGET_02 (Recurrent Solid Tumor)
    TARGET_40 (Recurrent Blood Derived Cancer - Peripheral Blood)
    TARGET_06 (Metastatic)
    TARGET_41 (Blood Derived Cancer - Bone Marrow, Post-treatment)
    TARGET_42 (Blood Derived Cancer - Peripheral Blood, Post-treatment)
    TARGET_13 (EBV Immortalized Normal)
    TARGET_15 (sample type 15)
    TARGET_60 (Primary Xenograft Tissue)
    TARGET_50 (Cell Lines)
    TARGET_61 (Cell Line Derived Xenograft Tissue)
    TARGET_20 (Control Analyte)

Thanks!

**UPDATE**: I have updated the code table based on @khericlim's response.
Hey Komal, how are you? Say hi to Pichai! The TCGA one was gone, but this should help. https://gdc.cancer.gov/resources-tcga-users/tcga-code-tables/sample-type-codes
biostars
{"uid": 220573, "view_count": 1784, "vote_count": 1}
In Jellyfish's original paper (Marcais et al., Bioinformatics 2011), it says that it supports "counting kmers up to 31 bases in length". But on the other hand, in other papers such as KMC2 (Deorowicz et al., Bioinformatics 2015), Jellyfish 2 is used to count k-mers of size 55. So does Jellyfish support k > 31 now? And how is it supported (i.e., why was it not supported in the original version but is now)? Thanks.
Probably because they originally used 64 bits per k-mer, which can store up to 32 bases at 2 bits each (less one base, probably reserved to signal reverse complementation or variable lengths). With an update, this man-made constraint was removed.
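In current Jellyfish 2 the k-mer length is just a parameter; a minimal sketch matching the KMC2 comparison (file names are placeholders):

```bash
jellyfish count -m 55 -s 1G -C -o counts.jf reads.fasta   # -m 55: 55-mers, -C: canonical
jellyfish histo counts.jf
```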
biostars
{"uid": 243272, "view_count": 2049, "vote_count": 2}
hello, I have a question while reading a paper related to gene fusion discovery. Below is some of the paper's content.

> A paired end read will be a member of multiple valid clusters as a result of homology between genes, exon redundancy between transcripts, and the multiplicity of valid clusterings. Fusion events are rare when compared with the event of detecting a discordant paired end read given the existence of a fusion event that would generate that read. **Thus we seek an assignment of each paired end read to a single fusion event (valid cluster) that minimizes the number of fusion events**. (Valid cluster represents a potential fusion event implied by a set of discordant alignments)

I don't understand the bold text. I think the verb "minimizes" should be changed to "maximizes", because assigning each paired end read to a single fusion event would, I thought, increase the number of gene fusions. I hope you understand my basic question; I'm a beginner in this field. Anyway, I am looking forward to your reply. Thank you.
The problem the authors are trying to address is false-positive results in the setting of reads that can map to multiple locations. They are suggesting a heuristic that takes all the data available and seeks a minimal set of fusions from the set of all potential fusions. So, I believe "minimizes" is what the authors meant.
biostars
{"uid": 107854, "view_count": 1997, "vote_count": 1}
Hello. I have various .bed and .tped files (+ corresponding .fam/.tfam files) that I am exporting in vcf format from plink as follows:

    ./plink --tfile examplesnps --chr-set 39 --out exported_vcf --recode vcf

However, I don't know how to get an output that has the sample IDs attached to the genotype data. Does anyone know how to include the actual sample names (i.e., the individual IDs) in the outputs from plink? Thanks in advance!
Have you tried `--recode vcf-iid`?
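Applied to the command from the question, that would be:

    ./plink --tfile examplesnps --chr-set 39 --out exported_vcf --recode vcf-iid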
biostars
{"uid": 446467, "view_count": 1221, "vote_count": 1}
I am a complete rookie in bioinformatics, so please bear with me and use simple language (I am a computer scientist) :-) How can we use k-mers to find out if a gene is similar to our query string? For example: we have a reference gene r = `ACAAGTC` and a query string q = `CATGT`. For sequence alignment we could get the two possible solutions: (see photo on link) ![enter image description here][1] But if we use k-mers with k=2 (the reason for 2 is that I tried using k=3 and got 1 match) we could split q into the set {CA, AT, TG, GT} and get 2 matches. I am confused about how to use k-mers for queries. I am guessing that we would also need to know where in the reference string a k-mer is found, since order matters. But most importantly, **why** can we use k-mers? That's maybe my biggest question mark. [1]: https://images2.imgbox.com/bc/62/X0aa6MzH_o.png
You wouldn't generally use kmers this short, as the odds of finding spurious matches become massive, but I accept that this is a toy example. To my knowledge, we don't really approach a sequence alignment with kmers, though you might use them to seed the alignment (in a **local** alignment, small subregions of alignment are found and then slowly expanded). You can, however, compute useful statistics with kmers about how similar 2 sequences are when they are inefficient to align (e.g. a whole genome). It's not uncommon to look at sequence 'distances' in this way. You might find this code interesting: https://github.com/jrjhealey/bioinfo-tools/blob/master/StringComparisons.py#L158-L181 Those are a few of the most common kmer string/sequence comparison methods implemented in python. You can use kmers to get a rough approximation of the similarity of 2 large sequences in a very computationally efficient manner, where the size or number of sequences makes alignment prohibitive, the idea being that the more similar 2 sequences are, the more likely you are to randomly select the same kmer from each of them, and the more frequently shared kmers will occur. Not sure if that answers your question or not?
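One of the simplest such statistics is the Jaccard index over k-mer sets; a minimal self-contained sketch in Python using the toy sequences from the question:

```python
def kmers(seq, k):
    """Return the set of all k-length substrings of seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard(a, b, k=2):
    """Shared k-mers divided by total distinct k-mers in either sequence."""
    ka, kb = kmers(a, k), kmers(b, k)
    return len(ka & kb) / len(ka | kb)

print(jaccard("ACAAGTC", "CATGT", k=2))  # higher = more similar
```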
biostars
{"uid": 485136, "view_count": 2513, "vote_count": 1}
Dear all, I have FASTQ files, and at the start of each read there is a 7-nucleotide tag. I would like to extract the reads with this specific tag: search the first 15 nucleotides of each read and, if there is a match, write the read to a new fastq file. Thank you for any ideas or help.
You can use Heng Li's [bioawk](https://github.com/lh3/bioawk). Note that a shell variable won't expand inside single quotes, so either substitute your tag literally or pass it in with `-v` (here `ACGTACG` is a placeholder for your 7-mer). To check if the tag is part of the first 15 bases:

    bioawk -c fastx -v tag=ACGTACG 'substr($seq,1,15) ~ tag { print "@"$name"\n"$seq"\n+\n"$qual }' reads.fq.gz

To match the first 7 bases to your tag exactly:

    bioawk -c fastx -v tag=ACGTACG 'substr($seq,1,7) == tag { print "@"$name"\n"$seq"\n+\n"$qual }' reads.fq.gz
biostars
{"uid": 141203, "view_count": 10662, "vote_count": 2}
Hi everyone, I am trying to find out what residues are in contact with a ligand (the binding site) in a PDB file. To give you an example, I need to find the residues in contact with the FMN ligand in the PDB entry 1AL7. First of all, can we find this type of information in a PDB file, or do we need software to calculate it? I think the answer to the first question is yes, but I am not sure. I used the new mmCIF format, as the files are well organized compared to the old version. In this format, there is a data category called "**chem_comp**", and the dictionary on the PDB website says: "*Data items in the CHEM_COMP category give details about each of the chemical components from which the relevant chemical structures can be constructed, such as name, mass or charge.*" It allows me to know if there is a FMN ligand present. Another category is "**struct_site**": "*Data items in the STRUCT_SITE category record details about portions of the structure that contribute to structurally relevant sites (e.g. active sites, substrate-binding subsites, metal-coordination sites).*" In my example, there is a line "AC1 Software ? ? ? ? 22 'BINDING SITE FOR RESIDUE FMN A 360'". And then, there is another category called "**struct_site_gen**": "*Data items in the STRUCT_SITE_GEN category record details about the generation of portions of the structure that contribute to structurally relevant sites.*" This category is, I guess, my answer, because I see all the residues with their numbers sharing the same id (AC1) as in struct_site. Can I use this information for the binding site, or do you guys think I have to use software to calculate the residues in the binding site? If so, which one? Thank you
Yes, you can use the information in STRUCT_SITE_GEN listed under AC1 to get the binding residues: TYR24, TYR25, ALA76 etc. If you did want to do this from the coordinate data, one way would be to use [BioStructures.jl][1] in Julia:

    using BioStructures
    struc = read("1AL7.cif", MMCIF)
    res = collectresidues(struc, standardselector)  # standard (protein) residues only
    for r in res
        # "H_360" is the FMN hetero-residue 360 in chain A; 4 Å contact cutoff
        if distance(r, struc["A"]["H_360"]) < 4.0
            println(r)
        end
    end

This returns more or less the same residues as STRUCT_SITE_GEN. You could also do this with Biopython.

[1]: https://github.com/BioJulia/BioStructures.jl
biostars
{"uid": 425276, "view_count": 1418, "vote_count": 1}
Hi, I have a question regarding when steps are able to run in parallel. We noticed that sometimes all of the jobs from a scatter must complete before moving to the next step. I was wondering whether a scattered subworkflow can start before all the inputs to that step are available, or does a subworkflow have to wait for all of the previous inputs to be available, regardless of whether they are available for **just the first set** of scattered inputs?
All inputs for the step have to be available. See [this][1], look for bullet point "Instead of scattering separate steps, prefer to scatter over a subworkflow." [1]: https://doc.arvados.org/user/cwl/cwl-style.html
biostars
{"uid": 339055, "view_count": 1007, "vote_count": 1}
Any tool that can do median per-base coverage over pre-defined windows from a BAM file? I could make the actual per-base coverage using `samtools depth` or `genomeCoverageBed`:

    chr pos read_depth
    1   1   6
    1   2   8
    ...

I already have the windows prepared using

    bedtools makewindows \
        -g "genome.fa.fai" \
        -w 1000 > "windows.bed"

I am looking for a result like below, median per-base coverage over 1000bp windows:

    chr start end  median_read_depth
    1   1     1000 5
    1   1000  2000 8
    ...
I've written http://lindenb.github.io/jvarkit/BamStats04.html :

    $ java -jar dist/bamstats04.jar -B src/test/resources/toy.bed.gz src/test/resources/toy.bam 2> /dev/null | column -t

    #chrom  start  end  length  sample  mincov  maxcov  meancov  mediancov  nocoveragebp  percentcovered
    ref     10     13   3       S1      3       3       3.0      3.0        0             100
    ref2    1      2    1       S1      2       2       2.0      2.0        0             100
    ref2    13     14   1       S1      6       6       6.0      6.0        0             100
    ref2    16     17   1       S1      6       6       6.0      6.0        0             100
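With the windows file from the question, the invocation would presumably be:

    java -jar dist/bamstats04.jar -B windows.bed input.bam > windowed_coverage.txt

where the `mediancov` column holds the median per-base depth for each window.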
biostars
{"uid": 293520, "view_count": 3605, "vote_count": 1}
Is there an option in Hisat that is similar to `-g`/`--max-multihits` in Tophat? I can not find it in the parameter list on https://ccb.jhu.edu/software/hisat/manual.shtml#options. There is a Uniqueness class in here: https://github.com/infphilo/hisat/blob/master/unique.h, but when I grep for that class or the `bestIsUnique` function in the Hisat source folder I can't find it used anywhere. Or would it work to use `samtools view -q 1`? If anyone else is interested, I e-mailed the author and he said: > One possible solution is to use the next best alignment score (ZS:i:, previously XS:i:). If the next alignment score is the same as the primary alignment score (AS:i:), that means there are at least two equivalent alignments found by HISAT. This additional SAM field (ZS:i:) is only available at HISAT github, which I'll include in the next release of HISAT.
The most recent release of hisat (v0.1.6) uses NH tags in the BAM to specify the number of alignments for the read/pair (see release notes: https://ccb.jhu.edu/software/hisat/index.shtml). As far as I understand it, reads/pairs with NH:i:1 have a single best alignment and are therefore "uniquely" mapped.
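A minimal sketch of filtering on that tag (GNU grep; the word boundaries keep `NH:i:1` from matching `NH:i:10` etc.):

```bash
samtools view -h aligned.bam | grep -E '^@|\bNH:i:1\b' | samtools view -bS - > unique.bam
```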
biostars
{"uid": 148289, "view_count": 7316, "vote_count": 1}
I'm using ete toolkit to get a phylogenetic tree for a list of ncbi taxids:

```py
from ete2 import NCBITaxa
ncbi = NCBITaxa()
tree = ncbi.get_topology([9606, 9598, 10090, 7707, 8782])
print tree.get_ascii(attributes=["sci_name", "rank"])
```

Printing it with ascii chars works, but how to render this tree *including* the attributes (sci_name, rank) to an image?

    tree.render("tree.pdf")

There seems to be no 'attribute' in this function.
`tree.render()` is a general purpose method. As node attributes are completely arbitrary, you need to specify what should be drawn and where... Check the docs regarding the drawing system: https://pythonhosted.org/ete2/tutorial/tutorial_drawing.html For your example, something like this should work:

```py
from ete2 import NCBITaxa, AttrFace, TreeStyle

ncbi = NCBITaxa()
tree = ncbi.get_topology([9606, 9598, 10090, 7707, 8782])

# custom layout: adds "rank" on top of branches, and sci_name as tip names
def my_layout(node):
    if getattr(node, "rank", None):
        rank_face = AttrFace("rank", fsize=7, fgcolor="indianred")
        node.add_face(rank_face, column=0, position="branch-top")
    if node.is_leaf():
        sciname_face = AttrFace("sci_name", fsize=9, fgcolor="steelblue")
        node.add_face(sciname_face, column=0, position="branch-right")

ts = TreeStyle()
ts.layout_fn = my_layout
ts.show_leaf_name = False

tree.render("tree.png", tree_style=ts)
```
biostars
{"uid": 153788, "view_count": 4108, "vote_count": 1}
I have an amalgamated BCF file, containing the variant calls from the FULL GENOME of 70 individuals. I've run the command ... bcftools view -i 'TYPE="snp" && AC=140 && GT="hom" && QUAL>20' ./<filename>.bcf | wc -l This should (and based on the output lines I've looked at - does) respond with any variant calls that are SNPs (only), with only one ALT, where the GT is 1/1 for ALL samples. In other words, every sample (out of 70) has this SNP, in both alleles. The count I got was 30704...meaning there are 30704 different SNPs that every patient shares. This seems...unlikely. Is this reasonable? SAMPLE OUTPUT: 1 792862 . C G 228 PASS VDB=0.413797;SGB=-0.693147;MQSB=0.362809;MQ0F=0;MQ=52;RPB=1;MQB=1;BQB=1;DP=5464;DP4=0,1,2859,2288;AN=140;AC=140 GT:PL 1/1:255,144,0 1/1:255,244,0 1/1:255,129,0 1/1:255,175,0 1/1:255,229,0 1/1:255,235,0 1/1:255,255,0 1/1:255,211,0 1/1:255,255,0 1/1:86,9,0 1/1:255,214,0 1/1:255,223,0 1/1:255,220,0 1/1:255,211,0 1/1:255,205,0 1/1:255,232,0 1/1:255,226,0 1/1:255,157,0 1/1:255,220,0 1/1:255,255,0 1/1:255,255,0 1/1:255,87,0 1/1:255,211,0 1/1:255,247,0 1/1:255,223,0 1/1:255,166,0 1/1:255,220,0 1/1:255,223,0 1/1:255,190,0 1/1:255,172,0 1/1:255,226,0 1/1:255,255,0 1/1:255,255,0 1/1:255,241,0 1/1:255,255,0 1/1:255,255,0 1/1:255,238,0 1/1:255,202,0 1/1:255,138,0 1/1:255,196,0 1/1:255,111,0 1/1:255,226,0 1/1:255,255,0 1/1:255,255,0 1/1:255,255,0 1/1:255,208,0 1/1:255,196,0 1/1:255,250,0 1/1:255,226,0 1/1:255,255,0 1/1:255,255,0 1/1:255,235,0 1/1:255,199,0 1/1:255,255,0 1/1:255,157,0 1/1:255,181,0 1/1:255,220,0 1/1:255,163,0 1/1:255,169,0 1/1:255,255,0 1/1:255,255,0 1/1:161,15,0 1/1:255,255,0 1/1:255,255,0 1/1:255,187,0 1/1:255,190,0 1/1:255,255,0 1/1:255,193,0 1/1:255,255,0 1/1:255,126,0 1 822944 . T C 228 PASS VDB=0.978422;SGB=-0.693147;MQSB=1;MQ0F=0;MQ=60;RPB=1;MQB=1;BQB=1;DP=4660;DP4=1,2,2178,2254;AN=140;AC=140 GT:PL 1/1:255,135,0 1/1:255,181,0 1/1:255,90,0 1/1:255,220,0 1/1:255,114,0 1/1:255,169,0 1/1:255,187,0 1/1:255,129,0 1/1:255,144,0 1/1:61,6,0 1/1:255,232,0 1/1:255,160,0 1/1:255,193,0 1/1:255,247,0 1/1:255,169,0 1/1:255,247,0 1/1:255,151,0 1/1:255,99,0 1/1:255,255,0 1/1:255,255,0 1/1:255,255,0 1/1:255,57,0 1/1:255,177,0 1/1:255,250,0 1/1:255,255,0 1/1:255,105,0 1/1:255,220,0 1/1:255,255,0 1/1:255,163,0 1/1:255,169,0 1/1:255,172,0 1/1:255,144,0 1/1:255,255,0 1/1:255,172,0 1/1:255,255,0 1/1:255,235,0 1/1:255,196,0 1/1:255,202,0 1/1:255,190,0 1/1:255,211,0 1/1:255,66,0 1/1:255,214,0 1/1:255,232,0 1/1:255,190,0 1/1:255,123,0 1/1:255,157,0 1/1:255,144,0 1/1:255,178,0 1/1:255,255,0 1/1:255,247,0 1/1:255,255,0 1/1:255,253,0 1/1:255,220,0 1/1:255,250,0 1/1:255,175,0 1/1:255,175,0 1/1:255,132,0 1/1:255,166,0 1/1:255,229,0 1/1:255,255,0 1/1:255,255,0 1/1:55,6,0 1/1:255,169,0 1/1:255,255,0 1/1:255,229,0 1/1:255,157,0 1/1:255,184,0 1/1:255,108,0 1/1:255,241,0 1/1:255,120,0 1 839356 . 
A C 228 PASS VDB=0.209904;SGB=-0.692976;MQSB=1;MQ0F=0;MQ=60;RPB=1;MQB=1;BQB=1;DP=2889;DP4=1,0,1441,1171;AN=140;AC=140 GT:PL 1/1:255,78,0 1/1:255,66,0 1/1:255,39,0 1/1:255,141,0 1/1:255,151,0 1/1:255,138,0 1/1:255,102,0 1/1:255,69,0 1/1:255,84,0 1/1:65,6,0 1/1:255,120,0 1/1:255,87,0 1/1:255,81,0 1/1:255,117,0 1/1:255,90,0 1/1:255,129,0 1/1:255,129,0 1/1:255,102,0 1/1:255,126,0 1/1:255,123,0 1/1:255,96,0 1/1:255,69,0 1/1:255,169,0 1/1:255,153,0 1/1:255,144,0 1/1:255,90,0 1/1:255,96,0 1/1:255,141,0 1/1:255,111,0 1/1:255,87,0 1/1:255,105,0 1/1:255,102,0 1/1:255,160,0 1/1:255,117,0 1/1:255,141,0 1/1:255,135,0 1/1:255,96,0 1/1:255,157,0 1/1:255,117,0 1/1:255,111,0 1/1:255,78,0 1/1:255,111,0 1/1:255,144,0 1/1:255,102,0 1/1:255,42,0 1/1:255,117,0 1/1:255,69,0 1/1:255,135,0 1/1:255,129,0 1/1:255,129,0 1/1:255,141,0 1/1:255,99,0 1/1:255,126,0 1/1:255,123,0 1/1:255,117,0 1/1:255,111,0 1/1:255,151,0 1/1:255,172,0 1/1:255,132,0 1/1:255,135,0 1/1:255,126,0 1/1:83,9,0 1/1:255,111,0 1/1:255,187,0 1/1:255,111,0 1/1:255,99,0 1/1:255,108,0 1/1:255,72,0 1/1:255,138,0 1/1:255,126,0
The ALT and REF alleles do not always correspond to the minor/major allele at a SNP position; in some cases the major allele = ALT allele. This could be driving the numbers you are seeing. The other thing that could be driving these numbers is if all 70 patients are of similar ethnicity. I would recommend looking at the raw reads for a handful of patients at a handful of these 1/1 SNPs and confirming the SNP characterization.
biostars
{"uid": 281202, "view_count": 1371, "vote_count": 1}
Hi folks! I have two GFF3 files, one with annotated genes, another with predicted transposons. I want to find the numerical distribution of the distances between transposons and genes. I already found the distances amongst just transposons, and amongst just genes, using the 'spacing' sub-command of bedtools 2.24.0; however, 'spacing' cannot compare two GFF files. For comparing the spacing of features in 2 separate GFFs, bedtools 2.24.0 has the reldist sub-command, but this does **not** yield absolute distance in base pairs, only a relative distance distribution, so it's not so useful to me... So I am not sure if there is an off-the-shelf option in bedtools that can help me answer this question. Would any of you have a simple solution to my problem? It does not have to use bedtools... Thank you!
Maybe `closestBed` with the `-d` option is what you want?

```
Tool:    bedtools closest (aka closestBed)
Version: v2.23.0
Summary: For each feature in A, finds the closest
         feature (upstream or downstream) in B.
...
-d  In addition to the closest feature in B,
    report its distance to A as an extra column.
    - The reported distance for overlapping features will be 0.
```
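A minimal sketch of the command (closest expects position-sorted inputs; with standard 9-column GFF3 files, the appended distance lands in column 19):

```bash
sort -k1,1 -k4,4n transposons.gff3 > transposons.sorted.gff3
sort -k1,1 -k4,4n genes.gff3 > genes.sorted.gff3
bedtools closest -a transposons.sorted.gff3 -b genes.sorted.gff3 -d | cut -f19
```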
biostars
{"uid": 147099, "view_count": 1594, "vote_count": 1}
What happened to the miRecords miRNA database website? The webpage is not available and I used its data as part of an analysis in my paper. Now the reviewer can't see the page and asks for an explanation. Edit: Sorry, I forgot to post the link. miRecords was supposed to be available at http://miRecords.umn.edu/miRecords.
http://mirecords.umn.edu/miRecords/ works fine for me at this moment. It may just have been a hiccup.
biostars
{"uid": 110420, "view_count": 5805, "vote_count": 3}
Hi all, I'm dealing with a fasta file with spaces at the end of lines, which caused the problem. I didn't find a suitable way to remove them. Please kindly tell me the appropriate command for removing them?
Oh my gawk! All previous solutions would risk modifying your fasta header as well. This one will not (note the gsub replacement must be an empty string, `""`):

    gawk 'BEGIN{line=0}{ if ($0 !~/^>/ && $0 ~/ +/ ) {gsub(/ +/, ""); line++} print}END{print line" lines with white spaces treated" > "/dev/stderr"}' myfasta.fa > output.fa

If you only want to remove the spaces at the end of the lines:

    gawk 'BEGIN{line=0}{ if ($0 !~/^>/ && $0 ~/ +$/ ) {gsub(/ +$/, ""); line++} print}END{print line" lines with white spaces treated" > "/dev/stderr"}' myfasta.fa > output.fa
biostars
{"uid": 170941, "view_count": 14145, "vote_count": 1}
I am trying to run MutSigCV and got stuck with this error:

```
MutSigCV allsamples.md.tc.ir.br.pr.ug.dbsnp.vep.maf \
"$anno"exome_full192.coverage.txt \
"$anno"gene.covariates.txt \
my_results \
"$anno"mutation_type_dictionary_file.txt \
"$anno"chr_files_hg19

======================================
MutSigCV v1.4
(c) Mike Lawrence and Gaddy Getz
Broad Institute of MIT and Harvard
======================================

MutSigCV: PREPROCESS
--------------------
Loading mutation_file...
Error using MutSigCV>MutSig_preprocess (line 246)
MutSig is not applicable to single patients.\n

Error in MutSigCV (line 184)
```

I suspect this is because I did not create the maf file properly. A run-down of the experiment:

- variants were called using the GATK pipeline (unified genotyper)
- annotated with VEP
- vcf was converted to maf using the tool [vcf2maf][1]

[1]: https://github.com/ckandoth/vcf2maf

I am not working with cancer data. Questions:

- is the issue due to the maf?
- If yes, how can it be prepared to include patient information?

A few lines of the vcf (header removed, this is a test so only chr14 is present):

```
##INFO=
#CHROM POS ID REF ALT QUAL FILTER INFO FORMAT Psio10_gDNA Psio11_gDNA Psio12_gDNA Psio1_gDNA Psio2_gDNA Psio3_gDNA Psio4_gDNA Psio5_gDNA Psio6_gDNA Psio7_gDNA Psio8_gDNA Psio9_gDNA
chr14 62290315 . G A 27.75 LowQual AC=4;AF=0.250;AN=16;BaseCounts=4,0,9,0;BaseQRankSum=-1.271;DP=13;Dels=0.00;FS=0.000;HaplotypeScore=0.0000;MLEAC=3;MLEAF=0.188;MQ=50.39;MQ0=0;MQRankSum=0.480;QD=13.88;ReadPosRankSum=-1.733;CSQ=intron_variant&nc_transcript_variant|||ENSG00000258882|CTD-2277K2.1|ENST00000554138|||||lincRNA GT:AD:DP:GQ:PL 0/0:2,0:2:6:0,6,69 0/0:2,0:2:6:0,6,68 ./. ./. ./. ./. 1/1:0,1:1:3:32,3,0 0/0:1,0:1:3:0,3,32 1/1:0,1:1:3:33,3,0 0/0:1,1:2:3:0,3,33 0/0:2,0:2:3:0,3,33 0/0:1,0:1:3:0,3,33
chr14 62292953 . A T 10.92 LowQual AC=2;AF=1.00;AN=2;BaseCounts=0,0,0,1;DP=1;Dels=0.00;FS=0.000;HaplotypeScore=0.0000;MLEAC=2;MLEAF=1.00;MQ=60.00;MQ0=0;QD=10.92;CSQ=intron_variant&nc_transcript_variant|||ENSG00000258882|CTD-2277K2.1|ENST00000554138|||||lincRNA GT:AD:DP:GQ:PL ./. ./. 1/1:0,1:1:3:33,3,0 ./. ./. ./. ./. ./. ./. ./. ./. ./.
chr14 62293825 . G T 20.64 LowQual AC=2;AF=0.167;AN=12;BaseCounts=0,0,9,2;BaseQRankSum=-1.296;DP=11;Dels=0.00;FS=0.000;HaplotypeScore=0.1662;MLEAC=2;MLEAF=0.167;MQ=60.00;MQ0=0;MQRankSum=0.354;QD=4.13;ReadPosRankSum=0.354;CSQ=downstream_gene_variant|||ENSG00000258956|COX4I1P1|ENST00000554239|||||processed_pseudogene,intron_variant&nc_transcript_variant|||ENSG00000258882|CTD-2277K2.1|ENST00000554138|||||lincRNA GT:AB:AD:DP:GQ:PL ./. 0/0:.:2,0:2:6:0,6,68 ./. ./. ./. ./. 0/0:.:1,0:1:3:0,3,33 0/1:0.670:2,1:3:26:26,0,39 ./. 0/0:.:2,0:2:6:0,6,68 0/1:0.500:1,1:2:27:27,0,27 0/0:.:1,0:1:3:0,3,33
chr14 62294744 . G A 28.05 LowQual AC=3;AF=0.300;AN=10;BaseCounts=2,1,5,0;BaseQRankSum=-0.198;DP=8;Dels=0.00;FS=0.000;HaplotypeScore=0.0000;MLEAC=3;MLEAF=0.300;MQ=57.05;MQ0=0;MQRankSum=0.198;QD=9.35;ReadPosRankSum=0.198;CSQ=downstream_gene_variant|||ENSG00000258956|COX4I1P1|ENST00000554239|||||processed_pseudogene,intron_variant&nc_transcript_variant|||ENSG00000258882|CTD-2277K2.1|ENST00000554138|||||lincRNA GT:AB:AD:DP:GQ:PL ./. ./. 1/1:.:0,1:1:3:33,3,0 ./. 0/0:.:1,0:1:3:0,3,33 0/1:0.500:1,1:2:26:26,0,26 ./. 0/0:.:1,0:1:3:0,3,32 0/0:.:2,0:2:3:0,3,45 ./. ./. ./.
chr14 62299005 . A C 144.99 . AC=1;AF=0.042;AN=24;BaseCounts=301,8,0,0;BaseQRankSum=-4.723;DP=309;Dels=0.00;FS=2.199;HaplotypeScore=0.9147;InbreedingCoeff=-0.0435;MLEAC=1;MLEAF=0.042;MQ=59.70;MQ0=0;MQRankSum=0.508;QD=7.63;ReadPosRankSum=-1.488;CSQ=non_coding_exon_variant&nc_transcript_variant|||ENSG00000258956|COX4I1P1|ENST00000554239|1/1||||processed_pseudogene,intron_variant&nc_transcript_variant|||ENSG00000258882|CTD-2277K2.1|ENST00000554138|||||lincRNA GT:AB:AD:DP:GQ:PL 0/0:.:26,0:26:78:0,78,892 0/0:.:29,0:29:81:0,81,927 0/0:.:17,0:17:45:0,45,528 0/0:.:28,0:28:75:0,75,861 0/0:.:25,0:25:69:0,69,777 0/0:.:35,0:35:99:0,102,1127 0/0:.:38,0:38:99:0,102,1149 0/1:0.580:11,8:19:99:180,0,294 0/0:.:24,0:24:66:0,66,764 0/0:.:33,0:33:93:0,93,1065 0/0:.:16,0:16:45:0,45,502 0/0:.:19,0:19:54:0,54,603
```
[vcf2maf](https://github.com/mskcc/vcf2maf) doesn't support multi-sample VCFs, like the sample you've shown. When `--tumor-id` is not specified, the resulting MAF will name all the samples as `TUMOR`... which is why MutSig thinks you have a single patient. Even though you're not working with cancer, it is perfectly fine to set `--tumor-id` to your sample ID, as long as it matches the corresponding column header in the VCF e.g. `Psio10_gDNA`. But you will first need to split your VCF into per-sample VCFs, run vcf2maf on each, concatenate the resulting MAFs, and feed that into MutSig. Here is a bit of code to help out:

    # Download and unpack VCFtools. You don't need to compile it, because you'll only need the Perl utils for now:
    wget http://downloads.sourceforge.net/project/vcftools/vcftools_0.1.12b.tar.gz
    tar -zxf vcftools_0.1.12b.tar.gz

    # Use VCFtools' vcf-query to make a list of all the sample IDs in the multisample VCF:
    perl -I vcftools_0.1.12b/perl vcftools_0.1.12b/perl/vcf-query --list-columns allsamples.vcf > sample_ids

    # For each sample ID, run vcf-subset to create per-sample VCFs in a subfolder:
    mkdir vcf2maf
    cat sample_ids | perl -ne 'chomp; print `cat allsamples.vcf | perl -I vcftools_0.1.12b/perl vcftools_0.1.12b/perl/vcf-subset --exclude-ref --columns $_ > vcf2maf/$_.vcf`'

    # For each VCF, run vcf2maf with the `--tumor-id` specified, to create per-sample MAFs into the subfolder:
    cat sample_ids | perl -ne 'chomp; print `perl vcf2maf.pl --input-vcf vcf2maf/$_.vcf --output-maf vcf2maf/$_.vep.maf --tumor-id $_`'

    # Concatenate the per-sample MAFs together, making sure that the MAF header is not duplicated:
    cat vcf2maf/*.maf | egrep "^#|^Hugo_Symbol" | head -2 > allsamples.vep.maf
    cat vcf2maf/*.maf | egrep -v "^#|^Hugo_Symbol" >> allsamples.vep.maf

In addition to MutSig, you can also try the SMG test from the [MuSiC suite](http://tvap.genome.wustl.edu/tools/music/) of tools developed at my former lab. It is not cancer specific, and can also find significantly altered regulatory regions, based on how you define your regions of interest.
biostars
{"uid": 108112, "view_count": 9400, "vote_count": 1}
Hello everyone (first post ever for me). I am using Biomart in R and I am trying to obtain the maize homologs of a set of barley genes (I am only displaying one here for example's sake). Everything works fine when I use the getBM function, but getLDS does not seem to work.

```r
host="plants.ensembl.org"
mysets<-listDatasets(useMart("plants_mart", host = host))
barley <- useDataset("hvulgare_eg_gene", mart = useMart("plants_mart", host = host))
zea <-useDataset("zmays_eg_gene", mart = useMart("plants_mart", host = host))

getLDS(attributes = c("ensembl_gene_id","description", "go_id","name_1006"),
       filters = "ensembl_gene_id", values = "HORVU.MOREX.r3.1HG0000220", mart = barley,
       attributesL = c("ensembl_gene_id","description", "go_id","name_1006"), martL = zea)
```

I get the following error message. I think the issue may be that the virtual schema is set to default, but I can't find how to change it to plants_mart.

```r
Error in getLDS(attributes = c("ensembl_gene_id", "description", "go_id",  :
  Query ERROR: caught BioMart::Exception::Usage: WITHIN Virtual Schema : default, Dataset hvulgare_eg_gene NOT FOUND
Traceback:

1. getLDS(attributes = c("ensembl_gene_id", "description", "go_id",
 .     "name_1006"), filters = "ensembl_gene_id", values = "HORVU.MOREX.r3.1HG0000220",
 .     mart = barley, attributesL = c("ensembl_gene_id", "description",
 .         "go_id", "name_1006"), martL = zea)
2. stop(postRes)
```

Any help would be greatly appreciated
Great spot that this was related to the `virtualSchemaName`. That was hardcoded to be `"default"` in **biomaRt** and no one has ever reported a problem before. I assumed there were no Mart instances that ever used anything different! Anyway, it's been patched in **biomaRt** version 2.50.3, which should be available from Bioconductor in the next few days. If you want to get a copy before then you can install from Github with `BiocManager::install('grimbough/biomaRt', ref = 'RELEASE_3_14')`. Here's an example with it working for a similar query:

```r
library(biomaRt)
packageVersion("biomaRt")
#> [1] '2.50.3'

ensembl_plants <- useEnsemblGenomes(biomart = "plants_mart")
barley <- useDataset("hvulgare_eg_gene", mart = ensembl_plants)
zea <- useDataset("zmays_eg_gene", mart = ensembl_plants)

barley_gene_id <- "HORVU.MOREX.r3.4HG0381830"

getLDS(filters = "ensembl_gene_id", values = barley_gene_id,
       attributes = c("ensembl_gene_id","description", "go_id","name_1006"),
       mart = barley,
       attributesL = c("ensembl_gene_id","description", "go_id","name_1006"),
       martL = zea)
#>              Gene.stable.ID Gene.description GO.term.accession
#> 1 HORVU.MOREX.r3.4HG0381830               NA        GO:0016788
#> 2 HORVU.MOREX.r3.4HG0381830               NA        GO:0016788
#>                                GO.term.name Gene.stable.ID.1 Gene.description.1
#> 1 hydrolase activity, acting on ester bonds  Zm00001eb395980     Zm00001e038320
#> 2 hydrolase activity, acting on ester bonds  Zm00001eb395980     Zm00001e038320
#>   GO.term.accession.1                            GO.term.name.1
#> 1          GO:0016788 hydrolase activity, acting on ester bonds
#> 2          GO:0016787                        hydrolase activity
```
biostars
{"uid": 9507620, "view_count": 1178, "vote_count": 1}
I am using BWA to align files. I have four directories: seqtk_1, seqtk_2, seqtk_3, seqtk_4. Within each of those directories I have 10 subdirectories: subsample_1, subsample_2, subsample_3, etc. Within each of those subdirectories I have 20 paired-end read files (so from 10 genomes). I want to put all the files from all the directories through a pipe and into an output directory (BWA). The structure of this directory is the same as described above. So I have 800 input files and 400 output files. I have written a script (below):

    echo "[info] creating filenames";
    for filename in ./Mock_Run/seqtk_*/subsample_*/*_1.fq.gz; do
        file=`echo $filename|sed 's/_1.fq.gz//'`;
        filenopath=`basename $file`;
        for i in $(seq 10 $END); do
            echo subsample_$i;
            for u in $(seq 4 $END); do
                eval outpath=BWA/seqtk_$u/subsample_$i;
                echo "[info] starting BWA alignment...";
                bwa mem -v 0 combine_reference.fa.gz ${filenopath}_1.fq.gz ${filenopath}_2.fq.gz > ${outpath}/${filenopath}_BWA.sam;
                echo "[info] converting sam file to bam file";
                samtools view -bS ${outpath}/${filenopath}_BWA.sam > ${outpath}/${filenopath}_BWA.bam;
                echo "[info]filtering unmapped reads....";
                samtools view -h -f 4 ${outpath}/${filenopath}_BWA.bam > ${outpath}/${filenopath}_unmapped.bam;
                echo "[info] filtering mapped reads...";
                samtools view -h -F 4 ${outpath}/${filenopath}_BWA.bam > ${outpath}/${filenopath}_mapped.bam;
                echo "[info] sorting files";
                samtools sort -o ${outpath}/${filenopath}_mapped_sorted.bam ${outpath}/${filenopath}_mapped.bam;
                samtools sort -o ${outpath}/${filenopath}_unmapped_sorted.bam ${outpath}/${filenopath}_unmapped.bam;
                echo "[info] finished...no error to report";
            done;
        done;
    done

It loops through all the files (like I wanted) and puts them into the right output subdirectory (like I wanted). It all seems to work, except it continues to loop. Once it has gone through all the files, it then starts again. Any help would be appreciated.
You should parse the seqtk folder and subsample folder directly from `$filename`; see below for a suggestion. Your second and third for loops are already contained in the first.

First loop:

    for filename in ./Mock_Run/seqtk_*/subsample_*/*_1.fq.gz;

Second loop:

    for i in $(seq 10 $END); do
        echo subsample_$i;

You are already globbing `subsample_*` and `seqtk_*`, so there is no need for `for i ... do subsample_$i` and `for u ... do seqtk_$u`, is there? Try something like (untested):

    SEQTK=$(echo $filename | sed "s|\.\/Mock_Run\/\(seqtk.*\)\/subsample_.*|\1|")
    echo $SEQTK

Same for subsample.
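A fuller (untested) sketch of the single-loop version along those lines, deriving both folder names from the path and keeping the full path for bwa's inputs:

```bash
for filename in ./Mock_Run/seqtk_*/subsample_*/*_1.fq.gz; do
    file=${filename%_1.fq.gz}
    filenopath=$(basename "$file")
    seqtk_dir=$(echo "$filename" | sed 's|\./Mock_Run/\(seqtk_[0-9]*\)/.*|\1|')
    sub_dir=$(echo "$filename"   | sed 's|.*/\(subsample_[0-9]*\)/.*|\1|')
    outpath=BWA/$seqtk_dir/$sub_dir
    mkdir -p "$outpath"
    bwa mem -v 0 combine_reference.fa.gz "${file}_1.fq.gz" "${file}_2.fq.gz" \
        > "$outpath/${filenopath}_BWA.sam"
done
```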
biostars
{"uid": 269775, "view_count": 2738, "vote_count": 2}
Hi guys, genotype imputation needs a human genetic map file, whose format looks like this (for chr1):

    position COMBINED_rate(cM/Mb) Genetic_Map(cM)
    55550 0 0
    568322 0 0
    568527 0 0
    721290 2.685807669 0.410292036939447
    723819 2.8222713027 0.417429561063975
    723891 2.9813105581 0.417644215424158
    728242 2.9806151254 0.430612871834774
    729948 3.0780969498 0.435864105231133
    740857 3.075133293 0.46941073432447

There are 3 columns. The first column is the physical position in units of bp, which is easy to understand. The second and third columns are the recombination rate (cM/Mb) and the genetic position (cM, centiMorgan). I'm confused about the values in columns 2 and 3. Let's take the 4th row as an example. I know 721290 means the 721290th position on chr1. So what do the 2.68 and 0.41 represent? I looked up some webpages and learned that "A centiMorgan is a unit of genetic distance that represents a 1% probability of recombination during meiosis." But I still cannot figure out the meaning of 2.68/0.41 at position 721290. Can anyone explain it to me? Thanks! Tao
Hi, Tao. Let's take the 4th row as an example.

1. 721290 means the 721290th position on chr1, which is also 0.721290 Mb.
2. To compute the 2nd column you don't use the raw numbers from the 1st and 3rd columns, but the differences from the previous row: in the 4th row, it is (0.410292... - 0) / (0.721290 - 0.568527), which gives us 2.685807669. So, in each row, you subtract the column-1 and column-3 values of the previous row to recover the number in column 2.

Hope that it'll help you. Igor
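In other words, the rate column is the slope of the genetic map between consecutive rows; a quick check in Python:

```python
# verify row 4: rate (cM/Mb) = delta genetic position (cM) / delta physical position (Mb)
cm_prev, cm = 0.0, 0.410292036939447
bp_prev, bp = 568527, 721290
rate = (cm - cm_prev) / ((bp - bp_prev) / 1e6)
print(rate)  # ~2.685807669, matching the file
```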
biostars
{"uid": 222697, "view_count": 6942, "vote_count": 5}
If I made a fake fasta with 10x the material of a book, randomly cut and spliced, and used this to align the book, could tophat2 reconstruct the book? Or does it only work with the ATCG letters? Just a question that came to my mind today.
You would need to re-encode the book as ACGT. For example, 1 ASCII character is 8 bits, corresponding to 4 nucleotides if you use the simplest possible encoding (rather than trying to pack into 7 or 6.5 bits, or whatever). Thus for an ASCII-formatted text file of the book, the encoded book would be 4x as long, but the mapping would work fine. You MIGHT be able to map to the raw book using some protein aligners, as those allow more symbols. As Istvan said, though, you'd need to use an assembler to reconstruct the book, not an aligner.
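A minimal sketch of that simplest encoding in Python (2 bits per base, so each 8-bit character becomes 4 nucleotides):

```python
BASES = "ACGT"

def encode(text):
    """Map each byte to 4 bases, taking 2 bits at a time from high to low."""
    return "".join(BASES[(b >> s) & 3] for b in text.encode() for s in (6, 4, 2, 0))

print(encode("Hi"))  # 2 bytes -> 8 nucleotides: CAGACGGC
```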
biostars
{"uid": 146643, "view_count": 1899, "vote_count": 5}
Hello clever community! I need your advice. I am working on a *de novo* plant genome assembly of ~400 Mb. I have Chromium 10x data, which was assembled with supernova. I also have Illumina paired end reads. Now I have additional data of PacBio reads, roughly 120x. The genome is diploid and I am thinking about using Falcon. What do you think should be the best strategy:

1. Assembling PacBio reads and then using a tool to integrate the two assemblies? Is there anything like this? Which tool would you use?
2. Using a tool that can assemble the genome from both the chromium and the PacBio reads? Is there anything like it?
3. Assembling the PacBio reads and using chromium 10x and the illumina for polishing? If I assemble with Falcon, what tool should I use for polishing?
4. Anything else that I am missing to get the best out of what I can get?

Thank you very much in advance! Alex
Falcon is not a bad choice; an alternative might be Canu (if you have the computational resources for it).

1) MEDUSA (as well as QuickMerge) is one of those integrating assembly/scaffolding tools

3) Pilon, Arrow, and there will be others, I guess

4) Canu, but with the same remark as https://www.biostars.org/u/22207/ for MaSuRCa
biostars
{"uid": 324156, "view_count": 3032, "vote_count": 2}
I am working on a project comparing RNAseq quantification results between Illumina short-reads and Nanopore long-reads and I have a couple questions about comparing the quantification results from these two technologies. More specifically I need some help with figuring out how to normalize the data for the comparisons within samples and between samples. So far I have come up with the following plan: 1. Using CPM to compare gene/transcript expression within each sample sequenced with nanopore. For example, comparing if gene.X transcripts are more abundant than gene.Y transcripts within sample_1 sequenced with nanopore. Using CPM instead of TPM for nanopore seems like a good option since our nanopore runs do not have transcript length bias. Does this sound like a good strategy? 2. Using TPM to compare gene/transcript expression within each sample sequenced with illumina. For example, comparing if gene.X transcripts are more abundant than gene.Y transcripts within sample_1 sequenced with illumina. Using TPM instead of CPM for illumina seems like a good option since illumina has transcript length bias (a single long transcript will have more counts that a single short transcript). Does this sound like a good strategy? 3. Here is where I am having trouble coming up with a good normalization strategy. Comparing gene/transcript expression between the same sample sequenced with illumina and nanopore. e.g., performing a spearman correlation between gene expression in sample_1 sequenced with illumina and sample_1 sequenced with nanopore. I am not sure what would work here since Illumina has transcript length bias and nanopore does not. Do you have any suggestions? Any help here will be greatly appreciated. Best, Bernardo
I did some work in this area before; it definitely has a lot of challenges. The biggest difference is that short-read RNA-seq measurements are calculated considering the transcript length (they are normalized against it), while the long-read tools do not normalize to transcript length: the reads are the actual transcripts if you are using direct RNA-seq, due to the science behind the chemistry. I believe there is a paper that attempts to do this: https://academic.oup.com/nar/article/50/4/e19/6439677 They use an EM-based method, similar to kallisto or salmon for Illumina. I will report back on my work later when I have more time. I worked with the great @rpolicastro at the time. He must have some great thoughts.
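For what it's worth, here is a minimal sketch (my own, with made-up counts and gene lengths, not from the paper above) of the two normalizations being contrasted:

```
# CPM ignores transcript length (suits long reads); TPM divides by it first.
import numpy as np

counts = np.array([500.0, 1000.0, 2000.0])   # raw counts per gene (toy data)
lengths_kb = np.array([0.5, 2.0, 4.0])       # hypothetical effective lengths (kb)

cpm = counts / counts.sum() * 1e6            # length-unaware

rpk = counts / lengths_kb                    # reads per kilobase
tpm = rpk / rpk.sum() * 1e6                  # length-aware

print(cpm, tpm)
```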
biostars
{"uid": 9552419, "view_count": 268, "vote_count": 2}
Since phylogenetic analysis largely depends on a quality of primary sequence data I usually curate alignments manually and exclude phylogenetically uninformative or misleading sites. But do you know any tool that could tell me how good my alignment is before sending it to phylogenetic program? Are there any programs or methods for assessing the phylogenetic signal that comes from a given alignment?
The 'quality' of an alignment is somewhat ambiguous given that an alignment is an inference of homology; we may or may not have a good idea of which sites are homologous in any given sequence set. Alignment algorithms compute alignment scores by assigning certain values to matches, mismatches, insertions/deletions, and gap extensions. These scores are then used to evaluate whether or not an alignment is better than another by simply comparing scores. However, the scoring scheme is arbitrary. If you are concerned with the quality of your hand-curated alignment (and you may not need to be - expert 'by eye' alignments are often considered acceptable!), I would use your aligner of choice (MAFFT, perhaps?) and estimate the score of your alignment and compare it to the alignment produced or refined by the program. One other concern: excluding misleading sites is one thing (the program Gblocks will remove regions thought to be resulting from spurious alignment), but removing non-informative sites can impact your analysis. For model-based phylogenetic approaches, invariant or slowly-evolving sites are included in the model as either a proportion of invariable sites or as part of the gamma distribution modeling among-site rate heterogeneity. If you are only selecting variable sites, it may be inappropriate to concatenate them and apply a model or a single evolutionary history. If this is the case, I would recommend [SNAPP][1] by David Bryant and others which estimates species trees from SNP data while treating individual gene trees as nuisance parameters. Hope this helps. [1]: http://www.beast2.org/wiki/index.php/SNAPP
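As an aside, here is a toy illustration of the scoring idea mentioned above (my own sketch; the match/mismatch/gap values are arbitrary placeholders, not taken from any particular aligner):

```
# Score one pairwise alignment with a simple match/mismatch/affine-gap scheme.
def score(a, b, match=2, mismatch=-1, gap_open=-5, gap_extend=-1):
    s, in_gap = 0, False
    for x, y in zip(a, b):
        if x == "-" or y == "-":
            s += gap_extend if in_gap else gap_open
            in_gap = True
        else:
            s += match if x == y else mismatch
            in_gap = False
    return s

# Alignments can then be compared by simply comparing their scores.
print(score("ACGT-ACCA", "ACGTTAC-A"))
```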
biostars
{"uid": 114570, "view_count": 4163, "vote_count": 3}
I want to compare two BAM files to see whether they are the same, how many reads overlap, and how many reads are unique. The bam1 file was obtained several years ago; I then converted it to bam1_r1_fq and bam1_r2_fq, remapped to the new reference genome, and got a new SAM file. Then I sorted, added a header etc. and got the bam2 file. When I use

    samtools view -f 64 -F 2304 bam1 | cut -f 1,3,4 | LC_ALL=C sort -t ' ' -k 1,1 > bam1.txt
    samtools view -f 64 -F 2304 bam2 | cut -f 1,3,4 | LC_ALL=C sort -t ' ' -k 1,1 > bam2.txt
    join -t ' ' -1 1 -2 1 bam1.txt bam2.txt > bam.comparaison

I get an error like this:

    join: bam1.txt:4077: is not sorted: HWI-D00222:175:H7C06ADXX:2:1101:1250:20054 MT 10679
    join: bam2.txt:4077 is not sorted: HWI-D00222:175:H7C06ADXX:2:1101:1250:20054 MT 10679

Then I wanted to try "cmpbams" that @Pierre Lindenbaum wrote, and it showed

    mkdir -p lib/com/github/samtools/htsjdk/2.9.1/ && curl -Lk -o "lib/com/github/samtools/htsjdk/2.9.1/htsjdk-2.9.1.jar" "http://central.maven.org/maven2/com/github/samtools/htsjdk/2.9.1/htsjdk-2.9.1.jar"
    /bin/bash: curl: command not found
    maven.mk:102: recipe for target 'lib/com/github/samtools/htsjdk/2.9.1/htsjdk-2.9.1.jar' failed
    make: *** [lib/com/github/samtools/htsjdk/2.9.1/htsjdk-2.9.1.jar] Error 127

How can I fix one of these problems, or both? Thanks in advance.
Since many aligners work in non-deterministic mode, the files will not be 100% identical; even more so here, given the different reference genomes and any differences due to the actual alignment logic (e.g., if you used different aligners or different versions of the same aligner). It also appears that the program `curl` is missing from your OS; install it using the appropriate means.
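If all you need are the shared/unique counts, a hedged pysam sketch (my own, untested on your data) along the lines of your shell pipeline might look like this; it compares primary first-in-pair read names only and ignores positions:

```
import pysam

def primary_r1_names(path):
    names = set()
    with pysam.AlignmentFile(path, "rb") as bam:
        for aln in bam.fetch(until_eof=True):
            # first-in-pair, no secondary/supplementary (like -f 64 -F 2304)
            if aln.is_read1 and not aln.is_secondary and not aln.is_supplementary:
                names.add(aln.query_name)
    return names

a = primary_r1_names("bam1.bam")
b = primary_r1_names("bam2.bam")
print("shared:", len(a & b), "only bam1:", len(a - b), "only bam2:", len(b - a))
```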
biostars
{"uid": 254790, "view_count": 2174, "vote_count": 1}
Hi - when using VEP to get allele frequency information for a list of SNPs, I ask for the gnomad_AF column. I have noticed, however, that around a third of my list return an NA value. How should I interpret this? Does it mean that, across the combined frequency consortia spanning multiple continents, this SNP has never been seen (i.e., ultra-rare)? Some of the SNPs do appear in dbSNP, however, but I guess the exomes/genomes used for gnomAD might not coincide with those reported to dbSNP? This seems about right to me - any lab can report to dbSNP, but only a subset of the population ends up in consortia such as 100k Genomes and ExAC. Could anyone provide a definitive answer on this? Can I count a SNP as "ultra-rare" if it does not appear in gnomAD? Thanks
I think it does count as ultra-rare if gnomAD doesn't have it. As long as the comparison tool is configured to handle multi-allelic sites correctly (as in, you're *sure* the variant is not in gnomAD), it can be labelled ultra-rare.
biostars
{"uid": 308357, "view_count": 2123, "vote_count": 1}
It might be a very stupid question for many of you but, since it's my first variant calling, I haven't figured it out yet. I have **mpileup**'ped two bam files from two samples, then I filtered the results with **vcfutils.pl** and called the genotypes with **bcftools call**. Now I have a **VCF** file containing what I want, but I managed to analyze differences only between the two samples and the reference (which is an assembly coming from a line that is *different from both* samples). I have the variants between my two samples and the assembly, but what if I want to detect the differences **between** the two samples? I did it with awk / sed / cut and other command line tools, but is there maybe a better and more straight-forward way to do that? Perhaps using bcftools? Up to now, I didn't find it. Any suggestion appreciated! EDIT: I did it with bcftools gtcheck as well, that works fine. I am asking if there are **other** reliable tools to test! EDIT 2: As this post has many views now, I guess it will be useful for everyone to know that I used **bcftools isec** and it worked brilliantly.
I was recently in a very similar situation. I wrote a bash script that finds the "similarity" between two VCF files. Someone might find it useful:

    #!/usr/bin/env bash
    # Unique CHROM/POS/REF/ALT records from each VCF
    sort -u <(grep -v '^#' $1 | cut -f1,2,4,5) > a
    sort -u <(grep -v '^#' $2 | cut -f1,2,4,5) > b
    comm -23 a b > a_only   # records only in the first VCF
    comm -13 a b > b_only   # records only in the second VCF
    comm -12 a b > ab       # records shared by both
    numer=`cat a_only b_only | wc -l`
    denom=`cat ab a_only b_only | wc -l`
    dist=`echo "$numer/$denom" | bc -l`
    sim=`echo "(1-$dist)*100" | bc -l`
    echo $sim
    # CAREFUL
    rm -f a_only b_only a b ab

If you put it in `sim.sh` then you can use it like so: `$ ./sim.sh sample_1.vcf sample_2.vcf`
biostars
{"uid": 224919, "view_count": 16415, "vote_count": 14}
How can I use the IPA (Ingenuity Pathway Analysis) software, step by step, to analyze the interaction between these genes: IL3RA CD38 CS MS4A1 GPC2 in the case of **lymphoma**? Thank you for your enlightenment. Best regards, Dito Anurogo (A Ph.D. student at Taipei Medical University, Taiwan)
There are several YouTube videos, manuals, and tutorials on the product owner's website; please consult them. The vendor may also offer training sessions. Most of these vendors employ application scientists, so consider contacting them.
biostars
{"uid": 9521801, "view_count": 471, "vote_count": 1}
This is a duplicate of: 1. https://www.biostars.org/p/7372/ from 5.4 years ago and 2. https://www.biostars.org/p/65920/ from 3.5 years ago. Nevertheless, I found the answers on the second post enlightening, and 3.5 years later it seems worthwhile to have an update from the community (and @lh3) on how you deal with these issues. Notably, @Aaronquinlan's gemini from the second most upvoted answer to question #2 is still under active development. Are people using it? What else are people doing?
This is called "variant warehousing" and there are several open source and commercial efforts in various stages: Golden Helix VSWarehouse Paradigm4 SciDB WuXi NextCODE CMH Variant Warehouse ViaGenetics Genesis Curoverse Lightning Intel GenomicsDB Cloudera OMICS
biostars
{"uid": 211076, "view_count": 1444, "vote_count": 1}
EDIT: SOLUTION

As @rpolicastro said, the problem had to do with the conda channels. The solution for me was:

    conda create -n cutadapt -c conda-forge -c bioconda cutadapt python=3.9

ORIGINAL POST

Hello everyone, I'm having trouble installing cutadapt. I just installed conda and created a new environment for cutadapt with a specific python version (since it sent me a message about that before).

    conda create -n cutadapt python=3.9

When I enter the command `conda install -c bioconda cutadapt` the following message appears

```
UnsatisfiableError: The following specifications were found to be incompatible with each other:
Output in format: Requested package -> Available versionsThe following specifications were found to be incompatible with your system:
- feature:/linux-64::__glibc==2.35=0
- feature:|@/linux-64::__glibc==2.35=0
- cutadapt -> libgcc-ng[version='>=9.3.0'] -> __glibc[version='>=2.17']
Your installed version is: 2.35
```

But when I run `conda list` in my cutadapt environment to see the packages it says:

```
packages in environment at /home/camila/miniconda3/envs/cutadapt:
Name Version Build Channel
_libgcc_mutex 0.1 main
_openmp_mutex 4.5 1_gnu
ca-certificates 2022.4.26 h06a4308_0
certifi 2021.10.8 py39h06a4308_2
ld_impl_linux-64 2.35.1 h7274673_9
libffi 3.3 he6710b0_2
libgcc-ng 9.3.0 h5101ec6_17
libgomp 9.3.0 h5101ec6_17
libstdcxx-ng 9.3.0 hd4cf53a_17
ncurses 6.3 h7f8727e_2
openssl 1.1.1o h7f8727e_0
pip 21.2.4 py39h06a4308_0
python 3.9.12 h12debd9_0
readline 8.1.2 h7f8727e_1
setuptools 61.2.0 py39h06a4308_0
sqlite 3.38.3 hc218d9a_0
tk 8.6.11 h1ccaba5_1
tzdata 2022a hda174b7_0
wheel 0.37.1 pyhd3eb1b0_0
xz 5.2.5 h7f8727e_1
zlib 1.2.12 h7f8727e_2
```

So I don't know what the problem may be. Any thoughts? Thanks a lot.
It could be related to your channels or the solver is having a problem with your specific linux install. First try ensuring proper channel order when you go to create the environment. ``` conda create -n cutadapt -c conda-forge -c bioconda cutadapt python=3.9 ``` If that doesn't work try using the beta libmamba solver which is more powerful than the current one. ``` conda update -n base conda conda install -n base conda-libmamba-solver conda create -n cutadapt --experimental-solver=libmamba -c conda-forge -c bioconda cutadapt python=3.9 ```
biostars
{"uid": 9522705, "view_count": 2258, "vote_count": 2}
Struggling with biostats and association study designs. I initially wanted to do an association study comparing two populations and seeing which SNPs are significant. For example, I am looking at just the variation between centenarians (people who live >100 years) and a control group. Should I be including age as a covariate? I am interested in detecting longevity variants or anything that suggests a difference from the control group. I do not think age would be necessary, since it is not a confounder: it is not an independent covariate. I think I read that adding independent covariates can decrease the power of the study. When do you start considering modeling with linear or logistic regressions instead of GWAS? Is it when you have a dependent variable and a predictor variable you are interested in? Do you think I should have added covariates? Or is there another way, perhaps changing the case and control populations to groups that reflect covariate status? For example, if I were interested in centenarians with Alzheimer's compared to a control population with Alzheimer's, would a logistic regression with Alzheimer's status be more appropriate than running a GWAS on them? Sorry for so many questions.
I'm a bit self-taught in this but since no-one has answered in 2 days I'll give it a try; others can feel free to chip in.

> Should I be including age as a covariate?

You add covariates when you expect that they have an influence on your phenotype - for example, gender often has an influence, but most commonly in humans it's population structure (like PC1 and PC2 and sometimes PC3 from PCA software). In your case with old age I wouldn't use age as a covariate, as it's practically identical to your phenotype of interest; it's not independent.

Generally, I'd add covariates if you have surprisingly tiny p-values, and especially if your QQ-plots look bad. For a good introduction to reading and interpreting QQ-plots, see here: [Behavior of QQ-Plots and Genomic Control in Studies of Gene-Environment Interaction](http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0019416). How to plot QQ-plots depends on the format of your output (some software like GAPIT does it automatically).

Have a look at these two tutorials for more on covariates, QQ-plots and population stratification: http://sites.tufts.edu/cbi/files/2013/01/GWAS_Exercise4.pdf and http://sites.tufts.edu/cbi/files/2013/02/GWAS_Exercise6_Stratification.pdf (these also have code for plotting QQ-plots for PLINK results).

> When do you guys start considering modeling with linear or logistic regressions instead of GWAS?

I'm not sure I understand this question; regressions are a part of GWAS. In my experience running a regression with PLINK gives relatively similar results to a mixed linear model as for example implemented in GAPIT. The actual p-values are different but the SNPs with the lowest p-values stay roughly the same.
biostars
{"uid": 156344, "view_count": 9742, "vote_count": 5}
I have an article in which two approaches to docking with multiple protein structures are described (multiple docking and ensemble docking), but it is not clearly stated how these approaches differ.
Ensemble docking refers to the approach of generating multiple structures of the target molecule using molecular dynamics simulation and study docking to this ensemble of target structures. Multiple docking refers to the breaking down of the target structure into smaller components that are then used for docking in various combinations. Both of these approaches try to account for conformational changes of the target protein.
biostars
{"uid": 411268, "view_count": 783, "vote_count": 2}
I'm using the SRA toolkit to convert some SRA files to Fastq format. I've been looking at the documentation to make sure I'm doing things right, and the word *spot* keeps coming up. My question is twofold.

1. What is a *spot* and how does it differ from a read?
2. Where is this (officially) documented (or is it)?

The reason I've separated these two questions is that I think I know the answer to the first one, but I'm not sure and I can't find the answer in any of the documentation or online. Also, I expect more people will know the answer to question #1 than question #2.
This is the description I received from the SRA staff (Adam Stine):

The spot model is Illumina GA centric. The flowcells have the locations where the adapters have stuck them to the glass of the lane. There are X and Y coordinates that identify these 'spots'. As the camera reads the fluorescent flashes during sequencing, the coordinates indicate which spot the new base is added to. All of the bases for a single location constitute the spot. There may be one or more divisions of those bases for technical reads (adapters, primers, barcodes, etc) and there will always be at least one biological read (forward, reverse). I usually think of the technical reads as the "known" sequence and the biological as the "unknown". When we store the data, the bases for a single spot are all stored as one string, with a description of where the breaks occur as well as the type of read each segment represents. The spot length is the expected total length for all reads (used as a check to make sure we have all the data). As an example, a 2x150 run with a 6bp barcode and 12bp primer on the forward read would have 4 reads:

    0 - barcode basecoord 1
    1 - primer basecoord 7
    2 - forward basecoord 19
    3 - reverse basecoord 151

---

But you only need to tell SRA about the barcode and primer if you submit sequences that contain them. In my case, a third party provided me with the BAM files and I do not have the untrimmed sequences. So the SPOT data model is useful for supplying untrimmed BAMs, yet it enables you to specify where the biological reads begin. In my case, I have 2x100 bp without index and I am only supplying the Application read with the adapter trimmed, so I simply submit:

    0 - forward basecoord 1 (Application read)
    1 - reverse basecoord 101 (Application read)
biostars
{"uid": 12047, "view_count": 23288, "vote_count": 23}
I understand that different Illumina sequencers can output different numbers of reads (i.e. HiSeq instruments can produce 40-400 million reads/lane, whereas MiSeq can produce 5-25 million reads/lane). My question is why? What factors in the Illumina next-generation sequencing technology impact read count outputs from these sequencers? For example, can certain instrument settings increase or decrease the number of reads produced (assuming that equal amounts of DNA were supplied to the sequencer)?
The two factors that limit the amount of data are 1) the maximum cluster density (i.e., number of clones) that can be imaged and resolved by the camera, and 2) the surface area of the flow cell lane that's imaged (MiSeq flow cells are much smaller than HiSeq).
biostars
{"uid": 167218, "view_count": 2471, "vote_count": 2}
I would like to use snakemake to analyze my data sets. As I am going to work with different organisms, I would like snakemake to create a folder for each of them when indexing the genome. let's say I would like to work with human and mouse data. In my `config.yaml` file I have the following snippet: organism: "Dmel" # "Dpse" Each time I am working with an organism I can comment out the others in the list. Below are the rules for getting the fastA files as well as the annotation and the indexing step (as an example) with STAR. #### get reference genomic data for the mapping (fasta and gtf files). Links are added in the config file rule get_genome: output: fastA="genome/genome.fa", gtf="genome/genome.gtf" shell: """ wget -nc -O - {config[fastA]} | gunzip -c - > {output.fastA} wget -nc -O - {config[gtf]} | gunzip -c - > {output.gtf} """ ### Indexing the reference genome rule star_index: input: fasta="genome/genome.fa", gtf="genome/genome.gtf" output: directory("genome/starIndex/") threads: 16 params: prefix = {config["organism"]} shell: "mkdir -p {output} && " "STAR --runThreadN {threads} " "--outFileNamePrefix {output}{params.prefix} " "--runMode genomeGenerate " "--genomeDir {output} " "--limitGenomeGenerateRAM {config[RAM]} " "--genomeSAindexNbases {config[SAindex]} " "--genomeFastaFiles {input.fasta} " "--sjdbGTFfile {input.gtf} " "--sjdbOverhang 100" I would like to know how to add the value from the config file in the organism, e.g. `config[organism]` into the rules, so that, when I download the fastA and gtf they will be renamed accordingly. I have tried to add `config[organism]` to the path, but it gives me an error. I have also tried this for example: rule get_genome: output: fastA="genome/{config[organism]}.fa", gtf="genome/{config[organism]}.gtf" But it just creates the files `{config[organism]}.gtf` and `{config[organism]}.fa`. Can someone please help me understand how I can add the parameter from the config file to the different paths in the various rules? thanks
You'd need `wildcards`. https://snakemake.readthedocs.io/en/stable/snakefiles/rules.html#wildcards [~/Data/scratch/tmp/biostar]$ cat wildcards.py rule: input: expand("{organisms}/starIndex/", organisms=config['organism']) rule get_genome: output: fa = touch('{organism}/{organism}.fa'), gtf = touch('{organism}/{organism}.gtf') run: # remove `touch` and implement logic to download fa and gtf # you can access organism via wildcards, i.e. config[wildcards.organism] pass rule star_index: input: fa = '{organism}/{organism}.fa', gtf = '{organism}/{organism}.gtf' output: directory("{organism}/starIndex/") shell: 'mkdir -p {output}' [~/Data/scratch/tmp/biostar]$ snakemake -s wildcards.py --config organism=GRCh38 ... [~/Data/scratch/tmp/biostar]$ tree GRCh38/ GRCh38/ ├── GRCh38.fa ├── GRCh38.gtf └── starIndex 1 directory, 2 files In this case, `config['organism']` can be a list of organisms and the workflow will expand itself. Hope the example helps.
biostars
{"uid": 406385, "view_count": 1863, "vote_count": 2}
Are there any programs available (Python, Perl, whatever) that can take a degenerate nucleotide sequence and translate it into its multiple possible oligos?

Any help would be appreciated.
Like Pierre said, this is a combinatorial problem that can quickly blow up in your face. However, that aside, it is easy to generate all the combinations using a recursion. Here is a compact implementation in Perl:

```
#!/usr/bin/perl -w

# Lookup table of degenerate IUPAC nucleotide codes.
my %deg2nuc = (
"R" => ["A", "G"],
"Y" => ["C", "T"],
"S" => ["G", "C"],
"W" => ["A", "T"],
"K" => ["G", "T"],
"M" => ["A", "C"],
"B" => ["C", "G", "T"],
"D" => ["A", "G", "T"],
"H" => ["A", "C", "T"],
"V" => ["A", "C", "G"],
"N" => ["A", "C", "G", "T"]
);

# Recursive function that replaces degenerate nucleotides with all combinations.
sub generate {
    if ($_[0] =~ /(.*)([RYSWKBDHVN])(.*)/) {
        my $head = $1;
        my $tail = $3;
        my @seqs;
        foreach my $nuc (@{$deg2nuc{$2}}) {
            push @seqs, generate($head.$nuc.$tail);
        }
        return @seqs;
    }
    else {
        return $_[0];
    }
}

# Demo: print all sequences generated from ANCRG.
print join("\n", generate("ANCRG")), "\n";
```

EDIT: On second thought, that was not at all compact by Perl standards. Here is the really compact version that ventures into *code golf* territory:

```
#!/usr/bin/perl -w

my %A = ("R"=>1, "W"=>1, "M"=>1, "D"=>1, "H"=>1, "V"=>1, "N"=>1);
my %C = ("Y"=>1, "S"=>1, "M"=>1, "B"=>1, "H"=>1, "V"=>1, "N"=>1);
my %G = ("R"=>1, "S"=>1, "K"=>1, "B"=>1, "D"=>1, "V"=>1, "N"=>1);
my %T = ("Y"=>1, "W"=>1, "K"=>1, "B"=>1, "D"=>1, "H"=>1, "N"=>1);

$_ = "ANCRG\n";
while (s/(.*)([RYSWKBDHVN])(.*)\n/(exists $A{$2} ? "$1A$3\n" : "").(exists $C{$2} ? "$1C$3\n" : "").(exists $G{$2} ? "$1G$3\n" : "").(exists $T{$2} ? "$1T$3\n" : "")/e) {}
print;
```
biostars
{"uid": 6219, "view_count": 9317, "vote_count": 2}
Hi all, My question is about finding a method to count enhancer RNAs from RNA-seq data. Although it may have been asked before, I could not find any post close to mine. I have a couple of fastq files of RNA-seq data. I am using TopHat for alignment. When I load my .bam files into IGV, I can visually see some reads in locations where enhancers bind. But when it comes to using a tool to count the reads in those locations, although those specific locations are included in my .gtf file with an Ensembl ID, the count is "0". My code for featureCounts is ``../anaconda/bin/featureCounts -a .../bcbio_ku/share/bcbio/genomes/Hsapiens/GRCh37/rnaseq/ref-transcripts.gtf -o .../work/my_generated.counts -t transcript -s 0 -p -C .../final/Unif_DMSO/Unif_DMSO-ready.bam`` Is there anyone using an alternative method for enhancer RNA counting? I believe a lincRNA counter or miRNA counter would help me build the method too. The last option for me is to create a unique .gtf file according to "http://enhanceratlas.org/download.php" for each cell line through a Python script. Thanks already
You could use bedtools to count the number of reads mapping to your regions of interest:

    intersectBed -wa -wb -s -a bedFile -b BAMFile

Check the different parameters. Keep in mind that eRNAs are generally not well detected in most RNA-seq samples because of the way the libraries are prepared:

1) As eRNAs are generally non-polyadenylated, libraries should be prepared with a ribo-depletion kit to make sure non-polyadenylated RNAs are in the sample. If poly-A selection is used in library prep, eRNAs are removed/depleted.

2) eRNAs are degraded rapidly, so libraries should be sequenced at very high coverage to detect these events. Alternatively, instead of steady-state mRNA (like RNA-seq), sequencing the nascent RNA (like GRO-seq) enriches for eRNAs.

3) Not usually done, but if libraries are further size-selected, let's say to 50-500 nt long, the expected size range of eRNAs, you will enhance the eRNA signals.
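If you prefer Python, a hedged pysam sketch of the same counting idea (my own; note it ignores strand, unlike the `-s` flag above, and assumes a plain 3+ column BED plus a coordinate-sorted, indexed BAM):

```
import pysam

bam = pysam.AlignmentFile("sample.bam", "rb")
with open("enhancers.bed") as bed:
    for line in bed:
        chrom, start, end = line.split()[:3]
        n = bam.count(chrom, int(start), int(end))   # reads overlapping region
        print(chrom, start, end, n, sep="\t")
bam.close()
```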
biostars
{"uid": 228128, "view_count": 2976, "vote_count": 1}
Hello everyone, I am interested in analyzing some data from TCGA, specifically the LAML project. From there I managed to download the HTSeq counts for the whole cohort; however, I would like to map each sample to specific mutations for a given set of genes in order to perform DE analysis between pools of samples with the same gene mutated. I am struggling with this second part. I found an R package named RTCGAToolbox with which you can download the expression data and the mutation data, among others. Instead, what I am getting are the mutations present in the cohort rather than a table mapping samples to their mutations. Is there any possibility to download this kind of metadata, or do I have to manually check the MAF files for each sample? If I have to, how should I do it? Thanks in advance to everyone, Esteve P.S: I checked before writing this post if there was any similar one, but I didn't find any. Apologies if it is a repeated question.
I did a lot of the analysis for the TCGA AML paper. The simplest way to pull mutations is to go to the [paper's supplemental site](https://tcga-data.nci.nih.gov/docs/publications/aml_2012/) and pull Supplemental Table 6. Note that the variants are all on build 36, so liftover may be in order if you're working from a newer reference genome build.
biostars
{"uid": 281955, "view_count": 1650, "vote_count": 3}
Hello, I've watched a video on YouTube about OMIM. From it I learned that OMIM already had more than 3000 disease-associated genes as of last year. Now I want to get all of those genes from OMIM to do some experiments. I think maybe I should use the "Gene Map" of OMIM, but I still don't know how to search to get all disease-associated genes. Do you have any good advice or a guide? Thank you in advance, naulty
I endorse the alternative BioMart. I'd perhaps point to another [link][1] as well. The reason is that there can be a lag between the latest version of the Ensembl database from our main site (as part of our [release cycle][2]) and the version available in Central BioMart. At the moment they are in sync, but as soon as we release a new version, it takes some time for them to update it. So just keep an eye on that. [1]: http://www.ensembl.org/biomart/martview/a718a2c2ac2c280d41fa088bc5595bd9 [2]: http://www.ensembl.info/blog/2014/04/17/ensembl-release-cycle-and-grch38/
biostars
{"uid": 118566, "view_count": 12846, "vote_count": 1}
Does anyone know of any tools/scripts that would score a MSA in a sliding window manner? I want to get the average identity across the length of an operon MSA, so I'm looking for something that gives me back a measure of conservation that I can graph. I found some old threads on BioStars, but a lot of the linked packages either aren't exactly what I'm looking for, or the links etc are just defunct. If I have to write this myself (probably with python), does anyone have any thoughts on appropriate ways to score each window?
I wrote my own in the end. It uses some bits of code I found online, but is a bit more performant: https://gist.github.com/jrjhealey/130d4efc6260dd76821edc8a41d45b6a
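For anyone who lands here, a minimal sketch of the idea (my own, assuming a FASTA-format MSA in a hypothetical file `operon.aln.fasta` and an ungapped pairwise-identity definition; window and step sizes are arbitrary):

```
# Mean pairwise identity in sliding windows across an alignment (Biopython).
from itertools import combinations
from Bio import AlignIO

aln = AlignIO.read("operon.aln.fasta", "fasta")
win, step = 100, 10

def pair_identity(a, b):
    # identity over columns where neither sequence is gapped
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    if not pairs:
        return 0.0
    return sum(x == y for x, y in pairs) / len(pairs)

for start in range(0, aln.get_alignment_length() - win + 1, step):
    sub = aln[:, start:start + win]
    ids = [pair_identity(str(r1.seq), str(r2.seq))
           for r1, r2 in combinations(sub, 2)]
    print(start, sum(ids) / len(ids))   # (window start, mean identity) to graph
```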
biostars
{"uid": 258014, "view_count": 3582, "vote_count": 1}
My lab has sequenced an exome using the "Agilent SureSelect All Exon Kits" (http://www.genomics.agilent.com/CollectionSubpage.aspx?PageType=Product&SubPageType=ProductDetail&PageID=3041)

![a little image to look nice](http://www.genomics.agilent.com/files/ProductImages/AllExon225.jpg)

Does anyone know where I can find a **BED** describing the captured regions?

P.
According to this post http://www.biostars.org/p/5187, maybe you can find the information in Agilent's eArray: https://earray.chem.agilent.com/earray/
biostars
{"uid": 57675, "view_count": 26988, "vote_count": 4}
Hi everyone, I have two unequal data sets.

A data set

    V1 V2 V3
    6 42721754 42721769
    6 42721757 42721772
    6 42721760 42721775
    6 42721763 42721778
    6 42721766 42721781
    6 42721769 42721784
    6 42721772 42721787
    6 42721775 42721790

B data set

    V2 AF
    42721757 0.003067485
    42721760 0.006134969
    42721763 0.006134969
    42721766 0.003067485
    42721769 0.006134969
    42721772 0.006134969
    42721775 0.003067485
    42721778 0.006134969
    42721781 0.003067485
    42721784 0.003067485
    42721787 0.009202454
    42721790 0.009202454

I want to check if the value of `V2` in `B data set` falls between the values of `V2` and `V3` of `A data set`. When it does, I want the value of `AF` in `B data set` to be added as a new column in `A data set`. There are two important points:

1. `B data set` is 80 rows while `A data set` is 6000 rows.
2. The value of `V2` in `B` may be repeated, which shouldn't get thrown out in the final output.

Urgent help would be appreciated. Thanks,
Hey, you can try the idea of Rashedul; however, using *BEDTools*, you may encounter issues with the sorting of the regions and also the fact that you implied how the co-ordinates in B can be repeated. Here is a solution in trusty AWK: cat A.txt V1 V2 V3 6 42721754 42721769 6 42721757 42721772 6 42721760 42721775 6 42721763 42721778 6 42721766 42721781 6 42721769 42721784 6 42721772 42721787 6 42721775 42721790 cat B.txt V2 AF 42721757 0.003067485 42721760 0.006134969 42721763 0.006134969 42721766 0.003067485 42721769 0.006134969 42721772 0.006134969 42721775 0.003067485 42721778 0.006134969 42721781 0.003067485 42721784 0.003067485 42721787 0.009202454 42721790 0.009202454 awk 'FNR==NR \ {if (NR>1) {arr[$1]=$2}; next} \ {if (FNR>1) \ {for (pos in arr) \ if ((pos > $2) && (pos < $3)) \ print $0"\t"arr[pos]}}' FS='\t' B.txt FS='\t' A.txt 6 42721754 42721769 0.003067485 6 42721754 42721769 0.006134969 6 42721754 42721769 0.006134969 6 42721754 42721769 0.003067485 6 42721757 42721772 0.006134969 6 42721757 42721772 0.006134969 6 42721757 42721772 0.003067485 6 42721757 42721772 0.006134969 6 42721760 42721775 0.006134969 6 42721760 42721775 0.003067485 6 42721760 42721775 0.006134969 6 42721760 42721775 0.006134969 6 42721763 42721778 0.003067485 6 42721763 42721778 0.006134969 6 42721763 42721778 0.006134969 6 42721763 42721778 0.003067485 6 42721766 42721781 0.006134969 6 42721766 42721781 0.006134969 6 42721766 42721781 0.003067485 6 42721766 42721781 0.006134969 6 42721769 42721784 0.006134969 6 42721769 42721784 0.003067485 6 42721769 42721784 0.006134969 6 42721769 42721784 0.003067485 6 42721772 42721787 0.003067485 6 42721772 42721787 0.006134969 6 42721772 42721787 0.003067485 6 42721772 42721787 0.003067485 6 42721775 42721790 0.006134969 6 42721775 42721790 0.003067485 6 42721775 42721790 0.003067485 6 42721775 42721790 0.009202454 This assumes that the files are tab-delimited and that they genuinely have the headers that you imply they have (the code instructs to skip the first line of each). If the files are not tab-delimited, then modify the values set for `FS`. Also, it assumes that you want to match ***between*** the regions specified in *A.txt*, so, '*less than*' or '*greater than*', but not '*less than or equal to*' or '*greater than or equal to*'. Modify `>` and `<` in the code to suit your needs. Kevin
biostars
{"uid": 402757, "view_count": 1315, "vote_count": 1}
Hi, can someone give a good definition of the difference (between ultraconserved elements and orthologs)? Thanks
UCEs are short- and medium-sized stretches of DNA that are identical between different species. They are often non-coding. Orthology always refers to coding sequences. It requires that function be conserved between different species, but there is no requirement for identical sequence. While presumably there is a requirement for functional conservation in both cases, evolutionary pressure for sequence conservation is larger on UCEs than on orthologs.
biostars
{"uid": 9517284, "view_count": 435, "vote_count": 2}
After I got the filtered VCF file of SNPs with the GATK pipeline, I tried to run PCA with GCTA. Afterwards, I tried to find the explained variation percentage for each PC (principal component). What confuses me is that the number of eigenvalues I get is always the same as the sample size. I thought that the number of possible PCs was the same as the dimension of the variables I have (which is the number of SNPs called, about 17000000 in this case), and that there should be the same number of eigenvalues related to it. Of course there could be identical eigenvalues, and PCs that explain little variation are useless, but isn't it possible that there could be 17000000 PCs and eigenvalues? So I thought that when I want to get the explained variation percentage of PC1, I had to divide it by the sum of the 17000000 eigenvalues. Could someone explain why this is wrong and why I always get the same number of eigenvalues as the sample size?
The principal components are the eigenvectors of the covariance matrix (or correlation, if you scale) of your data. Now, you have two choices: do you want to focus on the covariance between your N samples, or the covariance between your M SNPs? GCTA focuses on the former, whereas you thought it was the latter. GCTA's covariance matrix (aka GRM) is NxN, hence you will get N eigenvectors. You are not reducing the "SNP dimensions", you are reducing the "sample dimensions".
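A tiny numpy demo (my own) of why the eigenvalue count follows the sample size:

```
# An N x N sample-covariance (GRM-like) matrix yields at most N eigenvalues,
# no matter how many SNPs (M) were genotyped.
import numpy as np

N, M = 10, 1000                                        # 10 samples, 1000 SNPs
X = np.random.binomial(2, 0.3, size=(N, M)).astype(float)
X -= X.mean(axis=0)                                    # centre each SNP

grm = X @ X.T / M                                      # N x N, sample-by-sample
eigvals = np.linalg.eigvalsh(grm)
print(len(eigvals))                                    # -> 10, the sample size
```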
biostars
{"uid": 9487101, "view_count": 978, "vote_count": 1}
I'm trying the Tuxedo protocol as well as analysis using DESeq2; I wanted to see if I could see a difference in the DE genes. DESeq2 uses something called a count matrix, which I generated using both HTSeq and featureCounts. Now I want to know how to transform those counts to make them usable for a DESeq2 analysis. I see the count tables from HTSeq and featureCounts; what's the difference between the two outputs? For HTSeq I see a long list of Ensembl IDs with the counts, whereas for featureCounts there is a lot more information. I have 4 samples, Control vs Test with a replicate of each, so all together I have 4 count tables. So how can I use those counts in my DESeq2 analysis? Any help and suggestions would be highly appreciated.
The output of both HTSeq and featureCounts can *almost* be fed directly into DESeq2.

- **for featureCounts:**

Output looks like:

    Geneid Chr Start End Strand Length cond1 cond2
    SPBC460.05 I 16470 18062 + 1593 1 12 24
    SPBC460.02c II;II 8856;9651 9365;9803 -;- 663 329
    SPAC212.11 I 1 5662 - 5662 0 0 0

First you need to remove the first 5 columns and use the gene names as row names:

    cond1 cond2
    SPBC460.05 12 24
    SPBC460.02c 663 329
    SPAC212.11 0 0

then just use the `DESeqDataSetFromMatrix` function of DESeq2 to convert it.

- **for HTseqCounts:** see comments below
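If you would rather do the column stripping outside R, a hedged pandas sketch (my own; it assumes default featureCounts output with its leading '#' comment line):

```
import pandas as pd

# Drop the annotation columns so only the count matrix remains.
df = pd.read_csv("counts.txt", sep="\t", comment="#", index_col=0)
counts = df.drop(columns=df.columns[:5])   # Chr, Start, End, Strand, Length
counts.to_csv("count_matrix.tsv", sep="\t")
```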
biostars
{"uid": 238253, "view_count": 4478, "vote_count": 2}
Hi, I would like to calculate the percent-identity from a CIGAR string from a [BAM](http://samtools.sourceforge.net/SAM1.pdf)/SAM file containing alignments. I want to calculate the PID only for the aligned region, ignoring clipped ends ("H","S"). I can parse the CIGAR in R and get the sums for each letter in the CIGAR, so it's just a conceptual question about the definition and whether or not it is correct: so given the CIGAR contains characters "M" (match), "N" (skip), "D" (deletion), "I" (insertion), "S" (soft-clip), "H" (hard-clip), of which I ignore "S" and "H", and "M" is the total sum of M, N the total of N, etc.:

e.g.: `10S 20M5I5D20M 10S`

Is this a good way of defining the formula?

    pid1 := 100 * M / (M+N+I)

or maybe

    pid2 := 100 * M / (M+N+I-D)

the example: pid1 = ~88% but then pid2 = 100% (which wouldn't make much sense).

Related: http://biostar.stackexchange.com/questions/9358/is-there-any-r-package-to-parse-cigar-element-of-sam-files/17031#17031

Thank you very much
No one seems to have mentioned it, but how about the "NM" tag ("edit distance" in the SAM format specification)? I think this is the closest to "mismatches" that can be obtained in a relatively fast way by parsing bam files. One caveat: indels are always going to be a problem when defining identity, and I don't know how they are counted in the "NM" tag; perhaps different programs have different ways of counting them.
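If it helps, a hedged pysam sketch (my own) of an NM-based identity, using the aligned M/I/D length as the denominator (only one of several possible definitions):

```
import pysam

with pysam.AlignmentFile("aln.bam", "rb") as bam:
    for aln in bam.fetch(until_eof=True):
        if aln.is_unmapped or not aln.has_tag("NM"):
            continue
        # cigartuples: list of (op, length); ops 0/1/2 = M/I/D
        aligned = sum(l for op, l in aln.cigartuples if op in (0, 1, 2))
        pid = 100.0 * (aligned - aln.get_tag("NM")) / aligned
        print(aln.query_name, round(pid, 2))
```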
biostars
{"uid": 16987, "view_count": 14696, "vote_count": 11}
Hi, first post here. I'm trying to take the CDS out of various species' orthologous sequences. I'm running on a Linux server, and am mainly aiming to use Biopython or Linux programs for this. I've run OrthoFinder on 28 species of seaweed, which gave out roughly 10,000 orthogroup fasta files, each of which is a multi-fasta file. I've concatenated all of them into one huge multi-fasta file, and now I want to extract the sequences according to their species into new multi-fasta files (so 10k files -> 1 file -> 28 files, one per species). How do I do this? I'm still fairly new to Biopython, so I'm still wrapping my head around things. I know I'll definitely need SeqIO; I'm not sure what other libraries I'll need. I already have a text file with all the species listed, one per line. Thanks heaps for any help. Lachlan
Here is a Python code that reads one FASTA file and creates multiple FASTA files, one for each species:

    from Bio import SeqIO

    d = {}
    fh = open('OG0000036.fa')
    for seq_record in SeqIO.parse(fh, 'fasta'):
        # the species name is the last dash-delimited field of the record id
        species_name = seq_record.id.split('-')[-1]
        if species_name not in d:
            d[species_name] = open(f"{species_name}.fa", 'w')
        d[species_name].write(seq_record.format("fasta"))
    fh.close()
    # close the per-species output handles as well
    for out_fh in d.values():
        out_fh.close()

**OUTPUT:** FASTA files: Ahnfeltiopsis_flabelliformis.fa Betaphycus_philippinensis.fa Ceramium_kondoi.fa Chondrus_crispus.fa Chroodactylon_ornatum.fa Dumontia_simplex.fa Eucheuma_denticulatum.fa Glaucosphaera_vacuolata.fa Gloiopeltis_furcata.fa Gracilaria_blodgettii.fa Gracilaria_lemaneiformis.fa Gracilaria_sp..fa Grateloupia_catenata.fa Grateloupia_filicina.fa Grateloupia_livida.fa Grateloupia_turuturu.fa Heterosiphonia_pulchra.fa Kappaphycus_alvarezii.fa Mazzaella_japonica.fa Neosiphonia_japonica.fa Porphyridium_cruentum.fa Porphyridium_purpureum.fa Pyropia_yezoensis.fa Rhodochaete_parvula.fa

For example, `Grateloupia_filicina.fa` has three sequences:

    >scaffold-ZJOJ-2006011-Grateloupia_filicina
    MVVVFGLTGNDFALVVADMTSARSIMCFKHDEDKIERIDERKVLATAGEHSNRIEFSEYI
    QKNLALMKLQTGLELSNHGTANFIRNEVAKALRTRGAYNTNSIMAGFDETGPAQKVNFTA
    HGYASYFSLSVMDSKWRQDMTLEEGKKLVQECIDQLKSRFLINQPKFMMKIVTDQGITE
    >scaffold-ZJOJ-2006903-Grateloupia_filicina
    MDTLLGIAGEGFVVLAADAQVARSILLYKNDMDKIAHLSENKALACAGPQSDCVSFTEYI
    SKNMALYELNNDVKLSTKAAASFIRGELAKALRKGPFQTQILMGGVDKRAAAEAEGKDDA
    SLFWLDYLGTLQKVPYGAHGYGAAFTLSVMDREYVKGLSLDEALAIIDNCIKELHTRFLI
    AQKNFVIKVVTAEGIK
    >scaffold-ZJOJ-2055484-Grateloupia_filicina
    MESIIALEGDGYVLIAADVASARSVVVMKDDMDKIRPLDSHKLFAAAGIPGDVSKFTEHV
    QKDVRLYNMRSGITMSTAAAANYTRGELARFLRRSPFQCNVLIGGYDAPPYGHGPSLYSC
    DYLGTLTKLKFAAEGYAQYFVLSTLDRYWKKNLSIEDGVEVIKKCVAEVQKRLVINQPRF
    AIKIVDKNGVRVID
biostars
{"uid": 9486713, "view_count": 2030, "vote_count": 1}
Dear Friends, My VCF file has SNPs available for different populations (Africa, America, Europe, East Asia and South Asia). I want to extract the data for Europe and East Asia together. Kindly let me know the possible ways. Thanks in advance
You can do this easily using vcftools, GATK tools, plinkseq etc. You first have to generate a text file with the list of samples that form the population of your choice, let's say "population_of_interest.txt". Then:

    vcf-subset -e -c population_of_interest.txt input.vcf > output.vcf

or

    vcftools --vcf input.vcf --keep population_of_interest.txt --recode > output.vcf
biostars
{"uid": 292691, "view_count": 4508, "vote_count": 1}
Hi, maybe it's a stupid question, but are gene symbols supposed to be case sensitive or case insensitive? E.g., do GENE1, gene1 and GenE1 represent the same gene? Thanks
It'll depend on the organism. Human genes, for example, are always upper case (e.g., "GENE1"), with the exception of ORFs (e.g., C17orf12, which probably doesn't exist). Mouse genes, on the other hand, have a capitalized first letter and the rest lower case (e.g., "*Gene1*", which also needs to be italicized). In neither of those would a gene named "GenE1" exist. Having said that, you may see things like "GenE1" in print, since it's sometimes easier to see that that means gene family "Gen" subgroup "E" member "1", though it's often not technically correct. Technically a drosophila gene could be named "GenE1", since there's no strict rule on case (everything after the first letter is normally lower-case, but there's no "must" in that rule).
biostars
{"uid": 99083, "view_count": 2203, "vote_count": 2}
Hi All, Is it possible to convert a bam index (.bai) to human readable format? I'm asking mostly for learning, I don't plan to do anything special with it. There is a good description of the bam index in the [sam spec][1] document, session 5.2 but I would like to see a "real" example of an index in plain text. Thanks Dario [1]: https://samtools.github.io/hts-specs/SAMv1.pdf
You can write such a tool yourself using R (just as an example): > x = file('/Users/sdavis2/Downloads/vcfanno_0.0.7_darwin_386/example/ex.bam.bai') > open(x,'rb') > readChar(x,4) # Magic number [1] "BAI\001" > readBin(x,integer(),size=1,signed=FALSE) # Number of Ref Sequences [1] 86 At this point, you can use loops to loop over each of the blocks, reading in data as described in the spec.
biostars
{"uid": 172515, "view_count": 4296, "vote_count": 2}
Hello, this is my first post here. I would like to know if there is a way to call IGB (Integrated Genome Browser) through Java. I want to start a specific instance of IGB from a Java application and automatically show some tracks of interest that are in a local file. Is this possible? Is there a way to use IGB in a Java application? Or at least, is there a way to use any genome browser like that? Thanks in advance
Hi, Thank you for the great question! Here are some additional possibilities that may be a good fit for what you want to do. **Option One: Use IGB links (probably the simplest) - port 7085** One option is to use ports on localhost, as IGB "listens" to port 7085 on localhost and can respond to commands encoded in a URL. Here is some documentation about this to get you started: - https://wiki.transvar.org/display/igbman/Controlling+IGB+using+IGB+Links - https://wiki.transvar.org/display/igbman/Scripting+and+the+IGB+command+language **Option Two: Use IGB commands - port 7084** Starting with IGB 8.2 (release next week!) you can control IGB by sending commands directly to port 7084. We implemented this to enable controlling IGB from within an interactive R session, but you could use this same mechanism to control IGB from other applications implemented in other languages. To see how the R code looks, look at SRAdb/IGBFunctions.Rmd in https://bitbucket.org/lorainelab/rtoigb Note that this is based on similar functionality in Bioconductor package SRAdb, hence the name. **Option Three: OSGi services (requires Java coding)** Possibly another option would be to use IGB's OSGi plug-in architecture to hook directly into IGB functions that are exposed as OSGi services. That would require more coding -- but if you wanted to invest the time, you could probably do some very nice stuff. To find out more about how to work with IGB as a developer, see the developer's guide: https://wiki.transvar.org/display/igbdevelopers/Home Thank you again for your great question. I will post again here as we make more demonstration code available. Yours, Ann
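To make option two concrete, here is a minimal sketch (mine, in Python for brevity; the same can be done from Java with a plain `Socket`). The exact command string below follows my reading of the IGB scripting language linked above, so treat it as an assumption:

```
import socket

# Send one command to IGB's command port (7084, per IGB 8.2 as described above).
with socket.create_connection(("localhost", 7084)) as s:
    s.sendall(b"genome hg19\n")   # hypothetical command; see the scripting docs
```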
biostars
{"uid": 123400, "view_count": 2397, "vote_count": 2}
I'm using this code to make a volcano plot based on log2FoldChange and padj values. I'm getting the plot, but I want those values for my reference; how do I extract them?

    alpha <- 0.05 # Threshold on the adjusted p-value
    cols <- densCols(res$log2FoldChange, -log10(res$pvalue))
    plot(res$log2FoldChange, -log10(res$padj), col=cols, panel.first=grid(),
         main="Volcano plot", xlab="Effect size: log2(fold-change)",
         ylab="-log10(adjusted p-value)", pch=20, cex=0.6)
    abline(v=0)
    abline(v=c(-1,1), col="brown")
    abline(h=-log10(alpha), col="brown")
    gn.selected <- abs(res$log2FoldChange) > 2.5 & res$padj < alpha
    text(res$log2FoldChange[gn.selected], -log10(res$padj)[gn.selected],
         lab=rownames(res)[gn.selected], cex=0.4)

When I view gn.selected I only get logical values, i.e. TRUE or FALSE. Any help or suggestion would be highly appreciated.

Update: I'm doing this

    DF <- DF[DF$log2FoldChange > 1.5 & DF$padj < 0.05,]

Is that sufficient, and am I doing it correctly?
**Edit (October 24, 2018):** This is now a Bioconductor package: https://www.biostars.org/p/335751/ --------------------------------------------- --------------------------- Your code appears to run fine on my DESeq2 results objects: <a href="https://ibb.co/cdNecb"><img src="https://preview.ibb.co/kckMqw/yours.png" alt="yours" border="0"></a> ----------------------------------- I normally do these simple volcano plots a different way: par(mar=c(5,5,5,5), cex=1.0, cex.main=1.4, cex.axis=1.4, cex.lab=1.4) topT <- as.data.frame(resultsObject) #Adjusted P values (FDR Q values) with(topT, plot(log2FoldChange, -log10(padj), pch=20, main="Volcano plot", cex=1.0, xlab=bquote(~Log[2]~fold~change), ylab=bquote(~-log[10]~Q~value))) with(subset(topT, padj<0.05 & abs(log2FoldChange)>2), points(log2FoldChange, -log10(padj), pch=20, col="red", cex=0.5)) #with(subset(topT, padj<0.05 & abs(log2FoldChange)>2), text(log2FoldChange, -log10(padj), labels=subset(rownames(topT), topT$padj<0.05 & abs(topT$log2FoldChange)>2), cex=0.8, pos=3)) #Add lines for absolute FC>2 and P-value cut-off at FDR Q<0.05 abline(v=0, col="black", lty=3, lwd=1.0) abline(v=-2, col="black", lty=4, lwd=2.0) abline(v=2, col="black", lty=4, lwd=2.0) abline(h=-log10(max(topT$pvalue[topT$padj<0.05], na.rm=TRUE)), col="black", lty=4, lwd=2.0) <a href="https://ibb.co/nyG5Hb"><img src="https://image.ibb.co/kYOViG/volcano.png" alt="volcano" border="0"></a> -------------- There is an even better solution that I and colleagues developed using *ggplot2*, which allows you to easily fit labels into your plot using `ggrepel()`.
biostars
{"uid": 282295, "view_count": 47969, "vote_count": 3}
bgzip files are backward compatible with gzip, but I have issues when using bgzip-compressed VCF files with snpEff (Java) or Perl scripts that use IO::Uncompress::Gunzip (which I believe uses zlib under the hood). In both cases the data is decompressed but truncated after a few hundred lines, approximately. I could be totally wrong, but I was wondering if zlib (or whatever gzip-compatible library they are using) is getting confused by the bgzip blocks and only processing one or a few of them, leaving the output incomplete.

Perl code that does not work:

    #!/usr/bin/env perl
    use strict;
    use IO::Uncompress::Gunzip qw(gunzip $GunzipError) ;

    my $infile = shift;
    my $infh = IO::Uncompress::Gunzip->new( $infile ) or die "IO::Uncompress::Gunzip failed: $GunzipError\n";
    my $line_count = 0;
    while (my $line=<$infh>){
        $line_count++
    }
    print "total lines read = $line_count\n";

This gives 419 lines:

    $ perl /home/pmg/tmp/test_zlib-bgzip.pl 460112_TTAGGC_L005_L006_C3HVJACXX.sorted.rmdup.varsit.vcf.gz
    total lines read = 419

but using open with a gzip pipe works:

    #!/usr/bin/env perl
    use strict;

    my $infile = shift;
    open(my $infh , 'gzip -dc '.$infile.' |'); # I can use bgzip instead of gzip
    my $line_count = 0;
    while (my $line=<$infh>){
        $line_count++
    }
    print "total lines read = $line_count\n";

It gives the expected number of lines:

    $ perl /home/pmg/tmp/test_gzip-bgzip.pl 460112_TTAGGC_L005_L006_C3HVJACXX.sorted.rmdup.varsit.vcf.gz
    total lines read = 652829

I googled around and was unable to quickly find any relevant entry, but this is something that I am sure other people have already faced. Does someone have a clue about why this is happening? I am using Ubuntu 12.04.4 with Perl 5.16.
Hi. I have just encountered the same problem and it seems there is a pure-Perl solution for this, as shown below:

    my $status = gunzip("in.gz" => "out.gz", MultiStream => 1)
        or die "gunzip failed: $GunzipError\n";

It also seems Apache Commons Compress can provide similar support for Java. Hope this helps.
biostars
{"uid": 94240, "view_count": 5247, "vote_count": 6}
Hi all, Sorry I know that this question has been asked several times, but unfortunately I haven't been able to find the right answer, or didn't understand. I'm trying to get TMM normalized counts thanks to edgeR. I understand that I have to compute normalization factors : dgList <- calcNormFactors(dgList, method="TMM") which gives me a normalization factor for all samples : head(dgList$samples) group lib.size norm.factors S1 1 21087314 0.9654794 S2 1 16542810 1.1589117 S3 1 18875473 0.8763291 S4 1 15865414 1.0864038 S5 1 19179795 1.0488230 S6 1 15063992 1.0707007 But at this step I don't know what to do to get a matrix of normalized TMM counts. I know that I can get CPM normalized counts thanks to : cpm(dgList) But CPM and TMM are not the same, right ? Thanks in advance for any of your input on this topic.
No, CPM and TMM are not exactly the same indeed. perhaps try this snippet of code: dgList <- estimateCommonDisp(dgList) dgList <- estimateTagwiseDisp(dgList) norm_counts.table <- t(t(dgList$pseudo.counts)*(dgList$samples$norm.factors)) write.table(norm_counts.table, file="./normalizedCounts.txt", sep="\t", quote=F)
biostars
{"uid": 317701, "view_count": 29756, "vote_count": 10}
Hello, I am currently focusing on identifying de novo mutations from my trio data (parents are unaffected and the child is affected). I used PhaseByTransmission. However, I found that all de novo mutations (child is heterozygous, and both parents are hom. ref) were not phased (i.e. I am getting '/' instead of '|'). Do you think it is an error? If I look at autosomal recessive variants, they were phased correctly. What is the problem in my analysis? I am pasting the summary results provided by PhaseByTransmission for your kind perusal. Please also comment on the summary results: do they look odd? Please help.

```
java -jar /gatk_3.3/GenomeAnalysisTK.jar -R /reference_sequence/human_g1k_v37.fasta -T PhaseByTransmission -V trio1.vcf -ped trio1.ped --DeNovoPrior 0.00001 -o trio_out.vcf --MendelianViolationsFile mendelian_violation.vcf

INFO 20:04:04,201 GenomeAnalysisEngine - Strictness is SILENT
INFO 20:04:04,341 GenomeAnalysisEngine - Downsampling Settings: Method: BY_SAMPLE, Target Coverage: 1000
INFO 20:04:04,453 PedReader - Reading PED file trio1.ped with missing fields: []
INFO 20:04:04,457 PedReader - Phenotype is other? false
INFO 20:04:04,510 GenomeAnalysisEngine - Preparing for traversal
INFO 20:04:04,530 GenomeAnalysisEngine - Done preparing for traversal
INFO 20:04:04,531 ProgressMeter - [INITIALIZATION COMPLETE; STARTING PROCESSING]
INFO 20:04:04,531 ProgressMeter - | processed | time | per 1M | | total | remaining
INFO 20:04:04,532 ProgressMeter - Location | sites | elapsed | sites | completed | runtime | runtime
INFO 20:04:34,824 ProgressMeter - 15:96876611 147844.0 30.0 s 3.4 m 77.5% 38.0 s 8.0 s
INFO 20:04:43,701 PhaseByTransmission - Number of complete trio-genotypes: 139299
INFO 20:04:43,702 PhaseByTransmission - Number of trio-genotypes containing no call(s): 0
INFO 20:04:43,703 PhaseByTransmission - Number of trio-genotypes phased: 124651
INFO 20:04:43,703 PhaseByTransmission - Number of resulting Het/Het/Het trios: 13391
INFO 20:04:43,704 PhaseByTransmission - Number of remaining single mendelian violations in trios: 937
INFO 20:04:43,704 PhaseByTransmission - Number of remaining double mendelian violations in trios: 12
INFO 20:04:43,704 PhaseByTransmission - Number of complete pair-genotypes: 0
INFO 20:04:43,705 PhaseByTransmission - Number of pair-genotypes containing no call(s): 0
INFO 20:04:43,705 PhaseByTransmission - Number of pair-genotypes phased: 0
INFO 20:04:43,705 PhaseByTransmission - Number of resulting Het/Het pairs: 0
INFO 20:04:43,706 PhaseByTransmission - Number of remaining mendelian violations in pairs: 0
INFO 20:04:43,706 PhaseByTransmission - Number of genotypes updated: 4395
INFO 20:04:45,481 ProgressMeter - done 201351.0 40.0 s 3.4 m 100.0% 40.0 s 0.0 s
INFO 20:04:45,482 ProgressMeter - Total runtime 40.95 secs, 0.68 min, 0.01 hours
INFO 20:04:47,002 GATKRunReport - Uploaded run statistics report to AWS S3
```
The program is working correctly. Denovo variants can not be phased by transmission because they are not transmitted from the parents (they arise "de novo"). You also can not phase de novo variants using imputation, as imputation also depends on the variant being transmitted from a parent. To phase de novo variants, you need to use sequence reads to phase the de novo variant to a nearby inherited heterozygous variant which you can phase by either transmission or imputation.
biostars
{"uid": 142884, "view_count": 3659, "vote_count": 3}
We are currently downloading and analyzing multiple large WGS datasets (30-50x) from patients. So far, we downloaded the data of 20 patients from dbGaP/NCBI (tumor and matched normal, respectively). More samples are planned to be included. The download itself via prefetch/fasp was relatively fast and smooth, but now the problems begin, so maybe you have some experience in how to optimize things.

- SRA to fastq via fastq-dump is often unbearably slow. Not only is the fastq-dump slow itself, but I often experience I/O bottlenecks on our university cluster, which uses GPFS (not Lustre as I stated yesterday). fastq-dump is often stuck in the "D" state, i.e. uninterruptible sleep. To speed things up, I dumped large SRAs into several fastq files using the -N and -X options, but merging these chunks via GNU cat was also extremely slow, sometimes with only a few hundred MB in several hours. Is that normal (the server does not run on SSDs as far as I know)?

- The same goes for alignment sorting. I tried to use fewer threads with SAMtools sort but more memory per thread, to avoid the creation of too many tmp files, which then need to be merged again. Still, even merging few (< 50) files takes hours and hours of time, again with only a few hundred MB in several hours. That often collides with the walltimes.

It would be great if you could share your experiences of handling these terabyte-scale data, and which tricks one can apply in order to avoid performance bottlenecks.

UPDATE: It seems that the main bottleneck is reading the files from disk, rather than writing them after being processed.
The solution we came up with was the following: our file system is simply slow, and there was nothing that could really be done about it. The main bottleneck was reading from the file system, rather than writing. Fortunately, some of the nodes had local SSDs, which I could use. So I loaded the SRAs via prefetch (ascp) onto the SSD, then fastq-dumped them from there, outputting directly to /scratch. Thanks to ascp, the download of a 40-100 GB file was done in no time, and the dumping was sped up by, I think, a factor of 10 (I never benchmarked it). Thanks very much for all your suggestions.
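For reference, a hedged sketch (mine) of that workflow; the paths and accessions are placeholders, and flag availability depends on your sra-tools version:

```
import subprocess

SSD = "/local/ssd/sra"    # node-local SSD (assumption)
OUT = "/scratch/fastq"    # slower shared scratch

for acc in ["SRR0000001", "SRR0000002"]:
    # fast download to the SSD, then dump from there to scratch
    subprocess.run(["prefetch", "-O", SSD, acc], check=True)
    subprocess.run(["fastq-dump", "--split-files", "--gzip",
                    "-O", OUT, f"{SSD}/{acc}/{acc}.sra"], check=True)
```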
biostars
{"uid": 260840, "view_count": 2051, "vote_count": 1}
Dear all, I have an indexed BAM file, and I want to print all alignments on chromosome "2" using pysam. My code is

    import pysam
    bam = pysam.AlignmentFile("Aligned.sortedByCoord_rep1.out.bam", "rb")
    for line in bam.fetch("2"):
        print line

What I get is:

    D00733:162:CADM2ANXX:2:1311:7466:63232 163 18 3328 255 1S47M2S 18 3419 47 TTGCTTAGTGTCCGAAATACCATCCTCAAGGCTAAGAACTAAATCGATTA array('B', [33, 33, 33, 33, 33, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38]) [('NH', 1), ('HI', 1), ('AS', 70), ('nM', 10)]
    D00733:162:CADM2ANXX:2:1215:11810:89291 163 18 3356 255 44M6S 18 3441 44 GGCTAAGAACTAAATCGATTATTCTGGCTCGTAACGCATATAATATGGCA array('B', [33, 33, 33, 32, 33, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 35, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 37, 38, 38, 38, 38, 38, 38, 33]) [('NH', 1), ('HI', 1), ('AS', 70), ('nM', 10)]

So there are 2 questions:

1. Why does pysam not preserve the format of the BAM file?
2. Why does it print out chromosome 18 instead of 2?

I would appreciate it if somebody could point out what I am doing wrong. Thanks
Hi, you are not wrong, you are doing it correctly :)

The AlignedSegment object is independent of a SAM file. It contains only an index of the chromosomal identifier (tid), so it does not carry the chromosome name itself. The number you see as 18 in field 3 of the line is simply the index corresponding to chromosome 2 in the sequence dictionary. You can check that by doing the following:

    import pysam
    bam = pysam.AlignmentFile("Aligned.sortedByCoord_rep1.out.bam", "rb")
    for line in bam.fetch("2"):
        print line
        print line.reference_id
        print bam.get_reference_name(line.reference_id)

As you might have noticed, I have added two lines to your previous code:

1. `line.reference_id` contains the index of the reference sequence in the sequence dictionary.
2. `bam.get_reference_name(line.reference_id)` returns the reference name corresponding to the numerical tid. In your case, chromosome 2.
biostars
{"uid": 258006, "view_count": 3220, "vote_count": 2}
Hello guys, I'm working on RNA-seq data and I want to get a good conceptual understanding of the analysis process. I used the tool bowtie for indexing my reference genome, and the rest follows the Tuxedo pipeline. I don't understand what exactly "indexing" of the reference genome is and why it has to be done. I do get the point that there are many genes that can be identified and matched to the genome and indexed, but I am not clear on why we should do this or how it is done. (I am new to bioinformatics, and with minimal knowledge it's so hard to understand the algorithm part of it.) So can anyone put it in simple concepts?
Indexing a genome can be explained with an analogy to indexing a book. If you want to know on which page a certain word appears or a chapter begins, it is much more efficient/faster to look it up in a pre-built index than to go through every page of the book until you find it.

The same goes for alignments. An index allows the aligner to narrow down the potential origin of a query sequence within the genome, saving both time and memory.

You may have a look at Ben Langmead's YouTube channel (he is the creator of the bowtie and bowtie2 alignment software), where he lectures about concepts of sequence alignment, including the idea of indexing. https://www.youtube.com/user/BenLangmead
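To make the analogy concrete, with bowtie2 the index is built once and then reused for every alignment run (file names are placeholders):

    # build the index once (the "book index")
    bowtie2-build genome.fa genome_index
    # every alignment run then looks query sequences up in that index
    bowtie2 -x genome_index -1 reads_1.fastq -2 reads_2.fastq -S aligned.sam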
biostars
{"uid": 212594, "view_count": 26936, "vote_count": 6}
Dear all, I have paired-end fastq data generated with Illumina bcl2fastqv2.19 & sequenced on a Novaseq.The i5index is 7bp long, the i7 8bp long R1.fastq.gz contains R1 101bp reads: @A00154:125:HGKTMDMXX:1:1101:10420:1000 1:N:0:AACTGAGG+ATGCGTC CTGGCCGTCTCAGCCGAGAAGCCGAGGATTGAATGGGCATGGAGACTGAACTACCCCTCTCACCTTTAGAGGTGGCTCCTCCAAGTCGGGGTTGACGCCCG + FFFFFFFFFFFFFFFFFFFFFFFF:FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF:FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF R2.fastq.gz contains 6bp UMI sequence @A00154:125:HGKTMDMXX:1:1101:10420:1000 2:N:0:AACTGAGG+ATGCGTC GCGCGT + FFFFFF R3.fastq.gz contains R2 101bp reads: @A00154:125:HGKTMDMXX:1:1101:10420:1000 3:N:0:AACTGAGG+ATGCGTC CTTCATAGGCCACAAAAAGCCCATATATCAGTGTCATCCACTAAGCCTCAGACACTGCAGCACGGGCAGCGGCAGTGCCAGCTTCGCCCACACTGCCCCTC + FFFFFFFFFFFFFFFFFFFFFF:FF:FFF:FFFFFF:FFFFFFFFFFFFFFFF:FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF In a downstream analysis I want to use UMI-tools for deduplication. However for that I need the UMI be part of the read name. @Instrument:RunID:FlowCellID:Lane:Tile:X:Y:**UMI** ReadNum:FilterFlag:0:IndexSequence or SampleNumber There are tools to add a UMI to the read name when the UMI is present in the read itself. But in my case, the UMI is in a seperate fastq. How could this be achieved?
An `awk` solution:

    $ awk -v FS="\t" -v OFS="\t" 'NR==FNR {split($1, id, " "); umi[id[1]]=$2; next;} {split($1, id, " "); $1=id[1]":"umi[id[1]]" "id[2]; print $0}' <(zcat R2.fastq.gz|paste - - - -) <(zcat R1.fastq.gz|paste - - - -)|tr "\t" "\n"|bgzip -c > R1_umi.fastq.gz

    $ awk -v FS="\t" -v OFS="\t" 'NR==FNR {split($1, id, " "); umi[id[1]]=$2; next;} {split($1, id, " "); $1=id[1]":"umi[id[1]]" "id[2]; print $0}' <(zcat R2.fastq.gz|paste - - - -) <(zcat R3.fastq.gz|paste - - - -)|tr "\t" "\n"|bgzip -c > R3_umi.fastq.gz

The `fastq.gz` files are decompressed by `zcat`, and the four lines belonging to a read are joined into one tab-delimited record by `paste`. While reading the first file, `awk` saves the id and the UMI in an array, where the key is the header up to the first whitespace and the value is the UMI sequence. While reading the second `fastq` file, we append the UMI to the id and print out the line. Finally the tabs are converted back to newlines by `tr` and the file is compressed using `bgzip`.

fin swimmer
biostars
{"uid": 357359, "view_count": 6501, "vote_count": 1}
Hello everyone,

I know that Affymetrix probeset ids are portable across platforms: the same probeset id `1090_f_at` always refers to the same gene (probe set). Is it the same for probe ids?

Executing, as discussed in the "affy" package vignette, the code:

    > pm(Dilution, "1090_f_at")

on the chip HG_U95Av2 returns the pm probes 1, 2, ..., 16 of the probeset 1090_f_at:

```
              20A   20B   10A   10B
1090_f_at1  115.0  74.0  94.0  61.0
1090_f_at2  129.3  80.3 108.0  70.3
..
1090_f_at16 262.0 157.0 194.8 131.5
```

Executing the same code on a different chip containing, of course, the same probeset `1090_f_at`: are its pm probes returned in the same order, so that their ids

```
1090_f_at1, 1090_f_at2, .. 1090_f_at16
```

are consistent across chips? I suppose so. Could anyone confirm it? How could I check it?

Thank you
It may depend on the source of the file you are using to obtain probe ids. Probe IDs in files from Affymetrix are integers, each associated with a specific feature (x, y position) on the microarray. For clarity, this is different from a probeset ID: a probeset ID refers to a collection of probes. (You seem to know this difference, but I see people use the terms probe and probeset interchangeably in casual conversation, which leads to confusion...)

Probe IDs for Affymetrix arrays (in files released from Affymetrix; this may differ depending on the creator of your analysis package) are often calculated from a formula, and a probe ID is convertible to an x, y coordinate on the microarray. Therefore the only reason a probe ID should be the same between two different arrays is if the probe has the same height and width and sits at the same x, y coordinate on the microarray. (This is rare.)

You can download lots of information from the Affymetrix website. If you have any further questions just ask. I'm pretty familiar with it.
biostars
{"uid": 176790, "view_count": 2627, "vote_count": 1}
Hi, I'm working on bacterial sulfur metabolism and I would like to extract as much information as I can from the different pathways linked to this metabolism (at least all the biochemical reactions and the enzymes that catalyze these reactions, and if possible the genes coding for these enzymes).

Here is what I've done so far: I've obtained a free license to download the [MetaCyc][1] database in BioPAX level 2 and 3 format, so I am trying to extract the information from these files. I've used the R package [rBiopaxParser][2] to read and import the BioPAX files into R, but now **I'm quite stuck because I don't really know how I can proceed to extract a pathway** (by pathway I mean the pathway level, but also all the lower levels of information like nested pathways, reactions, molecules and protein interactions, ...).

My aim would be to get these pathways as individual BioPAX files, convert them to SIF format thanks to the R package [paxtoolsr][3], and then visualise my pathways as a network and manipulate it with the graphviz or [RCytoscape][4] packages. I don't know if it's the best strategy to be able to visualise and manipulate the whole metabolic pathways, but since I don't have any knowledge of XML-like languages I try to use R to get rid of this format^^.

What could I do to extract the needed information to get my metabolic pathway networks (all the biochemical reactions and the enzymes that catalyze these reactions)?

Best regards,

[1]: http://metacyc.org/
[2]: https://bioconductor.org/packages/release/bioc/html/rBiopaxParser.html
[3]: http://bioconductor.org/packages/release/bioc/html/paxtoolsr.html
[4]: https://www.bioconductor.org/packages/release/bioc/html/RCytoscape.html
Try this just using paxtoolsr with the BioPAX Level 3 file you have:

# Update paxtoolsr

First, update to the paxtoolsr development version (I just updated a few things):

    setRepositories(ind=1:6)
    options(repos="http://cran.rstudio.com/")
    if(!require(devtools)) { install.packages("devtools") }
    library(devtools)
    install_github("BioPAX/paxtoolsr")

# Extract Pathways

    library(paxtoolsr)

    # An example with the sample BioPAX file in the paxtoolsr package
    exampleFileInPaxtoolsr <- system.file("extdata", "REACT_12034-3.owl", package="paxtoolsr")
    sifnx <- toSifnx(exampleFileInPaxtoolsr, "output.txt", "uniprot")

    # Not all rows represented, but that's because not every row has a pathway listed
    rowIndiciesForPathways <- splitSifnxByPathway(sifnx$edges)

    # A pathway extracted
    bmp <- sifnx$edges[rowIndiciesForPathways$`Signaling by BMP`, ]

    # If you prefer a data.frame over a data.table (data.table is used for file reading speed), then do this:
    library(data.table)
    class(bmp) # Should be "data.table" "data.frame"
    setDF(bmp)
    class(bmp) # Should be "data.frame"

    # For simple plotting in R
    g <- loadSifInIgraph(bmp)
    plot(g)
biostars
{"uid": 221424, "view_count": 2249, "vote_count": 2}
**Edit 1:** The following is Y/Auto depth versus age in a cohort. Individuals in this cohort are not known to be cancer patients. ![enter image description here][1] **Original question:** In WGS/WES studies I read and studies I participated in analyzing, I notice that the average sequencing depth on Y chromosome vary a lot more than X chromosome, and the depth is mostly less than the average depth on autosomes (see the figure for an example; from [Turro et al. 2020][2]). Most individuals that got sampled do not have diseases involving large deletion of Y chromosome. What causes the variability in the sequencing depth on Y chromosome? Why is the average depth on Y almost always less than that on autosomes? ![enter image description here][3] [1]: /media/images/50b4a58e-0c41-4919-890c-49cd71bc [2]: https://www.nature.com/articles/s41586-020-2434-2.pdf [3]: /media/images/835d5568-c6e1-4f3f-a0b0-2836db5b
Hi Samuel, I would guess it is the age-related "loss of the Y chromosome" you see in the data, which is a known effect. See, for example: [Loss of Chromosome Y and Its Potential Applications as Biomarker in Health and Forensic Sciences][1].

"Loss of chromosome Y (LOY) is a mosaic aneuploidy that can be detected mainly in blood samples of male individuals. Usually, LOY occurrence increases with chronological age in healthy men."

"The mechanism by which LOY occurs is yet to be elucidated. It is not known whether the Y chromosome is lost as a whole or if it is a gradual deletion process during the lifespan."

We see that quite a lot in our WES data of older men.

[1]: https://www.karger.com/Article/FullText/508564
biostars
{"uid": 9482437, "view_count": 1667, "vote_count": 3}
<p>Dear everyone,</p> <p>I have been asked to comment on an experiment design that involves WES or WGS of cancer cell lines which lack matched normal.</p> <p>I know that this design is far from ideal, but I was wondering if there are people who have already stream-lined it (to the extent possible of course). I can think of a couple of common variation filters to keep out the germline variants, but I am not sure how to go beyond.</p> <p>I am sorry if this is a duplicate question; I couldn&#39;t find a related hit.</p> <p>Thank you,</p> <p>Noushin</p>
The bottom line is that without a matched normal, you're just not going to be able to call the somatic status for the vast majority of sites. That said, you can winnow down a list to those you *suspect* are somatic. Some ideas:

- Weed out sites with high frequency in the population
- If your tumor is very impure, you can take advantage of the fact that the frequencies of somatic variants will be shifted away from 50%/100%
biostars
{"uid": 97834, "view_count": 5527, "vote_count": 5}
I was checking some of the variants I have found in the cancer samples I have been analysing. I am not very familiar with how COSMIC data should be interpreted. So if something has already been found in COSMIC, can we still consider that variant novel, or significantly important for publication?
COSMIC does provide a large catalogue of somatic variants from many sequencing studies, although its data can be somewhat messy (it sometimes includes variants that are not marked as somatic). Examining COSMIC can give reasonably good evidence for determining whether somatic variants have been seen before. That said, the majority of somatic variants in COSMIC are passenger mutations, and even if a hypothetically true driver mutation has been seen in COSMIC a few times, that does not mean people have previously implicated it as a driver (you might have to backtrack to the original study to examine its conclusions). In fact, rare driver genes (especially those mutated in <5-10% of samples) are generally poorly characterized in most tumor types (http://www.nature.com/nature/journal/v505/n7484/abs/nature12912.html). COSMIC updates regularly, but the current version may not include all recently performed studies, so a literature search for very recent studies related to your particular cancer type would still be warranted.

Lastly, and perhaps most important, whether your variants are "important" depends greatly on whether they are driver or passenger mutations. A passenger mutation that has not been seen before is not really important. Driver status can be supported statistically/computationally and/or by experiments. Some support for driver status may come from whether your cancer samples support the gene as significantly mutated (e.g. MutSigCV, MuSiC), from high in silico predicted scores for the mutations (e.g. [CHASM, VEST](http://www.cravat.us/)), and, for protein-coding genes, from where the mutation occurs in the 3D protein structure ([MUPIT](http://mupit.icm.jhu.edu/) maps cancer mutations to 3D protein structures with functional annotations).
biostars
{"uid": 181374, "view_count": 1625, "vote_count": 1}
I am 34 now and have no time to prepare for any CET for a PhD. I need a PhD for my job and my future. Please, can you help me with how to do a PhD in bioinformatics, and where to go for distance education in bioinformatics?
> I am 34 now .i have no time to prepare for any cet for phd

It's never too late. See [this][1].

> ..i need a phd for my job and my future .

No, that can't be true. People have jobs and futures even without a PhD.

> please can help me how to do phd in bioinformatics ,and where to go
> for distance education in bioinf

You are missing the whole idea of a PhD. It's not like any other degree, where you opt for some courses, sit some exams, get graded, and then have a certificate. A PhD is a process; a journey where you learn by experience and failure. You learn to design your experiments, frame your own questions, and develop a sense of problem solving. You fail at times, learn from your mistakes, and move on. That entire process, that journey, polishes you as a scientist. You develop abilities that you then apply in other areas when you are done with it. You miss all of this in a part-time or distance PhD.

[1]: https://chrisblattman.com/2013/06/12/when-are-you-too-old-for-a-phd/
biostars
{"uid": 306026, "view_count": 3236, "vote_count": 2}
The invocation of [VarDictJava][1] in paired variant calling mode is VarDict -b "/path/to/tumor.bam|/path/to/normal.bam" Now obviously in my CWL I would like the tumour and normal BAM to be separate inputs. However, I'm not sure how to combine these into a single command line argument in the actual CommandLineBinding [1]: https://github.com/AstraZeneca-NGS/VarDictJava#paired-variant-calling
Hello @ttmigueltt,

Here's an alternative solution if you will always have both samples; no `InlineJavascriptRequirement` needed:

    cwlVersion: v1.0
    class: CommandLineTool
    inputs:
      normal:
        type: File
        format: edam:format_2572
      tumor:
        type: File
        format: edam:format_2572
    baseCommand: echo
    arguments:
      - $(inputs.tumor.path)|$(inputs.normal.path)
    outputs: []
    $namespaces: { edam: http://edamontology.org/ }
    $schemas: [ http://edamontology.org/EDAM_1.20.owl ]
biostars
{"uid": 317087, "view_count": 1858, "vote_count": 1}
When trying to investigate a 16S rRNA dataset, I often identify several dozen/hundred species/families which are found in higher/lower abundances. I then start doing literature searches to see what they could be doing, where they have been observed before, etc.

To me this sounds:

1. Really selective, only sampling a few papers for each species.
2. Limiting, as there is no way to do this fully for tens of species.
3. Incredibly time consuming.

What I'm really looking for is a system into which I can put a taxa list and it'll say "Those ones are all anoxic" or "Those 5 have shown denitrifying ability". I don't know if this could be done with literature mining or where to start, or if there is a database around in the world which curates data like this...

Any suggestions are appreciated.
Following Neil's advice and reading some papers, I reverted back to [IMG](http://img.jgi.doe.gov/cgi-bin/w/main.cgi) to see what metadata they collect with their published genomes, and actually, it's quite substantive.

On clicking into the 'Genome Browser' section you get presented with a list of species and some basic data on each sequencing project. But if you navigate to the bottom of the page, there is a selection like this:

![IMG_Metadata_example](http://i.imgur.com/hINFy6X.png)

This allows you to generate a table with the data you're interested in. A selection as above results in the following output, which can then be exported to Excel or TSV format and used however you would like. (Full size available here: http://i.imgur.com/KjMXRDu.png)

![IMG metadata table](http://i.imgur.com/KjMXRDu.png)
biostars
{"uid": 77151, "view_count": 3829, "vote_count": 3}
Dear lazyweb, here is the new menu for `samtools index`:

```
Usage: samtools index [-bc] [-m INT] <in.bam> [out.index]
Options:
  -b       Generate BAI-format index for BAM files [default]
  -c       Generate CSI-format index for BAM files
  -m INT   Set minimum interval size for CSI indices to 2^INT [14]
```

What is the **CSI** index compared to **BAI**? Would it be better to use it instead of the 'old' BAI index? What's the benefit?
It depends on how long your contigs/chromosomes are. The biggest benefit to CSI is that it supports indexing BAM files with contigs longer than 2^29-1 bases. At least in plants this isn't an uncommon requirement.
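Concretely, using the options from the usage text above (the BAM file name is a placeholder):

    # BAI (default), limited to contigs shorter than 2^29 bases
    samtools index -b aln.sorted.bam
    # CSI, works with longer contigs; -m sets the minimum interval size to 2^INT
    samtools index -c -m 14 aln.sorted.bam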
biostars
{"uid": 111984, "view_count": 9824, "vote_count": 3}
As the title says, I'm working on variant calling for somatic variant discovery where I have tumour samples, but no normal samples to compare with. Previously I've been using GATK's UnifiedGenotyper for variant calling, but as this tool is deprecated, I want to switch to the newer GATK variant calling tools. However, I can't decide whether it would be more appropriate to use HaplotypeCaller or Mutect2. On the one hand, HaplotypeCaller can be run without requiring a normal sample for comparison, but the documentation for HaplotypeCaller states that: > the algorithms used to calculate variant likelihoods is not well suited to extreme allele frequencies (relative to ploidy) so its use is not recommended for somatic (cancer) variant discovery. For that purpose, use MuTect2 instead. However, Mutect2 is still in Beta, and although it can be run on tumour samples only, and in the Mutect2 documentation it says: > Tumor-only variant calling is possible but it is NOT supported and we will not answer any questions about it until it becomes a supported feature. So neither tool is ideal for my purposes, but I'm having trouble deciding which will be the most applicable to my data. Any suggestions or advice would be greatly appreciated.
Some resources that address your question more broadly: - Evaluating Variant Calling Tools for Non-Matched Next-Generation Sequencing Data: https://www.nature.com/articles/srep43169 - https://www.biostars.org/p/207536/
biostars
{"uid": 283279, "view_count": 6291, "vote_count": 2}
Hi all, Can someone help me understand the RSeQC output from infer_experiment.py? So this is the output:

    This is PairEnd Data
    Fraction of reads failed to determine: 0.0560
    Fraction of reads explained by "1++,1--,2+-,2-+": 0.0192
    Fraction of reads explained by "1+-,1-+,2++,2--": 0.9247

So it's stranded, but is it fr-firststrand or fr-secondstrand? I do not understand the help given [here][1]:

> For pair-end RNA-seq, there are two different ways to strand reads (such as the Illumina ScriptSeq protocol):
>
> 1++,1--,2+-,2-+
>
> read1 mapped to '+' strand indicates parental gene on '+' strand
>
> read1 mapped to '-' strand indicates parental gene on '-' strand
>
> read2 mapped to '+' strand indicates parental gene on '-' strand
>
> read2 mapped to '-' strand indicates parental gene on '+' strand
>
> 1+-,1-+,2++,2--
>
> read1 mapped to '+' strand indicates parental gene on '-' strand
>
> read1 mapped to '-' strand indicates parental gene on '+' strand
>
> read2 mapped to '+' strand indicates parental gene on '+' strand
>
> read2 mapped to '-' strand indicates parental gene on '-' strand

Thanks for your help!

[1]: http://rseqc.sourceforge.net
It means you have a standard (dUTP-based) strand-specific library, i.e. fr-firststrand. If you want to use featureCounts, you'll want the `-s 2` setting. For HTSeq-count it's `--stranded reverse`.
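For example, the corresponding invocations would look something like this (annotation and BAM file names are placeholders):

    # featureCounts: paired-end, reversely stranded
    featureCounts -p -s 2 -a annotation.gtf -o counts.txt aln.bam
    # htseq-count equivalent
    htseq-count -f bam -s reverse aln.bam annotation.gtf > counts.txt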
biostars
{"uid": 295344, "view_count": 7312, "vote_count": 7}
I have bam files that I have split into unmapped, uniquely mapped and multimapped bam files from HISAT2 alignment. I am trying to merge the sorted unmapped and uniquely mapped bam files with the command samtools merge A1_merged.bam -b A1_unmapped.bam A1_unique.bam and am getting an error [E::hts_open_format] Failed to open file BAMT samtools merge: fail to open "BAMT": No such file or directory the square actually has a question mark in it but that didn't copy across. Anyone have any ideas on why this error is coming up and how to fix it?
-b FILE List of input BAM filenames, one per line [null] `-b` would be necessary if the file names were in a separate text file. It is simply samtools merge A1_merged.bam A1_unmapped.bam A1_unique.bam
biostars
{"uid": 442691, "view_count": 2279, "vote_count": 1}
I did: if (!requireNamespace("BiocManager", quietly = TRUE)) install.packages("BiocManager") BiocManager::install("DESeq") I got the output and the installation BioC_mirror: https://bioconductor.org Using Bioconductor 3.7 (BiocInstaller 1.30.0), R 3.6.0 (2019-04-26). Installing package(s) ‘DESeq2’ also installing the dependencies ‘XML’, ‘annotate’, ‘genefilter’, ‘geneplotter’ trying URL 'https://cloud.r-project.org/src/contrib/XML_3.98-1.19.tar.gz' Content type 'application/x-gzip' length 1600788 bytes (1.5 MB) ================================================== downloaded 1.5 MB trying URL 'https://bioconductor.org/packages/3.7/bioc/src/contrib/annotate_1.58.0.tar.gz' Content type 'application/x-gzip' length 1867576 bytes (1.8 MB) ================================================== downloaded 1.8 MB trying URL 'https://bioconductor.org/packages/3.7/bioc/src/contrib/genefilter_1.62.0.tar.gz' Content type 'application/x-gzip' length 1432307 bytes (1.4 MB) ================================================== downloaded 1.4 MB trying URL 'https://bioconductor.org/packages/3.7/bioc/src/contrib/geneplotter_1.58.0.tar.gz' Content type 'application/x-gzip' length 1428691 bytes (1.4 MB) ================================================== downloaded 1.4 MB trying URL 'https://bioconductor.org/packages/3.7/bioc/src/contrib/DESeq2_1.20.0.tar.gz' Content type 'application/x-gzip' length 2059356 bytes (2.0 MB) ================================================== downloaded 2.0 MB * installing *source* package ‘XML’ ... ** package ‘XML’ successfully unpacked and MD5 sums checked ** using staged installation checking for gcc... gcc checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking how to run the C preprocessor... gcc -E checking for sed... /bin/sed checking for pkg-config... /usr/bin/pkg-config checking for xml2-config... no Cannot find xml2-config ERROR: configuration failed for package ‘XML’ * removing ‘/home/engin/R/x86_64-pc-linux-gnu-library/3.6/XML’ ERROR: dependency ‘XML’ is not available for package ‘annotate’ * removing ‘/home/engin/R/x86_64-pc-linux-gnu-library/3.6/annotate’ ERROR: dependency ‘annotate’ is not available for package ‘genefilter’ * removing ‘/home/engin/R/x86_64-pc-linux-gnu-library/3.6/genefilter’ ERROR: dependency ‘annotate’ is not available for package ‘geneplotter’ * removing ‘/home/engin/R/x86_64-pc-linux-gnu-library/3.6/geneplotter’ ERROR: dependencies ‘genefilter’, ‘geneplotter’ are not available for package ‘DESeq2’ * removing ‘/home/engin/R/x86_64-pc-linux-gnu-library/3.6/DESeq2’ The downloaded source packages are in ‘/tmp/RtmpnoZO5Z/downloaded_packages’ installation path not writeable, unable to update packages: boot, class, cluster, codetools, foreign, KernSmooth, lattice, MASS, Matrix, mgcv, nlme, nnet, spatial, survival Warning messages: 1: In install.packages(pkgs = doing, lib = lib, ...) : installation of package ‘XML’ had non-zero exit status 2: In install.packages(pkgs = doing, lib = lib, ...) : installation of package ‘annotate’ had non-zero exit status 3: In install.packages(pkgs = doing, lib = lib, ...) : installation of package ‘genefilter’ had non-zero exit status 4: In install.packages(pkgs = doing, lib = lib, ...) 
: installation of package ‘geneplotter’ had non-zero exit status 5: In install.packages(pkgs = doing, lib = lib, ...) : installation of package ‘DESeq2’ had non-zero exit status > if (!requireNamespace("BiocManager", quietly = TRUE)) + install.packages("BiocManager") > BiocManager::install("DESeq") Bioconductor version 3.9 (BiocManager 1.30.4), R 3.6.0 (2019-04-26) Installing package(s) 'DESeq' also installing the dependencies ‘XML’, ‘annotate’, ‘genefilter’, ‘geneplotter’ trying URL 'https://cloud.r-project.org/src/contrib/XML_3.98-1.19.tar.gz' Content type 'application/x-gzip' length 1600788 bytes (1.5 MB) ================================================== downloaded 1.5 MB trying URL 'https://bioconductor.org/packages/3.9/bioc/src/contrib/annotate_1.62.0.tar.gz' Content type 'application/x-gzip' length 1870418 bytes (1.8 MB) ================================================== downloaded 1.8 MB trying URL 'https://bioconductor.org/packages/3.9/bioc/src/contrib/genefilter_1.66.0.tar.gz' Content type 'application/x-gzip' length 1419350 bytes (1.4 MB) ================================================== downloaded 1.4 MB trying URL 'https://bioconductor.org/packages/3.9/bioc/src/contrib/geneplotter_1.62.0.tar.gz' Content type 'application/x-gzip' length 1435430 bytes (1.4 MB) ================================================== downloaded 1.4 MB trying URL 'https://bioconductor.org/packages/3.9/bioc/src/contrib/DESeq_1.36.0.tar.gz' Content type 'application/x-gzip' length 1692053 bytes (1.6 MB) ================================================== downloaded 1.6 MB * installing *source* package ‘XML’ ... ** package ‘XML’ successfully unpacked and MD5 sums checked ** using staged installation checking for gcc... gcc checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking how to run the C preprocessor... gcc -E checking for sed... /bin/sed checking for pkg-config... /usr/bin/pkg-config checking for xml2-config... no Cannot find xml2-config ERROR: configuration failed for package ‘XML’ * removing ‘/home/engin/R/x86_64-pc-linux-gnu-library/3.6/XML’ ERROR: dependency ‘XML’ is not available for package ‘annotate’ * removing ‘/home/engin/R/x86_64-pc-linux-gnu-library/3.6/annotate’ ERROR: dependency ‘annotate’ is not available for package ‘genefilter’ * removing ‘/home/engin/R/x86_64-pc-linux-gnu-library/3.6/genefilter’ ERROR: dependency ‘annotate’ is not available for package ‘geneplotter’ * removing ‘/home/engin/R/x86_64-pc-linux-gnu-library/3.6/geneplotter’ ERROR: dependencies ‘genefilter’, ‘geneplotter’ are not available for package ‘DESeq’ * removing ‘/home/engin/R/x86_64-pc-linux-gnu-library/3.6/DESeq’ The downloaded source packages are in ‘/tmp/RtmpnoZO5Z/downloaded_packages’ installation path not writeable, unable to update packages: boot, class, cluster, codetools, foreign, KernSmooth, lattice, MASS, Matrix, mgcv, nlme, nnet, spatial, survival Warning messages: 1: In install.packages(pkgs = doing, lib = lib, repos = repos, ...) : installation of package ‘XML’ had non-zero exit status 2: In install.packages(pkgs = doing, lib = lib, repos = repos, ...) 
: installation of package ‘annotate’ had non-zero exit status 3: In install.packages(pkgs = doing, lib = lib, repos = repos, ...) : installation of package ‘genefilter’ had non-zero exit status 4: In install.packages(pkgs = doing, lib = lib, repos = repos, ...) : installation of package ‘geneplotter’ had non-zero exit status 5: In install.packages(pkgs = doing, lib = lib, repos = repos, ...) : installation of package ‘DESeq’ had non-zero exit status
As the error message states, you're missing `xml2-config`, which is provided by the libxml2 development package (probably `libxml2-dev`). You can google how to install that on your OS; if you're on Ubuntu/macOS, it should be easy with a package manager.
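For example, on Debian/Ubuntu the missing `xml2-config` script comes with the libxml2 development package:

    sudo apt-get install libxml2-dev
    # then retry BiocManager::install("DESeq2") in R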
biostars
{"uid": 378437, "view_count": 8722, "vote_count": 2}
Hello, this is a very basic question, but I was wondering if someone could help me understand whether I've used the correct GTF file and FASTA file for mouse genome indexing. I got the relevant FASTA and GTF files from Ensembl:

GTF: ftp.ensembl.org/pub/release-103/gtf/mus_musculus/Mus_musculus.GRCm39.103.gtf.gz
FASTA: ftp.ensembl.org/pub/release-103/fasta/mus_musculus/dna/Mus_musculus.GRCm39.dna.primary_assembly.fa.gz

Or shall I use Mus_musculus.GRCm39.dna.toplevel.fa.gz as the FASTA when generating genome indexes in STAR?

    STAR --runMode genomeGenerate --runThreadN 8 --genomeDir index_reference --genomeFastaFiles Mus_musculus.GRCm39.dna.primary_assembly.fa --sjdbGTFfile Mus_musculus.GRCm39.103.gtf

Thank you for your help!
The genome construction step looks good. Also consider including the `--sjdbOverhang` parameter. While the default value of 99 is usually fine, as stated in STAR's manual this should be chosen according to the maximum read length in your dataset, minus 1. So if you have 100 nt reads, `--sjdbOverhang 99` is fine. If you performed trimming and have variable read lengths, then choose this value according to `max(readLength) - 1`. So if you have reads varying from 50-120 nt, `--sjdbOverhang 119` is the optimal parameter.
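So for 100 nt reads, your command would simply gain one option, e.g.:

    STAR --runMode genomeGenerate --runThreadN 8 \
         --genomeDir index_reference \
         --genomeFastaFiles Mus_musculus.GRCm39.dna.primary_assembly.fa \
         --sjdbGTFfile Mus_musculus.GRCm39.103.gtf \
         --sjdbOverhang 99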
biostars
{"uid": 9465434, "view_count": 3818, "vote_count": 4}
Which version control system do you prefer to use in your projects, and which one is more commonly used? I tried using SVN and CVS but never got the hang of them. I currently use git for most of my projects. Which one is more common among bioinformaticians?
Having made the progression from CVS to SVN to git, I have to say I prefer the latter, although it's taken me a while to wrap my head around it, whereas I always felt quite comfortable with CVS/SVN. Using [github](http://github.com) has helped a lot, but as I found out yesterday the web-based merging tools leave a little to be desired.

I follow dozens of coders and bioinformatics projects on github; you can't deny that the social element makes the system far more valuable than just having an SVN repository tucked away somewhere. I spent a good hour yesterday poring over other people's code there. It's a great resource, and certainly for NGS tools it seems to be a popular repository, backed by a popular SCM.
biostars
{"uid": 14896, "view_count": 2983, "vote_count": 6}
I work with RNA-seq data and have found a few differentially expressed genes across particular tissue samples. Now I have been instructed to work with GTEx data to look at the differentially expressed genes across different tissue samples. To get going with the GTEx dataset, I first don't understand their sample codes: which tissue is GTEX-N7MS-0007-SM-2D7W1 from, for example? I tried to search for the GTEx barcodes but haven't found any documentation. Please, can anyone give me some idea of how to decode the GTEx barcodes and how to perform such an analysis? I am sorry if this question is silly; I am a complete novice in the field of NGS.
In [the download section][1], where it says "A de-identified, open access version of the sample annotations available in dbGaP.", you should find a file called `GTEx_Data_V4_Annotations_SampleAttributesDS.txt`, containing the annotation of each sample. For example GTEX-N7MS-0007-SM-2D7W1 is from Whole Blood. [1]: http://www.gtexportal.org/home/datasets2
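Once the annotation file is downloaded, a quick command-line lookup works too:

    # find the tissue for a given sample barcode
    grep "GTEX-N7MS-0007-SM-2D7W1" GTEx_Data_V4_Annotations_SampleAttributesDS.txt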
biostars
{"uid": 149012, "view_count": 9081, "vote_count": 3}
I am trying to do some exploratory bioinformatics on TCGA data using fgsea. Our lab looks at a specific gene, so I was trying to see whether high levels of this gene in TCGA expression data correlate with enrichment of any gene sets. I have been pre-ranking the data using DESeq2 (using the F statistic as a ranking) and was wondering how I should set up the design. Because it is a continuous variable, I could plug the scaled normalised counts for this gene straight into the DESeq2 design, or I could split the expression into low/high groups and then run DESeq2 to calculate the difference between low and high. I was wondering which of these (if either) is more acceptable? I assume using the continuous variable makes the most sense, but I have only seen it done by splitting the expression into two groups by other bioinformaticians. Is the Wald test with DESeq2 the most appropriate tool to do this with?

I have run both methods using the hallmark gene sets and see very different rankings and similar but slightly different ES results. What are people's thoughts?

![NES from HALLMARK][1]
![enter image description here][2]

[1]: https://i.imgur.com/ZY9JNs5.png
[2]: https://i.imgur.com/NO9R1Ii.png
Personally, I think the low/high stratification isn't ideal because you lose information about the expression of your gene of interest (you're collapsing everything into two values: low or high). I prefer the continuous design (edit: however, please see the discussion below; there are important caveats). An alternate approach would be to calculate the pair-wise correlation between every gene and your gene of interest (using normalized count values); you can then use the correlation coefficients as your ranking. Whether this is "better" than using the DESeq2 statistic, I don't know. There are many ways to analyze data, and the answer to what is "most acceptable" is not always clear or easy.
biostars
{"uid": 443094, "view_count": 2388, "vote_count": 1}
Dear All, I'm trying to download wgEncodeCrgMapabilityAlign100mer.bigWig for human genome build 38. I can get the same file for hg19 from the golden path: `ftp://hgdownload.soe.ucsc.edu/goldenPath/hg19/encodeDCC/wgEncodeMapability/`. I cannot find such a file in the hg38 directory: `ftp://hgdownload.soe.ucsc.edu/goldenPath/hg38/encodeDCC/`. What am I missing? Any ideas? Thanks!
http://hgdownload.soe.ucsc.edu/gbdb/hg38/hoffmanMappability/ ?
biostars
{"uid": 487547, "view_count": 1850, "vote_count": 1}
<!-- language-all: lang-r --> Hi, I may have missed something but, I have 2 ranges objects like this: suppressPackageStartupMessages(library(GenomicRanges)) #> Warning: package 'S4Vectors' was built under R version 3.5.1 rg1 = GenomicRanges::GRanges(seqnames = c(1,1,2), IRanges(start = c(1,10,1), end = c(9,19,20)) ) rg1 #> GRanges object with 3 ranges and 0 metadata columns: #> seqnames ranges strand #> <Rle> <IRanges> <Rle> #> [1] 1 1-9 * #> [2] 1 10-19 * #> [3] 2 1-20 * #> ------- #> seqinfo: 2 sequences from an unspecified genome; no seqlengths rg2 = GenomicRanges::GRanges(seqnames = c(1), IRanges(start = c(5), end = c(15)) ) rg2 #> GRanges object with 1 range and 0 metadata columns: #> seqnames ranges strand #> <Rle> <IRanges> <Rle> #> [1] 1 5-15 * #> ------- #> seqinfo: 1 sequence from an unspecified genome; no seqlengths Then I want to intesect them by maintaining the adjacent intances in the original `rg1` into different instances in the result object The default behaviour merge them in the same instance: rg_intersect = GenomicRanges::intersect(rg1,rg2) rg_intersect #> GRanges object with 1 range and 0 metadata columns: #> seqnames ranges strand #> <Rle> <IRanges> <Rle> #> [1] 1 5-15 * #> ------- #> seqinfo: 2 sequences from an unspecified genome; no seqlengths I tried to merge after applying a `findOverlaps` method, because the result of the FO method only gives 2 hits so I would expect that it also gives 2 instances. However, it also merges the 2 things. findOverlapPairs(rg1,rg2) #> Pairs object with 2 pairs and 0 metadata columns: #> first second #> <GRanges> <GRanges> #> [1] 1:1-9 1:5-15 #> [2] 1:10-19 1:5-15 GenomicRanges::intersect(findOverlapPairs(rg1,rg2)) #> GRanges object with 1 range and 0 metadata columns: #> seqnames ranges strand #> <Rle> <IRanges> <Rle> #> [1] 1 5-15 * #> ------- #> seqinfo: 2 sequences from an unspecified genome; no seqlengths Created on 2019-03-25 by the [reprex package](http://reprex.tidyverse.org) (v0.2.0).
<!-- language-all: lang-r --> Just found that I should be able to do it with the `pintersect` function from a Pair object. In my example it works but I am not familiar with this method so I am not sure if it would be general enough. suppressPackageStartupMessages(library(GenomicRanges)) #> Warning: package 'S4Vectors' was built under R version 3.5.1 rg1 = GenomicRanges::GRanges(seqnames = c(1,1,2), IRanges(start = c(1,10,1), end = c(9,19,20)) ) rg2 = GenomicRanges::GRanges(seqnames = c(1), IRanges(start = c(5), end = c(15)) ) GenomicRanges::pintersect(findOverlapPairs(rg1,rg2)) #> GRanges object with 2 ranges and 1 metadata column: #> seqnames ranges strand | hit #> <Rle> <IRanges> <Rle> | <logical> #> [1] 1 5-9 * | TRUE #> [2] 1 10-15 * | TRUE #> ------- #> seqinfo: 2 sequences from an unspecified genome; no seqlengths Created on 2019-03-25 by the [reprex package](http://reprex.tidyverse.org) (v0.2.0).
biostars
{"uid": 371231, "view_count": 1227, "vote_count": 1}
Hi, can we sort BAM files according to read name? As the normal sorting happens on the coordinates, can anyone tell me how to sort a BAM file on read names? I am trying to run HTSeq-count on paired-end SAM files but am receiving warnings, to resolve which I have to sort the BAM by read name, then create its SAM, and then run HTSeq-count.
pass the `-n` flag to [samtools][1] samtools sort -n inputfile output http://samtools.sourceforge.net/samtools.shtml [1]: http://samtools.sourceforge.net/
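Note that the command above uses the old samtools (0.1.x) syntax, where the last argument is an output file name prefix. In samtools 1.x the output file is given explicitly with `-o`:

    samtools sort -n -o namesorted.bam input.bam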
biostars
{"uid": 78318, "view_count": 21941, "vote_count": 3}
How can I merge unique/non-overlapping genes between 2 gene model GFF3 files? I have 2 gene models:

1. A recent, supposedly higher-quality gene model
2. An older, supposedly lower-quality gene model

Some genes that are described in the literature, though, have been removed from the more recent gene model, while they are still in the older gene model. I would like to add these and other "missing" genes back to the more recent gene model from the older gene model. But if there is an overlap between genes from the different models, I would like to keep the genes as described in the more recent gene model. Is there a way to do this?
`bedtools intersect -v` is one option for identifying the unique genes from the older gene model. <https://bedtools.readthedocs.io/en/latest/content/tools/intersect.html>
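A minimal sketch of the whole merge with placeholder file names (depending on how your GFF3 files are structured, you may want to restrict the comparison to lines with `gene` in column 3 first):

    # genes present only in the older model (no overlap with the newer one)
    bedtools intersect -v -a old_model.gff3 -b new_model.gff3 > old_only.gff3
    # append those unique old genes to the newer, higher-quality model
    cat new_model.gff3 old_only.gff3 > merged.gff3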
biostars
{"uid": 9553217, "view_count": 444, "vote_count": 1}
Hello Everyone, I have a VCF with multiple individuals from multiple populations and I would like to get a summary of the allele frequency spectrum for each population. I know that VCFtools has some nice options for outputting allele frequencies. However my data is from non-model organisms and I think using the reference/derived alleles for calculating allele frequencies is resulting in some serious biases. So I am looking for two possible solutions: 1. How to calculate the folded allele frequency spectrum (not biased by ancestral/reference allele assumptions) from a VCF file as starting point. 2. How to calculate the allele frequency spectrum using an outgroup species to infer the ancestral allele. Are there any packages out there for this? It does not seem trivial to me to infer which allele is presumed ancestral and to incorporate this into the VCF file to then calculate allele frequency spectrum based on ancestral/derived alleles. Any comments or ideas are much appreciated. Thank you in advance! Best Rubal
This can easily be accomplished via [VCFlib](https://github.com/ekg/vcflib) If the end goal is association testing have a look at GPAT: https://github.com/jewmanchue/vcflib/wiki **EDIT**: **GPAT++** now supports population summary statistics: https://github.com/jewmanchue/vcflib/wiki/Basic-population-statistics-with-GPAT
biostars
{"uid": 101218, "view_count": 8396, "vote_count": 4}
I want to store a docker image with the input data, code, software, intermediate results and results. This will be a massive file. Is there something like the sequence archive for people who want to upload a docker image of their whole study? If not, what is the best/cheapest way to store such a file? Any related thoughts not directly answering the question are also welcome.
I get that the point of using a Docker image is reproducibility, but the key selling point of Docker is its modularity! It seems slightly blasphemous to put *everything* onto a Docker image. Why not just a VM image if you're going to make an offline, multi-gigabyte reproducibility archive?

Instead, I would give some serious thought to what genomax said, which is to have a Docker image which automates the process of downloading, decompressing, etc., all the raw public data and turning that into the final result. This way your Docker image would be tiny, and you don't have issues with the Docker data and the public data falling out of sync if corrections are ever needed (as a general rule of thumb, there should only be one place to download the data from). And of course, you can update a Docker image of 100 MB much more easily than a Docker image of 100 GB to fix a typo in a script.

As an aside, it seems very popular these days, when met with the question "how can I reproduce this in 10 years' time?", to think of the future as some cataclysmic hellscape where nothing works anymore. Some poor future bioinformatician slumped over a green-and-black cathode-ray monitor mumbling about "the wisdom of the ancients" while his buddies pedal bicycles to generate power, all to reproduce the RNA-Seq findings of some 10-year-old study. As someone typing this while playing Pokemon Red via a Game Boy emulator on his phone, I'd say the chances of bad code still working 10 years from now are fairly high, so long as the code was fairly popular at the time :)
biostars
{"uid": 206026, "view_count": 1988, "vote_count": 5}
I have a big data matrix and each column has named with multiple information and separated by an underscore. For an example; Genotype_Time_Replicate: X_T0_1, X_T0_2 etc I want to average my data for downstream analysis. How can I average replications (**Case 1: in column data**), and averaging repeated raw data and column replication data (**Case 2: Column and raw**) Final Expectation looks like; > <a href="https://ibb.co/nDEeVy"><img src="https://thumb.ibb.co/nDEeVy/Picture1.png" alt="Picture1" border="0"></a> Sample data frame is given below, Case 1 df: structure(list(Gene = c("AA", "PQ", "XY", "UBQ"), X_T0_R1 = c(1.46559502, 0.220140568, 0.304127515, 1.098842127), X_T0_R2 = c(1.087642983, 0.237500819, 0.319844338, 1.256624804), X_T0_R3 = c(1.424945196, 0.21066267, 0.256496284, 1.467120048), X_T1_R1 = c(1.289943948, 0.207778662, 0.277942721, 1.238400358), X_T1_R2 = c(1.376535013, 0.488774258, 0.362562315, 0.671502431), X_T1_R3 = c(1.833390311, 0.182798731, 0.332856558, 1.448757569), X_T2_R1 = c(1.450753714, 0.247576125, 0.274415259, 1.035410946), X_T2_R2 = c(1.3094609, 0.390028842, 0.352460646, 0.946426593), X_T2_R3 = c(0.5953716, 1.007079177, 1.912258811, 0.827119776), X_T3_R1 = c(0.7906009, 0.730242116, 1.235644748, 0.832287694), X_T3_R2 = c(1.215333041, 1.012914813, 1.086362205, 1.00918082), X_T3_R3 = c(1.069312467, 0.780421013, 1.002313082, 1.031761442), Y_T0_R1 = c(0.053317766, 3.316414959, 3.617213894, 0.788193798), Y_T0_R2 = c(0.506623748, 3.599442788, 1.734075583, 1.179462912), Y_T0_R3 = c(0.713670106, 2.516735845, 1.236204882, 1.075393433), Y_T1_R1 = c(0.740998252, 1.444496448, 1.077023349, 0.869258744), Y_T1_R2 = c(0.648231834, 0.097957459, 0.791438659, 0.428805547), Y_T1_R3 = c(0.780499252, 0.187840968, 0.820430227, 0.51636582), Y_T2_R1 = c(0.35344654, 1.190274584, 0.401845911, 1.223534348), Y_T2_R2 = c(0.220223951, 1.367784148, 0.362815405, 1.102117612), Y_T2_R3 = c(0.432856978, 1.403057729, 0.10802472, 1.304233845), Y_T3_R1 = c(0.234963735, 1.232129062, 0.072433381, 1.203096462), Y_T3_R2 = c(0.353770497, 0.885122768, 0.011662112, 1.188149743), Y_T3_R3 = c(0.396091395, 1.333921747, 0.192594116, 1.838029829), Z_T0_R1 = c(0.398000559, 1.286528398, 0.129147097, 1.452769794), Z_T0_R2 = c(0.384759325, 1.122251177, 0.119475721, 1.385513609), Z_T0_R3 = c(1.582230097, 0.697419716, 2.406671502, 0.477415567), Z_T1_R1 = c(1.136843842, 0.804552001, 2.13213228, 0.989075996), Z_T1_R2 = c(1.275683837, 1.227821594, 0.31900326, 0.835941568), Z_T1_R3 = c(0.963349308, 0.968589683, 1.706670339, 0.807060135), Z_T2_R1 = c(3.765036263, 0.477443352, 1.712841882, 0.469173869), Z_T2_R2 = c(1.901023385, 0.832736132, 2.223429427, 0.593558769), Z_T2_R3 = c(1.407713024, 0.911920317, 2.011259223, 0.692553388), Z_T3_R1 = c(0.988333629, 1.095130142, 1.648598854, 0.629915612), Z_T3_R2 = c(0.618606729, 0.497458337, 0.549147265, 1.249492088), Z_T3_R3 = c(0.429823986, 0.471389536, 0.977124788, 1.136635484)), row.names = c(NA, -4L ), class = c("data.table", "data.frame")) Case 2 df: structure(list(Gene = c("mut", "ACTIN", "ACTIN", "Pq", "UBQ", "UBQ", "Xa"), X_T0_R1 = c(0.344814469, 1.209073623, 1.071457953, 0.362842359, 1.014392244, 1.571055788, 0.570729408), X_T0_R2 = c(0.449930853, 1.031557118, 1.054965621, 0.522831228, 0.83300542, 0.967355216, 0.501057748), X_T0_R3 = c(0.601209073, 1.695796471, 1.052815987, 0.571729222, 1.391288288, 1.773644641, 0.453820027), X_T1_R1 = c(0.427800244, 1.308884798, 0.991302515, 0.329510681, 0.773414746, 1.029619555, 0.362504535), X_T1_R2 = c(0.418589633, 1.811507215, 
1.206305091, 0.29886302, 0.895616224, 1.196317937, 0.408657559), X_T1_R3 = c(0.468263467, 1.352236153, 1.444060418, 0.359970383, 0.942421479, 2.388771681, 0.145078696), X_T2_R1 = c(0.300362616, 1.654754505, 1.109259911, 0.306699247, 0.585608303, 1.945573895, 0.270237172), X_T2_R2 = c(0.27920993, 1.573822163, 1.152985196, 0.310218502, 0.493783209, 1.573792123, 0.36659012), X_T2_R3 = c(1.792971556, 0.665809249, 0.778594892, 2.161999623, 1.888984449, 0.456632731, 1.631251843), X_T3_R1 = c(1.118011513, 0.570411874, 1.044634812, 1.213092011, 1.817947271, 0.234950383, 1.384650094), X_T3_R2 = c(1.008515071, 0.916509523, 0.905764637, 1.244132809, 0.752181246, 0.797524026, 1.010615689), X_T3_R3 = c(0.816620011, 0.740345088, 1.106478019, 0.899414205, 0.909160589, 0.672469518, 0.594865366), Y_T0_R1 = c(3.307846716, 0.027550169, 0.645327389, 2.887386508, 1.042465604, 0.05047425, 4.318466199), Y_T0_R2 = c(2.035398381, 0.633422527, 0.888069994, 2.062827838, 1.82433679, 0.500792593, 1.182188977), Y_T0_R3 = c(1.500168876, 0.877196975, 1.088593542, 1.392198697, 1.162069878, 0.470956741, 1.511890878), Y_T1_R1 = c(1.095875029, 0.777981021, 1.050238479, 1.17216374, 0.945470429, 0.40568268, 0.872396888), Y_T1_R2 = c(0.452742932, 0.352610874, 0.787861253, 0.477126035, 0.320200734, 1.826032539, 0.332244865), Y_T1_R3 = c(0.45960558, 0.478390214, 0.645688363, 0.395673468, 0.215407604, 0.759507568, 0.700730905), Y_T2_R1 = c(1.559068766, 0.062252184, 0.937463531, 0.994007758, 0.482591298, 1.269828631, 0.237326878), Y_T2_R2 = c(1.390406257, 0.215685731, 1.087380361, 1.018431329, 0.585660661, 1.05095161, 0.173209498), Y_T2_R3 = c(1.00828232, 0.376013801, 0.782410602, 0.906376375, 0.572489629, 1.359345852, 0.302963483), Y_T3_R1 = c(1.182635592, 0.117426355, 1.013642281, 0.967559933, 0.306328031, 1.231521805, 0.257804624), Y_T3_R2 = c(1.366839578, 0.341411017, 1.337125947, 0.943784803, 0.721978298, 1.10875345, 0.189978177), Y_T3_R3 = c(1.594404053, 0.209740069, 0.92384942, 0.897659445, 0.457172538, 1.543831721, 0.272475233), Z_T0_R1 = c(1.237203711, 0.233057698, 1.077219174, 1.156260667, 0.264806683, 1.591044318, 0.255767162), Z_T0_R2 = c(1.211301515, 0.251870699, 1.141522554, 1.194071909, 0.20882802, 1.533752995, 0.278059859), Z_T0_R3 = c(0.645425334, 1.53688617, 0.439888106, 0.819063313, 1.769224478, 0.250876057, 1.998822839), Z_T1_R1 = c(0.971645792, 0.671074934, 0.469502588, 1.312821698, 1.306039773, 1.40561198, 1.704347344), Z_T1_R2 = c(0.859830596, 1.580097955, 1.366461274, 1.24037716, 0.80578233, 1.116605654, 1.211928025), Z_T1_R3 = c(0.785228306, 1.286123696, 1.10243547, 0.996917372, 1.215506569, 0.683697612, 1.000232952), Z_T2_R1 = c(0.475576762, 2.673806674, 0.732913032, 0.763693301, 3.091813549, 0.347384763, 3.16064337), Z_T2_R2 = c(0.810829692, 1.590506889, 1.162262268, 1.367255133, 1.378518959, 0.677096267, 2.006934309), Z_T2_R3 = c(1.02507371, 2.164918846, 1.440885034, 1.185511625, 1.934374556, 0.460659928, 1.277191061), Z_T3_R1 = c(0.834953495, 2.155130232, 1.209137833, 0.934189133, 1.048650427, 0.704562113, 1.145400709), Z_T3_R2 = c(0.886903303, 0.237343684, 0.921370232, 0.737206101, 0.318232441, 1.314051524, 0.9314835), Z_T3_R3 = c(0.748710472, 0.501419194, 0.914476206, 0.641169316, 0.119979817, 1.187578276, 0.918544916)), row.names = c(NA, -7L), class = c("data.table", "data.frame")) if possible could you please help me with an **easy approach using R programming**
Assumption is that average is applied for every 3 columns and updated with data frame df1 from OP: results=data.frame(apply(array(as.matrix(df1[,-1]), c(nrow(df1),3, ncol(df1)/3)),3, rowMeans)) results=cbind(df1$Gene, results) results df1$Gene X1 X2 X3 X4 X5 X6 1 AA 1.3260611 1.4999564 1.1185287 1.0250821 0.4245372 0.7232431 2 PQ 0.2227680 0.2931172 0.5482280 0.8411926 3.1441979 0.5767650 3 XY 0.2934894 0.3244539 0.8463782 1.1081067 2.1958315 0.8962974 4 UBQ 1.2741957 1.1195535 0.9363191 0.9577433 1.0143500 0.6048100 X7 X8 X9 X10 X11 X12 1 0.3355092 0.32827521 0.7883300 1.1252923 2.3579242 0.6789214 2 1.3203722 1.15039119 1.0353998 1.0003211 0.7406999 0.6879927 3 0.2908953 0.09222987 0.8850981 1.3859353 1.9825102 1.0582903 4 1.2099619 1.40975868 1.1052330 0.8773592 0.5850953 1.0053477 Please change column names.
biostars
{"uid": 318210, "view_count": 11837, "vote_count": 2}
Hi, I have got my first Nanopore sequencing data and the first step was to see if the data is good. Has anyone has any experience with this kind of data and can tell me how to interpret the results. The whole report can be downloaded [here][1] (not sure how to post it here). Allin all it looks quite good to me, but what I'm not sure about are the two images attached here. These are the per base QC and sequencing content. It seems that the beginning of the reads is not good, but this can be probably trimmed by removing adapters etc. But Is it really that i need to remove the first 1000 positions? This seems a bit extreme. About the sequencing content, I don't know where to begin. This looks consistent with the first image, where the quality is not good, and if removing these positions, it should get better. but is it ok for the two pairs TG and AC to be so much apart like that? EDIT: I have done both pycoQC as well as the minionQC run (R). They show similar results. It can be downloaded from [here][2] a few of the images are also attached. ![Basecalled reads PHRED quality][3] ![Output over experiment time][4] ![Basecalled reads length vs reads PHRED quality][6] ![Channel activity over time][5] To me it looks as if the run was past its prime after ~50h. There is no real gain of new reads afterwards. As I'm usually work with mRNA-Seq, I'm not sure how to call the PHRED Quality. I can see that the most reads are of Q ~=8 and most of them are short (which is also expected). But all in all can this run be classified as good? thanks ![per base QC ][7] ![per base Seq content][8] [1]: https://datashare.biochem.mpg.de/s/CL7lvs0b1wMSlZj [2]:https://datashare.biochem.mpg.de/s/lkhPxr417F4eI6n [3]: /media/images/f4b05e56-2eab-41f0-8b11-46dc209b [4]: /media/images/cfd343fd-d3ad-41e6-8d8a-6aa6c81c [5]: /media/images/5fb9d993-e5b2-411e-94b6-40df8be8 [6]: /media/images/f880adb8-691a-4a7a-bfdb-7a185cef [7]: /media/images/0f7bf463-05c5-467c-8b94-24c6abd4 [8]: /media/images/84f7f372-dfb1-438a-a42f-0af1e6b0
A few of your reads are very long, and those skew and alter the plots. There is also no binning for the first 10 bases, after which the data are binned into huge bins, which again makes the plot misleading.

Do not remove the beginning of the reads for QC reasons; that is rarely an advisable course of action. I would filter the reads to a more manageable maximum length, say 15 kb, and rerun the QC analysis.
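One way to apply that length cap, assuming NanoFilt is installed (seqkit's `seq -M` option would work just as well; file names are placeholders):

    # keep only reads up to 15 kb, then rerun the QC on the filtered file
    gunzip -c reads.fastq.gz | NanoFilt --maxlength 15000 | gzip > reads.max15k.fastq.gz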
biostars
{"uid": 9521828, "view_count": 1092, "vote_count": 1}
I am using an esearch query like `$query = "SS1G_03709+AND+gene[filter]";`, but it gives me everything (gene + mRNA + genome sequences). What filter do I need to use so that my search only returns gene sequences? I tried a few filters from [here][1], but couldn't find anything to limit my search to genes.

[1]: https://www.ncbi.nlm.nih.gov/books/NBK49540/
One way (sequence truncated for brevity):

    $ esearch -db nuccore -query "SS1G_01676 [GENE]" | efetch -format fasta | grep ">" | grep -v "genome" | awk '{print $1}' | epost -db nuccore | efetch -format fasta
    >XM_001597432.1 Sclerotinia sclerotiorum 1980 UF-70 hypothetical protein partial mRNA
    ATGGCGCCCAAATTTTCGGAAGACGAAATTGATGATTTGATATATTTTGCTCGTATTGGGGACAATGATG
    AATTCGAGAAATTGAGGGAGGAGTTATGTAAAAGGGAAGGATGTTCGATTGCTGAACTGCTAGAGACTGC

    $ esearch -db nuccore -query "SS1G_03709 [GENE]" | efetch -format fasta | grep ">" | grep -v "genome" | awk '{print $1}' | epost -db nuccore | efetch -format fasta
    >XM_001595570.1 Sclerotinia sclerotiorum 1980 UF-70 hypothetical protein partial mRNA
    ATGCATTTCTCAACTGCAAAAACGCTTCTTCCTCTCGCAGTTCTAGTTTCCTATACCACCGCTCAAACAA
    CAGCTGCAGCACCACCTGTTGCTAGTGCTCCTACAGGCGGCACTTCTAGTACTTGTCTCGGACAAAATGT

@vkkodali has much neater ways of finding this info in that answer. Taking some inspiration from one of the commands there, you could just do

    $ esearch -db nuccore -query "Sclerotinia sclerotiorum 1980 [TITLE]" | efilter -molecule mrna | efetch -format fasta > s_sclerot.fa

to get all of them at one time.
biostars
{"uid": 345772, "view_count": 3271, "vote_count": 2}
Hi, everyone:

I found that there is no gold-standard method for single-cell RNA-seq subgrouping; many papers present different methods for this analysis. Now I want to try these different clustering methods on my data. How can I evaluate them using R, Python, etc.? I don't want to do biological experiments~

Generally, is there a gold standard (methods, books, etc.) for evaluating clustering methods computationally in the machine learning area?

Thanks very much!
There is no standard method for evaluating clustering methods other than comparing to some ground truth when it is available. There are however several measures of clustering "quality", e.g. the silhouette (in R: [silhouette](https://stat.ethz.ch/R-manual/R-devel/library/cluster/html/silhouette.html)). See for example the python [sklearn page on evaluating clustering](http://scikit-learn.org/stable/modules/clustering.html#clustering-performance-evaluation).
biostars
{"uid": 160958, "view_count": 2041, "vote_count": 2}
Hey, I have a fasta file which contains more than 80 sequences. I want to write each sequence to a separate file, and the file name should be the header of the corresponding sequence.
curl https://raw.githubusercontent.com/gouthamatla/fasta_File_Manipulation/master/SplitFastaFile.py | python - <in.fasta>
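If you would rather not pipe a remote script, a minimal awk equivalent (this assumes headers are unique, contain no characters that are illegal in file names, and that your awk, e.g. gawk, can keep many files open; only the first word of each header is used):

    # start a new output file at each header line; write every line to the current file
    awk '/^>/{f=substr($1,2)".fasta"} {print > f}' in.fasta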
biostars
{"uid": 170646, "view_count": 2763, "vote_count": 1}
Hey guys!

Just a newbie in bioinformatics here. I have a question about integrative bioinformatics: how can I combine those databases to make my queries more efficient?

Is there a browser or interface which can help me search multiple databases while entering only one query?
As Sean said, there is no such magic box for all queries across all databases. Nevertheless, you could have a look at [bio2rdf](http://bio2rdf.org/) to get an idea of what an integrative database is:

> http://www.ncbi.nlm.nih.gov/pubmed/18472304
>
> J Biomed Inform. 2008 Oct;41(5):706-16. doi: 10.1016/j.jbi.2008.03.004.
>
> Bio2RDF: towards a mashup to build bioinformatics knowledge systems.
>
> Belleau F, Nolin MA, Tourigny N, Rigault P, Morissette J.

Or look at the tools provided by the NCBI: http://www.ncbi.nlm.nih.gov/books/NBK25500/
biostars
{"uid": 74565, "view_count": 3390, "vote_count": 1}