INSTRUCTION | RESPONSE | SOURCE | METADATA
---|---|---|---|
Hi,
I was wondering which NCBI reference genome assembly to use for mouse GRCm38, if I don't want to use the UCSC mm10. I thought the FTP-site of the Sanger mouse genomes project might be a good place to check: ftp://ftp-mouse.sanger.ac.uk/ref/ Does anyone know what the 68 refers to in the file name - [GRCm38_68.fa][1]?
Many thanks,
Lorna
[1]: ftp://ftp-mouse.sanger.ac.uk/ref/GRCm38_68.fa | <p>Here is one, from Ensembl --> ftp://ftp.ensembl.org/pub/release-68/fasta/mus_musculus/dna/. Would that be inappropriate?<br />
<br />
The 68 in the file name refers to Ensembl release 68; this Ensembl version (release 68) corresponds to GRCm38 (mm10) (https://www.biostars.org/p/52254/)<br />
<br />
Some sources to download the mm10 genome version:<br />
http://hgdownload-test.cse.ucsc.edu/goldenPath/mm10/bigZips/</p>
<p>http://www.ncbi.nlm.nih.gov/genome/52</p>
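<p>For example, to fetch the release-68 mouse genome directly from Ensembl (a sketch; the file name below follows Ensembl's pre-release-76 naming convention, so double-check the FTP listing first):</p>
<pre><code>wget ftp://ftp.ensembl.org/pub/release-68/fasta/mus_musculus/dna/Mus_musculus.GRCm38.68.dna.toplevel.fa.gz
</code></pre>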
| biostars | {"uid": 146780, "view_count": 13878, "vote_count": 1} |
<p>I received genome-wide association (GWAS) data from a colleague who's supposedly done all the imputation and quality control according to the consortium's standards. Genotyping was Illumina 660, imputed to HapMap (3.2 million SNPs total).</p>
<p>The data came to me as a matrix of 11,000 samples (rows) and 3.2 million SNPs (columns). There's a header row for each SNP, and genotypes are coded as the number of minor alleles (or allele dosage for imputed SNPs). </p>
<p>Here's a few rows and columns to show you what it looks like:</p>
<pre><code>rs1793851 rs9929479 rs929483
2.0 0 1
1.6 0 1
2.0 NA 0
2.0 0 1
1.6 0 0
2.0 1 NA
1.0 0 0
1.9 0 2
</code></pre>
<p>I've always used PLINK for GWAS data management, QC, and analysis because of its efficient data handling capabilities for GWAS data. However, this kind of data can't be imported directly into PLINK or converted into a pedigree format file. (<a href='http://pngu.mgh.harvard.edu/~purcell/plink/dosage.shtml'>PLINK does handle imputed data</a>, and so does <a href='http://www.stats.ox.ac.uk/~marchini/software/gwas/snptest.html'>SNPTEST</a>, but both of these require genotype probabilities and I only have the expected allele dosage).</p>
<p>I did write some R code to read in the data in chunks and run some simple summary and association statistics, but this is clunky and suboptimal for many reasons:</p>
<ol>
<li>The dataset first has to be split up (<a href='http://stackoverflow.com/questions/5265839/split-delimited-file-into-smaller-files-by-column'>I used a perl wrapper around UNIX/cut to do this</a>). After splitting the dataset into several hundred files with all my samples and a subset of SNPs, computing sample-level measures (sample call rate, relatedness, ethnic outliers) is going to be a real coding nightmare.</li>
<li>Subsetting analyses is going to be difficult (not as easy as PLINK's --exclude, --include, --keep, --remove, --cluster, etc).</li>
<li>PLINK integrates SNP annotation info (in the map file) to your results. Joining QC and analysis results to genomic position, minor allele, etc, will require lots of SQL joins.</li>
</ol>
<p>Ideally I don't want to rewrite software for GWAS data management, QC, and analysis. I've considered (1) analyzing only genotyped SNPS, or (2) rounding the allele dosage to the nearest integer so I can use PLINK, but both of these methods discard useful data.</p>
<p>Does anyone have any suggestions on how I should start to QC and analyze this data without re-inventing the wheel or rewriting PLINK? Any other software suggestions that could take this kind of data? Keep in mind, my dataset is nearly 100GB.</p>
<p>Thanks in advance.</p>
| <p>For imputation, the primary QC really should be done on the input genotypes, rather than directly on the imputations. Prior to imputation, I remove SNPs at MAF < 0.01 or HWE p < 1e-06, and only use SNPs present in at least 97% of the samples. Also, since Illumina data have relatively few A/T and G/C SNPs, I remove those as well, which completely eliminates any potential strand issues (most of my runs are meta-analyzed with other studies, so I think this step is worthwhile). In my opinion, these steps are a key part of good imputations.</p>
<p>However, you already have imputations, so I will assume that they are to high quality. The exact answer to your question depends on the software used for imputation. Personally, I use MACH for most of my imputation work, and I use <a href='http://www.genabel.org/packages/ProbABEL'>ProbABEL</a> for association analysis. ProbABEL reads the MACH files directly without modification. Also MACH2QTL and MACH2DAT can be used similarly, but I don't have any experience with those. In the analysis, ProbABEL does track the Imputation Quality score from the input files, and typically those are filtered at R2<0.3, but that is typically done as part of the meta-analysis.</p>
<p>Some points:</p>
<ul>
<li>ProbABEL reads the entire file into memory, which will be sizable for large chromosomes and sample size.</li>
<li>The newer versions of ProbABEL have support for a "filevector" format which gets around this issue. I've just started experimenting with this, and it will remove the memory issues I just mentioned. If the documentation is unclear, there is a better explanation <a href='http://forum.genabel.org/viewtopic.php?f=6&t=22'>here</a> on how to use the files. The "filevector" format needs to be made via the <code>mach2databel</code> function of <a href='http://cran.r-project.org/web/packages/GenABEL/index.html'>GenABEL</a> (see the sketch after this list).</li>
<li>To get this all to work, I have a custom job script written in Perl to queue the jobs, and I usually do several in parallel via the <a href='http://search.cpan.org/~dlux/Parallel-ForkManager-0.7.9/lib/Parallel/ForkManager.pm'>Parallel::ForkManager</a> module, but you should choose your own poison.</li>
<li>ProbABEL can't take compressed files as input, which is not optimal, because the uncompressed files are huge. MACH2QTL and MACH2DAT can take compressed files, but I've still not jumped to that.</li>
<li>If your imputations came from IMPUTE, <a href='http://www.stats.ox.ac.uk/~marchini/software/gwas/snptest.html'>SNPTEST</a> can take the input directly, but I have no experience with SNPTEST.</li>
<li>The GenABEL package also has a <code>impute2databel</code> function to make filevector format for use with ProbABEL. I've not worked with this.</li>
<li>Previous answers cited association software for use with Beagle imputations. I have no experience with that myself.</li>
<li>These tools typically allow for fairly standard association models, and if you want "unsupported" models, the workarounds are much slower. I've loaded all my genotypes into netCDF files and work via R with the <a href='http://cirrus.ucsd.edu/~pierce/ncdf/'>ncdf</a> library.</li>
<li>I don't recommend working with the "rounded" genotypes, as the decimal values appropriately capture uncertainty in the imputations.</li>
<li>You don't want to read the unrounded genotypes into R, as R does not support single-precision floats: everything is read as a double, and memory quickly becomes an issue. That is the reason for the netCDF workaround.</li>
</ul>
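<p>A minimal sketch of that filevector conversion (argument names per the GenABEL documentation, so double-check against your installed version, and adjust the per-chromosome file names to your own data):</p>
<pre><code>library(GenABEL)
# convert MACH dosage output to the DatABEL "filevector" format
# (an .fvi/.fvd file pair) that ProbABEL can read without loading
# the whole dosage matrix into memory
mach2databel(imputedgenofile = "chr22.mldose",
             mlinfofile = "chr22.mlinfo",
             outfile = "chr22.dose")
</code></pre>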
<p>Hopefully, this gives you some insight.</p>
| biostars | {"uid": 6476, "view_count": 40814, "vote_count": 30} |
Hi,
I keep coming up against these problems when writing scripts using BioPython SeqIO.
The first is that I can't find in the documentation of BioPython (or the cookbook) how to change a SeqRecord attribute, e.g. add a number to a `record.id` (or `record.name`) to give it a unique identifier (a common problem in preparing data).
For example:
from Bio import SeqIO

c = 0
for x in SeqIO.parse("file.fasta", "fasta"):
    c = c + 1
    x.name = x.name + str(c)
    SeqIO.write(x, "out.fasta", "fasta")
...does nothing, the name for each record stays the same as the input.
If I try to build the output manually, simply by writing a text file, I always come across the problem of having no way to find out what all the SeqRecord attributes are. For example, in a fastq record, what is the attribute for the phred33 data? `record.letter_annotations['phred_quality']` will give phred values, but not the phred33-coded values. Does anyone know how to get a full list of attributes (and their 'keys') for a SeqRecord?
Thanks for any help..
Theo | I have a feeling there's a minor confusion with ID and name in your example - you can use the print function to see all attributes of a SeqRecord, here for a small test record:
for s in SeqIO.parse('./test.fasta', 'fasta'):
    print(s)
prints
ID: Unchanged
Name: Unchanged
Description: Unchanged
Number of features: 0
Seq('ATGCTAGCTAGCTAGCTA', SingleLetterAlphabet())
Now if I change that:
for s in SeqIO.parse('./test.fasta', 'fasta'):
    s.id = 'CHANGED'
    print(s)
it prints
ID: CHANGED
Name: Unchanged
Description: Unchanged
Number of features: 0
Seq('ATGCTAGCTAGCTAGCTA', SingleLetterAlphabet())
So as you can see, name and description stay the same, which is probably what happens in your example. If I write this SeqRecord via `SeqIO.write(s, 'out.fasta', 'fasta')` I get
>CHANGED Unchanged
ATGCTAGCTAGCTAGCTA
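By the way, a minimal corrected sketch of the renaming loop from your question would change `id` (which is what the FASTA header starts with) and collect the records so that `out.fasta` is not overwritten on every iteration:

    from Bio import SeqIO

    records = []
    for c, rec in enumerate(SeqIO.parse('file.fasta', 'fasta'), start=1):
        rec.id = rec.id + str(c)     # append a unique number to the id
        rec.description = ''         # optional: drop the old description
        records.append(rec)
    SeqIO.write(records, 'out.fasta', 'fasta')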
This should also answer your second question, using the normal print() function you can see all attributes. You can also use Python's in-built dir() method:
print(dir(s))
prints
['__add__', '__class__', '__contains__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattribute__', '__getitem__', '__hash__', '__init__', '__iter__', '__len__', '__module__', '__new__', '__nonzero__', '__radd__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_per_letter_annotations', '_seq', '_set_per_letter_annotations', '_set_seq', 'annotations', 'dbxrefs', 'description', 'features', 'format', 'id', 'letter_annotations', 'lower', 'name', 'reverse_complement', 'seq', 'upper']
Lots of Python standard functions, and a few BioPython specific methods.
The numbers in `letter_annotations['phred_quality']` are the decoded integer Phred scores; to get the phred+33 encoded characters back, here's one example where the SeqRecord's quality is all '#':
>>> a.letter_annotations['phred_quality']
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
Get the first number, and add 33 for the offset:
>>> print chr(a.letter_annotations['phred_quality'][0] + 33)
'#'
So you can for example do this:
>>> print [chr(x + 33) for x in a.letter_annotations['phred_quality']]
['#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#']
or even
>>> print ''.join([chr(x + 33) for x in a.letter_annotations['phred_quality']])
'################################################################################'
Edit:
In case you're wondering, the quality scores are stored as plain integers so that the user can directly apply whatever offset is needed without much hassle (as above with the +33 offset). This is also possible:
`print a.format('fastq-illumina')` - prints the # quality as B
`print a.format('fastq-sanger')` - prints the # quality as #
`print a.format('fastq-solexa')` - prints the # quality as > | biostars | {"uid": 103663, "view_count": 6604, "vote_count": 1} |
I am new to analysing ATAC-seq data.
As mentioned on https://www.biostars.org/p/209592/, there seem to be two ways to use MACS to analyse ATAC-seq data.
1. Utilising the --shift -100 --extsize 200 command
This, I believe, is to find where the cutting sites are.
2. Utilising the --shift 37 --extsize 73 command
This is to find the nucleosomes, since the DNA wrapped around a nucleosome is circa 147 bp (example invocations of both commands are sketched below).
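For concreteness, these would be invoked roughly like this (a sketch; macs2 flags with placeholder file and genome names, and note that `--shift` requires `--nomodel`):

    # 1. cut-site-centred peaks (open chromatin)
    macs2 callpeak -t atac.bam -f BAM -g mm -n cutsites \
        --nomodel --shift -100 --extsize 200 --keep-dup all

    # 2. nucleosome-sized smoothing
    macs2 callpeak -t atac.bam -f BAM -g mm -n nucleosomes \
        --nomodel --shift 37 --extsize 73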
I specifically want to document regions of open chromatin, particularly enhancer regions.
I have read in the literature that open chromatin regions will give reads <100bp. If this is correct, do I have to filter my bam file to look at only this length.
Surely if I map all of my data, not filtering for read size, I will end up with genome-wide ATAC coverage? Looking at my bam file following mapping to the genome, my read sizes range from 50bp-450bp (with a peak at around 70bp).
| You can use MACS, but it is not the best tool for identifying open chromatin regions. I would recommend using Fseq: http://fureylab.web.unc.edu/software/fseq/ It was designed to analyze DNase-seq data, the predecessor of ATAC-seq. When analyzing data from either experiment, you do not really want to look for overlapping reads the way you do for ChIP-seq. You want to look for regions with a high density of cut sites or insertion sites, and Fseq is designed to highlight those. | biostars | {"uid": 324287, "view_count": 3791, "vote_count": 1} |
Hi,
By default IGV indicates insertions with respect to the reference with a purple I, and I would like to hide it or change the color. I have read the manual and it is driving me crazy; I can't find a way to do it. Does somebody know if it is possible? | Preferences -> Alignments -> Hide indels < 'x' bases (?) | biostars | {"uid": 339007, "view_count": 2070, "vote_count": 1} |
Hi,
I am trying to compare a few models for evolutionary distance calculation. I want to compare just the basic ones - Jukes-Cantor, K2P, Tamura, Tamura-Nei, HKY (Hasegawa-Kishino-Yano), GTR (general time reversible = Tavare) and Felsenstein81. I have already found equations for the evolutionary distance calculations for the first 4 models mentioned above. I am still looking for HKY, GTR and F81. Can you help me? I mean, I know how to calculate substitution rates (for transitions and transversions), but how do I then calculate the evolutionary distance?
Thanks Sam :) | I would first look at the original papers for the different models; you will find the equations there for sure. I know that the ape/phangorn packages use those, but I think they merely call [phyml][1]. I looked briefly, and you can request the source code, which will give you some code to work with.
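For the models with closed-form pairwise distances, the ape package's `dist.dna()` already implements most of what you list; a minimal sketch (model names per the `dist.dna` documentation; "alignment.fasta" is a placeholder input file; note that HKY and GTR have no simple closed-form pairwise distance in `dist.dna`, F84 being the closest relative of HKY):

    library(ape)
    aln <- read.dna("alignment.fasta", format = "fasta")

    d_jc  <- dist.dna(aln, model = "JC69")  # Jukes-Cantor
    d_k2p <- dist.dna(aln, model = "K80")   # Kimura 2-parameter
    d_f81 <- dist.dna(aln, model = "F81")   # Felsenstein 1981
    d_t92 <- dist.dna(aln, model = "T92")   # Tamura 1992
    d_tn  <- dist.dna(aln, model = "TN93")  # Tamura-Nei 1993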
[1]: http://www.atgc-montpellier.fr/phyml | biostars | {"uid": 136434, "view_count": 4204, "vote_count": 2} |
I very much like the IGB tool and its features. While I have been able to make good use of it, I have been facing a problem and can't seem to find a solution however much I try. I am trying to view the aligned TopHat output (mapped.bam and junction files) from RNA-seq data aligned to the reference *A. lyrata* genome. When I load the lyrata genome in the IGB browser I can see the genome coordinates and the TAIRmRNA database (the annotated .gff file). But after I upload a mapped.bam and junction file, I am not able to see the alignment (aligned reads) together with the reference and the annotation.
However, I noticed that the mapped.bam and junction files create their own set of scaffolds at the bottom of the default set (a one-to-one copy of the default, but I am not sure why). So, if I select a scaffold that the mapped.bam file has created, I am able to see the mapped reads and the junctions but then cannot see the coordinate bases and the annotations. With the A. thaliana genome there is no such problem with viewing the mapped output and junctions from RNA-seq data along with the genome coordinates and bases, the TAIR10 mRNA database and several other databases from other labs.
Also, I see that updated version of phytozome data is available (V10.2). Is the data for A. lyrata available on IGB browser (V7) the same as V10.2?
Thanks,
Bishwa K. | Hello,
It sounds to me like the reference genome you used for the alignment step is using different names for scaffolds than the version of the sequence IGB is using.
**Some useful info**:
IGB is getting reference genome sequence and gene model annotations from a publicly accessible IGB QuickLoad site located at:
http://www.igbquickload.org/quickload
The various genomes we support are contained in folders for each genome, named for the species and the month and year of the genome assembly release.
It looks like our latest A. lyrata genome is in here:
http://www.igbquickload.org/quickload/A_lyrata_Apr_2011/
IGB uses a file called "genome.txt" to populate the list of chromosome/scaffold sequences you see in the "Current Genome" table (right side tab):
http://igbquickload.org/quickload/A_lyrata_Apr_2011/genome.txt
If you download that file and open it in Excel or a text editor, you can see all the names of the chromosomes and their sizes.
The sequence data, which IGB will load when you click the "Load Sequence" button, is in a "2bit" format file called A_lyrata_Apr_2011.2bit. We are using the "2bit" format because it's very compact and there are many utilities for working with it - mostly available from Jim Kent and the UCSC Genome Bioinformatics group. The 2bit format has some nice features that makes accessing sequence data fast and easy for IGB.
If you load a BAM file into IGB and notice that all-new sequence names are getting added to the Current Sequence table **and** when you click the "Load Sequence" button, no sequence gets loaded, then that usually means: the genome.txt and 2bit files don't contain the sequences you used to run your alignment. This could happen if your genome version is different *or* if it's the same version but is just using different names.
If using different names is the problem, then you can give IGB a list of synonyms that IGB can use to match names. So for example, if the reference genome sequence you used to do your alignments contains a sequence called "FooBar" which is the same sequence that IGB calls "foobar123", then you can tell IGB the two names mean the same thing by adding a personal synonyms "chromosomes.txt" file to IGB.
For more info about that, see: https://wiki.transvar.org/display/igbman/Personal+Synonyms
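At its simplest, a personal synonyms file is just tab-separated equivalent names on one line, e.g., for the hypothetical names above (check the linked page for the exact format IGB expects):

    foobar123	FooBar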
Let us know if you need any help with this.
Also, if there is a more recent version of A. lyrata genome, we'd be happy to add it to the IGB QuickLoad system - including sequence and gene model annotations.
So | biostars | {"uid": 147826, "view_count": 4013, "vote_count": 3} |
Hi guys,
I have to create a variable a whose elements are column names of a dataframe.
For example, I have the following column names of a df:
colnames(df)
"PS_01", "PS_01_mod2", "PS_02", "PS_02_mod2"
I want to create a vector a whose elements are the column names of df that end with mod2, i.e. I want this:

    a <- c("PS_01_mod2", "PS_02_mod2")
How can I do this in a simple way?
| Hi, a simple `grep()` command will do this for you, and we can add a regular expression ('regex') to ensure positional specificity:
vector <- c('PS_01', 'PS_01_mod2', 'PS_02', 'PS_02_mod2', 'mod2_mod1')
idx <- grep('mod2$', vector)
vector[idx]
[1] "PS_01_mod2" "PS_02_mod2"
The dollar, `$`, means that we only want '*mod2*' appearing at the end of a line.
Note the difference here, without the dollar:
idx <- grep('mod2', vector)
vector[idx]
[1] "PS_01_mod2" "PS_02_mod2" "mod2_mod1"
Kevin
| biostars | {"uid": 9476797, "view_count": 577, "vote_count": 1} |
Hi!
I have an alignment (.bam) of reads to mm9 genome. I sorted it with `samtools sort`, so that later I can use `-sorted` key with bedtools. I also created a .bed-file with regions of interest, in which I want to count number of reads, that mapped to them. I tried this: converted .bam to .bed with `bedtools bamtobed`, and then intersected them counting number of hits (`bedtools intersect -a regions_of_interest.bed -b alignment_sorted.bed -c -sorted > Neg2H_counts.bedgraph`). The problem is, it looks fine for all chromosomes with numbers from 0 to 9 (and X), but all counts for all regions of interest of chromosomes with higher number (chr10, chr11, etc) are 0. There is no biological reason for that, in fact the highest signal should be on chr11. What could be wrong here? I am fairly new to all these tools.
**UPDATE**
I tried to do the same intersection with bedmap and the result is identical... So there probably is something wrong with my files - what could it be?
I also tried sorting the alignment-derived bed-file in the same way, as I did with the files with regions of interest and it doesn't help. | By default, the `-sorted` option assumes that both of your input files have chromosomes sorted lexicographically (chr1, chr10, chr11, etc.). I suspect one of your files is this way and one is not. Alternatively, if you want to use a different order, you can use the `-g` option. See the docs for details: http://bedtools.readthedocs.org/en/latest/content/tools/intersect.html#g-define-an-alternate-chromosome-sort-order-via-a-genome-file | biostars | {"uid": 102074, "view_count": 8544, "vote_count": 1} |
This is a follow-up to an earlier question of mine.
I tried the following command (copied verbatim from [UCSC's mysql access docs][1]):
mysql --user=genome --host=genome-mysql.cse.ucsc.edu -A
...but it fails with the error message:
ERROR 1045 (28000): Access denied for user 'genome'@'my.host.redacted' (using password: YES)
What else must I do to get through?
Thanks!
[1]: https://genome.ucsc.edu/goldenPath/help/mysql.html | OK, problem solved.
**Diagnosis:** <a href="https://en.wiktionary.org/wiki/PEBCAK">PEBKAC</a> [1]
**(one) Solution:** change the command line to
mysql **--no-defaults** --user=genome --host=genome-mysql.cse.ucsc.edu -A
---
[1] More specifically: I had a long-forgotten `~/.my.cnf` file lingering around, which was required by a pipeline that I used briefly back in 2006; even adding `--verbose` to the command line did not reveal that an incorrect connection parameter, namely a password, was being read from this file; I knew, from the remark `"(using password: YES)"` in the error message, that a password was somehow being used, but I had no way of determining the source of this password, nor what it was. How I finally figured out the problem is a tl;dr-length saga I won't bore you with. | biostars | {"uid": 166748, "view_count": 2395, "vote_count": 1} |
Hi all,
I have three somatic variants (mutations) detected in a single tumour sample. They come from Whole Genome Sequencing technologies and to call them normal and tumour samples were compared.
The mutations are supported by the same reads. It's a bit suspicious to me because this could indicate that the reads are from another part of the genome.
I have been searching a LOT for online tutorials about manual inspection of mutations with IGV. But I only found a couple of them that are interesting. However, I could not identify if these three mutations are real or not:
[Dropbox link to IGV screenshot of variants.][1]
[TinyPic link to IGV screenshot of variants.][2]
a) The quality of the reads is good: MAPQ between 60 and 29.
b) The variants are supported by reads from both strands.
c) There are no germline mutations annotated in dbSNP, 1000 genomes, etc.
d) Blat scores for the supporting reads in other regions of the genome are lower than 300, while the scores for this region are higher than 900.
But as I said before, I think that the fact that they are on the same reads is not a good signal.
Do you think that these could be false mutations? or real mutations from another region of the genome?
Thanks in advance.
[1]: https://www.dropbox.com/s/8sh8xf4koan9l3w/d_e.png?dl=0
[2]: http://es.tinypic.com/r/24pjqzl/9 | > The mutations are supported by the same reads. It's a bit suspicious to me because this could indicate that the reads are from another part of the genome.
It could also indicate that there are three adjacent mutations, or another sort of mutation. Without additional context, it is difficult to determine whether the reads are true variants. In addition to the properties you have identified, you should consider:
prior somatic mutation: are any of the SNVs in COSMIC?
strand bias: looks fine as there are reads supporting the variant originating from both strands
allele frequency: the variants appear to have somewhere around 20-40% BAF. Is this consistent with the BAF of upstream and downstream SNVs?
sequence context/alignment artifacts: do the reads actually represent three nearby SNVs? Have a look at the sequence context of the mutations. Depending on the flanking sequence there may be a more parsimonious explanation. For example, in some sequencing contexts, the 3 SNVs could be explained by a 1bp insertion that the aligner prefers to align as 3 SNVs. Are the reads containing the variants soft clipped? Is this actually an STR repeat expansion/contraction?
normal coverage: is there sufficient coverage in the normal for the variants to be reliably called as somatic? (the SNV caller should calculate this)
kataegis: do these mutations occur in a region of kataegis?
variant quality score: what quality score does the variant caller assign to these variants?
SV: is the tumour highly structurally rearranged? Could these be explained by a single DNA repair event? More generally, any explanation that results in the simultaneous generation of the mutations increases their plausibility, as a single-event explanation removes your implicit assumption that somatic SNVs should not be clustered so closely.
> But as I said before, I think that the fact that they are on the same reads is not a good signal.
It just means they are phased together and all occurred on the same chromatid. It does not necessarily make them less likely - especially if they can be explained by a single event. Events such as chromothripsis result in hundreds of structural rearrangements from the same chromatid.
TLDR: from the limited information available, they look plausible.
| biostars | {"uid": 254467, "view_count": 2979, "vote_count": 1} |
Hello, I always believed that expression matrices on GEO are normalized. However, today I got hugely inflated log2FC values from [GSE85957](https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE85957).
> head(expr_3)
# A tibble: 6 x 8
SYMBOL logFC AveExpr t P.Value adj.P.Val B ENTREZID
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <chr>
1 Spp1 3419. 2421. 10.2 0.00000368 0.00128 0.221 25353
2 Gstp1 2125. 2338. 10.3 0.00000328 0.00128 0.243 24426
3 Cyp2e1 2047. 2833. 4.32 0.00204 0.0235 -1.89 25086
Here is how my expression data was extracted:
gse_path <- "/datapool/pengguoyu/Microarray/20190711_geo/rawdata/GSE85957_series_matrix.txt.gz"
gse <- getGEO(filename=gse_path, AnnotGPL=TRUE)
expr <- exprs(gse)
So I went back to check the expression matrix:
PROBEID GSM2288460 GSM2288461 GSM2288462 GSM2288463 GSM2288464
1367452_at 1165.0328 1011.4838 1193.8429 1143.6874 1162.2721
1367453_at 512.07166 519.57355 502.8087 433.26254 480.2318
1367454_at 647.18243 619.50635 673.89526 644.89575 685.5907
1367455_at 1226.1555 1299.9249 1318.0239 1363.5055 1308.6063
1367456_at 1530.6841 1611.0748 1768.4469 1761.0474 1751.5911
1367457_at 426.08826 282.9359 433.74475 421.27148 445.81595
This seems to be data that has not been normalized. How can I tell whether any given expression matrix from GEO is normalized or not, and whether I can apply the `lmFit` function from `limma` directly? Thanks. | Well, the easiest way is to read the data processing sections for each sample.
> The data were analyzed with Microarray Suite version 5.0 (MAS 5.0)
> using GeneData Expressionist® Pro Refiner. The trimmed mean target
> intensity of each array was arbitrarily set to 100.
The next easiest is to use [GEO's built-in analysis tools](https://www.ncbi.nlm.nih.gov/geo/geo2r/?acc=GSE85957) (really just R scripts, but whatever) to view the value distributions. RMA normalized microarrays are typically very obvious, as their distributions are all the same.
<a href="https://ibb.co/t8VVqRS"><img src="https://i.ibb.co/9sQQbPJ/geo2r.png" alt="geo2r" border="0"></a>
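You can run the same sanity check on your own matrix; a quick sketch using the `expr` object pulled with `exprs()` in the question:

    # linear-scale values spanning the hundreds/thousands usually mean "not log2"
    range(expr, na.rm = TRUE)
    boxplot(log2(expr + 1), las = 2,
            main = "per-sample distributions (log2 of linear values)")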
That RMA pattern isn't the case here, though the distributions aren't completely nuts. These have clearly been normalized in some capacity, though you'd typically hope for a bit more detail in how it was done. I don't know exactly what `limma` expects (does it expect log2 values?), so you may do a little more reading on that. You could also normalize through `limma` and just see if the results make more sense. | biostars | {"uid": 395494, "view_count": 3690, "vote_count": 1} |
I've got a new issue...
I would like to compute the average distance between two distinct motifs in the same sequence list as before. Does anyone have a clue how to manage this?
I have just managed to do it for one motif, like this:
source("motifOccurrence.R")
motif <- c("T", "C", "A", "A")
motidist <- sapply(df, FUN=function(df, motif) {
computeDistance(coordMotif(df, motif))
}, motif = motif)
This R code gives me the average distance between a defined motif across all the sequences in my list. I would like to do the same but with two motifs. Can someone help me?
To give more information about what I want to do:
I work with a FASTA file as input:
    >1
    GACTCTACTATAAACGGGAGATAGCAATCTAACGCAGTGCTTCAACTCCTCCTCCATCTGAACACCCTTCAACCTTTGATACTCAGACGTTTTAGGTCGG
    >2
    ACCACCCCTTTGTCCAGAAATAGGACTCTTGGGCCTGTTGCCTGAATAAAGTCCAACCACCACAACCACTACACTACCATATGTAAGCTTCACTGATGGT
    >3
    CACCACAAGTGCGCGCCACGACGTGCATAGCCTCTAGATCGGCAACTCAGGCGAGAAGTGTTTTATTTCGGTGTGGCCGGTCCTGGGCATTTTACGGAAA
    >4
    GTTAGTGTACAAGTCCGAATAGAGTCACGAAAGACCCACACAACCACGTAATGACCTCGCTGTAATGAGATCAGTTGGCTCATGAAGGAAGAACGTAATG
    >5
    TGAGCGTTCGCCAATAACCATCCCTCTCGTTCCTTGTAACTGTACTATGATAGCGGGCGCCCCCCTAATTAAATAGCGGACGCCCTGACCTATTGTATGA
    >6
    TGATATATCTACTCGATAAGGATATAGAGGTCTAATTGTTGAGAAGTGTACCACCTTAGAGCACGAGTTTAGGATACTTAGTAGGTTCTTGCGAAGGATA
    etc.
Then the motidist object looks like this:
1 2
152 94
3 4
36 138
5 6
92 113
The distances given by the function are for one motif; now I would like to do the same for the distance between two motifs, like this:
atcgacatagacgactgatcgtcag MOTIF1 acggtagacagt MOTIF2 agcagatgacta # And this for all sequences in the file !
Thanks in advance | Assuming I understood your problem, this is my attempt at a solution. I do not use the functions in the script you pointed out, but the `Biostrings` Bioconductor package. Also, I use the Bioconductor package `BSgenome.Scerevisiae.UCSC.sacCer3` for the yeast genome as example sequence data.
# load required packages.
library(ggplot2) # for qplot().
library(Biostrings)
library(BSgenome.Scerevisiae.UCSC.sacCer3)
# get sequence data (DNAStringSet)
seq <- getSeq(Scerevisiae)
# create motif dictionary.
dict0 <- DNAStringSet(
  x = c("TCAA",
        "GGAT",
        "ATAT",
        "GGCC")
)
pdict0 <- PDict(dict0)
# search with the dictionary every sequence in seq.
res <- lapply(seq, function(s) matchPDict(pdict0, s))
res[[1]]
MIndex object of length 4
[[1]]
IRanges object with 1533 ranges and 0 metadata columns:
start end width
<integer> <integer> <integer>
[1] 114 117 4
[2] 161 164 4
[3] 456 459 4
[4] 719 722 4
[5] 776 779 4
... ... ... ...
[1529] 228963 228966 4
[1530] 229173 229176 4
[1531] 229260 229263 4
[1532] 229817 229820 4
[1533] 229825 229828 4
...
<3 more elements>
# calculate combinations of motifs to compare,
# assuming you want to compare different motifs to each other.
motifcomp <- t(combn(seq_len(length(pdict0)), 2))
colnames(motifcomp) <- c("motif_i", "motif_j")
motifcomp
     motif_i motif_j
[1,]       1       2
[2,]       1       3
[3,]       1       4
[4,]       2       3
[5,]       2       4
[6,]       3       4
# iterate over the comparison matrix: for each pair of motifs,
# compute the distance to the nearest occurrence over all sequences.
# this gives a list with each element being the distances.
foo <- lapply(seq_len(nrow(motifcomp)), function(i) {
  m <- motifcomp[i, ]
  d <- sapply(res, function(r) {
    mcols(distanceToNearest(r[[m[1]]], r[[m[2]]]))$distance
  })
  unlist(d)
})
# we can check these distances are not normally distributed.
qplot(foo[[1]])
# now compute mean (and maybe other summaries, here standard deviation).
hoo <- lapply(foo, function(x) {
  data.frame(mean = mean(x), sd = sd(x))
})
hoo <- do.call(rbind, hoo)
data.frame(motifcomp, hoo)
motif_i motif_j mean sd
1 1 2 150.15481 154.67573
2 1 3 71.63235 85.10428
3 1 4 358.76577 390.83777
4 2 3 69.26831 87.50602
5 2 4 348.63816 389.88737
6 3 4 378.64977 409.81555 | biostars | {"uid": 216202, "view_count": 2726, "vote_count": 2} |
I'm curious as to why I'm getting two different values with bedtools intersect with two sorted bedfiles:
bedtools intersect -u -a test1.bed -b test2.bed | wc -l
This gives me 157,273
However, when swapping -a and -b:
bedtools intersect -u -a test2.bed -b test1.bed | wc -l
This gives me 158,095 | It is normal that the order affects the result. Read the help carefully: [https://bedtools.readthedocs.io/en/latest/content/tools/intersect.html][1]
In the first case you report all intervals from A that overlap B;
in the second case you report all intervals from B that overlap A.
The only case where both are guaranteed to give the same result is if the two files are identical.
[1]: https://bedtools.readthedocs.io/en/latest/content/tools/intersect.html | biostars | {"uid": 9473825, "view_count": 618, "vote_count": 1} |
<p>I'm trying to find all the reads (by name) from a BAM file that align to various regions in a bed file. Right now I can do this with <code>bedtools</code> using <code>intersectBed</code>:</p>
<pre><code>intersectBed -abam reads.bam -wo -f 1 -b regions.bed -bed
</code></pre>
<p>From this one can parse all the read ids that land in every interval in <code>regions.bed</code>, but it's not very compact. Is there a way to get <code>bedtools</code> to natively transform this into a more compact format, e.g.</p>
<pre><code>chr1 x y .... read_id1,read_id2,read_id3
</code></pre>
<p>where <code>chr1 x y</code> is a given interval in <code>regions.bed</code> and the comma separated <code>read_id1,...</code> is the list of read ids from <code>reads.bam</code> that fall in that interval. In this compact format, the output BED file would have at most as many entries as there are regions in <code>regions.bed</code>, whereas with the <code>-wo</code> option it can be even larger than the number of reads in <code>reads.bam</code>. Thanks.</p>
| The BEDOPS <a href="http://code.google.com/p/bedops/wiki/bedmap">`bedmap` tool</a> with the `--echo`, `--echo-map-id`, `--delim` and `--multidelim` options can also answer this very quickly. You can pipe in reads into `bedmap` by using the <a href="http://code.google.com/p/bedops/wiki/bam2bed">`bam2bed` script</a>:
$ bam2bed.csh < reads.bam \
| bedmap --echo --echo-map-id --delim '\t' --multidelim ',' sorted-regions.bed - \
> answer.bed
The `--echo` option returns the `region` element, while `--echo-map-id` returns a list of `read` elements which overlap the region by one or more bases.
The `answer.bed` file will look like:
[ region-A in BED format ] \t [ comma-delimited list of read IDs overlapping region-A ]
[ region-B in BED format ] \t [ comma-delimited list of read IDs overlapping region-B ]
...
As the answer is BED-formatted output, you can very easily pipe this to downstream processes that consume BED data.
Just be sure to sort regions, if not sorted, e.g.:
$ sort-bed regions.bed > sorted-regions.bed
And, likewise, for reads:
$ bam2bed.csh < reads.bam | sort-bed - > sorted-reads.bed
Or to put it together with the overall analysis:
$ bam2bed.csh < reads.bam \
| sort-bed - \
| bedmap --echo --echo-map-id --delim '\t' --multidelim ',' sorted-regions.bed - \
> answer.bed | biostars | {"uid": 61044, "view_count": 4345, "vote_count": 1} |
Hello all, I am new in plink analysis
I have a MAP file and I know that the location (start/end) of the ***CD2AP*** gene on chromosome 6 is base pairs 47,477,745 to 47,627,262.
How can I use the physical position column in the MAP file to extract the SNPs of this gene?
Any help will be appreciated | Dear,
First you need to make sure which transcript you are interested in. Keep in mind that the same SNP will have different cDNA coordinates between transcripts. You also need to watch out for the genome version, because genomic coordinates vary between genome versions. Starting from your example, you can go to www.ensembl.org and search for CD2AP. You will see that the golden transcript is <a href="http://www.ensembl.org/Homo_sapiens/Transcript/Summary?db=core;g=ENSG00000198087;r=6:47477789-47627263;t=ENST00000359314">ENST00000359314</a> with 629 amino acids. Now go to BioMart at http://www.ensembl.org/biomart . Here select the Ensembl variation database, then select SNPs and Indels. Next, open the filters tab on the left and go to the gene-associated filters. Here paste your gene ID, which is ENSG00000198087. Then go to the attributes tab and select the coordinates you want (genomic, mRNA, coding sequence, etc.). Download your results and you will have your SNPs.
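If you simply want to cut those SNPs out of your PLINK files by position, PLINK's range flags do this directly (a sketch; adjust the input prefix to your own data):

    plink --file mydata --chr 6 --from-bp 47477745 --to-bp 47627262 \
          --make-bed --out CD2AP_snps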
I hope this helps, | biostars | {"uid": 167027, "view_count": 3776, "vote_count": 1} |
<p>I'm new in this field. I want to identify circular RNAs from RNA-seq. I want to know which tools I could choose to detect circular RNAs using some simple commands. Thanks!</p>
| You could try [CIRCexplorer][1], a combined strategy to identify junction reads from back spliced exons and intron lariats using TopHat and TopHat-Fusion. The commands are very simple. For more information. please refer to [**Zhang et al., Complementary Sequence-Mediated Exon Circularization, Cell (2014), 159:134-147**][2]
[1]: https://github.com/YangLab/CIRCexplorer
[2]: http://www.sciencedirect.com/science/article/pii/S0092867414011118 | biostars | {"uid": 127785, "view_count": 11857, "vote_count": 8} |
Hi,
I am using edgeR to carry out DE analysis for my RNA-Seq samples. May I know how I can construct the design matrix, and how I can make sure that during testing the DE genes shown are for the correct contrast?
My current steps are shown below:
**Current data:** subjects 1, 2, 3, 4
**Treatment for each subject:** T0, T1
I would like to find DE genes between conditions (T0, T1) for the paired samples:
**design <- model.matrix(~Subject+Condition)**
**d <- d[rowSums(cpm(d)>10) >= (length(G1_ids)/2),,keep.lib.sizes = FALSE ]**
**d <- calcNormFactors(d)**
**d <- estimateGLMCommonDisp(d,design)**
**d <- estimateGLMTrendedDisp(d,design)**
**d <- estimateGLMTagwiseDisp(d,design)**
**fit <- glmQLFit(d, design, robust = TRUE)**
**result_glm <- glmQLFTest(fit)**
It seems to me that, using the above steps, I get DE genes which are differentially expressed between samples, not between conditions.
Thank you!
Maggie Pan | Please see section 3.4.2 of the edgeR User's Guide.
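To see which coefficient to test, it helps to spell the design out; here is a quick sketch for your 4 subjects with two conditions each:

    Subject   <- factor(rep(1:4, each = 2))
    Condition <- factor(rep(c("T0", "T1"), times = 4))
    design <- model.matrix(~ Subject + Condition)
    colnames(design)
    # [1] "(Intercept)" "Subject2" "Subject3" "Subject4" "ConditionT1"
    # the last column is the T1 vs T0 effect, adjusted for subject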
result_glm <- glmQLFTest(fit, coef=5) # Or maybe coef=4, it'll be the last column in "design" | biostars | {"uid": 278093, "view_count": 4668, "vote_count": 1} |
Hello,
I was wondering if there is a website from which I can retrieve the list of genes for each GO term.
For example, for the biological process "Regulation of Gene Expression," I would like to have the list of all genes that belong to this term.
Thanks in advance | I recently wrote a small tool to fetch all the children of a given term in GO: http://lindenb.github.io/jvarkit/GoUtils.html
there is an example to fetch all the associated genes:
Use GO annotation to retrieve genes associated to GO:0005216 ‘ion channel activity’
join -t $'\t' -1 1 -2 2 \
<(java -jar dist/goutils.jar -A 'GO:0005216' | cut -f 1 | sort | uniq) \
<(wget -q -O - "http://cvsweb.geneontology.org/cgi-bin/cvsweb.cgi/go/gene-associations/gene_association.goa_human.gz?rev=HEAD" | gunzip -c | grep -v '^!' | cut -f3,5 | uniq | LC_ALL=C sort -t $'\t' -k2,2) |\
sort -t $'\t' -k2,2 |\
grep SCN5A -A 10 -B 10
(...)
GO:0086006 SCN2B
GO:0005244 SCN3A
GO:0005248 SCN3A
GO:0005248 SCN3A
GO:0005248 SCN3B
GO:0086006 SCN3B
GO:0005248 SCN4A
GO:0005248 SCN4A
GO:0005248 SCN4B
GO:0086006 SCN4B
GO:0005244 SCN5A
GO:0005248 SCN5A
GO:0005248 SCN5A
GO:0005248 SCN5A
GO:0005248 SCN5A
GO:0005248 SCN5A
GO:0005248 SCN5A
GO:0086006 SCN5A
GO:0086060 SCN5A
GO:0086061 SCN5A
GO:0086062 SCN5A
GO:0086063 SCN5A
GO:0005248 SCN7A
GO:0005248 SCN7A
GO:0005248 SCN7A
GO:0005248 SCN7A
GO:0005248 SCN8A
GO:0005248 SCN8A
GO:0005248 SCN9A
GO:0005248 SCN9A
GO:0005248 SCN9A
GO:0005272 SCNN1A
(...)
| biostars | {"uid": 319749, "view_count": 1085, "vote_count": 2} |
I ran a BLAST using the Sol Genomics Network database for my tomato transcriptome:
http://solgenomics.net/
I have my transcriptome labeled with the sol gene IDs, but need to convert them to a more typical gene ID such as NCBI-geneID. Any thoughts on how to do this? I've tried using a series of different GeneID converters such as DAVID and biomart, but none of these programs have GeneIDs from the Sol Genomics network.
How do I convert these geneIDs? Thanks! | Okay, so I finally completed this task by doing the following:
I contacted the Boyce Thompson Institute who ran the Sol Genomics website to see if they had any conversion sheets from their Sol genomics IDs to something more common. They sent me a very good conversion sheet of Sol gene IDs to UniProt IDs.
If someone knows how to share files on Biostars, I'll attach the conversion sheet to my answer. It's 1.7 MB, so not too big. I can't find the same file in the Sol Genomics FTP directory, but I haven't looked that hard either. So if someone knows how to share files like this, or if the file is in the Sol Genomics FTP directory, any help would be appreciated.
So I got my sol gene IDs, got the sol to Uniprot IDs sheet, and used Galaxy to convert the sequences with the info on the conversion sheet.
Most gene enrichment analysis programs take the UniProt IDs, so this conversion did what I wanted it to do.
Also this web based program can do Gene Enrichment analysis using the Sol gene IDs: http://bioinfo.bti.cornell.edu/tool/MetGenMAP
Another good program that can use the Sol gene IDs for gene enrichment analysis: http://mapman.gabipd.org/web/guest/home
So that's how I solved my issue. Thanks for the help guys. | biostars | {"uid": 150062, "view_count": 9879, "vote_count": 1} |
Hi:
I have a BED file with sites of a transcription factor binding sites (mm9). I want to annotate these sites, typically located between intergenic regions, to a gene nearby. I checked a publication (PMC3080774) and it lists some tools that can map and annotate.
I tried using ChIPpeakAnno. Initially it was straightforward through the first two commands (https://bioconductor.org/packages/release/bioc/vignettes/ChIPpeakAnno/inst/doc/pipeline.html), but then it got confusing.
bed <- system.file("extdata", "MACS_output.bed", package="ChIPpeakAnno")
gr1 <- toGRanges(bed, format="BED", header=FALSE)
## one can also try import from rtracklayer
gff <- system.file("extdata", "GFF_peaks.gff", package="ChIPpeakAnno")
gr2 <- toGRanges(gff, format="GFF", header=FALSE, skip=3)
## must keep the class exactly same as gr1$score, i.e., numeric.
gr2$score <- as.numeric(gr2$score)
ol <- findOverlapsOfPeaks(gr1, gr2)
## add metadata (mean of score) to the overlapping peaks
The authors did not explain why they overlap gr1 and gr2. Before that, the authors say "## one can also try import from rtracklayer", and they also did not point out which annotation (mm8 or hg19) the names refer to.
My question to this forum is:
1. Given a BED file, how can I map and annotate the peak regions to nearby genes?
2. Can I define a 1 kb or 2 kb range? (Apparently ChIPpeakAnno uses GRanges, so this should be possible; however, the lack of mm9 examples and of a proper explanation of why gr1 and gr2 are overlapped makes it confusing to deal with the ranges.)
3. Are there any tools other than ChIPpeakAnno to achieve the above two tasks?
Thanks
Adrian
| You can just use *GenomicRanges* with your peaks BED file and your annotation GFF (after they have both been converted to *GenomicRanges* objects). There is a parameter in the GenomicRanges `findOverlaps()` function called `maxgap`, which would allow you to set your distances of 1 kb, 2 kb, etc.
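For example (a sketch; file names are placeholders, and `import()` is from *rtracklayer*):

    library(rtracklayer)
    library(GenomicRanges)

    peaks <- import("peaks.bed")        # your TF binding sites (mm9)
    genes <- import("annotation.gff")   # mm9 gene annotation

    # peaks within 2 kb of a gene
    hits <- findOverlaps(peaks, genes, maxgap = 2000)
    annotated <- cbind(as.data.frame(peaks[queryHits(hits)]),
                       as.data.frame(genes[subjectHits(hits)]))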
With regard to why one would annotate regions that are up to 1kb, etc., away, well, that depends on your marker of study and how it is known to behave and affect gene transcription.
Kevin | biostars | {"uid": 322218, "view_count": 1730, "vote_count": 1} |
Hello,
I want to extract some specific strings from a CSV file, using either Excel, R or Python.
For example, as below: I want to find the string from column A in column B and return it in column C with 5 amino acids before and after the **N**; thanks!!
```
A B C
INETTDFR MHRFLLMLLFPFSDNRPMMFFRSFIVFFFLIFFASNVSSRKQTYVIHT IVGKINETTDF
VTTSTKHIVTSLFNSLQTENINDDDFSLPEIHYIYENAMSGFSATLTDDQLDT
VKNTKGFISAYPDELLSLHTTYSHEFLGLEFGIGLWNETSLSSDVIIGLVDTG
ISPEHVSFRDTHMTPVPSRWRGSCDEGTNFSSSECNKKIIGASAFYKGYE
SIVGKINETTDFRSTRDAQGHGTHTASTAAGDIVPKANYFGQAKGLASGM
RFTSRIAAYKACWALGCASTDVIAAIDRAILDGVDVISLSLGGSSRPFYVDP
IAIAGFGAMQKNIFVSCSAGNSGPTASTVSNGAPWLMTVAASYTDRTFPAIV
RIGNRKSLVGSSLYKGKSLKNLPLAFNRTAGEESGAVFCIRDSLKRELVEGK
IVICLRGASGRTAKGEEVKRSGGAAMLLVSTEAEGEELLADPHVLPAVSLGF
SDGKTLLNYLAGAANATASVRFRGTAYGATAPMVAAFSSRGPSVAGPEIAKP
DIAAPGLNILAGWSPFSSPSLLRSDPRRVQFNIISGTSMACPHISGIAALIKSV
HGDWSPAMIKSAIMTTARITDNRNRPIGDRGAAGAESAATAFAFGAGNVDPT
RAVDPGLVYDTSTVDYLNYLCSLNYTSERILLFSGTNYTCASNAVVLSPGDLN
YPSFAVNLVNGANLKTVRYKRTVTNVGSPTCEYMVHVEEPKGVKVRVEPKVL
KFQKARERLSYTVTYDAEASRNSSSSSFGVLVWICDKYNVRSPIAVTWE
``` | Using R:
getPreceding <- function(A, B, N = 4) {
  x <- regexpr(A, B)
  substring(B, x - N, x + attr(x, "match.length") - 1)
}
A = "INETTDFR"
B = "...IGASAFYKGYESIVGKINETTDFRSTRDAQGHGTHTAST..."
getPreceding(A, B)
# [1] "IVGKINETTDFR"
| biostars | {"uid": 109077, "view_count": 4342, "vote_count": 1} |
I've installed BLAST and BLAT on my computer (OSX) but am confused with how many different files there are to download to set up a database on my computer and how to go about it. Can I use one with both BLAST and BLAT? Should I get the partially non-redundant database? | Once you have installed `blat`, you can download a FASTA file for your genome of interest from UCSC. If you wanted `hg38`, for instance:
$ for chr in `seq 1 22` X Y; do echo $chr; wget -qO- http://hgdownload.cse.ucsc.edu/goldenpath/hg38/chromosomes/chr$chr.fa.gz | gunzip -c - >> hg38.fa; done
You can use this with `blat` as described in [my answer here][1].
You could run `makeblastdb` to [make BLAST databases][2]:
$ makeblastdb -in hg38.fa -parse_seqids -dbtype nucl
Or [download nucleotide databases from NCBI][3].
[1]: https://www.biostars.org/p/261839/#261983
[2]: https://www.ncbi.nlm.nih.gov/books/NBK279688/
[3]: ftp://ftp.ncbi.nlm.nih.gov/blast/documents/blastdb.html | biostars | {"uid": 261970, "view_count": 2163, "vote_count": 1} |
I have an assembled transcriptome that I blasted against the nr_db from NCBI. I want to use MEGAN to get some taxonomic information on the reads. I imported my BLAST XML files into MEGAN5 and all seemed to work fine. When I click a node to inspect which hits are associated with a given taxon, I encounter a problem: the majority of the hits are listed correctly with the query id header, but some do not have a header, and the whole sequence or parts of a sequence are displayed. This results in incomplete sequences when I want to extract subsets of reads.
I have no clue what is happening there and how to fix it. Anyone encountered a similar problem?
Could there be a problem with the blast xml input files? Or is it a parsing problem? Any idea how to fix this?
Any help is appreciated! | I figured out what went wrong. It was a tiny formatting error in my BLAST input files that corrupted everything, though BLAST itself had no problem with it. I fixed it and now everything is working fine! | biostars | {"uid": 152206, "view_count": 1952, "vote_count": 1} |
Can I index a BAM file that is sorted by queryname instead of coordinates using samtools? | no, indexing is only for **coordinate** | biostars | {"uid": 258116, "view_count": 2004, "vote_count": 2} |
Hello Biostars Community,
General question(s) here, specifically in regards to sequencing platforms and some related questions on sequencing, as well?
1. Could using a newer/the newest computational genome/annotation (for example, presently, Ensembl 107 or the newest Gencode version) adversely effect the actual truth of what was sequenced?
2. When sequencing is done through an Illumina machine or other big name company machines, are those sequencing platforms *completely independent* from the genome or DNA/cDNA being sequencing?
3. What happens if, for example, "famous gene ABC" and "low-profile gene XYZ" are found to have different 3' and/or 5' ends by some new discovery, in Illumina, would adapters still link to them to perform those bridge PCR reactions on the flow cell lanes, or would it be that all the data published before on "famous gene ABC" and "low-profile gene XYZ" should be revisited? Or is it like question #2, "completely independent" - are even adapters independent from genes?
I was doing some reading, and I guess the gene sequence really only matters for probe-based sequencing (chips and arrays?). Hopefully this question could be a good resource for others?
Thank you in advance.
- Pratik | Excellent responses by GenoMax to wonderful questions by Pratik. I echo what GenoMax says.
IMHO, the platforms ar einherent to capture technologies as well. NOT necessarily or everytime one would intend to use the same samples with different sequencing platforms/technologies unless the lab is rich. Even otherwise, I wouldn't recommend that. Taken together, the need of the our is to emply machine learning heuristics and predict such outcomes on a large scale. Could completely independent be dependent can give some interesting reasons for us to explore.
Just my two cents
Prash | biostars | {"uid": 9539218, "view_count": 505, "vote_count": 2} |
What are currently the best tools for lifting over genome coordinates from one custom genome build to another custom genome build.
I found this 4 year old post that mentions **UCSC liftOver** as the top tool, but that seems to be limited to model organisms, as you need the chain file from UCSC. I can't find anywhere how to create these chain files yourself:
https://www.biostars.org/p/65558/
Is it possible to create these UCSC liftOver chain files yourself for custom genome builds?
Are there other tools with the same quality and functionality as UCSC liftOver that support creating these "chain" files yourself for custom reference genomes? How difficult and time / resource consuming is it to build these chain files?
Do these chain files and the liftover process take strand into account? If I have a VCF with an A/T SNP position, do they flip it to a T/A SNP if the corresponding region of the new reference genome is on the opposite DNA strand?
Does liftOver support INDEL variants?
| As mentioned elsewhere, our https://github.com/wurmlab/flo tool can do a decent chunk of the job | biostars | {"uid": 226368, "view_count": 3963, "vote_count": 1} |
<p>When I try this:</p>
<pre><code>[vcftools_results]$ /home/X/vcftools_0.1.6/perl/vcf-validator sample1.flt.vcf.gz</code></pre>
<p>it gives an error.</p>
<pre><code>Can't locate Vcf.pm in @INC (@INC contains: /share/lib64/perl5/5.8.8 /share/lib64/perl5 /share/lib/perl5/5.10.0/x86_64-linux-thread-multi-ld /share/lib/perl5/5.10.0 /share/lib/perl5/5.8.8 /share/lib/perl5 /usr/lib/perl5/5.8.8 /usr/lib/perl5 /share/lib/perl5/5.10.0/x86_64-linux-thread-multi-ld /share/lib/perl5/5.10.0 /share/lib/perl5/site_perl/5.10.0/x86_64-linux-thread-multi-ld /share/lib/perl5/site_perl/5.10.0 /share/lib/perl5/site_perl/5.8.8 /share/lib/perl5/site_perl .) at /home/X/vcftools_0.1.6/perl/vcf-validator line 9.
BEGIN failed--compilation aborted at /home/X/vcftools_0.1.6/perl/vcf-validator line 9.</code></pre>
<p>how can I fix it?</p>
| <p>Hi Deniz</p>
<p>Have you set the environment variable PERL5LIB as it is defined in <a href="http://vcftools.sourceforge.net/docs.html#install">VCF Tools installation</a> ?</p>
<p>It is as simple as calling:</p>
<pre><code>export PERL5LIB=/path/to/your/installation/perl
</code></pre>
<p>Regards.</p>
| biostars | {"uid": 15163, "view_count": 32227, "vote_count": 7} |
Hi,
I am attempting to make some heat maps with some RNA-Seq data. I used Salmon for my quantification and followed [this][1] link to import my data into R so I can use DESeq2. So I am just going through the DESeq2 tutorial [here][2]. I am trying to follow their instructions on how to create some heat maps. I believe my issue is with the row names. Someone asked a similar question to mine [here][3] but the issue was never resolved. In that post, it was suggested that the row names were set to the default 1,2,3,4,5,6,7,8, which is true. This is the result I get when I run `rownames(df)`:
[1] "1" "2" "3" "4" "5" "6" "7" "8"
So I tried to change the row names by running the following:
rownames(df) <- c("adp1","adp2","adp3","adp4","fed1","fed2","fed3","fed4")
dds@colData@rownames <- c("adp1","adp2","adp3","adp4","fed1","fed2","fed3","fed4")
However, I am still getting the same error
select <- order(rowMeans(counts(dds, normalized=TRUE)),
                decreasing=TRUE)[1:20]
df <- as.data.frame(colData(dds)[,c("condition")])
pheatmap(assay(ntd)[select,], cluster_rows=FALSE, show_rownames=FALSE,
         cluster_cols=FALSE, annotation_col=df)
`Error in check.length("fill") : 'gpar' element 'fill' must not be length 0`

I think my attempts to change the row names were pointless. I think the issue goes all the way back to importing my Salmon data into R. I could be mistaken, but when I went through their import tutorial, the samples were labeled as sample1, sample2, sample3, etc.
Any help would be amazing!
[1]: https://bioconductor.org/packages/devel/bioc/vignettes/tximport/inst/doc/tximport.html#salmon-sailfish
[2]: http://bioconductor.org/packages/devel/bioc/vignettes/DESeq2/inst/doc/DESeq2.html#heatmap-of-the-count-matrix
[3]: https://www.biostars.org/p/281586/ | You can just do this:
cdata <- colData(dds)
pheatmap(assay(ntd),
         cluster_rows = FALSE,
         show_rownames = FALSE,
         cluster_cols = FALSE,
         annotation_col = as.data.frame(cdata))
...or:
pheatmap(assay(ntd),
         cluster_rows = FALSE,
         show_rownames = FALSE,
         cluster_cols = FALSE,
         annotation_col = as.data.frame(cdata[,"condition"], row.names=rownames(cdata)))
You do not even have to create the *df* object.
Kevin
***Edit (October 24, 2018):** the key is that the rownames of the object passed to `annotation_col` have to match the colnames of the object being clustered (here, `assay(ntd)`).* | biostars | {"uid": 332532, "view_count": 5235, "vote_count": 2} |
<p>My lab has started using the <a href='http://www.halogenomics.com/haloplex/how-it-works'>Haloplex technology</a> to capture the regions of interest.</p>
<p>With this technology most of the reads have the same start/end (I've found that removing the duplicates would reduce the coverage to ~1 base). The reads are grouped in a vertical 'cluster' that can contains more than 1000 reads.</p>
<p><img src='http://i.imgur.com/nW6Le.jpg' alt='enter image description here' /></p>
<p>Have you ever used this technology ? I'm currently using <a href='http://samtools.sourceforge.net/'>samtools</a> mpileup with the option
<strong>"-A"</strong> ( count anomalous read pairs) and <strong>"-d 8000"</strong> ( max per-BAM depth).</p>
<p>I'm afraid there is a kind of bias with haloplex: is it safe to use <a href='http://samtools.sourceforge.net/'>samtools</a> or Gatk to call the variants ? </p>
<p>Pierre</p>
| <p>In general, any capture-based technology will have bias. In your case, you're worried that the the bias will be amplified in the detection step.</p>
<p>I would use as many methods as possible to call these alleles, and see what the differences are. With coverage this high, there should be little variation in results.</p>
<p>Furthermore, simulation could provide an avenue to assessing potential bias in the case of haploplex resequencing. In simulated tests, I have observed that samtools and GATK have lower sensitivity than our caller (freebayes) at high depths, but all perform roughly the same at lower coverage. It's not entirely clear to me why this is, but I suspect default parameter selection may play a role. I haven't evaluated other callers, and it's possible that others may perform even better in high-depth contexts.</p>
| biostars | {"uid": 60433, "view_count": 5209, "vote_count": 4} |
I am looking for any two strains of S. pneumoniae for which I can find their raw output sequencer data in FASTQ format and for which I can find their reference genomes (since I plan to align the raw data in the FASTQ format to the reference genomes using Bowtie 2). I know that I could make artificial FASTQ files using a generator but I need to work with data that came out of a real sequencer.
I tried looking in NCBI, Ensembl, and ArrayExpress and I did find reference genomes for some strains such as [R6][1] and [others][2]. However, finding the raw data straight out of a sequencer (FASTQ format) seems a hard task.
In summary, where can I find any two strains of S. pneumoniae for which I can find the aforementioned raw data (I assume that there must be popular strains for which this is more easily found than for others) along with their reference genomes?
[1]: http://www.streppneumoniae.com/getntseq.asp
[2]: http://strepneumo-sybil.igs.umaryland.edu/cgi-bin/current/shared/index.cgi?site=strepneumo | You can get FASTQ and FASTA [here][1].
Most strains will provide the raw data associated with the study. For example: http://www.ebi.ac.uk/ena/data/view/ERP000241
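For bulk retrieval, here is a small sketch against the current ENA portal API (the endpoint and field names reflect today's API rather than the older browser pages linked here, so double-check them):
    # list the FASTQ FTP paths for every run in the study, then fetch them
    curl -s 'https://www.ebi.ac.uk/ena/portal/api/filereport?accession=ERP000241&result=read_run&fields=run_accession,fastq_ftp&format=tsv' \
      | tail -n +2 | cut -f2 | tr ';' '\n' \
      | while read -r path; do wget "ftp://$path"; done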
[1]: http://www.ebi.ac.uk/ena/data/search?query=streptococcus%20pneumoniae | biostars | {"uid": 105527, "view_count": 2303, "vote_count": 2} |
I should start by saying I have solved this problem, but I feel like my code is ugly and overkill.
I am trying to sort a dataframe on a column containing human chromosomes:
chr1, chr2, ..., chrY, chrX.
The problem is: chr1 is followed by chr10.
What tricks do you use to deal with this problem **in the [R] environment?**
| Hi Zev,
what you are looking for is: [mixedsort {gtools}](http://rss.acs.unt.edu/Rdoc/library/gtools/html/mixedsort.html)
> Order or Sort strings with embedded numbers so that the numbers are in the correct order
    # package 'gtools' was built under R version 2.13.2
    n <- c('chr1','chr21','chr13','chr4','chr10')
    > n
    [1] "chr1"  "chr21" "chr13" "chr4"  "chr10"
    > mixedsort(n)
    [1] "chr1"  "chr4"  "chr10" "chr13" "chr21"
I hope this helps. | biostars | {"uid": 17224, "view_count": 9162, "vote_count": 6}
I am trying to characterize the depth of coverage for SNPs/INDELs/other variants in my BAM/SAM file. Using samtools mpileup generates a nice VCF that gives me information on this, but it's not quite complete. Entering the "-t AD" flag gives me the depth of the alternate alleles compared to the reference. That is what I want, though the actual depth is found in the column after the flag is indicated and it is colon delimited. It is readable, but not terribly concise.
However, I am trying to check specific positions along the genome for specific alternate alleles. Is there a convenient way to parse the VCF for the AD flag values by position and allele without complicated and format-specific if/else statements?
I can use bedtools intersect or a dozen other methods to reduce the file to specific positions, already, but I'm hoping there's a method that doesn't require piping a handful of tools together.
EDIT:
It would be straightforward to write an if statement of the kind:
if ALT column is T then return AD and if AD is > 20 return "likely to be target organism strain"
However, over entire genomes and many different variant calls, this won't be so trivial. I have a file that contains a list of positions along the genome and alleles that I'm specifically looking for. I guess I should better word my question to ask for some software that performs that level of analysis. I need to compare allele identity at a given position to a file containing many positions and alleles of interest and then return the coverage of those alleles in the sample that match the ones in the comparison file. After that, I need to review the depth to determine the significance of the variant - which is arbitrary. | > It would be straightforward to write an if statement of the kind
> if ALT column is T then return AD and if AD is > 20 return "likely to be target organism strain"
using vcffilterjs https://github.com/lindenb/jvarkit/wiki/VCFFilterJS (will add NOT_TARGET_ORGANISM_STRAIN in the column FILTER)
java -jar dist/vcffilterjs.jar -e 'function accept(v) {var foundT=false;var alts= v.getAlternateAlleles(); for(i=0;i< alts.size();++i) {if(alts.get(i).getDisplayString().equals("T")) {foundT=true;break;}} if(!foundT) return false; var g0=v.getGenotype(0); if(!g0.hasAD()) return false; var AD=g0.getAD(); for(var i in AD) if(AD[i]>20) return true; return false;} accept(variant);' -F NOT_TARGET_ORGANISM_STRAIN input.vcf
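If Java is not an option, roughly the same filter can be sketched with bcftools (`targets.tsv` is a hypothetical tab-separated CHROM/POS file, and the VCF must be bgzipped and indexed):
    # print REF/ALT plus the per-sample allelic depths at the listed positions,
    # then keep rows whose ALT is T and whose alt-allele depth exceeds 20
    bcftools query -R targets.tsv -f '%CHROM\t%POS\t%REF\t%ALT\t[%AD]\n' input.vcf.gz \
      | awk -F'\t' '$4 == "T" { split($5, ad, ","); if (ad[2] > 20) print }'
The awk step assumes a single-sample VCF, where column 5 holds one comma-separated AD pair.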
| biostars | {"uid": 229935, "view_count": 4185, "vote_count": 1} |
I have downloaded a gnomAD VCF file and want to calculate the minor allele frequency of each variant in that file, but I am not quite sure how to do that. The INFO column contains AC, AF, and AN. Does AF refer to the minor allele frequency for that variant? If not, how can I calculate the minor allele frequency? I've given two variants from the file below just to show you what a gnomAD file looks like:
`chr1 905315 . C A 293.0 PASS AC=1;AN=143276;AF=6.97954e-06;variant_type=snv... chr1 905311 906011 chr1:905311-906011 63 . 905431 905432 0,0,0 2 70,528 0,172`
`chr1 905315 rs1489213119 C T 1432.0 PASS AC=1;AN=143276;AF=6.97954e-06;variant_type=snv... chr1 905311 906011 chr1:905311-906011 63 . 905431 905432 0,0,0 2 70,528 0,172`
I am sorry if this is a basic question. I am new to bioinformatics and haven't been able to find a solution yet. | Yes AF is the MAF here.
It bears mentioning that the variant might not necessarily be the less common allele in all populations, and the major allele might even confer something weakly deleterious. But in these two cases, they only saw one individual with this mutation. | biostars | {"uid": 475377, "view_count": 1140, "vote_count": 1}
I am using tabix to retrieve information in a specific genomic region. I do not need the whole record, only, let's say, the second and third columns.
My file is constructed as follows:
```
##INFO=<ID=GENE,Description="Neighbouring Gene">
##INFO=<ID=A1,Description="Allele 1 (+)">
#SNPID CHROM POS GENE A1 A2 STUDIES CASES CONTROLS
```
So, ideally I would like to have only SNPID, CHROM, POS, CASES.
Is there any possibility to retrieve only this information? | tabix file.tsv.gz chr12:345-678 | grep -v "##" | cut -f 1,2,3,8 | biostars | {"uid": 134808, "view_count": 1919, "vote_count": 1}
I ran into an unexpected feature of blastn.
After extracting some gene sequences from a genome, creating a BLAST database and blasting them back against the reference, a lot of the extracted genes are not found in the BLAST result, although they are certainly there in the genome (as they were extracted from there).
What can be the cause?
Some details (I cannot upload the whole files):
My command is:
blastn -query GCF_000005845.2_ASM584v2_genomic.fna -db MG1655_genes -outfmt 6
The fasta file is downloaded from https://www.ncbi.nlm.nih.gov/genome/167?genome_assembly_id=161521.
The database was built from the gene sequences extracted using the feature table belonging to the assembly above.
One gene missing from the BLAST result is e.g. `aaaD`; however, if I BLAST only this gene, it is found, as expected. | After doing some research, I found the answer to my question: the value of `max_target_seqs` is 500 by default.
If `max_target_seqs` is raised to a sufficiently high value, all genes are shown.
Hence I used
blastn -query GCF_000005845.2_ASM584v2_genomic.fna -db MG1655_genes -outfmt 6 -max_target_seqs 100000000
to obtain all genes. | biostars | {"uid": 394390, "view_count": 689, "vote_count": 1} |
Hi,
I analyse microarray datasets to get gene expression data.
I want to filter NCBI GEO datasets by sample type.
For example, I want to get only the tumor/cancer samples in [GSE19826][1].
I look at all the samples one by one and select them manually.
Is there any simple way or tool to do this, or do I have to keep selecting manually?
![enter image description here][2]
[1]: https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE19826
[2]: https://i.ibb.co/j4XYzH9/Screenshot-1.png | a simple XSLT generating a shell script from the Sample names:
https://gist.github.com/lindenb/1d23b5d53f1a3694bbd1a7631935140f | biostars | {"uid": 460606, "view_count": 557, "vote_count": 1} |
I am new to the area of bioinformatics. Right now I am working on the Tuxedo protocol, trying to analyze data based on the E. coli reference genome.
This is my file location:
    /Desktop/prasanth/TUXEDO/Experiments/Glucose_limited/alignersd/Bowtie/index$ ls
    reference_genome.fa
When I run the command
    bowtie2 build ~/Desktop/prasanth/TUXEDO/Experiments/Glucose_limited/alignersd/Bowtie/index/reference_genome.fa
this is the error that comes up:
    Could not locate a Bowtie index corresponding to basename "build"
    Error: Encountered internal Bowtie 2 exception (#1)
    Command: /usr/bin/bowtie2-align --wrapper basic-0 build /home/ubuntu08/Desktop/prasanth/TUXEDO/Experiments/Glucose_limited/alignersd/Bowtie/index/reference_genome.fa
    bowtie2-align exited with value 1
I understand the problem is with the location of the file, but I am not able to solve the issue even after keeping the reference genome in the same folder.
PS: I am a beginner in bioinformatics. | Microfuge is right. Your command should be
bowtie2-build -f ~/Desktop/prasanth/TUXEDO/Experiments/Glucose_limited/alignersd/Bowtie/index/reference_genome.fa ~/Desktop/prasanth/TUXEDO/Experiments/Glucose_limited/alignersd/Bowtie/index/reference_genome
Then `~/Desktop/prasanth/TUXEDO/Experiments/Glucose_limited/alignersd/Bowtie/index/reference_genome` is the basename of your index. You provide it to bowtie2 at alignment time. | biostars | {"uid": 212140, "view_count": 2051, "vote_count": 1}
Dear all,
please would you advise, based on your experience, which algorithm is more reliable in confidently calling INDELS in cancer genomes (I am referring especially to INDELS of length > 50bp, let's say 50bp-500bp long).
many thanks,
-- bogdan | pindel can do it but, when you get near to the 500bp mark, are you not then in the realm of small somatic copy number alterations? BBMap can also do this: https://www.biostars.org/p/254305/
In all cases, read length and insert size are obviously key.
Kevin | biostars | {"uid": 304103, "view_count": 1264, "vote_count": 1} |
Hi
I have a folder with 4000 files in it. I would like to make a new folder and copy into it all of the files which contain >50 fasta sequences. How do I do this?
I know that I need to create a simple loop, then grep '>' | wc -l and select only those with >50. But I am new to programming and am unsure how to write this properly. | Here is how
grep -c '^>' *.fasta | sed 's/:/ /' | awk '$2>50 {print $1}' | xargs mv -t new_folder/
**Update**
    grep -c '^>' *.fasta | sed 's/:/ /' | awk '$2>50 {print $1}' | xargs cp -t new_folder/
P.S: Updated as @[genomax2][1] pointed out.
[1]: https://www.biostars.org/u/18713/ | biostars | {"uid": 196413, "view_count": 2622, "vote_count": 1} |
Hi all,
here are some questions about **paired-end sequencing** for NGS:
- What are the main differences between **mate-paired** sequencing and **paired-end** sequencing? Should I care when I use tools like 'samtools', maq, etc.? Should one, and only one, short read be paired with another one (1-1)?
- What is **removing duplicates**? Does it mean that a **pair** of short reads has been mapped at two distinct positions on the genome, or does it mean that a pair matched too many times at one position?
- Knowing that **bwa sampe** "Generates alignments in the [SAM](http://samtools.sourceforge.net/SAM1.pdf) format given paired-end reads. Repetitive read pairs will be placed randomly", is there any need to "remove the duplicates"?
- How does picard **MarkDuplicates** work? How can I find the reads that have been 'tagged'? Will it remove the reads from the [BAM](http://samtools.sourceforge.net/SAM1.pdf) file?
Thanks
Pierre
| paired-ends and mate pairs are different protocols. The distance between mate pairs is much longer (2-5kb), while paired-end fragments are rarely more than 500bp apart and can even have *negative* distance (overlapping pairs)
| biostars | {"uid": 788, "view_count": 34318, "vote_count": 27} |
I want to query the molecular weight of multiple proteins from R given uniprot ids. I thought of using biomaRt or UniProt.ws, but didn't see which field I should query.
There are online tools performing the calculation given a uniprot ID (PIR, ExPASy), but you have to manually enter the input and save the output.
I want the MW that appears in UniProt when looking in "Sequences" under Mass (for example for the entry http://www.uniprot.org/uniprot/Q9Z2Y8) | You can get this information from the [TogoWS REST service][1], starting from UniProt ID. For example, using PROSC_MOUSE (which is the ID, Q9Z2Y8 is the *accession*):
```
library(rjson)
library(RCurl)
u <- getURL("http://togows.dbcls.jp/entry/uniprot/PROSC_MOUSE.json")
j <- fromJSON(u)
j[[1]]$sq$MW
# [1] 30049
```
Conversion from accession to ID should be easy using R/biomaRt (search Biostar for numerous examples).
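If you prefer the shell, the same number can also be pulled from UniProt's current REST API (a sketch assuming `jq` is installed; per the TogoWS result above this should print 30049):
    curl -s 'https://rest.uniprot.org/uniprotkb/Q9Z2Y8.json' | jq '.sequence.molWeight'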
[1]: http://togows.dbcls.jp/site/en/rest.html | biostars | {"uid": 109706, "view_count": 5368, "vote_count": 2} |
Hi,
I want to extract sequences, with their headers, from a fasta file, using specific IDs given in another file.
As for example:
File: `main.fa`
```
>ADC37925 pep:novel supercontig:GCA_000025145.2:CP001844:1841991:1848551:-1
MNLLKKNKYSIRKYKVGIFSTLIGTVLLLSNPNGAQALTTDNNVQSDTNQATPVNSQDTNVANNRGLANSAQNTPNQSATTNQGIFSTLIGTVLLLSNPNGAQALTTDNNVQSDTNQATPVNSQDTNVANNRGLANSAQNTPNQSATSNPNGAQAL
>EFB95474 pep:novel supercontig:GCA_000175495.1:cont1.9:99969:106529:-1
MNLLKKNKYSIRKYKVGIFSTLIGTVLLLSNPNGAQALTTDNNVQSDTNQATPVNSQDTNVANNRGLANSAQNTPNQSATTNQSTNQALVNHNNGSIANQGIFSTLIGTVLLLSNPNGAQALTTDNNVQSDTNQATPVGIFSTLIGTVLLLSNPNGAQ
>EFC04694 pep:novel supercontig:GCA_000175955.1:cont1.2:270427:276987:-1
MNLLKKNKYSIRKYKVGIFSTLIGTVLLLSNPNGAQALTTDNNVQSDTNQATPVNSQDTNVANNRGLANSAQNTPNQSATTNQSTNQALVNHNNGSIANQATPTSVQSSTPSAQNNNHTDGNTTATETVSNANNKDVVSNNTTLNVPNKTNENGS
>EFH37336 pep:novel supercontig:GCA_000178015.1:cont1.4:98713:105273:1
MNLLKKNKYSIRKYKVGIFSTLIGTVLLLSNPNGAQALTTDNNVQSDTNQATPVNSQDTNVANNRGLANSAQNTPNQSATTNQSTNQALVNHNNGSIANQVLLLSNPNGAQALTTDNNVQSDTNQATPVNSQDTVLLLSNPNGAQALTTDNNVQSD
```
File: `id.txt`
```
ADC37925
EFC04694
```
Expected Outcome:
```
>ADC37925 pep:novel supercontig:GCA_000025145.2:CP001844:1841991:1848551:-1
MNLLKKNKYSIRKYKVGIFSTLIGTVLLLSNPNGAQALTTDNNVQSDTNQATPVNSQDTNVANNRGLANSAQNTPNQSATTNQGIFSTLIGTVLLLSNPNGAQALTTDNNVQSDTNQATPVNSQDTNVANNRGLANSAQNTPNQSATSNPNGAQAL
>EFC04694 pep:novel supercontig:GCA_000175955.1:cont1.2:270427:276987:-1
MNLLKKNKYSIRKYKVGIFSTLIGTVLLLSNPNGAQALTTDNNVQSDTNQATPVNSQDTNVANNRGLANSAQNTPNQSATTNQSTNQALVNHNNGSIANQATPTSVQSSTPSAQNNNHTDGNTTATETVSNANNKDVVSNNTTLNVPNKTNENGS
```
Tried Perl Program :
I have found and made the Perl code work for a single input (ID) [[Source][1]]
```
########################## script.pl
#Usage: `perl script.pl file(s) 'searchTerm [searchTerm]' [>outFile]`
use strict;
use warnings;
my $term = join '.', map "\Q$_\E", split ' ', pop;
my $found;
while (<>) {
if (/^>/) {
$found = /$term/i ? 1 : 0;
}
print if $found;
}
###########################
```
Now, how can I use this Perl code so that it takes each searchTerm from the `id.txt` file, searches main.fa, and gives the outcome as above?
Any help will be very much appreciated.
I am new in Perl.
[1]: https://www.biostars.org/p/64149/ | My Solution:
<a href="http://hgdownload.cse.ucsc.edu/admin/exe/linux.x86_64/">UCGS utilities</a>
$ ./faSomeRecords main.fasta id.txt output.fa
option `-exclude` will output sequences not present in "main.fasta" | biostars | {"uid": 127141, "view_count": 18082, "vote_count": 1} |
Is there any way to detect whether your RNA-seq data is stranded or unstranded?
| In case anybody looking into this now comes across this post...
The easy-to-use Salmon will check for you as described [here][1]. You can see what the resulting abbreviations correspond to with a nice illustration [here][2].
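A minimal sketch of that check (the index and read file names are hypothetical): run salmon with the automatic library-type flag and read the inferred type back from its output folder.
    salmon quant -i transcriptome_index -l A -1 reads_1.fq.gz -2 reads_2.fq.gz -o quant_out
    # the detected library type (e.g. ISR for a typical dUTP stranded kit) is recorded in:
    cat quant_out/lib_format_counts.json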
[1]: https://salmon.readthedocs.io/en/latest/salmon.html#what-s-this-libtype
[2]: https://salmon.readthedocs.io/en/latest/library_type.html#fraglibtype | biostars | {"uid": 98756, "view_count": 30700, "vote_count": 18} |
Hi folks. I need to run a de novo short-read genome assembler (on a paired-end/mate-pair library) that prefers outputting shorter but error-free contigs rather than longer contigs/scaffolds which may be mis-assembled. Which assembler, or which specific setting in an assembler of your choice, do you recommend to yield such contigs (as error-free as possible, with no overlapping contigs)? | According to the first GAGE paper, SGA makes shorter, but very much correct contigs. See http://genome.cshlp.org/content/early/2012/01/12/gr.131383.111.full.pdf | biostars | {"uid": 96350, "view_count": 2302, "vote_count": 1}
Hi everyone,
I've performed ChIP-seq analysis on TFs for a specific cell line for which I only have three histone marks (H3K4me1, H3K27ac and H3K4me3). I used HOMER to annotate the different peaks from my data, but now I would like to use these histone marks to improve the annotation (for example, if one peak is present at the same region as H3K27ac and H3K4me1 peaks, I would like to say that this region is an active enhancer).
Has somebody already done this kind of analysis?
Thanks in advance, | It sounds like you're trying to identify 'active enhancers'. There's a couple of interesting options you have to do this. For visualization @Vivek Bhardwaj's answer is great. But i'm assuming that you'd like to have a list of putative 'active enhancers' in which to over-lay your TF ChIP-seq data.
So literature tells us that enhancers (whether active or inactive) are typically marked with H3K4me1, and active enhancers are typically marked with H3K27Ac. What i'm getting from your post is that you aren't aware that there is also [literature][1] that has shown H3K4me3 to be associated with active enhancers. So you can see that enhancer definitions are sort of scattered, and a lot of histone modification will be cell line specific.
The first and simplest option is to use [bedtools intersect][2] to look for overlapping peaks of H3K4me1 + H3K27Ac. There's a couple of caveats to this, but it's a good starting point. To do this, you have to take into account that enhancers are typically distal sites from genes (though there ARE intragenic enhancers, but my post does not cover them, they're a whole other mess). So your best bet is to head on over to the [UCSC Table Browser][3] and download a RefSeq or Gencode annotation for your species of all coding transcripts and then use [bedtools slop][4] to extend all transcripts' TSS and TSE +5k in both directions.
`bedtools slop -i $REFSEQ -g $GENOME -b 5000 > $OUTPUT`
After getting your sloped transcripts, you can then overlay your histone data and identify peaks that contain H3K4me1 + H3K27Ac overlap. You could then argue that these areas are 'probable enhancers'.
`bedtools intersect -a $H3K4me1 -b $REFSEQ_SLOPED -v | bedtools intersect -a - -b $H3K27Ac > $OUTPUT_ENHANCERS`
Your other option is to use [PARE][5] to help you identify possible enhancers based on this PVP pattern as indicated in a couple of enhancer reviews. Typically enhancers are marked with large 'peaks' of histone marker followed by a low 'valley' of no signal, and then another 'peak' of histone marker. Think: /\ __ /\
You could supply your H3K4me1 + H3K27Ac files to PARE instead of using two H3K4me1 replicates because you are interested in active enhancers. If you only care about identifying enhancers (poised or active) then you can use PARE as the manual suggests without modifications.
[ChromHMM][6] is also something that you could take a look at. It's more complicated than the above, and it requires you to do a bit of reading in order to be know what histone markers are typically found or annotate what regions of the genome. But essentially ChromHMM uses a hidden markov model to identify the presence or absence of each chromatin mark and then discovers combinational 'states' which can be used to annotate a genome.
I hope some of this is useful to you. Let me know if you have any questions.
[1]: http://www.ncbi.nlm.nih.gov/pubmed/21847099
[2]: http://bedtools.readthedocs.org/en/latest/content/tools/intersect.html
[3]: https://genome.ucsc.edu/cgi-bin/hgTables
[4]: http://bedtools.readthedocs.org/en/latest/content/tools/slop.html
[5]: https://github.com/spundhir/PARE
[6]: http://compbio.mit.edu/ChromHMM/ | biostars | {"uid": 186058, "view_count": 4402, "vote_count": 4} |
Hi guys :D
I'm working with distance matrices produced by Clustal Omega for moderately large fasta files, each combining sequences of two different plant species.
When I was about to finish the script and code the final pipeline step (retrieving the actual sequences corresponding to IDs given in the distance matrices using the Biopython function `SeqIO.index()`), I realized that the original fasta files have duplicate IDs for different sequences, resulting from different positions of SSRs on the same sequence, from which I extracted the left and right flanking regions for each SSR.
```
Traceback (most recent call last):
File "C:\Users\Al-Hammad\Desktop\Test Sample\dictionary.py", line 9, in <module>
dictionary=SeqIO.index("Left(Brachypodium_Brachypodium).fasta","fasta",IUPAC.unambiguous_dna)
File "C:\Python34\lib\site-packages\Bio\SeqIO\__init__.py", line 856, in index
key_function, repr, "SeqRecord")
File "C:\Python34\lib\site-packages\Bio\File.py", line 275, in __init__
raise ValueError("Duplicate key '%s'" % key)
ValueError: Duplicate key 'BRADI5G06067.1'
Tool completed with exit code 1
```
Here's a sample of one of my files :
```
>BRADI5G06067.1 cdna:novel chromosome:v1.0:5:7642747:7642899:-1 gene:BRADI5G06067 transcript:BRADI5G06067.1 description:"" Startpos_in_parent=24 Startpos_here=24 Length=26
ATGTATCTCCAACAACAACAACA
>BRADI5G06067.1 cdna:novel chromosome:v1.0:5:7642747:7642899:-1 gene:BRADI5G06067 transcript:BRADI5G06067.1 description:"" Startpos_in_parent=54 Startpos_here=54 Length=34
ATGTATCTCCAACAACAACAACAACGACGACGACGACGACGACGACGACAACG
>BRADI5G06067.1 cdna:novel chromosome:v1.0:5:7642747:7642899:-1 gene:BRADI5G06067 transcript:BRADI5G06067.1 description:"" Startpos_in_parent=102 Startpos_here=102 Length=26 ATGTATCTCCAACAACAACAACAACGACGACGACGACGACGACGACGACAACGACAACAACAACAACAACAACAACAACAACAACAAGAACGACGACGACG
```
**My question is**: What is the best, safest and most efficient way to rename the duplicate ID's for different sequences ?! and do I have to recompute the distance matrices again with the unique ID's after renaming or can I simply map the duplicates with their corresponding new unique values on the surface ?!
I'm really confused about that , and a little worried about the recomputing if considered since it's time consuming and takes nearly 4 days to produce the matrices .
I found this: http://stackoverflow.com/questions/7815553/iterate-through-fasta-entries-and-rename-duplicates/7836747#7836747 but it wasn't useful in my case, I'm working on a windows 7 64bit platform and python 3.4
Also I found this: https://www.biostars.org/p/10625/ but I believe it was the opposite of my case , I tried it though and ran on my files infinitely !! It wasn't that clear to me , for my bad luck :\
I desperately need this :( 😔
Any help would be appreciated, thanks in advance. | So , After looking at the options I have so far from the answers above ... I picked this approach to solve the issue for the reasons listed below and it works very fine for me :D
```
from Bio import SeqIO
from Bio.Alphabet import IUPAC
files=["Left(Aestivum_Japonica).fasta" , "Right(Aestivum_Japonica).fasta"]
for i in range(len(files)):
output_handle = open("$"+files[I],"w")
for idx,record in enumerate(SeqIO.parse(files[I],"fasta",IUPAC.unambiguous_dna)):
record.id=record.id+"_"+str(idx)
SeqIO.write(record,output_handle,"fasta")
output_handle.close()
dictionary=SeqIO.index("$Left(Brachypodium_Brachypodium).fasta","fasta",IUPAC.unambiguous_dna)
```
Reasons why I picked this solution:
1. It's more robust, integrated and consistent with the pythonic complicated pipeline I'm working on.
2. It's more safe - to me - to rename the duplicates than taking the risk of assigning the whole header as a unique identifier like suggested by RamRS (I hesitated 'cause I also found it a little confusing to see the ID as part of the description)!!
3. Renaming all sequences based on sequence No. in the file is much faster and more efficient - to me - than checking every single id in the file for duplicates and renaming with unique ones.
I'm pretty sure that the other provided solutions will work just as expected, I haven't tried them though.
Again , thank you guys so much ... I'm so grateful for your help. | biostars | {"uid": 130294, "view_count": 5487, "vote_count": 1} |
I have a list of protein GI and would like to get accession number of the genome (DBSOURCE in genbank file) using Eutilities.
What would be the easiest way to get genome accession numbers for a list of protein GI? | Here is a perl script to do this. This script is explained in detail [here](http://bioinformatics.cvr.ac.uk/blog/?p=276)
#!/usr/bin/perl
use Bio::DB::EUtilities;
#my @ids = qw(817524604 726965494);
my $infile = $ARGV[0];
my @ids;
open (IN,"$infile")||die "can't open $infile\n";
while(<IN>)
{
chomp($_);
my @ids=$_;
# print @ids."\n";
my $factory = Bio::DB::EUtilities->new(-eutil => 'elink',
-email => 'mymail@foo.bar',
-db => 'nucleotide',
-dbfrom => 'protein',
-correspondence => 1,
-id => \@ids);
# iterate through the LinkSet objects
while (my $ds = $factory->next_LinkSet)
{
#print " Link name: ",$ds->get_link_name,"\n";
my $protid = join(',',$ds->get_submitted_ids);
print "Protein ID:" . $protid ."\t";
#print "Protein ID: ",join(',',$ds->get_submitted_ids),"\t";
my $nucid = join(',',$ds->get_ids);
print "Nuc ID:" . $nucid ."\t";
my $factory = Bio::DB::EUtilities->new(-eutil => 'efetch',
-db => 'nucleotide',
-id => $nucid,
-email => 'mymail@foo.bar',
-rettype => 'acc');
my @accs = split(m{\n},$factory->get_Response->content);
print "Genome Accession: " .join(',',@accs), "\n";
}
} | biostars | {"uid": 145216, "view_count": 7854, "vote_count": 4} |
I am looking for an antibody database where each antibody is linked with its potential/actual target genes/proteins, *i.e.* a database similar to [DrugBank.ca](http://drugbank.ca) but only for antibodies. It would be really great if the data is in a parse-able format, up-to-date and comprehensive enough to cover most of the common antibodies -- probably asking too much here.
Any suggestions on this?
| Hi Arman,
Some of them are (free + paid):
- [Antibody Related Databases and Software](http://www.antibodyresource.com/antibody-database.html):
> Antibody related amino acid sequencing tools, nucleotide sequencing tools, structural modeling tools, and hybridoma/cell culture databases can be found below. Speciality research databases that include monoclonal and polyclonal antibodies are also included.
- [Human Antibody Initiative](http://www.hupo.org/research/hai/)
> The mission of the Human Antibody Initiative (HAI) aims to promote and facilitate the use of antibodies for proteomics research. The initiative consists of two separate activities; (1) the generation of a catalogue of validated antibodies from many different sources and (2) a protein atlas for the expression and localization of human proteins in normal and disease tissue. The two separate activities have as their primary deliverables to generate databases with free public accessibility.
> - The [Antibody Resource database](http://www.antibodypedia.org) is aimed to produce a comprehensive catalogue of validated antibodies towards human proteins. This initiative depends on input from a large number of academic groups and commercial companies.
> - The [Protein Atlas initiative](http://www.proteinatlas.org) is aimed to provide a comprehensive and annotated database of high-resolution images showing tissue profiles in normal and cancer tissues. Both databases will be open to the public without restriction (no passwords).
- http://www.antibodies-online.com/
I hope this helps a bit. | biostars | {"uid": 45666, "view_count": 3733, "vote_count": 1}
Hi,
I have a few quick questions regarding macs14:
- How can one treat replicates? Do I have to run them separately and then take the average of the final peaks, or is there any way to include them in one run?
- macs14 generates two wig files under the "--single-profile" option, one for control and one for treated. Which file should I use for plotting the final peak profile? Do I have to subtract treated - control manually, or does it give the final peak wig files?
Thanks in advance | If you have the replicate, you can take the consistent peaks called in both replicates. There will be variability in ChIP-Seq data, so sometimes you may want to consider union of peaks instead of intersections.
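As a concrete sketch of both options (peak file names are hypothetical):
    # peaks present in both replicates (intersection)
    bedtools intersect -a rep1_peaks.bed -b rep2_peaks.bed -u > consistent_peaks.bed
    # peaks present in either replicate (union)
    cat rep1_peaks.bed rep2_peaks.bed | sort -k1,1 -k2,2n | bedtools merge > union_peaks.bed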
Regarding the peak profile: by control do you mean input, or control in the sense of WT/KO? You can plot both the treated and control tracks using deeptools2 so that both are shown separately, if that's the intention. For UCSC tracks, I would get the bigwigs with -log10(p-value), to show enrichment over inputs. | biostars | {"uid": 380429, "view_count": 1201, "vote_count": 3}
hi, I would like to create a list of random mm10 genomic coordinates with bedtools. I saw that the useful command to do this is `bedtools random [OPTIONS] -g <GENOME>`. The problem is that the random regions must have different lengths, similar to my real coordinates:
    > summary(new$width)
       Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
        188    1645    2649    2904    4196   11484
Is there any other tool that can set different lengths when generating coordinates?
Thank you!
| There are a couple of different ways to approach this, I think, depending on what you have on hand. If you already have a set of coordinates such as the ones you've entered above, you could simply shuffle these coordinates along the genome using bedtools [shuffle][1]. If you only have a small list of such regions and need a relatively large amount, you could perform the operation repeatedly to get a large number of such regions.
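In bedtools terms that would be (a sketch; you need a chromosome-sizes file for mm10):
    # each shuffled interval keeps the length of the original interval
    bedtools shuffle -i my_regions.bed -g mm10.chrom.sizes > random_regions.bed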
Alternatively it looks like there is an R package for this task. I've not used it myself, but the documentation is [here][2].
[1]: https://bedtools.readthedocs.io/en/latest/content/tools/shuffle.html
[2]: https://rdrr.io/bioc/regioneR/man/createRandomRegions.html | biostars | {"uid": 9540657, "view_count": 305, "vote_count": 1} |
So I have 6 lists of differentially expressed genes taken at different periods of time. I need a way to show overlap in the genes that recur between lists. Naturally I tried to make a venn diagram, but 5-6 list venn diagrams are hard to follow. Is anyone aware of other methods of presenting data like this? I've been going through Google Scholar to find papers that present data like this, but I can't find anything.
Thanks, any help is appreciated!
| UpSetR is a recently released R package with powerful graphics for visualizing intersections in complicated cases such as yours.
https://cran.r-project.org/web/packages/UpSetR/index.html
| biostars | {"uid": 152450, "view_count": 2241, "vote_count": 2} |
Hi all,
I have an RNA-seq dataset (comparing two different conditions) that I have analyzed using four different methods of differential expression (DE) analysis: DESeq2, limma-voom, limma-trend, and edgeR. From these 4 different DE analyses, I get the set of raw p values (uncorrected for multiple hypothesis testing) for all the genes. I would like to use Fisher's method for p value aggregation to combine the raw p values from these 4 different DE lists and then do the multiple hypothesis correction. Would that be OK? I am wondering because the p values are coming from different programs with different models. I would really appreciate any thoughts or comments. Thanks for the help! | My two cents / thoughts:
A DE program is supposed to give you a list of DE genes, testing the null hypothesis that gene is not DE. You have your list of DE genes that way. Just pick a DE program and stick with it.
There are a few problems with your approach:
1) Such an approach has not been benchmarked (most DE programs run under default configurations have been benchmarked and shown to work well -- hence they were published). If combining results from multiple GLM regressions was good and ensured good sensitivity without losing control of the false discovery rate, those programs would already have done it.
2) The Fisher method has [implicitly] the alternative hypothesis that at least one of the programs would call a gene DE (given the info produced by all 4 programs). Is that really what you want to test? You get smaller p-values for each gene with a modest p-value -- more false positives because you're basically increasing the degrees of freedom. I'd use this alternative hypothesis for certain purposes (e.g. aggregating microarray probes for the same gene, or doing a meta-analysis across 10 different low-sample underpowered studies to, in a sense, increase sample size) but not here. In essence, these aggregation methods are used to combine incomplete parts to summarize your results. Using the exact same data AND testing the exact same prediction from the null (i.e. gene counts don't change) for the Fisher method does not really constitute "independence".
3) Related to above, it could be a form of p-value hacking. A t-test might not give me a significant p-value even though my data meets its assumptions so, before running that test, I throw in another test [also one where all the assumptions are satisfied] just to ensure my p-values are lower.
I would therefore not recommend doing this. For similar reasons, I wouldn't recommend making your "DE genes list" by looking at genes with adjusted p < 0.05 in at least one of the 4 programs. You could be more stringent and look at genes with adjusted p < 0.05 in ALL 4 programs, since that's being conservative and those genes have pretty good evidence of being bona fide DE but making a "DE genes list" with a bunch of false positives is much less desirable (I wouldn't even call it a "DE genes list" at that point).
tl;dr I'd recommend that you not do this. | biostars | {"uid": 9556573, "view_count": 291, "vote_count": 2} |
I have a pipeline which looks something like below; it's written in bash:
Trimmomatic --> Samtools --> Smalt --> Lastz --> custom perl programs.
If any one of those processes fails, it usually returns a non-0 exit code, usually a `1`.
However, my pipeline program will continue to run, report the error and return an exit code of 0, indicating success.
How can I get the pipeline to quit/stop with a non-0 exit code when a part of the pipeline fails? | You're looking for `set -eo pipefail`. Add that after the first (`#!/bin/bash`) line and your script will fail if any part of your script fails.
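A minimal sketch of the effect:
    #!/bin/bash
    set -eo pipefail  # -e: abort on the first failing command; pipefail: a pipeline fails if any stage fails
    step_one | step_two   # hypothetical stages: if either fails, the script now exits non-zero
    echo "reached only if everything above succeeded"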
See for more information: https://vaneyckt.io/posts/safer_bash_scripts_with_set_euxo_pipefail/ | biostars | {"uid": 300840, "view_count": 9609, "vote_count": 1} |
I have plant genomic reads (WGS method) from a HiSeq 2000, with paired-end reads (read1.fastq, read2.fastq). I need to remove chloroplast and mitochondria reads to assemble only nuclear reads. I need to know which is the best and fastest way to remove these contaminants:
1. First do the genome assembly, then remove mitochondria and chloroplast sequences from the assembly using BLAST.
2. First remove mitochondria and chloroplast reads from the fastq files by mapping the reads to the respective mitochondria and chloroplast references with bwa, extract the unmapped reads as nuclear reads into fastq files using samtools and picard, and then do the genome assembly.
| Option 2 is better: mapping with BWA is fast, and you are reducing the total reads going into assembly, therefore your assembly time and complexity are reduced.
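A sketch of option 2 (assuming a reasonably recent samtools; `organelles.fa` is a hypothetical FASTA containing your chloroplast and mitochondrion sequences):
    bwa index organelles.fa
    # -f 12 keeps pairs where both mates are unmapped; -F 256 drops secondary alignments
    bwa mem organelles.fa read1.fastq read2.fastq | samtools view -b -f 12 -F 256 - > unmapped.bam
    samtools sort -n unmapped.bam -o unmapped_byname.bam
    samtools fastq -1 nuclear_1.fastq -2 nuclear_2.fastq unmapped_byname.bam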
| biostars | {"uid": 101913, "view_count": 6726, "vote_count": 2} |
A colleague of mine and I have just been discussing the meaning of fold changes, and though this question has been asked before, none of the answers are actually as straightforward as needed, so let's try here to solve this once and for all.
I list here what I understand so far and will update it from your answers.
What is the correct way to understand a fold change value of a gene or protein?
- A foldchange describes the difference of two values (eg. difference of expression in gene/protein A between healthy and diseased case)
- Biostatistical programs/packages calculate it via: "Log(FC)" = mean(log2(Group1)) - mean(log2(Group2))
- log2 fold changes are used/plotted in graphs as those are nicer to show because they center around 0, giving reductions a negative value and increments a positive value
- log2 fold change values (eg 1 or 2 or 3) can be converted to fold changes by taking 2^1 or 2^2 or 2^3 = 2 or 4 or 8
- To convert the fold change into change in % or anything that is actually tangible/understandable in "real life terms" ... need answers here! (= actual question I want to ask)
Lets get this solved once and for all, im looking forwards to your posts!
Here are great posts explaining more about fold changes:
[graphical representations][1]
[conversion of log2 fold change to fold change][2]
[1]: https://www.biostars.org/p/312980/
[2]: https://www.biostars.org/p/100460/ | Percent of baseline = 2^(log2 fold change) * 100, where the baseline then corresponds to 100%
log2FC = 2 : 2^2 * 100 = 400, where the baseline then corresponds to 100%
log2FC = -2 : 2^-2 * 100 = 25, where the baseline then corresponds to 100%
Hope this helps | biostars | {"uid": 342756, "view_count": 62393, "vote_count": 15} |
I have a list of genes like
LOC109802912
LOC109788332
LOC109789928
LOC109809764
...
and I have a fasta file like
>rna-XR_003803819.1 gene=LOC114915746
TGCTTCTTCTCTGTTGGAAGAGTTTAAAAGCAATAAAACTAAGTGTTTTGAGCTCTCTGAAATTGCTGGT
CATGTTGTTGAGTTCAGTGCGGATCAATATGGGAGCCGATTTATTCAGCAAAAGCTTGAAACAGCTActa
ca
>rna-XR_002239520.2 gene=LOC109794983
TCCTATTCATCATGCAGGATGTCAGAATCTTCATTTCAAATTGGACCAATCCTTCTTGTGAATGAACCCT
TCACGATTGATAATGGTCTAATGACACCTACTTTGAAAATTCGAAGAGATAGAGTTGTGGCTCAATACAG
G
How can I get the fasta sequences for these genes?
| Try this:
pip install --user pyfaidx
xargs faidx -d " " YOURFASTA.fasta < GENEID_LIST.txt > extracted_sequences.fasta
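If the faidx keys do not line up with your gene IDs, here is an awk-only sketch keyed on the `gene=` field of each header (file names are hypothetical):
    awk 'NR==FNR { want[$1]; next }
         /^>/    { keep = 0
                   if (match($0, /gene=[^ ]+/)) { g = substr($0, RSTART+5, RLENGTH-5); if (g in want) keep = 1 } }
         keep' gene_list.txt sequences.fasta > extracted.fasta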
| biostars | {"uid": 439929, "view_count": 554, "vote_count": 1} |
Hi all,
I am using the R script from Getting Genetics Done (http://www.gettinggeneticsdone.com/2014/05/r-volcano-plots-to-visualize-rnaseq-microarray.html) to generate volcano plots, but my problem is how to show a particular single gene name on the volcano plot without any condition (i.e. fold change or p-value). I just want to highlight my gene name "WNT7A" on the volcano plot.
It would be a great help if someone could show me how to modify the following code:
> labs=Gene
I tried `labs <- res$Gene[WNT7A]`
but it didn't work...
This is original code from Getting Genetics Done:
    res <- read.table("results.txt", header=TRUE)
    head(res)
    # Make a basic volcano plot
    with(res, plot(log2FoldChange, -log10(pvalue), pch=20, main="Volcano plot", xlim=c(-2.5,2)))
    # Add colored points: red if padj<0.05, orange if log2FC>1, green if both
    with(subset(res, padj<.05 ), points(log2FoldChange, -log10(pvalue), pch=20, col="red"))
    with(subset(res, abs(log2FoldChange)>1), points(log2FoldChange, -log10(pvalue), pch=20, col="orange"))
    with(subset(res, padj<.05 & abs(log2FoldChange)>1), points(log2FoldChange, -log10(pvalue), pch=20, col="green"))
    # Label points with the textxy function from the calibrate package
    library(calibrate)
    with(subset(res, padj<.05 & abs(log2FoldChange)>1), textxy(log2FoldChange, -log10(pvalue), labs=Gene, cex=.8))
with(subset(res, Gene == "WNT7A"), textxy(log2FoldChange, -log10(pvalue), labs=Gene, cex=.8))
| biostars | {"uid": 262817, "view_count": 5102, "vote_count": 2} |
I have a very fundamental question that I can't seem to find an answer to.
I have a variety of TF and histone marks for an untreated cell line and a treated cell line. I ran a tool that returned a bed file of 'potential' enhancer regions. The untreated cell line identified 36k possible enhancer sites while the treated cell line identified 34k. If I am interested in seeing whether a histone mark or two are enriched / de-enriched at these enhancer sites, should I merge these two datasets together to generate one large dataset? Or should I take only the common enhancer regions (intersectBed) that appear between both datasets?
Why is it 'correct' to merge them together as opposed to identifying common regions? Or vice versa? Or is there something else I should be doing that is entirely different?
I have not worked with untreated vs treated cell types before so I don't wish to proceed too far before determining this.
| I would first check how many enhancer sites are common between the two conditions:
- If the overlap is high (>90%), I would conclude that the treatment has no significant effect and would proceed by plotting the enrichment of histone marks/TFs on the intersection, or the union (if >95% overlap), of enhancer regions
- If the overlap is low, or the enhancer sets are very different, I would say the treatment has an effect and would elaborate on what kinds of enhancers are common, which are the "new" enhancer sites, and which ones got lost. You can go forward with GO analysis of the neighbouring genes etc. for these groups. Once you are clear about your groups, go forward with the enrichment calculations. | biostars | {"uid": 175258, "view_count": 1745, "vote_count": 2}
GENCODE has a wonderful breakdown of the number of coding/non-coding genes and transcripts here...
http://www.gencodegenes.org/stats.html
Does a similar breakdown exist for RefSeq anywhere? Thanks! | I don't believe RefSeq does as good a breakdown of transcript types as GENCODE; however, you may be interested in the following resources:
- [Comparison of GENCODE and RefSeq gene annotation and the impact of reference geneset on variant effect prediction](https://bmcgenomics.biomedcentral.com/articles/10.1186/1471-2164-16-S8-S2)
- [NCBI Reference Sequences (RefSeq): current status, new features and genome annotation policy](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3245008/)
- [A comprehensive evaluation of ensembl, RefSeq, and UCSC annotations in the context of RNA-seq read mapping and gene quantification](https://bmcgenomics.biomedcentral.com/articles/10.1186/s12864-015-1308-8)
Generally, I think that you'll find that GENCODE is more comprehensive for non-coding genes; however, for the majority of these, exact function is entirely unknown. Most people filter them out of, for example, RNA-seq experiments, in order to (in part) minimise the stringency of a false discovery rate threshold. On the other hand, RefSeq has the feel of a well-curated resource.
Kevin
| biostars | {"uid": 300143, "view_count": 1534, "vote_count": 1} |
Hello,
I am trying to figure out how DESeq2 calculates its log2FC measure - we see a strange pattern in the FCs in our data and I'd like to reproduce them by hand from scratch to make sure this pattern does not reflect an error with my pipeline.
Right now, I'm doing this:
    deseqOutput <- DESeq(data_collapsedTechnicalReps)
    estSizeFactors <- estimateSizeFactors(deseqOutput_DUP)
    RLEnormedData <- data.frame(counts(estSizeFactors, normalized=TRUE))
    # here, condition 1 = cols 1 and 2, condition 2 = cols 3, 4, 5
    meanOfRLECounts <- data.frame(rowMeans(RLEnormedData[,1:2]), rowMeans(RLEnormedData[,3:5]))
    colnames(meanOfRLECounts) <- c('Condition1','Condition2')
    meanOfRLECounts$log2FC <- log2(meanOfRLECounts$Condition1/meanOfRLECounts$Condition2)
Here, condition 2 is the wild type condition - i.e. the samples that I indicate when I use relevel().
Can anyone spot what I'm doing wrong? What data does DESeq2 use to generate its log2FC estimates?
Thank you! | Take a look at [this thread][1] and the [DESeq2 manuscript][2]
They are using a moderated (shrinkage) estimate.
[1]: http://seqanswers.com/forums/showthread.php?t=49101
[2]: http://genomebiology.biomedcentral.com/articles/10.1186/s13059-014-0550-8 | biostars | {"uid": 178010, "view_count": 3435, "vote_count": 1} |
I'm building a Snakefile and getting an error I'm struggling to fix when trying to add the `log` parameter in my homer annotation rule.
Here is the Snakefile:
threshold="25 33 5 66 75".split()
rule all:
input:
expand("homer_files/annotation_files/ATAC24to31_fetal_hMG_consensusPeak.{THRESH}.homerann.txt", THRESH=threshold),
rule homer_annotation:
input:
"diffbind_files/ATAC24_31_fetalMG_.{THRESH}peakset.bed"
output:
"homer_files/annotation_files/ATAC24to31_fetal_hMG_consensusPeak.{THRESH}.homerann.txt"
log:
"logs/homer/annotation.ATAC24to31_fetal_hMG.consensusPeak.(THRESH}.log"
shell:
"annotatePeaks.pl {input} hg19 > {output}"
Now when I run this without the `log` parameter I get 5 separate files created as expected, however if I try to include it, I get the following error:
SyntaxError:
Not all output, log and benchmark files of rule homer_annotation contain the same wildcards. This is crucial though, in order to avoid that two or more jobs write to the same file.
File "/c8000xd3/big-c1477909/foetal_hMG_analysis/ATAC_24to31_foetal_hMG_May19/Snakefile", line 26, in <module>
I'm not sure what is going on here, and have never come across this before when running very similar rules. The way I'm reading this, a separate `log` file should be created for each iteration of the rule, but the error would suggest otherwise.
Any ideas on how to remedy this would be greatly appreciated.
I'm using Snakemake 5.2.2.
| It was a typo in the braces! `(THRESH}` in the log path should be `{THRESH}`:
    log:
        "logs/homer/annotation.ATAC24to31_fetal_hMG.consensusPeak.{THRESH}.log" | biostars | {"uid": 380849, "view_count": 2466, "vote_count": 1}
Hello,
I am trying to use a for loop in python to loop through a bam file and take out reads that map to certain chromosomes, for further downstream analysis. I am having problems with the beginning, seemingly the easiest part of my code.
samfile = pysam.Samfile("mybam.bam", "rb")
positions=["7:151970856", "7:151970856"]
for p in positions:
start = p.split(":")
chr=start[0]
pos=start[1]
chromreads=[]
for read in samfile:
if int(chr) == int(read.tid):
chromreads.append(read)
print(len(chromreads))
Output I'm getting:
2525527
0
Output I expect:
2525527
2525527
It seems like it's not even looping through the second position. What am I doing wrong?
Thank you. | Answered my own question [from StackOverflow][1]: the opened file handle is an iterator, so after the first inner loop consumes it, the second pass over `samfile` sees nothing. Re-opening the BAM for each position (or, better, using `samfile.fetch()` per region on an indexed BAM) fixes it.
[1]: http://stackoverflow.com/questions/10255273/iterating-on-a-file-using-python | biostars | {"uid": 178405, "view_count": 4136, "vote_count": 1} |
Deeptools is an incredible package, but I want more plotting versatility.
Anyone kind enough to share workflow/code for plotting computeMatrix heatmaps and average plots in R (ggplot, pheatmap etc.)?
Other suggestions for plotting from .bw coverage files and .bed feature files?
Thanks a lot!!! | I'm not sure it'd make sense to tidy a matrix of that size for use with ggplot2. Below is how to read a simple file produced by computeMatrix in R (the example file I'm using below can be found in the deepTools source code under `deeptools/test/test_data/`):
m = read.delim("computeMatrixOperations.mat.gz", skip=2, header=F)
m = as.matrix(m[,-c(1:6)])
The resulting matrix is exactly what's plotted in `plotHeatmap`, though you can also use `pheatmap()` if you really want:
pheatmap(m, cluster_rows=F, cluster_cols=F) # This is a useless heatmap, I just use it for testing
Note that there are likely NAs in the data, which cause a lot of things to have issues during clustering.
One big caveat to all of this is that you don't see delineations by region groups and samples. That information is stored in the header:
> h = scan("computeMatrixOperations.mat.gz", n=1, sep="\n", what=character())
Read 1 item
> h
[1] "@{\"verbose\":true,\"scale\":1,\"skip zeros\":false,\"nan after end\":false,\"sort using\":\"mean\",\"unscaled 5 prime\":0,\"body\":1000,\"sample_labels\":[\"SRR648667.forward\",\"SRR648668.forward\",\"SRR648669.forward\",\"SRR648670.forward\",\"SRR648667.reverse\",\"SRR648668.reverse\",\"SRR648669.reverse\",\"SRR648670.reverse\"],\"downstream\":0,\"unscaled 3 prime\":0,\"group_labels\":[\"genes\"],\"bin size\":10,\"upstream\":0,\"group_boundaries\":[0,196],\"sample_boundaries\":[0,100,200,300,400,500,600,700,800],\"missing data as zero\":false,\"ref point\":null,\"min threshold\":null,\"sort regions\":\"no\",\"proc number\":20,\"bin avg type\":\"mean\",\"max threshold\":null}"
That's a json string, which I don't think base R has a function to parse. The important part is that the bit after `sample_boundaries` denotes the beginning and end of each sample (the labels are in `sample_labels`) and `group_boundaries`/`group_labels` does the same for groups of regions. So you can subset matrices in the normal way in R accordingly.
For computing the equivalent of `plotProfile`'s output, you can simply `apply(m, 2, function(x) mean(x, na.rm=T))` (be sure to handle NA values!).
I usually prefer creating plots in R, but it'd be a good bit of work to reproduce even a moderately complicated equivalent of `plotHeatmap` in R, so I would encourage you to take the advice from https://www.biostars.org/u/16222/ and tweak the python code in deepTools instead. | biostars | {"uid": 319187, "view_count": 6938, "vote_count": 2}
I have the good-quality genome IDs for 54,000 genomes, like below:
#genome
G001281285
G000014725
G000775715
G000254175
G001380675
G900057405
G001076295
and I also have all 74,000 genome sequence files, compressed, in the fna folder, like below:
cd fna/
G001284865.fna.bz2 G002910165.fna.bz2 G009390615.fna.bz2
G001284885.fna.bz2 G002910195.fna.bz2 G009390655.fna.bz2
Now, could you please help me extract the 54,000 genome sequence files matching the above genome IDs from the fna/ folder? | for i in $(grep -v '^#' list.txt); do mv "$i".fna.bz2 fna/filtered/; done
Where list.txt is your list of high-quality genome IDs (the `grep -v '^#'` skips its `#genome` header line) and filtered/ is a new directory. | biostars | {"uid": 454937, "view_count": 1295, "vote_count": 1}
Hi,
I am using RMarkdown in Rstudio and I want to execute commands from a program using a bash chunk
```{bash}
```
I have a program called samtools on my computer so when I execute it in the chunk, it works.
```{bash}
samtools
```
When I type :
```{bash}
which samtools
```
The output is that it tells me samtools is located in the usr/local/bin directory.
However, when I execute a program with vcftools , I get an error because Rstudio, does not know where the program is:
I have it in another directory on my computer.
How do I get Rstudio or Rmarkdown to execute vcftools from the bash chunk?
Is there a way that I can tell RMarkdown which directory to look in to find the program?
For example (something like) :
```{bash}
$vcftools = /Users/m.o.l.s/Programs_For_Bioinformatics/vcftools
```
or would I have to move all of the programs to usr/bin/local?
Outside of RStudio, I have made aliases to the programs, so they work fine in the terminal.
I made the aliases by writing in my .bash_profile
`alias bcftools=/Users/paths/to/where/the/program/is/installed`
and for vcftools I added its directory to the exported PATH in my .bash_profile instead.
| You can use the full path to the tool:
```{bash}
~/bin/bioinfotools/vcftools
```
Returns:
##
## VCFtools (0.1.16)
## © Adam Auton and Anthony Marcketta 2009
##
## Process Variant Call Format files
##
## For a list of options, please go to:
## https://vcftools.github.io/man_latest.html
##
## Alternatively, a man page is available, type:
## man vcftools
##
## Questions, comments, and suggestions should be emailed to:
## vcftools-help@lists.sourceforge.net
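Alternatively, since each bash chunk starts a fresh non-login shell that does not read your `.bash_profile`, you can extend PATH inside the chunk itself (a sketch; adjust the directory to wherever vcftools lives):
```{bash}
export PATH="$PATH:/Users/m.o.l.s/Programs_For_Bioinformatics"
vcftools --version
```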
On a side note, it seems the RStudio / Rmarkdown bash engine is somewhat broken. This code block hangs at compilation:
```{bash in_path}
samtools view
```
Whereas at the command-line, it instantly returns the help:
Usage: samtools view [options] <in.bam>|<in.sam>|<in.cram> [region ...]
Options:
-b output BAM
-C output CRAM (requires -T)
-1 use fast BAM compression (implies -b)
-u uncompressed BAM output (implies -b)
-h include header in SAM output
-H print SAM header only (no alignments)
-c print only the count of matching records
-o FILE output file name [stdout]
-U FILE output reads not selected by filters to FILE [null]
-t FILE FILE listing reference names and lengths (see long help) [null]
-L FILE only include reads overlapping this BED FILE [null]
-r STR only include reads in read group STR [null]
-R FILE only include reads with read group listed in FILE [null]
-q INT only include reads with mapping quality >= INT [0]
-l STR only include reads in library STR [null]
-m INT only include reads with number of CIGAR operations consuming
query sequence >= INT [0]
-f INT only include reads with all of the FLAGs in INT present [0]
-F INT only include reads with none of the FLAGS in INT present [0]
-G INT only EXCLUDE reads with all of the FLAGs in INT present [0]
-s FLOAT subsample reads (given INT.FRAC option value, 0.FRAC is the
fraction of templates/read pairs to keep; INT part sets seed)
-M use the multi-region iterator (increases the speed, removes
duplicates and outputs the reads as they are ordered in the file)
-x STR read tag to strip (repeatable) [null]
-B collapse the backward CIGAR operation
-? print long help, including note about region specification
-S ignored (input format is auto-detected)
--input-fmt-option OPT[=VAL]
Specify a single input file format option in the form
of OPTION or OPTION=VALUE
-O, --output-fmt FORMAT[,OPT[=VAL]]...
Specify output format (SAM, BAM, CRAM)
--output-fmt-option OPT[=VAL]
Specify a single output file format option in the form
of OPTION or OPTION=VALUE
-T, --reference FILE
Reference sequence FASTA FILE [null]
-@, --threads INT
Number of additional threads to use [0]
| biostars | {"uid": 400685, "view_count": 7174, "vote_count": 2} |
I am very new to RNA-Seq. I am trying to align my samples with STAR. I am generating the genome index myself. Because I was hoping to add the spike-in sequence to the GTF and FASTA files.
I am downloading the GTF file from here: ftp://ftp.ensembl.org/pub/release-86/gtf/mus_musculus/
There are 2 GTF files, one with CHR in the name and one without. I was wondering which one I should use and how they are different. I have not figured this out just by opening the files.
Thank you,
If you are not sure too, would you please let me know which one you use for your analysis.
The one with CHR seems to have more lines and it seems to be scaffold genes. Am I missing something? | The one without 'chr' additionally contains annotations for genes on unplaced or unlocalized contigs, while the one with 'chr' only contains annotation for the assembled chromosomes. Note that neither file uses a 'chr' prefix in the chromosome names. See this example:
zcat Danio_rerio.GRCz10.87.gtf.gz | cut -f1 | awk '{dict[$1]++}END{for(i in dict) print i, dict[i]}'
KN150307.1 9
KN150451.1 7
KN150002.1 3
KN149765.1 3
KN149909.1 15
KN149998.1 13
#!genebuild-last-updated 1
KN150027.1 10
KN149917.1 7
KN150188.1 24
...
13 43069
20 48282
KN150399.1 11
KN150221.1 10
KN150670.1 32
KN149696.1 225
21 40566
...
zcat Danio_rerio.GRCz10.87.chr.gtf.gz | cut -f1 | awk '{dict[$1]++}END{for(i in dict) print i, dict[i]}' | head
#!genebuild-last-updated 1
MT 147
#!genome-date 1
10 40589
11 40971
12 39371
13 43069
20 48282
21 40566
14 35910 | biostars | {"uid": 217700, "view_count": 11902, "vote_count": 7} |
Hi,
I would like to filter out some sequences from a fasta file by using a specific pattern.
For example I have this file:
```
>input1
UGAGGUAGUAGG
>input2
CUAUGCUUACC
>out1
UCCCUGAGACCGUGA
>out2
CUCCGGGUACC
>desc1
ACUUCCUUACAUGCCC
```
I know already how I can extract all the fasta sequences with a specific pattern into a new file by using awk.
But what I would like to do is remove all entries matching a specific pattern from the original fasta file and save the result as a new file. In my file above, for example, I would like to remove all sequences with the header pattern "out" and save only the others to a new file.
Is there a tool somewhere for doing that, or is it possible in awk/sed or even grep?
Thanks
Assa | I like Pierre's answer to this since it's simple. However, I had been thinking about adding regular expression filtering to my [pyfaidx][1] project, and this morning I finished up adding this functionality:
```
$ pip install pyfaidx
$ faidx in.fasta -g "out" > out.fasta
```
The (small) advantage here is that faidx will perform filtering on an indexed file, preventing you from reading the entire file through your filter.
[1]: https://github.com/mdshw5/pyfaidx | biostars | {"uid": 133245, "view_count": 7571, "vote_count": 2} |
I don't understand the meaning of an underlined read in samtools tview.
I understand what orphan means (in paired-end data, the mate is not mapped), but what does secondary mean? Does it mean it's a duplicate? Sometimes I can have two reads with the same name underlined and I don't know how to interpret them.
I found [Here][1] somebody talking about the tview legend, but it doesn't clearly explain underlined reads.
Thanks
[1]: https://www.biostars.org/p/13299/ | **Secondary**: the read was mapped elsewhere (primary location). This position is another candidate position for this read.
| biostars | {"uid": 140999, "view_count": 4045, "vote_count": 1} |
Hi,
I would like to plot a bed file with numeric values ( = bedgraph).
My input data are: chr start end value (value between -1 and 1)
Example:
chr19 50906809 50907015 0.8173737057494912
chr19 50909381 50909562 0.2639404477044525
chr19 50909499 50909702 0.5421707627971464
chr19 50909697 50909863 0.7663584319162735
I would like something like this:
![enter image description here][1]
Do you have an easy way to do this? An R package or matplotlib solution?
[1]: https://snag.gy/AWx9hK.jpg | I just discovered the Sushi package, which does the job pretty well!
https://bioconductor.org/packages/release/bioc/html/Sushi.html
See [plotBedgraph][1].
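A minimal sketch of what that could look like (file name hypothetical; the data are read as chrom/start/end/value, matching the example above):

    library(Sushi)

    # chrom / start / end / value, as in the example data
    bg <- read.table("scores.bedgraph",
                     col.names = c("chrom", "start", "end", "value"))

    plotBedgraph(bg, chrom = "chr19",
                 chromstart = 50906000, chromend = 50910000,
                 range = c(-1, 1))                 # values lie in [-1, 1]
    labelgenome("chr19", 50906000, 50910000, scale = "Kb")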
[1]: https://www.rdocumentation.org/packages/Sushi/versions/1.10.0/topics/plotBedgraph | biostars | {"uid": 306085, "view_count": 6589, "vote_count": 2} |
Hey everyone,
I am trying to transpose large datasets in Excel, but it keeps giving me the message "we can't paste because copy area and paste area aren't the same size". Is there a way to transpose all the data at one time instead of piece by piece? One dataset has a great number of rows and columns. Thanks.
| Stack Overflow [offers a solution](http://stackoverflow.com/a/1729980/19410) that uses GNU *awk* and should run considerably faster than R (if you plan on doing a lot of transposing).
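The gist of that solution is a small awk program that buffers the whole table in memory and prints it column-first; roughly like this sketch (assumes a tab/space-delimited text export of the sheet):

    # transpose.awk -- usage: awk -f transpose.awk input.tsv > transposed.tsv
    {
        for (i = 1; i <= NF; i++)
            a[NR, i] = $i              # buffer the matrix in memory
        if (NF > maxnf) maxnf = NF     # remember the widest row
    }
    END {
        for (j = 1; j <= maxnf; j++) {
            row = a[1, j]
            for (i = 2; i <= NR; i++)
                row = row "\t" a[i, j]
            print row
        }
    }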
Also something to keep in mind is that Excel makes Microsoft-y text files. You may need a tool like [*dos2unix*](http://sourceforge.net/projects/dos2unix/) to convert them to something that works better with command-line tools. | biostars | {"uid": 150611, "view_count": 17946, "vote_count": 2}
Hi there,
Could anyone explain these two lines of my GVCF file to me?
Why am I getting a "NON_REF" alternative allele? And why does the second line only show END information and not the rest?
```
GL000192.1 546636 . G A,<NON_REF> 49.77 . BaseQRankSum=1.026;ClippingRankSum=1.026;DP=4;MLEAC=1,0;MLEAF=0.500,0.00;MQ=49.02;MQ0=0;MQRankSum=-1.026;ReadPosRankSum=0.000 GT:AD:DP:GQ:PGT:PID:PL:SB 0/1:2,2,0:4:78:0|1:546636_G_A:78,0,87,84,93,177:1,1,1,1
GL000192.1 546637 . G <NON_REF> . . END=546645 GT:DP:GQ:MIN_DP:PL 0/0:4:11:4:0,12,99
```
I know that `NON_REF` represents any possible alternative allele at this location, but if my genotype is 0/0 (homozygous for the reference)... this line makes no sense to me.
Thanks in advance. | OK, I found the solution in the GATK manual (https://www.broadinstitute.org/gatk/guide/article?id=4017). I'll answer my own question in case someone else has the same issue.
<p><em>"The first thing you'll notice, hopefully, is the <code><NON_REF></code> symbolic allele listed in every record's <code>ALT</code> field. This provides us with a way to represent the possibility of having a non-reference allele at this site, and to indicate our confidence either way.</em></p>
<p><em>The second thing to look for is the <code>END</code> tag in the <code>INFO</code> field of non-variant block records. This tells you at what position the block ends."</em></p>
| biostars | {"uid": 128817, "view_count": 7765, "vote_count": 7} |
Hi.
I'm trying to annotate gene symbols next to probe IDs (Affymetrix Mouse Gene 1.0-ST Array).
I used "mogene10sttranscriptcluster.db" package (v8.7.0) of R for the annotation.
But here's the problem.
1) Using mogene10sttranscriptcluster.db directly
library(mogene10sttranscriptcluster.db)
a <- contents(mogene10sttranscriptclusterSYMBOL)
# a$'10344741'
# [1] NA
2) Using AnnotationDbi to extract the info
library(mogene10sttranscriptcluster.db)
library(AnnotationDbi)
k <- keys(mogene10sttranscriptcluster.db, keytype = "PROBEID")
b <- mapIds(mogene10sttranscriptcluster.db, keys=k, column=c("SYMBOL"), keytype="PROBEID")
b["10344741"]
# 10344741
# "Hnrnpa3"
length(a) = length(b) = 35556
But some symbols are missing from (1) while present in (2).
They both used the same database, *mogene10sttranscriptcluster.db*, but how did they get different results?
Does *AnnotationDbi* convert probe IDs to some other IDs and then convert those to gene symbols?
The second one seems to return more symbols, so is that the one I should use?
I'm very confused right now.
| I found my own answer.
It seems that mogene10sttranscriptcluster.db relies on org.Mm.eg.db (the mouse organism package) for the annotation.
And the version of org.Mm.eg.db that gets loaded differs between the mogene10sttranscriptcluster.db and AnnotationDbi code paths.
I found this because when I loaded a different version of org.Mm.eg.db, the same version of mogene10sttranscriptcluster.db (v8.7.0) produced different results.
So, check the version of your org.Mm.eg.db.
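To see which versions are actually in play in a given session, something like this should do (a sketch):

    library(mogene10sttranscriptcluster.db)
    packageVersion("mogene10sttranscriptcluster.db")
    packageVersion("org.Mm.eg.db")  # the organism-level package the chip package maps through
    sessionInfo()                   # lists every attached package with its version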
| biostars | {"uid": 380939, "view_count": 1154, "vote_count": 1} |
I have a list of genes from an Affymetrix microarray analysis.
However, I have duplicate genes. I wonder how to collapse them to unique genes by calculating the mean expression of the transcripts from the same gene locus?
Any information is valid.
Thank you! | Hello Joe,
You can use `aggregate()` or *limma*'s `avereps()` function.
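A minimal sketch of both options, assuming `expr` is your expression matrix and `genes` a matching vector of gene symbols (names hypothetical):

    library(limma)

    # limma: average replicate rows that share a gene symbol
    collapsed <- avereps(expr, ID = genes)

    # base-R equivalent with aggregate()
    collapsed2 <- aggregate(as.data.frame(expr), by = list(gene = genes), FUN = mean)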
Kind regards,
Kevin | biostars | {"uid": 407358, "view_count": 2072, "vote_count": 1} |
Hi,
Do you know what letters like R, Y, S, G... (in the genotype column) mean in the output file from VarScan? Or do I always need to generate a VCF file if I want to know the genotype at that position for the tumour and normal samples?
<a href="https://ibb.co/qp6YQD5"><img src="https://i.ibb.co/SBSwC6r/gt.png" alt="gt" border="0" /></a> | [degenerate notation][1] ?
[1]: https://en.wikipedia.org/wiki/Nucleic_acid_notation#IUPAC_notation | biostars | {"uid": 484409, "view_count": 523, "vote_count": 1} |
Dear Biostars,
I'm a new PhD student, and my main project is the identification and differential expression analysis of piRNAs in human colorectal cancer (CRC) cell lines with small RNA-seq.
I would like to ask about the bioinformatic analysis workflow, and which databases/libraries to use for the different small RNA classes.
I have found various strategies regarding the identification of piRNAs and I will follow this kind of workflow:
1. Use bowtie for sequence alignment to map reads to the genome (hg38 [iGenome][1])
2. annotate reads to rRNA library (?which database)
3. annotate remaining reads to mature and hairpin libraries (miRNAs) (miRBase)
**Option 4a**
4. annotate remaining reads to tRNA library (GtRNAdb)
5. annotate remaining reads to Rfam library (snRNA, snoRNA, lncRNA)
6. annotate remaining reads to piRNA database ([piRBase][2] or [piRNABank][3])
**Option 4b**
4. annotate remaining reads to tRNA library (GtRNAdb)
5. annotate remaining reads to Rfam library (snRNA,snoRNA,lncRNA)
6. annotate remaining reads to piRNA cluster database ([piRNAclusterdb][4])
Three publications have shown that piRNAs can also derive from tRNAs: 1) [The biogenesis pathway of tRNA-derived piRNAs in Bombyx germ cells][5] 2) [The human Piwi protein Hiwi2 associates with tRNA-derived piRNAs in somatic cells][6] 3) [tRNA processing defects induce replication stress and Chk2-dependent disruption of piRNA transcription.][7]
Thus, I should change the order of annotation libraries:
**Alternative 4a**
4. annotate remaining reads to Rfam library (snRNA, snoRNA, lncRNA)
5. annotate remaining reads to piRNA database ([piRBase][8] or [piRNABank][9])
6. annotate remaining reads to tRNA library (GtRNAdb)
**Alternative 4b**
4. annotate remaining reads to Rfam library (snRNA, snoRNA, lncRNA)
5. annotate remaining reads to piRNA cluster database ([piRNAclusterdb][10])
6. annotate remaining reads to tRNA library (GtRNAdb)
Also, I am considering re-mapping reads against a transposon library (RepBase), as found in the piRNA DBs, in order to "filter" the annotated results of the piRBase/piRNABank libraries down to a more robust final piRNA dataset.
**A) Which database/library should I use for the annotation of rRNA?** I have read these posts:
[cannot find biostar stackexchange][11] [Human Non Coding Rrna Sequences For Download][12] [SEQanswers][13]
but it is not clear to me which one is the best option for an annotation library... pardon my naivety.
**B)** The next step would be to normalize counts. I have read about RPM/RPKM/TPM and other types of normalization, but which one is more robust for small-RNA counts, and which one should be used for piRNA clusters?
Because of the length of piRNA clusters (~5-60 kbp), I think I should use TPM normalization to show the relative abundance in different cell lines. Is this correct?
**C)** Regarding mature piRNAs, for the DE analysis should I follow the workflows of edgeR, DESeq2, or limma-voom?
**D)** Does this workflow seem robust, or does it need corrections?
Thank you for your time and consideration
Konstantinos
[1]: https://support.illumina.com/sequencing/sequencing_software/igenome.html
[2]: http://www.regulatoryrna.org/database/piRNA/
[3]: http://pirnabank.ibab.ac.in/
[4]: http://www.smallrnagroup.uni-mainz.de/piRNAclusterDB.html
[5]: https://academic.oup.com/nar/article/45/15/9108/3883741
[6]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4132735/
[7]: https://www.ncbi.nlm.nih.gov/pubmed/26471728
[8]: http://www.regulatoryrna.org/database/piRNA/
[9]: http://pirnabank.ibab.ac.in/
[10]: http://www.smallrnagroup.uni-mainz.de/piRNAclusterDB.html
[11]: https://www.biostars.org/p/7955/
[12]: https://www.biostars.org/p/19154/
[13]: http://seqanswers.com/forums/showthread.php?t=41868 | Welcome to the wonderfully strange world of piRNAs :) I will first urge you to read two of my previous answers on the subject:
https://www.biostars.org/p/235278/#235289
https://www.biostars.org/p/179944/#228109
TL;DR: if you are a novice, use piPipes. It is very easy to set up, especially if you are working with human data, and it has been developed by a lab with good standing in the field. It will do pretty much everything you have listed, with the exception of using the piRNA databases, which, to the best of my knowledge, most high-profile, experienced piRNA groups don't use at all, at least those working in fly/mouse/zebrafish. And why, you ask? Because piRNAs are quite flexible and, in some species, are not transcribed from defined clusters/transcription units, unlike miRNAs for instance. So I don't see how useful those databases are. So when analysing piRNAs, ignore what you would do for miRNAs (I am not even sure what "mature piRNAs" are).
If you still want to create/use your own pipeline, read the manual provided with piPipes - it will give a very good insight into the steps. You can also have a look at these tools:
http://www.smallrnagroup.uni-mainz.de/software.html
Second and more important: when looking at somatic piRNAs, extra care should be taken to make sure these are "real" piRNAs. As I mentioned, piRNAs are not (usually*) processed from particular clusters and are not very well defined in terms of sequence composition/size, so what one might think is a piRNA could very well be a degradation product. This is true especially if one is analyzing total small RNA libraries in somatic cells. Ideally one would identify them with an IP for an Argonaute that processes them. In the germline, while the IP is important, it is less critical, because in these tissues piRNAs are the most abundant small RNA species. For instance, of the three papers you reference, two are done in the germline and one IPs an Argonaute. Also, in Drosophila some piRNAs are produced in clusters, which also appears to be the case in humans for _some_ piRNAs ([ref](https://www.ncbi.nlm.nih.gov/pubmed/28842508)). I will also add that somatic piRNAs are still quite controversial, at least for some groups. You had better make sure those are really piRNAs. I am agnostic on the matter, so as long as the evidence is strong I will accept it.
*everything is an exception in the piRNA world depending on which species is used as a reference for comparison.
If something is not clear, just ask and I will try my best to explain. | biostars | {"uid": 347827, "view_count": 2902, "vote_count": 4}
Hello!
I am attempting to remove host cell line reads from viral reads for a viral genome sequencing effort. (Miseq 2x300 Nextera). I downloaded and set up deconseq according to the install file, and generated a database using bwa64 on my desired genome. This worked great and I was able to filter the host genomic reads from my viral reads. I used velvet to assemble my clean viral reads and everything looked good. I blasted my resulting contigs and discovered that the host cell mitochondrial DNA reads had not been filtered out. I attempted to generate a new database with a mitochondrial genome I got from genbank. I used a nearly identical command to generate this new database but it would fail to create all 8 necessary files. I would only ever get 4. The `.amb` `.ann` `.pac` `.rpac` files are generated but not the `.bwt` `.rbwt` `.sa` `.rsa` files.
This is the command I used:
bwa64 index -p dogmito -a bwtsw ~/deconseq-standalone-0.4.3/db/dogmito.fna > out.txt 2>&1
The out.txt file reads
[bwa_index] Pack FASTA... 0.00 sec
[bwa_index] Reverse the packed sequence... 0.00 sec
[bwa_index] Construct BWT for the packed sequence...
Additionally I noticed that when I typed `ls` to see if the files had been generated I got the following message:
[1]+ Segmentation fault (core dumped) bwa64 index -p dogmito -a bwtsw ~/deconseq-standalone-0.4.3/db/dogmito.fna > out.txt 2>&1
I've looked through my mitochondrial genome fasta and I can't see anything out of the ordinary.
Any idea what I'm doing wrong?
Thanks!! | `bwtsw` is only for large genomes; it fails (and can even segfault) on a reference as small as a mitochondrial genome.
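A minimal fix, assuming the same file layout as in your command: switch to the `is` algorithm, which is meant for small references:

    bwa64 index -p dogmito -a is ~/deconseq-standalone-0.4.3/db/dogmito.fna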
See also: https://www.biostars.org/p/53546/ | biostars | {"uid": 143866, "view_count": 3078, "vote_count": 1}
I have analyzed RNA-seq data with DESeq2 and am trying to plot a 3D PCA using rgl-plot3d.
I was trying to output PC1, PC2, and PC3 and then plot them. However, I realized that I get different results for PC1 (and PC2) when I try plotPCA (used with DESeq2) and prcomp.
What is the bug in my code?
dds <- DESeqDataSetFromHTSeqCount( sampleTable = sampleTable,
directory = directory,
design= ~group)
rld <- rlog(dds, blind=TRUE)
**From DESeq2:**
data <- plotPCA(rld, intgroup=c("treatment", "sex"), returnData=TRUE )
data$PC1
> [1] -1.9169863 -2.0420236 -1.9979900 -1.8891056 0.9242008 1.0638140
>[7] 0.6911183 1.0551864 0.9598643 -1.5947907 -1.5666862 -1.6694684
>[13] -1.2523658 -1.0785239 1.3005578 2.2913536 2.5381586 2.4287372
>[19] 1.7549495
**Using prcomp**
mat <- assay(rld)
pca<-prcomp(t(mat))
pca <- as.data.frame(pca$x)
pca$PC1
>[1] -1.29133735 -2.96001734 -3.08855648 -3.51855030 -0.68814370 -0.01753268
>[7] -2.31119461 -0.10533404 -1.45742308 -1.30239486 -1.36344946 -1.93761580
>[13] 6.04484324 4.83113873 0.75050886 -0.14905189 2.70759465 3.43851631
>[19] 2.41799979
| `plotPCA` by default uses the top 500 most variable genes prior to `prcomp`, and you used all genes.
Check the source code of `plotPCA` for details, e.g. by typing `getMethod("plotPCA","DESeqTransform")` in `R`.
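To make the two agree, you can either pass `ntop` to `plotPCA()` or restrict `prcomp` to the same top-500 gene set; a sketch (PC signs are arbitrary, so a column may still come out flipped):

    # Option 1: make plotPCA use every gene
    data <- plotPCA(rld, intgroup = c("treatment", "sex"), returnData = TRUE, ntop = nrow(rld))

    # Option 2: feed prcomp the 500 most variable genes, as plotPCA does internally
    mat <- assay(rld)
    rv <- matrixStats::rowVars(mat)
    top <- order(rv, decreasing = TRUE)[seq_len(min(500, length(rv)))]
    pca <- prcomp(t(mat[top, ]))
    head(pca$x[, "PC1"])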
Edit: Cool, https://www.biostars.org/u/41557/ and https://www.biostars.org/u/1767/ gave the same answers basically at the same time, so I guess this increases confidence :) | biostars | {"uid": 416573, "view_count": 3896, "vote_count": 2} |
I should start by saying I have solved this problem, but I feel like my code is ugly and overkill.
I am trying to sort a dataframe on a column containing human chromosomes:
chr1, chr2, ..., chrY, chrX.
The problem is:
chr1 is followed by chr10.
What tricks do you use to deal with this problem **in the R environment**?
| mixedsort is pretty good, but it will think chrM comes before chrX and chrY.
If you have your own arbitrary order, you should just use factors:
```
> df<-data.frame("chr"=c("chr1","chrM","chr10","chr2","chrX","chr2"),"val"=c(1,2,3,4,5,6))
> df
chr val
1 chr1 1
2 chrM 2
3 chr10 3
4 chr2 4
5 chrX 5
6 chr2 6
> chrOrder<-c(paste("chr",1:22,sep=""),"chrX","chrY","chrM")
> df$chr<-factor(df$chr, levels=chrOrder)
> df$chr
[1] chr1 chrM chr10 chr2 chrX chr2
Levels: chr1 chr2 chr3 chr4 chr5 chr6 chr7 chr8 chr9 chr10 chr11 chr12 chr13 chr14 chr15 chr16 chr17 chr18 chr19 chr20 chr21 chr22 chrX chrY chrM
> df[order(df$chr),]
chr val
1 chr1 1
4 chr2 4
6 chr2 6
3 chr10 3
5 chrX 5
2 chrM 2
```
 | biostars | {"uid": 17224, "view_count": 9162, "vote_count": 6}
I have the genome of a transposase from a ciliate protozoan. I need to identify whether the genome comes from the macronucleus or from the micronucleus; for that I have to know whether it is from linear DNA or from circular DNA. How can I design an experiment for this genome sequence to determine which nucleus it is from?
| I only have the genome sequence from the ciliate cell; only the sequence is known.
| biostars | {"uid": 102378, "view_count": 1869, "vote_count": 1} |
I was trying to use the fgsea library for enrichment analysis, but the error below came up due to some issue with BiocParallel; it was resolved by adding the line `register(SerialParam())` to the code.
Original [source][1] with a bit of modification
Error in registered()[[bpparamClass]] :
attempt to select less than one element in get1index
library(fgsea)
library(BiocParallel)
res <- read.csv("HSC_LSC_deseq2_CDS.txt",sep ='\t') # deseq2 output
head(res)
library(dplyr)
res2 <- res %>%
dplyr::select(Symbol, stat) %>%
na.omit() %>%
distinct() %>%
group_by(Symbol) %>%
summarize(stat=mean(stat))
res2
library(tidyverse)
ranks <- deframe(res2)
head(ranks, 20)
pathways.hallmark <- gmtPathways('~/mysigdb/h.all.v6.2.symbols.gmt') # files from mysigdb
pathways.hallmark %>%
head() %>%
lapply(head)
class(pathways.hallmark)
length(pathways.hallmark)
pathways.hallmark[1]
register(SerialParam())
fgseaRes <- fgsea(pathways=pathways.hallmark, stats=ranks, nperm=1000)
fgseaResTidy <- fgseaRes %>% as_tibble() %>% arrange(desc(NES))
fgseaResTidy %>%
dplyr::select(-leadingEdge, -ES, -nMoreExtreme) %>%
arrange(padj) %>%
DT::datatable()
library(ggplot2)
ggplot(fgseaResTidy, aes(reorder(pathway, NES), NES)) +
geom_col(aes(fill=padj<0.05)) +
coord_flip() +
labs(x="Pathway", y="Normalized Enrichment Score",
title="Hallmark pathways NES from GSEA") +
theme_minimal()
[1]: https://stephenturner.github.io/deseq-to-fgsea/
If you spot any mistakes in the code or any issues in running it, corrections are welcome. | I experienced the same thing just now on a not-so-well-patched computer running an older version (R3.4.4/fgsea_1.0.2/BiocParallel_1.8.2), but tried it on another computer that I keep better updated (R3.5.2/fgsea_1.8.0/BiocParallel_1.8.2) and it worked. My advice would be to upgrade your R installation to the latest one and reinstall the packages. | biostars | {"uid": 327699, "view_count": 3941, "vote_count": 1}
Hello all, I've recently started with DNA-related analysis and was wondering whether, if I take:
https://www.ensembl.org/info/data/ftp/index.html (cDNA)
does this represent, for example, the human exome? If not, what are the differences, and how can one obtain the missing information then?
Thank you! | The cDNA file contains all mRNAs of the human genome: CDS plus UTRs (where available). It thus represents the transcribed part of the genome that will eventually be translated into protein, which is close to, but not the same as, the exome (usually defined as the set of exonic intervals in genomic coordinates). | biostars | {"uid": 339805, "view_count": 1187, "vote_count": 1}
If I want to read a bam like this
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <htslib/sam.h>
int main(int argc, char *argv[]){
samFile *fp_in = hts_open(argv[1],"r"); //open bam file
bam_hdr_t *bamHdr = sam_hdr_read(fp_in); //read header
bam1_t *aln = bam_init1(); //initialize an alignment
char *chrom = argv[2];
int locus = atoi(argv[3]);
int comp ;
printf("%s\t%d\n", chrom, locus);
//header parse
//uint32_t *tar = bamHdr->text ;
//uint32_t *tarlen = bamHdr->target_len ;
//printf("%d\n",tar);
while(sam_read1(fp_in,bamHdr,aln) > 0){
int32_t pos = aln->core.pos +1; //left most position of alignment in zero based coordianate (+1)
char *chr = bamHdr->target_name[aln->core.tid] ; //contig name (chromosome)
uint32_t len = aln->core.l_qseq; //length of the read.
        uint8_t *q = bam_get_seq(aln); //4-bit encoded sequence (not the quality string)
        uint32_t q2 = aln->core.qual ; //mapping quality
        char *qseq = (char *)malloc(len+1); //+1 for the terminating '\0'
        for(int i=0; i< len ; i++){
            qseq[i] = seq_nt16_str[bam_seqi(q,i)]; //gets the nucleotide id and converts it into an IUPAC letter
        }
        qseq[len] = '\0'; //terminate the string so it can be printed with %s
//printf("%s\t%d\t%d\t%s\t%s\t%d\n",chr,pos,len,qseq,q,q2);
if(strcmp(chrom, chr) == 0){
if(locus > pos+len){
printf("%s\t%d\t%d\t%s\t%s\t%d\n",chr,pos,len,qseq,q,q2);
}
}
        free(qseq); //qseq is allocated anew for every read
    }
bam_destroy1(aln);
sam_close(fp_in);
return 0;
}
How do I get the strand info? In SAM files it seems to be represented in the FLAG field as 16 for reverse and 0 for forward. | I think you could call the following for each read:
bool is_reverse = bam_is_rev(aln);
See: https://github.com/samtools/htslib/blob/6e86e386bed5c80c5222bd2e8eb2fd8c99234909/htslib/sam.h#L200-L205
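Inside your `while` loop it could be used like this (a sketch; `aln`, `chr`, and `pos` as in your code):

    char strand = bam_is_rev(aln) ? '-' : '+';   // '-' when the reverse-strand FLAG bit (0x10, i.e. 16) is set
    printf("%s\t%d\t%c\n", chr, pos, strand);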
It's a convenience macro for the call described in Pierre's answer. You might need to add:
#include <stdbool.h>
to add support for the `bool` type. | biostars | {"uid": 344069, "view_count": 1767, "vote_count": 1} |
I have just over 100 files from an RNA-seq experiment and I have been working on converting these BAM files to bigwig files for visualization in the UCSC Genome Browser.
I am new to the field of RNA-seq, but from what I understand, I need to scale/normalise the BAM files before conversion to bigwig format. There appear to be all sorts of different methods to normalise BAM files. I have found that bedtools allows you to scale samples (using the -scale option) by a specific scale factor. I am tempted to use this option as I was already using bedtools in my script that converts the BAM files to bigwig files.
I guess my question is: which method do I use to normalise my samples? How do I decide which scaling factor to use in my bedtools command?
Thanks. | The most convenient solution is IMHO `deeptools bamCoverage`. It offers multiple options for normalization, binning and strand-specificity. Have a look at the [docs][1].
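For example, a typical invocation could look like this (file names hypothetical; `--normalizeUsing` accepts CPM, RPKM, BPM, or RPGC):

    bamCoverage -b sample.bam -o sample.bw --binSize 10 --normalizeUsing CPM -p 8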
[1]: https://deeptools.readthedocs.io/en/develop/content/tools/bamCoverage.html | biostars | {"uid": 333137, "view_count": 2373, "vote_count": 2} |
Hello biostars,
I have a FASTA file of about 300 kbp, and I have several sequences (400-800 bp) that I want to search for in it.
I tried using BLAST, but I couldn't find the option to allow me to upload my own FASTA file in the Search Set box.
I tried this:
http://www.ebi.ac.uk/Tools/psa/emboss_water/nucleotide.html
EMBOSS water stops after listing the first hit, but the sequences occur multiple times throughout the FASTA file.
Does anyone have any other software or other ideas? Thank you!
| You can use the command line version of blast+ to create a database out of your sequences (using `makeblastdb`) and then blast your query sequence against this database. It is not possible to do what you want on the web. | biostars | {"uid": 250341, "view_count": 2725, "vote_count": 2} |
Hi,
I would like to compare the cytokine levels of four groups of mice with different treatments. There are four to six mice in each treatment group, so the sample size is not large. The standard deviation of most cytokine levels in each treatment group is much larger than the mean.
Also, there are many zeros in the data set (very low cytokine levels).
What type of tests should I use to compare cytokine levels in different treatment groups?
Any help is greatly appreciated. | The Mann-Whitney test can be used as a non-parametric test to compare the medians of two independent groups (when the samples do not follow a normal distribution). | biostars | {"uid": 191264, "view_count": 1693, "vote_count": 1}
Hello all,
I have a list of GO terms and I would like to create the corresponding network. Even better, I would like to be able to create the *entire* GO metabolic network and then highlight my terms in that network.
I have a few challenges: I am studying the microbiome, which largely consists of unknown/predicted genes, so I have derived GO terms from blastx homology hits / Blast2GO. This means that (1) I do not have a specific organism to select, (2) I do not have gene names, and (3) I have only GO terms.
Now Cytoscape has many wonderful GO enrichment tools, and I was advised to use these by my colleagues.
But I can't seem to find **a method to create a network from GO terms or to import the full GO network**. | This is how I accomplished the task of making the GO network in Cytoscape.
1) Download go.obo from http://geneontology.org/page/download-ontology
2) Run my python script to convert go.obo to GO.cytoscape.sif file
3) Cytoscape -> Import -> Network -> File, select the GO.cytoscape.sif file
PRESTO, the full network is loaded!
*As for my goal not being clear: yes, my end game is exactly to do a comparison / enrichment analysis.* But the network needed to exist before I could do that. I honestly don't know the next step at this juncture... still learning. The next step will be to create sub-graphs representing the GO terms from each new input file (for multiple treatments), and then compare them.
import re
if __name__ == "__main__":
inputOBO = "E:/go.obo"
outputSIF = "E:/GO.cytoscape.sif"
outF = open(outputSIF, "w")
newTerm = False
GOvalue = "initilize"
isAfound = False
for line in open(inputOBO, "r"):
if newTerm: #expect id: value line
GOvalue = line[4:-1]
newTerm = False
isAfound = False
if line == "[Term]\n":
newTerm = True
if line == "[Typedef]\n":
break #read all the nodes, don't need type defs
if line == "\n": #end of entry
isAfound = False
elif re.search("^is_a:",line) != None:
isAfound = True
if isAfound:
splits = line.split()
outLine="\n"
if splits[0] == "relationship:":
outLine = GOvalue + "\t" + splits[1] + "\t" + splits[2] + "\n"
outF.write( outLine )
elif splits[0] == "intersection_of:" or splits[0][:5] == "creat":
#do nothing, don't use
outLine=outLine
else:
outLine = GOvalue + "\t" + splits[0] + "\t" + splits[1] + "\n"
                    outF.write( outLine )
        outF.close()  # close the SIF file so it is fully written before loading into Cytoscape
 | biostars | {"uid": 193587, "view_count": 6546, "vote_count": 2}
Hi guys,
I need to identify insertions of specific sequences in my genomes. I have WGS data with a read length of 250 nt. Previously I was working on the identification of large sequence insertions, and simply looked for discordantly mapped reads using bowtie2 and bbmap.
Now I am looking for sequences that are shorter than the read length. Since I want these data to be comparable to my previous computations, I want to use bowtie2 for alignment again. As far as I understand, I should use local mode. I've tried this, but got alignments of only very short parts of my reads. Is there a way to require the alignment length to equal the reference sequence length? Once I identify the reads that map to the reference sequences, I will find where their mates map on the reference genome to locate the insertion site; is that a correct strategy?
Thanks in advance! | You can call insertions near read length with BBMap like this (assuming paired interleaved reads that mostly overlap):
bbmerge.sh in=reads.fq out=merged.fq
reformat.sh in=merged.fq out=filtered.fq minlen=350
bbmap.sh in=filtered.fq out=mapped.sam ref=ref.fa slow
callvariants.sh in=mapped.sam out=vars.vcf ref=ref.fa ploidy=2
That's a variant of a longer and slower pipeline that I used to successfully call insertions from 1bp to 219bp using 2x100bp reads, on E.coli. If you have sufficient time, coverage, and memory, you can use this method (adjusted for 2x250bp reads) instead for even longer insertions (it's great on small genomes but would take a lot of memory for human-sized ones):
filterbytile.sh in=reads.fq.gz out=filtered_by_tile.fq.gz
bbduk.sh in=filtered_by_tile.fq.gz out=trimmed.fq.gz maxns=0 ktrim=r k=23 mink=11 hdist=1 tbo tpe minlen=220 ref=adapters.fa
clumpify.sh in=trimmed.fq.gz out=eccc.fq.gz passes=4 ecc unpair repair
tadpole.sh in=eccc.fq.gz out=ecct.fq.gz ecc prefilter=2
tadpole.sh in=ecct.fq.gz out=extended.fq.gz ordered mode=extend el=20 er=20 k=62 prefilter=2
bbmerge-auto.sh in=extended.fq.gz out=merged.fq.gz outu=unmerged.fq.gz rem k=93 extend2=120 prefilter=2
bbmap.sh in=merged.fq.gz out=merged.sam.gz slow ref=ref.fa
callvariants.sh in=mapped.sam out=vars.vcf ref=ref.fa ploidy=2
| biostars | {"uid": 259835, "view_count": 2263, "vote_count": 1} |
Does anyone know what the difference is between the Coriell sample NA12878 and the NIST sample RM 8398? As far as I can tell, RM 8398 and NA12878 are extracted from the exact same cell lines, except the Coriell sample is $84 and the NIST sample is $453. Links are below:
NA12878: https://www.coriell.org/0/Sections/Search/Sample_Detail.aspx?Ref=NA12878&Product=DNA
RM 8398: https://www-s.nist.gov/srmors/view_detail.cfm?srm=8398
Any help would be great! Thank you. | The NIST source preparation material discusses how RM 8398 was extracted from the NA12878 cell line. It notes that cell lines are subject to mutation, so the sequence can start to vary over time. NIST controls for this by producing a large batch of NA12878 and mixing the extracted DNA to reduce the variation between the pooled DNA sequences (at least that's what I understand from the notes):
> SOURCE PREPARATION: (1) Coriell Institute for Medical Research (Camden,
> NJ) grew a large growth of their cell line GM12878 in multiple stages,
> produced approximately 83 mg of extracted DNA, and then mixed the DNA
> and aliquoted it into vials, with the DNA divided approximately
> equally into vials. Specifically, the pool of cells was split into
> three separate volumes for DNA extraction, and the extracted DNA was
> re-pooled and gently mixed at 4 °C for greater than 48 h before the
> material was aliquoted automatically into vials of 10 μg of DNA. Note:
> This RM is isolated DNA rather than live cells because cells
> are less stable and can mutate with each cell division, so
> that the sequence of live cells may not be stable over time.
> Extracting DNA from a large batch of cells helps ensure that all vials
> contain essentially the same sequences of DNA. DNA is currently
> available from this same cell line from Coriell with the number
> NA12878, but it may contain small differences in the DNA sequence due
> to different mutations occurring in different batches of the cells.
>
> Stability:
> Stability was assessed by measuring the size distribution of DNA with pulsed field gel
> electrophoresis (PFGE). Using PFGE, no change in the size
> distribution was detected after storage at 4 °C for eight weeks, but
> the size distribution decreased significantly when stored at 37 °C for
> 8 weeks. In addition, no change was detected after five freeze-thaw
> cycles, pipetting vigorously, or vortexing. However, because we only
> measure size distribution, we still recommend storing at –20 °C for
> long periods of time and limiting freeze-thaw cycles, particularly if
> the measurement method requires long, undamaged DNA fragments.
>
> Homogeneity: NIST sequenced multiple vials in an experiment designed
> to assess homogeneity of the samples. No significant differences were
> detected in terms of proportion of variant or copy number, except for
> a few in regions known to be susceptible to systematic errors. These
> results, along with the mixing of DNA before aliquoting, provide
> confidence that no large differences in small variants or copy number
> are likely to exist between different vials
I guess the extra QC steps, and the fact that it's this sample that was used for the GIAB variant calls, would make RM 8398 the gold standard if you were looking to validate your pipeline etc.
[https://www-s.nist.gov/srmors/certificates/8398.pdf][1]
[1]: https://www-s.nist.gov/srmors/certificates/8398.pdf | biostars | {"uid": 339082, "view_count": 2718, "vote_count": 1} |
Hi all,
I am looking for a test data set for some software I'm developing, which would be small and simple, but still biologically meaningful. What I need is raw Illumina-style NGS data - basically fastq format files (paired-end). I thought that a phage genome might be a good choice due to its small size and simplicity, but I couldn't find such data online. Maybe NGS is a bit of an overkill for phage genomes... Any other ideas?
Just to make things clear, I'm not looking for assembled genomes, but rather for the raw data.
Any database, publicly-available data set or publication that you know of?
Thanks! | http://www.ncbi.nlm.nih.gov/sra/?term=phage | biostars | {"uid": 209903, "view_count": 6130, "vote_count": 1} |
I wonder what the relationship is between the PLINK *.tped* and *.ped* files. From what I observe, it seems to be more complicated than a simple transpose.
For example, in Section 4.1.1 of [this manual][1], there is an example as following:
```
1 1 0 0 1 1 1 1 G G
1 2 0 0 2 1 0 0 A G
1 3 0 0 1 1 1 1 A G
1 4 0 0 2 1 2 1 A A
```
is transposed as
```
1 snp1 0 10001 1 1 0 0 1 1 2 1
1 snp2 0 20001 G G G A G A A A
# ^ ^ ^ ^
```
instead of what I thought it should be:
```
1 snp1 0 10001 1 1 0 0 1 1 2 1
1 snp2 0 20001 G G A G A G A A
# ^ ^ ^ ^
```
Why is there a reversal here?
And I think this reversal is not guaranteed to happen, because in the example in Section 3.4 of the same manual, it's hard to tell if there is any pattern for which genotypes get reversed.
(I am quite new to this area, and I hope the reason is not something as superficial as common sense in this domain.)
[1]: http://pngu.mgh.harvard.edu/~purcell/plink/dist/plink-doc-1.07.pdf | Interesting, I didn't know about that! Could it be that PLINK internally just sorts the alleles using some arbitrary rules?
I just ran a test with input alleles "G A", "A G" in various combinations with other SNPs and they always came out as "G A" in the transposed dataset.
Similarly, "G T", "T G" always becomes "G T", "G C", "C G" always becomes "G C" etc. "A T"/"T A" is always "A T", "A C"/"C A" becomes "A C", "G C"/"C G" becomes "G C". It can't be alphabetically sorted for obvious reasons.
The funny thing is, if I repeat the same thing using [PLINK2](https://www.cog-genomics.org/plink2/), I get alphabetically sorted alleles: your example becomes G G A G A G A A (and my test-cases become alphabetically sorted, too). That makes me think that it's rather arbitrary and doesn't particularly matter.
Edit: I think it has to do with the way PLINK 1.07 stores genotypes as numbers - if you run
plink --file mytest --recode --transpose
you get the above inconsistent behaviour, but if you run
plink --file mytest --recode12 --transpose
so that all genotypes become numerically recoded, you'll always see "1 2" for all test cases, so these genotypes seem to be not alphabetically, but numerically sorted! | biostars | {"uid": 168804, "view_count": 3675, "vote_count": 3} |
Suppose I have a list of genes obtained from an experiment using mice. Now I have to perform an enrichment and I can choose to perform it using mouse GO terms, human GO terms, etc...
Is it ok to use human GO terms over mice genes?
If yes, then why bother creating specific mouse GO terms? I understand that mice may be used as a human model for some diseases, but then... why are the two sets of GO terms different?
If I can apply human GO terms over mice genes, should I perform enrichment analysis using zebrafish GO terms over mice genes (or human genes!), if those genes share a certain degree of similarity (such as homology)?
EDIT: I posted [the same question][1] on Bioinformatics Stack Exchange, but got no answers there.
[1]: https://bioinformatics.stackexchange.com/questions/14760/is-it-ok-to-use-gene-sets-from-an-organism-to-enrich-genes-from-a-different-orga | Gene sets such as GO terms are developed by integrating the current scientific knowledge about the function of genes. Even in the case when you use human GO terms in human, many of the functions of those human genes which are "summarized" by the human ontologies, have actually been (at least in part) first discovered in other model organisms such as mouse. Thus, even if you look at them from the human point of view, they may reflect general knowledge which applies to many organisms.
In my opinion, this is a fuzzy characteristic of gene ontologies. All in all, when you use gene ontologies in another organism, a good strategy can be to assume that the genes that are homologous/orthologous between the organisms share the same functions. This may sometimes not be the case, but I believe that typical pathway enrichment analyses discover "general trends" which may not be so impacted by particular exceptions.
Personally I am fond of the [msigdbr][1] package, which provides gene sets from the MSigDB (including GO, KEGG, Reactome...) for different organisms. It is built by starting from the human terms (containing human genes) and translating them to other species by selecting the orthologs as defined by the HUGO Gene Nomenclature Committee.
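A minimal sketch of pulling, say, the hallmark sets with mouse orthologs:

    library(msigdbr)
    h_mouse <- msigdbr(species = "Mus musculus", category = "H")
    head(h_mouse[, c("gs_name", "gene_symbol")])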
[1]: https://cran.r-project.org/web/packages/msigdbr/vignettes/msigdbr-intro.html | biostars | {"uid": 472892, "view_count": 787, "vote_count": 1} |
Hello,
I wanted to inspect the (non-N) regions that were added in GRCh38/hg38 and were not there in GRCh37/hg19. Is there an hg38 bed file with those "new" regions somewhere, or is there an easy way to get them without lifting over again?
Thanks for your help! | I used the excellent remap service from NCBI:
http://www.ncbi.nlm.nih.gov/genome/tools/remap
to go from my hg19 assembly to my hg38 assembly.
I made a dummy bed file using the chromosome lengths, which you can get from here:
`mysql --user=genome --host=genome-mysql.cse.ucsc.edu -A -e "select chrom, size from hg19.chromInfo" > hg19.genome`
and selected:
Minimum ratio of bases that must be remapped: 0
Maximum ratio for difference between source length and target length: 100000
and that gave me all possible mappings. | biostars | {"uid": 195827, "view_count": 1888, "vote_count": 1} |
For a single paired-end sample the command is
hisat2 -x /path/to/hg19/indices -1 sample_1.fq.gz -2 sample_2.fq.gz | samtools view -Sbo sample.bam -
Our queue is full so I'm not able to test this, but is it possible to align multiple paired-end samples in one go, and how would I go about it?
| Probably your `sample_1.fq.gz` and `sample_2.fq.gz` **files** already contain multiple **reads** inside. I guess you want to align multiple files, right?
But do you want the output in a single file, or multiple files? For the former, you can pass a comma-separated list of files to hisat2 (see `-1` and `-2` in the hisat2 [manual][1]). For the latter, there are several options, such as a bash loop, GNU Parallel, a make script, or job arrays (if your queue is managed with SLURM / Torque / SGE), among others.
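For the single-output case, the comma-separated form would look like this (sample names hypothetical; note there must be no spaces around the commas):

    hisat2 -x /path/to/hg19/indices \
        -1 sampleA_1.fq.gz,sampleB_1.fq.gz \
        -2 sampleA_2.fq.gz,sampleB_2.fq.gz | samtools view -bo combined.bam -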
[1]: https://ccb.jhu.edu/software/hisat2/manual.shtml | biostars | {"uid": 293107, "view_count": 4763, "vote_count": 1} |
Hello
I need to convert gene stable IDs like this (ENSMUSG00000028461.12) to gene names. I used BioMart and I have a list of gene stable IDs. I don't know how to join my list with the BioMart output, because the BioMart gene stable IDs do not have the decimal (version) suffix. Do you have any idea?
thanks | Hello,
I don't understand whether your problem is that your input has version numbers or that the output of BioMart doesn't. Nevertheless, we can solve both:
For the input choose under `Filters` the category `Input external references ID list` and select in the drop down `Gene stable ID(s) with version`.
For the output choose under `Attributes` `Gene stable ID version` in the category `GENE`.
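Alternatively, you can simply strip the version suffix from your own list before joining; a one-liner sketch (file name hypothetical):

    sed 's/\.[0-9]*$//' my_gene_ids.txt > my_gene_ids_noversion.txt

This turns e.g. ENSMUSG00000028461.12 into ENSMUSG00000028461.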
fin swimmer | biostars | {"uid": 332886, "view_count": 3155, "vote_count": 1} |
I have a large fasta file of 16S sequences and I want to retrieve sequences using a list of organism names. Do you know a script capable of doing it?
EDIT:
Headers look like this:
```
>S000000859 Bacillus sp. USC14; AF346495
sequence
>S000001027 Paenibacillus borealis; KN25; AJ011325
sequence
```
And I have a list like the following:
```
Paenibacilus borealis
Paenibacillus sp. 1-18
Paenibacillus sp. 1-49
Paenibacillus sp. A9
Paenibacillus sp. Aloe-11
```
I want to retrieve those sequences that match the names present in the list. | Since all the names in your list contain the common substring 'Paenibaci', I have followed this approach:
```
grep '^>' input.fasta > headers.txt
sed -n '/Paenibaci/p' headers.txt > filter_headers.txt
$ ./faSomeRecords input.faa filter_headers.txt output.faa
```
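If the names did not share a common substring, the same idea generalizes by matching the header lines against your whole list (a sketch; assumes one organism name per line in `names.txt`), after which the filtered header list is fed to faSomeRecords as above:

```
grep '^>' input.fasta | grep -F -f names.txt > filter_headers.txt
```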
https://www.biostars.org/p/136926/#136928 | biostars | {"uid": 141241, "view_count": 14949, "vote_count": 3} |
Dear all,
I am working on circular RNAs and miRNAs. I used the tool DCC to get possible circular RNA candidates from my STAR alignment. Since it is known that circular RNA expression can affect miRNAs in various ways (and vice versa), I would like to compute the likelihood that one specific circRNA binds to a miRNA (or vice versa). Since some of my circRNAs are not annotated yet, I can only use the genomic coordinates and the corresponding sequence. It would be too simplistic to just compare the sequences of the circRNA and miRNA without taking spatial structure and other binding proteins into account.
I have done quite some literature research, but I could not find any approach (neither a mathematical method nor a database) which deals with such a problem.
Does any of you have a similar problem or a suggestion?
Best wishes
| Hi!
I have worked a little on circRNAs too, and as far as I know, there is no resource for doing exactly what you want. However, there are some web servers that could be helpful to address your problem:
> I would like to compute the likelihood that one specific circRNA binds to a miRNA (or vice versa). Since some of my circRNAs are not annotated yet, I can only use the genomic coordinates and the corresponding sequence.
As far as I know there are no tools for dealing with this. What I usually do is take the circRNA sequences and use them as if they were mRNA sequences in order to scan them in the miRNA-mRNA binding prediction web servers. If your circRNAs are annotated, you can use tools like [circinteractome][1] to search for both RBPs and miRNAs binding to your circRNAs.
> It would be too simplistic to just compare the sequences of the circRNA and miRNA without taking spatial structure and other binding proteins into account.
The [RNAfold][2] web server has an option where you can tell the software to assume that the RNA molecule is circular. This could give you a computational overview of the possible folding structure of your molecule.
Bonus: recent studies using ribosome footprinting have suggested that circRNAs could code for proteins, so it may be worthwhile to look for coding potential in your circRNAs; see this [Nature research highlight][3] where these experiments are explained.
[1]: https://circinteractome.nia.nih.gov/
[2]: http://rna.tbi.univie.ac.at/cgi-bin/RNAWebSuite/RNAfold.cgi
[3]: http://www.nature.com/cr/journal/v27/n6/full/cr201770a.html | biostars | {"uid": 258242, "view_count": 2541, "vote_count": 2} |
Hi all,
I have a fastq file and I need to rename the reads in it using the `sed` command. Below is the explanation:
The read names in my files are
```
@HWI-ST365:251:D0RP0ACXX:5:1101:4471:2213#12_1
@HWI-ST365:251:D0RP0ACXX:5:1101:4471:2213#12_2
```
And I want to transform them into the format:
```
@HWI-ST365:251:D0RP0ACXX:5:1101:4471:2213#12/1
@HWI-ST365:251:D0RP0ACXX:5:1101:4471:2213#12/2
```
Thank you very much in advance for your help.
| Well, you're very close to the solution. You only need to escape the '`/`' character: `sed -i 's/_/\//g'` should work.
Just a little advice: try to call the sed command in the following way:
cat file.fq | sed 's/_/\//g' > reformat.fq
In this way you can go back to the original input file in case something has gone wrong. In my opinion it is good practice. | biostars | {"uid": 185065, "view_count": 5212, "vote_count": 2}
Hi All,
I would like to ask whether there is an accurate tool/script for deinterleaving fastq files. I managed a 'bam to fastq' conversion according to http://gatkforums.broadinstitute.org/discussion/2908/howto-revert-a-bam-file-to-fastq-format and I would like to deinterleave the output fastq.
Many thanks! | [Reformat][1] can do that, if you're sure the resulting fastq file is interleaved. You can run with the "vint" and "ain" flags to verify that it is interleaved properly. If it's not, you can fix it by running repair.sh (also in the BBMap package).
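A minimal sketch (file names hypothetical; `vint` verifies correct pairing while splitting):

    reformat.sh in=interleaved.fq.gz out1=reads_1.fq.gz out2=reads_2.fq.gz vint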
[1]: http://seqanswers.com/forums/showthread.php?t=46174 | biostars | {"uid": 141256, "view_count": 9567, "vote_count": 2} |
Hello guys,
Is there any database where I can download transcription factor sequences? For example, if I want to download all the known transcription factor sequences of Tribolium castaneum, how can I do that?
I want to compare them with a transcriptome in a fasta file. | For the beetle you mentioned, it doesn't look like there's a dedicated resource; in general this isn't exactly trivial. You could use Biopython/BioJava/etc. or the Entrez tools and download by searching for specific terms; however, the success of this depends on the completeness and quality of the annotations available. You could also download in bulk and see if there are annotations you can filter on.
The other approach would be to get a list of known TFs from a related species, blast them against the genome of your species, and keep the hits. You could also do your own Pfam annotations to double-check.
If your transcriptome is already annotated, couldn't you look in your own data?
What are you planning to do when you 'compare'? There's the likelihood that many TFs will have multiple copies in the genome. You also have the risk of very similar paralogs (e.g. zinc finger TFs). There can be very high homology, but subtle differences at the sequence level can create large differences in the binding motif, which can confound your ability to determine what may be regulating what.
Depending on the tissue and even the cell population(s) used, you may run into problems trying to determine if the expression of a TF is impacting the expression of another transcript. Without prior knowledge of the regulatory mechanism of that gene, you can only see if there are correlations over time/condition. It is a start, but not the greatest. If your transcriptome came from material with multiple cell types, you may encounter conflicting results due to cell type-specific gene expression.
I've wanted to try a similar analysis for some time, but this isn't easy with small transcriptome datasets and little/no knowledge of regulatory pathways.
| biostars | {"uid": 123431, "view_count": 1936, "vote_count": 1} |