Hello Biostars,
Can anyone tell me how to prepare an input data set for GSEA after differential gene expression analysis with DESeq2?
How should I rank the genes: by log2FC or by adjusted P value? Is there any way to generate GSEA-ready data directly from DESeq2?
I was using topGO for gene ontology enrichment analysis before and recently came across GSEA. Which one is better: GO enrichment analysis or GSEA? Even after going through the papers I couldn't find a significant difference between the two.
Thank you

Hi Sreeraj,
Genes can be ranked by combining the fold change and P value, and that ranking can be used with the GSEA package.
You can use this R code for the purpose:
    x <- read.table("DE_genes.txt", sep = "\t", header = TRUE)
    head(x)
    # rank metric: -log10(P) signed by the direction of the fold change
    x$fcsign <- sign(x$log2.fold_change.)
    x$logP <- -log10(x$p_value)
    x$metric <- x$logP/x$fcsign
    y <- x[, c("Gene", "metric")]
    head(y)
    write.table(y, file = "DE_genes.rnk", quote = FALSE, sep = "\t", row.names = FALSE)

---
Hey biostars,
How can I align multiple FASTQ files to a reference genome, or do I have to align them individually?

You can definitely align multiple FASTQ files to a reference genome - I do it all the time. Ideally you'd be able to pass pairs of PE FASTQ files and any number of SE FASTQ files as parameters. Check the documentation of the alignment program you're using to figure out the exact method to pass multiple files.
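Whether this works in one command depends on the aligner. As one concrete illustration (a sketch only; bowtie2 accepts comma-separated lists of mates, and the index prefix and file names here are placeholders):

```
# Two paired-end libraries plus one single-end file in a single bowtie2 run;
# the comma-separated lists for -1 and -2 must be in matching order.
bowtie2 -x ref_index \
    -1 sampleA_R1.fastq,sampleB_R1.fastq \
    -2 sampleA_R2.fastq,sampleB_R2.fastq \
    -U singletons.fastq \
    -S combined.sam
```

---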
Hi All,
**library(TCGAbiolinks)** is giving errors because *gridExtra* was renamed, so I cannot use it to download and analyze the TCGA data. Apart from the **GDC Portal** (our IT support is having problems installing the data transfer tool, so I cannot use it right now), are there any other alternatives for getting the TCGA-BRCA data to do differential gene expression analysis?
Thanks,
Gokce

Can you show what you are doing and reproduce the error, so we can help you? If a dependency package or module was renamed, it should still be pulled in by the TCGAbiolinks package when all installation dependencies are set to TRUE. If it's a bug, you can always report it on their GitHub repo.
An alternative is [TCGA2STAT][1]. Take a look at it. I believe you are just downloading the open files and not controlled-access data, so this package should also work; you can then use your custom limma-voom, edgeR or DESeq2 for your downstream analysis once you have the metadata and count data downloaded.
[1]: https://cran.r-project.org/web/packages/TCGA2STAT/vignettes/TCGA2STAT.html

---
Hello,
I was asked to find which CpGs are unique to the EPIC 850K methylation array, i.e. not observed by RRBS (reduced representation bisulfite sequencing). I have the EPIC 850K manifest, but is there a public site where I can download the coordinates/locations of all CpGs detected by RRBS?
Do you think there will be CpGs that EPIC 850K covers that RRBS does not?
Thanks a lot for your help.

There won't be an exact list of CpGs covered by RRBS, because it depends on the exact enzymes used and how tightly you perform size selection. I would propose that you perform the following procedure:
1. Use biopython to determine all possible fragments generated by the restriction enzymes you'll be using (there are some convenient functions for performing restriction digests on sequences in that package).
2. Determine a rough range of sequenceable fragments, which will likely be something like 75-500 bases.
3. Choose a read length (N), because the results of all of this will be length-dependent.
4. For each of the fragments you selected from step 2, write the regions corresponding to the first/last N bases to a file in BED format.
5. Load the BED file from step 4 into an interval tree (there might be something in biopython for this, worst case scenario you can use deeptoolsintervals from deepTools).
6. Use biopython to iterate over the CpGs and query them for overlaps against the interval tree from step 5.
7. Write output files appropriately
8. Compare them to what the EPIC 850K covers (see the sketch after this list).
Note that the EPIC 850K may give a ballpark estimate of all of this in their sales materials. I wouldn't be surprised if the EPIC 850K covers some CpGs that RRBS doesn't.
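If you would rather not code the interval tree yourself, steps 5-7 can also be done with bedtools instead of biopython; a minimal sketch, assuming you wrote the fragment-end regions to `rrbs_covered.bed` in step 4 and have the EPIC probe positions in a (hypothetical) `epic850k.bed`:

```
# EPIC CpGs that no RRBS fragment end covers
bedtools intersect -v -a epic850k.bed -b rrbs_covered.bed > epic_only_cpgs.bed

# EPIC CpGs that RRBS should also capture
bedtools intersect -u -a epic850k.bed -b rrbs_covered.bed > shared_cpgs.bed
```

---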
I have GFF files and files containing intergenic regions for different bacterial genomes. I want to extract the adjacent genes, and their coordinates, corresponding to those intergenic regions for all of these genomes. Are there any scripts or tools available?
Example:
For *Bacillus anthracis*, I have two files:
1. gff file
2. files containing coordinates of intergenic regions:
`intergenic_region.txt`
```
185457-185562
320958-321064
1146951-1147049
1285399-1285500
3894344-3894451
4075706-4075815
```
I want to extract the coordinate of adjacent genes for each of those intergenic regions from gff files.
I have to do this for 300+ bacterial genomes.

Convert the GFF file to BED with [BEDOPS *gff2bed*](http://bedops.readthedocs.org/en/latest/content/reference/file-management/conversion/gff2bed.html):
$ gff2bed < annotations.gff > annotations.bed
Convert the intergenic regions to BED. Assuming they are all on chromosome `chr1` and 0-indexed:
$ awk 'BEGIN { FS = "-"; } { print "chr1\t"$1"\t"$2; }' intergenic_region.txt > intergenic_region.bed
You may need to modify the name used for the chromosome, depending on the naming scheme used in the annotations file. You could browse through the first column of `annotations.bed` to see what the chromosome naming scheme looks like.
Once you have the annotations and intergenic regions in BED format, you can use [BEDOPS *closest-features*](http://bedops.readthedocs.org/en/latest/content/reference/set-operations/closest-features.html) to find the nearest annotations to the regions:
    $ closest-features intergenic_region.bed annotations.bed > answer.bed

---
Having received a task to assemble a phage genome, my colleague and I ran into several problems.
- First, the sequence duplication levels are abnormally high, reaching up to ~90% and ~40% for forward and reverse reads, respectively.
![Sequence duplication level for raw forward R1 reads][1]
- Second, per tile sequence quality displays a mixture of alarming patterns.
This is per-tile sequence quality for raw R1 reads.
![Per tile sequence quality for raw reverse R1 reads][2]
And per base sequence quality for raw R1 reads as well.
![per base sequence quality for raw R1 reads][3]
This is per-tile sequence quality for raw R2 reads.
![Per tile sequence quality for raw reverse R2 reads][4]
And per base sequence quality scores for raw R2 reads
![Per base sequence quality for raw R1 reads][5]
Initially, several possible explanations came to mind:
1) The flow cell might've been overloaded;
2) The duplication level might've been too high, hence the 4 distinct low quality bands;
3) Something is wrong with the sequencing platform, hence the long red bands in the per-tile sequence quality report.
After several unsuccessful rounds of fiddling with trimmomatic, I ended up specifying very strict quality control options: `HEADCROP:10 SLIDINGWINDOW:3:32 MINLEN:230`. Using these options I ended up with ~22% (~250k out of ~1.1mln) and ~4% (~41k out of ~1.1mln) of initial sequences for forward and reverse reads, respectively. I also specified the Nextera Transposase adapter sequences, because the samples were badly contaminated. Nevertheless, the problem with per tile sequence quality persists (the adapters have been removed, though).
Per tile sequence quality for R1 reads after running `trimmomatic`.
![Per tile sequence quality for `trimmed` R1 reads][6]
Together with the sequence quality scores.
![Per base sequence quality R1][7]
Per tile sequence quality for R2 reads after running `trimmomatic`. Red tiles didn't disappear and seemingly random bad quality patterns emerged to the left of the cell.
![Per tile sequence quality for `trimmed` R2 reads][8]
As well as the sequence quality scores.
![Per base sequence quality R2][9]
Sequencing was performed on an **Illumina HiSeq T1500** machine in rapid-run mode, producing ~1.2 million 250 bp paired-end reads.
What do you think might be the cause of such duplication levels, per tile sequence quality patterns and overall data quality?
- Was something wrong with the sequencing procedure or the machine?
- May this be due to the rapid-run? (We haven't seen anyone using it before).
- Should we raise an alarm and contact the sequencing facility or are we being overly cautious?
**Edit**
Update:
We've just received additional information regarding the run from the sequencing facility:
- Our samples make up only ~1.3% of the run.
- 0.5% of phiX was spiked in.
- Cluster density is 1200.
- 200GB of data were produced during the run (although the upper limit should be 150GB as per the machine's specification).
[1]: https://image.ibb.co/g1Eu9U/image.png
[2]: https://image.ibb.co/goyJKU/image.png
[3]: https://image.ibb.co/iPMZb9/image.png
[4]: https://image.ibb.co/i8oJ3p/image.png
[5]: https://image.ibb.co/fGqVip/image.png
[6]: https://image.ibb.co/cFmkYp/image.png
[7]: https://image.ibb.co/gEbHDp/image.png
[8]: https://image.ibb.co/gGAO3p/image.png
[9]: https://image.ibb.co/dKAcDp/image.png

Was this the only sample in that lane? If so, you were over-sampling a small genome to a large extent. Some of the quality issues may be due to low nucleotide diversity perceived by the sequencer. Do you know what the cluster density for this run was? How much phiX was spiked into this run?
You would likely have to normalize this data down if you want to assemble the genome. I suggest you take a look at `bbnorm.sh` from the [BBMap suite][1]. While there, [you could also use][2] `tadpole.sh` (which is a k-mer based assembler). It is known to do well with small genomes like this one; a sketch follows below.
[1]: https://jgi.doe.gov/data-and-tools/bbtools/bb-tools-user-guide/bbnorm-guide/
[2]: https://jgi.doe.gov/data-and-tools/bbtools/bb-tools-user-guide/tadpole-guide/
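A minimal sketch of that normalize-then-assemble route (file names and the coverage target are placeholders to adjust):

```
# Normalize to ~100x target coverage with BBNorm
bbnorm.sh in1=reads_R1.fastq in2=reads_R2.fastq \
    out1=norm_R1.fastq out2=norm_R2.fastq target=100

# Assemble the normalized reads with Tadpole
tadpole.sh in1=norm_R1.fastq in2=norm_R2.fastq out=contigs.fasta
```

---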
I am using the GAPIT package in R for GWAS. I have already installed the package and imported the genotype and phenotype files into R:

    myY <- read.table("mdp_traits.txt", head = TRUE)
    myG <- read.table("RabiGeno.hmp.txt", head = FALSE)

I then ran GAPIT using the command:

    myGAPIT <- GAPIT(
      Y = myY,
      G = myG,
      PCA.total = 3
    )

The output is given below; it ends with **Error in plot.window(...) : need finite 'xlim' values**. How can I solve this problem?
"--------------------- Welcome to GAPIT ----------------------------"
[1] "Converting genotype..."
[1] "Converting HapMap format to numerical under model of Middle"
[1] "Perform numericalization"
[1] "Succesfuly finished converting HapMap which has bits of 1"
[1] "Converting genotype done."
[1] "Calling prcomp..."
[1] "Creating PCA graphs..."
[1] "Joining taxa..."
[1] "Exporting PCs..."
[1] "PC created"
[1] "Calculating kinship..."
[1] "Number of individuals and SNPs are 182 and 45641"
[1] "Calculating kinship with VanRaden method..."
[1] "substracting P..."
[1] "Getting X'X..."
[1] "Adjusting..."
[1] "Calculating kinship with VanRaden method: done"
[1] "kinship calculated"
[1] "Creating heat map for kinship..."
[1] "Kinship heat map created"
[1] "Adding IDs to kinship..."
[1] "Writing kinship to file..."
[1] "Kinship save as file"
[1] "Kinship created!"
[1] "GAPIT.Genotype.View .Two pdf generate.successfully!"
[1] "--------------------Processing traits----------------------------------"
[1] "Phenotype provided!"
[1] "GAPIT.Phenotype.View in press..."
[1] "GAPIT.Phenotype.View .output pdf generate.successfully!"
[1] "GAPIT.Phenotype.PCA.View"
**Error in plot.window(...) : need finite 'xlim' values**
In addition: Warning messages:
1: In max(y2[, 2]) : no non-missing arguments to max; returning -Inf
2: In min(y2[, 2]) : no non-missing arguments to min; returning Inf
3: In min(x) : no non-missing arguments to min; returning Inf
4: In max(x) : no non-missing arguments to max; returning -Inf
5: In min(x) : no non-missing arguments to min; returning Inf
6: In max(x) :
Error in plot.window(...) : need finite 'xlim' values

I've encountered this before - GAPIT needs numeric values for chromosome names to plot the Manhattan plot. If your chromosomes are named 'Chr1' or similar it won't work; they have to be named 1, 2, 3, ....
Can anyone share a good GATK4 workflow? I couldn't find anything that explains the process in a crisp and easy way.

If you're looking for a streamlined yet comprehensive explanation of GATK4 and the main Best Practices workflows, you may be interested in the O'Reilly book I co-authored, called Genomics in the Cloud. (The cloud part is secondary; all the GATK-focused explanations are applicable pretty much regardless of where you will run the workflows.)
See https://oreil.ly/genomics-cloud

---
What is the meaning of the **-log10(FDR)** value?
How should I compare and interpret higher versus lower values?
e.g. 1:
Ribosome: *-log10(FDR) = 25*
Golgi: *-log10(FDR) = 5*
e.g. 2:
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/zCAMa.jpg

It is just to make the value easier to read.
Instead of 1E-5 you are shown 5, and instead of 1E-25 you are shown 25.
A bigger number means a smaller FDR.

---
I am working on an Arabidopsis thaliana omics project. How can I get short, proper gene descriptions for each AGI number (gene ID)?
The database downloaded from TAIR10 has pretty lengthy names which are difficult to work with (plotting, summarizing in downstream work).
e.g.
AT1G01050 Soluble inorganic pyrophosphatase 1 OS=Arabidopsis thaliana (sp|q93v56|ipyr1_arath : 419.0)
AT1G01800 Enzyme classification.EC_1 oxidoreductases.EC_1.1 oxidoreductase acting on CH-OH group of donor(50.1.1 : 434.7) & (+)-neomenthol dehydrogenase OS=Arabidopsis thaliana (sp|q9m2e2|sdr1_arath : 357.0) (original description: none)
Where can I get these, or how can I **make the names short and sweet**?

- Go to Ensembl's [BioMart][1]
- Choose `Dataset`: `Ensembl Plant Genes` and `Arabidopsis thaliana genes`
- Choose `Filters`->`Genes` -> `Input external references ID list` -> `Gene Stable ID(s)` and paste your IDs into the textfield
- Choose `Attributes`->`Gene` and select `Gene Stable ID` and ` Gene name`
- Click `Result` and download in the format you like
You can extract the IDs from the file example above with a simple `cut -f1 input_file > gene_ids.txt`.
fin swimmer
[1]: http://plants.ensembl.org/biomart/martview

---
Hello everyone
I tried to install the ggplot2 package for R on an Ubuntu server, but there were some errors, displayed as follows:
Warning messages:
1:In install.packages("ggplot2"):
Installation of package 'plyr' had non-zero exit status
2: In install.packages("ggplot2"):
Installation of package 'reshape2' had non-zero exit status
3: In install.packages("ggplot2"):
Installation of package 'scales' had non-zero exit status
4: In install.packages("ggplot2"):
Installation of package 'ggplot2' had non-zero exit status
Then I tried to load ggplot2 with the command:

    library('ggplot2')

But the terminal told me:

    Error in library (ggplot2): there is no package called 'ggplot2'
I have found a lot of information about this problem on the internet, but none of the suggested fixes worked.
If you have any suggestions, please tell me.
Thanks for your help.

It seems that you are using a very old version of R; so, even if you succeed in installing ggplot2, be aware that it may not work correctly afterwards. In any case, to solve the issue you should install Rcpp separately, e.g. `sudo apt-get install r-cran-rcpp`.

---
Hey all :]

I use samtools's depth, and occasionally samtools's mpileup command, to calculate coverage of my reads against the genome before binning for coverage. I'm pretty sure everyone else does too ;)

One very common problem is that people find "samtools depth" and "samtools mpileup" don't match up - and it's commonly attributed to filtering of poor quality reads, duplicates, etc. (mpileup doing more filtering).

But there's a ton of other questions I just can't get the answers to from the samtools docs, namely:

Does depth/pileup count the region between paired reads, or just the reads themselves?
Does samtools depth/pileup count singletons in paired-end sequencing? Does it extrapolate based on average read length to fill out the whole fragment?
If a read maps to multiple locations, is it counted multiple times?

I'm working with ChIP-Seq data, so I might have to correct for peak-shift. What do you guys think?

Thanks! :)

There comes a time in every bioinformatician's life when he/she needs to just read the source code. This is one of those times.

So, depth actually uses pileup internally, just to confuse things a bit more. Depth does not count regions spliced over (i.e., covered by an N operator) or between reads in a pair, which is good because what's actually going on there would be undefined. I don't think that pileup counts regions between reads, though I guess I'd have to look to be 100% certain.

Neither tool cares whether a read is a singleton or a part of a pair unless you tell them to filter according to that. Samtools is a pretty general program, so it won't do ChIP-seq specific things like extrapolating bounds based on average template length.

In general, you'll need to start going through the code to get answers to questions like these.
This is cross-posted [from here](https://bioinformatics.stackexchange.com/q/4419/351).
I would like to use uniprot's [sparql endpoint](https://sparql.uniprot.org/) to retrieve all proteins that
1. are reviewed (required)
2. are associated with taxonomy IDs `562` and `3702` (required)
3. have a KO associated with them (optional)
4. "evidence for the existence of a protein " should be either on protein or transcript level (required)
5. have an EC number associated with them (required)
I have so far (points 1 and 2):
PREFIX up:<http://purl.uniprot.org/core/>
PREFIX taxon:<http://purl.uniprot.org/taxonomy/>
PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#>
SELECT ?protein ?taxon ?name
WHERE
{
?taxon a up:Taxon .
?taxon up:scientificName ?name .
VALUES ?taxonlist { taxon:562 taxon:3702 }
?taxon rdfs:subClassOf ?taxonlist .
?protein a up:Protein .
?protein up:organism ?taxon .
?protein up:reviewed true . # have to be reviewed
}
This, however, does not return anything for `3702`. How can this be fixed and how can I incorporate points 3-5?
Additionally, is there now a way to connect UniProt's SPARQL endpoint with Rhea's [SPARQL endpoint](https://sparql.rhea-db.org/sparql) to retrieve all associated reactions and their stoichiometries (with ChEBI IDs) for the proteins selected above? Example 19 seems to suggest that this connection is possible, but I am not quite sure how to accomplish it.

1) is correct in the query with:
?protein up:reviewed true .
2) The query in the question does not return anything for taxon:3702 because there are no rdfs:subClasses for Arabidopsis thaliana; it is a leaf node. This means the entry is directly linked to that taxon instead of via its ancestors. This is fixed by changing the query slightly to deal with both the ancestor and the direct case (both sides of the UNION below):
VALUES ?taxonlist { taxon:3702 taxon:562}
{
?taxon rdfs:subClassOf ?taxonlist .
?protein up:organism ?taxon .
} UNION {
?protein up:organism ?taxonlist .
}
3) We use the cross-reference section, which is accessed via rdfs:seeAlso. But as there can be more than one KO per entry, we group them with a subquery:
OPTIONAL {
SELECT ?protein (GROUP_CONCAT(?ko; SEPARATOR=", ") AS ?kos)
WHERE{
?protein rdfs:seeAlso ?ko .
        ?ko up:database <http://purl.uniprot.org/database/KO>
} GROUP BY ?protein
}
4) To use the existence/evidence concept at protein or transcript level, we add:
{
?protein up:existence up:Evidence_at_Protein_Level_Existence .
} UNION {
?protein up:existence up:Evidence_at_Transcript_Level_Existence .
}
5) To make sure the entry is annotated as an enzyme, we use the same subquery idea as for the KO links, but now not OPTIONAL. To make one value out of the many potential ECs we use
a subquery with a GROUP_CONCAT. The long line with up:enzyme covers the different ways UniProt links an ?ec to an entry.
SELECT ?protein (GROUP_CONCAT(?ec; SEPARATOR=", ") AS ?ecs)
WHERE{
?protein up:enzyme|((up:component|up:domain)/up:enzyme) ?ec
} GROUP BY ?protein
Combining it into one query gives:
PREFIX up:<http://purl.uniprot.org/core/>
PREFIX taxon:<http://purl.uniprot.org/taxonomy/>
PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#>
SELECT
?protein
?taxon
?name
?kos
?ecs
WHERE
{
?protein a up:Protein .
?protein up:reviewed true . # have to be reviewed
?taxon a up:Taxon .
?taxon up:scientificName ?name .
VALUES ?taxonlist { taxon:3702 taxon:562 }
{
?taxon rdfs:subClassOf ?taxonlist .
?protein up:organism ?taxon .
} UNION {
?protein up:organism ?taxonlist .
}
{
?protein up:existence up:Evidence_at_Protein_Level_Existence .
} UNION {
?protein up:existence up:Evidence_at_Transcript_Level_Existence .
}
{
SELECT ?protein (GROUP_CONCAT(?ec; SEPARATOR=", ") AS ?ecs)
WHERE{
?protein up:enzyme|((up:component|up:domain)/up:enzyme) ?ec
} GROUP BY ?protein
}
OPTIONAL {
SELECT ?protein (GROUP_CONCAT(?ko; SEPARATOR=", ") AS ?kos)
WHERE{
?protein rdfs:seeAlso ?ko .
?ko up:database <http://purl.uniprot.org/database/KO>
} GROUP BY ?protein
}
}
Which is testable at [sparql.uniprot.org][1].

[1]: http://tinyurl.com/y7mv68gp

---
Terminology suggestions?
Original question removed, since it did not make sense; changed so that Kao's answer fits.

I've heard of the term "epialleles":
http://www.nature.com/pr/journal/v61/n5-2/full/pr2007128a.html

---
Dear all,
I am new to the field. I am trying to analyze single-end 100 bp FASTQ files with ~70 million reads/sample. I am trying to determine whether adapter sequences are present and, if so, what to do about them. I ran FastQC on the files, and the reports show they each have an "overrepresented sequence" matching an "Illumina index adapter".
[![sample1](https://i.ibb.co/pKQfQVm/sample1.png)](https://ibb.co/R6N3Nr5)
I have the following questions:
1. Does sample1 look like a trimmed file, or does it require adapter trimming?
2. If further trimming is recommended, what would be the best sequence/adapter options to use for cutadapt/TrimGalore? [See below for my thoughts so far]
3. Based on the FastQC report, do I need to worry about the presence of any other adapter sequences besides the index?
My thoughts on question 2:
The Illumina index adapter sequence format appears to be:
GATCGGAAGAGCACACGTCTGAACTCCAGTCACNNNNNNATCTCGTATGCCGTCTTCTGCTTG
These are the adapter sequences found in my FastQC report for sample 1:
GATCGGAAGAGCACACGTCTGAACTCCAGTCACCATGGCATCTCGTATGC
AGATCGGAAGAGCACACGTCTGAACTCCAGTCACCATGGCATCTCGTATG
I am thinking of using the options below for cutadapt/TrimGalore to remove the adapter(s):

    trim_galore sample1.fastq.gz -a GATCGGAAGAGCACACGTCTGAACTCCAGTCACNNNNNNATCTCGTATGCCGTCTTCTGCTTG -a AGATCGGAAGAGCACACGTCTGAACTCCAGTCACNNNNNNATCTCGTATGCCGTCTTCTGCTTG -q 20 --length 20 --fastqc
However, it seems that Trimmomatic, for instance, only handles the initial part of the index adapter sequence (only up to the N's and not after):
https://github.com/timflutre/trimmomatic/blob/master/adapters/TruSeq3-SE.fa
Many thanks in advance for your time and replies.

0.5%? You really don't have to worry about that if you don't want to.
The N's are for the variable index region. You know what the index is; you can see it in the FastQC report. Why would you put N's in?
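Concretely, the index visible in your overrepresented sequences above is CATGGC, so one option (a sketch; confirm the index against your sample sheet) is to substitute it for the N's:

```
trim_galore sample1.fastq.gz -q 20 --length 20 --fastqc \
    -a GATCGGAAGAGCACACGTCTGAACTCCAGTCACCATGGCATCTCGTATGCCGTCTTCTGCTTG
```

---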
Hi!
Does anyone know of a database which collects a list of parasitic worm species? I've only found some databases that describe diseases caused by these organisms, and could not find a referenceable, well-organized list anywhere.
Thanks!

There is a **Species interaction datasets** database available for download at https://www.globalbioticinteractions.org/data.html
$ wget -O - -q "https://depot.globalbioticinteractions.org/snapshot/target/data/tsv/interactions.tsv.gz" | gunzip -c | cut -f1,3,33,35,37 | grep -i parasite
EOL:12001247 Leptoconchus massini parasiteOf EOL:200339 Fungia repanda
EOL:12001247 Leptoconchus massini parasiteOf EOL:200324 Fungia concinna
EOL:12001243 Leptoconchus inpleuractis parasiteOf EOL:200337 Fungia paumotensis
EOL:12001243 Leptoconchus inpleuractis parasiteOf EOL:200336 Fungia moluccensis
EOL:12001243 Leptoconchus inpleuractis parasiteOf EOL:200333 Fungia gravis
EOL:12001242 Leptoconchus inpileus parasiteOf EOL_V2:200352 Halomitra pileus
EOL:12001241 Leptoconchus inlimax parasiteOf EOL_V2:200354 Herpolitha limax
EOL:12001239 Leptoconchus ingranulosa parasiteOf EOL:200332 Fungia granulosa
EOL:12001238 Leptoconchus ingrandifungi parasiteOf EOL_V2:200365 Sandalolitha dentata
EOL:12001237 Leptoconchus infungites parasiteOf FBC:SLB:SpecCode:45838 Fungia fungites
EOL:12001236 Leptoconchus incycloseris parasiteOf EOL:200325 Fungia costulata
WORMS:847839 Umimayanthus parasiticus parasiteOf WORMS:166232 Callyspongia (Cladochalina) vaginalis
WORMS:847839 Umimayanthus parasiticus parasiteOf WORMS:166232 Callyspongia (Cladochalina) vaginalis
EOL_V2:620727 Spirobranchus giganteus parasiteOf EOL_V2:538613 Neofibularia nolitangere
EOL_V2:620727 Spirobranchus giganteus parasiteOf EOL_V2:335778 Ircinia felix
EOL_V2:620727 Spirobranchus giganteus parasiteOf WORMS:169016 Monanchora arbuscula
EOL_V2:620727 Spirobranchus giganteus parasiteOf WORMS:169121 Desmapsamma anchorata
EOL_V2:620727 Spirobranchus giganteus parasiteOf WORMS:166232 Callyspongia (Cladochalina) vaginalis
WORMS:328023 Fauchaldius cyrtauloni parasiteOf WORMS:171796 Cyrtaulon sigsbeei
WORMS:421835 Upogebia annae parasiteOf WORMS:131849 Xestospongia sp.
EOL_V2:1039933 Leucothoe spinicarpa parasiteOf EOL_V2:335032 Spheciospongia vesparium
You'll 'just' need to determine whether an organism is a worm or not.

---
I have 30 small FASTQ files from the same sample, and I want to merge them into one file. I know the command is

    cat file1.fastq file2.fastq > bigfile.fastq
but is there any short cut for doing it? It just looks silly to type 30 file names one by one...
Thank you for any ideas!

    cat file*.fastq > bigfile.fastq

---
I have a table:
W X Y Z
A 2 3 4
A 2 3 6
B 1 2 3
C 3 2 1
B 1 3 4
B 1 2 2
# reproducible data
table <- read.table(textConnection("W\tX\tY\tZ
A\t2\t3\t4
A\t2\t3\t6
B\t1\t2\t3
C\t3\t2\t1
B\t1\t3\t4
B\t1\t2\t2"), header = TRUE)
I want to combine all rows with the same value in column W, so that it looks like:
W X Y Z
A 2 3 2,6
B 1 2,3 3,4,2
C 3 2 1
**Note:** All the different values of the column **W** entries appear comma-separated.

You're looking for the R package 'plyr':
X1 <- read.table(textConnection("W\tX\tY\tZ
A\t2\t3\t4
A\t2\t3\t6
B\t1\t2\t3
C\t3\t2\t1
B\t1\t3\t4
B\t1\t2\t2"),header=T)
library(plyr)
X2 <- ddply(X1, .(W), summarize,
Xc=paste(X,collapse=","),
Zc=paste(Z,collapse=",") ,
Yc= paste(Y,collapse=","))
X2
W Xc Zc Yc
1 A 2,2 4,6 3,3
2 B 1,1,1 3,4,2 2,3,2
3 C 3 1 2
X2u <- ddply(X1, .(W), summarize,
Xc=paste(unique(X),collapse=","),
Zc=paste(unique(Z),collapse=",") ,
Yc= paste(unique(Y),collapse=","))
X2u
W Xc Zc Yc
1 A 2 4,6 3
2 B 1 3,4,2 2,3
    3 C 3 1 2

---
I have a `VCF` file with `SNPs` and a subset of a `GFF` file containing only genes. How can I extract the `SNPs` located in those genes from my data, keeping `VCF` format?

Use [bedtools][1]:
$ bedtools intersect -a input.vcf -b genes.gff -header -wa > output.vcf
---
**EDIT:**
For (very) large `vcf` files it might be more efficient to `bgzip` and `tabix` index the `vcf` file, convert your `gff` to `bed` and use `tabix` to query the regions
**1. bgzip and index**
$ bgzip -c input.vcf > input.vcf.gz
$ tabix input.vcf.gz
**2. gff to bed**
E.g with [BEDOPS][2]:
$ gff2bed < genes.gff > genes.bed
**3. Query the regions**
$ tabix -R genes.bed -h input.vcf.gz > output.vcf
fin swimmer
[1]: https://bedtools.readthedocs.io/en/latest/content/tools/intersect.html?highlight=intersect
[2]: https://bedops.readthedocs.io/en/latest/content/reference/file-management/conversion/gff2bed.html

---
I've been using this method to rename files at the end of a step:
processed_fastq_1:
type: File
outputBinding:
glob: ${ return '**/' + inputs.fastq1.basename }
outputEval: |
${
self[0].basename = inputs.add_rg_SM + '_R1.fastq.gz';
return self[0]
}
But I'm not sure if this is an acceptable method. Should there be an intermediate step using python or calling to a script to rename files in the middle of a CWL workflow? | I would suggest adding a simple `mv` command to the end of the command line, via arguments, e.g.
```
arguments:
- position: 1000
shellQuote: false
valueFrom: ' && mv $(inputs.fastq1.basename) $(inputs.add_rg_SM)_R1.fastq.gz'
```
then you could just glob `$(inputs.add_rg_SM)_R1.fastq.gz`
This way you don't create duplicate files, which would be the case when using a separate step. Also, other users of the tool can see that renaming is happening just by looking at the command line.

---
I have two files. One file contains the BLAST results for contigs, and the other file contains the contigs that were used for the BLAST. I want an output text file, in a separate folder, with the sequences from the original FASTA file (all contigs) that produced no hits in the BLAST results. I have no idea how to do this. Can you please help me create a Perl script for this problem?

**1)** Extract the names of contigs without a hit in BLAST:

    grep -B5 "***** No hits found" blast.txt | grep Query | sed 's/Query= //g' > ids

**2)** Extract the FASTA records of those contigs without a hit:

    cat ids | xargs -n 1 samtools faidx contigs.fa > contigs_nohit.fa

---
Hi,
I want to annotate a vcf using [vcfanno](https://github.com/brentp/vcfanno) and [gnomAD v3 vcf](https://gnomad.broadinstitute.org/).
using this conf.toml (here using only chr1 vcf from gnomAD)
[[annotation]]
file="gnomad.genomes.r3.0.sites.chr1.vcf.bgz"
fields = ["AF", "AC", "AN", "AF_nfe","AC_nfe", "AN_nfe", "FILTER", "ID"]
names = ["GNOMAD_V3_AF", "GNOMAD_V3_AC", "GNOMAD_V3_AN","GNOMAD_V3_AF_NFE", "GNOMAD_V3_AC_NFE","GNOMAD_V3_AN_NFE","GNOMAD_V3_FILTER","GNOMAD_V3_ID"]
ops=["self", "self", "self", "self", "self", "self", "self", "self"]
and this test vcf
##fileformat=VCFv4.2
##INFO=<ID=AC,Number=A,Type=Integer,Description="Allele count in genotypes, for each ALT allele, in the same order as listed">
##INFO=<ID=AF,Number=A,Type=Float,Description="Allele Frequency, for each ALT allele, in the same order as listed">
##FORMAT=<ID=AD,Number=R,Type=Integer,Description="Allelic depths for the ref and alt alleles in the order listed">
##FORMAT=<ID=DP,Number=1,Type=Integer,Description="Approximate read depth (reads with MQ=255 or with bad mates are filtered)">
##FORMAT=<ID=GT,Number=1,Type=String,Description="Genotype">
#CHROM POS ID REF ALT QUAL FILTER INFO FORMAT SAMPLE_1 SAMPLE_2
chr1 12807 . C T 55.5 PASS AC=20;AF=0.5 GT:AD:DP 0/1:50,100:150 0/0:100,1:101
chr1 4545727 . C T 55.5 PASS AC=30;AF=0.25 GT:AD:DP 1/1:10,100:110 0/1:100,100:200
with this command:
vcfanno_linux64 gnomad_v3_conf.toml test.vcf > test_vcfanno.vcf
It results in:
=============================================
vcfanno version 0.3.1 [built with go1.11]
see: https://github.com/brentp/vcfanno
=============================================
vcfanno.go:115: found 8 sources from 1 files
api.go:804: WARNING: using op 'self' when with Number='1' for 'AF' from 'gnomad.genomes.r3.0.sites.chr1.vcf.bgz' can result in out-of-order values when the query is multi-allelic
api.go:805: : this is not an issue if the query has been decomposed.
api.go:804: WARNING: using op 'self' when with Number='1' for 'AC' from 'gnomad.genomes.r3.0.sites.chr1.vcf.bgz' can result in out-of-order values when the query is multi-allelic
api.go:805: : this is not an issue if the query has been decomposed.
api.go:804: WARNING: using op 'self' when with Number='1' for 'AN' from 'gnomad.genomes.r3.0.sites.chr1.vcf.bgz' can result in out-of-order values when the query is multi-allelic
api.go:805: : this is not an issue if the query has been decomposed.
api.go:804: WARNING: using op 'self' when with Number='1' for 'AF_nfe' from 'gnomad.genomes.r3.0.sites.chr1.vcf.bgz' can result in out-of-order values when the query is multi-allelic
api.go:805: : this is not an issue if the query has been decomposed.
api.go:804: WARNING: using op 'self' when with Number='1' for 'AC_nfe' from 'gnomad.genomes.r3.0.sites.chr1.vcf.bgz' can result in out-of-order values when the query is multi-allelic
api.go:805: : this is not an issue if the query has been decomposed.
api.go:804: WARNING: using op 'self' when with Number='1' for 'AN_nfe' from 'gnomad.genomes.r3.0.sites.chr1.vcf.bgz' can result in out-of-order values when the query is multi-allelic
api.go:805: : this is not an issue if the query has been decomposed.
api.go:804: WARNING: using op 'self' when with Number='1' for 'FILTER' from 'gnomad.genomes.r3.0.sites.chr1.vcf.bgz' can result in out-of-order values when the query is multi-allelic
api.go:805: : this is not an issue if the query has been decomposed.
api.go:804: WARNING: using op 'self' when with Number='1' for 'ID' from 'gnomad.genomes.r3.0.sites.chr1.vcf.bgz' can result in out-of-order values when the query is multi-allelic
api.go:805: : this is not an issue if the query has been decomposed.
vcfanno.go:241: annotated 2 variants in 0.09 seconds (22.9 / second)
The resulting VCF didn't contain the gnomAD AF, AC and AN values. Only the header has the INFO fields added from the gnomAD VCF:
##fileformat=VCFv4.2
##INFO=<ID=AC,Number=A,Type=Integer,Description="Allele count in genotypes, for each ALT allele, in the same order as listed">
##INFO=<ID=AF,Number=A,Type=Float,Description="Allele Frequency, for each ALT allele, in the same order as listed">
##INFO=<ID=GNOMAD_V3_AC,Number=1,Type=String,Description="calculated by self of overlapping values in field AC from gnomad.genomes.r3.0.sites.chr1.vcf.bgz">
##INFO=<ID=GNOMAD_V3_AC_NFE,Number=1,Type=String,Description="calculated by self of overlapping values in field AC_nfe from gnomad.genomes.r3.0.sites.chr1.vcf.bgz">
##INFO=<ID=GNOMAD_V3_AF,Number=1,Type=String,Description="calculated by self of overlapping values in field AF from gnomad.genomes.r3.0.sites.chr1.vcf.bgz">
##INFO=<ID=GNOMAD_V3_AF_NFE,Number=1,Type=String,Description="calculated by self of overlapping values in field AF_nfe from gnomad.genomes.r3.0.sites.chr1.vcf.bgz">
##INFO=<ID=GNOMAD_V3_AN,Number=1,Type=String,Description="calculated by self of overlapping values in field AN from gnomad.genomes.r3.0.sites.chr1.vcf.bgz">
##INFO=<ID=GNOMAD_V3_AN_NFE,Number=1,Type=String,Description="calculated by self of overlapping values in field AN_nfe from gnomad.genomes.r3.0.sites.chr1.vcf.bgz">
##INFO=<ID=GNOMAD_V3_FILTER,Number=1,Type=String,Description="calculated by self of overlapping values in field FILTER from gnomad.genomes.r3.0.sites.chr1.vcf.bgz">
##INFO=<ID=GNOMAD_V3_ID,Number=1,Type=String,Description="calculated by self of overlapping values in field ID from gnomad.genomes.r3.0.sites.chr1.vcf.bgz">
##FORMAT=<ID=AD,Number=R,Type=Integer,Description="Allelic depths for the ref and alt alleles in the order listed">
##FORMAT=<ID=DP,Number=1,Type=Integer,Description="Approximate read depth (reads with MQ=255 or with bad mates are filtered)">
##FORMAT=<ID=GT,Number=1,Type=String,Description="Genotype">
#CHROM POS ID REF ALT QUAL FILTER INFO FORMAT SAMPLE_1 SAMPLE_2
chr1 12807 . C T 55.5 PASS AC=20;AF=0.5 GT:AD:DP 0/1:50,100:150 0/0:100,1:101
chr1 4545727 . C T 55.5 PASS AC=30;AF=0.25 GT:AD:DP 1/1:10,100:110 0/1:100,100:200
Did I miss something obvious here?
Thanks

Solved it by updating vcfanno from 0.3.1 to 0.3.2. Now it works ;)
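As an aside, the "out-of-order values when the query is multi-allelic" warnings above disappear if the query VCF is decomposed first; a sketch using bcftools (file names are placeholders):

```
# Split multi-allelic records into one ALT allele per line
bcftools norm -m-any test.vcf -Oz -o test.decomposed.vcf.gz
tabix -p vcf test.decomposed.vcf.gz
```

---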
Hello,
There is a tool on the Galaxy platform which could potentially be very useful for the type of research that I am undertaking. The tool is 'Stitch MAF blocks' which can be found under the 'Fetch Alignments/Sequences' header.
The problem I have is that, because of the size of the data I would be working with, I would need to run the function locally on my own system, using the command line (to enable access to local HPC resources).
I have installed the bx-python library (https://bitbucket.org/james_taylor/bx-python/wiki/Home) on my local system.
I think I am right in saying that these are the set of Python scripts which essentially underlie the MAF manipulation tools (as well as having many other functions) on the Galaxy platform.
However, the problem I have now is that, looking at all the files contained within the bx-python library, I can't easily discern a clear relationship between the Python scripts contained within the library and the particular tool of interest on the Galaxy platform.
As a result, my question is the following:
Which Python files in the bx-python library correspond to the 'Stitch MAF blocks' function on the Galaxy platform?

The tool is executing the command defined here:
https://github.com/galaxyproject/galaxy/blob/master/tools/maf/interval_maf_to_merged_fasta.xml#L7-L9
so you are probably looking for the file `interval_maf_to_merged_fasta.py`.

---
Dear all,
Could you please help me modify my bash loop to align multiple FASTQ files?
I have many FASTQ files (R1 and R2) and I would like to align all those reads in a bash loop - let's say I have `X1_R1_001.fastq` + `X1_R2_001.fastq` and `Y1_R1_001.fastq` + `Y1_R2_001.fastq`. Sample X1 is the paired-end read set R1 + R2, and so on.
```
#!/bin/bash
for i in *fastq;
do tophat2 -o ${i%.fastq}tpout -G path/to/my/reference.gtf -p 8 path/to/my/bowtie_index ${i}R1_001.fastq ${i}R2_001.fastq --rg-id X1 --rg-sample X1 --rg-library rna-seq --rg-platform Illumina
done;
```
I would also like to change the `--rg-id` tag to the name of my FASTQ files (in this example `X1` in the first iteration and `Y1` in the second). The output folder should have the name of the sample too.
Please do you have any idea how to modify my bash loop?
Thank you so much for any ideas!
Paul

So, this is what I came up with, based on our discussion:
```
for i in $(ls *.fastq | rev | cut -c 13- | rev | uniq)
do
tophat2 -o ${i%.fastq}tpout -G path/to/my/reference.gtf -p 8 path/to/my/bowtie_index ${i}_R1_001.fastq ${i}_R2_001.fastq --rg-id ${i} --rg-sample ${i} --rg-library rna-seq --rg-platform Illumina
done
```
Hope this helps!
---
OK, this answer has gotten a little popular, and my shell knowledge has improved, so I'm going to enhance it with some fail-safes to address commonly observed challenges.
First off, the `rev | cut -c 13-` is just asking for trouble. Let's substitute that with a `sed` so the command makes a little more sense. Here, I'm trying to remove all `_R[12]_001.fastq`.
`ls *.fastq | sed -r 's/_R[12]_001[.]fastq//' | uniq ` looks better. I've also edited the variable name from `i` so it makes a little more sense. And I've added some formatting changes plus quotes to avoid shell problems.
Here's the updated code:
```
for prefix in $(ls *R1*.fastq | sed -r 's/_R1_001[.]fastq//' | uniq)
do
tophat2 -o "${prefix%.fastq}tpout" \
-G path/to/my/reference.gtf -p 8 \
path/to/my/bowtie_index \
"${prefix}_R1_001.fastq" "${prefix}_R2_001.fastq" \
--rg-id "${prefix}" --rg-sample "${prefix}" --rg-library rna-seq --rg-platform Illumina
done
```
Note (21-Sep-2022): This will only work for one-pair-per-sample FASTQs. Multi-lane FASTQs will need to be concatenated so each sample has just one pair. The command can be modified easily for compressed FASTQ files.

---
Dear Biostars,
I have a multisample VCF. I would like to turn this into a per-sample "alts" file, as below.
The desired outcome would be:

    chr10:12342:A:T SAMPLEID1:0/1 SAMPLEID2:1/1 ...
    chr10:432234:G:C SAMPLEID4:0/1 SAMPLEID7:1/1 ...

I have tried:
    bcftools query -f '%CHROM:%POS:%REF:%ALT\n[%SAMPLE:%GT\t]' -i 'GT="alt"' input.vcf

which gives me:

    chr10:12342:A:T SAMPLEID1:0/1 SAMPLEID2:1/1 chr10:432234:G:C
    SAMPLEID4:0/1 SAMPLEID7:1/1 ...

or

    bcftools query -f '%CHROM:%POS:%REF:%ALT\t[%SAMPLE:%GT\n]' -i 'GT="alt"' input.vcf

    chr10:12342:A:T SAMPLEID1:0/1
    SAMPLEID2:1/1
    chr10:432234:G:C SAMPLEID4:0/1
    SAMPLEID7:1/1

or

    bcftools query -f '%CHROM:%POS:%REF:%ALT\t[%SAMPLE:%GT\t]' -i 'GT="alt"' input.vcf

    chr10:12342:A:T SAMPLEID1:0/1 SAMPLEID2:1/1 chr10:432234:G:C SAMPLEID4:0/1 SAMPLEID7:1/1
I can't quite get my head around the correct syntax, and I'm sure there is an easy fix! Your help would be much appreciated.
All the best

Figured it out!
    bcftools query -f '%CHROM:%POS:%REF:%ALT[\t%SAMPLE:%GT]\n' -i 'GT="alt"' input.vcf
Just needed to move the `\n` out of the square brackets!

---
I tried to download data from useast.ensembl.org, but the downloaded file only contains SNPs on chromosome 1. Is there a better way to do that?
Thank you in advance!

Ensembl provides these VCFs at this link: https://ftp.ensembl.org/pub/release-92/variation/vcf/homo_sapiens/ The files are split according to population, and there is also a 1000 Genomes set.
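For example, to fetch the 1000 Genomes set from that directory (the file name below is how it appears in release 92; check the FTP listing if it has changed):

```
wget "https://ftp.ensembl.org/pub/release-92/variation/vcf/homo_sapiens/1000GENOMES-phase_3.vcf.gz"
```

---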
I'm looking to do something like the following:
cwlVersion: v1.0
class: CommandLineTool
requirements:
- class: InlineJavascriptRequirement
- $import: Types.cwl
...
inputs:
an_input: string
outputs:
an_output:
type: 'Types.cwl#SpecialType'
outputSource:
valueFrom: ${
return {
'output_file': glob('the_tools_output.file'),
'this_should_be_associated_with_the_file_in_a_later_step': inputs.an_input
}
}
Is this possible? I see a possible solution here using an ExpressionTool
https://github.com/andersgs/cwl_flows/blob/master/limit_detection/seqtk_sample_PE.cwl
But I was wondering if it could be done inline in the tool output.

That should be possible; however, the syntax for the output would look like this:
```
outputs:
an_output:
type: 'Types.cwl#SpecialType'
outputBinding:
glob: something
outputEval: |-
${
return {
'output_file': self,
'this_should_be_associated_with_the_file_in_a_later_step': inputs.an_input
}
}
```
In the expression for outputEval, `self` has the value of whatever is caught by the `glob` pattern.

---
I use picard software to mark duplicates, and here is my command:
```
java -d64 -server -XX:+UseParallelGC -XX:ParallelGCThreads=2 -Xms8g -Xmx16g -Djava.io.tmpdir=tmp -jar ./picard.jar MarkDuplicates I=input.bam O=out_markdup.bam METRICS_FILE=out.metrics ASO=coordinate VALIDATION_STRINGENCY=LENIENT
```
It works well, but when the `input.bam` file gets bigger the speed becomes very slow! I found that picard MarkDuplicates doesn't support multiple threads. So, is there any way to speed up picard? Alternatively, is there any better software that does the same as picard MarkDuplicates but in less time? I know [elprep](https://github.com/ExaScience/elprep) is another choice, but it needs a very large amount of memory!
Besides, I found that samtools can also remove duplicates, but according to my search, samtools cannot remove duplicates across different chromosomes, so picard is more universal.
Any reply will be much appreciated!

> So, is there any way to speed up picard?
1) Picard stores data in memory until it needs to flush it to disk. The bigger the buffer (option MAX_RECORDS_IN_RAM), the fewer I/O operations you need.
2) Use sambamba markdup http://lomereiter.github.io/sambamba/docs/sambamba-markdup.html (not tested).
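Sketches of both suggestions (buffer size and thread count are illustrative values):

```
# Picard with a larger in-memory buffer before spilling to disk
java -Xmx16g -jar picard.jar MarkDuplicates I=input.bam O=out_markdup.bam \
    METRICS_FILE=out.metrics MAX_RECORDS_IN_RAM=5000000

# sambamba markdup, which is multi-threaded
sambamba markdup -t 8 input.bam out_markdup.bam
```

---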
I'm using ChIPQC 1.10.3 / DiffBind 2.2.12 and deepTools 2.5. **Basically, the idea of this question is to try to reproduce almost the same PCA plot with the two tools.**
**On the right**, I made the PCA plot with ChIPQC:
I have used consensus mode with 250 bp around peaks.
I think I used 275,777 regions where a peak was called for at least one mark. I retrieved these regions from ChIPQC (the union of all regions with peaks across my different marks, to use with the deepTools PCA).
I think ChIPQC does a log2 transformation on the raw counts.
ChIPQCreport(..)
The samples seem to be separated by condition/color (T1, T7, unT7), but PC2 is low compared to PC1. The pink one seems to behave strangely... Is PC2 the effect of the treatment and PC1 a "genomic effect"?
**On the left** I used these 275,777 regions, which I suspect ChIPQC uses, with deepTools.
multiBamSummary BED-file --BED regions.bed --ignoreDuplicates ...
plotPCA ...
The samples cluster well by condition.
I don't understand why PC1 is very high while all the samples line up along a vertical axis! Is this a bug? [here][1]
How can you explain that?
Also, the pink points that were separated in the previous ChIPQC plot now seem to match perfectly! Is this a display effect?
Does deepTools also transform the raw counts in log2?
Devon Ryan said: "the size of the effect accounted for by a PC doesn't tell you anything about how informative that will be in discriminating between samples... The size of an effect is telling you about how much it contributes to the variation inside a sample. In ChIPseq that tends to be dominated by genomic position, since you'll have things like GC bias or other random library-prep. artefacts that are common between samples" [here][2]
Sorry, but I don't understand how you interpret the degree of a PC.
Why is your argument about genomic position more relevant for ChIP-seq than RNA-seq? If you are looking at the same histone mark between samples with different treatments, shouldn't it play only a minor role?
**Update:**
Also, what is the impact of --rowCenter? It totally changes the PCs and the plot ;-/
![enter image description here][2]
![pca plot][3]
[1]: https://github.com/fidelram/deepTools/issues/477
[3]: https://github.com/ZheFrenchKitchen/pics/blob/master/pcaPLOT.png?raw=true
[2]: https://github.com/ZheFrenchKitchen/pics/blob/master/rawCenter.png?raw=true

Until version 2.6, plotPCA is only able to calculate the PCA of a matrix with regions as rows and samples as columns. ChIPQC is likely performing the PCA on the transposed matrix. In version 2.6 deepTools will be able to do this too (you can install the develop branch from GitHub if you need it now), because I had to implement my own PCA method, since the one provided by matplotlib is artificially limited. Anyway, this transposition of the matrix is the difference between the two plots.
> Does deepTools also transform the raw counts in log2?
DeepTools does not do anything different with the values you input when performing a PCA (just the standard centering/scaling).
> Sorry, but I don't understand how you interpret the degree of a PC.
I don't know what you mean by "degree" in this context.
> Why is your argument about genomic position more relevant for ChIP-seq than RNA-seq?
ChIPseq has historically had a number of problems not generally seen in RNAseq. Chief among those are a variety of genomic position biases caused by things like GC bias in TAQ polymerases or "blacklisted regions". That generally tends to be what will dominate PC1 if you perform a PCA with regions as rows. In such cases, you would very much like to see your samples stacking up in a vertical line on PC1, since that indicates that these biases are similar between samples.
> If you are looking at the same histone mark between samples with different treatments, shouldn't it play only a minor role?
Your biological effect will be minuscule compared to the background biases.
> Is PC2 the effect of the treatment and PC1 a "genomic effect"?
PCs will generally not correspond to any coherent biological concept, particularly when you perform the PCA on a transposed matrix.
> Also what is the impact of --rowCenter
It subtracts the row mean from each row, which effectively removes the "background genomic position bias" signal. Frankly, that was mostly a hack I put in because I rarely want to see that signal and hadn't yet gotten around to implementing the `--transpose` option.
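For reference, the ChIPQC-like view would then look roughly like this (a sketch against deepTools >= 2.6, where `--transpose` exists as described above; file names are placeholders):

```
multiBamSummary BED-file --BED regions.bed \
    --bamfiles sample1.bam sample2.bam --ignoreDuplicates -o counts.npz

# samples as rows (ChIPQC-like view)
plotPCA -in counts.npz --transpose -o pca_transposed.png

# regions as rows, with the shared genomic-position signal removed
plotPCA -in counts.npz --rowCenter -o pca_rowcentered.png
```

---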
Before I report this as a bug, can someone show me the proper way to add metadata using the `AddMetaData` function?
I first read in a newline-delimited file with the metadata labels I want, using read.table. The timecell_columns.txt file looks like this:
"12wks"
"12wks"
"12wks"
"12wks"
"12wks"
...
So I do this command to make the dataframe:
timecell_col <- read.table("~/Projects/FetalPancreas/timecell_columns.txt", header=FALSE, sep="\n")
and then this to AddMetaData:
AddMetaData(object = scfp, metadata = timecell_col, col.name = 'time.cell')
However, when I do:
head(x = scfp@meta.data)
All I see is the original meta data values:
orig.ident nCount_RNA nFeature_RNA
GSM2978830 scfetpan 382565 5080
GSM2978831 scfetpan 634726 4932
GSM2978832 scfetpan 565912 4501
GSM2978833 scfetpan 717152 5117
GSM2978834 scfetpan 207659 3508
GSM2978835 scfetpan 869148 5055
Am I reading the data in correctly?
Any help would be appreciated!
Very Respectfully,
Pratik

Is that your exact command line? Not:

    scfp <- AddMetaData(object = scfp, metadata = timecell_col, col.name = 'time.cell')

---
I am having some difficulty producing a script for parsing an alignment file in the following format, generated by RepeatMasker. For example:
665 28.45 2.93 5.02 g5129s420 7350 7882 (1924) C MIR#SINE/MIR (1) 261 28 3
g5129s420 7350 ATCATAACAAACATTTAT--GGTGCCTCCTATGGAGCAGGGATTTTGCTT 7397
v v i i i v viv v i v v v
C MIR#SINE/MIR 261 ATAATAACCAACATTTATTGAGCGCTTACTATGTGCCAGGCACTGTTCTA 212
g5129s420 7398 AGGACTCTGAACTATAT---CTTACTT-GTCTTCATTAAAAACCTTATGA 7443
vi i iv i i i i i i v i
C MIR#SINE/MIR 211 AGCGCTTTACA-TGTATTAACTCATTTAATCCTCA-CAACAACCCTATGA 164
g5129s420 7444 AAAAGGTACTATTATTAACTGGGGXTGGGTTGTTTAACAGATAAGAAAGC 7787
iiv v i iii v i i i
C MIR#SINE/MIR 163 GGTAGGTACTATTATTATCC---------CCATTTTACAGATGAGGAAAC 123
g5129s420 7788 TTAAGAATTAGAGAGATAAATTATCTTGCTTAAGGTAACACAGTTAACAA 7837
v i v i i v v v ii v i ii
C MIR#SINE/MIR 122 TGAGGCA-CAGAGAGGTTAAGTAACTTGCCCAAGGTCACACAGCTAGTAA 74
g5129s420 7838 GCATTAG-GTCAAAGTTTGAACTCGGGCAGTCTGACTACAGAGCCC 7882
iivi i iiii i i i i v i
C MIR#SINE/MIR 73 GTGGCAGAGCCGGGATTCGAACCCAGGCAGTCTGGCTCCAGAGTCC 28
Transitions / transversions = 1.96 (45 / 23)
Gap_init rate = 0.03 (8 / 234), avg. gap size = 2.38 (19 / 8)
I would like to parse the file using BioPython such that I obtain the chromosome/scaffold name (g5129s420), the start and end (7350 7882), and the transitions/transversions. Any ideas on how to write this script would be most welcome, as I am a complete novice at scripting.

There may be BioPython modules or some other parser for this job. Here is my solution, based on Python 3.x, if you know the chromosome/scaffold names:
    seqname = ['g5129s420', 'g5129s421']  # list sequence names here
    dic, trans = {}, {}
    for names in seqname:
        dic[names] = []
        trans[names] = []
    with open("file.txt") as f:
        for line in f:
            line = line.strip().split(' ')
            line = [item for item in line if item != '']
            if len(line) > 0:
                if line[0] in seqname:
                    # alignment block line: record start and end coordinates
                    dic[line[0]].append(int(line[1]))
                    dic[line[0]].append(int(line[-1]))
                    seqname1 = line[0]
                elif line[0] == 'Transitions':
                    # e.g. "Transitions / transversions = 1.96 (45 / 23)"
                    trans[seqname1].append(line[4])
                    trans[seqname1].append(line[5].strip('('))
                    trans[seqname1].append(line[7].strip(')'))
    for key, values in dic.items():
        if len(values) > 0:
            print(key, ':', min(values), max(values), trans[key][0], trans[key][1], trans[key][2])
Output:
    g5129s420 : 7350 7882 1.96 45 23

---
I downloaded a HapMap VCF file for one chromosome; the file size is 11 GB. I am trying to use LiftoverVcf from the Picard package, but it runs out of memory. I am thinking of splitting the 11 GB file into smaller files of one GB each. Is there any tool that can do that, or is there a better idea?
I already used `-Xmx200g -Djava.io.tmpdir=./mytmp`.

Using GATK:
- [RandomlySplitVariants][1]
>This tool takes a VCF file, randomly splits variants into different sets, and writes the results to separate files. By default the tool splits the input into two new sets, but it can be made to output more than two separate call sets.
[1]: https://software.broadinstitute.org/gatk/documentation/tooldocs/3.8-0/org_broadinstitute_gatk_tools_walkers_variantutils_RandomlySplitVariants.php
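If GATK is inconvenient, an alternative is to split by coordinate windows with bcftools instead (a sketch; the window boundaries are placeholders, and the VCF must be bgzipped and indexed):

```
bgzip hapmap_chr1.vcf && tabix -p vcf hapmap_chr1.vcf.gz

# one output file per window; loop over windows as needed
bcftools view -r chr1:1-50000000 hapmap_chr1.vcf.gz -Oz -o chr1_part1.vcf.gz
bcftools view -r chr1:50000001-100000000 hapmap_chr1.vcf.gz -Oz -o chr1_part2.vcf.gz
```

---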
I've been doing some work that involves characterizing potential chromatin states in a HCT116 cell model. I've successfully been able to run `ChromHMM` to identify chromatin states using a variety of histone markers and then overlap them with other datasets in order to double-check their annotation.
However, I am having a problem understanding some of the output files that `ChromHMM` automatically generates, specifically `_emissions.txt` and `_*.bed`. I know there are a couple of people here who are really familiar with the software and could probably help me out.
I have already searched Google and read the ChromHMM manuscript, but neither provided answers.

The `_emissions.txt` files hold the values that go into the `_emissions.png` figures. Each row is a state, each column is an input data file ("mark" or histone mark in the terminology of ChromHMM). Darker blue indicates a higher likelihood of finding that mark in that state. These, combined with running OverlapEnrichment with biologically meaningful datasets, are critical for figuring out how to interpret the states.
The `segments.bed` file partitions the genome into contiguous segments, and the names of each feature in that file (E1, E2, etc) correspond to the states (1, 2, etc) in the `_emissions.png`.
A typical workflow is to figure out what to label each state, then choose some colors and post-process the BED file with labels and names to get something more useful for downstream analysis.
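The OverlapEnrichment run mentioned above looks roughly like this (a sketch; the memory flag and names are placeholders, and the coordinate directory should contain BED files of annotations you care about):

```
java -mx4000M -jar ChromHMM.jar OverlapEnrichment \
    mysample_15_segments.bed COORDS_DIR overlap_out
```

---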
I have got a BED file (used in PLINK, with .bim and .fam files) which contains Affy 500K data. I want to convert it back to a PED file with a MAP file. Does anyone know how to do this? Thank you very much!

    plink --bfile filename --recode --tab --out myfavpedfile

[plink](http://pngu.mgh.harvard.edu/~purcell/plink/dataman.shtml#recode)

---
Hey ya'll,
Struggling here.
The dataframe was generated using a new package called InterCom ([https://github.com/saschajung/InterCom][1]), which basically determines ligand receptor pairing and also together with sending and receiving cells using scRNA-seq data. Incredibly useful if you ask me.
Here is a sample of my data:
"score" "significance" "lig.rec.pop" "ligand.receptor"
25507.7307692308 0.950689655172414 "undef_acinar" "FSTL1_BMPR1A"
113286.378762542 0.994068965517241 "mesenchymal_acinar" "FSTL1_BMPR1A"
56944.7025939177 0.989103448275862 "endothelial_acinar" "FSTL1_BMPR1A"
24467.0634475014 0.94551724137931 "mesenchymal_acinar" "BMP4_BMPR1A"
45149.7188075585 0.985034482758621 "mesenchymal_acinar" "BMP4_BMPR2"
20099.7619047619 0.916 "blood_acinar" "IL1B_IL1R1"
19786.641 0.913310344827586 "alpha_acinar" "VTN_ITGAV"
56609.9870823529 0.989034482758621 "mesenchymal_acinar" "FN1_ITGAV"
The maximum score value is 6611701.91, and the score minimum value is 36320.27,
Could someone help me get started with a good way to scale the data (if I even need to?, and generate a ComplexHeatmap? My main struggle is figuring out how to get it into the format to plug it into ComplexHeatmap.
I was able to do it through ggplot2, but I want to do it through ComplexHeatmap, as I think it is easier to add bells and whistles there. The only reason I managed with ggplot2 was because I specified what x, y and fill values to use:
ggplot(hm, aes(x = ligand.receptor, y = lig.rec.pop, fill = score)) +
geom_tile() + theme(axis.text.x = element_text(angle = 90))
Ideally, I would love a heatmap/plot like this:
![enter image description here][2]
I realize that adding the time points (i.e. weeks 7 and 9) might be a matter of adding metadata. But really, if someone could just help me get started, I would be grateful.
Thank you in advance!
Image Citation: Gonçalves, C.A., Larsen, M., Jung, S. et al. A 3D system to model human pancreas development and its reference single-cell transcriptome atlas identify signaling pathways required for progenitor expansion. Nat Commun 12, 3144 (2021). https://doi.org/10.1038/s41467-021-23295-6
[1]: https://github.com/saschajung/InterCom
[2]: /media/images/1a0a4c91-f922-495b-8927-f5671604 | The chunk below will convert your data to the matrix format ComplexHeatmap expects. You can then modify the heatmap.

Assuming your data is in the **tdf** data frame:
    library(ComplexHeatmap)

    lig = unique(tdf$lig.rec.pop)
    rec = unique(tdf$ligand.receptor)

    # build a (sender_receiver x ligand_receptor) score matrix,
    # defaulting to 0 wherever a pair was not scored
    mat = matrix(0, nrow = length(lig), ncol = length(rec))
    rownames(mat) = lig
    colnames(mat) = rec

    # fill in the scores row by row
    for(i in seq_len(nrow(tdf)))
    {
      mat[tdf$lig.rec.pop[i], tdf$ligand.receptor[i]] = tdf$score[i]
    }

    Heatmap(mat)
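Since your scores span several orders of magnitude (about 3.6e4 up to 6.6e6), a few large values will dominate the colour scale. One simple option (my suggestion, not something InterCom prescribes) is to plot on a log scale:

    Heatmap(log10(mat + 1), name = "log10(score)")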
| biostars | {"uid": 9474038, "view_count": 1253, "vote_count": 1} |
Hi All,
I need to split my fastq file, which is composed of 30 chromosomes, into 30 different files, each containing the information from one chromosome. Technically, I need to split this kind of file into separate ones:
    @chr1
    nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnatgctgggtgatctttagtcnnnnnnnnnn
    nnnnnnnnnnnnnnnnatggggtcatgtacacacacacattggatannnnnnnnnnnnnn
    nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnatgctgggtgatctttagtcnnnnnnnnnn
    ...
    @chr2
    nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnatgctgggtgatctttagtcnnnnnnnnnn
    nnnnnnnnnnnnnnnnatggggtcatgtacacacacacattggatannnnnnnnnnnnnn
    nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnatgctgggtgatctttagtcnnnnnnnnnn
    ...
I tried to use:

    awk '/^@chr1$/,/^+$/' consensus.fastq | perl -pe "s/@/>/ ; s/\+//" > chr1.fasta
but that gives me this:
    @chr1
    nnnnnagtnnnnnnnnnnnnnnnnnnnnnttgcnnnnnnnnnnnnnnnnnnnnnngcnnn
    nnnntgaaannnnnnnnnnnntcnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn
    nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn
Has anybody run into this kind of problem before? Could somebody give me some advice?
Thanks a lot!

Nath
| If you have [EMBOSS](http://emboss.sourceforge.net/) installed
seqretsplit -sformat fastq-sanger -osformat fastq file.fastq
will give you one Fastq record per file. The files will be named after the sequence ID, so chr1.fastq, chr2.fastq etc. Note that you should use `-sformat fastq-sanger`, `fastq-illumina` or `fastq-solexa`, depending on which encoding your file uses. See [this question](https://www.biostars.org/p/642/) on Fastq format.
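If EMBOSS isn't available, a plain awk sketch can do the same split. This assumes every record starts with a header line like `@chr1`, and writes Fasta output (matching your own attempt):

    awk '/^@chr/ { if (out) close(out); sub(/^@/, ">"); out = substr($1, 2) ".fasta" } { print > out }' consensus.fastq

It writes chr1.fasta, chr2.fasta, ... into the current directory.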
To get Fasta output (you don't explicitly ask for it, but your code implies it), simply change the `-osformat` argument.
seqretsplit -sformat fastq-sanger -osformat fasta file.fastq | biostars | {"uid": 3889, "view_count": 6843, "vote_count": 4} |
Hi, I am running BWA-MEM to align my PE reads against the human reference genome (GRCh38). Although it is running, I am encountering what looks like an error message. Please have a look at the command and the resulting message:
```
/usr/local/bwa-0.7.12/bwa mem -t 14 \
-M /san/illumina_two/rsindhu_sge/Human_ref_genomes/GRCh38/FINAL/GRCh38.fa \
Read1.fastq.gz Read2.fastq.gz > Read12.bwa.sam
```
Part of the message:
```
[M::bwa_idx_load_from_disk] read 0 ALT contigs
[M::process] read 1429588 sequences (140000077 bp)...
[M::process] read 1431216 sequences (140000054 bp)...
[M::mem_pestat] # candidate unique pairs for (FF, FR, RF, RR): (19, 558476, 545, 5)
[M::mem_pestat] analyzing insert size distribution for orientation FF...
[M::mem_pestat] (25, 50, 75) percentile: (126, 180, 401)
[M::mem_pestat] low and high boundaries for computing mean and std.dev: (1, 951)
[M::mem_pestat] mean and std.dev: (243.78, 179.56)
[M::mem_pestat] low and high boundaries for proper pairs: (1, 1226)
[M::mem_pestat] analyzing insert size distribution for orientation FR...
[M::mem_pestat] (25, 50, 75) percentile: (334, 387, 448)
[M::mem_pestat] low and high boundaries for computing mean and std.dev: (106, 676)
[M::mem_pestat] mean and std.dev: (391.91, 87.32)
[M::mem_pestat] low and high boundaries for proper pairs: (1, 790)
[M::mem_pestat] analyzing insert size distribution for orientation RF...
[M::mem_pestat] (25, 50, 75) percentile: (24, 49, 80)
[M::mem_pestat] low and high boundaries for computing mean and std.dev: (1, 192)
[M::mem_pestat] mean and std.dev: (54.23, 40.99)
[M::mem_pestat] low and high boundaries for proper pairs: (1, 248)
[M::mem_pestat] skip orientation RR as there are not enough pairs
[M::mem_pestat] skip orientation FF
[M::mem_pestat] skip orientation RF
[M::mem_process_seqs] Processed 1429588 reads in 1170.454 CPU sec, 83.775 real sec
[M::process] read 1429172 sequences (140000054 bp)...
[M::mem_pestat] # candidate unique pairs for (FF, FR, RF, RR): (15, 558767, 537, 10)
[M::mem_pestat] analyzing insert size distribution for orientation FF...
[M::mem_pestat] (25, 50, 75) percentile: (100, 207, 290)
[M::mem_pestat] low and high boundaries for computing mean and std.dev: (1, 670)
[M::mem_pestat] mean and std.dev: (212.33, 136.90)
[M::mem_pestat] low and high boundaries for proper pairs: (1, 860)
[M::mem_pestat] analyzing insert size distribution for orientation FR...
[M::mem_pestat] (25, 50, 75) percentile: (332, 385, 446)
[M::mem_pestat] low and high boundaries for computing mean and std.dev: (104, 674)
[M::mem_pestat] mean and std.dev: (390.32, 87.22)
[M::mem_pestat] low and high boundaries for proper pairs: (1, 788)
[M::mem_pestat] analyzing insert size distribution for orientation RF...
[M::mem_pestat] (25, 50, 75) percentile: (24, 46, 86)
[M::mem_pestat] low and high boundaries for computing mean and std.dev: (1, 210)
[M::mem_pestat] mean and std.dev: (56.51, 44.37)
[M::mem_pestat] low and high boundaries for proper pairs: (1, 272)
[M::mem_pestat] analyzing insert size distribution for orientation RR...
[M::mem_pestat] (25, 50, 75) percentile: (379, 1210, 2961)
[M::mem_pestat] low and high boundaries for computing mean and std.dev: (1, 8125)
[M::mem_pestat] mean and std.dev: (1898.40, 2191.92)
[M::mem_pestat] low and high boundaries for proper pairs: (1, 10707)
[M::mem_pestat] skip orientation FF
[M::mem_pestat] skip orientation RF
[M::mem_pestat] skip orientation RR
..........
```
Also, the size of the .sam file is smaller than it was for a run that produced no such messages (earlier it was 34G and now 14G). Please provide your suggestions.
Many Thanks
Ravi | These are just log messages, not errors. Lines prefixed with `[M::]` are informational (bwa marks actual errors with an `[E::]` prefix): the `mem_pestat` lines describe how bwa estimates the insert-size distribution for each batch of reads, and "skip orientation FF/RF/RR" only means those pair orientations had too few supporting pairs to be used for proper pairing, which is expected. If the output SAM really is smaller than a previous run, compare the inputs and check that the run finished, rather than blaming these messages. | biostars | {"uid": 147721, "view_count": 17152, "vote_count": 3}
I grabbed the first couple of columns from my whole-genome VCF file; it looks like this. Are there any tools (web-based, Python, or R) that I could use to get SNP identifiers (e.g. `rs429358` or `rs7412`) for all of my SNPs that are in a particular database? I'm very new to working with VCF files, and I want to figure out my blood type. I'm very comfortable coding in both Python and R if there are any packages available for these languages. I would like to avoid, if possible, depositing my sequences with a third party that could potentially use my information for their own gain, but I am not opposed to having them as references in case I can't figure out any other options.
#CHROM POS ID REF ALT QUAL
chrM 64 . C T 3070.00
chrM 73 . A G 3070.00
chrM 146 . T C 3070.00
chrM 153 . A G 3070.00
chrM 263 . A G 3070.00
chrM 310 . T C 3070.00
chrM 513 . GCA G 3070.00
chrM 663 . A G 3070.00
chrM 750 . A G 3070.00
chrM 1438 . A G 3070.00
chrM 1598 . G A 3070.00
chrM 1736 . A G 3070.00
chrM 1888 . G A 3070.00
chrM 2706 . A G 3070.00
chrM 3106 . CN C 3070.00
chrM 4248 . T C 3070.00
chrM 4769 . A G 3070.00
chrM 4824 . A G 3070.00
chrM 7028 . C T 3070.00
chrM 8027 . G A 3070.00
chrM 8794 . C T 3070.00
chrM 8860 . A G 3070.00
chrM 11719 . G A 3070.00
| Make a BED file of SNPs for your genome of interest, e.g. `hg38` and dbSNP v147:
    $ export LC_ALL=C
$ wget -qO- http://hgdownload.cse.ucsc.edu/goldenpath/hg38/database/snp147.txt.gz \
| gunzip -c \
| awk -v OFS="\t" '{ print $2,$3,($3+1),$5 }' \
| sort-bed - \
> hg38.snp147.bed
Convert your VCF file to BED and map the ID field from the SNP file:
$ bedmap --echo --echo-map-id --delim '\t' <(vcf2bed --sort-tmpdir=$PWD < variants.vcf) hg38.snp147.bed > answer.bed
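Both `vcf2bed` and `bedmap` ship with the BEDOPS suite; if you don't have them yet, one way to install it (assuming a conda setup) is:

    $ conda install -c bioconda bedops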
The `rs*` ID for the variant will be in the last column of `answer.bed`. | biostars | {"uid": 300169, "view_count": 8076, "vote_count": 2} |
I wonder what the relationship is between sequencing depth and gene expression level in RNA-seq?

For example, let's say I have RNA-seq with 10M reads for one sample. If I use the same machine and generate 20M reads for the same sample then, apart from newly detected genes, will the expression profiles of the genes expressed in both cases be similar? In other words, if I plot density vs log2(count) for both, will they have a similar shape?
| Yes, if you use the same method to normalize the counts, you will get similar MA plots (in theory). At greater coverage you may see more dots, because more genes will be detected.

The reads generated during sequencing should be randomly distributed, unless the process runs into a problem. After normalization, you should have similar values for the genes that were detected in both cases.

More coverage is useful for detecting lowly expressed genes.

You will see significant differences in the MA plot if you change the number of biological replicates: with more replicates, there will be more differentially expressed genes.
| biostars | {"uid": 137117, "view_count": 2097, "vote_count": 1} |
Hi,
I would like to know how complex it is to analyze NGS data. Is it possible to learn NGS data analysis from online resources, or should one learn under the guidance of an expert? How do I get the core concepts of NGS data analysis? How do I configure parameters while using open source tools (assembly, alignment, statistics, etc.)? I have a masters degree in bioinformatics with Unix, Perl and basic core Java skills. Any advice is appreciated. |
- download [bwa](http://bio-bwa.sourceforge.net/), [samtools](http://samtools.sourceforge.net/) and a reference genome
- generate a random set of reads using [samtools](http://samtools.sourceforge.net/)/misc/wgsim
- index the genome
- align the reads and generate a [sam](http://samtools.sourceforge.net/SAM1.pdf) output
- describe each column of the [sam](http://samtools.sourceforge.net/SAM1.pdf)
- generate a vcf from the [sam](http://samtools.sourceforge.net/SAM1.pdf) using [samtools](http://samtools.sourceforge.net/)/mpileup
- describe each column of the vcf
- use ensembl/vep to predict the consequences of the variations

A command-line sketch of these steps is below.
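A minimal sketch of those steps (an assumption on my part: exact flags vary between bwa/samtools/bcftools versions, so check each tool's help):

    bwa index ref.fa                                                   # index the genome
    wgsim -N 100000 ref.fa reads1.fq reads2.fq > wgsim_mutations.txt   # simulate reads
    bwa mem ref.fa reads1.fq reads2.fq > aln.sam                       # align
    samtools sort -o aln.bam aln.sam                                   # SAM -> sorted BAM
    samtools mpileup -uf ref.fa aln.bam | bcftools call -mv > var.vcf  # call variants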
| biostars | {"uid": 80739, "view_count": 5569, "vote_count": 4} |
I was seeking pathway analysis for a gene list and came across two contradictory things, which made me quite confused.

In https://support.bioconductor.org/p/54827/ the discussion is about enrichment analysis, and "Paul Shannon" suggested the code below to calculate p-values.
```r
library(KEGGREST)
library(org.Hs.eg.db)
# created named list, eg: path:map00010: "Glycolysis / Gluconeogenesis"
pathways.list <- keggList("pathway", "hsa")
# make them into KEGG-style human pathway identifiers
pathway.codes <- sub("path:", "", names(pathways.list))
# for demonstration, just use the first ten pathways
# not all pathways exist for human, so TODO: tryCatch the
# keggGet to be robust against those failures
# subsetting by c(TRUE, FALSE) -- which repeats
# as many times as needed, sorts through some
# unexpected packaging of geneIDs in the GENE element
# of each pw[[n]]
# genes.by.pathway <- sapply(pathway.codes,
# function(pwid){
# pw <- keggGet(pwid)
# pw[[1]]$GENE[c(TRUE, FALSE)]
# })
load(file.path(Data_path, "KEGG_Gene_Pathways.RData"))  # file.path() joins the pieces without inserting a space
all.geneIDs <- keys(org.Hs.eg.db)
# chose one of these for demonstration. the first (a whole genome random
# set of 100 genes) has very little enrichment, the second, a random set
# from the pathways themsevles, has very good enrichment
genes.of.interest <- c("23118", "23119", "23304", "25998", "26001", "51043",
"55632", "55643", "55743", "55870", "7314", "56254",
"7316", "144193","784", "8837", "1111", "84706",
"200931","169522","5707", "5091", "5901", "55532",
"9777")# the hypergeometric distribution is traditionally explained in terms of
# drawing a sample of balls from an urn containing black and white balls.
# to keep the arguments straight (in my mind at least), I use these terms
# here also
pVals.by.pathway <- sapply(names(genes.by.pathway),
function(pathway) {
pathway.genes <- genes.by.pathway[[pathway]]
white.balls.drawn <- length(intersect(genes.of.interest, pathway.genes))
white.balls.in.urn <- length(pathway.genes)
total.balls.in.urn <- length(all.geneIDs)
black.balls.in.urn <- total.balls.in.urn - white.balls.in.urn
total.balls.drawn.from.urn <- length(genes.of.interest)
dhyper(white.balls.drawn,
white.balls.in.urn,
black.balls.in.urn,
total.balls.drawn.from.urn)
})
print(pVals.by.pathway)
```
As you know, dhyper returns the point probability

$$P(X = k) = \frac{\binom{K}{k}\binom{N-K}{n-k}}{\binom{N}{n}}$$

(with K white balls in an urn of N total, n balls drawn, and k white balls drawn).
On the other hand, http://www.tongji.edu.cn/~qiliu/help/help_3_ORA.html provides a complete definition of pathway over-representation analysis, and says the p-value is calculated as

$$p = 1 - \sum_{x=0}^{k-1} \frac{\binom{K}{x}\binom{N-K}{n-x}}{\binom{N}{n}}$$
Actually, my problem is not about the summation over 'x', but it is about the subtraction of the formula from '1'.
Which of them is true? Does anyone know about it? | It's shown in the notation. You have to read the whole paragraph to understand the difference.
The first formula is the probability that X=k. That's not a hypothesis test p-value at all. If you want to know how uncommon such an event is, you sum the probabilities of all events at least as extreme; equivalently, you sum the less extreme ones and take the difference from 100%. That's what's happening in the second formula.
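A quick R sketch with made-up numbers (my own illustration of the relationship):

```r
k <- 5; K <- 40; N <- 10000; n <- 100           # overlap, pathway size, universe, gene list size
dhyper(k, K, N - K, n)                          # P(X = k): a point probability, not a p-value
sum(dhyper(k:n, K, N - K, n))                   # P(X >= k): the over-representation p-value
phyper(k - 1, K, N - K, n, lower.tail = FALSE)  # the same value via the built-in CDF
```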
In the second formula it's adding up the probabilities of the less extreme events and showing the difference from 100%: the rareness instead of the likeness. | biostars | {"uid": 146425, "view_count": 2288, "vote_count": 1}
Dear All,
I have two networks. Both networks have common nodes as well as unique ones. Is there any way I can combine the two and analyze the result? | Using **R** igraph `union()` (for named graphs, vertices with the same name are merged and the edges of both graphs are kept):
library(igraph)
net1 <- graph_from_literal(A-B:C:D, A-C, B-C)
net2 <- graph_from_literal(A-X:Y:Z, Z-Y)
par(mfrow = c(1, 3))
set.seed(1)
plot(net1, vertex.size = 50, main = "net1")
plot(net2, vertex.size = 50, main = "net2")
    plot(union(net1, net2), vertex.size = 50, main = "union of net1 and net2")
| biostars | {"uid": 444176, "view_count": 2135, "vote_count": 1} |
Is there any way to convert a cytoband location (e.g., 17p13.3) to its genomic coordinates programmatically (i.e., without having to do a manual conversion)? A web API such as BioMart would be ideal, but I haven't found a way to do it there. Entrez Eutils would also be okay, but I likewise haven't been able to figure out how to do it that way.
|

    $ CHR="chr17"
    $ LOCATION="p13.3"
    $ wget -qO- "http://hgdownload.cse.ucsc.edu/goldenPath/hg19/database/cytoBand.txt.gz" \
        | gunzip -c \
        | awk -v chr="${CHR}" -v loc="${LOCATION}" '$1 == chr && $4 == loc' \
        > answer.bed

Matching both the chromosome and the band-name fields exactly avoids picking up sub-bands like p13.31 or p13.32 along with p13.3.
| biostars | {"uid": 127562, "view_count": 3621, "vote_count": 1} |
I have a list with 20 dictionaries that are similar to these:
**dic1** = {"aa":1, "ba":1, "ca":1, "dd":2, "ee":2, "fa":2, "ga":4, "hh":4, "ia":1}
**dic2** = {"aa":1, "bc":1, "cd":2, "dd":2, "ea":2, "fg":2, "gk":4, "hh":4, "mb":5}
**dic3** = {"ab":1, "bd":2, "cm":2, "dj":2, "ej":2, "fa":2, "gg":2, "ha":4}
---
I have to apply the same formula to each dictionary and I would like to output a list with the results of the formula per dictionary (thus, a list with 20 integers).
The formula will vary according to the "**value**" amount:
In summary, per dictionary I need to calculate final_formula:
If value == 1, then my partial formula is: formula1 = (number of unique keys with value 1) * [1 - x + x * (x - 10/x)^1]

If the value is anything above 1 (else), then my intermediary formulas are: formula_i = (number of unique keys with value i) * [(value i) - x + x * (x - 10/x)^(value i)]

final_formula = formula1 + formula2 + formula3 + ... (n formulas, where n is the number of distinct values != 1). X is a constant that I will define.
The output should be an integer.
---
A practical example:
**For my dic1**, I have 4 keys with value ==1, 3 keys with value ==2, 2 keys with value ==4
The formula for dic1 thus is:
> final_formula_dic1 = 4 * [1 - x + x * (x - 10/x)^1] + 3 * [2 - x + x * (x - 10/x)^2] + 2 * [4 - x + x * (x - 10/x)^4]
**For my dic2**, I have 2 keys with value ==1, 4 keys with value == 2, 2 keys with value == 4, 1 key with value == 5
The formula for dic2 thus is:
> final_formula_dic2 = 2 * [1 - x + x * (x - 10/x)^1] + 4 * [2 - x + x * (x - 10/x)^2] + 2 * [4 - x + x * (x - 10/x)^4] + 1 * [5 - x + x * (x - 10/x)^5]
**For my dic3**, I have 1 key1 with value ==1, 6 keys with value == 2, 1 key with value == 4
The formula for dic3 thus is:
> final_formula_dic3 = 1 * [1 - x + x * (x - 10/x)^1] + 6 * [2 - x + x * (x - 10/x)^2] + 1 * [4 - x + x * (x - 10/x)^4]
I am applying each formula manually to each dictionary, but I have too many dictionaries in my list and this is error prone.
I would really like to iterate over all dictionaries and apply the formula (since it is the same criteria to all dictionaries) and create a list with results, for example, list = ['final_formula_dic1', 'final_formula_dic2', 'final_formula_dic3'...] which would be integers, for instance: list = ['3000', '3200', '1300'...].
I hope this makes sense. Thank you very much in advance! | Hope bioinformatics will benefit.
X = 2 # Define X here
DICTS = [
{"aa":1, "ba":1, "ca":1, "dd":2, "ee":2, "fa":2, "ga":4, "hh":4, "ia":1},
{"aa":1, "bc":1, "cd":2, "dd":2, "ea":2, "fg":2, "gk":4, "hh":4, "mb":5},
{"ab":1, "bd":2, "cm":2, "dj":2, "ej":2, "fa":2, "gg":2, "ha":4},
]
for d in DICTS:
dt = {}
for key, value in d.items():
if value not in dt:
dt[value] = set()
dt[value].add(key)
final_result = 0
final_formula = []
for value in sorted(dt):
            n = len(dt[value])  # number of unique keys with this value
            formula = f'{n} * ({value} - X + X * (X - 10 / X)^{value})'
final_formula.append(formula)
result = n * (value - X + X * (X - 10 / X) ** value)
final_result += result
print(" + ".join(final_formula))
print(final_result)
Output:
    4 * (1 - X + X * (X - 10 / X)^1) + 3 * (2 - X + X * (X - 10 / X)^2) + 2 * (4 - X + X * (X - 10 / X)^4)
    354.0
    2 * (1 - X + X * (X - 10 / X)^1) + 4 * (2 - X + X * (X - 10 / X)^2) + 2 * (4 - X + X * (X - 10 / X)^4) + 1 * (5 - X + X * (X - 10 / X)^5)
    -97.0
    1 * (1 - X + X * (X - 10 / X)^1) + 6 * (2 - X + X * (X - 10 / X)^2) + 1 * (4 - X + X * (X - 10 / X)^4)
    265.0
| biostars | {"uid": 9536822, "view_count": 629, "vote_count": 2} |
Dear all,

I exported human-pig ortholog relationships from Ensembl BioMart. For the gene SIRPA, it showed no pig gene/transcript/protein IDs and no ortholog alignment information, although most other genes do have such information, so I thought pig doesn't have SIRPA. However, when I searched NCBI Gene I did find pig's SIRPA. Why? I wondered whether the two merely share a gene name while pig's SIRPA has diverged a lot from human's, but the gene descriptions suggest they are essentially the same. Could anyone explain this a bit? Is the ortholog information in Ensembl BioMart incomplete? Thank you.
Best,
Yingzi | Orthologues in Ensembl are assigned using a [protein tree pipeline](http://www.ensembl.org/info/genome/compara/homology_method.html) which clusters proteins sequences using HMMs, then builds multiple alignments and reconciles against the species trees. The relationships between genes in these trees are used to [assign homologues](http://www.ensembl.org/info/genome/compara/homology_types.html).
These are recomputed with each Ensembl release and the pipeline was recently changed from using Blast for the clustering to HMMs, which has altered a lot of trees. The [previous version of Ensembl](http://jul2018.archive.ensembl.org/Homo_sapiens/Gene/Compara_Ortholog?anc=12844285;db=core;g=ENSG00000198053;g1=ENSSSCG00000007181;r=20:1894167-1940592;t=ENST00000356025) had human/pig orthologues with [many-to-many relationships](http://jul2018.archive.ensembl.org/Homo_sapiens/Gene/Compara_Tree?anc=12844285;db=core;g=ENSG00000198053;g1=ENSSSCG00000007181;r=20:1894167-1940592;t=ENST00000356025), which has changed. I think this needs to be re-examined and the orthologues restored. I recommend using the archive version of Ensembl for now. | biostars | {"uid": 352686, "view_count": 1713, "vote_count": 1} |
Hello everyone,
I'm pretty new to the field, so forgive me if this is a dumb question.
So, I am working on mapping out single reads to the reference mouse genome, with given 260 fastq files.
What I'd like to do is make all 260 files to be good quality reads. Right now it's a total mess, as in some have bad 'per base sequence quality' and 'per base sequence content', while some reads have bad 'per base GC content', and so on.
My original thought was to run a program that fetches what passed and what failed from the summary.txt in each FastQC directory. Then I'd have lists of files that failed 'per base sequence content', 'per base GC content' and so on. However, because the reads differ between files, I would still have to open each HTML file and inspect it individually to improve the quality (e.g. one file might need 10 bp trimmed while another needs more than that).

Is there a way to program this so that I can bring all the files to good quality at once, rather than checking each fastqc.html myself and making different adjustments to each file?
| Try [multiqc][1], it collects all the fastqc reports into one overall report - extremely handy if you have lots of files.
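For instance (assuming a pip or conda setup):

    pip install multiqc                       # or: conda install -c bioconda multiqc
    multiqc path/to/fastqc_results/ -o multiqc_report

MultiQC picks up the FastQC outputs automatically and summarises all samples in one report.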
[1]: http://multiqc.info | biostars | {"uid": 194344, "view_count": 3553, "vote_count": 4} |
I am working on the wheat genome; I want to do a comparative genome analysis of 3 varieties of wheat. I have sequence files from Illumina 1.9 in fastq format. I checked the quality of the reads with the FastQC tool. The GC content is not in the normal range (47-49). What is a normal %GC value for RNA-seq reads? The other question is that the k-mer content is also out of range. How can I correct it?

For trimming, an adapter sequence file is required, but I don't have this file. Is it possible to fix these two errors? If yes, how can I do it? Or can I skip the trimming step and go straight to mapping?

In this file all parameters look fine except the k-mer content and GC content. Is there any need to trim, and if so, how?
file:///home/comsats-ra/fatimamphilldata/G1_cleaned_R1_fastqc.html#M11 | You can use [fastp](https://github.com/OpenGene/fastp) to trim adapters from Illumina sequencing data, without needing to know the adapter sequences.
Just download fastp and run:
fastp -i in.fq -o out.fq
And then everything is done, the adapters are trimmed in out.fq
For paired end data, the command is like:
fastp -i in1.fq -o out1.fq -I in2.fq -O out2.fq
Gzip is supported for both input and output. | biostars | {"uid": 288243, "view_count": 2575, "vote_count": 1} |
Hi all,

I have a VCF file that was generated using HG18 and I want to convert it to a VCF with HG19 coordinates.

I am aware of the liftOver tool from UCSC, but how do I convert a VCF file from HG18 to HG19 with it, given that the liftOver tool produces a BED file instead?

Thanks in advance.
| You may use GATK "LiftoverVariants". It takes a VCF file as input and outputs a VCF file in the coordinates of the new assembly. Link: https://www.broadinstitute.org/gatk/gatkdocs/org_broadinstitute_gatk_tools_walkers_variantutils_LiftoverVariants.php | biostars | {"uid": 138042, "view_count": 19638, "vote_count": 2}
Hello clever community!
I need your advice.
I am working on a *de novo* plant genome assembly of ~400 Mb.
I have Chromium 10x data, which was assembled with Supernova. I also have Illumina paired-end reads. Now I additionally have PacBio reads, roughly 120x. The genome is diploid and I am thinking about using Falcon.
What do you think should be the best strategy:
1. Assembling PacBio reads and then using a tool to integrate the two assemblies? Is there anything like this? Which tool would you use?
2. Using a tool that can assemble the genome from both the chromium and the PacBio reads? Is there anything like it?
3. Assembling the PacBio reads and using chromium 10x and the illumina for polishing? If I assemble with Falcon, what tool should I use for polishing?
4. Anything else that I am missing to get the best out of what I have?
Thank you very much in advance!
Alex | Hello again alslonik !
Here I'll add my little pinch of salt and recommend having a look at this great manual: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5100563/ . Of course it was not tested on a plant genome, but it will help you orient your choice of assembly strategy depending on the technologies you used and on your sequencing depth.

I already saw that you want to give quickmerge a try, so you may have already seen that manual. Having tried quickmerge myself with 2 different PacBio-only assemblies, I have to say I was really satisfied with the result regarding contiguity and completeness. Since you did a Falcon assembly, you could try merging the Falcon assembly with a Canu assembly as well; it may give some improvement, if you have the time to try it of course!
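For reference, a rough sketch of running quickmerge on two assemblies (treat the exact invocation as an assumption on my part and check the quickmerge README for the current options):

    # merge_wrapper.py ships with quickmerge; it takes the two assemblies to merge
    python merge_wrapper.py falcon_assembly.fasta canu_assembly.fasta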
Cheers,
Roxane | biostars | {"uid": 324156, "view_count": 3032, "vote_count": 2} |
Hi Biostar,
Apologies for this basic question, but I cannot find a simple answer.

I want to sequence several samples of mRNA, paired-end 2x150, and for each sample I want 100 million reads of each mate (100 million forward and 100 million reverse).

The supplier proposes to sequence 3 samples per lane of a HiSeq 4000 flow cell.

So as I understand it: 3 samples x 200 million reads (100 million forward and 100 million reverse) = 600 million reads per lane.

But when I looked at the Illumina HiSeq 4000 specifications, I found this technology can generate only 312,000,000 clusters per lane. How can it generate 600 million reads per lane??
Would it be possible to give me a simple answer? Thank you in advance. | Illumina traditionally counts the two reads coming from a single cluster independently: `312000000` passing clusters give rise to `312000000 x 2 = 624000000` reads per lane, which covers your 3 x 200 million.
Please keep in mind this number is attainable with `excellent` quality genomic DNA libraries. Any deviation from this will result in lower yields. | biostars | {"uid": 344950, "view_count": 3738, "vote_count": 1}
Hello Everyone,
I am trying to use the R package karyoploteR to produce rainfall plots in R. I am having trouble reading my VCF data into R and getting it to work in karyoploteR. I've got their tutorial to work (link here):
[https://bernatgel.github.io/karyoploter_tutorial//Examples/Rainfall/Rainfall.html][1]
I have a VCF with example rows like this (excluding the many ##header lines for readability:
##FORMAT=<ID=FAZ,Number=1,Type=Integer,Description="Reads presenting a A for this position, forward strand">
##FORMAT=<ID=FCZ,Number=1,Type=Integer,Description="Reads presenting a C for this position, forward strand">
##FORMAT=<ID=FGZ,Number=1,Type=Integer,Description="Reads presenting a G for this position, forward strand">
##FORMAT=<ID=FTZ,Number=1,Type=Integer,Description="Reads presenting a T for this position, forward strand">
##FORMAT=<ID=RAZ,Number=1,Type=Integer,Description="Reads presenting a A for this position, reverse strand">
##FORMAT=<ID=RCZ,Number=1,Type=Integer,Description="Reads presenting a C for this position, reverse strand">
##FORMAT=<ID=RGZ,Number=1,Type=Integer,Description="Reads presenting a G for this position, reverse strand">
##FORMAT=<ID=RTZ,Number=1,Type=Integer,Description="Reads presenting a T for this position, reverse strand">
##FORMAT=<ID=PM,Number=1,Type=Float,Description="Proportion of mut allele">
##SAMPLE=<ID=NORMAL,Description="Normal",Accession=.,Platform=ILLUMINA,Protocol=WGS,SampleName=AD0001c,Source=.>
##SAMPLE=<ID=TUMOUR,Description="Tumour",Accession=.,Platform=ILLUMINA,Protocol=WGS,SampleName=AD0001b_lo0019,Source=.>
##FILTER=<ID=DTH,Description="Less than 1/3 mutant alleles were >= 25 base quality">
##INFO=<ID=CLPM,Number=1,Type=Float,Description="A soft flag median number of soft clipped bases in variant supporting reads">
##INFO=<ID=ASMD,Number=1,Type=Float,Description="A soft flag median alignement score of reads showing the variant allele">
##vcfProcessLog_20180918.1=<InputVCF=<AD0001b_lo0019_vs_AD0001c.muts.ids.vcf.gz>,InputVCFSource=<FlagCaVEManVCF.pl>,InputVCFVer=<1.7.3>,InputVCFParam=<sp=.,umv=.,h=.,g=.,f=AD0001b_lo0019_vs_AD0001c.muts.ids.vcf.gz,t=WGS,loud=.,n=AD0001c.bam,ref=genome.fa.fai,m=AD0001b_lo0019.bam,v=flag.to.vcf.convert.ini,s=TIGER,l=2000,ab=.,p=.,c=Tiger_flag.vcf.config.ini,b=.,idx=.,o=D0001b_lo0019_vs_AD0001c_flagged.vcf>>
#CHROM POS ID REF ALT QUAL FILTER INFO FORMAT NORMAL TUMOUR
contig01000014.1 1128 165247d6-b4df-11e8-bec4-89a28c540d42 T C . PASS DP=89;MP=8.8e-01;GP=5.0e-11;TG=TT/CTTTT;TP=8.8e-01;SG=TT/TTTTT;SP=1.2e-01;CLPM=0.00;ASMD=136.00 GT:FAZ:FCZ:FGZ:FTZ:RAZ:RCZ:RGZ:RTZ:PM 0|0:0:0:0:11:0:0:1:27:0.0e+00 0|1:0:0:0:19:0:5:0:26:1.0e-01
contig01000543.1 276 16524d8a-b4df-11e8-bec4-89a28c540d42 T C . PASS DP=67;MP=8.7e-01;GP=1.3e-01;TG=TT/CTTTT;TP=8.7e-01;SG=CT/CTTTT;SP=1.2e-01;CLPM=0.00;ASMD=57.50 GT:FAZ:FCZ:FGZ:FTZ:RAZ:RCZ:RGZ:RTZ:PM 0|0:0:0:0:3:0:1:0:12:6.2e-02 0|1:0:3:1:21:0:3:0:23:1.2e-01
contig01000692.1 2433 16524ee8-b4df-11e8-bec4-89a28c540d42 A T . PASS DP=74;MP=9.8e-01;GP=2.0e-02;TG=AA/AAAAT;TP=8.8e-01;SG=AA/AAATT;SP=9.6e-02;CLPM=9.00;ASMD=101.00 GT:FAZ:FCZ:FGZ:FTZ:RAZ:RCZ:RGZ:RTZ:PM 0|0:19:0:0:3:25:0:0:0:6.4e-02 0|1:11:0:0:5:11:0:0:0:1.9e-01
contig01000694.1 830 16525032-b4df-11e8-bec4-89a28c540d42 C T . PASS DP=114;MP=1.0e+00;GP=5.6e-14;TG=CC/CCCCT;TP=8.6e-01;SG=CC/CCCTT;SP=1.4e-01;CLPM=0.00;ASMD=111.00 GT:FAZ:FCZ:FGZ:FTZ:RAZ:RCZ:RGZ:RTZ:PM 0|0:0:26:0:0:0:42:0:2:2.9e-02 0|1:0:8:0:1:0:26:0:9:2.3e-01
contig01000694.1 872 16525172-b4df-11e8-bec4-89a28c540d42 G T . PASS DP=89;MP=1.0e+00;GP=2.9e-07;TG=GG/GGGTT;TP=7.4e-01;SG=GG/GGGGT;SP=2.4e-01;CLPM=0.00;ASMD=115.50 GT:FAZ:FCZ:FGZ:FTZ:RAZ:RCZ:RGZ:RTZ:PM 0|0:0:0:23:0:0:0:34:3:5.0e-02 0|1:0:0:9:0:0:0:11:9:3.1e-01
I would like to convert this into a GRanges format such that I can subset the 'TUMOUR' data (excluding the 'NORMAL') and plot the variants as a rainfall plot with the KaryotypeR package using these commands from the tutorial (where sm.gr is a GRanges object of the data):
library(karyoploteR)
kp <- plotKaryotype(plot.type=4)
kpPlotRainfall(kp, data = sm.gr)
Could someone please point me towards a good way to read in and convert a VCF to a GRanges format suitable for this task?
Thanks for your help.
[1]: https://bernatgel.github.io/karyoploter_tutorial//Examples/Rainfall/Rainfall.html | library(karyoploteR)
tmp.vcf<-readLines("your_file.vcf")
tmp.vcf.data<-read.table("your_file.vcf")
tmp.vcf<-tmp.vcf[-(grep("#CHROM",tmp.vcf)+1):-(length(tmp.vcf))]
vcf.names<-unlist(strsplit(tmp.vcf[length(tmp.vcf)],"\t"))
names(tmp.vcf.data)<-vcf.names
colnames(tmp.vcf.data)[1] <- "Chr"
colnames(tmp.vcf.data)[2] <- "Start"
    tmp.vcf.data <- cbind(tmp.vcf.data, End = tmp.vcf.data$Start + 1)
tmp.vcf.data <- tmp.vcf.data[, c(1,2,12, 3:11)]
toGRanges(tmp.vcf.data)
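To actually draw the rainfall plot from that object (a sketch following the tutorial commands you quoted):

    sm.gr <- toGRanges(tmp.vcf.data)
    kp <- plotKaryotype(plot.type = 4)
    kpPlotRainfall(kp, data = sm.gr)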
Edit: good luck generating a custom genome based on your contigs in karyoploteR! | biostars | {"uid": 342119, "view_count": 2558, "vote_count": 1}
A lot of papers I've read have used [the qvalue package from Bioconductor][1] to do multiple testing correction. There is a function in the basic R stats package called [p.adjust][2] which seems to do the same thing. What is the advantage of the qvalue package, and why is it so frequently used instead of the built in R version?
[1]: http://www.bioconductor.org/packages/release/bioc/html/qvalue.html
[2]: https://stat.ethz.ch/R-manual/R-patched/library/stats/html/p.adjust.html | `p.adjust()` and the qvalue package aren't actually doing exactly the same thing. They are doing quite similar things with similar ends in mind, but the algorithms are different, so they produce different results (Gordon Smyth, who wrote `p.adjust()`, has a short summary of the history [here][1] and Tim Triche gives a nice explanation of the differences in [the same thread][2]). Interestingly, the packages I use almost always use `p.adjust()` rather than the qvalue package, though if you read a paper that uses them the authors might actually say "q-value" rather than "BH adjusted p-value".
BTW, q-values are a bit more difficult to calculate and you actually need quite a few data points. In effect, you have to use the data at hand to estimate the expected null rate and then multiply that by the BH adjusted p-value. Consequently, `p.adjust()` ends up being more generally applicable, but in cases where q-values are appropriate they tend to give more power.
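A tiny simulation (my own made-up numbers) makes the relationship visible:

    p <- c(runif(900), rbeta(100, 1, 50))  # mostly null p-values plus some signal
    bh <- p.adjust(p, method = "BH")
    library(qvalue)
    q <- qvalue(p)$qvalues
    summary(q / bh)                        # the ratio hovers around the estimated null proportion (pi0)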
[1]: https://stat.ethz.ch/pipermail/bioconductor/2012-December/049902.html
[2]: https://stat.ethz.ch/pipermail/bioconductor/attachments/20121219/00dc27b1/attachment.pl | biostars | {"uid": 128931, "view_count": 20308, "vote_count": 14} |
Hi folks,
I am trying to feed simple results from Muscle into RaxML for processing. The use case here is for cwl-runner to be used as a workflow engine for the Airavata science gateway software. We'll have Airavata run, "cwl-runner muscle-raxml.cwl --infile file.fa [--diags] --model BINGAMMA" (for example) and then have it return the result. The problem right now is that cwl-runner doesn't understand my outputs, which makes sense, because I don't really understand how CWL keeps track of outputs either! When I run it right now, I get the following:
Fornacis:science-gateway-experiment-code jtd$ cwl-runner muscle-raxml.cwl --infile unaligned.fa --diags --model BINGAMMA
/usr/local/bin/cwl-runner 1.0.20161128202906
Resolved 'muscle-raxml.cwl' to 'file:///Users/jtd/science-gateway-experiment-code/muscle-raxml.cwl'
Tool definition failed validation:
While checking field `outputs`
While checking object `file:///Users/jtd/science-gateway-experiment-code/muscle-raxml.cwl#classout`
Field `outputSource` contains undefined reference to `raxmloutput`, tried [u'file:///Users/jtd/science-gateway-experiment-code/muscle-raxml.cwl#classout/raxmloutput', u'file:///Users/jtd/science-gateway-experiment-code/muscle-raxml.cwl#raxmloutput']
While checking field `steps`
While checking object `file:///Users/jtd/science-gateway-experiment-code/muscle-raxml.cwl#raxml`
While checking field `in`
While checking object `file:///Users/jtd/science-gateway-experiment-code/muscle-raxml.cwl#raxml/raxmlinfile`
Field `source` contains undefined reference to `intermediatefile`, tried [u'file:///Users/jtd/science-gateway-experiment-code/muscle-raxml.cwl#intermediatefile']
The idea is for Muscle to generate a file called "intermediatefile" which is then fed into RaxML for processing. RaxML then produces several files which, because of the arguments we provided, will all end with ".out". Sounds sort of logical, but doesn't actually work.
Here's the contents of my three CWL files.
muscle-raxml.cwl:
cwlVersion: v1.0
class: Workflow
inputs:
infile: File
diags: boolean
model: string
outputs:
classout:
type: File
outputSource: raxmloutput
steps:
muscle:
run: muscleraxml-muscle.cwl
in:
muscleinfile: infile
diagsflag: diags
out: [intermediatefile]
raxml:
run: muscleraxml-raxml.cwl
in:
raxmlinfile: intermediatefile
raxml_model: model
out: [raxmloutput]
muscleraxml-muscle.cwl:
cwlVersion: v1.0
class: CommandLineTool
baseCommand: [muscle]
arguments: ["-out intermediatefile"]
inputs:
muscleinfile:
type: File
inputBinding:
position: 1
prefix: -in
diagsflag:
type: boolean
inputBinding:
position: 2
prefix: -diags
outputs:
intermediatefile:
type: File
outputBinding:
glob: intermediatefile
muscleraxml-raxml.cwl:
cwlVersion: v1.0
class: CommandLineTool
label: RaxML wrapper
baseCommand: raxml
arguments: ["-n out -T 2"]
inputs:
raxmlinfile:
type: File
inputBinding:
position: 1
prefix: -s
raxml_model:
type: string
inputBinding:
position: 2
prefix: -m
outputs:
raxmloutput:
type: File
outputBinding:
glob: "*.out"
Help?
Thanks! | In your workflow level file (muscle-raxml.cwl):
- ```raxmloutput``` would refer to a workflow level input (which does not exist)
- ```intermediatefile``` would refer to a workflow level input (which does not exist)
You need to specify the workflow step where these outputs come from:
- instead of ```outputSource: raxmloutput``` you need ```outputSource: raxml/raxmloutput```
- instead of ```raxmlinfile: intermediatefile``` you need ```raxmlinfile: muscle/intermediatefile```
----------
ALSO: your example "cwl-runner muscle-raxml.cwl --infile file.fa [--diags] --model BINGAMMA" indicates that --diags is optional. So you need to make the corresponding input optional at the workflow level (muscle-raxml.cwl):
- instead of ```diags: boolean``` you need ```diags: boolean?```
as well as the tool level (muscleraxml-muscle.cwl):
- instead of: ```diagsflag: type: boolean``` you need: ```diagsflag: type: boolean?```
----------
ALSO: the arguments field in muscleraxml-muscle.cwl needs to be a list of 2 strings instead of just one:
- instead of: ```["-out intermediatefile"]``` you need: ```["-out", "intermediatefile"]```
Here's the contents of the modified CWL files.
muscle-raxml.cwl:
cwlVersion: v1.0
class: Workflow
inputs:
infile: File
diags: boolean?
model: string
outputs:
classout:
type: File
outputSource: raxml/raxmloutput
steps:
muscle:
run: muscleraxml-muscle.cwl
in:
muscleinfile: infile
diagsflag: diags
out: [intermediatefile]
raxml:
run: muscleraxml-raxml.cwl
in:
raxmlinfile: muscle/intermediatefile
raxml_model: model
out: [raxmloutput]
muscleraxml-muscle.cwl:
cwlVersion: v1.0
class: CommandLineTool
baseCommand: [muscle]
arguments: ["-out", "intermediatefile"]
inputs:
muscleinfile:
type: File
inputBinding:
position: 1
prefix: -in
diagsflag:
type: boolean?
inputBinding:
position: 2
prefix: -diags
outputs:
intermediatefile:
type: File
outputBinding:
glob: intermediatefile
| biostars | {"uid": 224430, "view_count": 1907, "vote_count": 2} |
Hello,
If I have a set of orthologous proteins, it is assumed that these will have a shared function due to the maintenance of evolutionary pressure following speciation (as I understand).
If I have a set of proteins that fall within a single PFAM, is the assumption of shared functionality as strong as it would be for the ortholog group? I am guessing not, as the second paragraph of the reference's introduction explicitly says they do not always share function. But then, no assumption is complete, so I imagine there will also be orthologs that do not share function.
I assume there is a degree of uncertainty regarding how high quality the group of orthologs is (e.g. if it is found by RSD or RBH or tree based methods). But with that in mind, would you say the assumption of shared functionality is stronger for the PFAM group or for the orthologous group?
Thanks!
Reference:
Nucleic Acids Research, Volume 47, Issue D1, 08 January 2019, Pages D427–D432, https://doi.org/10.1093/nar/gky995
Edit - for anyone interested there's some interesting discussion here - https://www.biostars.org/p/128/#128 | I would say stronger for the orthologous groups than for the PFAM groups. Pfam works only on conserved domains, while orthogroups (usually) take the whole sequence into account.
Moreover, sharing a certain domain does not mean sharing a common function. Most eukaryotic proteins have a multi-domain structure, so the true biological function is a combination of those domains rather than of one specific domain (though single-domain-determined functions also exist in eukaryotes). Some domains are very common (kinase domains, DNA binding, ...) and others are quite specific.
If the mere absence or presence of a domain determined function (and/or orthology), the problem of defining orthology would have been solved already (which, unfortunately, it is not). | biostars | {"uid": 9475830, "view_count": 1269, "vote_count": 2}
I am trying to plot a Heatmap. My data is as follows.
GO s_3d s_6d s_17d
peptide metabolic process 4.71E-29 2.84828E-20 2.70719E-24
organonitrogen compound biosynthetic process 3.38158E-22 3.26674E-15 6.3903E-22
positive regulation of gene expression 0.057421474
negative regulation of RNA metabolic process 0.05933718
G-protein coupled receptor signaling pathway 0.061777665
positive regulation of macromolecule biosynthetic process 0.061906369
negative regulation of nitrogen compound metabolic process 0.065645467
Basically the data has 4 columns and a LOT of NA values.
I am using the function Heatmap.2 which usually has an option called na.color which can help me color the NA values to a distinct color
Since I have too many NA values, the heatmap function gives me an error
Error in hclustfun(distr) : NA/NaN/Inf in foreign function call (arg 11)
Therefore I tried to replace all NAs with zero and then give zero a distinct color. However, the heatmap still gives me a gradient even though I specify the colors separately. What I want is for all 0's to be white, and everything non-zero to have a gradient from yellow to red. I do not want a gradient between white and yellow. How do I do this?
This is my code
rm(list = ls())
cat("\014")
if (!require("RColorBrewer")) {
install.packages("RColorBrewer", dependencies = TRUE)
library(RColorBrewer)
}
file <- "C:/Users/niran/Downloads/go_cluster.txt" #input
data <- read.table(file,header=T, sep = "\t",stringsAsFactors = F)
row.names(data) <- data[,1]
data <- data[-1]
# Since data is spread too far apart we can do a log transform to reduce the gaps
data[is.na(data)] <- 0 # Find possible solution for this
data <- -log10(data + 0.000000000001)
# creates a own color palette from yellow to red
color.palette <- colorRampPalette(c("#F8F8F8","yellow", "orange", "red"))(n=600)
col_breaks = c(seq(0,0.1,length=1), # for white
seq(0.1,2,length=100),
seq(2.01,4,length=100),
seq(4.01,6,length=100),
seq(6.01,8,length=100),
seq(8.01,10,length=100),
seq(10.01,12,length=100))
Heatmap <- heatmap.2(as.matrix(data),
main = "Heatmap of p-values for GO",
density.info = "none",
trace = "none",
margins = c(5,28),
key.xlab = "log10 Values",
cexRow = 1,
cexCol = 1.5,
keysize=0.75,
col = color.palette, # use on color palette defined earlier
breaks=col_breaks, # enable color transition at specified limits
dendrogram ="both")
dev.off()
| Hi Seigfried,
you give the separate colors, but **colorRampPalette** still interpolates them to create a smooth gradient from white to yellow to orange to red. If I understood you correctly, you would like to first create a smooth gradient from yellow to orange to red and **then** prepend "white". Thus your code should contain something like:
color.palette <- c("#F8F8F8", colorRampPalette(c("yellow", "orange", "red"))(n=599)) | biostars | {"uid": 261685, "view_count": 12760, "vote_count": 1} |
Hi,
We have run a pilot RNA-seq study with one sample per condition; this is just a test run. I understand there is no valid statistical test in this case, but I am curious to obtain differential expression through the edgeR package in R, assuming a BCV of 0.4 for human data. I have a normal (baseline) sample followed by 5 different samples. When I run the edgeR package, I want to indicate my normal sample as the baseline; however, I am unsure which sample is taken as the baseline for the fold-change calculation.
**FC calculation**
> FC = Normal/Test_1 (OR)
> FC = Test_1/Normal
**Samples**
> Normal (baseline) = Normal
> Test (Treated) = Test_1
>
**Data**
dput(df_data)
structure(list(Normal = c(0L, 184L, 60L, 0L, 7L, 0L, 87L, 0L,
0L, 21L, 193L, 29L, 0L, 0L, 3L, 50L), Test_1 = c(0, 140.5, 64,
0, 4, 0, 83, 0, 1, 51.5, 199, 25, 0, 0, 5, 62)), class = "data.frame", row.names = c("Gene1",
"Gene2", "Gene3", "Gene4", "Gene5", "Gene6", "Gene7", "Gene8",
"Gene9", "Gene10", "Gene11", "Gene12", "Gene13", "Gene14", "Gene15",
"Gene16"))
dput(df_metadata)
structure(list(SampleID = c("xxxx1", "xxxx2"), CoreLabID = c("Normal",
"Test_1")), class = "data.frame", row.names = c("Normal", "Test_1"
))
**Here is the code that I am running**
bcv <- 0.4
y <- DGEList(counts=df_data, group=df_metadata$CoreLabID)
et <- exactTest(y, dispersion=bcv^2)
> View(et$table)
> structure(list(logFC = c(0, -0.67280976110796, -0.190706878123648,
> 0, -1.06592239047733, 0), logCPM = c(0.456451013758882, 6.84518828528986,
> 5.46338499556895, 0.456451013758882, 2.37389406911164, 0.456451013758882
> ), PValue = c(1, 0.433579402199822, 0.851984371429117, 1, 0.542580328250669,
> 1)), row.names = c("Gene1", "Gene2", "Gene3", "Gene4", "Gene5",
> "Gene6"), class = "data.frame")
>
> View(et$comparison)
> c("Test_1", "Normal")
Thank you,
Toufiq | edgeR has compared `Test_1` to `Normal` so, yes, the normal sample has been taken as the baseline. `et$comparison` records the numerator and denominator for the fold-change respectively.
You can find this out for yourself by reading the help page for `exactTest`, `help(exactTest)`. See the documentation for the `pair` argument.
If you use the standard edgeR pipeline
```
topTags(et)
```
then the results will be presented in an easy-to-read form and the output will tell you which sample has been compared to which.
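If you ever want to force a particular group to be the baseline, you can relevel the group factor before building the `DGEList` (sketched here with your object names):

```
group <- relevel(factor(df_metadata$CoreLabID), ref = "Normal")
y <- DGEList(counts = df_data, group = group)
```

`exactTest` compares the first two factor levels by default, so the releveled baseline ends up as the denominator of the fold-change.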
| biostars | {"uid": 9472421, "view_count": 968, "vote_count": 1} |
I need to convert `blast` output in tabular format (`format 6`) to `bed` format. I'm interested only in the `qstart` and `qend` coordinates (the 7th and 8th columns respectively). As far as I know, the `bed` format is 0-based, but `blast` probably uses 1-based coordinates. Am I right?

So, with `AWK`, I need to execute the code below:
    cat blast_tabular_output.txt | awk '{print($2"\t"$7-1"\t"$8)}' > blast_output.bed
| Yes, BLAST is 1-based, and your conversion should be right (subtracting 1 from the start and keeping qend as-is matches the 0-based, half-open interval style that BED uses). | biostars | {"uid": 317112, "view_count": 3810, "vote_count": 1}
Hi,
I am using RMarkdown in Rstudio and I want to execute commands from a program using a bash chunk
````{bash}
```
I have a program called samtools on my computer so when I execute it in the chunk, it works.
```{bash}
samtools
```
When I type :
```{bash}
which samtools
```
The output tells me that samtools is located in the /usr/local/bin directory.

However, when I execute a command with vcftools, I get an error because RStudio does not know where the program is:
I have it in another directory on my computer.
How do I get Rstudio or Rmarkdown to execute vcftools from the bash chunk?
Is there a way that I can tell RMarkdown which directory to look in to find the program?
For example (something like) :
```{bash}
$vcftools = /Users/m.o.l.s/Programs_For_Bioinformatics/vcftools
```
or would I have to move all of the programs to usr/bin/local?
Outside of Rstudio, I have made aliases to the programs so they work fine on the terminal.
I made the alias by writing in my .bash_profile
`alias bcftools=/Users/paths/to/where/the/program/is/installed`
whereas for vcftools I added its path directly to the exported PATH in my bash profile.
| In an RMarkdown file in Rstudio the following can be applied:
For jar files, I haven't figured out how to do it with a short command, but as h.mon says I can use the full path to the program and it will work.
for example:
```{bash}
# The beagle program
java -jar /Users/m.o.l.s/Programs_For_Bioinformatics/beagle.19.jar
```
The aliases above work only for the execution of the command in the Terminal , not in the bash chunk. So this is great for working in the Terminal.
For example on Mac inside the Terminal:
open .bash_profile
alias bcftools=/Users/paths/to/where/the/program/is/installed
File > Save > click the red x
hold down command and N together (to open a New terminal)
bcftools
These commands show the program's settings when the RMarkdown file is inside the directory where the programs are installed.

They show the program's commands without using the ./ prefix.
```{bash}
# The BCFtools program
$BCFTOOLS bcftools/bcftools
```
```{bash}
$PSIBLAST =blast_folder/bin/psiblast -h
```
... but they don't do anything after that. I thought the $ held some great significance, but it doesn't seem to.
What does work, however, as i.sudbery mentioned, is putting all of the paths in the .bashrc or .bash_profile.
The first step in doing this is to go to the Terminal:
find the home directory
echo $HOME
The home directory is then the start of where these programs can be found.
open .bashrc
Paste this into the .bashrc (but change the home directory to the actual name of your home directory):
    export PATH=$PATH:/bin:/usr/bin:/usr/local/bin:/usr/sbin:/sbin:/Users/home/Programs_For_Bioinformatics/vcftools/bin:/Users/home/Programs_For_Bioinformatics/bowtie:/Users/home/Programs_For_Bioinformatics/Kalign
The pattern above is to list the path where each program is found on the computer, relative to the home directory, and to separate the paths with a colon.
File > save > click the x
Then it will be possible to use the program in Rmarkdown using a one word command.
```{bash}
source ~/.bash_profile
cufflinks
```
or
```{bash}
source ~/.bashrc
blastp -h
```
in the start of the code chunk.
Of course, commands like this work but as the path gets longer the code gets longer
./angsd_folder/angsd
And it also works if the programs are located in /usr/local/bin.
| biostars | {"uid": 400685, "view_count": 7174, "vote_count": 2} |
Hello,
I wanted to obtain only the "mapped" reads as an output of the STAR.
I forgot to delete "**--outSAMunmapped Within**", so all of my mapped output files also contain all the unmapped reads.

The data are huge and re-mapping properly will take a lot of time...

**How can I fix the BAM files, which were supposed to contain only mapped reads but include unmapped reads as well?**
#!/bin/bash
mkdir /mnt/data/Toxo_scan/GBR_Male/ToxoMap
while read -r line
do
mkdir ToxoMap/$line
echo $line" -> Running STAR - Toxo now"
STAR --runThreadN 12 --alignIntronMax 1 --outSAMunmapped Within --outSAMtype BAM SortedByCoordinate --genomeDir "/mnt/data/Toxo_scan/toxo_genome" --readFilesIn "/mnt/data/Toxo_scan/GBR_Male/sickle/"$line"_1_clean.fastq" "/mnt/data/Toxo_scan/GBR_Male/sickle/"$line"_2_clean.fastq" --outFileNamePrefix "/mnt/data/Toxo_scan/GBR_Male/ToxoMap/"$line"_" --outReadsUnmapped Fastx
done | To get your mapped reads:
samtools view -b -F 4 alignment.bam > mapped.bam
To get your unmapped reads:
samtools view -b -f 4 alignment.bam > unmapped.bam
The `-f` option filters based on bitwise SAM flags. Flag 4 (or `0x0004`) marks unmapped reads; `-F` does the inverse. Afterwards, run `samtools index mapped.bam` if you need an index for the coordinate-sorted output. | biostars | {"uid": 248246, "view_count": 4954, "vote_count": 2}
hi,
I have a correlation matrix of genes, and I want to remove the alphabetic labels from the rows and columns to prepare the file for MATLAB. I did it as below:
> mycounts <- read.table("tmatnorm_rld500.txt", header = T, sep = "\t", row.names=1)
> head(mycounts[,1:3])
AT1G01060 AT1G01170 AT1G01180
AT1G01060 1.0000000 0.3193360 0.6272994
AT1G01170 0.3193360 1.0000000 0.3178880
AT1G01180 0.6272994 0.3178880 1.0000000
AT1G01260 0.2658624 0.3588030 0.2542234
AT1G01380 0.6178751 0.1561297 0.7052692
AT1G01490 0.5990499 0.6129301 0.6424225
> rownames(mycounts) <- NULL
> head(mycounts[,1:3])
AT1G01060 AT1G01170 AT1G01180
[1,] 1.0000000 0.3193360 0.6272994
[2,] 0.3193360 1.0000000 0.3178880
[3,] 0.6272994 0.3178880 1.0000000
[4,] 0.2658624 0.3588030 0.2542234
[5,] 0.6178751 0.1561297 0.7052692
[6,] 0.5990499 0.6129301 0.6424225
> colnames(mycounts) <- NULL
>
> head(mycounts[,1:3])
[,1] [,2] [,3]
[1,] 1.0000000 0.3193360 0.6272994
[2,] 0.3193360 1.0000000 0.3178880
[3,] 0.6272994 0.3178880 1.0000000
[4,] 0.2658624 0.3588030 0.2542234
[5,] 0.6178751 0.1561297 0.7052692
[6,] 0.5990499 0.6129301 0.6424225
> write.table(mycounts, file = "correlation.txt", dec = ".", sep = "\t", quote = FALSE, row.names=F)
> mycounts <- read.table("correlation.txt", header = T, sep = "\t")
> head(mycounts[,1:3])
V1 V2 V3 <---this line
1 1.0000000 0.3193360 0.6272994
2 0.3193360 1.0000000 0.3178880
3 0.6272994 0.3178880 1.0000000
4 0.2658624 0.3588030 0.2542234
5 0.6178751 0.1561297 0.7052692
6 0.5990499 0.6129301 0.6424225
But as you can see, I still have something in the colnames (V1, V2, V3). How can I get rid of the labels and keep only the similarity values, please?
thank you | Add `col.names=FALSE` to the `write.table()` command and you should have a matrix with no row or column names.
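For example, a sketch of your last two steps:

    write.table(mycounts, file = "correlation.txt", sep = "\t", quote = FALSE,
                row.names = FALSE, col.names = FALSE)
    # when reading the file back, don't treat the first row as a header:
    mycounts <- read.table("correlation.txt", header = FALSE, sep = "\t")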
| biostars | {"uid": 172716, "view_count": 14640, "vote_count": 1} |
Hi guys,
I have a multi-fasta like this
>Citrobacter_freundii_D8_6645..17576
gtgatcgtcaagaaggttaagaacccgcagaaggcagca
>Enterobacter_hormaechei_35012_3830..23574
atggacgatagagaaagaggcttagcatttttatttgcaatt
And I would like to eliminate the numbers flanking .., to have an output like this
>Citrobacter_freundii_D8
gtgatcgtcaagaaggttaagaacccgcagaaggcagca
>Enterobacter_hormaechei_35012
atggacgatagagaaagaggcttagcatttttatttgcaatt
Since the numbers are variable, I guess just creating a command that removes x characters from the end of the fasta header won't be enough.
Thanks! | If the example is representative, then you basically intend to keep the first three elements that are separated by `_`. If so, do:
awk ' $1 ~ /^>/ { split($0,a,"_"); print a[1]"_"a[2]"_"a[3];next} {print}'
Command splits every line that starts with `>` at the `_` and then simply prints the first three fields separated by `_` again. Obviously that only works if all fasta headers look like the ones you showed. Since what you really want is to drop the trailing `_start..end` coordinates, a sed substitution restricted to header lines, e.g. `sed '/^>/s/_[0-9]*\.\.[0-9]*$//' in.fa`, would work no matter how many underscores the organism name contains. | biostars | {"uid": 372020, "view_count": 1059, "vote_count": 1}
Using R I want to generate a list of genes only (without the accompanying text) from a pathway.
For example, if the input pathway is [KEGG prostate cancer](https://www.genome.jp/dbget-bin/www_bget?hsa05215),
I want my output to be a .csv list of the genes in that pathway. I tried:
library("KEGGREST")
keggGet("hsa05215")[[1]]$GENE
but that gives me a list of the number and gene description along with the gene symbol and I want a list consisting of the gene symbol alone.
How do I get this?
Thank you.
| I ended up solving it myself. Hopefully this helps anyone who has a similar need.
library("KEGGREST")
#Get the list of numbers, gene symbols and gene description
names <- keggGet("hsa05215")[[1]]$GENE
    #Keep every other element: the even positions hold the gene symbols
    namesodd <- names[seq(2,length(names),2)]
#Create a substring deleting everything after the ; on each line (this deletes the gene description).
namestrue <- gsub("\\;.*","",namesodd)
#export the vector as a csv
    write.csv(namestrue, file = "hsa05215.csv", quote = F, row.names = F)
| biostars | {"uid": 366067, "view_count": 4968, "vote_count": 2} |
Hello all

I want to remove some SNPs from the map file and also their genotypes from the ped file. I used this command in plink:

    plink --file FILE --exclude SNP-LIST-FILE

I expected some SNPs to be deleted from the map file, along with their genotypes in the ped file, but when I check, the number of rows in the map file and the number of columns in the ped file are the same as before the exclude.

Does anyone know whether the SNPs in my snplist were excluded or not?

thanks
| Your command is correct, but you should add two options: `--out` and `--recode`. If you don't specify these, plink executes the commands and prints a summary to the screen (and to the .log file), but does not actually write the results to a ped file, nor does it overwrite your original data.
The `--recode` option specifies that you want to write a .ped file as output. The `--out` option specifies the suffix of the output file:
plink --file mydata --exclude snp_list --recode --out filter_data
Also, check that the SNP-LIST-FILE is formatted correctly. It must contain the IDs of the SNPs that you want to remove. | biostars | {"uid": 56848, "view_count": 16732, "vote_count": 1}
Do O, P, and Q stand for different things?

I have looked at this webpage, but it didn't answer the question:
https://www.uniprot.org/help/accession_numbers
| Confirmation from UniProt: Indeed accession numbers have no meaning and just serve as unique identifiers.
This is the short answer, and below is some background:
Originally, accession numbers started with P, later with O, P and Q,
but due to the large increase in the number of protein sequences in
UniProtKB, we had to extend the existing accession number format by
allowing the first character to be any of the 26 letters (instead of
only O, P and Q). To avoid assigning accession numbers identical to
those which have been used by the International Nucleotide Sequence
Database, the extension in the first position went along with a
restriction in the third position which can only be a letter. See
https://www.uniprot.org/news/2006/11/14/release
The most recent extension of the UniProtKB accession number format
took place in 2014 to
allow for 10-character accession numbers:
http://www.uniprot.org/help/2014/06/11/release
The "new-style" accession numbers do not have any different meaning at
all, except that they implicitly convey the information that the entry
they identify has been created since June 2014 - however the reverse
is not necessarily true, i.e. not all new entries have 10-character
accessions. In any case, it is best not to interpret ACs in any way,
but just consider them as unique identifiers for a UniProtKB record.
| biostars | {"uid": 464169, "view_count": 1233, "vote_count": 1} |
Hi everyone, I am doing some plasmid genome assembly with SPAdes. After assembly I used SSPACE for scaffolding, but there are some gaps in each of the draft genomes. I can fill the gaps by PCR, but I want to reduce the number of gaps in silico. Can anyone suggest how to reduce the gaps? If I map the reads back to the contigs, will it give any promising result?
 | There are a few in silico tools that can fill in the gaps within scaffolds:
GapFiller (developed by myself, I'm also the developer of SSPACE):
http://www.baseclear.com/landingpages/basetools-a-wide-range-of-bioinformatics-solutions/gapfiller/
http://genomebiology.com/2012/13/6/R56
IMAGE2:
http://sourceforge.net/apps/mediawiki/image2/index.php?title=Main_Page
GapCloser (part of SOAP):
http://sourceforge.net/projects/soapdenovo2/files/GapCloser/
Regards,
Boetsie
| biostars | {"uid": 85431, "view_count": 12133, "vote_count": 2} |
I tested two different options while running HTSeq-count, `-s no` and `-s reverse`. These are the results:
For `-s no`:
> __no_feature 435592
> __ambiguous 953159
> __too_low_aQual 0
> __not_aligned 0
> __alignment_not_unique 8164048
For `-s reverse`:
> __no_feature 573728
> __ambiguous 410510
> __too_low_aQual 0
> __not_aligned 0
> __alignment_not_unique 8164048
For the option `-s reverse` there are lower ambiguous values but higher no_feature than for `-s no`.
As far as I know this option depends on the construction of the library, but when they gave me these sequences they didn't mention it. All I know is that it was constructed under an Illumina protocol and that it was a paired-end RNA-seq experiment from peach (Prunus persica).
I'm inclined to think that fewer ambiguous reads are just better, even at the cost of more no_feature reads.
So, which one is right?
===============
Edit:
These are the results from `-s yes`:
> __no_feature 41467373
> __ambiguous 506
> __too_low_aQual 0
> __not_aligned 0
> __alignment_not_unique 8164048 | This is a pretty common method to determine the strandedness of a library and is essentially what RSeQC is doing. In this case, `-s reverse` is the correct setting (it's also the majority of what's produced these days). The general reasoning is:
1. If it's an unstranded library, then each of the stranded methods will have ~2x more `_no_feature` counts than the `-s no` setting.
2. If it's a stranded library, one of the stranded settings will have "slightly" higher `_no_feature` counts (because reality is annoying like that) and the other will have vastly higher `_no_feature` counts. A quick check with RSeQC is sketched below.
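For an independent check, RSeQC's `infer_experiment.py` samples reads and reports the fraction supporting each strand configuration (file names here are placeholders):
    infer_experiment.py -r genes.bed -i sample.bam
For paired-end data, a dominant "1+-,1-+,2++,2--" fraction should correspond to a reverse-stranded (`-s reverse`) library.
 | biostars | {"uid": 192337, "view_count": 2849, "vote_count": 1}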
I am confused about an approach to evaluate enrichment and was wondering if you could help me understand if what I am doing makes sense.
I have counts from regions that overlap histone markers in a dataset and I would like to know if histone markers are enriched in my dataset compared to a random set using a randomization approach.
I have created 1000 similar datasets and created counts for number of regions overlapping histones in these null datasets. In some cases, this distribution is normal, but in others it is not.
In cases where the distribution of the proportions from the null datasets is **normal**:
I can use these data to find the mean, standard deviation, and degrees of freedom and compare this distribution (mean, sd and df) to my observed count. Is this correct?
```r
# "sim.null" is a normal distribution of counts of overlaps that I get from 1000 simulations (this is matched on my original dataset for some features)
sim.null= rnorm(sd=0.001, mean=0.01, n=1000)
# I would like to compare it with the counts I get from my dataset
observed = 0.0125
t = mean(sim.null)-observed / (sd(sim.null)/sqrt(1000))
2*pt(-abs(t),df=999)
# Is this the same as doing this?
t.test(sim.null, mu=observed, alternative="two.sided")$p.value
```
In cases where the null dataset is **not normal**: would maybe a Fisher exact test be appropriate?
Thank you very much, any suggestions are very appreciated! | What you meant to write was:
Tstat = (mean(sim.null)-0.0125)/(sd(sim.null)/sqrt(1000))
The p-value will then essentially match (the t.test will tell you <2e-16, which is the smallest value it'll output).
For non-normal distributions, either use the appropriate distribution, or just use an empirical distribution (the latter is required for messy distributions):
sum(sim.null>=0.0125)/1000
Using theoretical distributions (e.g., the normal distribution) will give much nicer p-values if they're appropriate for your dataset.
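As a side note (my addition, not part of the original answer), a two-sided empirical p-value with the commonly recommended +1 correction could look like:
    obs <- 0.0125
    p.two.sided <- 2 * min((sum(sim.null >= obs) + 1) / (length(sim.null) + 1),
                           (sum(sim.null <= obs) + 1) / (length(sim.null) + 1))
 | biostars | {"uid": 163340, "view_count": 1802, "vote_count": 2}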
I am looking at the ENCODE ATAC-seq pipeline: https://www.encodeproject.org/pipelines/ENCPL035XIO/
They have two different steps:
- "call nuclease accessible regions using FSeq" (in PDF) or "open chromatin region identification" (on diagram)
- "call nuclease accessible peaks using Homer" (in PDF) or "peak calling" (on diagram)
Regardless of the tool used, what is the difference between "regions" and "peaks"? I would think those are the same thing (in this context, a set of loci where the reads accumulate).
| I received a very helpful clarification after emailing ENCODE directly:
> Nuclease accessible regions tend to be long, e.g. 10 kb or longer.
> This was clear even in the early papers on DNase sensitivity
> (mid-to-late 1970's; Groudine and Weintraub). These accessible regions
> can contain entire genes or even clusters of genes. Within the
> nuclease accessible regions, some localized DNA segments are so
> readily cleaved that double-strand breaks are generated at that
> position in a substantial fraction of the cells in the population.
> These are the DNase-hypersensitive sites (DHSs) first mapped by Carl
> Wu (late 1970's). I see the Fseq "regions" as the equivalent of
> nuclease accessible regions, and the Homer "peaks" as the equivalent
> of DHSs.
>
> If you look at the signal track for DNase-seq or ATAC-seq, you see
> broad regions of signal that are significantly above the background.
> Within those regions, you see localized peaks, often many peaks per
> region. Fseq calls the broad regions, and we use Homer to call the
> localized peaks. MACs can be used for peak calling as well, Anshul
> Kundaje is doing that. You can see similar analyses in the work from
> John Stamatoyannopoulos for DNase-seq. I think Hotspots are like
> regions, and DHSs are peaks confined to a defined length.
| biostars | {"uid": 210068, "view_count": 7672, "vote_count": 1} |
Hi guys, I am trying to learn to use R for RNA-seq analysis by following some functions and guides from Bioconductor; however, I am getting some errors.
http://www.sthda.com/english/wiki/rna-seq-differential-expression-work-flow-using-deseq2#introduction
For one of them I quickly found an answer, because they had changed the name of the function 8(.
But for this error I still haven't found an answer:
txdb <- makeTranscriptDbFromBiomart( biomart="ENSEMBL_MART_ENSEMBL",
dataset="hsapiens_gene_ensembl" )
Error in makeTranscriptDbFromBiomart(biomart = "ENSEMBL_MART_ENSEMBL", :
could not find function "makeTranscriptDbFromBiomart"
I imported the library as required:
library( "GenomicFeatures" )
But got that error.
Any tip?
thanks
| I got it done with makeTxDbFromBiomart...
Well, I guess they change function names a lot in R packages.
Thanks
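For reference, the renamed call (same arguments as the original, just the new function name from GenomicFeatures):
    library(GenomicFeatures)
    txdb <- makeTxDbFromBiomart(biomart = "ENSEMBL_MART_ENSEMBL",
                                dataset = "hsapiens_gene_ensembl")
 | biostars | {"uid": 411417, "view_count": 779, "vote_count": 1}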
I want to query the molecular weight of multiple proteins from R given uniprot ids. I thought of using biomaRt or UniProt.ws, but didn't see which field I should query.
There are online tools performing the calculation given a uniprot ID (PIR, ExPASy), but you have to manually enter the input and save the output.
I want the MW that appears in UniProt when looking in "Sequences" under Mass (for example for the entry http://www.uniprot.org/uniprot/Q9Z2Y8) | On the UniProt site you can ask for the predicted mass column.
Click on the columns button, then click on the check-box labeled mass under the sequence category.
Save, search for your accession using the syntax below and get tab-delimited output back:
http://www.uniprot.org/uniprot/?format=tab&query=accession:Q6GZX4&columns=id,entry+name,mass
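So, for the accession in the question, the request could be scripted like this (this reuses the legacy URL scheme shown above; the current REST service lives at rest.uniprot.org):
    curl "http://www.uniprot.org/uniprot/?format=tab&query=accession:Q9Z2Y8&columns=id,entry+name,mass"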
| biostars | {"uid": 109706, "view_count": 5368, "vote_count": 2} |
Q: What should I put in the read groups in my BAM files?
Yes, I've read about the "read groups" thing on biostars (e.g. Picard provides a very useful tool to add/replace read groups). But I think I've missed something very fundamental so that I still couldn't understand what they are exactly, where I can find them, and if I cannot find them, what I should do so that downstream analyses could proceed (yes, I guess the answer is to add "some" read groups, but what exactly I should add ?)
From what I've found, ID, SM, PL, LB seem important read groups (for GATK at least). But if I am to add these read groups to my BAM files, assuming the files don't have them, can I just assign some dummy names to each of them? Okay, PL probably needs to be specific, like either illumina, solid, or others, but does it matter if I assign them all lowercase or should they be all CAP ?? What about the other RGs ?
For example, if I have only one BAM file to add/replace the read groups, could I simply assign "A", "B", "illumina" and "D" for ID, SM, PL, LB respectively.
And if I have two BAM files, could I simply assign "A1, B1, illumina, D1" for file 1 and "A2, B2, illumina, D2" for file 2?
I found that GATK forum mentioned that dummy info is OKAY, so would A,B,C,D like the examples above be fine ? And what exactly are the purposes for these read groups? If they are so essential, why couldn't they be incorporated by default when running early steps (or even 1st step, e.g. from fastq) of NGS data processing ?
Any input on any of the issues in this question will be greatly appreciated. Thank you. | Yes, you can assign dummy names for any and all of these. The read group tags are meant to enable grouping of alignments to account for biases due to things like the library preparation, the machine things were sequenced on, etc.
This is mostly useful where you have samples that were each sequenced multiple times, but from different libraries. So then you'd have alignments with the same SM but a different LB. In cases where you just have a single run of each sample, with all samples done in a single batch, then read groups aren't particularly useful.
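With Picard, the dummy values from the question could be set like this (RGPU is also required by the tool; any placeholder works):
    java -jar picard.jar AddOrReplaceReadGroups I=input.bam O=output.bam \
        RGID=A RGSM=B RGPL=illumina RGLB=D RGPU=unit1
 | biostars | {"uid": 129860, "view_count": 3666, "vote_count": 1}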
Hi All,
I was wondering how important it is to get rid of exact duplicate Illumina reads before:
1. Before using it for correcting PacBio reads (planning to use ProovRead)
2. Before using it to polish a Pac-Bio only assembly using Pilon (Assembly was done using uncorrected PacBio reads - miniasm)
3. Before using the reads to do a hybrid de-novo-assembly using PBcR
Some of my Illumina libraries have significant amounts of reads duplicated >10 times. What are your recommendations to handle these duplicate reads considering the scenarios mentioned above?
Many thanks in advance! | It's not a good idea to remove duplicate reads unless your libraries are amplified. If they *are* amplified, and you have reads appearing 10+ times, I highly recommend you change to an unamplified protocol, because you are wasting sequence. And by duplicates, I mean that both read 1 and read 2 of pairs are duplicates... otherwise the pairs are not, in fact, duplicates.
But - if you have a situation in which you are using an amplified library, and duplicate pairs occur, I recommend eliminating all duplicates and replacing them with a single copy of their consensus, in any situation other than quantification (e.g. RNA-seq).
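If you do need to drop exact duplicate pairs, one option (my suggestion; note this keeps a single representative copy rather than building a consensus) is Clumpify from the same BBMap package:
    clumpify.sh in=reads_R1.fastq in2=reads_R2.fastq \
        out=dedup_R1.fastq out2=dedup_R2.fastq dedupe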
| biostars | {"uid": 203428, "view_count": 3002, "vote_count": 1} |
I am new to R but I would like to explore this package. I read a similar posted question about installing this package using a Linux machine, but I was not able to successfully use the information posted there.
The paper describing this tool links to https://github.com/csuzfq/mapsnp_pkg/tree/master/mapsnp, but I am not sure what to do with files from github.
Thank you for any help. | Next time please post the error when you ask for help and also any material you used.
There are two errors you're getting, and I can replicate them on my system (macOS). So the good (or bad) news is that it's not just you. First, the two packages skipped are on Bioconductor; if you google you'll find out how to install them.
Second, I think this is an issue with how devtools is trying to install. Something has probably gone wrong with how the authors have set up the package on GitHub (badly). I suggest you download the source files and install manually like this:
install.packages("~/Downloads/mapsnp_0.1.tar.gz", repos = NULL, type = "source")
This is how I installed the package.
Source files from here: https://github.com/csuzfq/mapsnp_pkg/blob/master/mapsnp_0.1.tar.gz
P.S. Make sure you install the two Bioconductor packages before installing mapsnp. Then load the two Bioconductor packages first, then load mapsnp, just to be sure.
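Installing Bioconductor dependencies generally looks like this (the package names below are placeholders, since the skipped packages aren't named above; substitute the ones from your error message):
    install.packages("BiocManager")
    BiocManager::install(c("somePackage1", "somePackage2"))
 | biostars | {"uid": 409449, "view_count": 1431, "vote_count": 1}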
Dear Biostars,
I want to prepare the rank file for GSEA analysis based on RNA-seq results that were generated by DESeq2. I have found different recommendations as how to create the pre-ranked gene list. The GSEA site mentions the gene list can be sorted by any value, however, other people have pointed out in this blog that the direction of fold change is important for GSEA analysis. Now, if the genes are sorted only based on their log fold change, a gene with a large fold change but a poor p-value will be ranked higher than a gene with a statistically significant fold change that is smaller in magnitude.
I've also read Mark Ziemann's post about his approach to this issue, where he generates a new scoring metric by multiplying the sign of fold change by its inverse p-value: http://genomespot.blogspot.com.au/2014/09/data-analysis-step-8-pathway-analysis.html
He also adds that: "at the top of the list are the genes with "strongest" up-regulation and the bottom of the list are genes with "strongest" down-regulation and genes not changing are in the middle". I am not sure if this is the right assumption for GSEA input file?
I would greatly appreciate it if you could help me understand this, and explain your preferred method for creating a GSEA rank file for RNA-seq expression results.
Many Thanks,
Noushin | Hi Noushin, As you note there is no set ranking method for GSEA. There are many alternatives. Some people use fold change. In our group we use signed p-value. It is not a new method, nor did I invent or "generate" it. We have published this extensively and no reviewer has ever commented on the validity. I've included a list of papers in the last 2 years using this method. Now what was your specific question? Cheers, Mark
http://www.ncbi.nlm.nih.gov/pubmed/27157830
http://www.ncbi.nlm.nih.gov/pubmed/27108813
http://www.ncbi.nlm.nih.gov/pubmed/25941940
http://www.ncbi.nlm.nih.gov/pubmed/25491312
http://www.ncbi.nlm.nih.gov/pubmed/24875254
http://www.ncbi.nlm.nih.gov/pubmed/24850797
http://www.ncbi.nlm.nih.gov/pubmed/24732587
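For what it's worth, the signed p-value metric described above can be computed from a DESeq2 results table along these lines (a sketch; column names follow DESeq2's standard output):
    rnk <- sign(res$log2FoldChange) * -log10(res$pvalue)
    names(rnk) <- rownames(res)
 | biostars | {"uid": 159029, "view_count": 8782, "vote_count": 3}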
This may not be technically related to bioinformatics.
Using bioinformatics methods, lots of biomarker-based classifiers (methylation, microRNA, mRNA, etc.) for benign/malignant tumor classification, early tumor detection, tumor subtyping, and so on have been published and claim to have good performance. However, it seems that few of them have been used in a clinical setting. I can only guess that all these classifiers have some kind of shared issue that prevents them from being used clinically. Can anyone please share some opinions on this matter? |
Hey, thanks for posting. Other opinions than mine are very welcome.
Some issues that I have identified over the years:
1, lack of reproducibility of results
-------------
This is compounded by poor study design, which itself relates to any number of things:
- no statistical power
- imbalanced sample groups
- failure to control for sources of bias, including age, gender, sampling time, ethnicity, *et cetera*
- untested software that contains bugs
I could add here the fact that every instrument that we use has windows of specification in which the instrument is meant to be operated, and they also have error rates. No NGS instrument, for example, can faithfully sequence any sample of DNA - error will always exist.
2, lack of appreciation of biological variability and how to best capture this
------------
This is mainly for expression, ChIP, metabolomic, and proteomic studies
3, slow and costly clinical trials
-----------------
Clinical trials are very costly and take many years to conduct. Most go nowhere.
4, No 'translational' mechanisms in place
----------------------
In most cases, there are no official mechanisms / systems in place such that data from a research setting can be readily used in a clinical environment, or, if one exists, validation obviously has to take place, and this may be governed by national and / or international law. If something new is introduced, I notice that it is usually a 'local' change to the health system, i.e., in a hospital unit / department, and not something that is global.
------------------------------
--------------
On a side note, I have noticed research change even in the time during which I have been in it (or maybe it is that I have changed). I notice researchers exhibiting greater signs of stress and with less focus on the end goal of the very research that they are conducting. Most do not ponder on what their results could do in terms of improving a health service. Many are also focused intensely on publications and winning the next grant to simply stay in the job. This is a vicious cycle that is ultimately eroding quality research.
I should finally add that I believe more tests will be produced in the next few years. In fact, in certain countries (e.g. Brazil) where there is less regulation, many tests are enrolled straight into practice from research settings. This may prove dangerous, though.
Kevin
| biostars | {"uid": 384866, "view_count": 1143, "vote_count": 5} |
Given a missense variant ID, is there a way (via a database or even flat file) to request its estimate frequency or some other importance metric?
By importance metric I mean: how common is this mutation in the average global population or how many times it happens for each 1000 individuals etc.
Thank you in advance! | Of course, check the FAQs from the 1000 Genomes paper: http://www.1000genomes.org/faq/how-can-i-get-allele-frequency-my-variant
You can also use the **--indv** option in vcftools to select only the individuals belonging to a single population, to obtain the MAF only in that population.
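A concrete invocation might look like this (a sketch; `pop_samples.txt` would be a hypothetical file listing one sample ID per line, passed via `--keep`):
    vcftools --gzvcf variants.vcf.gz --keep pop_samples.txt --freq --out pop_freq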
| biostars | {"uid": 101112, "view_count": 1408, "vote_count": 1} |
I'm trying to extract a specific region of a bam-file into a fasta-file (ultimately). All of the methods I've tried so far give me all reads that OVERLAP the desired region; I'm trying to find a way to trim those to only the desired region.
I've tried:
samtools view
    samtools view compiled.sorted.bam ConB:2185-2195
intersectBed
    intersectBed -b test.bed -abam compiled.sorted.bam -ubam > out.bam
but these will give the entire read that overlaps my desired region. I'm trying to get a sam/bam file where the 'reads' are 10 nucleotides long. Am I just missing a flag somewhere to limit the returned region?
| I wrote [SAM4WebLogo](https://github.com/lindenb/jvarkit/wiki/SAM4WebLogo) for https://www.biostars.org/p/73021/ and I think it could do what you need
$ java -jar dist/sam4weblogo.jar -r seq1:80-110 sorted.bam 2> /dev/null | head -n 50
>B7_593:4:106:316:452/1
TGTTG--------------------------
>B7_593:4:106:316:452a/1
TGTTG--------------------------
>B7_593:4:106:316:452b/1
TGTTG--------------------------
>B7_589:8:113:968:19/2
TGGGG--------------------------
>B7_589:8:113:968:19a/2
TGGGG--------------------------
>B7_589:8:113:968:19b/2
TGGGG--------------------------
>EAS54_65:3:321:311:983/1
TGTGGG-------------------------
>EAS54_65:3:321:311:983a/1
TGTGGG-------------------------
>EAS54_65:3:321:311:983b/1
TGTGGG-------------------------
>B7_591:6:155:12:674/2
TGTGGGGG-----------------------
>B7_591:6:155:12:674a/2
TGTGGGGG-----------------------
>B7_591:6:155:12:674b/2
TGTGGGGG-----------------------
>EAS219_FC30151:7:51:1429:1043/2
TGTGGGGGGCGCCG-----------------
>EAS219_FC30151:7:51:1429:1043a/2
TGTGGGGGGCGCCG-----------------
>EAS219_FC30151:7:51:1429:1043b/2
TGTGGGGGGCGCCG-----------------
>B7_591:5:42:540:501/1
TGTGGGGGCCGCAGTG---------------
>EAS192_3:5:223:142:410/1
TGGGGGGGGCGCAGT----------------
>B7_591:5:42:540:501a/1
TGTGGGGGCCGCAGTG---------------
>EAS192_3:5:223:142:410a/1
TGGGGGGGGCGCAGT----------------
>B7_591:5:42:540:501b/1
TGTGGGGGCCGCAGTG---------------
>EAS192_3:5:223:142:410b/1
TGGGGGGGGCGCAGT---------------- | biostars | {"uid": 103052, "view_count": 5547, "vote_count": 3} |
Hi,
I want to perform Reactome analysis with the `reactomePA` package, but I have a chick dataset and the *Gallus gallus* organism is not supported by the package. I want to compare the results obtained with those previously found in my human dataset, so I was wondering:
1. Is it reasonable to convert the chick Entrez IDs into human Entrez IDs and then perform the analysis? I have noticed that the Reactome website converts non-human identifiers into human ones before running the analysis.
2. If so, which package can I use to convert chick Entrez IDs into human Entrez IDs?
Some example IDs:
A tibble: 16 x 3
SYMBOL ENSEMBL.x ENTREZID.x
<chr> <chr> <chr>
5_8S_rRNA NA NA
5S_rRNA NA NA
7SK NA NA
A0FK60 NA NA
A0MPA7 NA NA
A1CF ENSGALG00000003765 423680
A2LD1 ENSGALG00000022758 418773
A2M NA 418251
A2ML1 NA 418254
A4GALT ENSGALG00000014128 418223
A4GNT ENSGALG00000006600 429136
AAAS ENSGALG00000032843 100859661
AACS ENSGALG00000002899 416811
Thank you
Camilla
| Reactome seems to support *Gallus gallus* already? - https://reactome.org/PathwayBrowser/#/SPECIES=49591
I am not sure of the validity of converting the genes to human and then running this on human-curated pathways.
For the conversion, you will likely find a programmatic way via Ensembl REST: https://rest.ensembl.org/
Otherwise, from a previous answer that I gave on Bioc ( https://support.bioconductor.org/p/132551/#132568 ), this is achievable; however, my experience tells me that overlap between *Gallus gallus* and *Homo sapiens* is not extensive.
1, load packages
----------------
require(biomaRt)
2, search for the chicken mart
---------
datasets <- listDatasets(useMart('ensembl'))
datasets[grep('Chicken', datasets[,2]),]
dataset description version
71 ggallus_gene_ensembl Chicken genes (GRCg6a) GRCg6a
3, 'connect to' the chicken and human marts
----------------
chicken <- useMart('ensembl', dataset = 'ggallus_gene_ensembl')
human <- useMart('ensembl', dataset = 'hsapiens_gene_ensembl')
4, create a lookup table for all chicken genes
-----------------
table <- getBM(
attributes = c('ensembl_gene_id','external_gene_name','entrezgene_id'),
mart = chicken)
head(table[table$external_gene_name != '',], 10)
ensembl_gene_id external_gene_name entrezgene_id
6 ENSGALG00000042750 ND1 39116926
10 ENSGALG00000043768 MT-ND2 NA
16 ENSGALG00000032142 MT-CO1 NA
19 ENSGALG00000032456 COII NA
21 ENSGALG00000032465 ATP8 39116937
22 ENSGALG00000041091 ATP6 39116912
23 ENSGALG00000035334 COX3 39116913
25 ENSGALG00000030436 ND3 39116938
27 ENSGALG00000042478 ND4L 39116939
28 ENSGALG00000036229 ND4 39116916
5, map some chicken Entrez gene IDs to human
----------------
chicken_ids <- table$entrezgene_id[1:20]
getLDS(
filters = 'entrezgene_id',
values = chicken_ids,
mart = chicken,
attributes = c('ensembl_gene_id','external_gene_name','entrezgene_id'),
martL = human,
attributesL = c('hgnc_symbol','ensembl_gene_id','entrezgene_id','gene_biotype'))
Gene.stable.ID Gene.name NCBI.gene..formerly.Entrezgene..ID HGNC.symbol
1 ENSGALG00000042750 ND1 39116926 MT-ND1
Gene.stable.ID.1 NCBI.gene..formerly.Entrezgene..ID.1 Gene.type
1 ENSG00000198888 4535 protein_coding
---------------
--------
Edit: One can even create a 'master' lookup of all Chicken genes to human:
master <- getLDS(
filters = 'entrezgene_id',
values = table$entrezgene_id,
mart = chicken,
attributes = c('ensembl_gene_id','external_gene_name','entrezgene_id'),
martL = human,
attributesL = c('hgnc_symbol','ensembl_gene_id','entrezgene_id','gene_biotype'))
head(master)
Gene.stable.ID Gene.name NCBI.gene..formerly.Entrezgene..ID HGNC.symbol
1 ENSGALG00000001714 PUSL1 419419 PUSL1
2 ENSGALG00000032413 NADK 419403 NADK
3 ENSGALG00000001835 SCNN1D 428184 SCNN1D
4 ENSGALG00000037713 FNDC10 107054923 FNDC10
5 ENSGALG00000001551 CCNL2 395124 CCNL2
6 ENSGALG00000039193 AGRN 396538 AGRN
Gene.stable.ID.1 NCBI.gene..formerly.Entrezgene..ID.1 Gene.type
1 ENSG00000169972 126789 protein_coding
2 ENSG00000008130 65220 protein_coding
3 ENSG00000162572 6339 protein_coding
4 ENSG00000228594 643988 protein_coding
5 ENSG00000221978 81669 protein_coding
6 ENSG00000188157 375790 protein_coding
dim(master)
[1] 14591 7
Kevin
| biostars | {"uid": 462130, "view_count": 1006, "vote_count": 1} |
Hi pretty basic question, but I have a data.frame that looks as follows:
chrom start stop frag dev span
1 chr1 3466334 3466335 60 0.09543313 0.94
2 chr1 4970790 4970791 51 0.08546289 1.00
3 chr1 5022767 5022768 27 0.45680640 0.20
4 chr1 5022807 5022808 34 0.35155312 0.20
5 chr1 5022904 5022905 45 0.12787640 0.20
6 chr1 5023063 5023064 29 0.26792518 0.20
I am trying to do a scatter plot at specific genomic locations; I'm just not sure how to limit it to the right chromosome. Here is my ggplot2 command:
my_data %>% ggplot() + geom_point(aes(x = start, y = frag)) + scale_y_continuous(limits=c(0,80)) + xlim(109005904, 109017426) + xlab("Genomic Position") + geom_smooth(aes(x = start, y = frag), method="loess", span=.66, color = "red", fill = "black") + theme_bw()
Thanks!
| Maybe:
    my_data %>% filter(chrom == "chr1") %>% ggplot() + ...
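Spelled out with the rest of the command from the question (`filter()` comes from dplyr):
    library(dplyr)
    library(ggplot2)
    my_data %>%
      filter(chrom == "chr1") %>%
      ggplot() +
      geom_point(aes(x = start, y = frag)) +
      scale_y_continuous(limits = c(0, 80)) +
      xlim(109005904, 109017426) +
      xlab("Genomic Position") +
      geom_smooth(aes(x = start, y = frag), method = "loess", span = 0.66,
                  color = "red", fill = "black") +
      theme_bw()
 | biostars | {"uid": 409028, "view_count": 2053, "vote_count": 2}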
I am attempting to use R to plot copy-number along chromosomes via read depth. I have already calculated the normalized read counts per 75kb bins, and the relative integer copy number, so plotting the data in R is where I am getting hung up.
My data (named "75kb_run") looks like this:
Chrom Strt End Sample1 Sample2 Sample3 Sample4 Sample5 Sample6
chr1 1 75000 2.133 1.979 2.005 2.154 2.076 2.112
chr1 75001 150000 1.989 2.075 2.089 2.052 2.019 1.965
chr1 150001 225000 2.234 1.936 2.181 2.108 2.242 2.158
chr1 225001 300000 2.453 1.651 2.235 1.932 2.472 2.524
chr1 300001 375000 2.19 2.001 2.106 2.132 1.98 2.174
chr2 1 75000 1.941 2.243 1.906 2.012 2.154 1.969
chr2 75001 150000 1.899 2.316 1.959 2.053 1.995 1.887
chr2 150001 225000 1.92 2.104 1.942 2.035 2.191 1.719
chr2 225001 300000 2.25 1.921 1.99 2.213 2.237 1.665
chr2 300001 375000 1.631 2.595 1.816 1.904 1.75 2.131
chr2 375001 450000 2.068 2.372 2.134 1.959 1.899 1.684
chr2 450001 525000 1.933 2.291 2.026 2.001 1.966 1.822
chr3 1 75000 2.222 1.225 1.753 0.657 2.844 2.719
chr3 75001 150000 2.403 1.44 2.123 1.514 2.574 2.63
chr3 150001 225000 2.244 1.62 2.401 2.025 2.095 2.324
chr3 225001 300000 2.042 1.261 2.009 2.045 2.161 1.901
chr3 300001 375000 2.049 1.132 2.016 2.125 2.291 2.065
chr3 375001 450000 2.184 1.66 2.013 2.404 1.895 2.695
chr3 450001 525000 2.742 0.955 1.481 2.296 2.342 2.003
Using R, I can make individual scatter plots for each chromosome for each Sample.
chr2=`75kb_run`[grep("^chr2$", `75kb_run`$Chrom),]
Sample1_chr2=ggplot(`chr2`, aes(x=Strt, y=`chr2`$Sample1))
Sample1_chr2+
geom_point() + scale_y_continuous(limits=c(0,4))
However, I would like to plot all of the chromosomes per sample at once, looking something like [Figure 2 in this paper][1].
I think I don't quite understand how to arrange the Chrom column as a factor. Any help would be appreciated.
Bonus points if anyone could provide an example script that loops through all of the Samples given a list ( sampleList=c("Sample1"," Sample2"," Sample3") ).
Thank you,
Mike
[1]: http://www.plantcell.org/content/27/9/2370.short | You can use [karyoploteR](http://bioconductor.org/packages/karyoploteR/) for that. It's a new R/Bioconductor package to plot data on the genome with lots of room for configuration.
With karyoploteR you start with an empty plot and keep adding new data iteratively. In your case, we'll use a for loop to add one sample after the other. We'll use `r0` and `r1` to specify the vertical position of the data for each sample (similar to defining where each sample "track" starts and ends).
Your sample data is quite short, so the image is not very enticing, but you'll get the idea
library(karyoploteR)
read.depth <- read.table("./sample_data.txt", sep="\t", header=TRUE, stringsAsFactors=FALSE)
max.cnv <- 3
nsamples <- length(read.depth)-3
bins <- toGRanges(read.depth[,c(1,2,3)])
sample.height <- 1/nsamples
kp <- plotKaryotype(genome="hg19", chromosomes=c("chr1", "chr2", "chr3"))
for(i in seq_len(nsamples)) {
sample.name <- names(read.depth)[i+3]
r0 <- (i-1)*sample.height
r1 <- (i)*sample.height
kpPoints(kp, data=bins, y=read.depth[,i+3], r0=r0, r1=r1, ymin=0, ymax=max.cnv)
}
![enter image description here][1]
We can improve the plot quite a lot: add labels to identify each sample, add axis and guide lines... and use a different genome layout, with all chromosomes in a single line with `plot.type=4`.
I'll use simulated data to show how it would look with complete data.
library(karyoploteR)
#Simulate the data
nsamples <- 6
genome <- filterChromosomes(getGenome("hg19"))
chromosome.lengths <- setNames(end(genome), seqlevels(genome))
sim.data <- toDataframe(tileGenome(chromosome.lengths, tilewidth = 75000, cut.last.tile.in.chrom = TRUE))
for(i in seq_len(nsamples)) {
#add simulated baseline 2n data
cnv.val <- 2+rnorm(nrow(sim.data), sd=0.1)
#add some CNVs
gains <- createRandomRegions(nregions=3, length.mean = 10e6, length.sd = 2e6, mask=NA)
cnv.val[which(overlapsAny(toGRanges(sim.data), gains))] <- cnv.val[which(overlapsAny(toGRanges(sim.data), gains))] + 1
losses <- createRandomRegions(nregions=3, length.mean = 6e6, length.sd = 2e6, mask=NA)
cnv.val[which(overlapsAny(toGRanges(sim.data), losses))] <- cnv.val[which(overlapsAny(toGRanges(sim.data), losses))] -1
#Add the cnv.data to the bins
sim.data <- cbind(sim.data, cnv.val, stringsAsFactors=FALSE)
}
names(sim.data)[4:(nsamples+3)] <- paste0("Sample", 1:nsamples)
And plot it.
max.cnv <- 3
bins <- toGRanges(sim.data[,c(1,2,3)])
nsamples <- length(sim.data)-3
png(filename = "simulated.cnv.png", width=1500, height=1000)
pp <- getDefaultPlotParams(plot.type=4)
pp$leftmargin <- 0.1
pp$data1inmargin <- 2
kp <- plotKaryotype(genome="hg19", plot.type=4, ideogram.plotter = NULL, labels.plotter = NULL, plot.params = pp)
kpAddCytobandsAsLine(kp, color.schema = "circos")
kpAddChromosomeNames(kp, srt=45)
sample.margin <- 0.02
sample.height <- (1-sample.margin*nsamples)/nsamples #the first three columns are chr, start and end
for(i in seq_len(nsamples)) {
sample.name <- names(read.depth)[i+3]
r0 <- (i-1)*(sample.height+sample.margin)
r1 <- r0 + sample.height
kpAddLabels(kp, r0=r0, r1=r1, labels = sample.name, label.margin = 0.04)
kpAxis(kp, r0=r0, r1=r1, tick.pos = 0:max.cnv, ymin=0, ymax=max.cnv, cex=0.8)
kpAbline(kp, h=c(0:max.cnv), col="#aaaaaa", r0=r0, r1=r1, ymin=0, ymax=max.cnv)
kpPoints(kp, data=bins, y=sim.data[,i+3], r0=r0, r1=r1, ymin=0, ymax=max.cnv)
}
dev.off()
![enter image description here][2]
You could then add a highlight in color for the regions with gains or losses, add markers for your favorite genes, or anything else you need. You can find more information on how to use it at [karyoploteR Tutorial and Examples](https://bernatgel.github.io/karyoploter_tutorial/).
I hope this helps!
[1]: http://i.imgur.com/8TGt7Gx.png
[2]: http://i.imgur.com/4i6ApyH.png | biostars | {"uid": 257170, "view_count": 4146, "vote_count": 1} |
I am trying to run blastx (version 2.2.28) with the following command line:
blastx -db nr -outfmt 5 -evalue 1e-3 -word_size 3 -show_gis -num_alignments 1 \
-max_hsps 20 -num_threads 5 -out transcripts_refonly.xml -query transcripts_refonly.fa
While blastx is running, I can see the output file growing in size; however, when blastx finishes, the output file is empty (0 kb).
This happens when I run it with big files, like a transcript fasta file (from an RNA-seq project) of ~30 Mb (21000 sequences), but if I run a smaller file (20 sequences) it works fine.
I also tried the .txt output format and the same thing happens; I also ran it on the cluster and on different servers and it is still not working.
Does someone know what I am doing wrong?
Thank you | I guess it was something about memory... I still don't know.
My solution: just split my file into several small files (~100 sequences per file) and run multiple blastx jobs at the same time on the cluster.
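For the splitting step, one convenient option (my suggestion, not part of the original solution) is seqkit, which I believe can chunk a fasta by sequence count:
    seqkit split -s 100 transcripts_refonly.fa
 | biostars | {"uid": 179727, "view_count": 2198, "vote_count": 1}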
Dear community,
I am working with a gene family across a wide range of chordate species, and one of the steps was to retrieve potential homologs from Ensembl. Interestingly, there were some of these genes that had their ID deprecated in newer versions of Ensembl. However, annotated sequences from these deprecated genes had reasonably good quality, and they matched the phylogenetic position that I'd expect, in phylogenetic analyses (so I could tentatively assume that the annotation was correct).
I am curious about the reasons why some gene annotations are deprecated from version to version, even though the annotation looks correct, and whether it is wise to use such sequences nonetheless.
All best, | All these genes are slightly different.
The Mouse Lemur genes were all changed when we moved to a new genome assembly. [ENSMICG00000011090](http://aug2017.archive.ensembl.org/Microcebus_murinus/Gene/Summary?db=core;g=ENSMICG00000011090;r=KQ056493.1:1017881-1023258;t=ENSMICT00000011083) and [ENSMICG00000011081](http://aug2017.archive.ensembl.org/Microcebus_murinus/Gene/Summary?g=ENSMICG00000011081;r=KQ056493.1:1033459-1039527) both disappeared when we moved from v2 to v3 of the assembly, whereas [ENSMICG00000011093](http://jul2016.archive.ensembl.org/Microcebus_murinus/Gene/Summary?db=core;g=ENSMICG00000011093;r=GeneScaffold_456:993434-999177;t=ENSMICT00000011088) was lost when we moved from v1 to v2. The good news is that the genes for two of them, [*HTR3C*](http://www.ensembl.org/Microcebus_murinus/Gene/Summary?db=core;g=ENSMICG00000049301;r=1:76618788-76624165;t=ENSMICT00000073294) and [*HTR3E*](http://www.ensembl.org/Microcebus_murinus/Gene/Summary?db=core;g=ENSMICG00000049068;r=1:76660281-76677598) are in the latest version of Ensembl. The contigs underlying both of them have changed significantly, and this is the reason why the identifiers were changed and not mapped between the releases. I cannot find Mouse Lemur *HTR3D* in the current Ensembl database, although I suspect that [ENSMICG00000049384](http://www.ensembl.org/Microcebus_murinus/Gene/Summary?db=core;g=ENSMICG00000049384;r=1:76604424-76611926;t=ENSMICT00000072460) could be the correct gene based on sequence and genomic position – I will see if this ought to be annotated as this gene.
Kangaroo rat [ENSDORG00000003549](http://dec2016.archive.ensembl.org/Dipodomys_ordii/Gene/Summary?db=core;g=ENSDORG00000003549;r=GeneScaffold_6098:3429-11276;t=ENSDORT00000003549) is different. It was not lost when there was a new genome assembly. Looking at the gene on the assembly at that time, it has two exons that overlap gaps in the assembly, which is probably why it was deprecated. Since it doesn't have a gene name, I [searched for the sequence using BLAST](http://www.ensembl.org/Dipodomys_ordii/Tools/Blast/Ticket?tl=NdAcgUH9C90qs6W6) in the current database, and there are a few solid options for the gene it might be now. It has since had a new genome assembly and the gaps repaired, so it could be annotated properly. | biostars | {"uid": 344126, "view_count": 2095, "vote_count": 1}
I know my samples are from C57BL6.
Should I use the [specific for C57BL6][1] or just [generic mus musculus][2] reference transcriptomes/genomes for alignments?
I am worried that the file size for the reference transcriptome for C57BL6 (`Mus_musculus_c57bl6nj.C57BL_6NJ_v1.cdna.all.fa.gz`, 39.7 MB) is 22% smaller than its generic counterpart (`Mus_musculus.GRCm39.cdna.all.fa.gz`, 51.2 MB). Clearly, biologically the C57BL6 transcriptome cannot be 22% smaller than the transcriptome of some other strain, so there is something going on with the lower detalization for C57BL6??
What do I gain, what do I lose if I opt for C57BL6-specific transcriptome/genome?
[1]: https://ftp.ensembl.org/pub/release-109/fasta/mus_musculus_c57bl6nj/
[2]: https://ftp.ensembl.org/pub/release-109/fasta/mus_musculus/ | There's some basic literature on the topic here: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1010552 . Slightly improved uniquely mapped reads; though mapping parameters appear to have a stronger impact than choice of reference.
However, for *differential* expression (BL6/J untreated vs BL6/J treated) the question isn't so much "how do the quantifications change" but "how do the logFCs change" -- and I don't see published results on this. I should imagine that, by aligning to the strain transcriptome, some genes might get slightly higher coverage to bump them over the soft-filtering threshold; but few (if any) logFC values should alter based on the reference.
It's probably worth doing twice just to put your mind at ease. | biostars | {"uid": 9555931, "view_count": 315, "vote_count": 1} |
HI!
I would like to get an rs number using (any) of the following information that I have in a data spreadsheet
Gene name (e.g. EXOSC3)/ HGVS cDNA (old nomenclature) (e.g. c.92G>C)/ HGVS protein (p.G31A for protein)/ cDNA_reference (e.g. NM_016042.2)
I was thinking that this would be relatively straightforward - i.e. download similar data with rs number - then merge the files - but can't find this! Would be extremely grateful if anyone can help, Thanks so much, Fleur | You can use the [UCSC Genome Browser][1] to get this information, but the process depends on the specific HGVS terms you have.
The general idea is to split your list into two lists, one with just your different HGVS terms, and one with just your gene names. Then you can use a combination of the [Table Browser][2] and [Data Integrator][3] to get the rsID's (if any) corresponding to your gene names, and the [Variant Annotation Integrator][4] to get the rsID's (if any) corresponding to your HGVS terms.
We can provide more specific steps if you email this question along with some example data to our mailing list genome@soe.ucsc.edu, that way our entire team will see the question and can help with a solution.
Thanks,
ChrisL from the UCSC Genome Browser
[1]: http://genome.ucsc.edu "UCSC Genome Browser"
[2]: http://genome.ucsc.edu/cgi-bin/hgTables "Table Browser"
[3]: http://genome.ucsc.edu/cgi-bin/hgIntegrator "Data Integrator"
[4]: http://genome.ucsc.edu/cgi-bin/hgVai "Variant Annotation Integrator" | biostars | {"uid": 248425, "view_count": 2440, "vote_count": 1} |
I want to have a bigwig file converted into gff after having the genome coordinates and genome annotation files converted from Hg19 to Hg18. One way would be to convert bigwig into BED, use the liftover tool and then have the BED file converted into gff. Could someone please suggest a method to convert bigwig to BED? I found nothing available on the net.
 | Instead of `bigWigToBedGraph`, you might use `bigWigToWig`, as bedgraph is BED-like but isn't exactly BED, which may affect use of UCSC's liftover tool. In any case, both binaries are available from the [UCSC executables download page](http://hgdownload.cse.ucsc.edu/admin/exe/).
Once you have the file in WIG format, you can convert it to a UCSC BED file with the [BEDOPS `wig2bed` conversion utility](http://bedops.readthedocs.org/en/latest/content/reference/file-management/conversion/wig2bed.html).
Alternatively, you could take the bedgraph output and convert it to BED with an added GNU `awk` step:
    $ awk '{ \
        if ($1 ~ /^chr/) { \
            print $1"\t"$2"\t"$3"\tid-"NR"\t"$4; \
        } \
    }' foo.bedgraph \
    > foo.bed
You should be able to safely run it through liftover at that point.
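For the BEDOPS route, `wig2bed` reads from standard input, so (if memory serves on the interface) the conversion step is simply:
    wig2bed < foo.wig > foo.bed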
| biostars | {"uid": 71692, "view_count": 42589, "vote_count": 5} |
I have a list of genes, and I want to extract the SNPs and indels from my VCF file (which I generated using the GATK pipeline) based on the gene coordinates. The list of gene coordinates:
Gene Name Accession_no. Start_Position End_Position Strand
Rv0194 NC_000962.3 226878 230462 +
I looked at bedtools, but it asks for a .bed format of the genes as well as a .bed of the bam files. How do I do it? Or are there any other options/tools/scripts?
For example, I tried tabix:
bgzip ERR038736_UnifiedGenotyper_variants_raw_snp.vcf
tabix ERR038736_UnifiedGenotyper_variants_raw_snp.vcf.gz
tabix ERR038736_UnifiedGenotyper_variants_raw_snp.vcf.gz AL123456.3:226878-230462 > Rv0194
and this gave me the variants like this:
AL123456.3 227098 . T C 6730.77 . AC=2;AF=1.00;AN=2;DP=172;Dels=0.
AL123456.3 228069 . G A 7132.77 . AC=2;AF=1.00;AN=2;BaseQRankSum=-
AL123456.3 228168 . G C 6682.77 . AC=2;AF=1.00;AN=2;DP=171;Dels=0.
But this is not a vcf file, and I can only extract one region at a time. I want to extract all variants against a list of coordinates and store them in a vcf output.
Can anyone help me with this?
| Hello,
you can convert this list into a valid `bed` file by:
$ cut -f2-4 genes.txt|tail -n+2 > genes.bed
You can then take this bed file with `tabix`:
$ tabix input.vcf.gz -R genes.bed
fin swimmer
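Two small additions of mine: I believe `tabix -h` will also emit the VCF header, and `bcftools` can do the same extraction while writing a complete VCF directly:
    bcftools view -R genes.bed input.vcf.gz -o subset.vcf
 | biostars | {"uid": 342354, "view_count": 2633, "vote_count": 2}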
What exactly is "unmappable regions"? My understanding from some google searches is that they are some short regions on the gene that are difficult to map. Is this correct? If so, why are there short region and long region, aren't they randomly splitted?
Thank you. | In the genome, there is a lot of what is called "repetitive DNA", these are sequences that appear many times throughout the genome. For example LINE1 and Alu are two types of repetitive sequences, that make up a large fraction of the human genome. Naturally, repetitive DNA is processed in sequencing assays like WGS and ChIP-seq, but aligners have a hard time figuring out where the read comes from as the sequence could have originated from many different places. The same thing happens when there are paralogous genes with very similar sequences, the aligner can't exactly distinguish where the sequence originated. This is why in short read sequencing, a lot of reads are discarded from the analysis as we don't know the true genomic origin of those reads. Long read sequencing mostly avoids this problem. | biostars | {"uid": 9550791, "view_count": 420, "vote_count": 1} |
Apologies if this has already been answered, but if I want to update samtools 1.3 to the latest release, what's the best way to go about it? Uninstall the older version first and then do a fresh install, or is there an update command that automatically does that from the terminal? | Remove the old version and install the new one (or just install the new one over top). But really, as Wouter mentioned, conda is your friend: `conda install -c bioconda samtools`. I recommend making a separate conda environment for each software version, since then you don't have to worry about something breaking because you've accidentally up/down-graded some random dependency of another tool.
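The per-version environment idea looks like this in practice (the version number here is just an example):
    conda create -n samtools-1.9 -c bioconda samtools=1.9
    conda activate samtools-1.9
 | biostars | {"uid": 300114, "view_count": 9767, "vote_count": 1}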
Is it possible to use `console.log` in CWL javascript expressions? Also, I was wondering about `debugger` statements or stepping through the code? Or what is the recommended way to go about debugging CWL javascript? | The Rabix Composer tool editor (http://rabix.io/) has the JS editor integrated. You can simulate tool inputs and it shows what is returned by the expression. | biostars | {"uid": 303401, "view_count": 2776, "vote_count": 1}
Hi Biostars!,
I am looking to calculate Aneuploidy across a number of tumor samples downloaded from TCGA and ICGC and I had a couple questions regarding the provided copy number data, specifically the segment mean column (which I think is the most relevant data for my cause).
From what I understand from the documentation and previous questions on biostars the values are in Log2 - from this can I read the following into the data?
- A value of 0 should represent CN2
- If so CN2 represents diploidy and variations represent aberrations
- These changes in copy number are a result of the gain or loss of segments of chromosome
I am rather new at CNV work so please let me know if I am completely missing something or barking up the wrong data column.
Thanks for your time, have a great weekend!
Graeme | You have it about right. Variants represent aberrations, yes. A value of 0 is probably close to CN2 state, but without some detail on the methods, it isn't really easy to tell. Some tumors may have total copy number significantly less than or more than 2, in which case the 0 may not represent CN2 state.
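Assuming the segment means really are log2(CN/2) ratios against a diploid baseline (worth verifying in the methods, per the caveats above), the back-conversion in R is:
    copy_number <- 2 * 2^segment_mean  # segment_mean = 0 gives CN 2; +1 gives CN 4; -1 gives CN 1
 | biostars | {"uid": 137769, "view_count": 2868, "vote_count": 1}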
Hi everyone, I was wondering if I can get only the longest transcript in the corresponding column of the Annovar annotation? The -onetranscript argument chooses randomly, not the longest one.
So could you please help me? Thanks in advance | The answer from the author of ANNOVAR is this:
> There has never been a consensus in the field which transcript should
> be used to represent a gene when multiple transcripts are available.
> The most popular approach is to use the longest transcript nowadays.
> However, in the medical genetics field, for certain specific diseases
> and specific genes, there are 'canonical' transcripts that everybody
> uses by default for historical reasons, and you will need to manually
> select this canonical transcript from ANNOVAR output file to
> communicate with the rest of the field.
[source: http://annovar.openbioinformatics.org/en/latest/misc/faq/]
In a way, he is correct, and I feel that the field should start to embrace (and report) multiple transcript isoforms more and more, even with the increased data load. There is too much reporting of variants on isoforms that may have minimal relevance in the tissue of study. Also, for many well-studied genes, like *BRCA1*, we have identified >10 isoforms; whilst, for other less-studied genes, we don't yet understand the alternate splicing patterns of the gene.
Note that VEP does allow you to output the canonical isoform, but to Ensembl the canonical is always the isoform with the longest CCDS: https://www.ensembl.org/Help/Glossary?id=346
On the last point, researchers even disagree about what canonical means. For some it is the highest expressed isoform in the tissue being studied, which may not necessarily be the longest. At least Ensembl's definition is broad-sweeping and covers all tissues.
Kevin | biostars | {"uid": 298933, "view_count": 1897, "vote_count": 1} |
Hi,
I am currently trying to filter a multi-sample vcf file obtained with samtools mpileup. I would like to keep the positions with all samples depths > 4.
I found a way with SnpSift filter, but I have to specify each sample one by one:
cat test.vcf | java -jar SnpSift.jar filter "(GEN[0].DP>4) & (GEN[1].DP>4) & ... &(GEN[66].DP>4)" > filtered.vcf
(I've got 66 samples ... )
Does anyone know how to apply this filtering option to all samples without specifying each sample one by one? (GEN[*] does not seem to work)
Moreover, I would like to apply other genotype-calling filters on subsets of my samples; is there a way to apply a filter to a specific subset?
Thanks | How about generating the script?
cat test.vcf | java -jar SnpSift.jar filter `seq 1 66 | awk '{ printf("%s (GEN[%d].DP>4) ",(NR==1?"":" & "), $1);}'`
See also my tool: https://github.com/lindenb/jvarkit/wiki/VCFFilterJS to filter with javascript.
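Another option worth mentioning (my addition, not from the original answer) is bcftools, whose filtering expressions can apply a condition across all samples at once:
    bcftools view -i 'MIN(FMT/DP)>4' test.vcf > filtered.vcf
 | biostars | {"uid": 124032, "view_count": 4983, "vote_count": 2}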
Could you explain the difference between `STAR`, `KALLISTO`, `SALMON` etc. to an experimental biologist/non-bioinformatician?
If possible, the pros and cons of each pipeline.
**Edit below**
***I ask this because three of my colleagues use these 3 different tools for RNA-seq, basically to answer the same type of biological questions.*** | Regarding the pros and cons of the programs, you might find this article useful:
[https://www.nature.com/articles/s41598-017-01617-3][1]
It finds that kallisto and Salmon produce near identical results, and that STAR (with HTseq for producing gene counts) is less accurate (due to some of the reasons explained above). In terms of speed/memory requirements, the difference between programs is substantial. In recent benchmarking of kallisto vs. STAR on workflows for single-cell RNA-seq
[https://www.biorxiv.org/content/10.1101/673285v2][2]
we found that kallisto was 2.6 times faster than STAR. More importantly, kallisto used much less memory, in some cases 15x less RAM than STAR. This makes it possible to run kallisto on a laptop rather than a server, and facilitates reproducible workflows.
[1]: https://www.nature.com/articles/s41598-017-01617-3
[2]: https://www.biorxiv.org/content/10.1101/673285v2 | biostars | {"uid": 400009, "view_count": 23017, "vote_count": 28} |
I have paired-end Fastq files from Illumina (**Fast1.fastq** and **Fast2.fastq**). Now, how should I proceed with de-novo genome assembly? I'm completely new to this field of assembling genome sequences. Which open-source tools can be used?
Please help me with the tools and tutorials which can be used for de-novo assembly of sequences. | If you download the BBMap package, there is a (hopefully) helpful guide in bbmap/docs/guides/PreprocessingGuide.txt
For isolate bacterial assembly with SPAdes, I recommend:
1) Adapter-trimming (you can do quality-trimming at the same time; I suggest a low cutoff, such as Q10)
2) Artificial contaminant filtering
3) Human contaminant removal
4) Error correction
5) Paired-read merging
...then assemble with SPAdes, using both the merged and unmerged reads.
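As a concrete starting point for step 1, a typical BBDuk adapter/quality-trimming call looks roughly like this (file names are placeholders; with recent BBMap versions `ref=adapters` resolves to the bundled adapter file, otherwise point it at the adapters.fa shipped in the package):
    bbduk.sh in=Fast1.fastq in2=Fast2.fastq out=trimmed_1.fastq out2=trimmed_2.fastq \
        ref=adapters ktrim=r k=23 mink=11 hdist=1 qtrim=rl trimq=10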
| biostars | {"uid": 249829, "view_count": 4627, "vote_count": 1} |
Hi, all!
I've obtained the enriched GO BP terms using fGSEA in R as shown below.
1: GO_POSITIVE_REGULATION_OF_VIRAL_TRANSCRIPTION 0.020378457 0.08797654 0.4748196 1.5113374 13 41 TFAP4,NELFCD,POLR2H,POLR2D,NELFA,DHX9,...
2: GO_CARDIAC_CHAMBER_DEVELOPMENT 0.005847953 0.04827840 -0.3775346 -1.7146207 0 166 TEK,FOXF1,CCM2L,TGFBR3,SLIT3,TGFBR2,...
3: GO_DNA_DEPENDENT_DNA_REPLICATION_MAINTENANCE_OF_FIDELITY 0.001461988 0.04816766 0.6526836 2.0851705 0 42 EME1,DNA2,TIMELESS,BLM,TONSL,RAD51,...
4: GO_CIRCADIAN_RHYTHM 0.626463700 0.79544296 0.2392950 0.9379309 534 194 TOP2A,EZH2,TIMELESS,LGR4,NMU,SUV39H2,...
5: GO_PHOSPHATIDYLSERINE_ACYL_CHAIN_REMODELING 0.488165680 0.69185315 -0.3223920 -0.9673269 164 21 PLA2G4F,PLA2G5,LPCAT3,PLA2G2A,PLA2G1B
6: GO_SPINAL_CORD_DEVELOPMENT 0.334183673 0.55175132 0.2905675 1.0667585 261 104 MNX1,SOX4,HOXC10,SOX11,LHX5,SOX12,...
What I'm struggling with is saving the result to a file.
I've tried 'write.csv' and 'write_csv', but both failed because of the presence of a list column (or columns) in the result that I couldn't see.
> write.csv(fgseaRes,"test.csv")
Error in write.table(fgseaRes, "test.csv", col.names = NA, sep = ",", :
unimplemented type 'list' in 'EncodeElement'
> write_csv(fgseaRes,"test.csv")
Error in stream_delim_(df, path, ..., bom = bom, quote_escape = quote_escape) :
Don't know how to handle vector of type list.
Does anyone know how to save this to txt or csv files?
Thank you! | Use fwrite from the data.table package: `fwrite(fgseaRes, file="fgsea.tsv", sep="\t", sep2=c("", " ", ""))`
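Alternatively (my suggestion), the list column, which in fgsea results should be `leadingEdge`, can be collapsed to a plain string first; after that the usual writers work:
    fgseaRes$leadingEdge <- sapply(fgseaRes$leadingEdge, paste, collapse = " ")
    write.csv(fgseaRes, "fgsea.csv", row.names = FALSE)
 | biostars | {"uid": 410796, "view_count": 2768, "vote_count": 2}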
I seem to have more reads after alignment than before.
Before alignment
` awk '{s++}END{print s/4}' reads.fastq `
> 153265
After alignment
`samtools flagstat align.bam`
> 180051 + 0 in total (QC-passed reads + QC-failed reads)
> 0 + 0 secondary
> 26786 + 0 supplementary
> 0 + 0 duplicates
> 171567 + 0 mapped (95.29% : N/A)
> [...]
I do not understand how that can be.
Can you help? | The bam contains the **supplementary** (part of the read that maps elsewhere) + **secondary** (some other probable hits for the read) alignments. In your flagstat output, 153265 primary reads + 26786 supplementary records = 180051 records in total.
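To count only the primary records (which should match the fastq read count), exclude the secondary (0x100) and supplementary (0x800) flags:
    samtools view -c -F 0x900 align.bam
 | biostars | {"uid": 326276, "view_count": 3145, "vote_count": 1}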
Hi,
I have a text file of 7500 genomic positions with their chr, start, end, and I want to get their nucleotide sequences. Can someone point me to a tool, or share any thoughts on how to do it?
Thank you,
 | If you use Python:
    import pysam
    # open an indexed reference fasta (e.g. hg19); pysam can build the .fai index if it's missing
    genome = pysam.FastaFile(path_to_genome + 'genome.fa')
    # fetch() takes 0-based, end-exclusive coordinates ('chrom' also avoids shadowing Python's built-in chr)
    sequence = genome.fetch(chrom, start, end)
path_to_genome would be the directory of any genome you have downloaded (e.g. hg19).
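If you'd rather stay on the command line, bedtools can do the same from a BED file of your 7500 positions:
    bedtools getfasta -fi genome.fa -bed positions.bed -fo sequences.fa
 | biostars | {"uid": 196364, "view_count": 3541, "vote_count": 3}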
Dear All,
I have to deal with some small-molecule screening for my protein-drug docking studies. Please suggest a few small-molecule databases, and also free software that can be used to screen them. Thank you | You may want to have a look at [ChEMBL](https://www.ebi.ac.uk/chembl/about) (database of bioactive drug-like small molecules) and [ChEBI](https://www.ebi.ac.uk/chebi/aboutChebiForward.do) too! | biostars | {"uid": 115992, "view_count": 5855, "vote_count": 1}
Hi all,
I have two datasets A = 7 and B = 12 and the overlapping region is C = 4. I want to draw a publication-standard Venn diagram using an R package. How can I make it? | Very good and simple tutorials, courtesy of a [Google search](https://www.google.com/search?q=venn+diagram+in+r&oq=venn+diagram+in+&aqs=chrome.1.69i57j0l5.2975j0j7&sourceid=chrome&es_sm=119&ie=UTF-8), are:
http://www.ats.ucla.edu/stat/r/faq/venn.htm
http://rstudio-pubs-static.s3.amazonaws.com/13301_6641d73cfac741a59c0a851feb99e98b.html
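For the specific numbers in the question, the VennDiagram package could draw this directly (a sketch; the colours are arbitrary):
    library(VennDiagram)  # install.packages("VennDiagram")
    grid.newpage()
    draw.pairwise.venn(area1 = 7, area2 = 12, cross.area = 4,
                       category = c("A", "B"), fill = c("skyblue", "orange"))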
| biostars | {"uid": 139304, "view_count": 33404, "vote_count": 4} |
What is the most appropriate journal for publishing [this](http://www.iicb.res.in/bpga/index.html) pan-genomics tool?
Also listed at [omictools.com](http://omictools.com/bpga-s10835.html)
(except Oxford's Bioinformatics)
| **JANE** http://biosemantics.org/jane/suggestions.php says:
- BMC bioinformatics
- Bioinformatics
- NAR
- ....
:-) | biostars | {"uid": 160679, "view_count": 1746, "vote_count": 1} |