| text | metadata |
|---|---|
| stringlengths 35 to 445k | dict |
# 01-preprocessing_1.ipynb
Repository: BIMSBbioinfo/scregseg
<code>
import os
import pandas as pd
from anndata import read_h5ad
import scanpy as sc
import scregseg
import matplotlib.pyplot as plt
</code>
# Processing and preparing raw data
This tutorial shows how to create and manipulate count matrices.
Specifically, we shall illustrate:
* How to create a count matrix using `scregseg fragments_to_counts` and `scregseg bam_to_counts`
* How to filter and subset the count matrix using `scregseg filter` and `scregseg subset`
* How to collapse cells within groups using `scregseg collapse`
* How to combine datasets using `scregseg merge`
* How to create pseudobulk bam and bigwig tracks
### Obtaining the dataset
We will obtain the tutorial dataset from 10x Genomics.
Caution: The bam file is rather large. One might want to skip downloading it.
<code>
# first we download example scATAC-seq data
!wget -O atac_v1_pbmc_5k_fragments.tsv.gz https://cf.10xgenomics.com/samples/cell-atac/1.2.0/atac_v1_pbmc_5k/atac_v1_pbmc_5k_fragments.tsv.gz
!wget -O atac_v1_pbmc_5k_possorted_bam.bam https://cg.10xgenomics.com/samples/cell-atac/1.2.0/atac_v1_pbmc_5k/atac_v1_pbmc_5k_possorted_bam.bam
!wget -O atac_v1_pbmc_5k_possorted_bam.bam.bai https://cg.10xgenomics.com/samples/cell-atac/1.2.0/atac_v1_pbmc_5k/atac_v1_pbmc_5k_possorted_bam.bam.bai
# sometimes bedtools fails when processing the original *.tsv.gz files, but unpacking and packing seems to help
!gunzip -f atac_v1_pbmc_5k_fragments.tsv.gz
!gzip -f atac_v1_pbmc_5k_fragments.tsv
# prefiltered cells from CellRanger
!wget -O atac_v1_pbmc_5k_singlecell.csv https://cf.10xgenomics.com/samples/cell-atac/1.2.0/atac_v1_pbmc_5k/atac_v1_pbmc_5k_singlecell.csv
!wget -O atac_v1_pbmc_5k_analysis.tar.gz https://cf.10xgenomics.com/samples/cell-atac/1.2.0/atac_v1_pbmc_5k/atac_v1_pbmc_5k_analysis.tar.gz
!tar xvf atac_v1_pbmc_5k_analysis.tar.gz
</code>
### Preparing a single-cell count matrix
First, we build genome-wide tiling windows. These will be used as the basis to construct the countmatrix.
The chromosome sizes are extracted from the fragments file. Alternatively, a bam-file could be used
for this step as well.
<code>
!scregseg make_tile \
--regions tile1kb.bed \
--binsize 1000 \
--fragmentfile atac_v1_pbmc_5k_fragments.tsv.gz
</code>
Next, we construct a countmatrix.
This step will require a fragments or bam-file (used with `scregseg bam_to_counts`)
and a bed-file specifying the genomic intervals.
The result of this step will be a regions by barcodes matrix.
The `--with-fraglen` flag determines whether fragment length information is collected per interval as well. This might be useful
for exploring informative states in the HMM model later on.
<code>
!scregseg fragments_to_counts \
--fragmentfile atac_v1_pbmc_5k_fragments.tsv.gz \
--regions tile1kb.bed \
--with-fraglen \
--counts countmatrix.h5ad
</code>
In this example, we save the countmatrix as an AnnData dataset, which facilitates easy interoperability with scanpy.
Alternatively, one could save the countmatrix as `countmatrix.mtx` to store the data in matrix market format.
The latter option makes it easier to continue with the dataset in a different environment, e.g. when using R.
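For reference, here is a minimal sketch (not a scregseg command; the output file names are arbitrary) of exporting the AnnData-based count matrix to matrix market format:
<code>
import scipy.io
import pandas as pd
from anndata import read_h5ad

adata = read_h5ad('countmatrix.h5ad')
# write the sparse count matrix plus row/column labels
# (rows and columns follow the regions-by-barcodes layout described above)
scipy.io.mmwrite('countmatrix.mtx', adata.X)
pd.Series(adata.obs_names).to_csv('regions.txt', index=False)
pd.Series(adata.var_names).to_csv('barcodes.txt', index=False)
</code>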
Next, we subset/filter the raw countmatrix to remove poor quality barcodes.
The 10x Genomics data already contains information from the CellRanger pipeline about the cell quality.
So we can continue with the pre-determined high-quality cells.
To do so, we extract the desired cells (indicated by the `is__cell_barcode` column) from the 10x Genomics metadata:
<code>
df = pd.read_csv('atac_v1_pbmc_5k_singlecell.csv')
df = df[df.is__cell_barcode==1]
df[['barcode']].to_csv('qcontrolled_cells.csv', index=False)
print(f'{df.shape[0]} high-quality cells are left for downstream processing')
</code>
Then we subset the original countmatrix and retain only the quality controlled cells:
<code>
!scregseg subset \
--incounts countmatrix.h5ad \
--outcounts filtered_countmatrix.h5ad \
--subset qcontrolled_cells.csv
</code>
In addition, or as an alternative, it is possible to filter cells and regions using `scregseg filter`:
<code>
#!scregseg filter \
#    --incounts countmatrix.h5ad \
#    --outcounts filtered2_countmatrix.h5ad \
#    --mincount 1000 \
#    --maxcount 40000 \
#    --trimcount 1
</code>
Now, let's check the content of the count matrix:
<code>
# uncomment if the filter step above was run
#adata = read_h5ad('filtered2_countmatrix.h5ad')
#adata
</code>
<code>
adata = read_h5ad('filtered_countmatrix.h5ad')
adata
</code>
Sometimes it might be useful to concatenate count matrices, e.g. stemming from different experiments.
This can be achieved using `scregseg merge`:
<code>
!scregseg merge \
--incounts filtered_countmatrix.h5ad filtered_countmatrix.h5ad \
--outcounts merged_countmatrix.h5ad
</code>
### Preparing cell-group collapsed count matrices
After having performed some initial analysis, including feature identification,
dimensionality reduction and cell clustering, it might be of interest
to investigate the accessibility profiles across cell-groups or cell-clusters.
To this end, a count matrix can be constructed that collapses cells within groups: `scregseg collapse`
We shall utilize the pre-determined clustering results from CellRanger as an example to illustrate how to compile a pseudo-bulk count matrix.
In addition, we compile the pseudo-bulk count matrix with 500 bp resolution (for later use).
<code>
!scregseg make_tile \
--regions tile500b.bed \
--binsize 500 \
--fragmentfile atac_v1_pbmc_5k_fragments.tsv.gz
</code>
<code>
!scregseg fragments_to_counts \
--fragmentfile atac_v1_pbmc_5k_fragments.tsv.gz \
--regions tile500b.bed \
--counts countmatrix_500.h5ad
</code>
<code>
!scregseg subset \
--incounts countmatrix_500.h5ad \
--outcounts filtered_countmatrix_500.h5ad \
--subset qcontrolled_cells.csv
</code>
<code>
!scregseg filter \
--incounts filtered_countmatrix_500.h5ad \
--outcounts filtered_countmatrix_500.h5ad \
--trimcount 1
</code>
<code>
!scregseg collapse \
--incounts filtered_countmatrix_500.h5ad \
--outcounts collapsed_countmatrix_500.h5ad \
--cellgroup analysis/clustering/graphclust/clusters.csv
</code>
<code>
psadata = read_h5ad('collapsed_countmatrix_500.h5ad')
</code>
<code>
psadata
</code>
The collapsed countmatrix contains 11 columns, each corresponding to one of the clusters
defined in the clusters.csv file.
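A quick sanity check of this (assuming, as described above, that the columns of the collapsed matrix hold the cluster labels):
<code>
print(psadata.shape)
print(psadata.var_names.tolist())
</code>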
Finally, if we have access to raw bam files, we could split the bamfiles into pseudobulk
tracks determined by `--cellgroup`. This will also generate associated bigwig files
if deeptools is installed/available.
<code>
!scregseg pseudobulk_tracks \
--bamfile atac_v1_pbmc_5k_possorted_bam.bam \
--barcodetag CB \
--outdir pseudobulktracks \
--cellgroup analysis/clustering/graphclust/clusters.csv
</code>
<code>
os.listdir('pseudobulktracks')
</code>
|
{
"filename": "01-preprocessing_1.ipynb",
"repository": "BIMSBbioinfo/scregseg",
"query": "transformed_from_existing",
"size": 35713,
"sha": ""
}
|
# This_Python_analysis_pipeline_integrates_singlecell_and_spatial_transcriptomics_data_to_identify_clonal_transitions_and_correlate_them_with_TME_interactions_1.ipynb
Repository: connerlambden/BioloGPT
Below we describe the steps for downloading PDAC single-cell datasets and processing them for clonal analysis.
<code>
import scanpy as sc
# Download and preprocess PDAC scRNA-seq data
adata = sc.read_h5ad('pdac_scRNAseq.h5ad')
sc.pp.filter_cells(adata, min_genes=200)
sc.pp.filter_genes(adata, min_cells=3)
# Run dimensionality reduction and graph-based clustering (CNV inference, e.g. inferCNV-style analysis, is not shown here)
sc.tl.pca(adata, svd_solver='arpack')
sc.pp.neighbors(adata)
sc.tl.leiden(adata)
sc.tl.umap(adata)
adata.write('processed_pdac_data.h5ad')
</code>
Guide for visualizing spatial transcriptomics data and overlaying clonal signatures using integrated libraries.
<code>
import scanpy as sc
import squidpy as sq
# read the Visium spatial dataset (directory name as in the original example)
spatial_data = sc.read_visium('pdac_spatial_data')
# overlay the clonal signature and EMT marker annotations on the tissue coordinates
sq.pl.spatial_scatter(spatial_data, color=['clonal_signature', 'EMT_markers'])
</code>
The code integrates both scRNA-seq and spatial datasets, allowing users to visualize the spatial architecture of clonal populations along with their molecular characteristics.
***
### [**Evolve This Code**](https://biologpt.com/?q=Evolve%20Code%3A%20This%20Python%20analysis%20pipeline%20integrates%20single-cell%20and%20spatial%20transcriptomics%20data%20to%20identify%20clonal%20transitions%20and%20correlate%20them%20with%20TME%20interactions.%0A%0AIncorporate%20advanced%20trajectory%20inference%20and%20single-cell%20variation%20analysis%20modules%20to%20better%20capture%20dynamic%20clonal%20transitions.%0A%0AClonal%20heterogeneity%20pancreatic%20ductal%20adenocarcinoma%20tumor%20progression%20review%0A%0ABelow%20we%20describe%20the%20steps%20for%20downloading%20PDAC%20single-cell%20datasets%20and%20processing%20them%20for%20clonal%20analysis.%0A%0Aimport%20scanpy%20as%20sc%0A%23%20Download%20and%20preprocess%20PDAC%20scRNA-seq%20data%0Aadata%20%3D%20sc.read_h5ad%28%27pdac_scRNAseq.h5ad%27%29%0Asc.pp.filter_cells%28adata%2C%20min_genes%3D200%29%0Asc.pp.filter_genes%28adata%2C%20min_cells%3D3%29%0A%23%20Run%20clustering%20and%20infer%20copy%20number%20variations%20using%20inferCNV%20adaptations%0Asc.tl.pca%28adata%2C%20svd_solver%3D%27arpack%27%29%0Asc.pp.neighbors%28adata%29%0Asc.tl.leiden%28adata%29%0Asc.tl.umap%28adata%29%0Aadata.write%28%27processed_pdac_data.h5ad%27%29%0A%0AGuide%20for%20visualizing%20spatial%20transcriptomics%20data%20and%20overlaying%20clonal%20signatures%20using%20integrated%20libraries.%0A%0Aimport%20squidpy%20as%20sq%0Aspatial_data%20%3D%20sq.read_visium%28%27pdac_spatial_data%27%29%0Asq.pl.spatial%28scdata%3Dspatial_data%2C%20color%3D%5B%27clonal_signature%27%2C%20%27EMT_markers%27%5D%29%0A%0AThe%20code%20integrates%20both%20scRNA-seq%20and%20spatial%20datasets%2C%20allowing%20users%20to%20visualize%20the%20spatial%20architecture%20of%20clonal%20populations%20along%20with%20their%20molecular%20characteristics.%0A%0A)
***
### [Created with BioloGPT](https://biologpt.com/?q=Paper%20Review%3A%20Clonal%20Heterogeneity%20in%20Human%20Pancreatic%20Ductal%20Adenocarcinoma%20and%20Its%20Impact%20on%20Tumor%20Progression)
***
|
{
"filename": "This_Python_analysis_pipeline_integrates_singlecell_and_spatial_transcriptomics_data_to_identify_clonal_transitions_and_correlate_them_with_TME_interactions_1.ipynb",
"repository": "connerlambden/BioloGPT",
"query": "transformed_from_existing",
"size": 4490,
"sha": ""
}
|
# 0_Index.ipynb
Repository: KitwareMedicalPublications/2018-05-30-KRSCourseInBiomedicalImageAnalysisAndVisualization
# Biomedical Image Analysis and Visualization: ITK
### Kitware, Carrboro, North Carolina
### May, 2018
Instructors:
- Matt McCormick, PhD
- Dženan Zukić, PhD
- Francois Budin
## Abstract
The [Insight Segmentation and Registration Toolkit (ITK) (www.itk.org)](
http://www.itk.org) has become a standard in academia and industry for
medical image analysis. In recent years, the ITK community has
focused on providing programming interfaces to ITK from Python and JavaScript
and making ITK available via leading applications such as Slicer and ImageJ.
In this course we present best practices for taking advantage of ITK in your
imaging research and commercial products. We demonstrate how script writing
can be used to access the algorithms in ITK and the multitude of ITK extensions
that are freely available on the web.
## Tutorials
1. [Introduction to the Insight Toolkit (ITK)](1_Introduction_to_the_Insight_Toolkit.ipynb)
2. [Image Filtering](2_Image_Filtering.ipynb)
3. [Segmentation](3_Segmentation.ipynb)
4. [ITK and NumPy](4_ITK_and_NumPy.ipynb)
5. [Registration](5_Registration.ipynb)
6. [Extending the Toolkit](6_Extending_the_Toolkit.ipynb)
|
{
"filename": "0_Index.ipynb",
"repository": "KitwareMedicalPublications/2018-05-30-KRSCourseInBiomedicalImageAnalysisAndVisualization",
"query": "transformed_from_existing",
"size": 2727,
"sha": ""
}
|
# ode2_lie.ipynb
Repository: bigfooted/maxima-odesolve
<h1 align="center"> Integrating factors for second order ODEs </h1>
<h3 align="center">A symbolic algorithm for the maxima CAS.</h3>
In this manual you will find how to use ode2_lie to find an integrating factor, a lambda symmetry or a first integral of a second order ODE. This method is based on the paper of Cheb-terrab and Roche <a href='#ref:chebterrabroche'>[1]</a>. The link between integrating factors and lambda symmetries comes from the paper of Muriel and Romero <a href='#ref:murielromero'>[2]</a>.
# step 1: loading the files
<code>
kill(all);batch("~/mathematics/maxima_files/ode2_lie.mac");batch("~/mathematics/maxima_files/kamke6_1.mac");
</code>
To test our implementation, we use the database of second order nonlinear ordinary differential equations found in the book of Kamke <a href='#ref:kamke'>[3]</a>. These ODEs are defined in chapter 6, and provided as a list in the file kamke6_1.mac
The code is rather new and therefore produces some intermediate messages to let you know what's going on. To show only warning and error messages, we lower the message verbosity to 1:
<code>
DEBUGFLAG:1;
</code>
A value of 0 shows only error messages, a value of 1 additionally shows warning messages, and values 2..5 show increasingly more messages and intermediate results to help you figure out what is going on.
# example 1
Since ode2_lie is based on the paper of Cheb-terrab and Roche <a href='#ref:chebterrabroche'>[1]</a>, we will first show that it produces the same results as the examples in the paper. The first ODE we will test is kamke ode 6.226, which has an integrating factor of the form $\mu(x,y')$:
<code>
ode226: kamke6[226];
</code>
<code>
mu:ode2_lie(ode226,y,x);
</code>
The ODE was already in exact form, so the integrating factor $y'$ was detected immediately as the coefficient of the highest derivative by testing for exactness. We now check if the integrating factor is correct:
<code>
isIntegratingFactor(mu,ode226,y,x);
</code>
# example 2
Now we are going to solve the ODE $\frac{dy}{dx}=\frac{h(y')}{x-y}$, which is Kamke ode 6.136. This is another example of an ODE with an integrating factor of the form $\mu(x,y')$:
<code>
ode136: kamke6[136];
</code>
we can compute an integrating factor using the command ode2_lie(ode,y,x), where $y=y(x)$ is the dependent variable and $x$ is the independent variable. Note that we do not need to define explicit dependencies.
<code>
mu:ode2_lie(ode136,y,x);
</code>
we check if the integrating factor is correct:
<code>
isIntegratingFactor(mu,ode136,y,x);
</code>
# example 3
Another ODE with an integrating factor of the form $\mu(x,y')$ is Kamke ode 6.66:
<code>
ode66:kamke6[66];
</code>
<code>
mu:ode2_lie(ode66,y,x);
</code>
<code>
isIntegratingFactor(mu,ode66,y,x);
</code>
# example 4
This example is not in the Kamke database:
<code>
ode: 'diff(y,x,2)=('diff(y,x)*(x*'diff(y,x)+1)*(-2+exp(y)))/('diff(y,x)*x^2+'diff(y,x)-1);
</code>
<code>
mu: ode2_lie(ode,y,x);
</code>
Note that this result is different from the paper. We check if the integrating factor is correct:
<code>
isIntegratingFactor(mu,ode,y,x);
</code>
It seems that in the paper, the first term in the intermediate expression of eq. (2.90) has a wrong sign: $(y'-\frac{1}{x})$ should be $(y'+\frac{1}{x})$, and the term cancels with the denominator of the second term in (2.90).
# Integrating factors of the form $\mu(y,y')$
An ODE having an integrating factor of the form $\mu(y,y')$ is the following ODE:
<code>
ode: 'diff(y,x,2)-'diff(y,x)^2/y + sin(x)*'diff(y,x)*y + cos(x)*y^2=0;
</code>
It turns out that when a change of variables $y(x)\rightarrow x, x\rightarrow y(x)$ is applied to an ODE with $\mu(y,y')$ as integrating factor, the transformed ODE has an integrating factor of the form $\mu(x,1/y')/y'^2$. These changes of variables are carried out automatically by ode2_lie in the search for symmetries. In the following example, maxima asks a question during the search. We provide the answer beforehand by defining an assumption first:
<code>
assume(y^4>4*'diff(y,x)^2);
</code>
<code>
mu:ode2_lie(ode,y,x);
</code>
The paper also gives a first integral for this ODE, which we can compute as well:
<code>
I:firstIntegral(ode,y,x);
</code>
We can check if the first integral is correct by testing it against the ode. The ode was first made exact by multiplying with the integrating factor:
<code>
isFirstIntegral(I,ratexpand(mu*(lhs(ode)-rhs(ode))),y,x);
</code>
It is interesting to note that integrating factors are directly related to $\lambda$-symmetries, see <a href='#ref:murielromero'>[2]</a>. We can compute the $\lambda$-symmetry from the integrating factor:
<code>
L:lambdaSymmetry(ode,mu,y,x);
</code>
<code>
isLambdaSymmetry(L,ode,y,x);
</code>
# second order ODE admitting an integrating factor
We can compute the general second order ODE that admits a certain integrating factor $\mu$. For instance, the general ODE that admits an integrating factor of the form $\mu=\mu(x,y)$ is given by:
<code>
odeconstruct(m(x,y),y,x);
</code>
Here we recognize the ODE eq. (2.13) given in <a href='#ref:chebterrabroche'>[1]</a>.
This concludes the tutorial for ode2_lie.
# bibliography
[1] <a id='ref:chebterrabroche'></a> E.S. Cheb-terrab and A.D. Roche, Integrating Factors for Second-order ODEs, J. Symbolic Computation 27 (1999) https://arxiv.org/abs/math-ph/0002025
[2] <a id='ref:murielromero'></a> C. Muriel and J.L. Romero, First integrals, integrating factors and lambda-symmetries of second order differential equations, J. Phys. A: Math Theor. 42 (2009)
[3] <a id='ref:kamke'></a> E. Kamke, Differentialgleichungen, Lösungsmethoden und Lösungen, Leipzig, 1959
|
{
"filename": "ode2_lie.ipynb",
"repository": "bigfooted/maxima-odesolve",
"query": "transformed_from_existing",
"size": 27511,
"sha": ""
}
|
# sp24_lab03.ipynb
Repository: bethanyj0/data271
<code>
# Initialize Otter
import otter
grader = otter.Notebook("lab03.ipynb")
</code>
# Lab 3: Regular Expression with Python
Welcome to Lab 3 of DATA 271!
This document contains examples and small tasks ("appetizers") for you to make sure you understand the examples. The culminating task ("main course") at the end of the document is more complex, and uses most of the topics you will have worked through. You should rarely remain stuck for more than a few minutes on questions in labs, so feel free to ask for help. Collaborating on labs is more than okay -- it's encouraged! Explaining things is beneficial -- the best way to solidify your knowledge of a subject is to explain it. Please don't just share answers, though.
For this lab and all future ones, please be sure to not re-assign variables throughout the notebook! For example, if you use `my_list` in your answer to one question, do not reassign it later on. Otherwise, you will fail tests that you passed previously!
### In today's lab, we will
- Learn basic syntax for regular expression in Python and be able to write simple regular expressions using common operations in pattern matching.
- Understand the flexibility regular expression affords in searching and articulate at least one real world example where this is useful
- Become familiar with Python's `re` module and some of its functions such as `findall()`, `search()`, and `sub()`.
- Become more familiar with using online resources such as documentation, "cheat sheets" and Stack Exchange to independently learn more about a technical topic.
## Overview
Regular expression (shortened as regex or regexp) can be used for pattern matching in a text editor. For example, when you use the "find and replace" feature in Microsoft Word, you are asking the computer to find specific strings which match a pattern and replace them with another string. We might desire a more flexible way to search and replace. For example, we might wish to locate and replace a word spelled two different ways in a text: serialise and serialize (British and American spelling). The regular expression `seriali[sz]e` matches both "serialise" and "serialize". Wildcard characters can also achieve this, but are more limited in the patterns they can express.
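As a quick preview (the `re` module used here is covered in detail below), the character class in `seriali[sz]e` can be checked directly on a made-up sentence:
<code>
import re

# toy sentence, made up for illustration
text = "Some authors serialise their data, while others serialize it."
print(re.findall("seriali[sz]e", text))   # ['serialise', 'serialize']
</code>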
Other examples where this flexibility is useful might be searching for and extracting email addresses from a file. We know there will be an at sign (@), but don't know what the constraints are in front of it in terms of word length or characters used. It is possible to read through texts and look for patterns using string methods like `split()` and `find()`. However, searching and extracting is so common that there is a powerful library for these tasks (`re`).
The `re` module provides a set of powerful regular expression facilities, which allows you to quickly check whether a given string matches a given pattern (using the match function), or contains such a pattern (using the search function). A regular expression is a string pattern written in a compact (and quite cryptic) syntax.
The module functions fall into three categories:
- pattern matching
- substitution
- splitting
The regex describes a pattern to locate in the text, and then we can use specific methods to accomplish tasks. You can then ask questions such as “Does this string match the pattern?”, or “Is there a match for the pattern anywhere in this string?” You can also use regular expression to modify a string or to split it apart in various ways.
The documentation provides more details: https://docs.python.org/3/library/re.html.
### Pattern matching
<code>
import re # Imports the re module
#Check if the string starts with "The" and ends with "Spain":
txt = "The rain in Spain"
x = re.search("^The.*Spain$", txt)
if x:
print("YES! We have a match!")
else:
print("No match")
</code>
<code>
#Check if the string starts with "The" and ends with "Spain":
txt = "The Running of the Bulls occurs in Pamplona, Spain"
x = re.search("^The.*Spain$", txt)
if x:
print("YES! We have a match!")
else:
print("No match")
</code>
<code>
#Check if the string starts with "The" and ends with "Spain":
txt = "The Louvre is in Paris, France"
x = re.search("^The.*Spain$", txt)
if x:
print("YES! We have a match!")
else:
print("No match")
</code>
### Print Matches
You can print any matches found with the following.
<code>
txt = "The rain in Spain"
x = re.findall("ai", txt)
print(x)
len(x) # how many matches are found
</code>
<code>
# returns an empty list if no matches are found
txt = "The rain in Spain"
x = re.findall("Portugal", txt)
print(x)
</code>
### Search
The `search()` function searches the string for a match and returns a match object if there is a match. If there is more than one match, only the first occurrence will be returned. If there are no matches, `None` is returned. The match object returned has properties and methods which can provide more information about the search, such as
- `span()` which returns a tuple containing the start and end positions of the match
- `group()` returns the part of the string where there was a match
<code>
txt = "The rain in Spain"
# search for first white space character \s
x = re.search(r"\s", txt)
print("The first white-space character is located in position:", x.start())
</code>
<code>
x = re.search("Portugal", txt)
print(x)
</code>
<code>
x = re.search("ai", txt)
print(x) #this will print an object
</code>
<code>
x = re.search(r"\bS\w+", txt)
print(x)
# span returns the start and end position of the first match occurrence.
print(x.span())
</code>
<code>
# look for upper case S
x = re.search(r"\bS\w+", txt)
# print the part of the string where there was a match
print(x.group())
</code>
### Split
The `split()` function returns a list where the string has been split on each match. Notice this can also be accomplished with the string method `split()`.
<code>
# split on white space
x = re.split(r"\s", txt)
print(x)
</code>
<code>
txt = "The rain in Spain"
x = txt.split()
print(x)
</code>
### Substitution
The `sub()` function replaces the matches found with a string you indicate. You can control how many replacements are done with the optional count parameter.
<code>
# replace spaces with the number nine as a string
x = re.sub(r"\s", "9", txt)
print(x)
</code>
<code>
x = re.sub(r"\s", "9", txt, 2)
print(x)
</code>
### Example application: Bioinformatics
The flexibility of regular expression is particularly useful in bioinformatics. A codon is a DNA or RNA sequence of 3 nucleotides that encodes a particular amino acid or gives a stop signal. For DNA, there are three stop codons: TAG, TAA, and TGA. If we want to match any sequence of DNA terminated by a stop codon, we can use this syntax:
`([ACTG])+(TAG|TAA|TGA)`.
- `[ACTG]` indicates any of the nucleotide bases (A, C, T, G)
- the parentheses group patterns
- `+` modifies the previous group to match one or more times
- `(TAG|TAA|TGA)` indicates followed by one of the stop codons (the | notation signifies or)
Curly brackets allow flexibility in terms of how many repetitions we are searching for. For example, `(AT){10,100}` matches an "AT" repeated 10 to 100 times. `(AT){10,}` matches an "AT" repeated 10 or more times (no upper bound).
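Before looking at a biological example, here is a quick, self-contained check of these two patterns; the DNA strings below are made up for illustration:
<code>
import re

# a toy sequence containing the stop codon TAA
seq = "ATGGCCATTACGTAACCTT"
m = re.search(r"([ACTG])+(TAG|TAA|TGA)", seq)
print(m.group())   # ATGGCCATTACGTAA (a run of nucleotides terminated by a stop codon)

# curly brackets control repetition: "AT" repeated 3 or more times
m = re.search(r"(AT){3,}", "GGATATATATCC")
print(m.group())   # ATATATAT
</code>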
The GATA protein is a transcription factor and is important for regulating transcription (the process where cells make an RNA copy of a piece of DNA which will later be used to make proteins). It binds to any short DNA sequence which matches the pattern GATA with either an A or a T before and either a G or an A after. For example, in this sequence
- `AAAAAAATGATAGAAAAAGATAAAAAA`
there are two matches (find the substring GATA, and then check that before you see an A or a T and after you see a G or an A).
Given a specific string, we can use regular expression to find out how many times this motif occurs.
<code>
seq = 'AAAAAAATGATAGAAAAAGATAAAAAA'
matches = re.findall('[AT]GATA[GA]',seq)
print(matches)
count_motifs = len(matches)
print(count_motifs)
</code>
## Appetizers
Now it's time for you to get some practice.
**Question 1:** We have seen now different syntaxes to flexibly control what we want to match or replace. Google "regular expression in Python cheat sheet" and download one of your choosing. Or use [this one](https://canvas.humboldt.edu/courses/71553/files/5254145?wrap=1). Take a moment to read through it.
Choose four different syntaxes on the cheat sheet and write a small example with a string of your choosing like "The rain in Spain" to test them.
Specifically experiment with
- `[]` vs `()`
- `+` vs `*`
- `{}`
- character classes like `\d`, `\w`, etc.
If you can't think of things to try, try going to Stack Overflow and find a question related to regular expression in Python. Read the answer and test it out with code. (If the first one you find doesn't make sense, look for another.) Did you learn anything about syntax from the example you found? Explain the problem and solution to a peer.
<code>
any_string = ...
some_practice = ...
some_practice
</code>
<code>
any_string = ...
some_practice = ...
some_practice
</code>
<code>
any_string = ...
some_practice = ...
some_practice
</code>
<code>
any_string = ...
some_practice = ...
some_practice
</code>
**Question 2.1:** We will use the text file *emails.txt* for this exercise. If you are working from your local device, this file needs to be in the same directory as your Jupyter notebook. If you are using JupyterHub, this is already done for you.
Here is a snippet of the file:<br />
*From bkirschn@umich.edu Fri Dec 21 09:55:06 2007<br />
Return-Path: <postmaster@collab.sakaiproject.org><br />
Received: from murder (mail.umich.edu [141.211.14.25])<br />
by frankenstein.mail.umich.edu (Cyrus v2.3.8) with LMTPA;<br />
Fri, 21 Dec 2007 09:55:06 -0500<br />
X-Sieve: CMU Sieve 2.3<br />
Received: from murder ([unix socket])<br />
by mail.umich.edu (Cyrus v2.2.12) with LMTPA;<br />
Fri, 21 Dec 2007 09:55:06 -0500<br />
Received: from dreamcatcher.mr.itd.umich.edu (dreamcatcher.mr.itd.umich.edu [141.211.14.43])<br />
by panther.mail.umich.edu () with ESMTP id lBLEt6x8006098;<br />
Fri, 21 Dec 2007 09:55:06 -0500<br />
Received: FROM paploo.uhi.ac.uk (app1.prod.collab.uhi.ac.uk [194.35.219.184])<br />
BY dreamcatcher.mr.itd.umich.edu ID 476BD3C4.BFDC1.28307 ; <br />
21 Dec 2007 09:55:03 -0500<br />
Received: from paploo.uhi.ac.uk (localhost [127.0.0.1])<br />
by paploo.uhi.ac.uk (Postfix) with ESMTP id A4CC6A7DD7;<br />
Fri, 21 Dec 2007 14:51:39 +0000 (GMT)<br />
Message-ID: <200712211454.lBLEs7d9009944@nakamura.uits.iupui.edu><br />
Mime-Version: 1.0<br />
Content-Transfer-Encoding: 7bit*<br />
Create a list `from_lines` containing the lines in the .txt file which contain the word `"From"`. Be sure that the lines that end up in your list do not include any trailing characters (like spaces or newlines).
*HINTS:* You can iterate through `hand` (i.e. `for line in hand` is valid). The `rstrip()` method removes trailing characters (characters at the end of a string), with whitespace as the default to remove.
*NOTE:* Feel free to solve this with list comprehension if you prefer that.
<code>
hand = open('emails.txt')
from_lines = []
for ... in ...:
clean_line = ... # remove trailing characters
if ...: # search for the word "From"
... # add the line to list
hand.close()
from_lines
</code>
<code>
grader.check("q2_1")
</code>
**Question 2.2:** The real power of regular expression comes from adding special characters to the search string to more precisely control which lines match the string. For example, what if there were some lines in the file that contained the word "From" but were not specifically a line indicating who sent the email? How could we adjust our regular expression if we just want the lines that contain "From" and a sender.
*HINTS:* What should the line start with? What symbol is always present in email addresses? How do we handle the fact that the number of characters in email addresses varies? For example, Humboldt's Math Department email is math@humboldt.edu and the Biology Department's email is biosci@humboldt.edu.
<code>
hand = open('emails.txt')
from_lines2 = ...
hand.close()
from_lines2
</code>
<code>
grader.check("q2_2")
</code>
### Extracting Data with Regular Expression
The method `findall()` finds *all* the matches and returns them as a list of strings, with each string representing one match.
If we would like to find all the email addresses, we can search `'\S+@\S+'`. This works because
- `\S` matches a single character other than white space. Adding the `+` means one or more such characters, so `\S+` matches as many non-whitespace characters as possible (greedy).
- `@` matches the at sign present in all email addresses
- `\S+` again matches one or more non-whitespace characters
The terms greedy and lazy in regular expression mean
- greedy (default): match as much text as possible while still allowing the overall pattern to succeed
- lazy (indicated with a `?` at the end of the quantifier): match as little text as possible while still allowing a match
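For example, the difference is easy to see on a toy string (this aside is just for illustration and not part of the graded lab):
<code>
import re

toy = "<a><b><c>"
print(re.findall(r"<.+>", toy))    # greedy: ['<a><b><c>']
print(re.findall(r"<.+?>", toy))   # lazy:   ['<a>', '<b>', '<c>']
</code>
With that in mind, let's extract all the email addresses from the file: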
<code>
hand = open('emails.txt')
emails = [re.findall(r'\S+@\S+', line) for line in hand if re.search(r'\S+@\S+', line)]
hand.close()
emails
</code>
**Question 2.3:** We see some of the email addresses returned have characters we might not want. For example, we might want to remove the `<` and `>` in this address: `<postmaster@collab.sakaiproject.org>`. Write a regex to extract cleaner email addresses.
*HINT*: Email addresses have a single lowercase character, uppercase character, or digit followed by zero or more non-whitespace characters followed by an `@` sign followed by zero or more non-whitespace characters, and they end with a letter.
<code>
hand = open('emails.txt')
emails2 = ...
hand.close()
emails2
</code>
<code>
grader.check("q2_3")
</code>
**Question 2.4:** Based on what you know about regular expressions, can you come up with a different expression that will accomplish the same goal as the previous cell? That is, can you create a list equivalent to `emails2` using a different regex?
<code>
hand = open('emails.txt')
emails3 = ...
hand.close()
emails3
</code>
<code>
grader.check("q2_4")
</code>
## `re` vs string methods
**Question 3.1:** Let the following string be considered: `'X-DSPAM-Confidence: 0.8475'`. Use `find` and string slicing to extract the number and convert it to a float. *HINT:* Everything after the colon is the number.
*NOTE:* You should not use regular expressions in your solution. The next question will ask you to do the same task with a regex.
<code>
phrase = 'X-DSPAM-Confidence: 0.8475'
col_pos = ...
number = ...
number
</code>
<code>
grader.check("q3_1")
</code>
**Question 3.2:** Consider the same string: `'X-DSPAM-Confidence: 0.8475'`. Complete the same task you did in question 3.1, but this time use a regular expression.
<code>
number_with_re = ...
number_with_re = ...
number_with_re
</code>
<code>
grader.check("q3_2")
</code>
**Question 4.1:** Using *emails.txt*, extract the hour of day that email messages were sent and put them into a list (e.g., `'09'` for 9am). Your final answer should be a list of strings. Reminder: the emails looked something like
`From gsilver@umich.edu Wed Dec 19 09:35:37 2007`
*HINT:* One way to do this is by splitting each line twice. What could we split by to isolate the hour?
*NOTE:* You should not use regular expression in your solution. The next question will ask you to do the same task with a regex. As always, feel free to use comprehension if you prefer.
<code>
hand = open('emails.txt')
hour_of_day = ...
for ... in ...:
clean_line = line.rstrip() # remove trailing characters
if not clean_line.startswith('From '):
continue # do nothing if it is not a line we are interested in
x = ... # split once
y = ... # split again
... # add hour to list
hand.close()
hour_of_day
</code>
<code>
grader.check("q4_1")
</code>
**Question 4.2:** Using *emails.txt*, extract the hour of day that email messages were sent and put them into a list like you did in question 4.1. This time, use regular expression.
*HINT:* Look for lines that start with `From` then have a space, potentially some number of characters followed by a space and two digits followed by a colon. Extract the two digits. Your final answer should be a list of strings. Use list comprehension if you want.
<code>
hand = open('emails.txt')
hour_of_day_with_re = ...
...
clean_line = ...
x = ...
...
...
hand.close()
hour_of_day_with_re
</code>
<code>
grader.check("q4_2")
</code>
<!-- BEGIN QUESTION -->
**Question 4.3:** We see that there are numerous tasks that can be accomplished with either string methods or regular expressions. Describe at least one scenario in which regular expression would be our only (or at least a much easier) option. Be sure to include what the data would look like, and what the task would be.
_Type your answer here, replacing this text._
<!-- END QUESTION -->
### 5. Main Course
In this problem you will read through and parse a file with text and numbers. You will extract all the numbers in the file and compute the sum of the numbers.
The file contains text from a data science textbook introduction with random numbers inserted throughout the verbiage.
For example, the text might look like this:
*Why should you learn to write programs? 7746 <br>
12 1929 8827<br>
Writing programs (or programming) is a very creative<br>
7 and rewarding activity. You can write programs for <br>
many reasons, ranging from making your living to solving<br>
8837 a difficult data analysis problem to having fun to helping 128<br>
someone else solve a problem. This book assumes that <br>
everyone needs to know how to program ...*<br>
The data can be found at this link: http://py4e-data.dr-chuck.net/regex_sum_1742785.txt.
**Question 5.1:** Open the `regex_sum_1742785.txt` file. Make a list of lists, where each sublist contains the numbers (str type) that appear in the corresponding line of the file.
*NOTE:* Add an empty list to the list when there are no numbers in a given line, and consider groups of digits. i.e. For the example snippet above, `['7746']` would be the first element of the list.
<code>
file = open('regex_sum_1742785.txt')
numbers_in_line = ...
...
...
file.close()
numbers_in_line
</code>
<code>
grader.check("q5_1")
</code>
**Question 5.2:** Convert the strings from the previous question to integers.
*NOTE:* By the end, you should have a list of ints. This new list should NOT contain elements corresponding to the empty lists from the previous question.
<code>
strings_to_ints = ...
...
...
...
...
strings_to_ints
</code>
<code>
grader.check("q5_2")
</code>
**Question 5.3:** Add up all the integers from problem 5.2.
<code>
sum_all_nums = ...
sum_all_nums
</code>
<code>
grader.check("q5_3")
</code>
**Question 5.4:** Create a list where each element is a string corresponding to a line from the original `regex_sum_1742785.txt` file with all the numbers and trailing characters removed.
<code>
file = ...
cleaned_lines = ...
...
cleaned_lines
</code>
<code>
grader.check("q5_4")
</code>
### 6. Dessert
Huntington's Disease is a neurodegenerative disorder and is linked to the anomalous expansion of the number of trinucleotide repeats in particular genes. Human beings have 23 pairs of chromosomes in our cells and each of our parents contributes one chromosome to each pair. The gene that causes Huntington's Disease (HD) is found on chromosome 4. Each of us gets one copy of the gene from our mother and one copy from our father.
The gene responsible for HD contains a sequence with several CAG repeats (cytosine, adenine, guanine which are bases forming this specific codon). We all have these CAG repeats in the gene that codes for the huntingtin protein, but people with HD have a greater number than usual of CAG repeats in one of the genes they inherited. (This protein is found in many of the body's tissues, with the highest levels of activity in the brain. Within cells, this protein may be involved in chemical signaling, transporting materials, binding to proteins and other structures, etc.)
The actual number of repeats of a specific codon determines the risk of developing HD. More than 35 repeats virtually assures the disease. In this task, we will use regular expression to find the number of repeats of the CAG codon in a specific mRNA sequence.
**Question 6:** Using the *HTTmRNA.txt* file ([source](https://www.ncbi.nlm.nih.gov/nuccore/NM_002111.8?report=fasta)), use regular expression to determine how many times the CAG codon is repeated. *HINT:* You should play around with the `htt_pattern` to figure this out. Then manually enter your answer in `num_repeats`. If you think of another way to solve this, just put your answer in `num_repeats`.
<code>
fhand = open('HTTmRNA.txt')
htt_mRNA = fhand.read()
htt_pattern = ...
match = ...
print(len(match))
num_repeats = ...
fhand.close()
</code>
<code>
grader.check("q6_1")
</code>
### Submission
Congratulations on finishing Lab 3! Gus is very proud of you. Run the cell below to download a zip and upload to Canvas.
<img src="gus_spies_on_neighbors.JPG" alt="drawing" width="300"/>
### References
- Python for Everybody: Exploring Data in Python 3 by Charles Severance. https://www.py4e.com/book.php
- Möncke‐Buchner, Elisabeth, et al. "Counting CAG repeats in the Huntington’s disease gene by restriction endonuclease Eco P15I cleavage." Nucleic Acids Research 30.16 (2002): e83-e83.
- A Primer for Computational Biology by Shawn T. ONeil https://open.oregonstate.education/computationalbiology/chapter/bioinformatics-knick-knacks-and-regular-expressions/
- Using Regular Expression in Genetics with Python by Stephen Fordham. https://towardsdatascience.com/using-regular-expression-in-genetics-with-python-175e2b9395c2
## Submission
Make sure you have run all cells in your notebook in order before running the cell below, so that all images/graphs appear in the output. The cell below will generate a zip file for you to submit. **Please save before exporting!**
<code>
# Save your notebook first, then run this cell to export your submission.
grader.export(pdf=False, run_tests=True)
</code>
|
{
"filename": "sp24_lab03.ipynb",
"repository": "bethanyj0/data271",
"query": "transformed_from_existing",
"size": 50947,
"sha": ""
}
|
# prepare_data_2.ipynb
Repository: TJU-CMC-Org/CorrAdjust
# Preparing input data
To use the CorrAdjust, you will need to prepare the following input data:
- Data table and additional tables with feature/sample annotations.
- One or more GMT files listing which features (e.g., genes) belong to the same reference sets (e.g., pathways).
- Configuration dict.
## Data and annotation tables
Main data table's rows and columns should represent samples and features (e.g., genes), respectively. CorrAdjust operates with Pandas data frames. Data should be normalized in a way that allows between-sample comparisons for each feature. Make sure you don't have constant features (they cannot be used for correlation analysis).
If you have more than ~100 samples, you could split the samples into training and test sets, and the module will provide methods to process them without any training/test leaks with sklearn-style interface.
Below, we use the [GTEx whole blood RNA-seq data](https://storage.googleapis.com/adult-gtex/bulk-gex/v8/rna-seq/counts-by-tissue/gene_reads_2017-06-05_v8_whole_blood.gct.gz) (this tutorial will fully reproduce Case 1 from the CorrAdjust paper). We import read counts data (pre-filtered to exclude low-expressed genes), normalize it with [median-of-ratios method](https://doi.org/10.1186/s13059-014-0550-8), and then log-transform.
<code>
import pandas as pd
import numpy as np
from corradjust.utils import MedianOfRatios
# Raw read counts data
df_counts = pd.read_csv(
"input_data/GTEx_Whole_Blood/raw_counts.tsv",
sep="\t", index_col=0
)
display(df_counts)
# Split samples into 50%/50% training and test sets
df_counts_train = df_counts.iloc[::2]
df_counts_test = df_counts.iloc[1::2]
# Normalize data using DESeq2 median of ratios algorithm
# This interface is train/test-set friendly, i.e., test data
# has no influence on training data
normalizer = MedianOfRatios()
normalizer.fit(df_counts_train)
df_norm_counts_train = normalizer.transform(df_counts_train)
df_norm_counts_test = normalizer.transform(df_counts_test)
# Log2-transform
df_data_train = np.log2(df_norm_counts_train + 1)
df_data_test = np.log2(df_norm_counts_test + 1)
</code>
Feature annotation table is mandatory and should have 3 columns:
1. **feature_id**: should match with columns of data and needs to be unique. For example, ENSEMBL gene IDs.
1. **feature_name**: should match feature names in the reference GMT files, allows duplicates. For example, gene symbols.
1. **feature_type**: discrete set of feature types. E.g., if you are analyzing only mRNA-seq data, put `mRNA` for all genes; if you are integrating miRNA, mRNA, or any other data type, you could use more than one type (e.g., `miRNA` and `mRNA`), see [Advanced example](advanced_run.ipynb).
Rows of feature annotation table should be **identical** to the columns of `df_data`.
<code>
df_feature_ann = pd.read_csv(
"input_data/GTEx_Whole_Blood/gene_annotation.tsv",
sep="\t", index_col=0
)
display(df_feature_ann)
</code>
Finally, if you have two or more distinct sample groups (e.g., normal and tumor samples), you might provide sample annotation table, see [Advanced example](advanced_run.ipynb).
## Reference GMT files
Each reference collection of feature pairs should be represented as a separate GMT file. A GMT file is a tab-separated file with the following structure:
- **Column 1:** reference feature set name.
- **Column 2:** ignored, you can put any string there (e.g., `NA`).
- **Columns 3-...:** feature names (number of features can differ between rows).
You can find a [toy GMT file](https://github.com/TJU-CMC-Org/CorrAdjust/blob/master/corradjust/tests/test_data/ref_feature_sets.gmt) in the CorrAdjust GitHub repository. For the current tutorial, we downloaded Canonical Pathways and Gene Ontology databases from [MSigDB](https://www.gsea-msigdb.org/gsea/msigdb/index.jsp).
The package will process the file line-by-line, and label all possible feature pairs from each line as **reference pairs**.
Thus, $n$ features on a line will generate $n*(n-1)/2$ reference pairs.
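To make the pair counting concrete, here is a minimal sketch (not part of the CorrAdjust API) that parses the first line of the Canonical Pathways GMT file referenced in the config below and enumerates its reference pairs:
<code>
from itertools import combinations

with open("input_data/GMT_files/c2.cp.v2023.2.Hs.symbols.gmt") as fh:
    fields = fh.readline().rstrip("\n").split("\t")
    set_name, features = fields[0], fields[2:]   # column 2 is ignored
    pairs = list(combinations(features, 2))      # n*(n-1)/2 reference pairs
    print(set_name, len(features), "features,", len(pairs), "reference pairs")
</code>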
One reference set might be represented by several lines. This can be useful, e.g., for miRNA-target gene pairs:
|Column 1|Column 2|Column 3|Column 4|
|-------------|--|-----|-------------|
|...|...|...|...|
|miR-X-targets|NA|miR-X|target-gene-1|
|miR-X-targets|NA|miR-X|target-gene-2|
|miR-X-targets|NA|miR-X|target-gene-3|
|...|...|...|...|
If we instead put everything in one line like this,
|Column 1|Column 2|Column 3|Column 4|Column 5|Column 6|
|-------------|--|-----|-------------|-------------|-------------|
|...|...|...|...|...|...|
|miR-X-targets|NA|miR-X|target-gene-1|target-gene-2|target-gene-3|
|...|...|...|...|...|...|
then pairs composed of two target genes will also be labeled as `miR-X-targets`,
which will be incorrect for the analysis of miRNA-mRNA targeting interactions. See [Advanced example](advanced_run.ipynb) for a miRNA-mRNA run.
## Reference feature collections configuration
The configuration dict specifies how reference collections should be handled by CorrAdjust. Below is an example of such a config for two mRNA-mRNA pathway databases. One config can contain an arbitrary number of different collections (GMT files). See more detailed documentation in [API reference](../modules/corradjust.corradjust.rst#corradjust.corradjust.CorrAdjust).
<code>
ref_feature_colls = {
"Canonical Pathways": {
# Relative or absolute path to the GMT file.
"path": "input_data/GMT_files/c2.cp.v2023.2.Hs.symbols.gmt",
# Rank feature pairs by absolute correlations.
"sign": "absolute",
# Allowed feature pair types.
# Should match annotation's "feature_type" column.
"feature_pair_types": ["mRNA-mRNA"],
# Fraction of all feature pairs to define highly ranked correlations.
        # This is parameter alpha in the CorrAdjust paper notation.
# 0.01 is a good default value for mRNA-mRNA analysis.
"high_corr_frac": 0.01
},
"Gene Ontology": {
"path": "input_data/GMT_files/c5.go.v2023.2.Hs.symbols.gmt",
"sign": "absolute",
"feature_pair_types": ["mRNA-mRNA"],
"high_corr_frac": 0.01
}
}
</code>
|
{
"filename": "prepare_data_2.ipynb",
"repository": "TJU-CMC-Org/CorrAdjust",
"query": "transformed_from_existing",
"size": 36729,
"sha": ""
}
|
# demo1_1.ipynb
Repository: ZJUFanLab/bulk2space
## Demonstration of Bulk2Space on demo1 dataset
### Import Bulk2Space
<code>
from bulk2space import Bulk2Space
model = Bulk2Space()
</code>
### Decompose bulk-seq data into scRNA-seq data
Train β-VAE model to generate scRNA-seq data
<code>
generate_sc_meta, generate_sc_data = model.train_vae_and_generate(
input_bulk_path='tutorial/data/example_data/demo1/demo1_bulk.csv',
input_sc_data_path='tutorial/data/example_data/demo1/demo1_sc_data.csv',
input_sc_meta_path='tutorial/data/example_data/demo1/demo1_sc_meta.csv',
input_st_data_path='tutorial/data/example_data/demo1/demo1_st_data.csv',
input_st_meta_path='tutorial/data/example_data/demo1/demo1_st_meta.csv',
ratio_num=1,
top_marker_num=500,
gpu=0,
batch_size=512,
learning_rate=1e-4,
hidden_size=256,
epoch_num=20,
vae_save_dir='tutorial/data/example_data/demo1/predata/save_model',
vae_save_name='demo1_vae',
generate_save_dir='tutorial/data/example_data/demo1/predata/output',
generate_save_name='demo1')
</code>
<code>
generate_sc_meta
</code>
<code>
generate_sc_data
</code>
Load trained β-VAE model to generate scRNA-seq data
<code>
generate_sc_meta, generate_sc_data = model.load_vae_and_generate(
input_bulk_path='tutorial/data/example_data/demo1/demo1_bulk.csv',
input_sc_data_path='tutorial/data/example_data/demo1/demo1_sc_data.csv',
input_sc_meta_path='tutorial/data/example_data/demo1/demo1_sc_meta.csv',
input_st_data_path='tutorial/data/example_data/demo1/demo1_st_data.csv',
input_st_meta_path='tutorial/data/example_data/demo1/demo1_st_meta.csv',
vae_load_dir='tutorial/data/example_data/demo1/predata/save_model/demo1_vae.pth',
generate_save_dir='tutorial/data/example_data/demo1/predata/output',
generate_save_name='demo1_new',
ratio_num=1,
top_marker_num=500)
</code>
### Decompose spatial barcoding-based spatial transcriptomics data into spatially resolved single-cell transcriptomics data
Train deep-forest model to generate spatially resolved single-cell transcriptomics data
<code>
df_meta, df_data = model.train_df_and_spatial_deconvolution(
generate_sc_meta,
generate_sc_data,
input_st_data_path='tutorial/data/example_data/demo1/demo1_st_data.csv',
input_st_meta_path='tutorial/data/example_data/demo1/demo1_st_meta.csv',
spot_num=500,
cell_num=10,
df_save_dir='tutorial/data/example_data/demo1/predata/save_model/',
df_save_name='deom1_df',
map_save_dir='tutorial/data/example_data/demo1/result',
map_save_name='demo1',
top_marker_num=500,
marker_used=True,
k=10)
</code>
<code>
df_meta
</code>
<code>
df_data
</code>
Load trained deep-forest model to generate spatially resolved single-cell transcriptomics data
<code>
df_meta, df_data = model.load_df_and_spatial_deconvolution(
generate_sc_meta,
generate_sc_data,
input_st_data_path='tutorial/data/example_data/demo1/demo1_st_data.csv',
input_st_meta_path='tutorial/data/example_data/demo1/demo1_st_meta.csv',
spot_num=500,
cell_num=10,
df_load_dir='tutorial/data/example_data/demo1/predata/save_model/deom1_df',
map_save_dir='tutorial/data/example_data/demo1/result', # file_dir
map_save_name='demo1_new', # file_name
top_marker_num=500,
marker_used=True,
k=10)
</code>
|
{
"filename": "demo1_1.ipynb",
"repository": "ZJUFanLab/bulk2space",
"query": "transformed_from_existing",
"size": 56277,
"sha": ""
}
|
# ERP009703_QC_analysis_v4_1.ipynb
Repository: EBI-Metagenomics/examples
# Download QC ERP009703 pipeline v4
List all runs
https://www.ebi.ac.uk/metagenomics/api/v0.2/pipelines/4.0/analysis?experiment_type=metagenomic&study_accession=ERP009703
<code>
import collections
try:
from urllib import urlencode
except ImportError:
from urllib.parse import urlencode
from pandas import DataFrame
import matplotlib.pyplot as plt
import numpy as np
</code>
<code>
from jsonapi_client import Session, Filter
API_BASE = 'https://www.ebi.ac.uk/metagenomics/api/v0.2/'
</code>
<code>
def find_metadata(metadata, key):
"""
Extract metadata value for given key
"""
for m in metadata:
if m.var_name.lower() == key.lower():
return m.var_value
return None
qc_keys = ['Predicted CDS', 'Predicted CDS with InterProScan match']
pipeline = '4.0'
# map GO terms to the temperature
result = {}
header = set()
qc_meta = dict()
with Session(API_BASE) as s:
# list of runs missing metadata
print('Loading data from API.', end='', flush=True)
# preparing url
params = {
'experiment_type': 'metagenomic',
'study_accession': 'ERP009703',
}
f = Filter(urlencode(params))
# list runs
for anls in s.iterate(('pipelines/%s/analysis' % pipeline), f):
print('.', end='', flush=True)
try:
result[anls.accession]
except KeyError:
result[anls.accession] = dict()
_qc_meta = anls.metadata
for k in qc_keys:
            _pcds = find_metadata(_qc_meta, k)
            if _pcds is not None:
                try:
                    qc_meta[anls.accession]
                except KeyError:
                    qc_meta[anls.accession] = dict()
                qc_meta[anls.accession][k] = int(_pcds)
rt = "runs/%s/pipelines/%s/go-slim" % (anls.accession, anls.pipeline_version)
af = Filter(urlencode({'page_size': 100}))
for ann in s.iterate(rt, af):
h = "%s %s" % (ann.accession, ann.description)
try:
result[anls.accession][h]
except KeyError:
result[anls.accession][h] = int(ann.count)
header.add(h)
print("DONE")
</code>
<code>
import csv
with open("ERP009703_v4.csv", "w") as csvfile:
fieldnames = ['run',] + qc_keys + sorted(list(header))
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
for k,v in result.items():
row = {**qc_meta[k], **v}
row['run'] = k
writer.writerow(row)
</code>
<code>
import pandas as pd
df = pd.read_csv('ERP009703_v4.csv', index_col=0).fillna("")
df
</code>
|
{
"filename": "ERP009703_QC_analysis_v4_1.ipynb",
"repository": "EBI-Metagenomics/examples",
"query": "transformed_from_existing",
"size": 149046,
"sha": ""
}
|
# nlp-5.ipynb
Repository: juniantowicaksono06/belajar-nlp
# Tokenization
<code>
import spacy
nlp = spacy.load('en_core_web_sm')
</code>
<code>
mystring = '"We\'re moving to L.A.!"'
mystring
</code>
<code>
print(mystring)
</code>
<code>
doc = nlp(mystring)
</code>
<code>
for token in doc:
print(token.text)
</code>
<code>
doc2 = nlp(u"We're to help! Send snail-mail, email support@oursite.com or visit us at http://oursite.com! and we also have ftp server at ftp://ourftpserver.com!")
</code>
<code>
for t in doc2:
print(t.text)
</code>
<code>
doc3 = nlp(u"A Skm NYC can ride costs $10.30")
</code>
<code>
for token in doc3:
print(token.text)
</code>
<code>
doc4 = nlp(u"Let's visit St. Louis in the U.S. next year.")
</code>
<code>
for t in doc4:
print(t)
</code>
<code>
doc4_1 = nlp(u"Mr. Clinton is a biology teacher")
</code>
<code>
for t in doc4_1:
print(t)
</code>
<code>
len(doc4)
</code>
<code>
len(doc4.vocab)
</code>
<code>
doc5 = nlp(u"It is better to give than receive.")
</code>
<code>
doc5[0]
</code>
<code>
doc5[2:5]
</code>
<code>
doc8 = nlp(u"Apple to build a Hong Kong factory for $6 million")
</code>
<code>
for token in doc8:
print(token.text, end=" | ")
</code>
<code>
for entity in doc8.ents:
print(entity)
</code>
<code>
doc8_1 = nlp(u"He that apple last night.")
</code>
<code>
for entity in doc8_1.ents:
print(entity)
</code>
<code>
doc8_2 = nlp(u"Rockstar Games is expected to release Grand Theft Auto VI on January 26th 2025")
</code>
<code>
for entity in doc8_2.ents:
print(entity)
print(str(spacy.explain(entity.label_)))
print(entity.label_)
</code>
<code>
doc9 = nlp(u'Autonomous cars shift insurance liability toward manufacturers.')
</code>
<code>
for chunk in doc9.noun_chunks:
print(chunk)
</code>
<code>
doc9_1 = nlp(u"We're going to Moscow next week")
</code>
<code>
for token in doc9_1.ents:
print(token.text)
print(token.label_)
print(str(spacy.explain(token.label_)))
</code>
|
{
"filename": "nlp-5.ipynb",
"repository": "juniantowicaksono06/belajar-nlp",
"query": "transformed_from_existing",
"size": 11416,
"sha": ""
}
|
# genai_rag.ipynb
Repository: tPrashant1729/prashant
<code>
import streamlit as st
import os
from groq import Groq
import random
import requests
from bs4 import BeautifulSoup
from langchain.chains import ConversationChain, LLMChain
from langchain_core.prompts import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
MessagesPlaceholder,
)
from langchain_community.document_loaders import AsyncChromiumLoader
from langchain_community.document_transformers import BeautifulSoupTransformer
from langchain_core.messages import SystemMessage
from langchain.chains.conversation.memory import ConversationBufferWindowMemory
from langchain_groq import ChatGroq
from langchain.prompts import PromptTemplate
from dotenv import load_dotenv, find_dotenv
from langchain_community.document_loaders import WikipediaLoader, WebBaseLoader, TextLoader
_ = load_dotenv(find_dotenv()) # read local .env file
</code>
<code>
import requests
from bs4 import BeautifulSoup
def fetch_content(url):
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
text_content = ''
for paragraph in soup.find_all('p'): # Extract text content from <p> tags
text_content += paragraph.get_text() + ' '
return text_content
</code>
<code>
from langchain.docstore.document import Document
urls = ["https://indianexpress.com/article/india/om-birla-election-lok-sabha-speaker-modi-opposition-9415980/" , "https://www.crummy.com/software/BeautifulSoup/bs4/doc/"]
list_text = []
docs = []
for url in urls:
doc = Document(page_content=fetch_content(url),
metadata={
"source": url
}
)
docs.append(doc)
</code>
<code>
print(docs[1].page_content)
</code>
<code>
from langchain.text_splitter import RecursiveCharacterTextSplitter, CharacterTextSplitter
</code>
<code>
chunk_size =1000
chunk_overlap = 150
</code>
<code>
r_splitter = RecursiveCharacterTextSplitter(
chunk_size=chunk_size,
chunk_overlap=chunk_overlap,
separators=["\n\n", "\n", "(?<=\. )", " ", ""]
)
c_splitter = CharacterTextSplitter(
chunk_size=chunk_size,
chunk_overlap=chunk_overlap,
separator = '\n'
)
</code>
<code>
c_pages = c_splitter.split_text(docs[0].page_content)
print(len(c_pages))
</code>
<code>
r_pages = r_splitter.split_text(docs[0].page_content)
print(len(r_pages))
</code>
<code>
import re
def clean_text(text):
# Replace multiple newlines with a single newline
text = re.sub(r'\n+', '\n', text)
# Remove leading and trailing whitespaces
text = text.strip()
return text
</code>
<code>
len(docs[0].page_content)
docs[0].page_content = clean_text(docs[0].page_content)
</code>
<code>
cleaned_r_pages = r_splitter.split_documents(docs)
cleaned_c_pages = c_splitter.split_documents(docs)
print(len(cleaned_r_pages))
print(len(cleaned_c_pages))
# Print the cleaned pages
for i, page in enumerate(cleaned_r_pages):
print(f"Page {i+1}:\n{page}\n{'-'*40}")
</code>
<code>
cleaned_r_pages
</code>
<code>
from langchain.embeddings import OllamaEmbeddings
from langchain.vectorstores import Chroma, FAISS
</code>
<code>
splits = cleaned_r_pages
embedding = OllamaEmbeddings(model="mxbai-embed-large")
vectordb = FAISS.from_documents(
documents=splits,
embedding=embedding
)
</code>
<code>
print(vectordb.index.ntotal)
</code>
<code>
query = "who is om birla"
retriever = vectordb.as_retriever()
docs = retriever.invoke(query)
</code>
<code>
docs
# Print the cleaned pages
for i, page in enumerate(docs):
print(f"Page {i+1}:\n{page.metadata}\n{'-'*40}")
</code>
<code>
vectordb.save_local("faiss_index",)
new_db = FAISS.load_local("faiss_index", embedding, allow_dangerous_deserialization = True)
docs = new_db.similarity_search(query)
</code>
<code>
for doc in docs:
print(doc.metadata)
</code>
<code>
question = "what is beautifulsoup?"
docs_ss = new_db.similarity_search(question,k=3)
len(docs_ss)
</code>
<code>
for doc in docs_ss:
print(doc.metadata)
</code>
<code>
docs_mmr = vectordb.max_marginal_relevance_search(question,k=3)
len(docs_mmr)
for doc in docs_mmr:
print(doc.metadata)
</code>
<code>
class Sot_list:
    def sort(self, list1):
        # simple bubble sort: repeatedly swap adjacent elements that are out of order
        for _ in range(len(list1) - 1):
            for i in range(len(list1) - 1):
                if list1[i] > list1[i + 1]:
                    list1[i], list1[i + 1] = list1[i + 1], list1[i]
        return list1

l = [2, 7, 4, 3, 9, 1]
sorter = Sot_list()
sl = sorter.sort(l)
print(sl)
</code>
|
{
"filename": "genai_rag.ipynb",
"repository": "tPrashant1729/prashant",
"query": "transformed_from_existing",
"size": 353950,
"sha": ""
}
|
# CDD P3_2.ipynb
Repository: agusscarmu/Aromatase-Drug-Discovery
# PART 3
---
The molecular descriptors will be calculated, and finally the dataset will be prepared.
<code>
import pandas as pd
</code>
<code>
!ls
</code>
<code>
df3 = pd.read_csv('bioactivity_data_pIC50.csv')
</code>
<code>
df3
</code>
<code>
selection = ['canonical_smiles', 'molecule_chembl_id']
df3_selection = df3[selection]
df3_selection.to_csv('molecule.smi', sep='\t', index=False, header=False)
</code>
<code>
! cat molecule.smi | head -5
</code>
<code>
! cat molecule.smi | wc -l
</code>
<code>
! wget https://github.com/dataprofessor/bioinformatics/raw/master/padel.sh
</code>
<code>
!ls
</code>
<code>
! wget https://github.com/dataprofessor/bioinformatics/raw/master/padel.zip
</code>
<code>
!unzip padel.zip
</code>
<code>
!bash padel.sh
</code>
<code>
df3_X = pd.read_csv('descriptors_output.csv')
</code>
<code>
df3_X
</code>
<code>
df3_X = df3_X.drop(columns='Name')
</code>
<code>
df3_X
</code>
<code>
df3_Y = df3['pIC50']
df3_Y
</code>
<code>
dataset3 = pd.concat([df3_X, df3_Y], axis=1)
dataset3
</code>
<code>
dataset3.to_csv('bioactivity_data_pIC50_after_PaDEL_descriptors.csv', index=False)
</code>
<code>
!ls
</code>
|
{
"filename": "CDD P3_2.ipynb",
"repository": "agusscarmu/Aromatase-Drug-Discovery",
"query": "transformed_from_existing",
"size": 210232,
"sha": ""
}
|
# project_drug_1.ipynb
Repository: satish2705/major
<code>
import pandas as pd
import numpy as np
import random
# Generate synthetic dataset
num_samples = 1000
# Patient Information
patient_ids = [f"P{str(i).zfill(5)}" for i in range(1, num_samples + 1)]
ages = np.random.randint(18, 90, num_samples)
genders = np.random.choice(["Male", "Female"], num_samples)
medical_history = np.random.choice(["Diabetes", "Hypertension", "Cancer", "None"], num_samples)
drug_names = np.random.choice(["DrugA", "DrugB", "DrugC", "DrugD"], num_samples)
dosages = np.random.randint(50, 500, num_samples)
treatment_durations = np.random.randint(5, 60, num_samples)
effectiveness = np.random.uniform(0, 100, num_samples)
side_effects = np.random.choice(["None", "Nausea", "Dizziness", "Fatigue"], num_samples)
disease_types = np.random.choice(["Lung Cancer", "Breast Cancer", "Diabetes", "Heart Disease"], num_samples)
genetic_markers = np.random.choice(["MarkerA", "MarkerB", "MarkerC", "MarkerD"], num_samples)
# Treatment Outcome
response_to_treatment = np.random.choice(["Positive", "Negative"], num_samples)
success_rates = np.random.uniform(50, 100, num_samples)
# Create DataFrame
dataset = pd.DataFrame({
"Patient_ID": patient_ids,
"Age": ages,
"Gender": genders,
"Medical_History": medical_history,
"Drug_Name": drug_names,
"Dosage_mg": dosages,
"Treatment_Duration_days": treatment_durations,
"Effectiveness_%": effectiveness,
"Side_Effects": side_effects,
"Disease_Type": disease_types,
"Genetic_Marker": genetic_markers,
"Response_to_Treatment": response_to_treatment,
"Success_Rate_%": success_rates
})
# Save to CSV
dataset.to_csv("synthetic_medical_dataset.csv", index=False)
print("Dataset generated successfully!")
</code>
<code>
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Flatten, Dense, Dropout, BatchNormalization
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, StandardScaler
# Load dataset
dataset = pd.read_csv("synthetic_medical_dataset.csv")
# Encode categorical variables
label_encoders = {}
categorical_columns = ["Gender", "Medical_History", "Drug_Name", "Side_Effects", "Disease_Type", "Genetic_Marker", "Response_to_Treatment"]
for col in categorical_columns:
le = LabelEncoder()
dataset[col] = le.fit_transform(dataset[col])
label_encoders[col] = le
# Selecting features and target variables
X = dataset.drop(columns=["Patient_ID", "Response_to_Treatment"]).values
y = dataset["Response_to_Treatment"].values
# Normalize features
scaler = StandardScaler()
X = scaler.fit_transform(X)
# Reshape for CNN input (assuming 1D features per patient)
X = np.expand_dims(X, axis=2)
# Split dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Build CNN model
model = Sequential([
Conv1D(filters=64, kernel_size=3, activation='relu', input_shape=(X.shape[1], 1)),
BatchNormalization(),
MaxPooling1D(pool_size=2),
Dropout(0.2),
Conv1D(filters=128, kernel_size=3, activation='relu'),
BatchNormalization(),
MaxPooling1D(pool_size=2),
Dropout(0.3),
Flatten(),
Dense(128, activation='relu'),
Dropout(0.4),
Dense(1, activation='sigmoid') # Binary classification
])
# Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# Train the model
model.fit(X_train, y_train, epochs=30, batch_size=32, validation_data=(X_test, y_test))
# Evaluate the model
test_loss, test_acc = model.evaluate(X_test, y_test)
print(f"Test Accuracy: {test_acc:.2f}")
# Save the model
model.save("cnn_drug_discovery_model.h5")
print("Model saved successfully!")
</code>
<code>
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Flatten, Dense, Dropout, BatchNormalization
import numpy as np
from sklearn.preprocessing import StandardScaler, LabelEncoder
# Load the trained model
model_path = r"cnn_drug_discovery_model.h5"
model = load_model(model_path)
print("Model loaded successfully!")
# Define label encoders for categorical values
label_encoders = {
"Gender": LabelEncoder().fit(["Male", "Female"]),
"Medical_History": LabelEncoder().fit(["Diabetes", "Hypertension", "Cancer", "None"]),
"Drug_Name": LabelEncoder().fit(["DrugA", "DrugB", "DrugC", "DrugD"]),
"Side_Effects": LabelEncoder().fit(["None", "Nausea", "Dizziness", "Fatigue"]),
"Disease_Type": LabelEncoder().fit(["Lung Cancer", "Breast Cancer", "Diabetes", "Heart Disease"]),
"Genetic_Marker": LabelEncoder().fit(["MarkerA", "MarkerB", "MarkerC", "MarkerD"])
}
# Define a scaler (use values from training phase if available)
scaler = StandardScaler()
# Example input values
input_data = {
"Age": 45,
"Gender": "Male",
"Medical_History": "Diabetes",
"Drug_Name": "DrugA",
"Dosage_mg": 200,
"Treatment_Duration_days": 30,
"Effectiveness_%": 85.4,
"Side_Effects": "Nausea",
"Disease_Type": "Lung Cancer",
"Genetic_Marker": "MarkerB"
}
# Encode categorical values
for key in label_encoders:
if key in input_data:
input_data[key] = label_encoders[key].transform([input_data[key]])[0]
# Convert input data to array
input_array = np.array(list(input_data.values())).reshape(1, -1)
# Normalize input features
# NOTE: fitting a StandardScaler on a single sample effectively zeroes every feature;
# in practice, load the scaler fitted during training and call transform() instead
# (the final cell of this notebook does exactly that with the saved scalers.pkl)
input_array = scaler.fit_transform(input_array)
# Reshape for CNN input
input_array = np.expand_dims(input_array, axis=2)
# Make prediction
prediction = model.predict(input_array)
predicted_class = (prediction > 0.5).astype(int)
print(f"Predicted Response: {predicted_class[0][0]}")
</code>
<code>
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Flatten, Dense, Dropout, BatchNormalization
import numpy as np
from sklearn.preprocessing import StandardScaler, LabelEncoder
# Load the trained model
model_path = r"cnn_drug_discovery_model.h5"
model = load_model(model_path)
print("Model loaded successfully!")
# Define label encoders for categorical values
label_encoders = {
"Gender": LabelEncoder().fit(["Male", "Female"]),
"Medical_History": LabelEncoder().fit(["Diabetes", "Hypertension", "Cancer", "None"]),
"Drug_Name": LabelEncoder().fit(["DrugA", "DrugB", "DrugC", "DrugD"]),
"Side_Effects": LabelEncoder().fit(["None", "Nausea", "Dizziness", "Fatigue"]),
"Disease_Type": LabelEncoder().fit(["Lung Cancer", "Breast Cancer", "Diabetes", "Heart Disease"]),
"Genetic_Marker": LabelEncoder().fit(["MarkerA", "MarkerB", "MarkerC", "MarkerD"])
}
# Define a scaler (use values from training phase if available)
scaler = StandardScaler()
# Example input values for drug effectiveness prediction
input_data = {
"Age": 45,
"Gender": "Male",
"Medical_History": "Diabetes",
"Drug_Name": "DrugB",
"Dosage_mg": 100,
"Treatment_Duration_days": 40,
"Effectiveness_%": 15.4,
"Side_Effects": "Nausea",
"Disease_Type": "Lung Cancer",
"Genetic_Marker": "MarkerB"
}
# Encode categorical values
for key in label_encoders:
if key in input_data:
input_data[key] = label_encoders[key].transform([input_data[key]])[0]
# Convert input data to array
input_array = np.array(list(input_data.values())).reshape(1, -1)
# Normalize input features
# NOTE: fitting a StandardScaler on a single sample effectively zeroes every feature;
# in practice, load the scaler fitted during training and call transform() instead
# (the final cell of this notebook does exactly that with the saved scalers.pkl)
input_array = scaler.fit_transform(input_array)
# Reshape for CNN input
input_array = np.expand_dims(input_array, axis=2)
# Make prediction for drug effectiveness
prediction = model.predict(input_array)
effectiveness_score = prediction[0][0] * 100 # Convert to percentage
print(f"Predicted Drug Effectiveness: {effectiveness_score:.2f}%")
</code>
<code>
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Flatten, Dense, Dropout, BatchNormalization
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, StandardScaler
import joblib # Import joblib for saving the scaler
# Load dataset
dataset = pd.read_csv("synthetic_medical_dataset.csv")
# Encode categorical variables
label_encoders = {}
categorical_columns = ["Gender", "Medical_History", "Drug_Name", "Side_Effects", "Disease_Type", "Genetic_Marker", "Response_to_Treatment"]
for col in categorical_columns:
le = LabelEncoder()
dataset[col] = le.fit_transform(dataset[col])
label_encoders[col] = le
# Selecting features and target variables
# Selecting features (Ensure "Response_to_Treatment" is excluded)
X = dataset.drop(columns=["Patient_ID", "Response_to_Treatment"]).values
y = dataset["Response_to_Treatment"].values
# Save feature names (to ensure consistency during prediction)
feature_names = list(dataset.drop(columns=["Patient_ID", "Response_to_Treatment"]).columns)
joblib.dump(feature_names, "feature_names.pkl") # Save feature names
# Normalize features
scaler = StandardScaler()
X = scaler.fit_transform(X)
# Save the trained scaler
joblib.dump(scaler, "scalers.pkl")
print("Scaler and feature names saved successfully!")
# Reshape for CNN input (assuming 1D features per patient)
X = np.expand_dims(X, axis=2)
# Split dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Build CNN model
model = Sequential([
Conv1D(filters=64, kernel_size=3, activation='relu', input_shape=(X.shape[1], 1)),
BatchNormalization(),
MaxPooling1D(pool_size=2),
Dropout(0.2),
Conv1D(filters=128, kernel_size=3, activation='relu'),
BatchNormalization(),
MaxPooling1D(pool_size=2),
Dropout(0.3),
Flatten(),
Dense(128, activation='relu'),
Dropout(0.4),
Dense(1, activation='sigmoid') # Binary classification
])
# Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# Train the model
model.fit(X_train, y_train, epochs=30, batch_size=32, validation_data=(X_test, y_test))
# Evaluate the model
test_loss, test_acc = model.evaluate(X_test, y_test)
print(f"Test Accuracy: {test_acc:.2f}")
# Save the model
model.save("cnn_drug_discovery_model.h5")
print("Model saved successfully!")
</code>
<code>
import joblib
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import load_model
from sklearn.preprocessing import LabelEncoder
# Load model, scaler, and feature names
model = load_model("cnn_drug_discovery_model.h5")
scaler = joblib.load("scalers.pkl")
expected_features = joblib.load("feature_names.pkl") # Load expected feature names
print("Model, scaler, and feature names loaded successfully!")
# Print expected features for debugging
print("Expected Features from Training:", expected_features)
# Define label encoders
label_encoders = {
"Gender": LabelEncoder().fit(["Male", "Female"]),
"Medical_History": LabelEncoder().fit(["Diabetes", "Hypertension", "Cancer", "None"]),
"Drug_Name": LabelEncoder().fit(["DrugA", "DrugB", "DrugC", "DrugD"]),
"Side_Effects": LabelEncoder().fit(["None", "Nausea", "Dizziness", "Fatigue"]),
"Disease_Type": LabelEncoder().fit(["Lung Cancer", "Breast Cancer", "Diabetes", "Heart Disease"]),
"Genetic_Marker": LabelEncoder().fit(["MarkerA", "MarkerB", "MarkerC", "MarkerD"])
}
# Example input values (Ensure all expected features are included)
input_data = {
"Age": 45,
"Gender": "Male",
"Medical_History": "Diabetes",
"Drug_Name": "DrugA",
"Dosage_mg": 900,
"Treatment_Duration_days": 10,
"Effectiveness_%": 95.4,
"Side_Effects": "Nausea",
"Disease_Type": "Lung Cancer",
"Genetic_Marker": "MarkerB"
}
# Encode categorical values
for key in label_encoders:
if key in input_data:
input_data[key] = label_encoders[key].transform([input_data[key]])[0]
# Ensure input_data has all expected features
for feature in expected_features:
if feature not in input_data:
print(f"Warning: Missing feature '{feature}' in input data. Assigning default value 0.")
input_data[feature] = 0 # Default value (adjust if needed)
# Convert input data to NumPy array in correct order
input_array = np.array([input_data[feature] for feature in expected_features]).reshape(1, -1)
# Verify feature count consistency
if input_array.shape[1] != scaler.n_features_in_:
raise ValueError(f"Feature mismatch: Expected {scaler.n_features_in_}, but got {input_array.shape[1]}.")
# Normalize input using the pre-trained scaler
input_array = scaler.transform(input_array)
# Reshape for CNN input
input_array = np.expand_dims(input_array, axis=2)
# Make prediction
prediction = model.predict(input_array)
effectiveness_score = prediction[0][0] * 100 # Convert to percentage
print(f"Predicted Drug Effectiveness: {effectiveness_score:.2f}%")
</code>
|
{
"filename": "project_drug_1.ipynb",
"repository": "satish2705/major",
"query": "transformed_from_existing",
"size": 46092,
"sha": ""
}
|
# Tumor Tissue Normal Matched TCGA_1.ipynb
Repository: satsumas/okAPI
# Programmatically Access TCGA Data using the Seven Bridges Cancer Genomics Cloud via the Datasets API
TCGA is one of the world’s largest cancer genomics data collections, including more than eleven thousand patients, representing 33 cancers, and over half a million total files. Seven Bridges has created a unified metadata ontology from the diverse cancer studies, made this data available, and provided compute infrastructure to facilitate customized analyses on the Cancer Genomics Cloud (the CGC). The CGC provides powerful methods to query and reproducibly analyze TCGA data - alone or in conjunction with your own data.
We continue to develop new methods of interacting with data on the CGC, however, we also appreciate that sometimes it is useful to be able to analyze data locally, or in an AWS environment that you have configured yourself. While the CGC has undergone thorough testing and is certified as a FISMA-moderate system, if you wish to analyze data in alternative locations, you must take the appropriate steps to ensure your computing environment is secure and compliant with current best practices. If you plan to download large numbers of files for local analysis, we recommend using the download utilities available from the Genomic Data Commons which have been specifically optimized for this purpose.
Below, we provide a tutorial showing how to find and access TCGA data using the Datasets API. Alternatively, you can try to query TCGA data using a SPARQL query.
## Goal of this Tutorial
During this tutorial, you will learn how to use the Datasets API to get the gene expression files for tumor-normal tissue matched Breast Cancer datasets. In order to do this, we need to first identify the primary tumor and normal tissue samples from BRCA for which RNA-seq experiments have been performed. We then identify the tumor-tissue normal matched RNA-seq datasets by identifying which cases or patients had both experiments performed on them. After identifying these patients, we can then get the gene expression files for these tumor-normal matched datasets.
## Prerequisites
Before you begin this tutorial, you should:
1. **Set up your CGC account.** If you haven't already done so, navigate to https://cgc.sbgenomics.com/ and follow these directions to register for the CGC. This tutorial uses Open Data, which is available to all CGC users. The same approach can be used by approved researchers to access Controlled Data. Learn more about TCGA data access here.
2. **Install the Seven Bridges API Python library.** This tutorial uses the library sevenbridges-python. Learn how to install it before continuing.
3. **Obtain your authentication token.** You'll use your authentication token to encode your user credentials when interacting with the CGC programmatically. Learn how to access your authentication token. It is important to store your authentication token in a safe place as it can be used to access your account. The time and location your token was last used is shown on the developer dashboard. If for any reason you believe your token has been compromised, you can regenerate it at any time.
## Query using the Datasets API
The Datasets API is an API designed around the TCGA data structure and focused on search functionality. You can use the Datasets API to browse TCGA using API requests written in JSON. Queries made using the Datasets API return entities and are particularly suitable for browsing TCGA data.
We'll write a Python script to issue our query into TCGA using the Datasets API. Since the Datasets API is not included in our Python library, sevenbridges-python, we will use two Python modules, json and requests, to interact with it instead. We'll use these modules to write a wrapper around the API request.
<code>
import json
from requests import request
</code>
Below, we define a simple function to send and receive JSONs from the API using the correctly formatted HTTP calls. The necessary imports are handled above.
<code>
def api_call(path, method='GET', query=None, data=None, token=None):
base_url = 'https://cgc-datasets-api.sbgenomics.com/datasets/tcga/v0/'
data = json.dumps(data) if isinstance(data, dict) \
or isinstance(data,list) else None
headers = {
'X-SBG-Auth-Token': token,
'Accept': 'application/json',
'Content-type': 'application/json',
}
response = request(method, base_url + path, params=query, \
data=data, headers=headers)
response_dict = response.json() if \
response.json() else {}
    if response.status_code // 100 != 2:
print(response_dict['message'])
print('Error Code: %i.' % (response_dict['code']))
print(response_dict['more_info'])
raise Exception('Server responded with status code %s.' \
% response.status_code)
return response_dict
</code>
Then, provide your authentication token, as shown below. Examples of how to handle your auth_token properly are available in the sevenbridges-python bindings documentation.
<code>
auth_token = 'Enter your Authentication token here'
</code>
Now, we can define a query in JSON for finding all primary tumor samples that are Breast Invasive Carcinoma and those that have RNA-seq experiments performed.
<code>
tumor_samples_query = {
"entity": "samples",
"hasSampleType": "Primary Tumor",
"hasCase": {
"hasDiseaseType" : "Breast Invasive Carcinoma",
"hasGender" : "FEMALE",
"hasVitalStatus" : "Alive"
},
"hasFile": {
"hasExperimentalStrategy": "RNA-Seq",
"hasDataType" : "Gene expression"
}
}
</code>
<code>
total = api_call(method='POST', path ='query/total', \
token=auth_token, data=tumor_samples_query)
print("There are {} samples matching the query".format(total['total']))
</code>
Below, we define a simple function to get all matches to the query in the API using the correctly formatted HTTP calls.
<code>
import math
def getAllMatches(auth_token, query_body):
numberFiles = api_call(method="POST", path="query/total", \
token=auth_token, data=query_body)["total"]
numCalls = int(math.ceil(numberFiles/100.0))
matches = []
entity = query_body["entity"]
for i in range(0, numCalls):
query_body["offset"] = str(i * 100)
currSet = api_call(method="POST", path="query" \
, token=auth_token, data=query_body)["_embedded"][entity]
for currMatch in currSet:
matches.append(currMatch)
return matches
</code>
Using these functions, we can now get all the samples that match the required queries.
<code>
tumor_samples = getAllMatches(auth_token, tumor_samples_query)
tumor_sample_ids = [curr_sample["id"] for curr_sample in tumor_samples]
</code>
Now, we can define a query in JSON for getting all normal tissue samples that are Breast Invasive Carcinoma and those that have RNA-seq experiments performed.
<code>
tissue_normal_samples_query = {
"entity": "samples",
"hasSampleType": "Solid Tissue Normal",
"hasCase": {
"hasDiseaseType" : "Breast Invasive Carcinoma",
"hasGender" : "FEMALE",
"hasVitalStatus" : "Alive"
},
"hasFile": {
"hasExperimentalStrategy": "RNA-Seq",
"hasDataType" : "Gene expression"
}
}
</code>
<code>
tissue_normal_samples = getAllMatches(auth_token, tissue_normal_samples_query)
tissue_normal_sample_ids = [curr_sample["id"] for curr_sample in tissue_normal_samples]
</code>
<code>
total = api_call(method='POST', path ='query/total', \
token=auth_token, data=tissue_normal_samples_query)
print("There are {} samples matching the query".format(total['total']))
</code>
Now, we are ready to identify the corresponding cases (patients) that have both tumor/normal matched RNA-seq experiments
<code>
tumor_cases_query = {
"entity": "cases",
"hasSample": tumor_sample_ids
}
tumor_cases = getAllMatches(auth_token, tumor_cases_query)
tumor_case_ids = [curr_case["id"] for curr_case in tumor_cases]
</code>
<code>
tissue_normal_cases_query = {
"entity": "cases",
"hasSample": tissue_normal_sample_ids
}
tissue_normal_cases = getAllMatches(auth_token, tissue_normal_cases_query)
tissue_normal_case_ids = [curr_case["id"] for curr_case in tissue_normal_cases]
</code>
<code>
tumor_match_case_ids = list(set(tumor_case_ids) & set(tissue_normal_case_ids))
print("There are {} cases that have both primary tumor and solid tissue normal samples with RNA-seq experiments".format(len(tumor_match_case_ids)))
</code>
Now that we know the case IDs, we can use them to get the appropriate files
<code>
tumor_match_files_query = {
"entity": "files",
"hasExperimentalStrategy": "RNA-Seq",
"hasDataType" : "Gene expression",
"hasSample": {
"hasSampleType" : "Primary Tumor"
},
"hasCase": tumor_match_case_ids
}
tumor_match_files = getAllMatches(auth_token, tumor_match_files_query)
</code>
<code>
tissue_normal_match_files_query = {
"entity": "files",
"hasExperimentalStrategy": "RNA-Seq",
"hasDataType" : "Gene expression",
"hasSample": {
"hasSampleType" : "Solid Tissue Normal"
},
"hasCase": tumor_match_case_ids
}
tissue_normal_match_files = getAllMatches(auth_token, tissue_normal_match_files_query)
</code>
<code>
print("There are {} files corresponding to Gene Expression for Tumor samples in tumor-normal matched cases for BRCA".format(len(tumor_match_files)))
print("There are {} files corresponding to Gene Expression for Solid tissue normal samples in tumor-normal matched cases for BRCA".format(len(tissue_normal_match_files)))
</code>
## Initialize the sevenbridges-python library
We've now installed sevenbridges-python and stored our credentials in a config file. Let's import the api class from the official sevenbridges-python bindings.
<code>
import sevenbridges as sbg
</code>
Let's initialize the api object so the API knows our credentials.
<code>
# [USER INPUT] specify platform {cgc, sbg}
prof = 'cgc'
config_file = sbg.Config(profile=prof)
api = sbg.Api(config=config_file)
</code>
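If you prefer not to rely on a configuration profile, the client can also be initialized directly from an endpoint URL and a token. The cell below is a sketch rather than part of the original recipe; the CGC endpoint shown is an assumption based on the public Seven Bridges documentation, and `auth_token` is the variable defined earlier in this notebook.
<code>
# Alternative initialization without a config file
# (endpoint assumed from the public Seven Bridges docs; auth_token was defined earlier)
api_direct = sbg.Api(url='https://cgc-api.sbgenomics.com/v2', token=auth_token)
</code>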
Create a new project
<code>
# [USER INPUT] Set project name here:
new_project_name = 'Matched Tumor-Control Samples'
# What are my funding sources?
billing_groups = api.billing_groups.query()
# Pick the first group (arbitrary)
print((billing_groups[0].name + \
' will be charged for computation and storage (if applicable) for your new project'))
# Set up the information for your new project
new_project = {
'billing_group': billing_groups[0].id,
'description': """A project created by the API recipe (projects_makeNew.ipynb).
This also supports **markdown**
_Pretty cool_, right?
""",
'name': new_project_name
}
# check if this project already exists. LIST all projects and check for name match
my_project = [p for p in api.projects.query(limit=100).all() \
if p.name == new_project_name]
if my_project: # exploit fact that empty list is False, {list, tuple, etc} is True
print('A project with the name (%s) already exists, please choose a unique name' \
% new_project_name)
raise KeyboardInterrupt
else:
# CREATE the new project
my_project = api.projects.create(name = new_project['name'], \
billing_group = new_project['billing_group'], \
description = new_project['description'])
# (re)list all projects, and get your new project
my_project = [p for p in api.projects.query(limit=100).all() \
if p.name == new_project_name][0]
</code>
Copy the files based on file IDs that are transferrable across the Datasets and platform APIs
<code>
def copyToProject(api, my_project, finalFiles):
    my_files = api.files.query(limit = 100, project = my_project.id).all()
    # pop out the file names
    my_file_names = [f.name for f in my_files]
    newFiles = []
    for currFile in finalFiles:
        if currFile["label"] in my_file_names:
            print('file already exists in the target project, skipping it')
        else:
            fileObject = api.files.get(id = currFile['id'])
            # print(fileObject.name, fileObject.id)
            my_new_file = fileObject.copy(project = my_project.id, name = fileObject.name)
            newFiles.append(my_new_file)
    print("Files Imported!")
copyToProject(api, my_project, tumor_match_files)
copyToProject(api, my_project, tissue_normal_match_files)
</code>
|
{
"filename": "Tumor Tissue Normal Matched TCGA_1.ipynb",
"repository": "satsumas/okAPI",
"query": "transformed_from_existing",
"size": 20433,
"sha": ""
}
|
# genomica_13.Modulo_13_filogenetica.ipynb
Repository: cabana-online/Vigilancia
# Module 13: Phylogenetics
## Overview
Phylogenetics is the study of the evolutionary relationships between biological entities, often species, individuals or genes (which may be referred to as taxa). The main elements of phylogenetics are summarized in the figure below.

*Taken from: https://www.ebi.ac.uk/training/online/courses/introduction-to-phylogenetics/what-is-phylogenetics/*
Phylogenetic trees based on whole-genome data tell us about the relationships between bacterial isolates at a very fine scale. When we combine that high-resolution information about the evolutionary relationships of isolates with geographic data, we can better understand the current distribution of the pathogen and infer the epidemiological processes that have acted on the bacterium over time. The simplest example would be a phylogeny showing that a pathogen is geographically restricted (for example, isolates from the same region always cluster together). This could indicate that the pathogen is not spreading rapidly. In contrast, for a pathogen whose phylogeny shows that isolates from distant regions are as likely to be related as isolates from nearby regions, the interpretation is that the pathogen probably spreads across regional borders. Geo-referencing of genomic data can also be combined with temporal information to study the movement of pathogens in space and time. This is most useful when done in real time, and can therefore help with outbreak detection and monitoring.
This module is divided into three parts, covering:
1. Building phylogenetic trees
2. Identifying recombination regions
3. Clustering with popPUNK.
### Install condacolab
<code>
!pip install -q condacolab
import condacolab
condacolab.install()
</code>
<code>
!conda config --add channels defaults
!conda config --add channels bioconda
!conda config --add channels conda-forge
</code>
### Install programs
<code>
# Install FastTree
!conda install bioconda::fasttree
</code>
<code>
# Install snp-sites
!conda install snp-sites
</code>
<code>
# Install Gubbins
!conda install gubbins
</code>
<code>
# Install PopPUNK
!conda install poppunk
</code>
### Download data
<code>
!wget https://zenodo.org/records/14231070/files/Module_13.tar.gz
</code>
### Extract the .tar.gz file
<code>
!tar xvf Module_13.tar.gz
</code>
## Part 1: Building a phylogenetic tree using FastTree
### Step 1: Generate a SNP-only alignment using snp-sites
Creating a phylogeny from whole-genome sequences can be a very slow and computationally intensive process. We can speed it up by using only the variable sites (SNPs). However, we must be aware that including only variable sites can affect the evolutionary rate estimates made by phylogenetic software, so we must keep in mind which sites we removed from our analysis.
For this we will use snp-sites. You can view the snp-sites options with the command:
<code>
%cd Module_13
</code>
<code>
# Run snp-sites
!snp-sites -h
</code>
First, remove all the invariant sites and create a SNP-only multiple sequence alignment. We will use the results of the runs described on the previous page. Run the command:
<code>
!snp-sites -o clean.full.SNPs.aln clean.full.aln
</code>
The explanation of this command is as follows:
`snp-sites`: the tool/program
`-o clean.full.SNPs.aln`: specifies the output file
`clean.full.aln`: specifies the input file, which is an output of snippy
We can see how many invariant sites were removed (and what proportion of A, T, G, C they had) using:
<code>
!snp-sites -C clean.full.aln
</code>
The command prints a summary of the constant (invariant) sites that were removed.
### Step 2: Building a phylogenetic tree from the SNPs using FastTree
You can view the options for FastTree as follows:
<code>
!FastTree -h
</code>
We will generate a maximum likelihood phylogenetic tree using this command:
<code>
!FastTree -nt -gtr clean.full.SNPs.aln > clean.full.SNPs.aln.tree
</code>
The explanation of this command is as follows:
`FastTree`: the tool/program
`-nt`: specifies that the input alignment is nucleotide
`-gtr`: specifies the evolutionary model (generalized time-reversible)
`clean.full.SNPs.aln`: the input alignment
`clean.full.SNPs.aln.tree`: specifies the name of the output tree
Let's explore the output of our previous command using the command:
<code>
!ls -lh clean.full.*
</code>
Our maximum likelihood tree is labelled clean.full.SNPs.aln.tree. We can visualize it with FigTree or iTOL.
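For a quick look without leaving the notebook, Biopython's Bio.Phylo module can print a simple ASCII rendering of the Newick tree. This is a small sketch that is not part of the original tutorial; it assumes Biopython is available in the Colab environment (it can be installed with `!pip install biopython`).
<code>
# Quick ASCII preview of the FastTree output (assumes Biopython is installed)
from Bio import Phylo

tree = Phylo.read("clean.full.SNPs.aln.tree", "newick")
Phylo.draw_ascii(tree)
</code>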
### Step 3: Visualizing a tree
Download the clean.full.SNPs.aln.tree file to your computer from the "Files" icon on the left-hand side of Colab and go to https://itol.embl.de/
In the top left, select the "Upload" option as shown below:

Next, click the "Browse..." button, select your .tree file and click "Upload":

Finally, you will obtain the following tree:

Or using the ggtree package in R:
```
library(ggtree)
library(phytools)   # provides midpoint.root()
mltree <- midpoint.root(read.tree("clean.full.SNPs.aln.tree"))
ggtree(mltree) + # plot basic tree
geom_tiplab(size=3) + # add tip labels
geom_treescale() + # add scale bar
xlim(0, 0.0005) # set limits so the plot fits nicely on the screen
```
### Step 4: Interpreting a phylogenetic tree
We should remember that the tree we have created comes from a SNP-only alignment. Keep in mind that the longer the branch of a strain, the more mutations it carries. The strain from sample ERR2667737 and the reference strain are similar to each other, and carry some mutations that are also found in the strain from sample ERR2667694. The strains from samples ERR2667707 and ERR2667708 share mutations with each other and carry mutations that differ from the rest of the strains.
___
## Part 2: Identifying recombination regions using Gubbins
Many bacteria engage in high rates of homologous recombination. This means that they donate and receive segments of DNA from one another. In the context of a phylogenetic tree, where we compare similar and dissimilar regions to determine the relatedness of isolates, this can be problematic and lead to branch lengths that reflect recombination rather than divergence. This is particularly important for *Streptococcus pneumoniae*, which is naturally competent, meaning that it can readily take up DNA.
[Gubbins](https://github.com/nickjcroucher/gubbins/blob/master/docs/gubbins_manual.md) (Genealogies Unbiased By recomBinations In Nucleotide Sequences) is an algorithm that iteratively identifies loci containing elevated densities of base substitutions while constructing a phylogeny based on the putative point mutations outside these regions. Simulations show that the algorithm generates highly accurate reconstructions under realistic models of short-term sequence diversification by point mutation and recombination, and it can be run on alignments of many hundreds of bacterial genome sequences. It is therefore not suited to studying recombination across species-wide diversity, something that can be done gene by gene with software such as fastGEAR. Instead, it works on samples of limited diversity that share a recent common ancestor: a strain or lineage.
The input file required for Gubbins is a whole-genome FASTA alignment. Each sequence must have a unique identifier, and special characters should be avoided. The sequences should only use the characters ACGT (DNA bases), N (unknown base) or - (gap in the alignment). If a starting tree is to be included, it must be in Newick format. The alignment is most easily generated by mapping sequences against a reference sequence. This can be done with the well-known mapping program Snippy.
You can view the Gubbins commands as follows:
<code>
!run_gubbins.py -h
</code>
We will run the Gubbins tool on a whole-genome alignment rather than a SNP-only alignment. Now let's run this command:
<code>
!run_gubbins.py --mar -p output clean.full.aln
</code>
The explanation of this command is as follows:
`run_gubbins.py`: the tool/program
`--mar`: use marginal, rather than joint, ancestral reconstruction when inferring the evolutionary history of the genomes
`-p output`: specifies the prefix for the output files of the Gubbins analysis; the results will be written to your current working directory
`clean.full.aln`: the input file
This command can take a few minutes to run.
Let's look at what Gubbins has produced using the command (`ls -l output.*`):
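Run it as a notebook cell (this assumes the Gubbins run above completed and used the prefix `output`):
<code>
# List the Gubbins output files
!ls -l output.*
</code>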
An explanation of the output files:
`output.branch_base_reconstruction.embl`: base substitution reconstruction in EMBL format
`output.recombination_predictions.embl`: recombination predictions in EMBL format
`output.recombination_predictions.gff`: recombination predictions in GFF format
`output.filtered_polymorphic_sites.phylip`: Phylip-format alignment of the filtered polymorphic sites used to generate the phylogeny in the final iteration
`output.final_tree.tre`: this file contains the final phylogeny in Newick format; branch lengths are in point mutations
`output.node_labelled.final_tree.tre`: final phylogenetic tree in Newick format but with internal node labels; branch lengths are in point mutations
`output.log`: log file specifying the software used at each step of the analysis, with the relevant citations
`output.per_branch_statistics.csv`: per-branch report of the base substitutions inside and outside recombination events
`output.summary_of_snp_distribution.vcf`: VCF file summarizing the distribution of point mutations
You can explore these output files using the `head` command. For example, "output.recombination_predictions.gff" is a GFF file that contains a record of each recombination block identified, how many SNPs it contains, and which samples are affected.
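As an example, the cell below shows the first lines of that file (again assuming the run above used the prefix `output`):
<code>
# Peek at the first few predicted recombination blocks
!head -n 10 output.recombination_predictions.gff
</code>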
**output.final_tree.tre** is a phylogeny with the recombination regions removed. Read in the Gubbins-filtered tree and plot it with ggtree, or visualize it in FigTree or iTOL as explained previously.
You will get the output:

Or using the ggtree package in R:
```
gubbins.tree <- midpoint.root(read.tree("output.final_tree.tre"))
ggtree(gubbins.tree) + # plot basic tree
geom_tiplab(size=3) + # add tip labels
geom_treescale() + # add a scale bar
xlim(0, 300) # change the sizing of the plot so it fits nicely on the screen
```
We can also visualize the recombination blocks using a web tool called Phandango. In your browser, navigate to: https://jameshadfield.github.io/phandango/#/
You will need the following files (drag and drop them):
1. output.final_tree.tre
2. output.recombination_predictions.gff
3. reference.gff (output from Prokka of Reference_sequence_GPSC1.fa)
Phandango should automatically display the recombination blocks in red (ancestral) and blue (specific to a sample).

___
## Part 3: Clustering with popPUNK
[PopPUNK](https://poppunk.readthedocs.io/en/latest/index.html) is a tool for clustering genomes. We refer to the clusters as variable-length k-mer clusters, or VLKCs. Biologically, these clusters usually represent distinct strains. Subgroups of strains are called lineages.
The following figure shows an overview of how to run popPUNK

*Taken from: https://poppunk.readthedocs.io/en/latest/overview.html*
### Database
Since we are working with *Streptococcus pneumoniae*, we will use the [GPS reference database](https://gps-project.cog.sanger.ac.uk/GPS_v9.tar.gz) and the [GPS designation](https://gps-project.cog.sanger.ac.uk/GPS_v9_external_clusters.csv) to cluster our genome. You can also access reference genomes of other bacterial species from this database. If a species is not included in this database, it is recommended that you build your own database.
The *Streptococcus pneumoniae* GPS reference genome database is saved in your directory as (GPS_v9) and the GPS designation as (GPS_v9_external_clusters.csv)
<code>
# Unpack the database
!tar xvf GPS_v9.tar.gz
</code>
### Text file with the details of your samples
You need a file that lists the names of your samples and the paths to their sequence data. In this case it is the file: poppunk_input.tsv
You can inspect it with the following command:
<code>
!cat poppunk_input.tsv
</code>
This text file contains the sample names and their sequence data. It has no header, is tab-separated, and contains the sample name in the first column. Subsequent columns may contain paths to either assembled or raw read data (the type will be inferred automatically by checking for the presence of quality scores).
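To illustrate the expected layout, the following sketch writes a small file of this form; the sample names and assembly paths are hypothetical placeholders, not files shipped with this module.
<code>
# Write a toy popPUNK input list: one sample per line, tab-separated, no header
# (the sample names and paths below are made-up placeholders)
rows = [
    ("sample1", "assemblies/sample1_contigs.fasta"),
    ("sample2", "assemblies/sample2_contigs.fasta"),
]
with open("poppunk_input_example.tsv", "w") as fh:
    for name, path in rows:
        fh.write(f"{name}\t{path}\n")
</code>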
### Cluster the genomes
The command to cluster your genomes is as follows:
<code>
!poppunk_assign --db GPS_v9 --query poppunk_input.tsv --output poppunk_clusters --external-clustering GPS_v9_external_clusters.csv
</code>
The explanation of this command is as follows:
`poppunk_assign`: the tool/program/script
`--db GPS_v9`: specifies the database
`--query poppunk_input.tsv`: the list of input samples to assign
`--output poppunk_clusters`: specifies the output directory
`--external-clustering GPS_v9_external_clusters.csv`: CSV file containing the GPS cluster designations for the reference genomes
On completion, a new folder "poppunk_clusters" will be generated. Navigate to this folder and explore its contents.
The output files:
**poppunk_clusters_clusters.csv**: popPUNK clusters with dataset-specific nomenclature
**poppunk_clusters_external_clusters.csv**: GPSC v9 scheme designations
We can explore "poppunk_clusters_clusters.csv"
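A convenient way to do this from inside the notebook is to load the CSV with pandas (a minimal sketch, assuming the poppunk_assign run above finished and wrote its results into the poppunk_clusters folder):
<code>
# Inspect the popPUNK cluster assignments
import pandas as pd

clusters = pd.read_csv("poppunk_clusters/poppunk_clusters_clusters.csv")
clusters.head()
</code>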
>Note: If a strain has already been assigned a cluster, please rename it before running popPUNK (this is to avoid crashing the tool). New clusters are assigned NA in the _external_clusters.csv file, as they have not been defined in the v6 dataset used to designate the GPSCs. Please email globalpneumoseq@gmail.com to have new clusters added to the database and assigned a GPSC cluster name, after checking that there is no low-level contamination that could contribute to biased accessory distances.
This popPUNK module was developed from the following web pages:
https://poppunk.readthedocs.io/en/latest/query_assignment.html
https://www.pneumogen.net/gps/training_command_line.html
|
{
"filename": "genomica_13.Modulo_13_filogenetica.ipynb",
"repository": "cabana-online/Vigilancia",
"query": "transformed_from_existing",
"size": 24600,
"sha": ""
}
|
# II_Run_CLASTER.ipynb
Repository: RasmussenLab/CLASTER
# 2. CREATE & RUN CLASTER
CLASTER is, at its core, a deep convolutional neural network aimed to translate a given chromatin landscape and its matching 3D structure to the corresponding nascent RNA landscape.
The network consists of:
- **Feature extractors**: two separate feature extractors with a number of convolutional layers implementing dilated convolutions and some residual connections.
- The first one extracts patterns from the chromatin landscape, described by combinations of ATAC-seq, H3K4me3, H3K27ac and H3K27me3 enrichments.
- The second one extracts features from the Micro-C maps (e.g. point-like contacts, compartments or domains).
- **Fusion module**: The extracted high-level, abstract features are then combined in a number of dense layers.
- **Output module**: A final set of dense layers maps the feature vectors to the targets in a regression task per node, which will represent the EU-seq signal at a distance from the TSS of the gene defining the sample (located in the middle).
CLASTER was built using the EIR framework, which makes it easy to replicate and adapt to new tasks, so feel free to do so! Documentation on EIR can be found at https://eir.readthedocs.io/en/latest/. Have a look at the tutorials to get a feel for the config files required and all possibilities that EIR offers. The framework uses hydra (https://hydra.cc/docs/intro/) to manage a set of configuration files, which allow you to streamline the process.
First, install eir in your environment using pip
> *Note: If you created claster_env, it would be nice to proceed in that environment.*
``` bash
pip install eir-dl
```
<code>
from pathlib import Path
import pandas as pd
</code>
### Create config files:
The config files used to build CLASTER can be created as follows. This code creates all config files at once so that we can compare them and replicate the results.
We will mainly need:
- Global config: setting global parameters like the device, learning rate, batch size, whether we want to compute feature importance scores using attributions...
- Input configs: as many as feature extractors/ data modalities we have or want to integrate. Describes the feature extractor architecture
- Fusion config: describing how we want to merge the features extracted from the different branches.
- Output config: describing the type of output we expect, our target file, the loss we want to optimize against and the names of the nodes.
For a more detailed description, please go through EIR's documentation.
<code>
config_paths = [Path("../configurations/conf_pure_conv/"),
Path("../configurations/conf_pure_conv_predict/"),
Path("../configurations/conf_pure_conv_predict_perturbations/"),
Path("../configurations/conf_microc_pure_conv/"),
Path("../configurations/conf_microc_rotated_pure_conv/"),
Path("../configurations/conf_microc_pure_conv_predict/"),
Path("../configurations/conf_microc_rotated_pure_conv_predict/"),
Path("../configurations/conf_only_chrom_attention/"),
Path("../configurations/conf_only_chrom_attention_predict/"),
Path("../configurations/conf_microc_pure_conv_latents/"),
Path("../configurations/conf_pure_conv_predict_perturbations_H3K27ac/")]
for config_path in config_paths:
config_path.mkdir(parents=True, exist_ok=True)
training_yaml_contents = {"globals.yaml":"""
output_folder: ./runs/gene_expression_only_chrom_pure_conv/ #gene_expression_prediction_no_H3K27ac_uncoupled/ #gene_expression_cnn_1kbp_401_bins_2_cond_reloaded/
manual_valid_ids_file: ./annotations/manual_validation_ids_chr17.txt #manual_validation_ids_chr17_uncoupled.txt
checkpoint_interval: 30300 # 100 epochs #60000
sample_interval: 30300 #60000
n_epochs: 120
batch_size: 64
optimizer: "adamw"
lr: 0.0001 #0.0001
device: "cuda"
compute_attributions: true
# latent_sampling:
# layers_to_sample:
# - "input_modules.contact_maps.feature_extractor.conv.0.conv_1"
# - "input_modules.contact_maps.feature_extractor.conv.1.conv_2"
# - "input_modules.contact_maps.feature_extractor.conv.2.conv_2"
# - "input_modules.contact_maps.feature_extractor.conv.3.conv_2"
# attribution_background_samples: 512
# attributions_every_sample_factor: 1
#pretrained_checkpoint: best_models/gene_expression_exformer_unlimited_chrom_and_micro_with_attention_model_117600_perf-average=0.8435.pt
""",
"input_cnn.yaml": """
input_info:
input_source: ./inputs/landscape_arrays/training/
input_name: gene_expression
input_type: array
model_config:
model_type: cnn
#pre_normalization: "instancenorm"
model_init_config:
num_output_features: 512 # before fc_repr_dim
layers: [4,4]
kernel_height: 1
down_stride_width: 2
first_stride_expansion_width: 1
first_kernel_expansion_height: 4 #5
kernel_width: 10 #10
dilation_factor_width: 2
dilation_factor_height: 1
channel_exp_base: 5 #3 #-1
first_channel_expansion: 2
rb_do: .3
stochastic_depth_p: .1
attention_inclusion_cutoff: 1 #50 #256
""",
"fusion.yaml": """
model_config:
fc_do: 0.4
fc_task_dim: 256
layers:
- 2
rb_do: 0.4
stochastic_depth_p: 0.5
model_type: "mlp-residual"
""",
"outputs_2_cond.yaml":"""
output_info:
output_name: expression_output
output_source: ./targets/training_targets.csv
output_type: tabular
model_config: # <- new
model_type: linear # <- new
output_type_info:
con_loss_name: "SmoothL1Loss"
target_con_columns:
- "-200_ctrl"
- "-199_ctrl"
- "-198_ctrl"
- "-197_ctrl"
- "-196_ctrl"
- "-195_ctrl"
- "-194_ctrl"
- "-193_ctrl"
- "-192_ctrl"
- "-191_ctrl"
- "-190_ctrl"
- "-189_ctrl"
- "-188_ctrl"
- "-187_ctrl"
- "-186_ctrl"
- "-185_ctrl"
- "-184_ctrl"
- "-183_ctrl"
- "-182_ctrl"
- "-181_ctrl"
- "-180_ctrl"
- "-179_ctrl"
- "-178_ctrl"
- "-177_ctrl"
- "-176_ctrl"
- "-175_ctrl"
- "-174_ctrl"
- "-173_ctrl"
- "-172_ctrl"
- "-171_ctrl"
- "-170_ctrl"
- "-169_ctrl"
- "-168_ctrl"
- "-167_ctrl"
- "-166_ctrl"
- "-165_ctrl"
- "-164_ctrl"
- "-163_ctrl"
- "-162_ctrl"
- "-161_ctrl"
- "-160_ctrl"
- "-159_ctrl"
- "-158_ctrl"
- "-157_ctrl"
- "-156_ctrl"
- "-155_ctrl"
- "-154_ctrl"
- "-153_ctrl"
- "-152_ctrl"
- "-151_ctrl"
- "-150_ctrl"
- "-149_ctrl"
- "-148_ctrl"
- "-147_ctrl"
- "-146_ctrl"
- "-145_ctrl"
- "-144_ctrl"
- "-143_ctrl"
- "-142_ctrl"
- "-141_ctrl"
- "-140_ctrl"
- "-139_ctrl"
- "-138_ctrl"
- "-137_ctrl"
- "-136_ctrl"
- "-135_ctrl"
- "-134_ctrl"
- "-133_ctrl"
- "-132_ctrl"
- "-131_ctrl"
- "-130_ctrl"
- "-129_ctrl"
- "-128_ctrl"
- "-127_ctrl"
- "-126_ctrl"
- "-125_ctrl"
- "-124_ctrl"
- "-123_ctrl"
- "-122_ctrl"
- "-121_ctrl"
- "-120_ctrl"
- "-119_ctrl"
- "-118_ctrl"
- "-117_ctrl"
- "-116_ctrl"
- "-115_ctrl"
- "-114_ctrl"
- "-113_ctrl"
- "-112_ctrl"
- "-111_ctrl"
- "-110_ctrl"
- "-109_ctrl"
- "-108_ctrl"
- "-107_ctrl"
- "-106_ctrl"
- "-105_ctrl"
- "-104_ctrl"
- "-103_ctrl"
- "-102_ctrl"
- "-101_ctrl"
- "-100_ctrl"
- "-99_ctrl"
- "-98_ctrl"
- "-97_ctrl"
- "-96_ctrl"
- "-95_ctrl"
- "-94_ctrl"
- "-93_ctrl"
- "-92_ctrl"
- "-91_ctrl"
- "-90_ctrl"
- "-89_ctrl"
- "-88_ctrl"
- "-87_ctrl"
- "-86_ctrl"
- "-85_ctrl"
- "-84_ctrl"
- "-83_ctrl"
- "-82_ctrl"
- "-81_ctrl"
- "-80_ctrl"
- "-79_ctrl"
- "-78_ctrl"
- "-77_ctrl"
- "-76_ctrl"
- "-75_ctrl"
- "-74_ctrl"
- "-73_ctrl"
- "-72_ctrl"
- "-71_ctrl"
- "-70_ctrl"
- "-69_ctrl"
- "-68_ctrl"
- "-67_ctrl"
- "-66_ctrl"
- "-65_ctrl"
- "-64_ctrl"
- "-63_ctrl"
- "-62_ctrl"
- "-61_ctrl"
- "-60_ctrl"
- "-59_ctrl"
- "-58_ctrl"
- "-57_ctrl"
- "-56_ctrl"
- "-55_ctrl"
- "-54_ctrl"
- "-53_ctrl"
- "-52_ctrl"
- "-51_ctrl"
- "-50_ctrl"
- "-49_ctrl"
- "-48_ctrl"
- "-47_ctrl"
- "-46_ctrl"
- "-45_ctrl"
- "-44_ctrl"
- "-43_ctrl"
- "-42_ctrl"
- "-41_ctrl"
- "-40_ctrl"
- "-39_ctrl"
- "-38_ctrl"
- "-37_ctrl"
- "-36_ctrl"
- "-35_ctrl"
- "-34_ctrl"
- "-33_ctrl"
- "-32_ctrl"
- "-31_ctrl"
- "-30_ctrl"
- "-29_ctrl"
- "-28_ctrl"
- "-27_ctrl"
- "-26_ctrl"
- "-25_ctrl"
- "-24_ctrl"
- "-23_ctrl"
- "-22_ctrl"
- "-21_ctrl"
- "-20_ctrl"
- "-19_ctrl"
- "-18_ctrl"
- "-17_ctrl"
- "-16_ctrl"
- "-15_ctrl"
- "-14_ctrl"
- "-13_ctrl"
- "-12_ctrl"
- "-11_ctrl"
- "-10_ctrl"
- "-9_ctrl"
- "-8_ctrl"
- "-7_ctrl"
- "-6_ctrl"
- "-5_ctrl"
- "-4_ctrl"
- "-3_ctrl"
- "-2_ctrl"
- "-1_ctrl"
- "0_ctrl"
- "1_ctrl"
- "2_ctrl"
- "3_ctrl"
- "4_ctrl"
- "5_ctrl"
- "6_ctrl"
- "7_ctrl"
- "8_ctrl"
- "9_ctrl"
- "10_ctrl"
- "11_ctrl"
- "12_ctrl"
- "13_ctrl"
- "14_ctrl"
- "15_ctrl"
- "16_ctrl"
- "17_ctrl"
- "18_ctrl"
- "19_ctrl"
- "20_ctrl"
- "21_ctrl"
- "22_ctrl"
- "23_ctrl"
- "24_ctrl"
- "25_ctrl"
- "26_ctrl"
- "27_ctrl"
- "28_ctrl"
- "29_ctrl"
- "30_ctrl"
- "31_ctrl"
- "32_ctrl"
- "33_ctrl"
- "34_ctrl"
- "35_ctrl"
- "36_ctrl"
- "37_ctrl"
- "38_ctrl"
- "39_ctrl"
- "40_ctrl"
- "41_ctrl"
- "42_ctrl"
- "43_ctrl"
- "44_ctrl"
- "45_ctrl"
- "46_ctrl"
- "47_ctrl"
- "48_ctrl"
- "49_ctrl"
- "50_ctrl"
- "51_ctrl"
- "52_ctrl"
- "53_ctrl"
- "54_ctrl"
- "55_ctrl"
- "56_ctrl"
- "57_ctrl"
- "58_ctrl"
- "59_ctrl"
- "60_ctrl"
- "61_ctrl"
- "62_ctrl"
- "63_ctrl"
- "64_ctrl"
- "65_ctrl"
- "66_ctrl"
- "67_ctrl"
- "68_ctrl"
- "69_ctrl"
- "70_ctrl"
- "71_ctrl"
- "72_ctrl"
- "73_ctrl"
- "74_ctrl"
- "75_ctrl"
- "76_ctrl"
- "77_ctrl"
- "78_ctrl"
- "79_ctrl"
- "80_ctrl"
- "81_ctrl"
- "82_ctrl"
- "83_ctrl"
- "84_ctrl"
- "85_ctrl"
- "86_ctrl"
- "87_ctrl"
- "88_ctrl"
- "89_ctrl"
- "90_ctrl"
- "91_ctrl"
- "92_ctrl"
- "93_ctrl"
- "94_ctrl"
- "95_ctrl"
- "96_ctrl"
- "97_ctrl"
- "98_ctrl"
- "99_ctrl"
- "100_ctrl"
- "101_ctrl"
- "102_ctrl"
- "103_ctrl"
- "104_ctrl"
- "105_ctrl"
- "106_ctrl"
- "107_ctrl"
- "108_ctrl"
- "109_ctrl"
- "110_ctrl"
- "111_ctrl"
- "112_ctrl"
- "113_ctrl"
- "114_ctrl"
- "115_ctrl"
- "116_ctrl"
- "117_ctrl"
- "118_ctrl"
- "119_ctrl"
- "120_ctrl"
- "121_ctrl"
- "122_ctrl"
- "123_ctrl"
- "124_ctrl"
- "125_ctrl"
- "126_ctrl"
- "127_ctrl"
- "128_ctrl"
- "129_ctrl"
- "130_ctrl"
- "131_ctrl"
- "132_ctrl"
- "133_ctrl"
- "134_ctrl"
- "135_ctrl"
- "136_ctrl"
- "137_ctrl"
- "138_ctrl"
- "139_ctrl"
- "140_ctrl"
- "141_ctrl"
- "142_ctrl"
- "143_ctrl"
- "144_ctrl"
- "145_ctrl"
- "146_ctrl"
- "147_ctrl"
- "148_ctrl"
- "149_ctrl"
- "150_ctrl"
- "151_ctrl"
- "152_ctrl"
- "153_ctrl"
- "154_ctrl"
- "155_ctrl"
- "156_ctrl"
- "157_ctrl"
- "158_ctrl"
- "159_ctrl"
- "160_ctrl"
- "161_ctrl"
- "162_ctrl"
- "163_ctrl"
- "164_ctrl"
- "165_ctrl"
- "166_ctrl"
- "167_ctrl"
- "168_ctrl"
- "169_ctrl"
- "170_ctrl"
- "171_ctrl"
- "172_ctrl"
- "173_ctrl"
- "174_ctrl"
- "175_ctrl"
- "176_ctrl"
- "177_ctrl"
- "178_ctrl"
- "179_ctrl"
- "180_ctrl"
- "181_ctrl"
- "182_ctrl"
- "183_ctrl"
- "184_ctrl"
- "185_ctrl"
- "186_ctrl"
- "187_ctrl"
- "188_ctrl"
- "189_ctrl"
- "190_ctrl"
- "191_ctrl"
- "192_ctrl"
- "193_ctrl"
- "194_ctrl"
- "195_ctrl"
- "196_ctrl"
- "197_ctrl"
- "198_ctrl"
- "199_ctrl"
- "200_ctrl"
"""
}
test_yaml_contents = {"fusion.yaml":"""
model_config:
fc_do: 0.4
fc_task_dim: 256
layers:
- 2
rb_do: 0.4
stochastic_depth_p: 0.5
model_type: "mlp-residual"
""",
"globals.yaml": """
checkpoint_interval: 30300 # 100 epochs #60000
sample_interval: 30300 #60000
n_epochs: 120
batch_size: 64
optimizer: "adamw"
lr: 0.0001 #0.0001
device: "cuda"
compute_attributions: true
""",
"input_cnn.yaml": """
input_info:
input_source: ./inputs/landscape_arrays/test/ #inserted_enhancer_test #inputs/silenced_arrays/silenced_arrays_H2B_S.D
#./data/parsed_data/inputs/arrays_train_100bp_no_H3K27ac/ #arrays_train_100bp_no_H3K27ac_uncoupled/
input_name: gene_expression
input_type: array
model_config:
model_type: cnn
#pre_normalization: "instancenorm"
model_init_config:
num_output_features: 512 # before fc_repr_dim
layers: [4,4]
kernel_height: 1
down_stride_width: 2
first_stride_expansion_width: 1
first_kernel_expansion_height: 4 #5
kernel_width: 10 #10
dilation_factor_width: 2
dilation_factor_height: 1
channel_exp_base: 5 #3 #-1
first_channel_expansion: 2
rb_do: .3
stochastic_depth_p: .1
attention_inclusion_cutoff: 1 #50 #256
""",
"outputs_2_cond.yaml": """
output_info:
output_name: expression_output
output_source: ./targets/test_targets.csv
#./data/parsed_data/targets/target_arrays_perturbational_inserted_enhancer.csv #target_arrays_perturbational_S.D.csv #target_arrays_1kbp_401_bins_2_conditions_decareads_abs.csv
output_type: tabular
model_config: # <- new
model_type: linear # <- new
output_type_info:
con_loss_name: "SmoothL1Loss"
target_con_columns:
- "-200_ctrl"
- "-199_ctrl"
- "-198_ctrl"
- "-197_ctrl"
- "-196_ctrl"
- "-195_ctrl"
- "-194_ctrl"
- "-193_ctrl"
- "-192_ctrl"
- "-191_ctrl"
- "-190_ctrl"
- "-189_ctrl"
- "-188_ctrl"
- "-187_ctrl"
- "-186_ctrl"
- "-185_ctrl"
- "-184_ctrl"
- "-183_ctrl"
- "-182_ctrl"
- "-181_ctrl"
- "-180_ctrl"
- "-179_ctrl"
- "-178_ctrl"
- "-177_ctrl"
- "-176_ctrl"
- "-175_ctrl"
- "-174_ctrl"
- "-173_ctrl"
- "-172_ctrl"
- "-171_ctrl"
- "-170_ctrl"
- "-169_ctrl"
- "-168_ctrl"
- "-167_ctrl"
- "-166_ctrl"
- "-165_ctrl"
- "-164_ctrl"
- "-163_ctrl"
- "-162_ctrl"
- "-161_ctrl"
- "-160_ctrl"
- "-159_ctrl"
- "-158_ctrl"
- "-157_ctrl"
- "-156_ctrl"
- "-155_ctrl"
- "-154_ctrl"
- "-153_ctrl"
- "-152_ctrl"
- "-151_ctrl"
- "-150_ctrl"
- "-149_ctrl"
- "-148_ctrl"
- "-147_ctrl"
- "-146_ctrl"
- "-145_ctrl"
- "-144_ctrl"
- "-143_ctrl"
- "-142_ctrl"
- "-141_ctrl"
- "-140_ctrl"
- "-139_ctrl"
- "-138_ctrl"
- "-137_ctrl"
- "-136_ctrl"
- "-135_ctrl"
- "-134_ctrl"
- "-133_ctrl"
- "-132_ctrl"
- "-131_ctrl"
- "-130_ctrl"
- "-129_ctrl"
- "-128_ctrl"
- "-127_ctrl"
- "-126_ctrl"
- "-125_ctrl"
- "-124_ctrl"
- "-123_ctrl"
- "-122_ctrl"
- "-121_ctrl"
- "-120_ctrl"
- "-119_ctrl"
- "-118_ctrl"
- "-117_ctrl"
- "-116_ctrl"
- "-115_ctrl"
- "-114_ctrl"
- "-113_ctrl"
- "-112_ctrl"
- "-111_ctrl"
- "-110_ctrl"
- "-109_ctrl"
- "-108_ctrl"
- "-107_ctrl"
- "-106_ctrl"
- "-105_ctrl"
- "-104_ctrl"
- "-103_ctrl"
- "-102_ctrl"
- "-101_ctrl"
- "-100_ctrl"
- "-99_ctrl"
- "-98_ctrl"
- "-97_ctrl"
- "-96_ctrl"
- "-95_ctrl"
- "-94_ctrl"
- "-93_ctrl"
- "-92_ctrl"
- "-91_ctrl"
- "-90_ctrl"
- "-89_ctrl"
- "-88_ctrl"
- "-87_ctrl"
- "-86_ctrl"
- "-85_ctrl"
- "-84_ctrl"
- "-83_ctrl"
- "-82_ctrl"
- "-81_ctrl"
- "-80_ctrl"
- "-79_ctrl"
- "-78_ctrl"
- "-77_ctrl"
- "-76_ctrl"
- "-75_ctrl"
- "-74_ctrl"
- "-73_ctrl"
- "-72_ctrl"
- "-71_ctrl"
- "-70_ctrl"
- "-69_ctrl"
- "-68_ctrl"
- "-67_ctrl"
- "-66_ctrl"
- "-65_ctrl"
- "-64_ctrl"
- "-63_ctrl"
- "-62_ctrl"
- "-61_ctrl"
- "-60_ctrl"
- "-59_ctrl"
- "-58_ctrl"
- "-57_ctrl"
- "-56_ctrl"
- "-55_ctrl"
- "-54_ctrl"
- "-53_ctrl"
- "-52_ctrl"
- "-51_ctrl"
- "-50_ctrl"
- "-49_ctrl"
- "-48_ctrl"
- "-47_ctrl"
- "-46_ctrl"
- "-45_ctrl"
- "-44_ctrl"
- "-43_ctrl"
- "-42_ctrl"
- "-41_ctrl"
- "-40_ctrl"
- "-39_ctrl"
- "-38_ctrl"
- "-37_ctrl"
- "-36_ctrl"
- "-35_ctrl"
- "-34_ctrl"
- "-33_ctrl"
- "-32_ctrl"
- "-31_ctrl"
- "-30_ctrl"
- "-29_ctrl"
- "-28_ctrl"
- "-27_ctrl"
- "-26_ctrl"
- "-25_ctrl"
- "-24_ctrl"
- "-23_ctrl"
- "-22_ctrl"
- "-21_ctrl"
- "-20_ctrl"
- "-19_ctrl"
- "-18_ctrl"
- "-17_ctrl"
- "-16_ctrl"
- "-15_ctrl"
- "-14_ctrl"
- "-13_ctrl"
- "-12_ctrl"
- "-11_ctrl"
- "-10_ctrl"
- "-9_ctrl"
- "-8_ctrl"
- "-7_ctrl"
- "-6_ctrl"
- "-5_ctrl"
- "-4_ctrl"
- "-3_ctrl"
- "-2_ctrl"
- "-1_ctrl"
- "0_ctrl"
- "1_ctrl"
- "2_ctrl"
- "3_ctrl"
- "4_ctrl"
- "5_ctrl"
- "6_ctrl"
- "7_ctrl"
- "8_ctrl"
- "9_ctrl"
- "10_ctrl"
- "11_ctrl"
- "12_ctrl"
- "13_ctrl"
- "14_ctrl"
- "15_ctrl"
- "16_ctrl"
- "17_ctrl"
- "18_ctrl"
- "19_ctrl"
- "20_ctrl"
- "21_ctrl"
- "22_ctrl"
- "23_ctrl"
- "24_ctrl"
- "25_ctrl"
- "26_ctrl"
- "27_ctrl"
- "28_ctrl"
- "29_ctrl"
- "30_ctrl"
- "31_ctrl"
- "32_ctrl"
- "33_ctrl"
- "34_ctrl"
- "35_ctrl"
- "36_ctrl"
- "37_ctrl"
- "38_ctrl"
- "39_ctrl"
- "40_ctrl"
- "41_ctrl"
- "42_ctrl"
- "43_ctrl"
- "44_ctrl"
- "45_ctrl"
- "46_ctrl"
- "47_ctrl"
- "48_ctrl"
- "49_ctrl"
- "50_ctrl"
- "51_ctrl"
- "52_ctrl"
- "53_ctrl"
- "54_ctrl"
- "55_ctrl"
- "56_ctrl"
- "57_ctrl"
- "58_ctrl"
- "59_ctrl"
- "60_ctrl"
- "61_ctrl"
- "62_ctrl"
- "63_ctrl"
- "64_ctrl"
- "65_ctrl"
- "66_ctrl"
- "67_ctrl"
- "68_ctrl"
- "69_ctrl"
- "70_ctrl"
- "71_ctrl"
- "72_ctrl"
- "73_ctrl"
- "74_ctrl"
- "75_ctrl"
- "76_ctrl"
- "77_ctrl"
- "78_ctrl"
- "79_ctrl"
- "80_ctrl"
- "81_ctrl"
- "82_ctrl"
- "83_ctrl"
- "84_ctrl"
- "85_ctrl"
- "86_ctrl"
- "87_ctrl"
- "88_ctrl"
- "89_ctrl"
- "90_ctrl"
- "91_ctrl"
- "92_ctrl"
- "93_ctrl"
- "94_ctrl"
- "95_ctrl"
- "96_ctrl"
- "97_ctrl"
- "98_ctrl"
- "99_ctrl"
- "100_ctrl"
- "101_ctrl"
- "102_ctrl"
- "103_ctrl"
- "104_ctrl"
- "105_ctrl"
- "106_ctrl"
- "107_ctrl"
- "108_ctrl"
- "109_ctrl"
- "110_ctrl"
- "111_ctrl"
- "112_ctrl"
- "113_ctrl"
- "114_ctrl"
- "115_ctrl"
- "116_ctrl"
- "117_ctrl"
- "118_ctrl"
- "119_ctrl"
- "120_ctrl"
- "121_ctrl"
- "122_ctrl"
- "123_ctrl"
- "124_ctrl"
- "125_ctrl"
- "126_ctrl"
- "127_ctrl"
- "128_ctrl"
- "129_ctrl"
- "130_ctrl"
- "131_ctrl"
- "132_ctrl"
- "133_ctrl"
- "134_ctrl"
- "135_ctrl"
- "136_ctrl"
- "137_ctrl"
- "138_ctrl"
- "139_ctrl"
- "140_ctrl"
- "141_ctrl"
- "142_ctrl"
- "143_ctrl"
- "144_ctrl"
- "145_ctrl"
- "146_ctrl"
- "147_ctrl"
- "148_ctrl"
- "149_ctrl"
- "150_ctrl"
- "151_ctrl"
- "152_ctrl"
- "153_ctrl"
- "154_ctrl"
- "155_ctrl"
- "156_ctrl"
- "157_ctrl"
- "158_ctrl"
- "159_ctrl"
- "160_ctrl"
- "161_ctrl"
- "162_ctrl"
- "163_ctrl"
- "164_ctrl"
- "165_ctrl"
- "166_ctrl"
- "167_ctrl"
- "168_ctrl"
- "169_ctrl"
- "170_ctrl"
- "171_ctrl"
- "172_ctrl"
- "173_ctrl"
- "174_ctrl"
- "175_ctrl"
- "176_ctrl"
- "177_ctrl"
- "178_ctrl"
- "179_ctrl"
- "180_ctrl"
- "181_ctrl"
- "182_ctrl"
- "183_ctrl"
- "184_ctrl"
- "185_ctrl"
- "186_ctrl"
- "187_ctrl"
- "188_ctrl"
- "189_ctrl"
- "190_ctrl"
- "191_ctrl"
- "192_ctrl"
- "193_ctrl"
- "194_ctrl"
- "195_ctrl"
- "196_ctrl"
- "197_ctrl"
- "198_ctrl"
- "199_ctrl"
- "200_ctrl"
""" }
predict_perturbations_yaml_contents = {"fusion.yaml":"""
model_config:
fc_do: 0.4
fc_task_dim: 256
layers:
- 2
rb_do: 0.4
stochastic_depth_p: 0.5
model_type: "mlp-residual"
""",
"globals.yaml": """
checkpoint_interval: 30300 # 100 epochs #60000
sample_interval: 30300 #60000
n_epochs: 120
batch_size: 64
optimizer: "adamw"
lr: 0.0001 #0.0001
device: "cuda"
compute_attributions: false
""",
"input_cnn.yaml": """
input_info:
input_source: ./inputs/perturbed_landscape_arrays/test/ #inserted_enhancer_test #inputs/silenced_arrays/silenced_arrays_H2B_S.D
#./data/parsed_data/inputs/arrays_train_100bp_no_H3K27ac/ #arrays_train_100bp_no_H3K27ac_uncoupled/
input_name: gene_expression
input_type: array
model_config:
model_type: cnn
#pre_normalization: "instancenorm"
model_init_config:
num_output_features: 512 # before fc_repr_dim
layers: [4,4]
kernel_height: 1
down_stride_width: 2
first_stride_expansion_width: 1
first_kernel_expansion_height: 4 #5
kernel_width: 10 #10
dilation_factor_width: 2
dilation_factor_height: 1
channel_exp_base: 5 #3 #-1
first_channel_expansion: 2
rb_do: .3
stochastic_depth_p: .1
attention_inclusion_cutoff: 1 #50 #256
""",
"outputs_2_cond.yaml": """
output_info:
output_name: expression_output
output_source: ./targets/perturbed_targets.csv
#./data/parsed_data/targets/target_arrays_perturbational_inserted_enhancer.csv #target_arrays_perturbational_S.D.csv #target_arrays_1kbp_401_bins_2_conditions_decareads_abs.csv
output_type: tabular
model_config: # <- new
model_type: linear # <- new
output_type_info:
con_loss_name: "SmoothL1Loss"
target_con_columns:
- "-200_ctrl"
- "-199_ctrl"
- "-198_ctrl"
- "-197_ctrl"
- "-196_ctrl"
- "-195_ctrl"
- "-194_ctrl"
- "-193_ctrl"
- "-192_ctrl"
- "-191_ctrl"
- "-190_ctrl"
- "-189_ctrl"
- "-188_ctrl"
- "-187_ctrl"
- "-186_ctrl"
- "-185_ctrl"
- "-184_ctrl"
- "-183_ctrl"
- "-182_ctrl"
- "-181_ctrl"
- "-180_ctrl"
- "-179_ctrl"
- "-178_ctrl"
- "-177_ctrl"
- "-176_ctrl"
- "-175_ctrl"
- "-174_ctrl"
- "-173_ctrl"
- "-172_ctrl"
- "-171_ctrl"
- "-170_ctrl"
- "-169_ctrl"
- "-168_ctrl"
- "-167_ctrl"
- "-166_ctrl"
- "-165_ctrl"
- "-164_ctrl"
- "-163_ctrl"
- "-162_ctrl"
- "-161_ctrl"
- "-160_ctrl"
- "-159_ctrl"
- "-158_ctrl"
- "-157_ctrl"
- "-156_ctrl"
- "-155_ctrl"
- "-154_ctrl"
- "-153_ctrl"
- "-152_ctrl"
- "-151_ctrl"
- "-150_ctrl"
- "-149_ctrl"
- "-148_ctrl"
- "-147_ctrl"
- "-146_ctrl"
- "-145_ctrl"
- "-144_ctrl"
- "-143_ctrl"
- "-142_ctrl"
- "-141_ctrl"
- "-140_ctrl"
- "-139_ctrl"
- "-138_ctrl"
- "-137_ctrl"
- "-136_ctrl"
- "-135_ctrl"
- "-134_ctrl"
- "-133_ctrl"
- "-132_ctrl"
- "-131_ctrl"
- "-130_ctrl"
- "-129_ctrl"
- "-128_ctrl"
- "-127_ctrl"
- "-126_ctrl"
- "-125_ctrl"
- "-124_ctrl"
- "-123_ctrl"
- "-122_ctrl"
- "-121_ctrl"
- "-120_ctrl"
- "-119_ctrl"
- "-118_ctrl"
- "-117_ctrl"
- "-116_ctrl"
- "-115_ctrl"
- "-114_ctrl"
- "-113_ctrl"
- "-112_ctrl"
- "-111_ctrl"
- "-110_ctrl"
- "-109_ctrl"
- "-108_ctrl"
- "-107_ctrl"
- "-106_ctrl"
- "-105_ctrl"
- "-104_ctrl"
- "-103_ctrl"
- "-102_ctrl"
- "-101_ctrl"
- "-100_ctrl"
- "-99_ctrl"
- "-98_ctrl"
- "-97_ctrl"
- "-96_ctrl"
- "-95_ctrl"
- "-94_ctrl"
- "-93_ctrl"
- "-92_ctrl"
- "-91_ctrl"
- "-90_ctrl"
- "-89_ctrl"
- "-88_ctrl"
- "-87_ctrl"
- "-86_ctrl"
- "-85_ctrl"
- "-84_ctrl"
- "-83_ctrl"
- "-82_ctrl"
- "-81_ctrl"
- "-80_ctrl"
- "-79_ctrl"
- "-78_ctrl"
- "-77_ctrl"
- "-76_ctrl"
- "-75_ctrl"
- "-74_ctrl"
- "-73_ctrl"
- "-72_ctrl"
- "-71_ctrl"
- "-70_ctrl"
- "-69_ctrl"
- "-68_ctrl"
- "-67_ctrl"
- "-66_ctrl"
- "-65_ctrl"
- "-64_ctrl"
- "-63_ctrl"
- "-62_ctrl"
- "-61_ctrl"
- "-60_ctrl"
- "-59_ctrl"
- "-58_ctrl"
- "-57_ctrl"
- "-56_ctrl"
- "-55_ctrl"
- "-54_ctrl"
- "-53_ctrl"
- "-52_ctrl"
- "-51_ctrl"
- "-50_ctrl"
- "-49_ctrl"
- "-48_ctrl"
- "-47_ctrl"
- "-46_ctrl"
- "-45_ctrl"
- "-44_ctrl"
- "-43_ctrl"
- "-42_ctrl"
- "-41_ctrl"
- "-40_ctrl"
- "-39_ctrl"
- "-38_ctrl"
- "-37_ctrl"
- "-36_ctrl"
- "-35_ctrl"
- "-34_ctrl"
- "-33_ctrl"
- "-32_ctrl"
- "-31_ctrl"
- "-30_ctrl"
- "-29_ctrl"
- "-28_ctrl"
- "-27_ctrl"
- "-26_ctrl"
- "-25_ctrl"
- "-24_ctrl"
- "-23_ctrl"
- "-22_ctrl"
- "-21_ctrl"
- "-20_ctrl"
- "-19_ctrl"
- "-18_ctrl"
- "-17_ctrl"
- "-16_ctrl"
- "-15_ctrl"
- "-14_ctrl"
- "-13_ctrl"
- "-12_ctrl"
- "-11_ctrl"
- "-10_ctrl"
- "-9_ctrl"
- "-8_ctrl"
- "-7_ctrl"
- "-6_ctrl"
- "-5_ctrl"
- "-4_ctrl"
- "-3_ctrl"
- "-2_ctrl"
- "-1_ctrl"
- "0_ctrl"
- "1_ctrl"
- "2_ctrl"
- "3_ctrl"
- "4_ctrl"
- "5_ctrl"
- "6_ctrl"
- "7_ctrl"
- "8_ctrl"
- "9_ctrl"
- "10_ctrl"
- "11_ctrl"
- "12_ctrl"
- "13_ctrl"
- "14_ctrl"
- "15_ctrl"
- "16_ctrl"
- "17_ctrl"
- "18_ctrl"
- "19_ctrl"
- "20_ctrl"
- "21_ctrl"
- "22_ctrl"
- "23_ctrl"
- "24_ctrl"
- "25_ctrl"
- "26_ctrl"
- "27_ctrl"
- "28_ctrl"
- "29_ctrl"
- "30_ctrl"
- "31_ctrl"
- "32_ctrl"
- "33_ctrl"
- "34_ctrl"
- "35_ctrl"
- "36_ctrl"
- "37_ctrl"
- "38_ctrl"
- "39_ctrl"
- "40_ctrl"
- "41_ctrl"
- "42_ctrl"
- "43_ctrl"
- "44_ctrl"
- "45_ctrl"
- "46_ctrl"
- "47_ctrl"
- "48_ctrl"
- "49_ctrl"
- "50_ctrl"
- "51_ctrl"
- "52_ctrl"
- "53_ctrl"
- "54_ctrl"
- "55_ctrl"
- "56_ctrl"
- "57_ctrl"
- "58_ctrl"
- "59_ctrl"
- "60_ctrl"
- "61_ctrl"
- "62_ctrl"
- "63_ctrl"
- "64_ctrl"
- "65_ctrl"
- "66_ctrl"
- "67_ctrl"
- "68_ctrl"
- "69_ctrl"
- "70_ctrl"
- "71_ctrl"
- "72_ctrl"
- "73_ctrl"
- "74_ctrl"
- "75_ctrl"
- "76_ctrl"
- "77_ctrl"
- "78_ctrl"
- "79_ctrl"
- "80_ctrl"
- "81_ctrl"
- "82_ctrl"
- "83_ctrl"
- "84_ctrl"
- "85_ctrl"
- "86_ctrl"
- "87_ctrl"
- "88_ctrl"
- "89_ctrl"
- "90_ctrl"
- "91_ctrl"
- "92_ctrl"
- "93_ctrl"
- "94_ctrl"
- "95_ctrl"
- "96_ctrl"
- "97_ctrl"
- "98_ctrl"
- "99_ctrl"
- "100_ctrl"
- "101_ctrl"
- "102_ctrl"
- "103_ctrl"
- "104_ctrl"
- "105_ctrl"
- "106_ctrl"
- "107_ctrl"
- "108_ctrl"
- "109_ctrl"
- "110_ctrl"
- "111_ctrl"
- "112_ctrl"
- "113_ctrl"
- "114_ctrl"
- "115_ctrl"
- "116_ctrl"
- "117_ctrl"
- "118_ctrl"
- "119_ctrl"
- "120_ctrl"
- "121_ctrl"
- "122_ctrl"
- "123_ctrl"
- "124_ctrl"
- "125_ctrl"
- "126_ctrl"
- "127_ctrl"
- "128_ctrl"
- "129_ctrl"
- "130_ctrl"
- "131_ctrl"
- "132_ctrl"
- "133_ctrl"
- "134_ctrl"
- "135_ctrl"
- "136_ctrl"
- "137_ctrl"
- "138_ctrl"
- "139_ctrl"
- "140_ctrl"
- "141_ctrl"
- "142_ctrl"
- "143_ctrl"
- "144_ctrl"
- "145_ctrl"
- "146_ctrl"
- "147_ctrl"
- "148_ctrl"
- "149_ctrl"
- "150_ctrl"
- "151_ctrl"
- "152_ctrl"
- "153_ctrl"
- "154_ctrl"
- "155_ctrl"
- "156_ctrl"
- "157_ctrl"
- "158_ctrl"
- "159_ctrl"
- "160_ctrl"
- "161_ctrl"
- "162_ctrl"
- "163_ctrl"
- "164_ctrl"
- "165_ctrl"
- "166_ctrl"
- "167_ctrl"
- "168_ctrl"
- "169_ctrl"
- "170_ctrl"
- "171_ctrl"
- "172_ctrl"
- "173_ctrl"
- "174_ctrl"
- "175_ctrl"
- "176_ctrl"
- "177_ctrl"
- "178_ctrl"
- "179_ctrl"
- "180_ctrl"
- "181_ctrl"
- "182_ctrl"
- "183_ctrl"
- "184_ctrl"
- "185_ctrl"
- "186_ctrl"
- "187_ctrl"
- "188_ctrl"
- "189_ctrl"
- "190_ctrl"
- "191_ctrl"
- "192_ctrl"
- "193_ctrl"
- "194_ctrl"
- "195_ctrl"
- "196_ctrl"
- "197_ctrl"
- "198_ctrl"
- "199_ctrl"
- "200_ctrl"
""" }
training_microc_yaml_contents = {"globals.yaml":"""
output_folder: ./runs/gene_expression_microc_pure_conv/ #gene_expression_prediction_no_H3K27ac_uncoupled/ #gene_expression_cnn_1kbp_401_bins_2_cond_reloaded/
manual_valid_ids_file: ./annotations/manual_validation_ids_chr17.txt #manual_validation_ids_chr17_uncoupled.txt
checkpoint_interval: 30300
sample_interval: 30300
n_epochs: 120
batch_size: 64
optimizer: "adamw"
lr: 0.0001
device: "cuda"
compute_attributions: false
# latent_sampling:
# layers_to_sample:
# - "input_modules.contact_maps.feature_extractor.conv.0.conv_1"
#     - "input_modules.contact_maps.feature_extractor.conv.1.conv_2"
""",
"input_cnn.yaml": """
input_info:
input_source: ./inputs/landscape_arrays/training/
input_name: gene_expression
input_type: array
model_config:
model_type: cnn
#pre_normalization: "instancenorm"
model_init_config:
num_output_features: 512 # before fc_repr_dim
layers: [4,4]
kernel_height: 1
down_stride_width: 2
first_stride_expansion_width: 1
first_kernel_expansion_height: 4 #5
kernel_width: 10 #10
dilation_factor_width: 2
dilation_factor_height: 1
channel_exp_base: 5 #3 #-1
first_channel_expansion: 2
rb_do: .3
stochastic_depth_p: .1
attention_inclusion_cutoff: 1 #50 #256
""",
"""input_cnn_microc.yaml""": """
input_info:
input_source: ./inputs/microC/training/
input_name: contact_maps
input_type: array
model_config:
model_type: cnn
#pre_normalization: "instancenorm"
model_init_config:
num_output_features: 512 #before fc_repr_dim
layers: [1,2]
kernel_height: 5
down_stride_width: 2
down_stride_height: 2 #5
kernel_width: 5 #10
dilation_factor_width: 2
dilation_factor_height: 2
channel_exp_base: 2 #3 #-1
first_channel_expansion: 2
rb_do: .3
stochastic_depth_p: .1
attention_inclusion_cutoff: 1 #128 #256
""",
"fusion.yaml": """
model_config:
fc_do: 0.4
fc_task_dim: 256
layers:
- 2
rb_do: 0.4
stochastic_depth_p: 0.5
model_type: "mlp-residual"
""",
"outputs_2_cond.yaml":"""
output_info:
output_name: expression_output
output_source: ./targets/training_targets.csv
output_type: tabular
model_config: # <- new
model_type: linear # <- new
output_type_info:
con_loss_name: "SmoothL1Loss"
target_con_columns:
- "-200_ctrl"
- "-199_ctrl"
- "-198_ctrl"
- "-197_ctrl"
- "-196_ctrl"
- "-195_ctrl"
- "-194_ctrl"
- "-193_ctrl"
- "-192_ctrl"
- "-191_ctrl"
- "-190_ctrl"
- "-189_ctrl"
- "-188_ctrl"
- "-187_ctrl"
- "-186_ctrl"
- "-185_ctrl"
- "-184_ctrl"
- "-183_ctrl"
- "-182_ctrl"
- "-181_ctrl"
- "-180_ctrl"
- "-179_ctrl"
- "-178_ctrl"
- "-177_ctrl"
- "-176_ctrl"
- "-175_ctrl"
- "-174_ctrl"
- "-173_ctrl"
- "-172_ctrl"
- "-171_ctrl"
- "-170_ctrl"
- "-169_ctrl"
- "-168_ctrl"
- "-167_ctrl"
- "-166_ctrl"
- "-165_ctrl"
- "-164_ctrl"
- "-163_ctrl"
- "-162_ctrl"
- "-161_ctrl"
- "-160_ctrl"
- "-159_ctrl"
- "-158_ctrl"
- "-157_ctrl"
- "-156_ctrl"
- "-155_ctrl"
- "-154_ctrl"
- "-153_ctrl"
- "-152_ctrl"
- "-151_ctrl"
- "-150_ctrl"
- "-149_ctrl"
- "-148_ctrl"
- "-147_ctrl"
- "-146_ctrl"
- "-145_ctrl"
- "-144_ctrl"
- "-143_ctrl"
- "-142_ctrl"
- "-141_ctrl"
- "-140_ctrl"
- "-139_ctrl"
- "-138_ctrl"
- "-137_ctrl"
- "-136_ctrl"
- "-135_ctrl"
- "-134_ctrl"
- "-133_ctrl"
- "-132_ctrl"
- "-131_ctrl"
- "-130_ctrl"
- "-129_ctrl"
- "-128_ctrl"
- "-127_ctrl"
- "-126_ctrl"
- "-125_ctrl"
- "-124_ctrl"
- "-123_ctrl"
- "-122_ctrl"
- "-121_ctrl"
- "-120_ctrl"
- "-119_ctrl"
- "-118_ctrl"
- "-117_ctrl"
- "-116_ctrl"
- "-115_ctrl"
- "-114_ctrl"
- "-113_ctrl"
- "-112_ctrl"
- "-111_ctrl"
- "-110_ctrl"
- "-109_ctrl"
- "-108_ctrl"
- "-107_ctrl"
- "-106_ctrl"
- "-105_ctrl"
- "-104_ctrl"
- "-103_ctrl"
- "-102_ctrl"
- "-101_ctrl"
- "-100_ctrl"
- "-99_ctrl"
- "-98_ctrl"
- "-97_ctrl"
- "-96_ctrl"
- "-95_ctrl"
- "-94_ctrl"
- "-93_ctrl"
- "-92_ctrl"
- "-91_ctrl"
- "-90_ctrl"
- "-89_ctrl"
- "-88_ctrl"
- "-87_ctrl"
- "-86_ctrl"
- "-85_ctrl"
- "-84_ctrl"
- "-83_ctrl"
- "-82_ctrl"
- "-81_ctrl"
- "-80_ctrl"
- "-79_ctrl"
- "-78_ctrl"
- "-77_ctrl"
- "-76_ctrl"
- "-75_ctrl"
- "-74_ctrl"
- "-73_ctrl"
- "-72_ctrl"
- "-71_ctrl"
- "-70_ctrl"
- "-69_ctrl"
- "-68_ctrl"
- "-67_ctrl"
- "-66_ctrl"
- "-65_ctrl"
- "-64_ctrl"
- "-63_ctrl"
- "-62_ctrl"
- "-61_ctrl"
- "-60_ctrl"
- "-59_ctrl"
- "-58_ctrl"
- "-57_ctrl"
- "-56_ctrl"
- "-55_ctrl"
- "-54_ctrl"
- "-53_ctrl"
- "-52_ctrl"
- "-51_ctrl"
- "-50_ctrl"
- "-49_ctrl"
- "-48_ctrl"
- "-47_ctrl"
- "-46_ctrl"
- "-45_ctrl"
- "-44_ctrl"
- "-43_ctrl"
- "-42_ctrl"
- "-41_ctrl"
- "-40_ctrl"
- "-39_ctrl"
- "-38_ctrl"
- "-37_ctrl"
- "-36_ctrl"
- "-35_ctrl"
- "-34_ctrl"
- "-33_ctrl"
- "-32_ctrl"
- "-31_ctrl"
- "-30_ctrl"
- "-29_ctrl"
- "-28_ctrl"
- "-27_ctrl"
- "-26_ctrl"
- "-25_ctrl"
- "-24_ctrl"
- "-23_ctrl"
- "-22_ctrl"
- "-21_ctrl"
- "-20_ctrl"
- "-19_ctrl"
- "-18_ctrl"
- "-17_ctrl"
- "-16_ctrl"
- "-15_ctrl"
- "-14_ctrl"
- "-13_ctrl"
- "-12_ctrl"
- "-11_ctrl"
- "-10_ctrl"
- "-9_ctrl"
- "-8_ctrl"
- "-7_ctrl"
- "-6_ctrl"
- "-5_ctrl"
- "-4_ctrl"
- "-3_ctrl"
- "-2_ctrl"
- "-1_ctrl"
- "0_ctrl"
- "1_ctrl"
- "2_ctrl"
- "3_ctrl"
- "4_ctrl"
- "5_ctrl"
- "6_ctrl"
- "7_ctrl"
- "8_ctrl"
- "9_ctrl"
- "10_ctrl"
- "11_ctrl"
- "12_ctrl"
- "13_ctrl"
- "14_ctrl"
- "15_ctrl"
- "16_ctrl"
- "17_ctrl"
- "18_ctrl"
- "19_ctrl"
- "20_ctrl"
- "21_ctrl"
- "22_ctrl"
- "23_ctrl"
- "24_ctrl"
- "25_ctrl"
- "26_ctrl"
- "27_ctrl"
- "28_ctrl"
- "29_ctrl"
- "30_ctrl"
- "31_ctrl"
- "32_ctrl"
- "33_ctrl"
- "34_ctrl"
- "35_ctrl"
- "36_ctrl"
- "37_ctrl"
- "38_ctrl"
- "39_ctrl"
- "40_ctrl"
- "41_ctrl"
- "42_ctrl"
- "43_ctrl"
- "44_ctrl"
- "45_ctrl"
- "46_ctrl"
- "47_ctrl"
- "48_ctrl"
- "49_ctrl"
- "50_ctrl"
- "51_ctrl"
- "52_ctrl"
- "53_ctrl"
- "54_ctrl"
- "55_ctrl"
- "56_ctrl"
- "57_ctrl"
- "58_ctrl"
- "59_ctrl"
- "60_ctrl"
- "61_ctrl"
- "62_ctrl"
- "63_ctrl"
- "64_ctrl"
- "65_ctrl"
- "66_ctrl"
- "67_ctrl"
- "68_ctrl"
- "69_ctrl"
- "70_ctrl"
- "71_ctrl"
- "72_ctrl"
- "73_ctrl"
- "74_ctrl"
- "75_ctrl"
- "76_ctrl"
- "77_ctrl"
- "78_ctrl"
- "79_ctrl"
- "80_ctrl"
- "81_ctrl"
- "82_ctrl"
- "83_ctrl"
- "84_ctrl"
- "85_ctrl"
- "86_ctrl"
- "87_ctrl"
- "88_ctrl"
- "89_ctrl"
- "90_ctrl"
- "91_ctrl"
- "92_ctrl"
- "93_ctrl"
- "94_ctrl"
- "95_ctrl"
- "96_ctrl"
- "97_ctrl"
- "98_ctrl"
- "99_ctrl"
- "100_ctrl"
- "101_ctrl"
- "102_ctrl"
- "103_ctrl"
- "104_ctrl"
- "105_ctrl"
- "106_ctrl"
- "107_ctrl"
- "108_ctrl"
- "109_ctrl"
- "110_ctrl"
- "111_ctrl"
- "112_ctrl"
- "113_ctrl"
- "114_ctrl"
- "115_ctrl"
- "116_ctrl"
- "117_ctrl"
- "118_ctrl"
- "119_ctrl"
- "120_ctrl"
- "121_ctrl"
- "122_ctrl"
- "123_ctrl"
- "124_ctrl"
- "125_ctrl"
- "126_ctrl"
- "127_ctrl"
- "128_ctrl"
- "129_ctrl"
- "130_ctrl"
- "131_ctrl"
- "132_ctrl"
- "133_ctrl"
- "134_ctrl"
- "135_ctrl"
- "136_ctrl"
- "137_ctrl"
- "138_ctrl"
- "139_ctrl"
- "140_ctrl"
- "141_ctrl"
- "142_ctrl"
- "143_ctrl"
- "144_ctrl"
- "145_ctrl"
- "146_ctrl"
- "147_ctrl"
- "148_ctrl"
- "149_ctrl"
- "150_ctrl"
- "151_ctrl"
- "152_ctrl"
- "153_ctrl"
- "154_ctrl"
- "155_ctrl"
- "156_ctrl"
- "157_ctrl"
- "158_ctrl"
- "159_ctrl"
- "160_ctrl"
- "161_ctrl"
- "162_ctrl"
- "163_ctrl"
- "164_ctrl"
- "165_ctrl"
- "166_ctrl"
- "167_ctrl"
- "168_ctrl"
- "169_ctrl"
- "170_ctrl"
- "171_ctrl"
- "172_ctrl"
- "173_ctrl"
- "174_ctrl"
- "175_ctrl"
- "176_ctrl"
- "177_ctrl"
- "178_ctrl"
- "179_ctrl"
- "180_ctrl"
- "181_ctrl"
- "182_ctrl"
- "183_ctrl"
- "184_ctrl"
- "185_ctrl"
- "186_ctrl"
- "187_ctrl"
- "188_ctrl"
- "189_ctrl"
- "190_ctrl"
- "191_ctrl"
- "192_ctrl"
- "193_ctrl"
- "194_ctrl"
- "195_ctrl"
- "196_ctrl"
- "197_ctrl"
- "198_ctrl"
- "199_ctrl"
- "200_ctrl"
"""
}
test_microc_yaml_contents = {"globals.yaml":"""
#output_folder: ./runs/gene_expression_microc_pure_conv/ #gene_expression_prediction_no_H3K27ac_uncoupled/ #gene_expression_cnn_1kbp_401_bins_2_cond_reloaded/
#manual_valid_ids_file: ./annotations/manual_validation_ids_chr17.txt #manual_validation_ids_chr17_uncoupled.txt
checkpoint_interval: 30300
sample_interval: 30300
n_epochs: 120
batch_size: 64
optimizer: "adamw"
lr: 0.0001
device: "cuda"
compute_attributions: false
# latent_sampling:
# layers_to_sample:
# - "input_modules.contact_maps.feature_extractor.conv.0.conv_1"
# - "input_modules.contact_maps.feature_extractor.conv.1.conv_1"
# - "input_modules.contact_maps.feature_extractor.conv.1.conv_2"
""",
"input_cnn.yaml": """
input_info:
input_source: ./inputs/landscape_arrays/test/
input_name: gene_expression
input_type: array
model_config:
model_type: cnn
#pre_normalization: "instancenorm"
model_init_config:
num_output_features: 512 # before fc_repr_dim
layers: [4,4]
kernel_height: 1
down_stride_width: 2
first_stride_expansion_width: 1
first_kernel_expansion_height: 4 #5
kernel_width: 10 #10
dilation_factor_width: 2
dilation_factor_height: 1
channel_exp_base: 5 #3 #-1
first_channel_expansion: 2
rb_do: .3
stochastic_depth_p: .1
attention_inclusion_cutoff: 1 #50 #256
""",
"""input_cnn_microc.yaml""": """
input_info:
input_source: ./inputs/microC/test/
input_name: contact_maps
input_type: array
model_config:
model_type: cnn
#pre_normalization: "instancenorm"
model_init_config:
num_output_features: 512 #before fc_repr_dim
layers: [1,2]
kernel_height: 5
down_stride_width: 2
down_stride_height: 2 #5
kernel_width: 5 #10
dilation_factor_width: 2
dilation_factor_height: 2
channel_exp_base: 2 #3 #-1
first_channel_expansion: 2
rb_do: .3
stochastic_depth_p: .1
attention_inclusion_cutoff: 1 #128 #256
""",
"fusion.yaml": """
model_config:
fc_do: 0.4
fc_task_dim: 256
layers:
- 2
rb_do: 0.4
stochastic_depth_p: 0.5
model_type: "mlp-residual"
""",
"outputs_2_cond.yaml":"""
output_info:
output_name: expression_output
output_source: ./targets/test_targets.csv
output_type: tabular
model_config: # <- new
model_type: linear # <- new
output_type_info:
con_loss_name: "SmoothL1Loss"
target_con_columns:
- "-200_ctrl"
- "-199_ctrl"
- "-198_ctrl"
- "-197_ctrl"
- "-196_ctrl"
- "-195_ctrl"
- "-194_ctrl"
- "-193_ctrl"
- "-192_ctrl"
- "-191_ctrl"
- "-190_ctrl"
- "-189_ctrl"
- "-188_ctrl"
- "-187_ctrl"
- "-186_ctrl"
- "-185_ctrl"
- "-184_ctrl"
- "-183_ctrl"
- "-182_ctrl"
- "-181_ctrl"
- "-180_ctrl"
- "-179_ctrl"
- "-178_ctrl"
- "-177_ctrl"
- "-176_ctrl"
- "-175_ctrl"
- "-174_ctrl"
- "-173_ctrl"
- "-172_ctrl"
- "-171_ctrl"
- "-170_ctrl"
- "-169_ctrl"
- "-168_ctrl"
- "-167_ctrl"
- "-166_ctrl"
- "-165_ctrl"
- "-164_ctrl"
- "-163_ctrl"
- "-162_ctrl"
- "-161_ctrl"
- "-160_ctrl"
- "-159_ctrl"
- "-158_ctrl"
- "-157_ctrl"
- "-156_ctrl"
- "-155_ctrl"
- "-154_ctrl"
- "-153_ctrl"
- "-152_ctrl"
- "-151_ctrl"
- "-150_ctrl"
- "-149_ctrl"
- "-148_ctrl"
- "-147_ctrl"
- "-146_ctrl"
- "-145_ctrl"
- "-144_ctrl"
- "-143_ctrl"
- "-142_ctrl"
- "-141_ctrl"
- "-140_ctrl"
- "-139_ctrl"
- "-138_ctrl"
- "-137_ctrl"
- "-136_ctrl"
- "-135_ctrl"
- "-134_ctrl"
- "-133_ctrl"
- "-132_ctrl"
- "-131_ctrl"
- "-130_ctrl"
- "-129_ctrl"
- "-128_ctrl"
- "-127_ctrl"
- "-126_ctrl"
- "-125_ctrl"
- "-124_ctrl"
- "-123_ctrl"
- "-122_ctrl"
- "-121_ctrl"
- "-120_ctrl"
- "-119_ctrl"
- "-118_ctrl"
- "-117_ctrl"
- "-116_ctrl"
- "-115_ctrl"
- "-114_ctrl"
- "-113_ctrl"
- "-112_ctrl"
- "-111_ctrl"
- "-110_ctrl"
- "-109_ctrl"
- "-108_ctrl"
- "-107_ctrl"
- "-106_ctrl"
- "-105_ctrl"
- "-104_ctrl"
- "-103_ctrl"
- "-102_ctrl"
- "-101_ctrl"
- "-100_ctrl"
- "-99_ctrl"
- "-98_ctrl"
- "-97_ctrl"
- "-96_ctrl"
- "-95_ctrl"
- "-94_ctrl"
- "-93_ctrl"
- "-92_ctrl"
- "-91_ctrl"
- "-90_ctrl"
- "-89_ctrl"
- "-88_ctrl"
- "-87_ctrl"
- "-86_ctrl"
- "-85_ctrl"
- "-84_ctrl"
- "-83_ctrl"
- "-82_ctrl"
- "-81_ctrl"
- "-80_ctrl"
- "-79_ctrl"
- "-78_ctrl"
- "-77_ctrl"
- "-76_ctrl"
- "-75_ctrl"
- "-74_ctrl"
- "-73_ctrl"
- "-72_ctrl"
- "-71_ctrl"
- "-70_ctrl"
- "-69_ctrl"
- "-68_ctrl"
- "-67_ctrl"
- "-66_ctrl"
- "-65_ctrl"
- "-64_ctrl"
- "-63_ctrl"
- "-62_ctrl"
- "-61_ctrl"
- "-60_ctrl"
- "-59_ctrl"
- "-58_ctrl"
- "-57_ctrl"
- "-56_ctrl"
- "-55_ctrl"
- "-54_ctrl"
- "-53_ctrl"
- "-52_ctrl"
- "-51_ctrl"
- "-50_ctrl"
- "-49_ctrl"
- "-48_ctrl"
- "-47_ctrl"
- "-46_ctrl"
- "-45_ctrl"
- "-44_ctrl"
- "-43_ctrl"
- "-42_ctrl"
- "-41_ctrl"
- "-40_ctrl"
- "-39_ctrl"
- "-38_ctrl"
- "-37_ctrl"
- "-36_ctrl"
- "-35_ctrl"
- "-34_ctrl"
- "-33_ctrl"
- "-32_ctrl"
- "-31_ctrl"
- "-30_ctrl"
- "-29_ctrl"
- "-28_ctrl"
- "-27_ctrl"
- "-26_ctrl"
- "-25_ctrl"
- "-24_ctrl"
- "-23_ctrl"
- "-22_ctrl"
- "-21_ctrl"
- "-20_ctrl"
- "-19_ctrl"
- "-18_ctrl"
- "-17_ctrl"
- "-16_ctrl"
- "-15_ctrl"
- "-14_ctrl"
- "-13_ctrl"
- "-12_ctrl"
- "-11_ctrl"
- "-10_ctrl"
- "-9_ctrl"
- "-8_ctrl"
- "-7_ctrl"
- "-6_ctrl"
- "-5_ctrl"
- "-4_ctrl"
- "-3_ctrl"
- "-2_ctrl"
- "-1_ctrl"
- "0_ctrl"
- "1_ctrl"
- "2_ctrl"
- "3_ctrl"
- "4_ctrl"
- "5_ctrl"
- "6_ctrl"
- "7_ctrl"
- "8_ctrl"
- "9_ctrl"
- "10_ctrl"
- "11_ctrl"
- "12_ctrl"
- "13_ctrl"
- "14_ctrl"
- "15_ctrl"
- "16_ctrl"
- "17_ctrl"
- "18_ctrl"
- "19_ctrl"
- "20_ctrl"
- "21_ctrl"
- "22_ctrl"
- "23_ctrl"
- "24_ctrl"
- "25_ctrl"
- "26_ctrl"
- "27_ctrl"
- "28_ctrl"
- "29_ctrl"
- "30_ctrl"
- "31_ctrl"
- "32_ctrl"
- "33_ctrl"
- "34_ctrl"
- "35_ctrl"
- "36_ctrl"
- "37_ctrl"
- "38_ctrl"
- "39_ctrl"
- "40_ctrl"
- "41_ctrl"
- "42_ctrl"
- "43_ctrl"
- "44_ctrl"
- "45_ctrl"
- "46_ctrl"
- "47_ctrl"
- "48_ctrl"
- "49_ctrl"
- "50_ctrl"
- "51_ctrl"
- "52_ctrl"
- "53_ctrl"
- "54_ctrl"
- "55_ctrl"
- "56_ctrl"
- "57_ctrl"
- "58_ctrl"
- "59_ctrl"
- "60_ctrl"
- "61_ctrl"
- "62_ctrl"
- "63_ctrl"
- "64_ctrl"
- "65_ctrl"
- "66_ctrl"
- "67_ctrl"
- "68_ctrl"
- "69_ctrl"
- "70_ctrl"
- "71_ctrl"
- "72_ctrl"
- "73_ctrl"
- "74_ctrl"
- "75_ctrl"
- "76_ctrl"
- "77_ctrl"
- "78_ctrl"
- "79_ctrl"
- "80_ctrl"
- "81_ctrl"
- "82_ctrl"
- "83_ctrl"
- "84_ctrl"
- "85_ctrl"
- "86_ctrl"
- "87_ctrl"
- "88_ctrl"
- "89_ctrl"
- "90_ctrl"
- "91_ctrl"
- "92_ctrl"
- "93_ctrl"
- "94_ctrl"
- "95_ctrl"
- "96_ctrl"
- "97_ctrl"
- "98_ctrl"
- "99_ctrl"
- "100_ctrl"
- "101_ctrl"
- "102_ctrl"
- "103_ctrl"
- "104_ctrl"
- "105_ctrl"
- "106_ctrl"
- "107_ctrl"
- "108_ctrl"
- "109_ctrl"
- "110_ctrl"
- "111_ctrl"
- "112_ctrl"
- "113_ctrl"
- "114_ctrl"
- "115_ctrl"
- "116_ctrl"
- "117_ctrl"
- "118_ctrl"
- "119_ctrl"
- "120_ctrl"
- "121_ctrl"
- "122_ctrl"
- "123_ctrl"
- "124_ctrl"
- "125_ctrl"
- "126_ctrl"
- "127_ctrl"
- "128_ctrl"
- "129_ctrl"
- "130_ctrl"
- "131_ctrl"
- "132_ctrl"
- "133_ctrl"
- "134_ctrl"
- "135_ctrl"
- "136_ctrl"
- "137_ctrl"
- "138_ctrl"
- "139_ctrl"
- "140_ctrl"
- "141_ctrl"
- "142_ctrl"
- "143_ctrl"
- "144_ctrl"
- "145_ctrl"
- "146_ctrl"
- "147_ctrl"
- "148_ctrl"
- "149_ctrl"
- "150_ctrl"
- "151_ctrl"
- "152_ctrl"
- "153_ctrl"
- "154_ctrl"
- "155_ctrl"
- "156_ctrl"
- "157_ctrl"
- "158_ctrl"
- "159_ctrl"
- "160_ctrl"
- "161_ctrl"
- "162_ctrl"
- "163_ctrl"
- "164_ctrl"
- "165_ctrl"
- "166_ctrl"
- "167_ctrl"
- "168_ctrl"
- "169_ctrl"
- "170_ctrl"
- "171_ctrl"
- "172_ctrl"
- "173_ctrl"
- "174_ctrl"
- "175_ctrl"
- "176_ctrl"
- "177_ctrl"
- "178_ctrl"
- "179_ctrl"
- "180_ctrl"
- "181_ctrl"
- "182_ctrl"
- "183_ctrl"
- "184_ctrl"
- "185_ctrl"
- "186_ctrl"
- "187_ctrl"
- "188_ctrl"
- "189_ctrl"
- "190_ctrl"
- "191_ctrl"
- "192_ctrl"
- "193_ctrl"
- "194_ctrl"
- "195_ctrl"
- "196_ctrl"
- "197_ctrl"
- "198_ctrl"
- "199_ctrl"
- "200_ctrl"
"""
}
training_microc_rotated_yaml_contents = {"globals.yaml":"""
output_folder: ./runs/gene_expression_microc_rotated_pure_conv/ #gene_expression_prediction_no_H3K27ac_uncoupled/ #gene_expression_cnn_1kbp_401_bins_2_cond_reloaded/
manual_valid_ids_file: ./annotations/manual_validation_ids_chr17.txt #manual_validation_ids_chr17_uncoupled.txt
checkpoint_interval: 30300
sample_interval: 30300
n_epochs: 120
batch_size: 64
optimizer: "adamw"
lr: 0.0001
device: "cuda"
compute_attributions: false
# latent_sampling:
# layers_to_sample:
# - "input_modules.contact_maps.feature_extractor.conv.0.conv_1"
#     - "input_modules.contact_maps.feature_extractor.conv.1.conv_2"
""",
"input_cnn.yaml": """
input_info:
input_source: ./inputs/landscape_arrays/training/
input_name: gene_expression
input_type: array
model_config:
model_type: cnn
#pre_normalization: "instancenorm"
model_init_config:
num_output_features: 512 # before fc_repr_dim
layers: [4,4]
kernel_height: 1
down_stride_width: 2
first_stride_expansion_width: 1
first_kernel_expansion_height: 4 #5
kernel_width: 10 #10
dilation_factor_width: 2
dilation_factor_height: 1
channel_exp_base: 5 #3 #-1
first_channel_expansion: 2
rb_do: .3
stochastic_depth_p: .1
attention_inclusion_cutoff: 1 #50 #256
""",
"""input_cnn_microc_rotated.yaml""": """
input_info:
input_source: ./inputs/microC_rotated/training/ #arrays_train_100bp_no_H3K27ac_uncoupled/
input_name: contact_maps
input_type: array
model_config:
model_type: cnn
#pre_normalization: "instancenorm"
model_init_config:
num_output_features: 512 #before fc_repr_dim
layers: [1,2]
kernel_height: 1
down_stride_width: 2
first_stride_expansion_width: 1
first_kernel_expansion_height: 129 #Height of the array
kernel_width: 10 #10
dilation_factor_width: 2
channel_exp_base: 4 #3 #-1
first_channel_expansion: 2
rb_do: .3
stochastic_depth_p: .1
attention_inclusion_cutoff: 1 # 50 #128 #256
""",
"fusion.yaml": """
model_config:
fc_do: 0.4
fc_task_dim: 256
layers:
- 2
rb_do: 0.4
stochastic_depth_p: 0.5
model_type: "mlp-residual"
""",
"outputs_2_cond.yaml":"""
output_info:
output_name: expression_output
output_source: ./targets/training_targets.csv
output_type: tabular
model_config: # <- new
model_type: linear # <- new
output_type_info:
con_loss_name: "SmoothL1Loss"
target_con_columns:
- "-200_ctrl"
- "-199_ctrl"
- "-198_ctrl"
- "-197_ctrl"
- "-196_ctrl"
- "-195_ctrl"
- "-194_ctrl"
- "-193_ctrl"
- "-192_ctrl"
- "-191_ctrl"
- "-190_ctrl"
- "-189_ctrl"
- "-188_ctrl"
- "-187_ctrl"
- "-186_ctrl"
- "-185_ctrl"
- "-184_ctrl"
- "-183_ctrl"
- "-182_ctrl"
- "-181_ctrl"
- "-180_ctrl"
- "-179_ctrl"
- "-178_ctrl"
- "-177_ctrl"
- "-176_ctrl"
- "-175_ctrl"
- "-174_ctrl"
- "-173_ctrl"
- "-172_ctrl"
- "-171_ctrl"
- "-170_ctrl"
- "-169_ctrl"
- "-168_ctrl"
- "-167_ctrl"
- "-166_ctrl"
- "-165_ctrl"
- "-164_ctrl"
- "-163_ctrl"
- "-162_ctrl"
- "-161_ctrl"
- "-160_ctrl"
- "-159_ctrl"
- "-158_ctrl"
- "-157_ctrl"
- "-156_ctrl"
- "-155_ctrl"
- "-154_ctrl"
- "-153_ctrl"
- "-152_ctrl"
- "-151_ctrl"
- "-150_ctrl"
- "-149_ctrl"
- "-148_ctrl"
- "-147_ctrl"
- "-146_ctrl"
- "-145_ctrl"
- "-144_ctrl"
- "-143_ctrl"
- "-142_ctrl"
- "-141_ctrl"
- "-140_ctrl"
- "-139_ctrl"
- "-138_ctrl"
- "-137_ctrl"
- "-136_ctrl"
- "-135_ctrl"
- "-134_ctrl"
- "-133_ctrl"
- "-132_ctrl"
- "-131_ctrl"
- "-130_ctrl"
- "-129_ctrl"
- "-128_ctrl"
- "-127_ctrl"
- "-126_ctrl"
- "-125_ctrl"
- "-124_ctrl"
- "-123_ctrl"
- "-122_ctrl"
- "-121_ctrl"
- "-120_ctrl"
- "-119_ctrl"
- "-118_ctrl"
- "-117_ctrl"
- "-116_ctrl"
- "-115_ctrl"
- "-114_ctrl"
- "-113_ctrl"
- "-112_ctrl"
- "-111_ctrl"
- "-110_ctrl"
- "-109_ctrl"
- "-108_ctrl"
- "-107_ctrl"
- "-106_ctrl"
- "-105_ctrl"
- "-104_ctrl"
- "-103_ctrl"
- "-102_ctrl"
- "-101_ctrl"
- "-100_ctrl"
- "-99_ctrl"
- "-98_ctrl"
- "-97_ctrl"
- "-96_ctrl"
- "-95_ctrl"
- "-94_ctrl"
- "-93_ctrl"
- "-92_ctrl"
- "-91_ctrl"
- "-90_ctrl"
- "-89_ctrl"
- "-88_ctrl"
- "-87_ctrl"
- "-86_ctrl"
- "-85_ctrl"
- "-84_ctrl"
- "-83_ctrl"
- "-82_ctrl"
- "-81_ctrl"
- "-80_ctrl"
- "-79_ctrl"
- "-78_ctrl"
- "-77_ctrl"
- "-76_ctrl"
- "-75_ctrl"
- "-74_ctrl"
- "-73_ctrl"
- "-72_ctrl"
- "-71_ctrl"
- "-70_ctrl"
- "-69_ctrl"
- "-68_ctrl"
- "-67_ctrl"
- "-66_ctrl"
- "-65_ctrl"
- "-64_ctrl"
- "-63_ctrl"
- "-62_ctrl"
- "-61_ctrl"
- "-60_ctrl"
- "-59_ctrl"
- "-58_ctrl"
- "-57_ctrl"
- "-56_ctrl"
- "-55_ctrl"
- "-54_ctrl"
- "-53_ctrl"
- "-52_ctrl"
- "-51_ctrl"
- "-50_ctrl"
- "-49_ctrl"
- "-48_ctrl"
- "-47_ctrl"
- "-46_ctrl"
- "-45_ctrl"
- "-44_ctrl"
- "-43_ctrl"
- "-42_ctrl"
- "-41_ctrl"
- "-40_ctrl"
- "-39_ctrl"
- "-38_ctrl"
- "-37_ctrl"
- "-36_ctrl"
- "-35_ctrl"
- "-34_ctrl"
- "-33_ctrl"
- "-32_ctrl"
- "-31_ctrl"
- "-30_ctrl"
- "-29_ctrl"
- "-28_ctrl"
- "-27_ctrl"
- "-26_ctrl"
- "-25_ctrl"
- "-24_ctrl"
- "-23_ctrl"
- "-22_ctrl"
- "-21_ctrl"
- "-20_ctrl"
- "-19_ctrl"
- "-18_ctrl"
- "-17_ctrl"
- "-16_ctrl"
- "-15_ctrl"
- "-14_ctrl"
- "-13_ctrl"
- "-12_ctrl"
- "-11_ctrl"
- "-10_ctrl"
- "-9_ctrl"
- "-8_ctrl"
- "-7_ctrl"
- "-6_ctrl"
- "-5_ctrl"
- "-4_ctrl"
- "-3_ctrl"
- "-2_ctrl"
- "-1_ctrl"
- "0_ctrl"
- "1_ctrl"
- "2_ctrl"
- "3_ctrl"
- "4_ctrl"
- "5_ctrl"
- "6_ctrl"
- "7_ctrl"
- "8_ctrl"
- "9_ctrl"
- "10_ctrl"
- "11_ctrl"
- "12_ctrl"
- "13_ctrl"
- "14_ctrl"
- "15_ctrl"
- "16_ctrl"
- "17_ctrl"
- "18_ctrl"
- "19_ctrl"
- "20_ctrl"
- "21_ctrl"
- "22_ctrl"
- "23_ctrl"
- "24_ctrl"
- "25_ctrl"
- "26_ctrl"
- "27_ctrl"
- "28_ctrl"
- "29_ctrl"
- "30_ctrl"
- "31_ctrl"
- "32_ctrl"
- "33_ctrl"
- "34_ctrl"
- "35_ctrl"
- "36_ctrl"
- "37_ctrl"
- "38_ctrl"
- "39_ctrl"
- "40_ctrl"
- "41_ctrl"
- "42_ctrl"
- "43_ctrl"
- "44_ctrl"
- "45_ctrl"
- "46_ctrl"
- "47_ctrl"
- "48_ctrl"
- "49_ctrl"
- "50_ctrl"
- "51_ctrl"
- "52_ctrl"
- "53_ctrl"
- "54_ctrl"
- "55_ctrl"
- "56_ctrl"
- "57_ctrl"
- "58_ctrl"
- "59_ctrl"
- "60_ctrl"
- "61_ctrl"
- "62_ctrl"
- "63_ctrl"
- "64_ctrl"
- "65_ctrl"
- "66_ctrl"
- "67_ctrl"
- "68_ctrl"
- "69_ctrl"
- "70_ctrl"
- "71_ctrl"
- "72_ctrl"
- "73_ctrl"
- "74_ctrl"
- "75_ctrl"
- "76_ctrl"
- "77_ctrl"
- "78_ctrl"
- "79_ctrl"
- "80_ctrl"
- "81_ctrl"
- "82_ctrl"
- "83_ctrl"
- "84_ctrl"
- "85_ctrl"
- "86_ctrl"
- "87_ctrl"
- "88_ctrl"
- "89_ctrl"
- "90_ctrl"
- "91_ctrl"
- "92_ctrl"
- "93_ctrl"
- "94_ctrl"
- "95_ctrl"
- "96_ctrl"
- "97_ctrl"
- "98_ctrl"
- "99_ctrl"
- "100_ctrl"
- "101_ctrl"
- "102_ctrl"
- "103_ctrl"
- "104_ctrl"
- "105_ctrl"
- "106_ctrl"
- "107_ctrl"
- "108_ctrl"
- "109_ctrl"
- "110_ctrl"
- "111_ctrl"
- "112_ctrl"
- "113_ctrl"
- "114_ctrl"
- "115_ctrl"
- "116_ctrl"
- "117_ctrl"
- "118_ctrl"
- "119_ctrl"
- "120_ctrl"
- "121_ctrl"
- "122_ctrl"
- "123_ctrl"
- "124_ctrl"
- "125_ctrl"
- "126_ctrl"
- "127_ctrl"
- "128_ctrl"
- "129_ctrl"
- "130_ctrl"
- "131_ctrl"
- "132_ctrl"
- "133_ctrl"
- "134_ctrl"
- "135_ctrl"
- "136_ctrl"
- "137_ctrl"
- "138_ctrl"
- "139_ctrl"
- "140_ctrl"
- "141_ctrl"
- "142_ctrl"
- "143_ctrl"
- "144_ctrl"
- "145_ctrl"
- "146_ctrl"
- "147_ctrl"
- "148_ctrl"
- "149_ctrl"
- "150_ctrl"
- "151_ctrl"
- "152_ctrl"
- "153_ctrl"
- "154_ctrl"
- "155_ctrl"
- "156_ctrl"
- "157_ctrl"
- "158_ctrl"
- "159_ctrl"
- "160_ctrl"
- "161_ctrl"
- "162_ctrl"
- "163_ctrl"
- "164_ctrl"
- "165_ctrl"
- "166_ctrl"
- "167_ctrl"
- "168_ctrl"
- "169_ctrl"
- "170_ctrl"
- "171_ctrl"
- "172_ctrl"
- "173_ctrl"
- "174_ctrl"
- "175_ctrl"
- "176_ctrl"
- "177_ctrl"
- "178_ctrl"
- "179_ctrl"
- "180_ctrl"
- "181_ctrl"
- "182_ctrl"
- "183_ctrl"
- "184_ctrl"
- "185_ctrl"
- "186_ctrl"
- "187_ctrl"
- "188_ctrl"
- "189_ctrl"
- "190_ctrl"
- "191_ctrl"
- "192_ctrl"
- "193_ctrl"
- "194_ctrl"
- "195_ctrl"
- "196_ctrl"
- "197_ctrl"
- "198_ctrl"
- "199_ctrl"
- "200_ctrl"
"""
}
test_microc_rotated_yaml_contents = {"globals.yaml":"""
#output_folder: ./runs/gene_expression_microc_rotated_pure_conv/ #gene_expression_prediction_no_H3K27ac_uncoupled/ #gene_expression_cnn_1kbp_401_bins_2_cond_reloaded/
#manual_valid_ids_file: ./annotations/manual_validation_ids_chr17.txt #manual_validation_ids_chr17_uncoupled.txt
checkpoint_interval: 30300
sample_interval: 30300
n_epochs: 120
batch_size: 64
optimizer: "adamw"
lr: 0.0001
device: "cuda"
compute_attributions: true
# latent_sampling:
# layers_to_sample:
# - "input_modules.contact_maps.feature_extractor.conv.0.conv_1"
#     - "input_modules.contact_maps.feature_extractor.conv.1.conv_2"
""",
"input_cnn.yaml": """
input_info:
input_source: ./inputs/landscape_arrays/test/
input_name: gene_expression
input_type: array
model_config:
model_type: cnn
#pre_normalization: "instancenorm"
model_init_config:
num_output_features: 512 # before fc_repr_dim
layers: [4,4]
kernel_height: 1
down_stride_width: 2
first_stride_expansion_width: 1
first_kernel_expansion_height: 4 #5
kernel_width: 10 #10
dilation_factor_width: 2
dilation_factor_height: 1
channel_exp_base: 5 #3 #-1
first_channel_expansion: 2
rb_do: .3
stochastic_depth_p: .1
attention_inclusion_cutoff: 1 #50 #256
""",
"""input_cnn_microc_rotated.yaml""": """
input_info:
input_source: ./inputs/microC_rotated/test/ #arrays_train_100bp_no_H3K27ac_uncoupled/
input_name: contact_maps
input_type: array
model_config:
model_type: cnn
#pre_normalization: "instancenorm"
model_init_config:
num_output_features: 512 #before fc_repr_dim
layers: [1,2]
kernel_height: 1
down_stride_width: 2
first_stride_expansion_width: 1
first_kernel_expansion_height: 129 #Height of the array
kernel_width: 10 #10
dilation_factor_width: 2
channel_exp_base: 4 #3 #-1
first_channel_expansion: 2
rb_do: .3
stochastic_depth_p: .1
attention_inclusion_cutoff: 1 # 50 #128 #256
""",
"fusion.yaml": """
model_config:
fc_do: 0.4
fc_task_dim: 256
layers:
- 2
rb_do: 0.4
stochastic_depth_p: 0.5
model_type: "mlp-residual"
""",
"outputs_2_cond.yaml":"""
output_info:
output_name: expression_output
output_source: ./targets/test_targets.csv
output_type: tabular
model_config: # <- new
model_type: linear # <- new
output_type_info:
con_loss_name: "SmoothL1Loss"
target_con_columns:
- "-200_ctrl"
- "-199_ctrl"
- "-198_ctrl"
- "-197_ctrl"
- "-196_ctrl"
- "-195_ctrl"
- "-194_ctrl"
- "-193_ctrl"
- "-192_ctrl"
- "-191_ctrl"
- "-190_ctrl"
- "-189_ctrl"
- "-188_ctrl"
- "-187_ctrl"
- "-186_ctrl"
- "-185_ctrl"
- "-184_ctrl"
- "-183_ctrl"
- "-182_ctrl"
- "-181_ctrl"
- "-180_ctrl"
- "-179_ctrl"
- "-178_ctrl"
- "-177_ctrl"
- "-176_ctrl"
- "-175_ctrl"
- "-174_ctrl"
- "-173_ctrl"
- "-172_ctrl"
- "-171_ctrl"
- "-170_ctrl"
- "-169_ctrl"
- "-168_ctrl"
- "-167_ctrl"
- "-166_ctrl"
- "-165_ctrl"
- "-164_ctrl"
- "-163_ctrl"
- "-162_ctrl"
- "-161_ctrl"
- "-160_ctrl"
- "-159_ctrl"
- "-158_ctrl"
- "-157_ctrl"
- "-156_ctrl"
- "-155_ctrl"
- "-154_ctrl"
- "-153_ctrl"
- "-152_ctrl"
- "-151_ctrl"
- "-150_ctrl"
- "-149_ctrl"
- "-148_ctrl"
- "-147_ctrl"
- "-146_ctrl"
- "-145_ctrl"
- "-144_ctrl"
- "-143_ctrl"
- "-142_ctrl"
- "-141_ctrl"
- "-140_ctrl"
- "-139_ctrl"
- "-138_ctrl"
- "-137_ctrl"
- "-136_ctrl"
- "-135_ctrl"
- "-134_ctrl"
- "-133_ctrl"
- "-132_ctrl"
- "-131_ctrl"
- "-130_ctrl"
- "-129_ctrl"
- "-128_ctrl"
- "-127_ctrl"
- "-126_ctrl"
- "-125_ctrl"
- "-124_ctrl"
- "-123_ctrl"
- "-122_ctrl"
- "-121_ctrl"
- "-120_ctrl"
- "-119_ctrl"
- "-118_ctrl"
- "-117_ctrl"
- "-116_ctrl"
- "-115_ctrl"
- "-114_ctrl"
- "-113_ctrl"
- "-112_ctrl"
- "-111_ctrl"
- "-110_ctrl"
- "-109_ctrl"
- "-108_ctrl"
- "-107_ctrl"
- "-106_ctrl"
- "-105_ctrl"
- "-104_ctrl"
- "-103_ctrl"
- "-102_ctrl"
- "-101_ctrl"
- "-100_ctrl"
- "-99_ctrl"
- "-98_ctrl"
- "-97_ctrl"
- "-96_ctrl"
- "-95_ctrl"
- "-94_ctrl"
- "-93_ctrl"
- "-92_ctrl"
- "-91_ctrl"
- "-90_ctrl"
- "-89_ctrl"
- "-88_ctrl"
- "-87_ctrl"
- "-86_ctrl"
- "-85_ctrl"
- "-84_ctrl"
- "-83_ctrl"
- "-82_ctrl"
- "-81_ctrl"
- "-80_ctrl"
- "-79_ctrl"
- "-78_ctrl"
- "-77_ctrl"
- "-76_ctrl"
- "-75_ctrl"
- "-74_ctrl"
- "-73_ctrl"
- "-72_ctrl"
- "-71_ctrl"
- "-70_ctrl"
- "-69_ctrl"
- "-68_ctrl"
- "-67_ctrl"
- "-66_ctrl"
- "-65_ctrl"
- "-64_ctrl"
- "-63_ctrl"
- "-62_ctrl"
- "-61_ctrl"
- "-60_ctrl"
- "-59_ctrl"
- "-58_ctrl"
- "-57_ctrl"
- "-56_ctrl"
- "-55_ctrl"
- "-54_ctrl"
- "-53_ctrl"
- "-52_ctrl"
- "-51_ctrl"
- "-50_ctrl"
- "-49_ctrl"
- "-48_ctrl"
- "-47_ctrl"
- "-46_ctrl"
- "-45_ctrl"
- "-44_ctrl"
- "-43_ctrl"
- "-42_ctrl"
- "-41_ctrl"
- "-40_ctrl"
- "-39_ctrl"
- "-38_ctrl"
- "-37_ctrl"
- "-36_ctrl"
- "-35_ctrl"
- "-34_ctrl"
- "-33_ctrl"
- "-32_ctrl"
- "-31_ctrl"
- "-30_ctrl"
- "-29_ctrl"
- "-28_ctrl"
- "-27_ctrl"
- "-26_ctrl"
- "-25_ctrl"
- "-24_ctrl"
- "-23_ctrl"
- "-22_ctrl"
- "-21_ctrl"
- "-20_ctrl"
- "-19_ctrl"
- "-18_ctrl"
- "-17_ctrl"
- "-16_ctrl"
- "-15_ctrl"
- "-14_ctrl"
- "-13_ctrl"
- "-12_ctrl"
- "-11_ctrl"
- "-10_ctrl"
- "-9_ctrl"
- "-8_ctrl"
- "-7_ctrl"
- "-6_ctrl"
- "-5_ctrl"
- "-4_ctrl"
- "-3_ctrl"
- "-2_ctrl"
- "-1_ctrl"
- "0_ctrl"
- "1_ctrl"
- "2_ctrl"
- "3_ctrl"
- "4_ctrl"
- "5_ctrl"
- "6_ctrl"
- "7_ctrl"
- "8_ctrl"
- "9_ctrl"
- "10_ctrl"
- "11_ctrl"
- "12_ctrl"
- "13_ctrl"
- "14_ctrl"
- "15_ctrl"
- "16_ctrl"
- "17_ctrl"
- "18_ctrl"
- "19_ctrl"
- "20_ctrl"
- "21_ctrl"
- "22_ctrl"
- "23_ctrl"
- "24_ctrl"
- "25_ctrl"
- "26_ctrl"
- "27_ctrl"
- "28_ctrl"
- "29_ctrl"
- "30_ctrl"
- "31_ctrl"
- "32_ctrl"
- "33_ctrl"
- "34_ctrl"
- "35_ctrl"
- "36_ctrl"
- "37_ctrl"
- "38_ctrl"
- "39_ctrl"
- "40_ctrl"
- "41_ctrl"
- "42_ctrl"
- "43_ctrl"
- "44_ctrl"
- "45_ctrl"
- "46_ctrl"
- "47_ctrl"
- "48_ctrl"
- "49_ctrl"
- "50_ctrl"
- "51_ctrl"
- "52_ctrl"
- "53_ctrl"
- "54_ctrl"
- "55_ctrl"
- "56_ctrl"
- "57_ctrl"
- "58_ctrl"
- "59_ctrl"
- "60_ctrl"
- "61_ctrl"
- "62_ctrl"
- "63_ctrl"
- "64_ctrl"
- "65_ctrl"
- "66_ctrl"
- "67_ctrl"
- "68_ctrl"
- "69_ctrl"
- "70_ctrl"
- "71_ctrl"
- "72_ctrl"
- "73_ctrl"
- "74_ctrl"
- "75_ctrl"
- "76_ctrl"
- "77_ctrl"
- "78_ctrl"
- "79_ctrl"
- "80_ctrl"
- "81_ctrl"
- "82_ctrl"
- "83_ctrl"
- "84_ctrl"
- "85_ctrl"
- "86_ctrl"
- "87_ctrl"
- "88_ctrl"
- "89_ctrl"
- "90_ctrl"
- "91_ctrl"
- "92_ctrl"
- "93_ctrl"
- "94_ctrl"
- "95_ctrl"
- "96_ctrl"
- "97_ctrl"
- "98_ctrl"
- "99_ctrl"
- "100_ctrl"
- "101_ctrl"
- "102_ctrl"
- "103_ctrl"
- "104_ctrl"
- "105_ctrl"
- "106_ctrl"
- "107_ctrl"
- "108_ctrl"
- "109_ctrl"
- "110_ctrl"
- "111_ctrl"
- "112_ctrl"
- "113_ctrl"
- "114_ctrl"
- "115_ctrl"
- "116_ctrl"
- "117_ctrl"
- "118_ctrl"
- "119_ctrl"
- "120_ctrl"
- "121_ctrl"
- "122_ctrl"
- "123_ctrl"
- "124_ctrl"
- "125_ctrl"
- "126_ctrl"
- "127_ctrl"
- "128_ctrl"
- "129_ctrl"
- "130_ctrl"
- "131_ctrl"
- "132_ctrl"
- "133_ctrl"
- "134_ctrl"
- "135_ctrl"
- "136_ctrl"
- "137_ctrl"
- "138_ctrl"
- "139_ctrl"
- "140_ctrl"
- "141_ctrl"
- "142_ctrl"
- "143_ctrl"
- "144_ctrl"
- "145_ctrl"
- "146_ctrl"
- "147_ctrl"
- "148_ctrl"
- "149_ctrl"
- "150_ctrl"
- "151_ctrl"
- "152_ctrl"
- "153_ctrl"
- "154_ctrl"
- "155_ctrl"
- "156_ctrl"
- "157_ctrl"
- "158_ctrl"
- "159_ctrl"
- "160_ctrl"
- "161_ctrl"
- "162_ctrl"
- "163_ctrl"
- "164_ctrl"
- "165_ctrl"
- "166_ctrl"
- "167_ctrl"
- "168_ctrl"
- "169_ctrl"
- "170_ctrl"
- "171_ctrl"
- "172_ctrl"
- "173_ctrl"
- "174_ctrl"
- "175_ctrl"
- "176_ctrl"
- "177_ctrl"
- "178_ctrl"
- "179_ctrl"
- "180_ctrl"
- "181_ctrl"
- "182_ctrl"
- "183_ctrl"
- "184_ctrl"
- "185_ctrl"
- "186_ctrl"
- "187_ctrl"
- "188_ctrl"
- "189_ctrl"
- "190_ctrl"
- "191_ctrl"
- "192_ctrl"
- "193_ctrl"
- "194_ctrl"
- "195_ctrl"
- "196_ctrl"
- "197_ctrl"
- "198_ctrl"
- "199_ctrl"
- "200_ctrl"
"""
}
training_attention_yaml_contents = {"globals.yaml":"""
output_folder: ./runs/gene_expression_only_chrom_attention/ #gene_expression_prediction_no_H3K27ac_uncoupled/ #gene_expression_cnn_1kbp_401_bins_2_cond_reloaded/
manual_valid_ids_file: ./annotations/manual_validation_ids_chr17.txt #manual_validation_ids_chr17_uncoupled.txt
checkpoint_interval: 30300 # 100 epochs #60000
sample_interval: 30300 #60000
n_epochs: 120
batch_size: 64
optimizer: "adamw"
lr: 0.0001 #0.0001
device: "cuda"
compute_attributions: false
# latent_sampling:
# layers_to_sample:
# - "input_modules.contact_maps.feature_extractor.conv.0.conv_1"
# - "input_modules.contact_maps.feature_extractor.conv.1.conv_2"
# - "input_modules.contact_maps.feature_extractor.conv.2.conv_2"
# - "input_modules.contact_maps.feature_extractor.conv.3.conv_2"
# attribution_background_samples: 512
# attributions_every_sample_factor: 1
#pretrained_checkpoint: best_models/gene_expression_exformer_unlimited_chrom_and_micro_with_attention_model_117600_perf-average=0.8435.pt
""",
"input_cnn.yaml": """
input_info:
input_source: ./inputs/landscape_arrays/training/
input_name: gene_expression
input_type: array
model_config:
model_type: cnn
#pre_normalization: "instancenorm"
model_init_config:
num_output_features: 512 # before fc_repr_dim
layers: [4,4]
kernel_height: 1
down_stride_width: 2
first_stride_expansion_width: 1
first_kernel_expansion_height: 4 #5
kernel_width: 10 #10
dilation_factor_width: 2
dilation_factor_height: 1
channel_exp_base: 5 #3 #-1
first_channel_expansion: 2
rb_do: .3
stochastic_depth_p: .1
attention_inclusion_cutoff: 50 #256
""",
"fusion.yaml": """
model_config:
fc_do: 0.4
fc_task_dim: 256
layers:
- 2
rb_do: 0.4
stochastic_depth_p: 0.5
model_type: "mlp-residual"
""",
"outputs_2_cond.yaml":"""
output_info:
output_name: expression_output
output_source: ./targets/training_targets.csv
output_type: tabular
model_config: # <- new
model_type: linear # <- new
output_type_info:
con_loss_name: "SmoothL1Loss"
target_con_columns:
- "-200_ctrl"
- "-199_ctrl"
- "-198_ctrl"
- "-197_ctrl"
- "-196_ctrl"
- "-195_ctrl"
- "-194_ctrl"
- "-193_ctrl"
- "-192_ctrl"
- "-191_ctrl"
- "-190_ctrl"
- "-189_ctrl"
- "-188_ctrl"
- "-187_ctrl"
- "-186_ctrl"
- "-185_ctrl"
- "-184_ctrl"
- "-183_ctrl"
- "-182_ctrl"
- "-181_ctrl"
- "-180_ctrl"
- "-179_ctrl"
- "-178_ctrl"
- "-177_ctrl"
- "-176_ctrl"
- "-175_ctrl"
- "-174_ctrl"
- "-173_ctrl"
- "-172_ctrl"
- "-171_ctrl"
- "-170_ctrl"
- "-169_ctrl"
- "-168_ctrl"
- "-167_ctrl"
- "-166_ctrl"
- "-165_ctrl"
- "-164_ctrl"
- "-163_ctrl"
- "-162_ctrl"
- "-161_ctrl"
- "-160_ctrl"
- "-159_ctrl"
- "-158_ctrl"
- "-157_ctrl"
- "-156_ctrl"
- "-155_ctrl"
- "-154_ctrl"
- "-153_ctrl"
- "-152_ctrl"
- "-151_ctrl"
- "-150_ctrl"
- "-149_ctrl"
- "-148_ctrl"
- "-147_ctrl"
- "-146_ctrl"
- "-145_ctrl"
- "-144_ctrl"
- "-143_ctrl"
- "-142_ctrl"
- "-141_ctrl"
- "-140_ctrl"
- "-139_ctrl"
- "-138_ctrl"
- "-137_ctrl"
- "-136_ctrl"
- "-135_ctrl"
- "-134_ctrl"
- "-133_ctrl"
- "-132_ctrl"
- "-131_ctrl"
- "-130_ctrl"
- "-129_ctrl"
- "-128_ctrl"
- "-127_ctrl"
- "-126_ctrl"
- "-125_ctrl"
- "-124_ctrl"
- "-123_ctrl"
- "-122_ctrl"
- "-121_ctrl"
- "-120_ctrl"
- "-119_ctrl"
- "-118_ctrl"
- "-117_ctrl"
- "-116_ctrl"
- "-115_ctrl"
- "-114_ctrl"
- "-113_ctrl"
- "-112_ctrl"
- "-111_ctrl"
- "-110_ctrl"
- "-109_ctrl"
- "-108_ctrl"
- "-107_ctrl"
- "-106_ctrl"
- "-105_ctrl"
- "-104_ctrl"
- "-103_ctrl"
- "-102_ctrl"
- "-101_ctrl"
- "-100_ctrl"
- "-99_ctrl"
- "-98_ctrl"
- "-97_ctrl"
- "-96_ctrl"
- "-95_ctrl"
- "-94_ctrl"
- "-93_ctrl"
- "-92_ctrl"
- "-91_ctrl"
- "-90_ctrl"
- "-89_ctrl"
- "-88_ctrl"
- "-87_ctrl"
- "-86_ctrl"
- "-85_ctrl"
- "-84_ctrl"
- "-83_ctrl"
- "-82_ctrl"
- "-81_ctrl"
- "-80_ctrl"
- "-79_ctrl"
- "-78_ctrl"
- "-77_ctrl"
- "-76_ctrl"
- "-75_ctrl"
- "-74_ctrl"
- "-73_ctrl"
- "-72_ctrl"
- "-71_ctrl"
- "-70_ctrl"
- "-69_ctrl"
- "-68_ctrl"
- "-67_ctrl"
- "-66_ctrl"
- "-65_ctrl"
- "-64_ctrl"
- "-63_ctrl"
- "-62_ctrl"
- "-61_ctrl"
- "-60_ctrl"
- "-59_ctrl"
- "-58_ctrl"
- "-57_ctrl"
- "-56_ctrl"
- "-55_ctrl"
- "-54_ctrl"
- "-53_ctrl"
- "-52_ctrl"
- "-51_ctrl"
- "-50_ctrl"
- "-49_ctrl"
- "-48_ctrl"
- "-47_ctrl"
- "-46_ctrl"
- "-45_ctrl"
- "-44_ctrl"
- "-43_ctrl"
- "-42_ctrl"
- "-41_ctrl"
- "-40_ctrl"
- "-39_ctrl"
- "-38_ctrl"
- "-37_ctrl"
- "-36_ctrl"
- "-35_ctrl"
- "-34_ctrl"
- "-33_ctrl"
- "-32_ctrl"
- "-31_ctrl"
- "-30_ctrl"
- "-29_ctrl"
- "-28_ctrl"
- "-27_ctrl"
- "-26_ctrl"
- "-25_ctrl"
- "-24_ctrl"
- "-23_ctrl"
- "-22_ctrl"
- "-21_ctrl"
- "-20_ctrl"
- "-19_ctrl"
- "-18_ctrl"
- "-17_ctrl"
- "-16_ctrl"
- "-15_ctrl"
- "-14_ctrl"
- "-13_ctrl"
- "-12_ctrl"
- "-11_ctrl"
- "-10_ctrl"
- "-9_ctrl"
- "-8_ctrl"
- "-7_ctrl"
- "-6_ctrl"
- "-5_ctrl"
- "-4_ctrl"
- "-3_ctrl"
- "-2_ctrl"
- "-1_ctrl"
- "0_ctrl"
- "1_ctrl"
- "2_ctrl"
- "3_ctrl"
- "4_ctrl"
- "5_ctrl"
- "6_ctrl"
- "7_ctrl"
- "8_ctrl"
- "9_ctrl"
- "10_ctrl"
- "11_ctrl"
- "12_ctrl"
- "13_ctrl"
- "14_ctrl"
- "15_ctrl"
- "16_ctrl"
- "17_ctrl"
- "18_ctrl"
- "19_ctrl"
- "20_ctrl"
- "21_ctrl"
- "22_ctrl"
- "23_ctrl"
- "24_ctrl"
- "25_ctrl"
- "26_ctrl"
- "27_ctrl"
- "28_ctrl"
- "29_ctrl"
- "30_ctrl"
- "31_ctrl"
- "32_ctrl"
- "33_ctrl"
- "34_ctrl"
- "35_ctrl"
- "36_ctrl"
- "37_ctrl"
- "38_ctrl"
- "39_ctrl"
- "40_ctrl"
- "41_ctrl"
- "42_ctrl"
- "43_ctrl"
- "44_ctrl"
- "45_ctrl"
- "46_ctrl"
- "47_ctrl"
- "48_ctrl"
- "49_ctrl"
- "50_ctrl"
- "51_ctrl"
- "52_ctrl"
- "53_ctrl"
- "54_ctrl"
- "55_ctrl"
- "56_ctrl"
- "57_ctrl"
- "58_ctrl"
- "59_ctrl"
- "60_ctrl"
- "61_ctrl"
- "62_ctrl"
- "63_ctrl"
- "64_ctrl"
- "65_ctrl"
- "66_ctrl"
- "67_ctrl"
- "68_ctrl"
- "69_ctrl"
- "70_ctrl"
- "71_ctrl"
- "72_ctrl"
- "73_ctrl"
- "74_ctrl"
- "75_ctrl"
- "76_ctrl"
- "77_ctrl"
- "78_ctrl"
- "79_ctrl"
- "80_ctrl"
- "81_ctrl"
- "82_ctrl"
- "83_ctrl"
- "84_ctrl"
- "85_ctrl"
- "86_ctrl"
- "87_ctrl"
- "88_ctrl"
- "89_ctrl"
- "90_ctrl"
- "91_ctrl"
- "92_ctrl"
- "93_ctrl"
- "94_ctrl"
- "95_ctrl"
- "96_ctrl"
- "97_ctrl"
- "98_ctrl"
- "99_ctrl"
- "100_ctrl"
- "101_ctrl"
- "102_ctrl"
- "103_ctrl"
- "104_ctrl"
- "105_ctrl"
- "106_ctrl"
- "107_ctrl"
- "108_ctrl"
- "109_ctrl"
- "110_ctrl"
- "111_ctrl"
- "112_ctrl"
- "113_ctrl"
- "114_ctrl"
- "115_ctrl"
- "116_ctrl"
- "117_ctrl"
- "118_ctrl"
- "119_ctrl"
- "120_ctrl"
- "121_ctrl"
- "122_ctrl"
- "123_ctrl"
- "124_ctrl"
- "125_ctrl"
- "126_ctrl"
- "127_ctrl"
- "128_ctrl"
- "129_ctrl"
- "130_ctrl"
- "131_ctrl"
- "132_ctrl"
- "133_ctrl"
- "134_ctrl"
- "135_ctrl"
- "136_ctrl"
- "137_ctrl"
- "138_ctrl"
- "139_ctrl"
- "140_ctrl"
- "141_ctrl"
- "142_ctrl"
- "143_ctrl"
- "144_ctrl"
- "145_ctrl"
- "146_ctrl"
- "147_ctrl"
- "148_ctrl"
- "149_ctrl"
- "150_ctrl"
- "151_ctrl"
- "152_ctrl"
- "153_ctrl"
- "154_ctrl"
- "155_ctrl"
- "156_ctrl"
- "157_ctrl"
- "158_ctrl"
- "159_ctrl"
- "160_ctrl"
- "161_ctrl"
- "162_ctrl"
- "163_ctrl"
- "164_ctrl"
- "165_ctrl"
- "166_ctrl"
- "167_ctrl"
- "168_ctrl"
- "169_ctrl"
- "170_ctrl"
- "171_ctrl"
- "172_ctrl"
- "173_ctrl"
- "174_ctrl"
- "175_ctrl"
- "176_ctrl"
- "177_ctrl"
- "178_ctrl"
- "179_ctrl"
- "180_ctrl"
- "181_ctrl"
- "182_ctrl"
- "183_ctrl"
- "184_ctrl"
- "185_ctrl"
- "186_ctrl"
- "187_ctrl"
- "188_ctrl"
- "189_ctrl"
- "190_ctrl"
- "191_ctrl"
- "192_ctrl"
- "193_ctrl"
- "194_ctrl"
- "195_ctrl"
- "196_ctrl"
- "197_ctrl"
- "198_ctrl"
- "199_ctrl"
- "200_ctrl"
"""
}
test_attention_yaml_contents = {"fusion.yaml":"""
model_config:
fc_do: 0.4
fc_task_dim: 256
layers:
- 2
rb_do: 0.4
stochastic_depth_p: 0.5
model_type: "mlp-residual"
""",
"globals.yaml": """
checkpoint_interval: 30300 # 100 epochs #60000
sample_interval: 30300 #60000
n_epochs: 120
batch_size: 64
optimizer: "adamw"
lr: 0.0001 #0.0001
device: "cuda"
compute_attributions: false
""",
"input_cnn.yaml": """
input_info:
input_source: ./inputs/landscape_arrays/test/ #inserted_enhancer_test #inputs/silenced_arrays/silenced_arrays_H2B_S.D
#./data/parsed_data/inputs/arrays_train_100bp_no_H3K27ac/ #arrays_train_100bp_no_H3K27ac_uncoupled/
input_name: gene_expression
input_type: array
model_config:
model_type: cnn
#pre_normalization: "instancenorm"
model_init_config:
num_output_features: 512 # before fc_repr_dim
layers: [4,4]
kernel_height: 1
down_stride_width: 2
first_stride_expansion_width: 1
first_kernel_expansion_height: 4 #5
kernel_width: 10 #10
dilation_factor_width: 2
dilation_factor_height: 1
channel_exp_base: 5 #3 #-1
first_channel_expansion: 2
rb_do: .3
stochastic_depth_p: .1
attention_inclusion_cutoff: 50 #256
""",
"outputs_2_cond.yaml": """
output_info:
output_name: expression_output
output_source: ./targets/test_targets.csv
#./data/parsed_data/targets/target_arrays_perturbational_inserted_enhancer.csv #target_arrays_perturbational_S.D.csv #target_arrays_1kbp_401_bins_2_conditions_decareads_abs.csv
output_type: tabular
model_config: # <- new
model_type: linear # <- new
output_type_info:
con_loss_name: "SmoothL1Loss"
target_con_columns:
- "-200_ctrl"
- "-199_ctrl"
- "-198_ctrl"
- "-197_ctrl"
- "-196_ctrl"
- "-195_ctrl"
- "-194_ctrl"
- "-193_ctrl"
- "-192_ctrl"
- "-191_ctrl"
- "-190_ctrl"
- "-189_ctrl"
- "-188_ctrl"
- "-187_ctrl"
- "-186_ctrl"
- "-185_ctrl"
- "-184_ctrl"
- "-183_ctrl"
- "-182_ctrl"
- "-181_ctrl"
- "-180_ctrl"
- "-179_ctrl"
- "-178_ctrl"
- "-177_ctrl"
- "-176_ctrl"
- "-175_ctrl"
- "-174_ctrl"
- "-173_ctrl"
- "-172_ctrl"
- "-171_ctrl"
- "-170_ctrl"
- "-169_ctrl"
- "-168_ctrl"
- "-167_ctrl"
- "-166_ctrl"
- "-165_ctrl"
- "-164_ctrl"
- "-163_ctrl"
- "-162_ctrl"
- "-161_ctrl"
- "-160_ctrl"
- "-159_ctrl"
- "-158_ctrl"
- "-157_ctrl"
- "-156_ctrl"
- "-155_ctrl"
- "-154_ctrl"
- "-153_ctrl"
- "-152_ctrl"
- "-151_ctrl"
- "-150_ctrl"
- "-149_ctrl"
- "-148_ctrl"
- "-147_ctrl"
- "-146_ctrl"
- "-145_ctrl"
- "-144_ctrl"
- "-143_ctrl"
- "-142_ctrl"
- "-141_ctrl"
- "-140_ctrl"
- "-139_ctrl"
- "-138_ctrl"
- "-137_ctrl"
- "-136_ctrl"
- "-135_ctrl"
- "-134_ctrl"
- "-133_ctrl"
- "-132_ctrl"
- "-131_ctrl"
- "-130_ctrl"
- "-129_ctrl"
- "-128_ctrl"
- "-127_ctrl"
- "-126_ctrl"
- "-125_ctrl"
- "-124_ctrl"
- "-123_ctrl"
- "-122_ctrl"
- "-121_ctrl"
- "-120_ctrl"
- "-119_ctrl"
- "-118_ctrl"
- "-117_ctrl"
- "-116_ctrl"
- "-115_ctrl"
- "-114_ctrl"
- "-113_ctrl"
- "-112_ctrl"
- "-111_ctrl"
- "-110_ctrl"
- "-109_ctrl"
- "-108_ctrl"
- "-107_ctrl"
- "-106_ctrl"
- "-105_ctrl"
- "-104_ctrl"
- "-103_ctrl"
- "-102_ctrl"
- "-101_ctrl"
- "-100_ctrl"
- "-99_ctrl"
- "-98_ctrl"
- "-97_ctrl"
- "-96_ctrl"
- "-95_ctrl"
- "-94_ctrl"
- "-93_ctrl"
- "-92_ctrl"
- "-91_ctrl"
- "-90_ctrl"
- "-89_ctrl"
- "-88_ctrl"
- "-87_ctrl"
- "-86_ctrl"
- "-85_ctrl"
- "-84_ctrl"
- "-83_ctrl"
- "-82_ctrl"
- "-81_ctrl"
- "-80_ctrl"
- "-79_ctrl"
- "-78_ctrl"
- "-77_ctrl"
- "-76_ctrl"
- "-75_ctrl"
- "-74_ctrl"
- "-73_ctrl"
- "-72_ctrl"
- "-71_ctrl"
- "-70_ctrl"
- "-69_ctrl"
- "-68_ctrl"
- "-67_ctrl"
- "-66_ctrl"
- "-65_ctrl"
- "-64_ctrl"
- "-63_ctrl"
- "-62_ctrl"
- "-61_ctrl"
- "-60_ctrl"
- "-59_ctrl"
- "-58_ctrl"
- "-57_ctrl"
- "-56_ctrl"
- "-55_ctrl"
- "-54_ctrl"
- "-53_ctrl"
- "-52_ctrl"
- "-51_ctrl"
- "-50_ctrl"
- "-49_ctrl"
- "-48_ctrl"
- "-47_ctrl"
- "-46_ctrl"
- "-45_ctrl"
- "-44_ctrl"
- "-43_ctrl"
- "-42_ctrl"
- "-41_ctrl"
- "-40_ctrl"
- "-39_ctrl"
- "-38_ctrl"
- "-37_ctrl"
- "-36_ctrl"
- "-35_ctrl"
- "-34_ctrl"
- "-33_ctrl"
- "-32_ctrl"
- "-31_ctrl"
- "-30_ctrl"
- "-29_ctrl"
- "-28_ctrl"
- "-27_ctrl"
- "-26_ctrl"
- "-25_ctrl"
- "-24_ctrl"
- "-23_ctrl"
- "-22_ctrl"
- "-21_ctrl"
- "-20_ctrl"
- "-19_ctrl"
- "-18_ctrl"
- "-17_ctrl"
- "-16_ctrl"
- "-15_ctrl"
- "-14_ctrl"
- "-13_ctrl"
- "-12_ctrl"
- "-11_ctrl"
- "-10_ctrl"
- "-9_ctrl"
- "-8_ctrl"
- "-7_ctrl"
- "-6_ctrl"
- "-5_ctrl"
- "-4_ctrl"
- "-3_ctrl"
- "-2_ctrl"
- "-1_ctrl"
- "0_ctrl"
- "1_ctrl"
- "2_ctrl"
- "3_ctrl"
- "4_ctrl"
- "5_ctrl"
- "6_ctrl"
- "7_ctrl"
- "8_ctrl"
- "9_ctrl"
- "10_ctrl"
- "11_ctrl"
- "12_ctrl"
- "13_ctrl"
- "14_ctrl"
- "15_ctrl"
- "16_ctrl"
- "17_ctrl"
- "18_ctrl"
- "19_ctrl"
- "20_ctrl"
- "21_ctrl"
- "22_ctrl"
- "23_ctrl"
- "24_ctrl"
- "25_ctrl"
- "26_ctrl"
- "27_ctrl"
- "28_ctrl"
- "29_ctrl"
- "30_ctrl"
- "31_ctrl"
- "32_ctrl"
- "33_ctrl"
- "34_ctrl"
- "35_ctrl"
- "36_ctrl"
- "37_ctrl"
- "38_ctrl"
- "39_ctrl"
- "40_ctrl"
- "41_ctrl"
- "42_ctrl"
- "43_ctrl"
- "44_ctrl"
- "45_ctrl"
- "46_ctrl"
- "47_ctrl"
- "48_ctrl"
- "49_ctrl"
- "50_ctrl"
- "51_ctrl"
- "52_ctrl"
- "53_ctrl"
- "54_ctrl"
- "55_ctrl"
- "56_ctrl"
- "57_ctrl"
- "58_ctrl"
- "59_ctrl"
- "60_ctrl"
- "61_ctrl"
- "62_ctrl"
- "63_ctrl"
- "64_ctrl"
- "65_ctrl"
- "66_ctrl"
- "67_ctrl"
- "68_ctrl"
- "69_ctrl"
- "70_ctrl"
- "71_ctrl"
- "72_ctrl"
- "73_ctrl"
- "74_ctrl"
- "75_ctrl"
- "76_ctrl"
- "77_ctrl"
- "78_ctrl"
- "79_ctrl"
- "80_ctrl"
- "81_ctrl"
- "82_ctrl"
- "83_ctrl"
- "84_ctrl"
- "85_ctrl"
- "86_ctrl"
- "87_ctrl"
- "88_ctrl"
- "89_ctrl"
- "90_ctrl"
- "91_ctrl"
- "92_ctrl"
- "93_ctrl"
- "94_ctrl"
- "95_ctrl"
- "96_ctrl"
- "97_ctrl"
- "98_ctrl"
- "99_ctrl"
- "100_ctrl"
- "101_ctrl"
- "102_ctrl"
- "103_ctrl"
- "104_ctrl"
- "105_ctrl"
- "106_ctrl"
- "107_ctrl"
- "108_ctrl"
- "109_ctrl"
- "110_ctrl"
- "111_ctrl"
- "112_ctrl"
- "113_ctrl"
- "114_ctrl"
- "115_ctrl"
- "116_ctrl"
- "117_ctrl"
- "118_ctrl"
- "119_ctrl"
- "120_ctrl"
- "121_ctrl"
- "122_ctrl"
- "123_ctrl"
- "124_ctrl"
- "125_ctrl"
- "126_ctrl"
- "127_ctrl"
- "128_ctrl"
- "129_ctrl"
- "130_ctrl"
- "131_ctrl"
- "132_ctrl"
- "133_ctrl"
- "134_ctrl"
- "135_ctrl"
- "136_ctrl"
- "137_ctrl"
- "138_ctrl"
- "139_ctrl"
- "140_ctrl"
- "141_ctrl"
- "142_ctrl"
- "143_ctrl"
- "144_ctrl"
- "145_ctrl"
- "146_ctrl"
- "147_ctrl"
- "148_ctrl"
- "149_ctrl"
- "150_ctrl"
- "151_ctrl"
- "152_ctrl"
- "153_ctrl"
- "154_ctrl"
- "155_ctrl"
- "156_ctrl"
- "157_ctrl"
- "158_ctrl"
- "159_ctrl"
- "160_ctrl"
- "161_ctrl"
- "162_ctrl"
- "163_ctrl"
- "164_ctrl"
- "165_ctrl"
- "166_ctrl"
- "167_ctrl"
- "168_ctrl"
- "169_ctrl"
- "170_ctrl"
- "171_ctrl"
- "172_ctrl"
- "173_ctrl"
- "174_ctrl"
- "175_ctrl"
- "176_ctrl"
- "177_ctrl"
- "178_ctrl"
- "179_ctrl"
- "180_ctrl"
- "181_ctrl"
- "182_ctrl"
- "183_ctrl"
- "184_ctrl"
- "185_ctrl"
- "186_ctrl"
- "187_ctrl"
- "188_ctrl"
- "189_ctrl"
- "190_ctrl"
- "191_ctrl"
- "192_ctrl"
- "193_ctrl"
- "194_ctrl"
- "195_ctrl"
- "196_ctrl"
- "197_ctrl"
- "198_ctrl"
- "199_ctrl"
- "200_ctrl"
""" }
training_microc_latents_yaml_contents = {"globals.yaml":"""
output_folder: ./runs/gene_expression_microc_pure_conv_latents/ #gene_expression_prediction_no_H3K27ac_uncoupled/ #gene_expression_cnn_1kbp_401_bins_2_cond_reloaded/
#manual_valid_ids_file: ./annotations/manual_validation_ids_chr17.txt #manual_validation_ids_chr17_uncoupled.txt
checkpoint_interval: 1200
sample_interval: 1200
n_epochs: 120
batch_size: 64
optimizer: "adamw"
lr: 0.0001
device: "cuda"
compute_attributions: false
latent_sampling:
layers_to_sample:
- "input_modules.contact_maps.feature_extractor.conv.0.conv_1"
- "input_modules.contact_maps.feature_extractor.conv.1.conv_1"
- "input_modules.contact_maps.feature_extractor.conv.2.conv_1"
- "input_modules.contact_maps.feature_extractor.conv.3.conv_1"
""",
"input_cnn.yaml": """
input_info:
input_source: ./inputs/landscape_arrays/test/
input_name: gene_expression
input_type: array
model_config:
model_type: cnn
#pre_normalization: "instancenorm"
model_init_config:
num_output_features: 512 # before fc_repr_dim
layers: [4,4]
kernel_height: 1
down_stride_width: 2
first_stride_expansion_width: 1
first_kernel_expansion_height: 4 #5
kernel_width: 10 #10
dilation_factor_width: 2
dilation_factor_height: 1
channel_exp_base: 5 #3 #-1
first_channel_expansion: 2
rb_do: .3
stochastic_depth_p: .1
attention_inclusion_cutoff: 1 #50 #256
""",
"""input_cnn_microc.yaml""": """
input_info:
input_source: ./inputs/microC/test/
input_name: contact_maps
input_type: array
model_config:
model_type: cnn
#pre_normalization: "instancenorm"
model_init_config:
num_output_features: 512 #before fc_repr_dim
layers: [1,2]
kernel_height: 5
down_stride_width: 2
down_stride_height: 2 #5
kernel_width: 5 #10
dilation_factor_width: 2
dilation_factor_height: 2
channel_exp_base: 2 #3 #-1
first_channel_expansion: 2
rb_do: .3
stochastic_depth_p: .1
attention_inclusion_cutoff: 1 #128 #256
""",
"fusion.yaml": """
model_config:
fc_do: 0.4
fc_task_dim: 256
layers:
- 2
rb_do: 0.4
stochastic_depth_p: 0.5
model_type: "mlp-residual"
""",
"outputs_2_cond.yaml":"""
output_info:
output_name: expression_output
output_source: ./targets/test_targets.csv
output_type: tabular
model_config: # <- new
model_type: linear # <- new
output_type_info:
con_loss_name: "SmoothL1Loss"
target_con_columns:
- "-200_ctrl"
- "-199_ctrl"
- "-198_ctrl"
- "-197_ctrl"
- "-196_ctrl"
- "-195_ctrl"
- "-194_ctrl"
- "-193_ctrl"
- "-192_ctrl"
- "-191_ctrl"
- "-190_ctrl"
- "-189_ctrl"
- "-188_ctrl"
- "-187_ctrl"
- "-186_ctrl"
- "-185_ctrl"
- "-184_ctrl"
- "-183_ctrl"
- "-182_ctrl"
- "-181_ctrl"
- "-180_ctrl"
- "-179_ctrl"
- "-178_ctrl"
- "-177_ctrl"
- "-176_ctrl"
- "-175_ctrl"
- "-174_ctrl"
- "-173_ctrl"
- "-172_ctrl"
- "-171_ctrl"
- "-170_ctrl"
- "-169_ctrl"
- "-168_ctrl"
- "-167_ctrl"
- "-166_ctrl"
- "-165_ctrl"
- "-164_ctrl"
- "-163_ctrl"
- "-162_ctrl"
- "-161_ctrl"
- "-160_ctrl"
- "-159_ctrl"
- "-158_ctrl"
- "-157_ctrl"
- "-156_ctrl"
- "-155_ctrl"
- "-154_ctrl"
- "-153_ctrl"
- "-152_ctrl"
- "-151_ctrl"
- "-150_ctrl"
- "-149_ctrl"
- "-148_ctrl"
- "-147_ctrl"
- "-146_ctrl"
- "-145_ctrl"
- "-144_ctrl"
- "-143_ctrl"
- "-142_ctrl"
- "-141_ctrl"
- "-140_ctrl"
- "-139_ctrl"
- "-138_ctrl"
- "-137_ctrl"
- "-136_ctrl"
- "-135_ctrl"
- "-134_ctrl"
- "-133_ctrl"
- "-132_ctrl"
- "-131_ctrl"
- "-130_ctrl"
- "-129_ctrl"
- "-128_ctrl"
- "-127_ctrl"
- "-126_ctrl"
- "-125_ctrl"
- "-124_ctrl"
- "-123_ctrl"
- "-122_ctrl"
- "-121_ctrl"
- "-120_ctrl"
- "-119_ctrl"
- "-118_ctrl"
- "-117_ctrl"
- "-116_ctrl"
- "-115_ctrl"
- "-114_ctrl"
- "-113_ctrl"
- "-112_ctrl"
- "-111_ctrl"
- "-110_ctrl"
- "-109_ctrl"
- "-108_ctrl"
- "-107_ctrl"
- "-106_ctrl"
- "-105_ctrl"
- "-104_ctrl"
- "-103_ctrl"
- "-102_ctrl"
- "-101_ctrl"
- "-100_ctrl"
- "-99_ctrl"
- "-98_ctrl"
- "-97_ctrl"
- "-96_ctrl"
- "-95_ctrl"
- "-94_ctrl"
- "-93_ctrl"
- "-92_ctrl"
- "-91_ctrl"
- "-90_ctrl"
- "-89_ctrl"
- "-88_ctrl"
- "-87_ctrl"
- "-86_ctrl"
- "-85_ctrl"
- "-84_ctrl"
- "-83_ctrl"
- "-82_ctrl"
- "-81_ctrl"
- "-80_ctrl"
- "-79_ctrl"
- "-78_ctrl"
- "-77_ctrl"
- "-76_ctrl"
- "-75_ctrl"
- "-74_ctrl"
- "-73_ctrl"
- "-72_ctrl"
- "-71_ctrl"
- "-70_ctrl"
- "-69_ctrl"
- "-68_ctrl"
- "-67_ctrl"
- "-66_ctrl"
- "-65_ctrl"
- "-64_ctrl"
- "-63_ctrl"
- "-62_ctrl"
- "-61_ctrl"
- "-60_ctrl"
- "-59_ctrl"
- "-58_ctrl"
- "-57_ctrl"
- "-56_ctrl"
- "-55_ctrl"
- "-54_ctrl"
- "-53_ctrl"
- "-52_ctrl"
- "-51_ctrl"
- "-50_ctrl"
- "-49_ctrl"
- "-48_ctrl"
- "-47_ctrl"
- "-46_ctrl"
- "-45_ctrl"
- "-44_ctrl"
- "-43_ctrl"
- "-42_ctrl"
- "-41_ctrl"
- "-40_ctrl"
- "-39_ctrl"
- "-38_ctrl"
- "-37_ctrl"
- "-36_ctrl"
- "-35_ctrl"
- "-34_ctrl"
- "-33_ctrl"
- "-32_ctrl"
- "-31_ctrl"
- "-30_ctrl"
- "-29_ctrl"
- "-28_ctrl"
- "-27_ctrl"
- "-26_ctrl"
- "-25_ctrl"
- "-24_ctrl"
- "-23_ctrl"
- "-22_ctrl"
- "-21_ctrl"
- "-20_ctrl"
- "-19_ctrl"
- "-18_ctrl"
- "-17_ctrl"
- "-16_ctrl"
- "-15_ctrl"
- "-14_ctrl"
- "-13_ctrl"
- "-12_ctrl"
- "-11_ctrl"
- "-10_ctrl"
- "-9_ctrl"
- "-8_ctrl"
- "-7_ctrl"
- "-6_ctrl"
- "-5_ctrl"
- "-4_ctrl"
- "-3_ctrl"
- "-2_ctrl"
- "-1_ctrl"
- "0_ctrl"
- "1_ctrl"
- "2_ctrl"
- "3_ctrl"
- "4_ctrl"
- "5_ctrl"
- "6_ctrl"
- "7_ctrl"
- "8_ctrl"
- "9_ctrl"
- "10_ctrl"
- "11_ctrl"
- "12_ctrl"
- "13_ctrl"
- "14_ctrl"
- "15_ctrl"
- "16_ctrl"
- "17_ctrl"
- "18_ctrl"
- "19_ctrl"
- "20_ctrl"
- "21_ctrl"
- "22_ctrl"
- "23_ctrl"
- "24_ctrl"
- "25_ctrl"
- "26_ctrl"
- "27_ctrl"
- "28_ctrl"
- "29_ctrl"
- "30_ctrl"
- "31_ctrl"
- "32_ctrl"
- "33_ctrl"
- "34_ctrl"
- "35_ctrl"
- "36_ctrl"
- "37_ctrl"
- "38_ctrl"
- "39_ctrl"
- "40_ctrl"
- "41_ctrl"
- "42_ctrl"
- "43_ctrl"
- "44_ctrl"
- "45_ctrl"
- "46_ctrl"
- "47_ctrl"
- "48_ctrl"
- "49_ctrl"
- "50_ctrl"
- "51_ctrl"
- "52_ctrl"
- "53_ctrl"
- "54_ctrl"
- "55_ctrl"
- "56_ctrl"
- "57_ctrl"
- "58_ctrl"
- "59_ctrl"
- "60_ctrl"
- "61_ctrl"
- "62_ctrl"
- "63_ctrl"
- "64_ctrl"
- "65_ctrl"
- "66_ctrl"
- "67_ctrl"
- "68_ctrl"
- "69_ctrl"
- "70_ctrl"
- "71_ctrl"
- "72_ctrl"
- "73_ctrl"
- "74_ctrl"
- "75_ctrl"
- "76_ctrl"
- "77_ctrl"
- "78_ctrl"
- "79_ctrl"
- "80_ctrl"
- "81_ctrl"
- "82_ctrl"
- "83_ctrl"
- "84_ctrl"
- "85_ctrl"
- "86_ctrl"
- "87_ctrl"
- "88_ctrl"
- "89_ctrl"
- "90_ctrl"
- "91_ctrl"
- "92_ctrl"
- "93_ctrl"
- "94_ctrl"
- "95_ctrl"
- "96_ctrl"
- "97_ctrl"
- "98_ctrl"
- "99_ctrl"
- "100_ctrl"
- "101_ctrl"
- "102_ctrl"
- "103_ctrl"
- "104_ctrl"
- "105_ctrl"
- "106_ctrl"
- "107_ctrl"
- "108_ctrl"
- "109_ctrl"
- "110_ctrl"
- "111_ctrl"
- "112_ctrl"
- "113_ctrl"
- "114_ctrl"
- "115_ctrl"
- "116_ctrl"
- "117_ctrl"
- "118_ctrl"
- "119_ctrl"
- "120_ctrl"
- "121_ctrl"
- "122_ctrl"
- "123_ctrl"
- "124_ctrl"
- "125_ctrl"
- "126_ctrl"
- "127_ctrl"
- "128_ctrl"
- "129_ctrl"
- "130_ctrl"
- "131_ctrl"
- "132_ctrl"
- "133_ctrl"
- "134_ctrl"
- "135_ctrl"
- "136_ctrl"
- "137_ctrl"
- "138_ctrl"
- "139_ctrl"
- "140_ctrl"
- "141_ctrl"
- "142_ctrl"
- "143_ctrl"
- "144_ctrl"
- "145_ctrl"
- "146_ctrl"
- "147_ctrl"
- "148_ctrl"
- "149_ctrl"
- "150_ctrl"
- "151_ctrl"
- "152_ctrl"
- "153_ctrl"
- "154_ctrl"
- "155_ctrl"
- "156_ctrl"
- "157_ctrl"
- "158_ctrl"
- "159_ctrl"
- "160_ctrl"
- "161_ctrl"
- "162_ctrl"
- "163_ctrl"
- "164_ctrl"
- "165_ctrl"
- "166_ctrl"
- "167_ctrl"
- "168_ctrl"
- "169_ctrl"
- "170_ctrl"
- "171_ctrl"
- "172_ctrl"
- "173_ctrl"
- "174_ctrl"
- "175_ctrl"
- "176_ctrl"
- "177_ctrl"
- "178_ctrl"
- "179_ctrl"
- "180_ctrl"
- "181_ctrl"
- "182_ctrl"
- "183_ctrl"
- "184_ctrl"
- "185_ctrl"
- "186_ctrl"
- "187_ctrl"
- "188_ctrl"
- "189_ctrl"
- "190_ctrl"
- "191_ctrl"
- "192_ctrl"
- "193_ctrl"
- "194_ctrl"
- "195_ctrl"
- "196_ctrl"
- "197_ctrl"
- "198_ctrl"
- "199_ctrl"
- "200_ctrl"
"""
}
predict_perturbations_H3K27ac_yaml_contents = {"fusion.yaml":"""
model_config:
fc_do: 0.4
fc_task_dim: 256
layers:
- 2
rb_do: 0.4
stochastic_depth_p: 0.5
model_type: "mlp-residual"
""",
"globals.yaml": """
checkpoint_interval: 30300 # 100 epochs #60000
sample_interval: 30300 #60000
n_epochs: 120
batch_size: 64
optimizer: "adamw"
lr: 0.0001 #0.0001
device: "cuda"
compute_attributions: false
""",
"input_cnn.yaml": """
input_info:
input_source: ./inputs/perturbed_landscape_arrays/test_only_H3K27ac_silenced/ #inserted_enhancer_test #inputs/silenced_arrays/silenced_arrays_H2B_S.D
#./data/parsed_data/inputs/arrays_train_100bp_no_H3K27ac/ #arrays_train_100bp_no_H3K27ac_uncoupled/
input_name: gene_expression
input_type: array
model_config:
model_type: cnn
#pre_normalization: "instancenorm"
model_init_config:
num_output_features: 512 # before fc_repr_dim
layers: [4,4]
kernel_height: 1
down_stride_width: 2
first_stride_expansion_width: 1
first_kernel_expansion_height: 4 #5
kernel_width: 10 #10
dilation_factor_width: 2
dilation_factor_height: 1
channel_exp_base: 5 #3 #-1
first_channel_expansion: 2
rb_do: .3
stochastic_depth_p: .1
attention_inclusion_cutoff: 1 #50 #256
""",
"outputs_2_cond.yaml": """
output_info:
output_name: expression_output
output_source: ./targets/perturbed_targets.csv
#./data/parsed_data/targets/target_arrays_perturbational_inserted_enhancer.csv #target_arrays_perturbational_S.D.csv #target_arrays_1kbp_401_bins_2_conditions_decareads_abs.csv
output_type: tabular
model_config: # <- new
model_type: linear # <- new
output_type_info:
con_loss_name: "SmoothL1Loss"
target_con_columns:
- "-200_ctrl"
- "-199_ctrl"
- "-198_ctrl"
- "-197_ctrl"
- "-196_ctrl"
- "-195_ctrl"
- "-194_ctrl"
- "-193_ctrl"
- "-192_ctrl"
- "-191_ctrl"
- "-190_ctrl"
- "-189_ctrl"
- "-188_ctrl"
- "-187_ctrl"
- "-186_ctrl"
- "-185_ctrl"
- "-184_ctrl"
- "-183_ctrl"
- "-182_ctrl"
- "-181_ctrl"
- "-180_ctrl"
- "-179_ctrl"
- "-178_ctrl"
- "-177_ctrl"
- "-176_ctrl"
- "-175_ctrl"
- "-174_ctrl"
- "-173_ctrl"
- "-172_ctrl"
- "-171_ctrl"
- "-170_ctrl"
- "-169_ctrl"
- "-168_ctrl"
- "-167_ctrl"
- "-166_ctrl"
- "-165_ctrl"
- "-164_ctrl"
- "-163_ctrl"
- "-162_ctrl"
- "-161_ctrl"
- "-160_ctrl"
- "-159_ctrl"
- "-158_ctrl"
- "-157_ctrl"
- "-156_ctrl"
- "-155_ctrl"
- "-154_ctrl"
- "-153_ctrl"
- "-152_ctrl"
- "-151_ctrl"
- "-150_ctrl"
- "-149_ctrl"
- "-148_ctrl"
- "-147_ctrl"
- "-146_ctrl"
- "-145_ctrl"
- "-144_ctrl"
- "-143_ctrl"
- "-142_ctrl"
- "-141_ctrl"
- "-140_ctrl"
- "-139_ctrl"
- "-138_ctrl"
- "-137_ctrl"
- "-136_ctrl"
- "-135_ctrl"
- "-134_ctrl"
- "-133_ctrl"
- "-132_ctrl"
- "-131_ctrl"
- "-130_ctrl"
- "-129_ctrl"
- "-128_ctrl"
- "-127_ctrl"
- "-126_ctrl"
- "-125_ctrl"
- "-124_ctrl"
- "-123_ctrl"
- "-122_ctrl"
- "-121_ctrl"
- "-120_ctrl"
- "-119_ctrl"
- "-118_ctrl"
- "-117_ctrl"
- "-116_ctrl"
- "-115_ctrl"
- "-114_ctrl"
- "-113_ctrl"
- "-112_ctrl"
- "-111_ctrl"
- "-110_ctrl"
- "-109_ctrl"
- "-108_ctrl"
- "-107_ctrl"
- "-106_ctrl"
- "-105_ctrl"
- "-104_ctrl"
- "-103_ctrl"
- "-102_ctrl"
- "-101_ctrl"
- "-100_ctrl"
- "-99_ctrl"
- "-98_ctrl"
- "-97_ctrl"
- "-96_ctrl"
- "-95_ctrl"
- "-94_ctrl"
- "-93_ctrl"
- "-92_ctrl"
- "-91_ctrl"
- "-90_ctrl"
- "-89_ctrl"
- "-88_ctrl"
- "-87_ctrl"
- "-86_ctrl"
- "-85_ctrl"
- "-84_ctrl"
- "-83_ctrl"
- "-82_ctrl"
- "-81_ctrl"
- "-80_ctrl"
- "-79_ctrl"
- "-78_ctrl"
- "-77_ctrl"
- "-76_ctrl"
- "-75_ctrl"
- "-74_ctrl"
- "-73_ctrl"
- "-72_ctrl"
- "-71_ctrl"
- "-70_ctrl"
- "-69_ctrl"
- "-68_ctrl"
- "-67_ctrl"
- "-66_ctrl"
- "-65_ctrl"
- "-64_ctrl"
- "-63_ctrl"
- "-62_ctrl"
- "-61_ctrl"
- "-60_ctrl"
- "-59_ctrl"
- "-58_ctrl"
- "-57_ctrl"
- "-56_ctrl"
- "-55_ctrl"
- "-54_ctrl"
- "-53_ctrl"
- "-52_ctrl"
- "-51_ctrl"
- "-50_ctrl"
- "-49_ctrl"
- "-48_ctrl"
- "-47_ctrl"
- "-46_ctrl"
- "-45_ctrl"
- "-44_ctrl"
- "-43_ctrl"
- "-42_ctrl"
- "-41_ctrl"
- "-40_ctrl"
- "-39_ctrl"
- "-38_ctrl"
- "-37_ctrl"
- "-36_ctrl"
- "-35_ctrl"
- "-34_ctrl"
- "-33_ctrl"
- "-32_ctrl"
- "-31_ctrl"
- "-30_ctrl"
- "-29_ctrl"
- "-28_ctrl"
- "-27_ctrl"
- "-26_ctrl"
- "-25_ctrl"
- "-24_ctrl"
- "-23_ctrl"
- "-22_ctrl"
- "-21_ctrl"
- "-20_ctrl"
- "-19_ctrl"
- "-18_ctrl"
- "-17_ctrl"
- "-16_ctrl"
- "-15_ctrl"
- "-14_ctrl"
- "-13_ctrl"
- "-12_ctrl"
- "-11_ctrl"
- "-10_ctrl"
- "-9_ctrl"
- "-8_ctrl"
- "-7_ctrl"
- "-6_ctrl"
- "-5_ctrl"
- "-4_ctrl"
- "-3_ctrl"
- "-2_ctrl"
- "-1_ctrl"
- "0_ctrl"
- "1_ctrl"
- "2_ctrl"
- "3_ctrl"
- "4_ctrl"
- "5_ctrl"
- "6_ctrl"
- "7_ctrl"
- "8_ctrl"
- "9_ctrl"
- "10_ctrl"
- "11_ctrl"
- "12_ctrl"
- "13_ctrl"
- "14_ctrl"
- "15_ctrl"
- "16_ctrl"
- "17_ctrl"
- "18_ctrl"
- "19_ctrl"
- "20_ctrl"
- "21_ctrl"
- "22_ctrl"
- "23_ctrl"
- "24_ctrl"
- "25_ctrl"
- "26_ctrl"
- "27_ctrl"
- "28_ctrl"
- "29_ctrl"
- "30_ctrl"
- "31_ctrl"
- "32_ctrl"
- "33_ctrl"
- "34_ctrl"
- "35_ctrl"
- "36_ctrl"
- "37_ctrl"
- "38_ctrl"
- "39_ctrl"
- "40_ctrl"
- "41_ctrl"
- "42_ctrl"
- "43_ctrl"
- "44_ctrl"
- "45_ctrl"
- "46_ctrl"
- "47_ctrl"
- "48_ctrl"
- "49_ctrl"
- "50_ctrl"
- "51_ctrl"
- "52_ctrl"
- "53_ctrl"
- "54_ctrl"
- "55_ctrl"
- "56_ctrl"
- "57_ctrl"
- "58_ctrl"
- "59_ctrl"
- "60_ctrl"
- "61_ctrl"
- "62_ctrl"
- "63_ctrl"
- "64_ctrl"
- "65_ctrl"
- "66_ctrl"
- "67_ctrl"
- "68_ctrl"
- "69_ctrl"
- "70_ctrl"
- "71_ctrl"
- "72_ctrl"
- "73_ctrl"
- "74_ctrl"
- "75_ctrl"
- "76_ctrl"
- "77_ctrl"
- "78_ctrl"
- "79_ctrl"
- "80_ctrl"
- "81_ctrl"
- "82_ctrl"
- "83_ctrl"
- "84_ctrl"
- "85_ctrl"
- "86_ctrl"
- "87_ctrl"
- "88_ctrl"
- "89_ctrl"
- "90_ctrl"
- "91_ctrl"
- "92_ctrl"
- "93_ctrl"
- "94_ctrl"
- "95_ctrl"
- "96_ctrl"
- "97_ctrl"
- "98_ctrl"
- "99_ctrl"
- "100_ctrl"
- "101_ctrl"
- "102_ctrl"
- "103_ctrl"
- "104_ctrl"
- "105_ctrl"
- "106_ctrl"
- "107_ctrl"
- "108_ctrl"
- "109_ctrl"
- "110_ctrl"
- "111_ctrl"
- "112_ctrl"
- "113_ctrl"
- "114_ctrl"
- "115_ctrl"
- "116_ctrl"
- "117_ctrl"
- "118_ctrl"
- "119_ctrl"
- "120_ctrl"
- "121_ctrl"
- "122_ctrl"
- "123_ctrl"
- "124_ctrl"
- "125_ctrl"
- "126_ctrl"
- "127_ctrl"
- "128_ctrl"
- "129_ctrl"
- "130_ctrl"
- "131_ctrl"
- "132_ctrl"
- "133_ctrl"
- "134_ctrl"
- "135_ctrl"
- "136_ctrl"
- "137_ctrl"
- "138_ctrl"
- "139_ctrl"
- "140_ctrl"
- "141_ctrl"
- "142_ctrl"
- "143_ctrl"
- "144_ctrl"
- "145_ctrl"
- "146_ctrl"
- "147_ctrl"
- "148_ctrl"
- "149_ctrl"
- "150_ctrl"
- "151_ctrl"
- "152_ctrl"
- "153_ctrl"
- "154_ctrl"
- "155_ctrl"
- "156_ctrl"
- "157_ctrl"
- "158_ctrl"
- "159_ctrl"
- "160_ctrl"
- "161_ctrl"
- "162_ctrl"
- "163_ctrl"
- "164_ctrl"
- "165_ctrl"
- "166_ctrl"
- "167_ctrl"
- "168_ctrl"
- "169_ctrl"
- "170_ctrl"
- "171_ctrl"
- "172_ctrl"
- "173_ctrl"
- "174_ctrl"
- "175_ctrl"
- "176_ctrl"
- "177_ctrl"
- "178_ctrl"
- "179_ctrl"
- "180_ctrl"
- "181_ctrl"
- "182_ctrl"
- "183_ctrl"
- "184_ctrl"
- "185_ctrl"
- "186_ctrl"
- "187_ctrl"
- "188_ctrl"
- "189_ctrl"
- "190_ctrl"
- "191_ctrl"
- "192_ctrl"
- "193_ctrl"
- "194_ctrl"
- "195_ctrl"
- "196_ctrl"
- "197_ctrl"
- "198_ctrl"
- "199_ctrl"
- "200_ctrl"
""" }
# Training configs
for file,content in training_yaml_contents.items():
with open(config_paths[0] / file, 'w') as f:
f.write(content)
# Test configs
for file,content in test_yaml_contents.items():
with open(config_paths[1] / file, 'w') as f:
f.write(content)
# Perturbation prediction configs
for file,content in predict_perturbations_yaml_contents.items():
with open(config_paths[2] / file, 'w') as f:
f.write(content)
# Training microc configs
for file,content in training_microc_yaml_contents.items():
with open(config_paths[3] / file, 'w') as f:
f.write(content)
# Training microc rotated configs
for file,content in training_microc_rotated_yaml_contents.items():
with open(config_paths[4] / file, 'w') as f:
f.write(content)
# Test microc configs
for file,content in test_microc_yaml_contents.items():
with open(config_paths[5] / file, 'w') as f:
f.write(content)
# Test microc rotated configs
for file,content in test_microc_rotated_yaml_contents.items():
with open(config_paths[6] / file, 'w') as f:
f.write(content)
for file,content in training_attention_yaml_contents.items():
with open(config_paths[7] / file, 'w') as f:
f.write(content)
# Test configs
for file,content in test_attention_yaml_contents.items():
with open(config_paths[8] / file, 'w') as f:
f.write(content)
# Training configs 2 branch latent
for file,content in training_microc_latents_yaml_contents.items():
with open(config_paths[9] / file, 'w') as f:
f.write(content)
for file,content in predict_perturbations_H3K27ac_yaml_contents.items():
with open(config_paths[10] / file, 'w') as f:
f.write(content)
</code>
### Create validation split
All samples from chromosome 17 were used as a validation set. Samples from chromosome 4 were in turn used as a final test set.
<code>
def create_validation_ids(path: Path, val_chrom: str = "chr17"):
""" All genes encoded in chr17 will be used as a validation set."""
gene_annotations_df = pd.read_csv(path / "Final_gene_annotations.tsv", sep="\t").set_index('ID')
missing_samples_report = pd.read_csv(path / "missing_samples_report.csv")["Sample"].values
indices = gene_annotations_df[gene_annotations_df["chr"] == val_chrom].index
# Prepare the strings to write in the file
lines = [f"{index}_forward\n{index}_rev\n" for index in indices if index+'_forward' not in missing_samples_report]
# Write to the text file
with open(path / f"manual_validation_ids_{val_chrom}.txt", 'w') as file:
file.writelines(lines)
path = Path("../annotations/")
create_validation_ids(path)
</code>
The following two ids were removed from the manual validation ids list:
- ENSMUSG00000055660.9_forward
- ENSMUSG00000055660.9_rev
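The removal can also be scripted; below is a minimal sketch (assuming the file written by `create_validation_ids` above, with the path adjusted to your setup):
<code>
from pathlib import Path

# Hypothetical helper (not part of the original pipeline): drop the two ids
# listed above from the generated validation-id file and rewrite it in place.
ids_to_remove = {"ENSMUSG00000055660.9_forward", "ENSMUSG00000055660.9_rev"}
ids_file = Path("../annotations/manual_validation_ids_chr17.txt")
kept = [line for line in ids_file.read_text().splitlines() if line.strip() not in ids_to_remove]
ids_file.write_text("\n".join(kept) + "\n")
</code>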
## Training CLASTER
The model can be trained by running the following command:
```bash
eirtrain \
--global_configs ./configurations/conf_pure_conv/globals.yaml \
--input_configs ./configurations/conf_pure_conv/input_cnn.yaml \
--fusion_configs ./configurations/conf_pure_conv/fusion.yaml \
--output_configs ./configurations/conf_pure_conv/outputs_2_cond.yaml
```
We trained CLASTER on an A100 GPU with 80 GB of memory on a SLURM-based high-performance computing (HPC) cluster.
This will create a folder in runs/ named after the run, containing:
- The summary of the configs
- Results:
- predictions
- attributions (if `compute_attributions` is true)
- latents
- The saved model
- Serialisations
- Tensorboard logs
- The logging history
- A model summary
- Train and validation logs
...
All of these will be handled in notebook 3.
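To get a quick overview of what a finished run produced, the run directory can simply be listed; a small sketch (the run name below is an assumption and must match the output_folder set in globals.yaml):
<code>
from pathlib import Path

# List all files written by a training run (sketch; adjust the run name).
run_dir = Path("./runs/gene_expression_only_chrom_pure_conv/")
for p in sorted(run_dir.rglob("*")):
    if p.is_file():
        print(p.relative_to(run_dir))
</code>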
Training the 2 branch model with the squared matrices:
```bash
eirtrain \
--global_configs ./configurations/conf_microc_pure_conv/globals.yaml \
--input_configs ./configurations/conf_microc_pure_conv/input_cnn.yaml ./configurations/conf_microc_pure_conv/input_cnn_microc.yaml \
--fusion_configs ./configurations/conf_microc_pure_conv/fusion.yaml \
--output_configs ./configurations/conf_microc_pure_conv/outputs_2_cond.yaml
```
Training the 2 branch model with rotated and cropped Micro-C matrices:
```bash
eirtrain \
--global_configs ./configurations/conf_microc_rotated_pure_conv/globals.yaml \
--input_configs ./configurations/conf_microc_rotated_pure_conv/input_cnn.yaml ./configurations/conf_microc_rotated_pure_conv/input_cnn_microc_rotated.yaml \
--fusion_configs ./configurations/conf_microc_rotated_pure_conv/fusion.yaml \
--output_configs ./configurations/conf_microc_rotated_pure_conv/outputs_2_cond.yaml
```
Training 1 branch model with attention before MLP:
```bash
eirtrain \
--global_configs ./configurations/conf_only_chrom_attention/globals.yaml \
--input_configs ./configurations/conf_only_chrom_attention/input_cnn.yaml \
--fusion_configs ./configurations/conf_only_chrom_attention/fusion.yaml \
--output_configs ./configurations/conf_only_chrom_attention/outputs_2_cond.yaml
```
Training 2 branch model on test data to get latent representations of the Micro-C branch:
```bash
eirtrain \
--global_configs ./configurations/conf_microc_pure_conv_latents/globals.yaml \
--input_configs ./configurations/conf_microc_pure_conv_latents/input_cnn.yaml ./configurations/conf_microc_pure_conv_latents/input_cnn_microc.yaml \
--fusion_configs ./configurations/conf_microc_pure_conv_latents/fusion.yaml \
--output_configs ./configurations/conf_microc_pure_conv_latents/outputs_2_cond.yaml
```
## Testing CLASTER
> _Note:_
>
>1) You should first create the output folder.
>2) You should modify the name of the saved model according to the checkpoint at which it was saved (batch number 60600 in our case, ca. 100 epochs) and its performance average (0.8161).
<code>
! mkdir -p ../runs/test_runs/gene_expression_only_chrom_pure_conv
! mkdir -p ../runs/test_runs/gene_expression_microc_pure_conv
! mkdir -p ../runs/test_runs/gene_expression_microc_rotated_pure_conv
</code>
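Since the exact checkpoint filename encodes the iteration and performance average, it can help to list the saved models before filling in `--model_path`; a small sketch (the run folder name is assumed to match the training output_folder):
<code>
from pathlib import Path

# Show available checkpoints so the correct --model_path can be copied below.
saved_models = Path("../runs/gene_expression_only_chrom_pure_conv/saved_models/")
for ckpt in sorted(saved_models.glob("*.pt")):
    print(ckpt.name)
</code>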
Test predictions can be obtained with the following command.
> _Note:_
> For GPU-trained models, testing has to be run on the same device type as training, given the serializations.
> On a SLURM-based cluster, prefix the command with e.g. `srun --partition=gpuqueue --gres=gpu:a100:1 --`
```bash
eirpredict \
--global_configs ./configurations/conf_pure_conv_predict/globals.yaml \
--input_configs ./configurations/conf_pure_conv_predict/input_cnn.yaml \
--fusion_configs ./configurations/conf_pure_conv_predict/fusion.yaml \
--output_configs ./configurations/conf_pure_conv_predict/outputs_2_cond.yaml \
--evaluate \
--model_path ./runs/gene_expression_only_chrom_pure_conv/saved_models/gene_expression_only_chrom_pure_conv_model_60600_perf-average=0.8161.pt \
--output_folder ./runs/test_runs/gene_expression_only_chrom_pure_conv
```
Testing with squared matrices (we'll plot the inner filter outputs later):
```bash
eirpredict \
--global_configs ./configurations/conf_microc_pure_conv_predict/globals.yaml \
--input_configs ./configurations/conf_microc_pure_conv_predict/input_cnn.yaml ./configurations/conf_microc_pure_conv_predict/input_cnn_microc.yaml \
--fusion_configs ./configurations/conf_microc_pure_conv_predict/fusion.yaml \
--output_configs ./configurations/conf_microc_pure_conv_predict/outputs_2_cond.yaml \
--evaluate \
--model_path ./runs/gene_expression_microc_pure_conv/saved_models/gene_expression_microc_pure_conv_model_60600_perf-average=0.8045.pt \
--output_folder ./runs/test_runs/gene_expression_microc_pure_conv
```
Testing with rotated and cropped matrices:
```bash
eirpredict \
--global_configs ./configurations/conf_microc_rotated_pure_conv_predict/globals.yaml \
--input_configs ./configurations/conf_microc_rotated_pure_conv_predict/input_cnn.yaml ./configurations/conf_microc_rotated_pure_conv_predict/input_cnn_microc_rotated.yaml \
--fusion_configs ./configurations/conf_microc_rotated_pure_conv_predict/fusion.yaml \
--output_configs ./configurations/conf_microc_rotated_pure_conv_predict/outputs_2_cond.yaml \
--evaluate \
--model_path ./runs/gene_expression_microc_rotated_pure_conv/saved_models/gene_expression_microc_rotated_pure_conv_model_60600_perf-average=0.8196.pt \
--output_folder ./runs/test_runs/gene_expression_microc_rotated_pure_conv
```
Testing chrom branch with attention:
```bash
eirpredict \
--global_configs ./configurations/conf_only_chrom_attention_predict/globals.yaml \
--input_configs ./configurations/conf_only_chrom_attention_predict/input_cnn.yaml \
--fusion_configs ./configurations/conf_only_chrom_attention_predict/fusion.yaml \
--output_configs ./configurations/conf_only_chrom_attention_predict/outputs_2_cond.yaml \
--evaluate \
--model_path ./runs/gene_expression_only_chrom_attention/saved_models/gene_expression_only_chrom_attention_60600_perf-average=0.8217.pt \
--output_folder ./runs/test_runs/gene_expression_only_chrom_attention
```
## Predicting _in silico_ perturbed chromatin landscapes
> _Note:_ Make sure to create the output folder beforehand: `./runs/perturbation_runs/gene_expression_only_chrom_pure_conv/`
```bash
eirpredict \
--global_configs ./configurations/conf_pure_conv_predict_perturbations/globals.yaml \
--input_configs ./configurations/conf_pure_conv_predict_perturbations/input_cnn.yaml \
--fusion_configs ./configurations/conf_pure_conv_predict_perturbations/fusion.yaml \
--output_configs ./configurations/conf_pure_conv_predict_perturbations/outputs_2_cond.yaml \
--evaluate \
--model_path ./runs/gene_expression_only_chrom_pure_conv/saved_models/gene_expression_only_chrom_pure_conv_model_60600_perf-average=0.8161.pt \
--output_folder ./runs/perturbation_runs/gene_expression_only_chrom_pure_conv
```
Predicting only H3K27ac-modified profiles:
```bash
eirpredict \
--global_configs ./configurations/conf_pure_conv_predict_perturbations_H3K27ac/globals.yaml \
--input_configs ./configurations/conf_pure_conv_predict_perturbations_H3K27ac/input_cnn.yaml \
--fusion_configs ./configurations/conf_pure_conv_predict_perturbations_H3K27ac/fusion.yaml \
--output_configs ./configurations/conf_pure_conv_predict_perturbations_H3K27ac/outputs_2_cond.yaml \
--evaluate \
--model_path ./runs/gene_expression_only_chrom_pure_conv/saved_models/gene_expression_only_chrom_pure_conv_model_60600_perf-average=0.8161.pt \
--output_folder ./runs/perturbation_runs/gene_expression_pure_conv_perturbed_only_H3K27ac
```
That's it! Now we will train a couple of DNA-sequence models on the same samples (mapping DNA to EU-seq now) and compare the results!
|
{
"filename": "II_Run_CLASTER.ipynb",
"repository": "RasmussenLab/CLASTER",
"query": "transformed_from_existing",
"size": 187287,
"sha": ""
}
|
# Workshop_1.ipynb
Repository: NGSchoolEU/ngs19
# Import the necessary libraries
<code>
import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn import preprocessing
from sklearn import svm
from sklearn.model_selection import train_test_split
import spacy as sp
import re
import pickle as pkl
from sklearn import metrics
from sklearn.svm import libsvm,SVC
import matplotlib.pyplot as plt
from sklearn.metrics import classification_report
</code>
# Load the spacy model
<code>
nlp = sp.load('en_core_web_lg')
</code>
# Load the data file and do some preliminary exploration
- Use Panda to read the data csv file
- Check the top 5 rows in the data frame
- Review the unique values of the Category, i.e. the labels
- Get the number of samples per class
- Sample some Snippets
<code>
df = pd.read_csv('sampled_data.csv',index_col=0)
</code>
<code>
df.head()
</code>
<code>
df.Category.unique()
</code>
<code>
df.Category.value_counts()
</code>
<code>
df['Snippet'][10]
</code>
<code>
df['Snippet'][2000]
</code>
<code>
df['Snippet'][5000]
</code>
# Data Pre-Processing
- Remove the NaNs
- Combine the string data in a new column after converting all strings to lower case
- Clean the text
+ remove non-alphabetic characters
+ use spaCy model to keep nouns, verbs and proper nouns
+ use spaCy model to remove non-English words
<code>
df.Title.fillna('',inplace=True)
df.Snippet.fillna('',inplace=True)
</code>
<code>
df['Doc']=df.Keyword+' '+ df.Title.str.lower()+' '+ df.Snippet.str.lower()
</code>
<code>
df['Doc'][100]
</code>
<code>
def string_preprocessing(data):
''' Preprocess the input string: keep only nouns, verbs and proper nouns, and remove non-alphabetic characters.
Args:
data: string
Returns:
string
'''
data = re.sub('[^a-z]',' ',data)
doc = nlp(data)
text = []
for word in doc:
if word.pos_ in ['PROPN','NOUN','VERB'] and np.sum(word.vector) !=0:
text.append(word.text)
return ' '.join(text)
</code>
<code>
df['Doc'][1004]
</code>
<code>
string_preprocessing(df['Doc'][1004])
</code>
# Build LDA
- write functions to build an LDA model and to view the generated topics,
- the model uses CountVectorizer as an input to the LDA model
- build LDA for the class Technology to demonstrate the output
<code>
def build_lda(data,num_topics):
tf_vectorizer = CountVectorizer(max_df=0.95, min_df=2,
stop_words='english')
tf = tf_vectorizer.fit_transform(data)
lda = LatentDirichletAllocation(n_components=num_topics, max_iter=5,
learning_method='online',
learning_offset=50.,
random_state=0)
lda.fit(tf)
return lda,tf_vectorizer
def print_top_words(model, feature_names, n_top_words):
for topic_idx, topic in enumerate(model.components_):
message = "Topic #%d: " % topic_idx
message += " ".join([feature_names[i]
for i in topic.argsort()[:-n_top_words - 1:-1]])
print(message)
print()
</code>
<code>
#data_tech = df[df['Category']=='Technology']['Doc'].apply(lambda x: string_preprocessing(x))
data_tech = df['Doc'].apply(lambda x: string_preprocessing(x))
</code>
<code>
lda_technology,tf_vectorizer = build_lda(data_tech,num_topics=10) # change num_topics to see its impact
print_top_words(lda_technology,tf_vectorizer.get_feature_names(),n_top_words=10) # change n_top_words to see its impact
</code>
## Run LDA for every class in the data
- Assume all classes require the same number of topics
- Clean the text before building the LDA models
<code>
n_components = 20
ldas = []
tf_vectorizers = []
for cat in df.Category.unique():
cat_df = df['Doc'].where(df['Category'] == cat).dropna(how='any')
cat_df = cat_df.apply(lambda x: string_preprocessing(x))
lda_t,tf_vectorizer_t = build_lda(cat_df,num_topics=n_components)
ldas.append(lda_t)
tf_vectorizers.append(tf_vectorizer_t)
</code>
# Save the LDA models
Do not run if you do not want to override the supplied file
<code>
with open('tutorial_ldas.pkl','wb') as f:
pkl.dump((ldas,tf_vectorizers),f,pkl.HIGHEST_PROTOCOL)
</code>
## Load the LDA models from an already provided file
<code>
with open('tutorial_ldas.pkl','rb') as f:
(ldas,tf_vectorizers)= pkl.load(f)
</code>
# Feature Engineering
- Pass the data to every LDA model, extract and combine the topic features
- if wordEmbed == True then add the spaCy language model embedding to the extracted LDA features
<code>
def feature_extraction(data,ldas,tf_vects, wordEmbed=True):
features=[]
labels =[]
assert(len(ldas)==len(tf_vects))
for i,d in data.iterrows():
labels.append(d['Category'])
line= []
for j in range(len(ldas)):
line.extend(ldas[j].transform(tf_vects[j].transform([d['Doc']]))[0])
if wordEmbed:
line.extend(list(nlp(d['Doc']).vector))
features.append(line)
return features,labels
</code>
<code>
features,labels=feature_extraction(df.dropna(how='any'),ldas,tf_vectorizers)
</code>
# Save the extracted features
Do not run if you do not want to override the supplied file
<code>
with open('tutorial_features.pkl','wb') as f:
pkl.dump((features,labels),f,pkl.HIGHEST_PROTOCOL)
</code>
## Load the extracted features from the provided file
<code>
with open('tutorial_features.pkl','rb') as f:
(features,labels)= pkl.load(f)
</code>
# Prepare for classification
- Fit a label encoder to convert String labels into numbers
<code>
# encode the labels
le = preprocessing.LabelEncoder()
le.fit(labels)
encoded_labels = le.transform(labels)
n_classes = len(le.classes_)
</code>
# Shallow Classifier
- Split the data into training and testing sets
- Train a linear SVM on the training data
- Provide results of the accuracy on the test data
<code>
X_train, X_test, y_train, y_test = train_test_split(features, encoded_labels,test_size=0.25)
clf = SVC(kernel='linear',probability=True)
clf.fit(X_train,y_train)
</code>
<code>
print(metrics.classification_report(y_test,clf.predict(X_test)))
</code>
# Can you test the shallow classifier on LDA features only?
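One way to try this, as a sketch (assuming the feature layout produced by `feature_extraction` above, where the LDA topic proportions come first and the spaCy embedding is appended at the end):
<code>
# Keep only the LDA-derived part of each feature vector and retrain the SVM.
n_lda_features = len(ldas) * n_components  # topics per model x number of per-class models
features_lda_only = [f[:n_lda_features] for f in features]

X_train_l, X_test_l, y_train_l, y_test_l = train_test_split(
    features_lda_only, encoded_labels, test_size=0.25)
clf_lda = SVC(kernel='linear', probability=True)
clf_lda.fit(X_train_l, y_train_l)
print(metrics.classification_report(y_test_l, clf_lda.predict(X_test_l)))
</code>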
# Demo
- Define a function to process a string to be suitable to run against the model
- Use the built SVM model to predict the class
- Output the top three classes with their probabilities
<code>
def string_features(text,ldas,tf_vectorizers):
''' extract lda features for a given text.
Args:
text: string
ldas: a list of LDA models
tf_vectorizers: a list of CounterVectorizers associated with the ldas
Returns:
a list of the lda features with length [number of lda models] X [number of topics per model]
'''
line= []
for j in range(len(ldas)):
line.extend(ldas[j].transform(tf_vectorizers[j].transform([text]))[0])
vec = nlp(text).vector
line.extend(list(vec))
return line
</code>
<code>
while True:
print('Enter a business description please, q to exit:\n')
st = input()
if st == 'q':
break
clean_st = string_preprocessing(st)
feats = string_features(clean_st,ldas,tf_vectorizers)
probs = clf.predict_proba([feats])[0]
idx = np.argsort(probs)[::-1]
top_probs = probs[idx[:3]]
top_labels = le.inverse_transform(idx[:3])
for lbl,prob in zip(top_labels,top_probs):
print(lbl,':',100*prob)
print ('*****************\n')
</code>
# Deep Learning
- Define a Multi-Layer Perceptron to classify the data
- Use Two layer and a softmax layer
- Use Dropout
- Use Relu activation functions
<code>
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout
model = Sequential()
# First Layer
model.add(Dense(1150, input_dim=len(X_train[0])))
model.add(Activation('relu'))
model.add(Dropout(0.5))
#Second Layer
model.add(Dense(500))
model.add(Activation('relu'))
#Third Layer
model.add(Dense(n_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.summary()
</code>
## Convert the labels from a nominal value to one-hot encoding
This is a requirement for running the Keras model.
<code>
from keras.utils import to_categorical
train_label = to_categorical(y_train, num_classes=n_classes)
test_label = to_categorical(y_test, num_classes=n_classes)
</code>
## Train the model
- Define a model checkpoint to save the best model through the training iterations
- Define batch size and Epochs
<code>
from keras.callbacks import ModelCheckpoint
checkpointer = ModelCheckpoint(filepath='tutorial_weights.{epoch:02d}-{val_loss:.2f}.hdf5', verbose=1, save_best_only=True,monitor='val_acc')
hist=model.fit(np.asarray(X_train), train_label, epochs=20, batch_size=100,
validation_data=(np.asarray(X_test),test_label),callbacks=[checkpointer])
</code>
## Plot the output to understand the change of training and validation accuracy over training epochs
<code>
plt.plot(hist.history['acc'])
plt.plot(hist.history['val_acc'])
</code>
<code>
preds = np.argmax(model.predict(np.asarray(X_test)),axis=1)
print(metrics.classification_report(y_test,preds))
</code>
## Can you test different variations of the model?
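One possible variation, as a sketch only (an extra hidden layer and different dropout rates; the layer sizes are arbitrary choices, not tuned values):
<code>
# Hypothetical deeper MLP variant for comparison with the model above.
model_v2 = Sequential()
model_v2.add(Dense(1024, input_dim=len(X_train[0]), activation='relu'))
model_v2.add(Dropout(0.5))
model_v2.add(Dense(512, activation='relu'))
model_v2.add(Dropout(0.3))
model_v2.add(Dense(128, activation='relu'))
model_v2.add(Dense(n_classes, activation='softmax'))
model_v2.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model_v2.fit(np.asarray(X_train), train_label, epochs=20, batch_size=100,
             validation_data=(np.asarray(X_test), test_label))
</code>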
## Prepare Data to use in LSTM
- For LSTM the data has to be prepared differently to format it as sequences
- Each element in the sequence is a word embedding from the spaCy language model
- Define max sequence length
- Add zero padding for shorter sequences
<code>
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical
from keras.layers import Dense, Input, GlobalMaxPooling1D
from keras.layers import Conv1D, MaxPooling1D, Embedding,LSTM,Bidirectional
from keras.models import Model,Sequential
from keras.initializers import Constant
from keras.callbacks import ModelCheckpoint
</code>
<code>
MAX_NUM_WORDS = nlp.vocab.length # the number of words in the dictionary
MAX_SEQUENCE_LENGTH = 10
EMBEDDING_DIM = 300 # comes from spaCy, if you select a different model this has to change accordingly
</code>
<code>
#prepare text samples and their labels
texts = df['Doc'].apply(lambda x: string_preprocessing(x))
</code>
## Build a Keras tokenizer and convert text into sequences
<code>
tokenizer = Tokenizer(num_words=MAX_NUM_WORDS)
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)
</code>
<code>
word_index = tokenizer.word_index
print('Found %s unique tokens.' % len(word_index))
</code>
## Add padding if required
<code>
data_seq = pad_sequences(sequences, maxlen=MAX_SEQUENCE_LENGTH)
</code>
<code>
data_seq.shape
</code>
## Create the embedding layer (not trainable); this will be the input to the LSTM model
<code>
# prepare embedding matrix
num_words = min(MAX_NUM_WORDS, len(word_index)) + 1
embedding_matrix = np.zeros((num_words, EMBEDDING_DIM))
for word, i in word_index.items():
if i > MAX_NUM_WORDS:
continue
embedding_vector = nlp(word).vector
if embedding_vector is not None:
# words not found in embedding index will be all-zeros.
embedding_matrix[i] = embedding_vector
# load pre-trained word embeddings into an Embedding layer
# set trainable = False so as to keep the embeddings fixed
embedding_layer = Embedding(num_words,
EMBEDDING_DIM,
embeddings_initializer=Constant(embedding_matrix),
input_length=MAX_SEQUENCE_LENGTH,
trainable=False)
print('Training model.')
</code>
<code>
#LSTM
batch_size = 100
print('Build model...')
lstm_model = Sequential()
lstm_model.add(embedding_layer)
lstm_model.add(LSTM(200, dropout=0.3, recurrent_dropout=0.3,return_sequences=False))
# if you want to add another layer set return_sequences to True
#lstm_model.add(Bidirectional(LSTM(50, dropout=0.5, recurrent_dropout=0.5)))
lstm_model.add(Dense(30, activation='tanh'))
lstm_model.add(Dense(n_classes, activation='softmax'))  # one output unit per class instead of a hard-coded 19
# try using different optimizers and different optimizer configs
lstm_model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print('Train...')
lstm_model.summary()
</code>
<code>
# Reformat the labels
seq_labels = to_categorical(encoded_labels,num_classes=n_classes)
</code>
<code>
checkpoint = ModelCheckpoint(filepath='weights-improvement-lstm-{epoch:02d}-{val_acc:.2f}.hdf5',
monitor='val_loss', verbose=0, save_best_only=True)
hist = lstm_model.fit(data_seq, seq_labels,
batch_size=batch_size,
epochs=20,validation_split=0.2,shuffle=True,callbacks=[checkpoint])
score, acc = lstm_model.evaluate(data_seq, seq_labels,
batch_size=batch_size)
print('Train score:', score)
print('Train accuracy:', acc)
</code>
<code>
plt.plot(hist.history['acc'])
plt.plot(hist.history['val_acc'])
plt.show()
</code>
<code>
pred = lstm_model.predict_classes(data_seq)
print(classification_report(np.argmax(seq_labels,axis=1),pred))
</code>
# Is that a better model than an MLP?
- if not what can you change?
- Is the test correct?
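Regarding the last question: the evaluation above scores the LSTM on all of `data_seq`, which includes the samples it was trained on. A fairer check, as a sketch, holds out a test split before fitting (in practice the LSTM should be rebuilt with fresh weights before refitting):
<code>
# Hold out a test split the LSTM never sees during training, then score on it only.
seq_train, seq_test, lab_train, lab_test = train_test_split(
    data_seq, seq_labels, test_size=0.25, random_state=0)
hist = lstm_model.fit(seq_train, lab_train, batch_size=batch_size,
                      epochs=20, validation_split=0.2, shuffle=True)
preds_test = np.argmax(lstm_model.predict(seq_test), axis=1)
print(classification_report(np.argmax(lab_test, axis=1), preds_test))
</code>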
|
{
"filename": "Workshop_1.ipynb",
"repository": "NGSchoolEU/ngs19",
"query": "transformed_from_existing",
"size": 79839,
"sha": ""
}
|
# example_transcriptomics_obs_segmentations_polygon_1.ipynb
Repository: vitessce/vitessce-python-tutorial
View this example on [Google Colab](https://colab.research.google.com/drive/1iB-GWk-hAmjuOUjYehHs_S94bhjxaVAP?usp=sharing)
<code>
import importlib.util
if importlib.util.find_spec('vitessce') is None:
!pip install vitessce[all]
</code>
<code>
from vitessce import (
VitessceConfig,
Component as cm,
CoordinationType as ct,
FileType as ft,
)
</code>
<code>
vc = VitessceConfig(schema_version="1.0.15", name='Transcriptomics example')
dataset = vc.add_dataset(name='Cell segmentations').add_file(
file_type="anndata.zarr",
url="https://s3.amazonaws.com/vitessce-data/0.0.33/main/codeluppi-2018-via-zarr/codeluppi_2018_nature_methods.cells.h5ad.zarr",
options={
"obsSegmentations": {
"path": "obsm/X_segmentations"
},
"obsLocations": {
"path": "obsm/X_spatial"
},
}
)
spatial_plot = vc.add_view(cm.SPATIAL, dataset=dataset)
layer_controller = vc.add_view(cm.LAYER_CONTROLLER, dataset=dataset)
spatial_segmentation_layer_value = {
"opacity": 1,
"radius": 0,
"visible": True,
"stroked": False
}
vc.link_views([spatial_plot, layer_controller], [ct.SPATIAL_ZOOM, ct.SPATIAL_TARGET_X, ct.SPATIAL_TARGET_Y, ct.SPATIAL_SEGMENTATION_LAYER], [-5.5, 16000, 20000, spatial_segmentation_layer_value])
vc.layout(spatial_plot | layer_controller);
</code>
<code>
from IPython.display import display, HTML
url = vc.web_app()
display(HTML(f'<a href="{url}" target="_blank">View on Vitessce.io</a>'))
</code>
<code>
vw = vc.widget()
vw
</code>
|
{
"filename": "example_transcriptomics_obs_segmentations_polygon_1.ipynb",
"repository": "vitessce/vitessce-python-tutorial",
"query": "transformed_from_existing",
"size": 18159,
"sha": ""
}
|
# DataIngestion_1.ipynb
Repository: sateeshfrnd/LangChain
# Data Ingestion using Documentloaders
A Document Loader in LangChain is a tool that helps load data from various sources, such as text files, PDFs, web pages, databases, and more. Once the data is loaded, it can be used for natural language processing (NLP), question answering, summarization, and chatbots.
- *API* : https://python.langchain.com/v0.1/docs/modules/data_connection/document_loaders/
## Load a Text File
<code>
from langchain_community.document_loaders import TextLoader
textloader = TextLoader("sample.txt")
textloader
</code>
<code>
text_doc = textloader.load()
text_doc
</code>
Here the loader reads the content of `sample.txt` as a single document.
## Read a PDF File
<code>
from langchain_community.document_loaders import PyPDFLoader
pdf_loader = PyPDFLoader("Attention_is_ All_You_Need.pdf")
pdf_doc = pdf_loader.load()
pdf_doc
</code>
Read the content of the PDF and store each page as a separate document.
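For instance, one can check how many page-level documents were produced and inspect the metadata of the first page:
<code>
# Each PDF page becomes its own Document; inspect the count and per-page metadata.
print(len(pdf_doc))
print(pdf_doc[0].metadata)
</code>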
## Load a Web Page
<code>
from langchain_community.document_loaders import WebBaseLoader
webloader = WebBaseLoader(web_path=("https://medium.com/@sateeshfrnd/understanding-the-langchain-ecosystem-616b33f5cd15"))
webloader
</code>
<code>
webloader = WebBaseLoader(web_paths=("https://github.com/sateeshfrnd/sateeshfrnd/blob/main/README.md", ),)
webloader
</code>
<code>
webloader.load()
</code>
<code>
import bs4
webloader1 = WebBaseLoader(web_paths=("https://github.com/sateeshfrnd/sateeshfrnd/blob/main/README.md", ),
bs_kwargs=dict(
parse_only = bs4.SoupStrainer(class_ =("markdown-body entry-content container-lg","markdown-heading",))
)
)
webloader1
</code>
<code>
webloader1.load()
</code>
Here we read only particular sections of the page.
## Arxiv Loader
<code>
from langchain.document_loaders import ArxivLoader
arxicloader = ArxivLoader(query="1706.03762") # Load a research paper
arxicloader
</code>
<code>
arxicloader.load()
</code>
## Load Wikipedia
<code>
from langchain.document_loaders import WikipediaLoader
wikidocument = WikipediaLoader(query="Artificial Intelligence", lang="en").load()
wikidocument
</code>
<code>
len(wikidocument)
</code>
|
{
"filename": "DataIngestion_1.ipynb",
"repository": "sateeshfrnd/LangChain",
"query": "transformed_from_existing",
"size": 249150,
"sha": ""
}
|
# autoencoder_autoencoder_citeseq_saturn_3.ipynb
Repository: naity/citeseq
# Integrative analysis of single-cell multiomics data using deep learning
**Jupyter notebook:**
[](https://github.com/naity/citeseq_autoencoder/blob/master/autoencoder_citeseq_saturn.ipynb)
**Recording:**
[](https://youtu.be/tad9TPCMWbU)
**Author:** Yuan Tian [](https://www.linkedin.com/in/ytiancompbio)
<div style="font-size:larger;">
<p><span style="font-size:xx-large;">S</span>ingle-cell RNA sequencing (scRNA-seq) has offered a comprehensive and unbiased approach to profile various types of cells such as immune cells at single-cell resolution using next‑generation sequencing. More recently, exciting technologies such as cellular indexing of transcriptomes and epitopes by sequencing (CITE-seq) have been developed to extend scRNA-seq by jointly measuring multiple molecular modalities such as proteome and transcriptome from the same cell as illustrated in the figure below. By utilizing antibodies that are conjugated to oligonucleotides, CITE-seq simultaneously generates sequencing-based readouts for surface protein expression along with gene expression.</p>
<p>Since gene and protein expressions convey distinct and complementary information about a cell, CITE-seq offers a unique opportunity to combine both transcriptomic and proteomic data to decipher the biology of individual cells at a considerably higher resolution than using either one alone. This requires computational methods that can effectively integrate single-cell data from both modalities. In this tutorial, we will conduct integrative analysis of CITE-seq data using an unsupervised deep learning method named autoencoder.</p>
<p>In essence:</p>
<ul>
<li>Single-cell technologies offer considerable promise in dissecting the heterogeneity among individual cells and are being utilized in biomedical studies at an astounding pace.</li>
<li>CITE-seq simultaneously measures gene expression and surface protein at a single-cell level.</li>
</ul>
</div>
<figure>
<center><img src="imgs/citeseq.jpg"/></center>
<center><figcaption>Image source: 10x Genomics</figcaption></center>
</figure>
<code>
# Standard libraries
import time
import pandas as pd
import numpy as np
import urllib.request
from pathlib import Path
from urllib.error import HTTPError
from tqdm.notebook import tqdm
from sklearn import preprocessing
# Pytorch and Pytorch Lightning
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import pytorch_lightning as pl
from torch.utils.data import Dataset, DataLoader, random_split
from pytorch_lightning.callbacks import LearningRateMonitor, ModelCheckpoint
# Visualization and plotting
import umap
import plotly.express as px
# Tensorboard extension
from torch.utils.tensorboard import SummaryWriter
%load_ext tensorboard
# Path to datasets
DATASET_PATH = Path("data")
if not DATASET_PATH.exists():
DATASET_PATH.mkdir()
# Path to saved models
CHECKPOINT_PATH = Path("saved_models")
if not CHECKPOINT_PATH.exists():
CHECKPOINT_PATH.mkdir()
# for reproducibility
pl.seed_everything(42)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
# Use GPU if available, otherwise use cpu instead
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
print("Device:", device)
</code>
## Data
We will be using the CITE-seq dataset published by [Stuart and Butler et al.](https://www.cell.com/cell/fulltext/S0092-8674(19)30559-8) in 2019. The authors measured the single-cell transcriptomics of 30,672 bone marrow cells together with the expression of 25 proteins. I have already preprocessed the data to generate normalized counts and cell type annotations using [Seurat](https://satijalab.org/seurat/index.html), which is a popular R package for analyzing single-cell genomics data. The script used for preprocessing can be found [here](https://github.com/naity/citeseq_autoencoder/blob/master/preprocessing.R). There are three CSV files (RNA, protein, and cell type annotation), which can be downloaded from my [github repo](https://github.com/naity/citeseq_autoencoder) using the code below:
<code>
# URL for downloading data
data_url = "https://raw.githubusercontent.com/naity/citeseq_autoencoder/master/data/"
# Files to download
data_files = ["rna_scale.csv.gz", "protein_scale.csv.gz", "metadata.csv.gz"]
# Download datafile if necessary
for file_name in data_files:
file_path = Path(DATASET_PATH/file_name)
if not file_path.exists():
file_url = data_url + file_name
print(f"Downloading {file_url}...")
try:
urllib.request.urlretrieve(file_url, file_path)
except HTTPError as e:
print("Something went wrong. Please try downloading the file from the Google Drive folder\n", e)
</code>
<code>
# use Pandas to read the data
rna = pd.read_csv(DATASET_PATH/"rna_scale.csv.gz", index_col=0).T
pro = pd.read_csv(DATASET_PATH/"protein_scale.csv.gz", index_col=0).T
ncells = rna.shape[0]
nfeatures_rna = rna.shape[1]
nfeatures_pro = pro.shape[1]
print("Number of cells:", ncells)
print("Number of genes:", nfeatures_rna)
print("Number of proteins:", nfeatures_pro)
</code>
Next, gene and protein expression data are concatenated together, where each column is a gene or protein while each row is a cell (each cell has a unique barcode). The dataset contains the expression levels of 2000 genes and 25 proteins for a total of 30672 cells. We will also import the annotations of each cell for visualization purposes later.
<code>
# concat rna and pro
assert all(rna.index == pro.index), "RNA and protein data cell barcodes do not match!"
citeseq = pd.concat([rna, pro], axis=1)
print(citeseq.shape)
citeseq.head()
</code>
<code>
# cell type annotations
metadata = pd.read_csv(DATASET_PATH/"metadata.csv.gz", index_col=0)
metadata.head()
</code>
<code>
assert all(citeseq.index == metadata.index), "CITE-seq data and metadata cell barcodes do not match!"
# separate CD4 and CD8 in l1
metadata["celltype.l1.5"] = metadata["celltype.l1"].values
metadata.loc[metadata["celltype.l2"].str.startswith("CD4"), "celltype.l1.5"] = "CD4 T"
metadata.loc[metadata["celltype.l2"].str.startswith("CD8"), "celltype.l1.5"] = "CD8 T"
metadata.loc[metadata["celltype.l2"]=="Treg", "celltype.l1.5"] = "CD4 T"
metadata.loc[metadata["celltype.l2"]=="MAIT", "celltype.l1.5"] = "MAIT"
metadata.loc[metadata["celltype.l2"]=="gdT", "celltype.l1.5"] = "gdT"
# convert cell type annoations to integers
le = preprocessing.LabelEncoder()
labels = le.fit_transform(metadata["celltype.l1.5"])
</code>
### Pytorch datasets and dataloaders
<code>
class TabularDataset(Dataset):
"""Custome dataset for tabular data"""
def __init__(self, df: pd.DataFrame, labels: np.ndarray):
self.data = torch.tensor(df.to_numpy(), dtype=torch.float)
self.labels = torch.tensor(labels, dtype=torch.float)
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
x = self.data[idx]
y = self.labels[idx]
return x, y
</code>
<code>
dataset = TabularDataset(citeseq, labels)
# train, validation, and test split
train_size = int(ncells*0.7)
val_size = int(ncells*0.15)
train_ds, val_ds, test_ds = random_split(dataset, [train_size, val_size, ncells-train_size-val_size],
generator=torch.Generator().manual_seed(0))
</code>
<code>
print("Number of cells for training:", len(train_ds))
print("Number of cells for validation:", len(val_ds))
print("Number of cells for test:", len(test_ds))
</code>
<code>
# batch size
bs = 256
train_dl = DataLoader(train_ds, batch_size=bs, shuffle=True, drop_last=True, pin_memory=True)
val_dl = DataLoader(val_ds, batch_size=bs, shuffle=False, drop_last=False)
test_dl = DataLoader(test_ds, batch_size=bs, shuffle=False, drop_last=False)
</code>
Let’s look at one example of the dataset:
<code>
x, y = train_dl.dataset[0]
print("Input data:", x)
print("Label: ", y)
</code>
## Use autoencoders for single-cell analysis
Autoencoder is a type of unsupervised deep learning model or neural network that consists of three major components: an encoder, a bottleneck, and a decoder as shown in the figure below. The encoder compresses the input, and the bottleneck layer stores the compressed representation of the input. In contrast, the decoder tries to reconstruct the input based upon the compressed data.
The dimension of the bottleneck layer is normally substantially lower than that of the input. As a result, the encoder will try to learn as much meaningful information about the input as possible while ignoring the noise so that the decoder can do a better job reconstructing the input. Autoencoder can function as a dimensionality reduction algorithm and the low-dimensional representation of the input stored in the bottleneck layer can be used for data visualization and other purposes. Moreover, thanks to its flexible neural network architecture, it offers unlimited ways to incorporate gene and protein expression data as we shall see below.
<figure>
<center><img src="imgs/autoencoder.png"/></center>
<center><figcaption>Image source: Eraslan et al. Nat Rev Genet. 2019</figcaption></center>
</figure>
### Implementation
Since gene and protein data have dramatically different dimensions, we will first encode them separately using two different encoders and then concatenate the outputs, which will be passed through another encoder to generate the bottleneck layer. Subsequently, the decoder will try to reconstruct the input based on the bottleneck layer. The overall neural network architecture is illustrated below:
<figure>
<center><img src="imgs/autoencoder_arch.png"/></center>
<center><figcaption><b>Autoencoder architecture for CITE-seq data</b></figcaption></center>
</figure>
We use the module below to group linear, batchnorm, and dropout layers together in order to make it easier to implement encoder and decoder later:
<code>
class LinBnDrop(nn.Sequential):
"""Module grouping `BatchNorm1d`, `Dropout` and `Linear` layers, adapted from fastai."""
def __init__(self, n_in, n_out, bn=True, p=0., act=None, lin_first=True):
layers = [nn.BatchNorm1d(n_out if lin_first else n_in)] if bn else []
if p != 0: layers.append(nn.Dropout(p))
lin = [nn.Linear(n_in, n_out, bias=not bn)]
if act is not None: lin.append(act)
layers = lin+layers if lin_first else layers+lin
super().__init__(*layers)
</code>
We start by implementing the encoder, which consists of three fully connected layer groups, one for RNA, one for protein, and one for the concatenated output that generates the latent representation of size `latent_dim` stored in the bottleneck layer.
<code>
class Encoder(nn.Module):
"""Encoder for CITE-seq data"""
def __init__(self,
nfeatures_rna: int,
nfeatures_pro: int,
hidden_rna: int,
hidden_pro: int,
latent_dim: int,
p: float = 0):
super().__init__()
self.nfeatures_rna = nfeatures_rna
self.nfeatures_pro = nfeatures_pro
hidden_dim = hidden_rna + hidden_pro
self.encoder_rna = nn.Sequential(
LinBnDrop(nfeatures_rna, nfeatures_rna // 2, p=p, act=nn.LeakyReLU()),
LinBnDrop(nfeatures_rna // 2, hidden_rna, act=nn.LeakyReLU())
)
self.encoder_protein = LinBnDrop(nfeatures_pro, hidden_pro, p=p, act=nn.LeakyReLU())
self.encoder = LinBnDrop(hidden_dim, latent_dim, act=nn.LeakyReLU())
def forward(self, x):
x_rna = self.encoder_rna(x[:, :self.nfeatures_rna])
x_pro = self.encoder_protein(x[:, self.nfeatures_rna:])
x = torch.cat([x_rna, x_pro], 1)
return self.encoder(x)
</code>
The decoder is a flipped version of the encoder.
<code>
class Decoder(nn.Module):
"""Decoder for CITE-seq data"""
def __init__(self,
nfeatures_rna: int,
nfeatures_pro: int,
hidden_rna: int,
hidden_pro: int,
latent_dim: int):
super().__init__()
hidden_dim = hidden_rna + hidden_pro
out_dim = nfeatures_rna + nfeatures_pro
self.decoder = nn.Sequential(
LinBnDrop(latent_dim, hidden_dim, act=nn.LeakyReLU()),
LinBnDrop(hidden_dim, out_dim // 2, act=nn.LeakyReLU()),
LinBnDrop(out_dim // 2, out_dim, bn=False)
)
def forward(self, x):
return self.decoder(x)
</code>
Next, we assemble the encoder and decoder into an autoencoder, which is defined as a PyTorch Lightning Module to simplify the training process. We will define the following:
* `__init__` for creating and saving parameters and model
* `forward`: for inference, which we will use to generate latent representations for downstream analysis
* `configure_optimizers` for creating the optimizer and learning rate scheduler
* `training_step` for calculating the loss (mean squared error (MSE) for our example) of a single batch
* `validation_step` similar to `training_step` but on the validation set
* `test_step` same as `validation_step` but on a test set.
<code>
class CiteAutoencoder(pl.LightningModule):
def __init__(self,
nfeatures_rna: int,
nfeatures_pro: int,
hidden_rna: int,
hidden_pro: int,
latent_dim: int,
p: float = 0,
lr: float = 0.1):
""" Autoencoder for citeseq data """
super().__init__()
# save hyperparameters
self.save_hyperparameters()
self.encoder = Encoder(nfeatures_rna, nfeatures_pro, hidden_rna, hidden_pro, latent_dim, p)
self.decoder = Decoder(nfeatures_rna, nfeatures_pro, hidden_rna, hidden_pro, latent_dim)
# example input array for visualizing network graph
self.example_input_array = torch.zeros(256, nfeatures_rna + nfeatures_pro)
def forward(self, x):
# extract latent embeddings
z = self.encoder(x)
return z
def configure_optimizers(self):
optimizer = optim.Adam(self.parameters(), lr=self.hparams.lr)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer)
return {"optimizer": optimizer, "lr_scheduler": scheduler, "monitor": "val_loss"}
def _get_reconstruction_loss(self, batch):
""" Calculate MSE loss for a given batch. """
x, _ = batch
z = self.encoder(x)
x_hat = self.decoder(z)
# MSE loss
loss = F.mse_loss(x_hat, x)
return loss
def training_step(self, batch, batch_idx):
loss = self._get_reconstruction_loss(batch)
self.log("train_loss", loss)
return loss
def validation_step(self, batch, batch_idx):
loss = self._get_reconstruction_loss(batch)
self.log("val_loss", loss)
def test_step(self, batch, batch_idx):
loss = self._get_reconstruction_loss(batch)
self.log("test_loss", loss)
</code>
### Training the model
We will take advantage of the `Trainer` API from PyTorch Lightning to execute the training process. The two functions that we will be using are:
* `fit`: Train a lightning module using the given train dataloader, and validate on the provided validation dataloader.
* `test`: Test the given model on the provided dataloader.
<code>
def train_citeseq(hidden_rna: int = 30, hidden_pro: int = 18,
latent_dim: int = 24, p: float = 0.1, lr: float = 0.1):
trainer = pl.Trainer(default_root_dir=CHECKPOINT_PATH,
gpus=1 if "cuda" in str(device) else 0,
max_epochs=50,
callbacks=[ModelCheckpoint(save_weights_only=True, mode="min", monitor="val_loss"),
LearningRateMonitor("epoch")])
trainer.logger._log_graph = True
trainer.logger._default_hp_metric=None
model = CiteAutoencoder(nfeatures_rna,
nfeatures_pro,
hidden_rna=hidden_rna,
hidden_pro=hidden_pro,
latent_dim=latent_dim,
p=p,
lr=lr)
trainer.fit(model, train_dl, val_dl)
train_result = trainer.test(model, train_dl, verbose=False)
val_result = trainer.test(model, val_dl, verbose=False)
test_result = trainer.test(model, test_dl, verbose=False)
result = {"train": train_result, "val": val_result, "test": test_result, }
return model, result
</code>
<code>
model, result = train_citeseq()
</code>
<code>
print(f"Training loss: {result['train'][0]['test_loss']:.3f}")
print(f"Validation loss: {result['val'][0]['test_loss']:.3f}")
print(f"Test loss: {result['test'][0]['test_loss']:.3f}")
</code>
PyTorch Lightning automatically logs the training results into TensorBoard, which we can open like below:
<code>
%tensorboard --host 0.0.0.0 --port 8000 --logdir saved_models/lightning_logs/version_0
</code>
<code>
# kill tensorboard process
!kill $(ps -e | grep 'tensorboard' | awk '{print $1}')
</code>
### Visualize latent representations
The latent space in our example has 24 dimensions. In order to visualize and inspect how different types of immune cells cluster in the latent space, we first use the trained model to generate the latent representations of the test dataset and then use UMAP, which is widely used in single-cell analysis, to reduce the dimensions for visualization in 2D.
<code>
test_encodings = []
test_labels = []
model.eval()
with torch.no_grad():
for x, y in tqdm(test_dl, desc="Encoding cells"):
test_encodings.append(model(x.to(model.device)))
test_labels += y.to(torch.int).tolist()
test_embeds = torch.cat(test_encodings, dim=0).cpu().numpy()
test_labels = le.inverse_transform(test_labels)
</code>
<code>
# run umap for dimensionality reduction and visualization
embeds_umap = umap.UMAP(random_state=0).fit_transform(test_embeds)
</code>
<code>
# visualize umap
fig = px.scatter(x=embeds_umap[:, 0], y=embeds_umap[:, 1], color=test_labels, width=800, height=600,
labels={
"x": "UMAP1",
"y": "UMAP2",
"color": "Cell type"}
)
fig.show(renderer="colab")
</code>
We can also visualize and explore latent representations using TensorBoard, which provides a convenient interface for popular dimensionality reduction methods such as UMAP, TSNE, and PCA.
<code>
# visualization with tensorboard
writer = SummaryWriter("tensorboard/")
writer.add_embedding(test_embeds, metadata=test_labels)
# wait for saving files
time.sleep(3)
</code>
<code>
%tensorboard --host 0.0.0.0 --port 8000 --logdir tensorboard/
</code>
<code>
writer.close()
</code>
<code>
# kill tensorboard process
!kill $(ps -e | grep 'tensorboard' | awk '{print $1}')
</code>
## Summary
<p style="font-size:larger;">In this tutorial, we have built an autoencoder-based deep learning model for dimensionality reduction and visualization of single-cell CITE-seq data. We demonstrate that the integrative analysis of both transcriptomic and proteomic data achieves superior resolution in distinguishing between various immune cell types.</p>
|
{
"filename": "autoencoder_autoencoder_citeseq_saturn_3.ipynb",
"repository": "naity/citeseq",
"query": "transformed_from_existing",
"size": 31161,
"sha": ""
}
|
# QA_APP_RAG_NoteBook_1.ipynb
Repository: karthikbharadhwajKB/RAG
### RAG Application
<code>
# monitoring & tracing
import os
monitoring = True
if monitoring:
os.environ['LANGCHAIN_TRACING_V2'] = "true"
os.environ['LANGCHAIN_PROJECT'] = "Rag_App"
</code>
<code>
from dotenv import load_dotenv
# loading all the environment variables
load_dotenv()
</code>
### RAG Pipeline - Indexing + Retrieval + Generation
LLM Model
<code>
from langchain_openai import ChatOpenAI
chat_llm = ChatOpenAI()
</code>
#### 1. Indexing - Loading Documents
We first need to load the blog post contents. We can use `DocumentLoaders` for this, which are objects that load data from a source and return a list of `Documents`.
* A `Document` is an object with some `page_content` (str) and `metadata` (dict).
In this case we'll use the `WebBaseLoader`, which uses `urllib` to load HTML pages from web URLs and `BeautifulSoup` to parse them to text.
- We can customize the HTML -> text parsing by passing in parameters to the `BeautifulSoup` parser via `bs_kwargs`. In this case only HTML tags with class `post-content`, `post-title`, `post-header` are relevant, so we'll remove all others.
<code>
!pip install bs4 -qqq
</code>
<code>
import bs4
from langchain_community.document_loaders import WebBaseLoader
# only keep post content, title and header from the full HTML
bs4_strainer = bs4.SoupStrainer(class_=("post-title", "post-header", "post-content"))
# web loader
loader = WebBaseLoader(
web_path="https://lilianweng.github.io/posts/2023-06-23-agent/",
bs_kwargs={"parse_only": bs4_strainer},
)
</code>
<code>
# loading docs
docs = loader.load()
len(docs[0].page_content)
</code>
<code>
print(docs[0].page_content[:500])
</code>
#### 2. Indexing - Splitting
Our loaded document is over 42k characters long. This is too long to fit in the context window of many models. Even models that could fit the full post in their context window can struggle to find information in very long inputs.
To handle this we'll split the `Document` into `chunks` for embedding and vector storage. This should help us retrieve only the most relevant bits of the blog post at run time.
In this case we'll split our documents into chunks of 1000 characters with 200 characters of overlap between chunks. The overlap helps mitigate the possibility of separating a statement from important context related to it. We use the `RecursiveCharacterTextSplitter`, which will recursively split the documents using common separators like new lines until each chunk is the appropriate size. "This is the recommended text splitter for generic text use cases".
* We set `add_start_index=True` so that the character index at which each split Document starts within the initial Document is preserved as the metadata attribute `start_index`.
<code>
from langchain_text_splitters import RecursiveCharacterTextSplitter
# text splitter
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000, chunk_overlap=200, add_start_index=True
)
# splitting the text
all_splits = text_splitter.split_documents(docs)
len(all_splits)
</code>
<code>
len(all_splits[0].page_content)
</code>
<code>
all_splits[50].metadata
</code>
#### 3 Indexing - Store
Now we need to index our 66 text chunks so that we can search over them at runtime. The most common way to do this is to embed the contents of each document split and insert these embeddings into a vector database (or vector store).
When we want to search over our splits, we take a text search query, embed it and perform some sort of `similarity` search to identify the stored splits with the most similar embeddings to our query embedding. The simplest similarity measure is `cosine` similarity - we measure the cosine of the angle between each pair of embeddings.
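As a small aside (not part of the original notebook), the cosine similarity described above can be computed directly with NumPy; the toy vectors below stand in for a query embedding and a chunk embedding.
<code>
# Minimal sketch of cosine similarity between two toy embedding vectors (illustration only)
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query_embedding = np.array([0.1, 0.3, 0.5])    # hypothetical query embedding
chunk_embedding = np.array([0.2, 0.25, 0.55])  # hypothetical chunk embedding
print(cosine_similarity(query_embedding, chunk_embedding))  # values near 1.0 mean "very similar"
</code>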
<code>
!pip install langchain_chroma -qqq
</code>
<code>
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings
# storing our documents in a vector store
vector_store = Chroma.from_documents(
documents=all_splits,
embedding=OpenAIEmbeddings(),
)
</code>
This completes the `Indexing` portion of the pipeline. At this point we have a query-able vector store containing the chunked contents of our blog post. Given a user question, we should ideally be able to return the snippets of the blog post that answer the question.
#### 4. Retrieval and Generation: Retrieve
Now let's write the actual application logic. We want to create a simple application that takes a user question, searches for documents relevant to that question, passes the retrieved documents and initial question to a model, and returns an answer.
First we need to define our logic for searching over documents. Langchain defines a `Retriever` interface which wraps an index that can return relevant `Documents` given a string query.
The most common type of `Retriever` is the `VectorStoreRetriever`, which uses the similarity search capabilities of a vector store to facilitate retrieval.
* Any `VectorStore` can easily be turned into a `Retriever` with `VectorStore.as_retriever()`:
<code>
retriever = vector_store.as_retriever(search_type="similarity", search_kwargs={"k": 5})
# retrieved documents
retrieved_docs = retriever.invoke("what are the approaches to Task Decomposition?")
len(retrieved_docs)
</code>
<code>
retrieved_docs
</code>
<code>
print(retrieved_docs[0].page_content)
</code>
#### 5. Retrieval and Generation: Generate
Let's put it all together into a chain that takes a question, retrieves relevant documents, constructs a prompt, passes that to a model, and parses the output.
<code>
from langchain_openai import ChatOpenAI
llm = ChatOpenAI()
</code>
We'll use a prompt for RAG that is checked into the Langchain prompt hub.
<code>
!pip install langchainhub -qqq
</code>
<code>
from langchain import hub
# pulling RAG prompt from the hub
prompt = hub.pull('rlm/rag-prompt')
print(prompt)
</code>
<code>
print(prompt.messages[0].prompt.template)
</code>
<code>
example_message = prompt.invoke(
{"context": "filler context", "question": "filler question"}
).to_messages()
print(example_message)
</code>
<code>
print(example_message[0].content)
</code>
We'll use `LCEL Runnable` protocol to define the chain, allowing us to:
* Pipe together components and functions in a transparent way.
* Automatically trace our chain in LangSmith.
* Get streaming, async, and batched calling out of the box.
<code>
# formatting the retrieved documents
def format_docs(docs):
return "\n\n".join(doc.page_content for doc in docs)
sample_context = format_docs(all_splits[30:40])
sample_context
</code>
<code>
print(sample_context)
</code>
<code>
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
def format_docs(docs):
return "\n\n".join(doc.page_content for doc in docs)
output_parser = StrOutputParser()
# Rag chain
rag_chain = (
{"context": retriever | format_docs, "question": RunnablePassthrough()}
| prompt
| llm
| output_parser
)
</code>
<code>
# normal response
answer = rag_chain.invoke("what is Task Decomposition?")
print(answer)
</code>
<code>
# streamed response
for chunk in rag_chain.stream("what is Task Decomposition?"):
print(chunk, end="", flush=True)
</code>
<code>
# without output parser
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
def format_docs(docs):
return "\n\n".join(doc.page_content for doc in docs)
output_parser = StrOutputParser()
# Rag chain
rag_chain = (
{"context": retriever | format_docs, "question": RunnablePassthrough()}
| prompt
| llm
)
</code>
<code>
response = rag_chain.invoke("what is Task Decomposition?")
</code>
<code>
response.content
</code>
<code>
response.response_metadata
</code>
Let's dissect the LCEL to understand what's going on..
First: each of these components (`retriever`, `prompt`, `llm`, etc.) are instances of `Runnable`. This means that they implement the same methods -- such as synchronous and asynchronous `.invoke`, `.stream`, and `.batch` -- which makes them easier to connect together.
* They can be connected into a `RunnableSequence`-- another Runnable -- via the `|` operator.
* Langchain automatically casts certain objects to runnables when met with the `|` operator. Here, `format_docs` is cast to a `RunnableLambda`, and the dict with "context" and "question" is cast to a `RunnableParallel`. The details are less important than the bigger point, which is that each object is a `Runnable`.
Let's trace how the input question flows through the above runnables.
As we've seen above, the input to `prompt` is expected to be a dict with keys `"context"` and `"question"`. So the first element of this chain builds runnables that will calculate both of these from the input question:
* `retriever | format_docs` passes the question through the retriever, generating the `Document` objects, and then to `format_docs` to generate strings;
* `RunnablePassthrough()` passes through the input question unchanged.
That is, if you constructed:
<code>
chain = (
{"context": retriever | format_docs, "question": RunnablePassthrough()}
| prompt
)
</code>
Then `chain.invoke(question)` would build a formatted prompt, ready for inference. (Note: when developing with LCEL, it can be practical to test with sub-chains like this.)
The last steps of the chain are `llm`, which runs the inference, and `StrOutputParser()`, which just plucks the string content out of the LLM's output message.
* You can analyze the individual steps of this chain via its `LangSmith trace`.
<code>
constrcuted_prompt = chain.invoke("what is Task Decomposition?")
</code>
<code>
print(constrcuted_prompt.messages[0].content)
</code>
### Built-in chains
If preferred, LangChain includes convenience functions that implement the above LCEL. We compose two functions:
* `create_stuff_documents_chain`: specifies how retrieved context is fed into a prompt and LLM. In this case, we will `"stuff"` the contents into the prompts --i.e., we will include all retrieved context without any summarization or other processing. It largely implements our above `rag_chain`, with input keys `context` and `input` -- it generates an answer using retrieved context and query.
* `create_retrieval_chain`: adds the retrieval step and propagates the retrieved context through the chain, providing it alongside the final answer. It has input key `input`, and includes `context` and `answer` in its output.
<code>
from langchain.chains.retrieval import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate
# system prompt
system_prompt = (
"You are an assistant for question-answering tasks. "
"Use the following pieces of retrieved context to answer "
"the question. If you don't know the answer, say that you "
"don't know. Use three sentences maximum and keep the "
"answer concise."
"\n\n"
"{context}"
)
prompt = ChatPromptTemplate.from_messages(
messages=[
("system", system_prompt),
("human", "{input}")
]
)
# stuff documents chain
question_answer_chain = create_stuff_documents_chain(llm, prompt)
# # rag chain
rag_chain = create_retrieval_chain(retriever, question_answer_chain)
# response
response = rag_chain.invoke({"input":"what is Task Decomposition?"})
print(response)
print('Answer: ',response['answer'],"\n\n")
</code>
<code>
for doc in response['context']:
print(doc)
print()
</code>
<code>
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate
template = """
You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.
Context: {context}
Question: {input}
Answer:
"""
# prompt template
prompt = ChatPromptTemplate.from_template(template)
question_answer_chain = create_stuff_documents_chain(llm, prompt)
rag_chain = create_retrieval_chain(retriever, question_answer_chain)
# response
response = rag_chain.invoke({"input":"what is Task Decomposition?"})
print(response)
print('Answer: ',response['answer'],"\n\n")
</code>
<code>
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate
prompt = hub.pull('rlm/rag-prompt')
print(prompt)
prompt.input_variables[1] = "input"  # note: this only renames the entry in the input_variables list; the underlying message template still uses {question}
print(prompt)
# question_answer_chain = create_stuff_documents_chain(llm, prompt)
# rag_chain = create_retrieval_chain(retriever, question_answer_chain)
# # response
# response = rag_chain.invoke({"input":"what is Task Decomposition?"})
# print(response)
# print('Answer: ',response['answer'],"\n\n")
</code>
<code>
question = "what is Task Decomposition?"
result = rag_chain.invoke({"input": question})
</code>
<code>
result
</code>
<code>
def extract_source(result):
    # collect the source metadata for each retrieved document
    sources = []
    for doc in result['context']:
        sources.append(doc.metadata['source'])
    return sources
</code>
<code>
extract_source(result)
</code>
|
{
"filename": "QA_APP_RAG_NoteBook_1.ipynb",
"repository": "karthikbharadhwajKB/RAG",
"query": "transformed_from_existing",
"size": 69918,
"sha": ""
}
|
# cellxgene_nexus_index_2.ipynb
Repository: BiomedSciAI/biomed-multi-omic
# Create split index for CellXGeneNexusDataModule
The NexusDB data-loader consists of two layers: a front-end and a back-end. The front-end serves data to multiple node GPUs, while the back-end is responsible for data storage. We use the universal data storage engine [TileDB](https://tiledb.com/) as our back-end. For distributed data parallel training, the front-end is based on the [LitData package](https://github.com/Lightning-AI/litdata). NexusDB supports indexing to reuse the same dataset files for multiple training splits and works with the existing dataset [CELLxGENE Census](https://chanzuckerberg.github.io/cellxgene-census/), which is based on [TileDB-SOMA](https://github.com/single-cell-data/TileDB-SOMA).
This notebook is designed to show how to generate indexes for NexusDB.
*WARNING*: This notebook is not intended to be run multiple times. The indices created are shared by all users and should not be rebuilt unless an error is discovered.
## `dataset_id`-level split for cellxgene
First, refer to the `cellxgene_dataset_split` notebook to learn about the dataset-id split. The code reuses `celltypes_split.csv` to generate the train and dev splits. The cell below generates a new index in the `cellxgene_nexus_index` folder.
<code>
from pathlib import Path
from bmfm_targets.datasets.cellxgene import create_litdata_index_for_dataset_split
# CCC URI
uri = "/dccstor/bmfm-targets/data/omics/transcriptome/scRNA/pretrain/cellxgene/soma-2023-12-15"
# ZUVELA URI
uri='/proj/bmfm/omics/data/scRNA/cellxgene/2025-01-30/soma'
</code>
<code>
full_label_columns = ['dataset_id', 'assay', 'assay_ontology_term_id',
'cell_type', 'cell_type_ontology_term_id', 'development_stage',
'development_stage_ontology_term_id', 'disease',
'disease_ontology_term_id', 'donor_id',
'self_reported_ethnicity', 'self_reported_ethnicity_ontology_term_id',
'sex', 'sex_ontology_term_id', 'suspension_type', 'tissue',
'tissue_ontology_term_id', 'tissue_general',
'tissue_general_ontology_term_id']
limited_label_columns = ['cell_type', 'tissue', 'tissue_general', 'disease', 'donor_id', "sex"]
</code>
## Human data index creation
The following code will create an index for all of the human data
<code>
create_litdata_index_for_dataset_split(uri=uri, index_dir="cellxgene_nexus_index")
</code>
Example of creating an index with 10% random samples
<code>
create_litdata_index_for_dataset_split(
uri=uri,
value_filter="scTab",
index_dir="cellxgene_random_10pct_nexus_index",
sampling_strategy="random",
sampling_fraction=0.1,
)
</code>
<code>
create_litdata_index_for_dataset_split(
uri=uri,
experiment='homo_sapiens',
value_filter="scTab",
census_version="2025-01-30",
index_dir="/proj/bmfm/omics/data/scRNA/cellxgene/2025-01-30/cellxgene_equal_downsample_1pct_nexus_index",
sampling_strategy="equal_downsample",
sampling_fraction=0.01,
label_columns=['cell_type', 'tissue', 'tissue_general', 'disease', 'donor_id', "sex"],
groupby_columns=['cell_type', 'tissue', 'tissue_general', 'disease']
)
</code>
<code>
create_litdata_index_for_dataset_split(
uri=uri,
experiment='homo_sapiens',
value_filter="scTab",
census_version="2025-01-30",
index_dir="/proj/bmfm/omics/data/scRNA/cellxgene/2025-01-30/cellxgene_equal_downsample_10pct_nexus_index",
sampling_strategy="equal_downsample",
sampling_fraction=0.1,
label_columns=['cell_type', 'tissue', 'tissue_general', 'disease', 'donor_id', "sex"],
groupby_columns=['cell_type', 'tissue', 'tissue_general', 'disease']
)
</code>
## Mouse index creation
<code>
create_litdata_index_for_dataset_split(
uri=uri,
experiment='mus_musculus',
dataset_split_file=Path().cwd() / "celltypes_split_mouse.csv",
value_filter="scTab",
census_version="2025-01-30",
index_dir="/proj/bmfm/omics/data/scRNA/cellxgene/2025-01-30/cellxgene_random_mouse_10pct_nexus_index",
sampling_strategy="random",
sampling_fraction=0.1,
)
</code>
<code>
create_litdata_index_for_dataset_split(
uri=uri,
experiment='mus_musculus',
dataset_split_file=Path().cwd() / "celltypes_split_mouse.csv",
value_filter="scTab",
census_version="2025-01-30",
index_dir="/proj/bmfm/omics/data/scRNA/cellxgene/2025-01-30/cellxgene_random_mouse_1pct_nexus_index",
sampling_strategy="random",
sampling_fraction=0.01,
)
</code>
<code>
create_litdata_index_for_dataset_split(
uri=uri,
experiment='mus_musculus',
dataset_split_file=Path().cwd() / "celltypes_split_mouse.csv",
value_filter="scTab",
census_version="2025-01-30",
index_dir="/proj/bmfm/omics/data/scRNA/cellxgene/2025-01-30/cellxgene_mouse_nexus_index",
)
</code>
<code>
from bmfm_targets.datasets.cellxgene.cellxgene_soma_utils import get_obs_as_pandas
</code>
<code>
obs = get_obs_as_pandas(uri=uri,
experiment="homo_sapiens",
value_filter="is_primary_data == True",
census_version="2025-01-30",
)
</code>
<code>
obs.iloc[0]
</code>
<code>
from bmfm_targets.datasets.datasets_utils import equal_samples_per_set_downsample
</code>
<code>
groupby_columns= ["cell_type", "disease", "tissue"]
quota_ds = equal_samples_per_set_downsample(obs, groupby_columns=groupby_columns, frac=0.01)
</code>
<code>
quota_ds_10pct = equal_samples_per_set_downsample(obs, groupby_columns=groupby_columns, frac=0.1)
</code>
<code>
quota_ds.shape
</code>
<code>
quota_ds.shape[0] / quota_ds.groupby(groupby_columns, observed=True).size().shape[0]
</code>
<code>
quota_ds_10pct.shape[0] / quota_ds_10pct.groupby(groupby_columns, observed=True).size().shape[0]
</code>
<code>
import matplotlib.pyplot as plt
#quota_ds.groupby(groupby_columns, observed=True).size().hist(bins=70)
quota_ds_10pct.groupby(groupby_columns, observed=True).size().hist(bins=100,ax=plt.gca())
plt.title("CellXGene Homo Sapiens 10pct downsample on columns " + ", ".join(groupby_columns))
plt.xlabel("Samples in group")
plt.ylabel("Number of groups")
plt.tight_layout()
</code>
## Create short index for debugging purposes
<code>
import os
import shutil
from bmfm_targets.datasets.cellxgene.cellxgene_soma_utils import build_range_index
uri = "/dccstor/bmfm-targets/data/omics/transcriptome/scRNA/pretrain/cellxgene/soma-2023-12-15"
index_dir="cellxgene_debug_nexus_index"
os.mkdir(index_dir)
train_index_dir = os.path.join(index_dir, "train")
build_range_index(
uri,
train_index_dir,
n_records=32,
chunk_size=8,
label_columns=["cell_type", "tissue"],
value_filter="is_primary_data == True and nnz <= 512",
)
shutil.copytree(train_index_dir, os.path.join(index_dir, "dev"), dirs_exist_ok=True)
</code>
|
{
"filename": "cellxgene_nexus_index_2.ipynb",
"repository": "BiomedSciAI/biomed-multi-omic",
"query": "transformed_from_existing",
"size": 59021,
"sha": ""
}
|
# Figure10g_Random_Current_1.ipynb
Repository: Fw-Franz/Volvox
# Import packages and intilize functions
<code>
from __future__ import division, unicode_literals, print_function # for compatibility with Python 2 and 3
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.font_manager as font_manager
import os
from pathlib import Path
import numpy as np
import pandas as pd
from pandas import DataFrame, Series # for convenience
import pims
import trackpy as tp
# change the following to %matplotlib notebook for interactive plotting
%matplotlib inline
# Optionally, tweak styles.
mpl.rc('figure', figsize=(10, 5))
mpl.rc('image', cmap='gray')
font = font_manager.FontProperties(family='Candara', math_fontfamily='custom', size=18)
titlefont = {'fontname':'Candara',
'size' : 18}
figurefont = {'fontname':'Candara',
'size' : 16}
tickfont = {'fontname':'Candara',
'size' : 14}
# Add the font file
font_path = r"../Code/candara-font-family/Candara.ttf"
font_path_full = os.path.abspath(font_path)
font_manager.fontManager.addfont(font_path_full)
plt.rcParams['mathtext.fontset'] = 'custom' # supported values are ['dejavusans', 'dejavuserif', 'cm', 'stix', 'stixsans', 'custom']
plt.rcParams['mathtext.it'] = 'Candara:italic'
print('ready to go')
@pims.pipeline
def gray(image):
return image[:, :] # Take just the green channel
</code>
# Parameters to define
<code>
## 1) Parameters governing loading and tracking
############################################
# Do the tracking, linking and filtering. Set these to False if you already performed the tracking and just want to analyze/plot differently
perform_tracking = False
perform_linking_trajectories = False
perform_filtering = False
# Directories
data_path = r"../Data/Figure10_Electroshocked_Phototaxis/lightSwtiching60sec_random_current3V"
directory = os.path.abspath(data_path)
save_path = r"../Code/Graphs"
save_dir = os.path.abspath(save_path)
if not os.path.exists(save_dir):
# Create the directory if it does not exist
os.makedirs(save_dir)
compress_images=True # whether to compress images for better and faster trajectory linking
compression_ratio=0.25 # compression ratio (0.1 means images are resized to a tenth of both x and y dimensions)
compresion_directory_name_ending='_compressed_0p25'
# Note 1: If you have already created the compressed images (because you have run it before), no need to change anything:
# it won't do it again and will just use those images. If you want to recompress images, just delete the directory
# that contains the previously compressed images.
# Note 2: For all the pixel sizes, midpoint calculations and chamber dimensions, use the original uncompressed image;
# the script will automatically recalculate those based on the ratio. You might have to adjust the filter
# properties though (see 2) below), as those are not that linearly calculated.
analyze_subset=True # if this is True, it will only analyze frames starting at frames_to_analyze_start
# going to frames_to_analyze_end. Check the image sequence for any movement of the chamber at the beginning or end.
frames_to_analyze_start=1 # first frame to analyze
frames_to_analyze_end=2990 # last frame to analyze
minimal_mass_base=1000 # threshold to use for mass (total brightness of a particle = number of pixels * their intensities)
# so for example, a 3x3 pixel sized Volvox with each pixel at a max intensity of 255 (8-bit) would have a mass of 9*255=2295
Volvox_pixel_size_estimate=11 # err on the larger side
link_distance_pixels=15 # number of pixels to try to link trajectories between frames (=distance Volvox move between frames)
# need only be larger than 10 if we are undersampling. The larger this distance, the longer the analysis takes.
link_memory=5 # number of frames to try to link trajectories (=number of frames a Volvox may disappear for)
# need only be larger than 5 for partial illumination conditions like the fixed and random series, but 5 should work too.
# again, the higher this value, the longer the analysis takes, and very large values will actually cause an error.
## 2) Parameters governing filtering trajectories by various characteristics
#########################################################################
filter_stubs_size=5 # filter out trajectories of things that only appear for x number of frames
# for mass and size, look at the plot and ImageJ
minimal_mass=1000 # threshold to use for mass (total brightness of a particle = number of pixels * their intensities)
filter_by_size=False # True or False to filter by size
min_size=2
max_size=5
filter_by_ecc=False # True or False to filter by eccentricity (non circular shape)
max_ecc=0.3
filter_by_stds=True # True or False to filter by how much variation you would expect in position in x
# This filters out non-moving objects.
filter_std=5
filter_by_frame_count=False # True or False to filter by how many frames are contained in one trajectory
# This filters out Volvox that disappeared, but also removes broken up trajectories (i.e. if the linking failed)
filter_frame_count=50
## 3) Parameters for counting and plotting
#########################################################################
midpoint_x=2070 # midpoint in pixel x coordinates of the chamber width in the original, uncompressed image
midpoint_y=1120 # midpoint in pixel y coordinates of the chamber height in the original, uncompressed image
quartiles=True # if True, it counts Volvox in the quartile ends of the chamber instead of halves for more pronounced bias counts
chamber_width=3000 # chamber width in pixels of the chamber in the original, uncompressed image
chamber_height=2000 # chamber height in pixels of the chamber in the original, uncompressed image
save_plots=True # Do you want to save the plots?
gaussian_smooth=True # True or False for gaussian smoothing of the Volvox count graph over time
sigma=15 # frames to average over for gaussian smoothing
fps=5 # frames per second of video taking (as in what was your framerate during acquisition)
plot_left_right_bias=False # True or False to plot the left-right bias counts
plot_top_bottom_bias=True # True or False to plot the top-bottom bias counts
# Labels for the individual stimulation conditions (light pattern, anode/cathode side etc.)
label_left="Right half"
label_right="Left half"
label_top="Top Light"
label_bottom="Bottom Light"
</code>
# Compress Images
<code>
from PIL import Image, ImageDraw
base_dir=str(Path(directory).parents[0])
sub_folder_name=os.path.basename(directory)
if compress_images and perform_tracking:
directory_new=directory+compresion_directory_name_ending
if os.path.exists(directory_new)==False:
os.mkdir(directory_new)
for file_name in os.listdir(directory):
#print("Processing %s" % file_name)
img = Image.open(os.path.join(directory, file_name))
x,y = img.size
        output = img.resize((int(x * compression_ratio), int(y * compression_ratio)), Image.LANCZOS)  # LANCZOS replaces the removed ANTIALIAS alias in recent Pillow versions
output_file_name = os.path.join(directory_new, file_name)
output.save(output_file_name, "tiff", quality = 95)
</code>
# Get trajectories
<code>
if not compress_images:
directory_new=directory
else:
directory_new=directory+compresion_directory_name_ending
base_dir=str(Path(directory_new).parents[0])
sub_folder_name=os.path.basename(directory_new)
if perform_tracking:
print(directory_new)
frames = gray(pims.open(directory_new+'/*.tiff'))
plt.imshow(frames[frames_to_analyze_start])
f = tp.locate(frames[frames_to_analyze_start], int(Volvox_pixel_size_estimate), invert=False)
f.head()
tp.annotate(f, frames[frames_to_analyze_start]);
if analyze_subset:
f = tp.batch(frames[frames_to_analyze_start:frames_to_analyze_end], int(Volvox_pixel_size_estimate), minmass=minimal_mass_base, invert=False);
else:
f = tp.batch(frames, int(Volvox_pixel_size_estimate), minmass=minimal_mass, invert=False);
base_dir=str(Path(directory_new).parents[0])
sub_folder_name=os.path.basename(directory_new)
f.to_csv(base_dir+'\\'+sub_folder_name+'_frames.csv')
print(f.columns)
elif perform_linking_trajectories:
f=pd.read_csv(base_dir+'\\'+sub_folder_name+'_frames.csv')
</code>
# Link trajectories
<code>
if perform_linking_trajectories:
t = tp.link_df(f, link_distance_pixels , memory=link_memory)
t.to_csv(base_dir+'\\'+sub_folder_name+'_trajectories_raw.csv')
elif perform_filtering:
t=pd.read_csv(base_dir+'\\'+sub_folder_name+'_trajectories_raw.csv')
</code>
# Filter particle trajectory by size, mass, eccentricity, stds, and frame count
<code>
import math
if perform_filtering:
t.head()
t1 = tp.filter_stubs(t, filter_stubs_size)
plt.figure()
tp.mass_size(t1.groupby('particle').mean()); # convenience function -- just plots size vs. mass
if filter_by_size:
if filter_by_ecc:
t2 = t1[((t1['mass'] > minimal_mass) & (t1['size'] > min_size)& (t1['size'] < max_size) & (t1['ecc'] < max_ecc))]
else:
t2 = t1[((t1['mass'] > minimal_mass) & (t1['size'] > min_size)& (t1['size'] < max_size))]
else:
if filter_by_ecc:
t2 = t1[((t1['mass'] > minimal_mass) & (t1['ecc'] < max_ecc))]
else:
t2 = t1[((t1['mass'] > minimal_mass))]
print('particles filter by size/mass/ecc =', len((list(set(t1.particle))))-len((list(set(t2.particle)))), 'out of', len((list(set(t1.particle)))))
if filter_by_stds:
t2p5=t2.copy()
std=t2.groupby(['particle']).std()
std.reset_index(level=0, inplace=True)
list_set = set(t2.particle)
unique_list = (list(list_set))
unique_list.sort()
for i in unique_list:
std_x=float(std.loc[std.particle==i,'x'])
if std_x<filter_std:
t2p5=t2p5[t2p5['particle'] != i]
elif math.isnan(std_x):
t2p5=t2p5[t2p5['particle'] != i]
len((list(set(t2p5.particle))))
print('particles filter by stds =', len((list(set(t2.particle))))-len((list(set(t2p5.particle)))), 'out of', len((list(set(t2.particle)))))
t2=t2p5.copy()
t3=t2.copy()
if filter_by_frame_count:
list_set = set(t3.frame)
unique_list = (list(list_set))
unique_list.sort()
for i in unique_list:
t3i=t3[t3.frame==i]
if t3i.frame.count()<filter_frame_count:
t3=t3[t3['frame'] != i]
print('particles filter by frame count =', len((list(set(t2.particle))))-len((list(set(t3.particle)))), 'out of', len((list(set(t2.particle)))))
plt.figure()
tp.annotate(t3[t3['frame'] == frames_to_analyze_start], frames[frames_to_analyze_start]);
plt.figure()
tp.plot_traj(t3);
t4=t3.copy()
t4.to_csv(base_dir+'\\'+sub_folder_name+'_trajectories_filtered.csv')
else:
t4=pd.read_csv(base_dir+'\\'+sub_folder_name+'_trajectories_filtered.csv')
list_set = set(t4.frame)
unique_list = (list(list_set))
unique_list.sort()
</code>
# Count particles in each frame in each half of the chamber
<code>
l=np.zeros(len(unique_list))
r=np.zeros(len(unique_list))
t=np.zeros(len(unique_list))
b=np.zeros(len(unique_list))
if not compress_images:
compression_ratio=1
if quartiles:
x1=int(midpoint_x*compression_ratio)-int(chamber_width*compression_ratio/4)
x2=int(midpoint_x*compression_ratio)+int(chamber_width*compression_ratio/4)
y1=int(midpoint_y*compression_ratio)-int(chamber_height*compression_ratio/4)
y2=int(midpoint_y*compression_ratio)+int(chamber_height*compression_ratio/4)
else:
x1=int(midpoint_x*compression_ratio)
x2=int(midpoint_x*compression_ratio)-1
y1=int(midpoint_y*compression_ratio)
y2=int(midpoint_y*compression_ratio)-1
# print(unique_list)
ii=0
for i in unique_list:
t5=t4.loc[t4.frame==i]
t_left=t5.loc[t5.x<x1]
t_right=t5.loc[t5.x>x2]
t_top=t5.loc[t5.y<y1]
t_bottom=t5.loc[t5.y>y2]
l[ii]=t_left.shape[0]
r[ii]=t_right.shape[0]
t[ii]=t_top.shape[0]
b[ii]=t_bottom.shape[0]
ii=ii+1
print('mean # left : ',l.mean())
print('mean # right : ',r.mean())
print('mean # top : ',t.mean())
print('mean # bottom : ',b.mean())
</code>
# Smooth and plot trajectories
<code>
import scipy as sp
if gaussian_smooth:
l=sp.ndimage.gaussian_filter1d(l,sigma)
r=sp.ndimage.gaussian_filter1d(r,sigma)
t=sp.ndimage.gaussian_filter1d(t,sigma)
b=sp.ndimage.gaussian_filter1d(b,sigma)
unique_list_time=np.array(unique_list)/fps
fig=plt.figure(figsize=(7,4))
if plot_left_right_bias:
ax=plt.plot(unique_list_time, l, "-b", label=label_left)
ax=plt.plot(unique_list_time, r, "--b", label=label_right)
if plot_top_bottom_bias:
ax=plt.plot(unique_list_time, t, "-r", label=label_top)
ax=plt.plot(unique_list_time, b, "--r", label=label_bottom)
ax=plt.legend(bbox_to_anchor=(1,1), loc="upper left")
ax=plt.xlabel('time (in s)')
ax=plt.ylabel('number of Volvox')
plt.show()
save_path=save_dir+'\\'+sub_folder_name+'_Volvox_counts_line_graph.png'
if save_plots:
fig.savefig(save_path, bbox_inches='tight')
</code>
# Half only - Counting and Plotting Combined
<code>
l=np.zeros(len(unique_list))
r=np.zeros(len(unique_list))
t=np.zeros(len(unique_list))
b=np.zeros(len(unique_list))
if not compress_images:
compression_ratio=1
x1=int(midpoint_x*compression_ratio)
x2=int(midpoint_x*compression_ratio)-1
y1=int(midpoint_y*compression_ratio)
y2=int(midpoint_y*compression_ratio)-1
# print(unique_list)
ii=0
for i in unique_list:
t5=t4.loc[t4.frame==i]
t_left=t5.loc[t5.x<x1]
t_right=t5.loc[t5.x>x2]
t_top=t5.loc[t5.y<y1]
t_bottom=t5.loc[t5.y>y2]
l[ii]=t_left.shape[0]
r[ii]=t_right.shape[0]
t[ii]=t_top.shape[0]
b[ii]=t_bottom.shape[0]
ii=ii+1
print('mean # left : ',l.mean())
print('mean # right : ',r.mean())
print('mean # top : ',t.mean())
print('mean # bottom : ',b.mean())
import scipy as sp
if gaussian_smooth:
l=sp.ndimage.gaussian_filter1d(l,sigma)
r=sp.ndimage.gaussian_filter1d(r,sigma)
t=sp.ndimage.gaussian_filter1d(t,sigma)
b=sp.ndimage.gaussian_filter1d(b,sigma)
unique_list_time=np.array(unique_list)/fps
fig=plt.figure(figsize=(7,4))
if plot_left_right_bias:
ax=plt.plot(unique_list_time, l, "-b", label=label_left)
ax=plt.plot(unique_list_time, r, "--b", label=label_right)
if plot_top_bottom_bias:
ax=plt.plot(unique_list_time, t, "-r", label=label_top)
ax=plt.plot(unique_list_time, b, "--r", label=label_bottom)
ax=plt.legend(bbox_to_anchor=(1,1), loc="upper left")
ax=plt.xlabel('time (in s)')
ax=plt.ylabel('number of Volvox')
plt.show()
save_path=save_dir+'\\'+sub_folder_name+'_Volvox_counts_line_graph_halfs.png'
if save_plots:
fig.savefig(save_path, bbox_inches='tight')
</code>
# Quartile only - Counting and Plotting Combined
<code>
l=np.zeros(len(unique_list))
r=np.zeros(len(unique_list))
t=np.zeros(len(unique_list))
b=np.zeros(len(unique_list))
if not compress_images:
compression_ratio=1
x1=int(midpoint_x*compression_ratio)-int(chamber_width*compression_ratio/4)
x2=int(midpoint_x*compression_ratio)+int(chamber_width*compression_ratio/4)
y1=int(midpoint_y*compression_ratio)-int(chamber_height*compression_ratio/4)
y2=int(midpoint_y*compression_ratio)+int(chamber_height*compression_ratio/4)
# print(unique_list)
ii=0
for i in unique_list:
t5=t4.loc[t4.frame==i]
t_left=t5.loc[t5.x<x1]
t_right=t5.loc[t5.x>x2]
t_top=t5.loc[t5.y<y1]
t_bottom=t5.loc[t5.y>y2]
l[ii]=t_left.shape[0]
r[ii]=t_right.shape[0]
t[ii]=t_top.shape[0]
b[ii]=t_bottom.shape[0]
ii=ii+1
print('mean # left : ',l.mean())
print('mean # right : ',r.mean())
print('mean # top : ',t.mean())
print('mean # bottom : ',b.mean())
import scipy as sp
if gaussian_smooth:
l=sp.ndimage.gaussian_filter1d(l,sigma)
r=sp.ndimage.gaussian_filter1d(r,sigma)
t=sp.ndimage.gaussian_filter1d(t,sigma)
b=sp.ndimage.gaussian_filter1d(b,sigma)
unique_list_time=np.array(unique_list)/fps
fig=plt.figure(figsize=(7,4))
if plot_left_right_bias:
ax=plt.plot(unique_list_time, l, "-b", label=label_left)
ax=plt.plot(unique_list_time, r, "--b", label=label_right)
if plot_top_bottom_bias:
ax=plt.plot(unique_list_time, t, "-b", label=label_top)
ax=plt.plot(unique_list_time, b, "-r", label=label_bottom)
ax=plt.legend(bbox_to_anchor=(1,1), loc="upper left")
ax=plt.xlabel('time (in s)')
ax=plt.ylabel('number of Volvox')
plt.show()
save_path=save_dir+'\\'+sub_folder_name+'_Volvox_counts_line_graph_quartiles2.png'
if save_plots:
fig.savefig(save_path, bbox_inches='tight')
</code>
|
{
"filename": "Figure10g_Random_Current_1.ipynb",
"repository": "Fw-Franz/Volvox",
"query": "transformed_from_existing",
"size": 26238,
"sha": ""
}
|
# distributed-end-to-end-flow.ipynb
Repository: aws-samples/sagemaker-distributed-training-digital-pathology-images
# Distributed training of tissue slide images using SageMaker and Horovod
## Background
Neural networks have proven effective at solving complex computer vision tasks such as object detection, image similarity, and classification. With the evolution of low cost GPUs, the computational cost of building and deploying a neural network has drastically reduced. However, most of the techniques are designed to handle pixel resolutions commonly found in visual media; for example, typical resolution sizes are 544 and 416 pixels for YOLOv3, 300 and 512 pixels for SSD, and 224 pixels for VGG. Training a classifier over a dataset consisting of gigapixel images (10^9+ pixels) such as satellite, CT, or pathology images is computationally challenging. These images cannot be directly input to a neural network due to their size, as each GPU is limited by available memory. This requires specific pre-processing techniques such as tiling to be able to process the original images in smaller chunks. Furthermore, due to the large size of these images, the overall training time tends to be high, often requiring several days to weeks without the use of proper scaling techniques such as distributed training.
In this notebook, using detection of cancer from tissue slide images as our use-case, we will deploy a highly scalable machine learning pipeline to:
* Pre-process gigapixel images by tiling, zooming, and sorting them into train and test splits using Amazon SageMaker Processing.
* Train an image classifier on pre-processed tiled images using Amazon SageMaker, Horovod and SageMaker Pipe mode.
* Deploy a pre-trained model as an API using Amazon SageMaker.
## Setup
### Install library for visualizing SVS images
First, we install the `slideio` package for visualizing our digital pathology images.
<code>
!pip install slideio===0.5.225
!mkdir -p images
</code>
### Imports
Here we import the necessary libraries to interact with SageMaker. We define our execution role, region, and the name of the S3 bucket in the account to which the tissue slide images will be downloaded. We also create our SageMaker session.
<code>
import boto3
import sagemaker
from sagemaker.processing import Processor, ProcessingInput, ProcessingOutput
from sagemaker import get_execution_role
from sagemaker.tensorflow import TensorFlow
from sagemaker.tensorflow.model import TensorFlowModel
from sagemaker.session import s3_input
role = get_execution_role()
region = boto3.Session().region_name
bucket = 'tcga-data' # Please specify the bucket where the SVS images are downloaded
sagemaker_session = sagemaker.Session()
</code>
Next, we'll import the Python libraries we'll need for the remainder of the exercise.
<code>
import os
import slideio
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
</code>
### TCGA SVS files
In this blog, we will be using a dataset consisting of whole-slide images obtained from The Cancer Genome Atlas (https://www.cancer.gov/about-nci/organization/ccg/research/structural-genomics/tcga) (TCGA) to accurately and automatically classify them into LUAD (adenocarcinoma), LUSC (squamous cell carcinoma), or normal lung tissue, where LUAD and LUSC are the two most prevalent subtypes of lung cancer. The dataset is available for public use by NIH and NCI. Instructions for downloading data are provided here (http://www.andrewjanowczyk.com/download-tcga-digital-pathology-images-ffpe/). The raw high resolution images are in SVS (https://openslide.org/formats/aperio/) format. SVS files are used for archiving and analyzing Aperio microscope images. The techniques and tools used in this blog can be applied to any ultra-high resolution image datasets such as MRI, CT scans, and satellite images.
Please refer to README file for instructions on downloading SVS images from TCGA. Before running the next cell, make sure to create a folder called `tcga-svs` within the S3 bucket specified above and download the SVS image data to that location.
The output of the next cell contains a sample image of a tissue slide. Notice that this single image contains over a quarter million pixels and occupies over 750 MB of memory. This image cannot be fed directly into a neural network in its original form, and therefore it is necessary to tile the image into many smaller images.
<code>
# Download sample svs file from S3
s3 = boto3.resource('s3', region_name=region)
image_file = 'TCGA-55-8514-01A-01-TS1.0e0f5cf3-96e9-4a35-aaed-4340df78d389.svs'
key = f'tcga-svs/0000b231-7c05-4e2e-8c9e-6d0675bfbb34/{image_file}'
s3.Bucket(bucket).download_file(key, f'./images/{image_file}')
# Read svs image
slide = slideio.open_slide(path=f"./images/{image_file}", driver="SVS")
scene = slide.get_scene(0)
block = scene.read_block()
# Display image
plt.imshow(block,aspect="auto")
plt.show()
</code>
## Build Docker container for preprocessing SVS files into TFRecords
### Dockerfile
Visualize the Docker file that defines the container to be used by SageMaker Processing.
<code>
!pygmentize Dockerfile
</code>
### Python script for preprocessing
Visualize the python script that orchestrates the preprocessing of the images within the Docker container.
<code>
!pygmentize src/script.py
</code>
### Build container and upload it to ECR
Build and push the Docker image to Amazon's Elastic Container Registry (ECR) so that it can be used by SageMaker Processing.
<code>
from docker_utils import build_and_push_docker_image
repository_short_name = 'tcga-tissue-slides-preprocess'
image_name = build_and_push_docker_image(repository_short_name)
</code>
## Launch SageMaker Processing Job
Now we are ready to launch the SageMaker Processing job on our images. The SVS slide images are pre-processed in three steps.
* *Tiling images*: The images are tiled by non-overlapping 512×512-pixel windows, and tiles containing over 50% background are discarded. The tiles are stored as JPEG images (a minimal tiling sketch follows this list).
* *Converting images to TFRecords*: We use SageMaker Pipe Mode to reduce our training time, which requires the data to be available in a proto-buffer format. TFRecord is a popular proto-buffer format used for training models with TensorFlow. SageMaker Pipe Mode and proto-buffer format are explained in detail in the following section
* *Sorting TFRecords:* We sort the dataset into test, train and validation cohorts for a 3-way classifier (LUAD/LUSC/Normal). In the TCGA dataset, there can be multiple slide images corresponding to a single patient. We need to make sure that all tiles generated from slides corresponding to the same patient occupy the same split to avoid data leakage. For the test set, we create per-slide TFRecords containing all of the tiles from that slide so that we may evaluate the model in the way it will eventually be realistically deployed.
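The actual tiling logic runs inside the SageMaker Processing container (shown earlier via `pygmentize src/script.py`). As a rough illustration of the first step only, the sketch below cuts an RGB image array into non-overlapping 512×512 tiles and discards tiles that are mostly background. The 512×512 window and the 50% threshold come from the description above; treating bright (near-white) pixels as background, and the `230` intensity cutoff, are assumptions made for this illustration.
<code>
# Rough tiling sketch (illustration only; the real preprocessing lives in src/script.py).
# Assumption: background pixels are bright/near-white, so a tile where more than 50% of
# pixels exceed an intensity cutoff is treated as mostly background and dropped.
import numpy as np

TILE_SIZE = 512
BACKGROUND_CUTOFF = 230  # assumed intensity threshold for "background" pixels

def tile_image(image):
    """Yield (row, col, tile) for non-overlapping 512x512 tiles that are less than 50% background."""
    height, width = image.shape[:2]
    for r in range(0, height - TILE_SIZE + 1, TILE_SIZE):
        for c in range(0, width - TILE_SIZE + 1, TILE_SIZE):
            tile = image[r:r + TILE_SIZE, c:c + TILE_SIZE]
            background_fraction = (tile.mean(axis=-1) > BACKGROUND_CUTOFF).mean()
            if background_fraction <= 0.5:
                yield r, c, tile

# Example usage with the `block` array read from the sample SVS file above:
# kept_tiles = list(tile_image(block))
</code>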
<code>
processor = Processor(image_uri=image_name,
role=get_execution_role(),
instance_count=16, # run the job on 16 instances
base_job_name='processing-base', # should be unique name
instance_type='ml.m5.4xlarge',
volume_size_in_gb=1000)
processor.run(inputs=[ProcessingInput(
source=f's3://{bucket}/tcga-svs', # s3 input prefix
s3_data_type='S3Prefix',
s3_input_mode='File',
s3_data_distribution_type='ShardedByS3Key', # Split the data across instances
destination='/opt/ml/processing/input')], # local path on the container
outputs=[ProcessingOutput(
source='/opt/ml/processing/output', # local output path on the container
destination=f's3://{bucket}/tcga-svs-tfrecords/' # output s3 location
)],
arguments=['10000'], # number of tiled images per TF record for training dataset
wait=True,
logs=True)
</code>
### Visualize tiled images stored within TFRecords
Here are samples of tiled images generated after pre-processing the above tissue slide image. These 3-channel RGB images are 512×512 pixels and can be used directly as inputs to a neural network. Each tiled image is assigned the same label as the parent slide. Additionally, tiled images with more than 50% background are discarded.
<code>
%matplotlib inline
print(tf.__version__)
print(tf.executing_eagerly())
HEIGHT=512
WIDTH=512
DEPTH=3
NUM_CLASSES=3
def dataset_parser(value):
image_feature_description = {
'label': tf.io.FixedLenFeature([], tf.int64),
'image_raw': tf.io.FixedLenFeature([], tf.string),
'slide_string': tf.io.FixedLenFeature([], tf.string)
}
record = tf.io.parse_single_example(value, image_feature_description)
image = tf.io.decode_raw(record['image_raw'], tf.float32)
image = tf.cast(image, tf.float32)
image.set_shape([DEPTH * HEIGHT * WIDTH])
image = tf.cast(tf.reshape(image, [HEIGHT, WIDTH, DEPTH]), tf.float32)
label = tf.cast(record['label'], tf.int32)
slide = record['slide_string']
return image, label, slide
# List first 10 tiled images
key = 'tcga-svs-tfrecords/test'
file = [f for f in s3.Bucket(bucket).objects.filter(Prefix=key).limit(1)][0]
local_file = file.key.split('/')[-1]
s3.Bucket(bucket).download_file(file.key, f'./images/{local_file}')
raw_image_dataset = tf.data.TFRecordDataset(f'./images/{local_file}')
parsed_image_dataset = raw_image_dataset.map(dataset_parser)
c = 0
for image_features in parsed_image_dataset:
image_raw = image_features[0].numpy()
label = image_features[1].numpy()
plt.figure()
plt.imshow(image_raw/255)
plt.title(f'Full image: {image_features[2].numpy().decode()}, Label: {label}')
c += 1
if c == 10:
break
</code>
## Distributed training with Horovod and SageMaker Pipe Mode input
When training a model with a large amount of data, the data needs to be distributed across multiple CPUs/GPUs on either a single instance or multiple instances. Deep learning frameworks provide their own methods to support distributed training. [Horovod](https://eng.uber.com/horovod/) is a popular, framework-agnostic toolkit for distributed deep learning. It utilizes an all-reduce algorithm for fast distributed training (compared with the parameter server approach) and also includes multiple optimization methods to make distributed training faster. Examples of distributed training with Horovod on SageMaker are available via other AWS blogs ([TensorFlow](https://aws.amazon.com/blogs/machine-learning/multi-gpu-and-distributed-training-using-horovod-in-amazon-sagemaker-pipe-mode/), [MXNet](https://aws.amazon.com/blogs/machine-learning/reducing-training-time-with-apache-mxnet-and-horovod-on-amazon-sagemaker/)).
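The Horovod-specific wiring lives in the training script (shown later via `pygmentize src/train.py`). As a generic, hedged illustration of the usual Horovod + Keras pattern, rather than the exact code used in this example, the setup typically looks like this:
<code>
# Generic Horovod + tf.keras initialization pattern (illustration only; see src/train.py
# for the code actually used in this example).
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()  # one Horovod process per GPU

# Pin each process to a single GPU based on its local rank
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], 'GPU')

# Scale the learning rate by the number of workers and wrap the optimizer
optimizer = hvd.DistributedOptimizer(tf.keras.optimizers.Adam(0.001 * hvd.size()))

# Broadcast initial weights from rank 0 so all workers start from the same state
callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]
</code>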
The following cell defines useful variables for the distributed training process. This includes the computation of the appropriate number of shards given the chosen `train_instance_type` and `train_instance_count`. Also note that the value of `gpus_per_host` should reflect the number of GPUs associated with the `train_instance_type`, which in this case is 4.
<code>
train_instance_type='ml.p3.8xlarge'
train_instance_count = 4
gpus_per_host = 4
num_of_shards = gpus_per_host * train_instance_count
distributions = {'mpi': {
'enabled': True,
'processes_per_host': gpus_per_host
}
}
</code>
### Sharding the tiles
SageMaker Pipe Mode is a mechanism for providing S3 data to a training job via Linux pipes. Training programs can read from the fifo pipe and get high-throughput data transfer from S3, without managing the S3 access in the program itself. Pipe Mode is covered in more detail in the SageMaker [documentation](https://sagemaker.readthedocs.io/en/stable/frameworks/tensorflow/using_tf.html#training-with-pipe-mode-using-pipemodedataset).
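Inside the training script, a Pipe mode channel is typically consumed with `PipeModeDataset` from the sagemaker-tensorflow-extensions package. The sketch below is a hedged illustration of that pattern (reusing the `dataset_parser` defined earlier in this notebook), not a copy of `src/train.py`.
<code>
# Illustrative Pipe mode reader (the actual input pipeline lives in src/train.py).
# Assumes the sagemaker-tensorflow-extensions package is available inside the training container.
import tensorflow as tf
from sagemaker_tensorflow import PipeModeDataset

def make_pipe_dataset(channel_name, batch_size=16):
    """Stream TFRecords from a named SageMaker channel instead of reading files from disk."""
    ds = PipeModeDataset(channel=channel_name, record_format='TFRecord')
    ds = ds.map(dataset_parser)          # reuse the TFRecord parser defined earlier
    return ds.batch(batch_size).prefetch(1)

# e.g. one dataset per pipe/GPU: train_ds = make_pipe_dataset('train_0')
</code>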
There are a few considerations that we need to keep in mind when working with SageMaker Pipe mode and Horovod:
* The data that is streamed through each pipe is mutually exclusive of each of the other pipes. The number of pipes dictates the number of data shards that need to be created.
* Horovod wraps the training script for each compute instance. This means that data for each compute instance needs to be allocated to a different shard.
* With the SageMaker Training parameter S3DataDistributionType set to `ShardedByS3Key`, we can share a pipe with more than one instance. The data is streamed in round-robin fashion across instances.
The following cell shards the data within S3 to prepare it as input for distributed training with Pipe mode.
<code>
# Sharding
client = boto3.client('s3')
result = client.list_objects(Bucket=bucket, Prefix='tcga-svs-tfrecords/train/', Delimiter='/')
j = -1
for i in range(num_of_shards):
copy_source = {
'Bucket': bucket,
'Key': result['Contents'][i]['Key']
}
print(result['Contents'][i]['Key'])
if i % gpus_per_host == 0:
j += 1
dest = 'tcga-svs-tfrecords/train_sharded/' + str(j) +'/' + result['Contents'][i]['Key'].split('/')[2]
print(dest)
s3.meta.client.copy(copy_source, bucket, dest)
</code>
Now that the data is sharded, we can assign these shards as `remote_inputs` to our SageMaker training job.
<code>
svs_tf_sharded = f's3://{bucket}/tcga-svs-tfrecords'
shuffle_config = sagemaker.session.ShuffleConfig(234)
train_s3_uri_prefix = svs_tf_sharded
remote_inputs = {}
for idx in range(gpus_per_host):
train_s3_uri = f'{train_s3_uri_prefix}/train_sharded/{idx}/'
train_s3_input = s3_input(train_s3_uri, distribution ='ShardedByS3Key', shuffle_config=shuffle_config)
remote_inputs[f'train_{idx}'] = train_s3_input
remote_inputs['valid_{}'.format(idx)] = '{}/valid'.format(svs_tf_sharded)
remote_inputs['test'] = '{}/test'.format(svs_tf_sharded)
remote_inputs
</code>
### Training script
First, we visualize the training script to be used by SageMaker.
<code>
!pygmentize src/train.py
</code>
Now we are ready to initialize our SageMaker TensorFlow estimator, specifying `input_mode='Pipe'` to engage Pipe mode and providing our `distributions` variable defined above to activate distributed training. Finally, we call the `fit` method with the `remote_inputs` as the first argument.
<code>
local_hyperparameters = {'epochs': 5, 'batch-size' : 16, 'num-train':160000, 'num-val':8192, 'num-test':8192}
estimator_dist = TensorFlow(base_job_name='svs-horovod-cloud-pipe',
entry_point='src/train.py',
role=role,
framework_version='2.1.0',
py_version='py3',
distribution=distributions,
volume_size=1024,
hyperparameters=local_hyperparameters,
output_path=f's3://{bucket}/output/',
instance_count=4,
instance_type=train_instance_type,
input_mode='Pipe')
estimator_dist.fit(remote_inputs, wait=True)
</code>
## Deploy the trained model
After training the model using Amazon SageMaker, we can now deploy the trained model to perform inference on new images. A model can be deployed using Amazon SageMaker to get predictions in the following ways:
* To set up a persistent endpoint to get one prediction at a time, use SageMaker hosting services.
* To get predictions for an entire dataset, use SageMaker batch transform.
In this blog post, we will deploy the trained model as a SageMaker endpoint.
<code>
%matplotlib inline
plt.style.use('bmh')
</code>
The `deploy()` method creates an endpoint that serves prediction requests in real-time.
The model saves keras artifacts; to use TensorFlow serving for deployment, you'll need to save the artifacts in SavedModel format.
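The export to the SavedModel layout happens at the end of training in `src/train.py`. As a hedged illustration, with `keras_model` standing in for the trained Keras model, the conversion could look like this:
<code>
# Illustrative export of a Keras model to the SavedModel layout used by TensorFlow Serving.
# `keras_model` is a placeholder for the trained model; the numbered sub-directory ("1")
# is the model version that TensorFlow Serving expects to find.
import tensorflow as tf

export_dir = 'export/Servo/1'
tf.keras.models.save_model(keras_model, export_dir, save_format='tf')
</code>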
<code>
# Create predictor from S3 instead
model_data = f's3://{bucket}/output/{estimator_dist.latest_training_job.name}/output/model.tar.gz'
model = TensorFlowModel(model_data=model_data,
role=role, framework_version='2.1.0')
predictor = model.deploy(initial_instance_count=1, instance_type='ml.c5.xlarge')
</code>
## Make some predictions
Remember that the model is trained on individual tile images. During inference, the SageMaker endpoint provides classification scores for each tile. These scores are averaged out across all tiles to generate the slide-level score and prediction. A majority-vote scheme would also be appropriate.
The following cells read preprocessed image data from a TFRecords file and use the SageMaker endpoint to compute predictions for each of the tiles. We first define a helper function to extract the individual tile images.
<code>
HEIGHT=512
WIDTH=512
DEPTH=3
NUM_CLASSES=3
def _dataset_parser_with_slide(value):
image_feature_description = {
'label': tf.io.FixedLenFeature([], tf.int64),
'image_raw': tf.io.FixedLenFeature([], tf.string),
'slide_string': tf.io.FixedLenFeature([], tf.string)
}
example = tf.io.parse_single_example(value, image_feature_description)
image = tf.io.decode_raw(example['image_raw'], tf.float32)
image = tf.cast(image, tf.float32)
image.set_shape([DEPTH * HEIGHT * WIDTH])
image = tf.cast(tf.reshape(image, [HEIGHT, WIDTH, DEPTH]), tf.float32)
label = tf.cast(example['label'], tf.int32)
slide = example['slide_string']
return image, label, slide
</code>
### Tile-level prediction
In the following cell, we create and parse a `TFRecordDataset` from a TFRecords file stored locally at `./images` and use the `predict()` method to perform inference on each of the extracted tiles.
<code>
local_file = [each for each in os.listdir('./images') if each.endswith('.tfrecords')][0]
raw_image_dataset = tf.data.TFRecordDataset(f'./images/{local_file}') ## read a TFrecord
parsed_image_dataset = raw_image_dataset.map(_dataset_parser_with_slide) ## Parse TFrecord to JPEGs
pred_scores_list = []
for i, element in enumerate(parsed_image_dataset):
if i > 10:
break
image = element[0].numpy()
label = element[1].numpy()
slide = element[2].numpy().decode()
if i == 0:
print(f'Making tile-level predictions for slide: {slide}...')
print(f'Querying endpoint for a prediction for tile {i+1}...')
pred_scores = predictor.predict(np.expand_dims(image, axis=0))['predictions'][0]
print(pred_scores)
pred_class = np.argmax(pred_scores)
print(pred_class)
if i > 0 and i % 10 == 0:
plt.figure()
plt.title(f'Tile {i} prediction: {pred_class}')
plt.imshow(image / 255)
pred_scores_list.append(pred_scores)
print('Done.')
</code>
### Slide-level prediction (average score over all tiles)
Once the endpoint has classified each of the tiles, we can average them together for a final classification of the entire slide image.
<code>
mean_pred_scores = np.mean(np.vstack(pred_scores_list), axis=0)
mean_pred_class = np.argmax(mean_pred_scores)
print(f"Slide-level prediction for {slide}:", mean_pred_class)
</code>
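As noted above, a majority vote over the per-tile predicted classes is an alternative to averaging the scores. A minimal sketch using the tile predictions collected above:
<code>
# Alternative slide-level aggregation: majority vote over per-tile predicted classes
from collections import Counter

tile_classes = [int(np.argmax(scores)) for scores in pred_scores_list]
majority_class, votes = Counter(tile_classes).most_common(1)[0]
print(f"Slide-level majority-vote prediction for {slide}: {majority_class} ({votes}/{len(tile_classes)} tiles)")
</code>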
|
{
"filename": "distributed-end-to-end-flow.ipynb",
"repository": "aws-samples/sagemaker-distributed-training-digital-pathology-images",
"query": "transformed_from_existing",
"size": 28991,
"sha": ""
}
|
# project_231116_1.ipynb
Repository: sriku2412/dataraction
<code>
import pandas as pd
import re
import nltk
from nltk.stem.porter import PorterStemmer
from nltk.corpus import stopwords
from bs4 import BeautifulSoup
from datasets import load_dataset
</code>
<code>
data = pd.read_csv(r"C:\Users\srika\OneDrive\Documents\York\Sem-2 york\MBAN 6090 - Analytics Consulting Project\webscrape\communitech\jobs_info.csv")
</code>
<code>
data.head()
</code>
<code>
data.iloc[123]['description']
</code>
<code>
# keywords = {
# "Core Responsibilities": ["responsibilities", "role", "duties"],
# "Required Skills": ["skills", "qualifications", "requirements"],
# "Preferred Qualifications": ["preferred", "plus", "advantage"],
# "Compensation and Benefits": ["compensation", "benefits", "salary"]
# }
# def process_description(description):
# if not isinstance(description, str):
# return {key: "N/A" for key in keywords}
# soup = BeautifulSoup(description, 'html.parser')
# text = soup.get_text(separator=' ').lower()
# job_info = {key: "N/A" for key in keywords}
# for key, key_list in keywords.items():
# for phrase in key_list:
# if phrase in text:
# start_idx = text.find(phrase)
# end_idx = text.find('.', start_idx) + 1
# job_info[key] = text[start_idx:end_idx].strip()
# break
# job_info["Educational Requirements"] = "N/A"
# job_info["Experience Level"] = "N/A"
# return job_info
# description = data.iloc[123]['description']
# processed_description = process_description(description)
</code>
<code>
# processed_descriptions = [] # List to hold processed descriptions
# for job in data['description']:
# processed_description = process_description(job)
# processed_descriptions.append(processed_description)
# data_2 = pd.DataFrame(processed_descriptions)
# data_2.head()
</code>
<code>
dataset = load_dataset("jacob-hugging-face/job-descriptions")
print(dataset)
</code>
<code>
data_3 = pd.DataFrame(dataset)
</code>
<code>
data_3.head()
</code>
<code>
data.head()
</code>
<code>
ps = PorterStemmer()
</code>
<code>
def cleaning(txt):
    # strip non-alphanumeric characters, then tokenize
    # (nltk.word_tokenize needs the 'punkt' tokenizer data; run nltk.download('punkt') once if it is missing)
    cleaned_txt = re.sub(r'[^a-zA-Z0-9\s]', ' ', txt)
    token = nltk.word_tokenize(cleaned_txt)
    return token
</code>
<code>
data.iloc[12]['description']
</code>
<code>
cleaning(data.iloc[123]['description'])
</code>
<code>
data = pd.read_csv(r"C:\Users\srika\OneDrive\Documents\York\Sem-2 york\MBAN 6090 - Analytics Consulting Project\webscrape\communitech\jobs_info.csv")
unique_tags = {}
for description in data['description']:
if not isinstance(description, str):
continue
soup = BeautifulSoup(description, 'html.parser')
for tag in soup.find_all(True):
tag_name = tag.name
if tag_name in unique_tags:
unique_tags[tag_name] += 1
else:
unique_tags[tag_name] = 1
unique_tags
</code>
<code>
unique_tags_df = pd.DataFrame(list(unique_tags.items()), columns=['Tag', 'Count'])
unique_tags_df = unique_tags_df.sort_values(by='Count', ascending=False)
unique_tags_df.reset_index(drop=True, inplace=True)
unique_tags_df.head(10)
</code>
<code>
head = unique_tags_df[unique_tags_df['Tag'].isin(['h1', 'h2', 'h3', 'h4', 'h5', 'h6'])]
</code>
<code>
unique_tags_df = unique_tags_df.head(20)
unique_tags_df = unique_tags_df[~unique_tags_df['Tag'].isin(['span','strong', 'div', 'img', 'input', 'font', 'u','em','i','a'])]
</code>
<code>
unique_tags_df = pd.merge(unique_tags_df, head, on='Tag', how='outer')
unique_tags_df.drop_duplicates(inplace=True)
unique_tags_df['count'] = unique_tags_df['Count_x'].fillna(unique_tags_df['Count_y'])
unique_tags_df.drop(['Count_x', 'Count_y'], axis=1, inplace=True)
unique_tags_df
</code>
<code>
li_data = []
for description in data['description']:
if isinstance(description, str):
soup = BeautifulSoup(description, 'html.parser')
def find_li_tags(tag, level):
if tag.name == 'li':
li_text = tag.get_text()
li_data.append({'text': li_text, 'level': level})
for child in tag.children:
if child.name:
find_li_tags(child, level + 1)
find_li_tags(soup, level=1)
tag_descriptions = pd.DataFrame(li_data)
tag_descriptions.head(10)
</code>
<code>
tag_descriptions['level'].value_counts()
</code>
<code>
tag_descriptions.loc[tag_descriptions['level'] == 20]['text']
</code>
<code>
first_line = tag_descriptions.loc[tag_descriptions['level'] == 20]['text'].iloc[0]
matching_description = data[data['description'].fillna('').str.contains(first_line)]['description'].iloc[0]
matching_description
</code>
<code>
<div data-testid="careerPage"><div class="so-panel widget widget_sow-editor panel-first-child" data-index="4" id="panel-18-1-0-0">\n<div class="marbot-20 panel-widget-style panel-widget-style-for-18-1-0-0">\n<div class="so-widget-sow-editor so-widget-sow-editor-base">\n<div class="siteorigin-widget-tinymce textwidget">\n<div class="so-panel widget widget_sow-editor panel-first-child" data-index="4" id="panel-18-1-0-0">\n<div class="marbot-20 panel-widget-style panel-widget-style-for-18-1-0-0">\n<div class="so-widget-sow-editor so-widget-sow-editor-base">\n<div class="siteorigin-widget-tinymce textwidget">\n<div class="so-panel widget widget_sow-editor panel-first-child" data-index="6" id="panel-18-2-0-0">\n<div class="marbot-20 panel-widget-style panel-widget-style-for-18-2-0-0">\n<div class="so-widget-sow-editor so-widget-sow-editor-base">\n<div class="siteorigin-widget-tinymce textwidget">\n<h4>Test Automation Software Developer</h4>\n<h6><span style="font-weight: 400;">Job ID: 2023070104</span></h6>\n<p><b>Job Description</b></p>\n<p><span style="font-weight: 400;">You will design and develop an automated test framework designed for application layer and embedded software for bleeding edge networking technologies including ultra-fast network processors (up to 12.8Tbps). The products bring together Open Systems, Network Virtualization and fully Programmable Network Logic to meet the needs of Data Centers, Network Service Providers and researchers in Software Defined Networking technology.</span></p>\n<p>\xa0</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li style="list-style-type: none;">\n<ul>\n<li style="list-style-type: none;">\n<ul>\n<li aria-level="1" style="font-weight: 400;"><span style="font-weight: 400;">Writing automated functional and performance test cases from requirements, executing them in the lab, troubleshooting issues to identify their root cause and reporting results</span></li>\n<li aria-level="1" style="font-weight: 400;"><span style="font-weight: 400;">Troubleshooting system and network problems and diagnosing and solving hardware or software faults</span></li>\n<li aria-level="1" style="font-weight: 400;"><span style="font-weight: 400;">Reproducing issues in the lab and testing fixes and workarounds</span></li>\n<li aria-level="1" style="font-weight: 400;"><span style="font-weight: 400;">Writing procedural documentation and event reports</span></li>\n</ul>\n</li>\n</ul>\n</li>\n</ul>\n<p>\xa0</p>\n<p><strong>Qualifications and Skills Required</strong></p>\n<ul>\n<li style="list-style-type: none;">\n<ul>\n<li style="list-style-type: none;">\n<ul>\n<li aria-level="1" style="font-weight: 400;"><span style="font-weight: 400;">Academic and ideally professional experience writing software and/or scripts</span></li>\n<li aria-level="1" style="font-weight: 400;"><span style="font-weight: 400;">Experience with Python, BASH, and REST</span></li>\n<li aria-level="1" style="font-weight: 400;"><span style="font-weight: 400;">Knowledge of some of the following areas and drive to learn more: L2 to L4 of TCP/IP, traffic generators, hardware interfaces, chipset based solutions.</span></li>\n<li aria-level="1" style="font-weight: 400;"><span style="font-weight: 400;">GIT, bug tracking software and documentation</span></li>\n<li aria-level="1" style="font-weight: 400;"><span style="font-weight: 400;">Excellent problem solving and troubleshooting skills</span></li>\n<li aria-level="1" style="font-weight: 400;"><span style="font-weight: 400;">Ability to write clear procedures 
and reports</span></li>\n<li aria-level="1" style="font-weight: 400;"><span style="font-weight: 400;">Ability to perform under pressure in a deadline driven environment</span></li>\n<li aria-level="1" style="font-weight: 400;"><span style="font-weight: 400;">Excellent team player with a high level of self-motivation and initiative</span></li>\n<li aria-level="1" style="font-weight: 400;"><span style="font-weight: 400;">Excellent communication skills, both verbal and written (multilingualism is an asset)</span></li>\n<li aria-level="1" style="font-weight: 400;"><span style="font-weight: 400;">B Sc. in Computer Science, Software Engineering, B.Elec.Eng. completed</span></li>\n</ul>\n</li>\n</ul>\n</li>\n</ul>\n<p>\xa0</p>\n<p><b>Good to have</b></p>\n<ul>\n<li aria-level="1" style="font-weight: 400;"><span style="font-weight: 400;">Networking certifications</span></li>\n</ul>\n</div>\n</div>\n</div>\n</div>\n<div class="so-panel widget widget_sow-editor panel-last-child" data-index="7" id="panel-18-2-0-1">\n<div class="so-widget-sow-editor so-widget-sow-editor-base">\n<div class="siteorigin-widget-tinymce textwidget">\n<p>\xa0</p>\n<p><strong>Additional Information</strong></p>\n<p>Type: Full-time</p>\n<p>Location: Montreal, QC, Canada</p>\n<hr/>\n<p>For more\xa0information, or to submit your resumé, please\xa0e-mail\xa0NoviFlow at\xa0<a href="https://mail.google.com/mail/?view=cm&fs=1&tf=1&to=careers@noviflow.com">careers@noviflow.com</a></p>\n</div>\n</div>\n</div>\n</div>\n</div>\n</div>\n</div>\n</div>\n</div>\n</div>\n</div></div>'
</code>
(Cell output: the same raw HTML job description shown above, followed by a truncated set of the unique HTML tag names found across the descriptions, including 'dt', 'em', 'fieldset', 'font', 'footer', 'form', 'g', 'h1' to 'h6', ..., 'thead', 'time', 'tr', 'u' and 'ul'.)
{'Your role as In-person Network Coordinator': " Reporting to the EAP Manager, the In-person Network Coordinator plays a vital role in enhancing our provider network's reach and effectiveness, as well as ensuring comprehensive support to our external mental health providers. What you’ll be doing:", 'What you’ll be doing:': " Execution of in-person provider recruitment, such as candidate sourcing, inbox management and contract drafting, in close collaboration with the Talent Acquisition SpecialistSupport manager in tracking and planning network coverage across CanadaProvide seamless onboarding and off-boarding experience, keeping internal management tools up-to-date and offering personalized coaching to providers to drive completion of HR and IT requirementsProvide and maintain a great ongoing provider experience, with responsive support and answer inquiries from providers, addressing their questions and concerns promptly Execution of in-person provider recruitment, such as candidate sourcing, inbox management and contract drafting, in close collaboration with the Talent Acquisition Specialist Support manager in tracking and planning network coverage across Canada Provide seamless onboarding and off-boarding experience, keeping internal management tools up-to-date and offering personalized coaching to providers to drive completion of HR and IT requirements Provide and maintain a great ongoing provider experience, with responsive support and answer inquiries from providers, addressing their questions and concerns promptly We'd love to hear from you if you have:", "We'd love to hear from you if you have:": ' Experience in network coordination, provider relations, administrative support or a related fieldStrong communication skills in French and English, both written and verbal, with the ability to effectively interact with providers and internal teamsDetail-oriented and able to maintain accurate records and documentationProficiency in Google Sheets or ExcelYou have a problem-solving mindset and are proactive in bringing your solutions to lifeAbility to work independently and as part of a team, collaborating effectively to achieve common goalsExperience in healthcare or a similar industry is a plus Experience in network coordination, provider relations, administrative support or a related field Strong communication skills in French and English, both written and verbal, with the ability to effectively interact with providers and internal teams Detail-oriented and able to maintain accurate records and documentation Proficiency in Google Sheets or Excel You have a problem-solving mindset and are proactive in bringing your solutions to life Ability to work independently and as part of a team, collaborating effectively to achieve common goals Experience in healthcare or a similar industry is a plus At Dialogue, your well-being is our priority', 'At Dialogue, your well-being is our priority': " Taking care of others also means taking care of our team. We’ve got you covered! Taking care of others also means taking care of our team. We’ve got you covered! 
A fully funded benefits plan, includinga wellness reimbursement programUnlimited access to a variety of Dialogue's programs for you and your immediate family4 weeks of vacation, 9 wellness days and 1 paid volunteer dayA flexible schedule and a hybrid work approachAn allocated budget for continuous trainingShort and long-term incentive plans, including restricted stock units (RSUs)An optional parental benefits program A fully funded benefits plan, includinga wellness reimbursement program A fully funded benefits plan, including a wellness reimbursement program Unlimited access to a variety of Dialogue's programs for you and your immediate family Unlimited access to a variety of Dialogue's programs for you and your immediate family 4 weeks of vacation, 9 wellness days and 1 paid volunteer day 4 weeks of vacation, 9 wellness days and 1 paid volunteer day A flexible schedule and a hybrid work approach A flexible schedule and a hybrid work approach An allocated budget for continuous training An allocated budget for continuous training Short and long-term incentive plans, including restricted stock units (RSUs) Short and long-term incentive plans, including restricted stock units (RSUs) An optional parental benefits program An optional parental benefits program About Dialogue", 'About Dialogue': ' Dialogue is the #1 virtual care provider in Canada. By developingour Integrated Health Platform🅫, we provide exceptional online health and wellness programs (primary care, mental health, iCBT, EAP, and wellness) to organizations that want to improve the wellness of their employees and families. Dialogue is the #1 virtual care provider in Canada. By developing our Integrated Health Platform🅫, we provide exceptional online health and wellness programs (primary care, mental health, iCBT, EAP, and wellness) to organizations that want to improve the wellness of their employees and families. When it comes to our work, we set the bar high. Together, we’re transforming health and helping millions improve their well-being. We’re firm believers that great people don’t settle on: When it comes to our work, we set the bar high. Together, we’re transforming health and helping millions improve their well-being. We’re firm believers that great people don’t settle on: Impact Impact Community Community Growth Growth Excellence Excellence Feel like you can make a difference? Good news, we saved you a seat! Feel like you can make a difference? Good news, we saved you a seat!'}
(Cell output: preview of combined_parsed_df, a DataFrame of 98826 rows × 12 columns with one column per extracted tag: li, p, ul, b, br, section, h3, h2, h4, h1, h5, h6.)
li 90851
p 40688
ul 13744
b 11216
br 0
section 5057
h3 4319
h2 2102
h4 673
h1 340
h5 141
h6 22
dtype: int64
array(['#LI-RB1', 'Ready To Join Us?', 'Engineering Technician',
'Industrial Hygienist',
'Demo Artist & Trainer - Generalist (Job Req #2024-010)',
'Senior FPGA Designer/Architect (Job Req #2024-008)', 'Summary:',
'A day in the life might look like:',
'You may be a fit for this role if:', 'Maintenance Technician',
'Business Development Manager - Graphics, Americas (Job Req #2024-002)',
'Customer Success Manager',
'Regional Sales Manager, Southeast Asia (Job Req #2024-003)',
'Manufacturing Training Specialist (Job Req #2024-001)',
'Quality Control Chemist',
'Senior Product Verification Specialist (4-Month Contract) (Job Req #2024-006)',
'POSITION OVERVIEW', 'QUALIFICATIONS', 'COMPANY OVERVIEW',
'Qualifications', 'Company Overview', 'Position Overview',
'SOC Analyst I (Temporary Full-time: 12-month Contract)',
'Quality Assurance Associate, Process Development',
'Production Technician', 'PD Student Scientist',
'Global Proposals Manager (Temporary Full-time: 14 month Contract)',
'Global Proposals Manager (Temporary Full-time: 14-month Contract)',
'Stagiaire - Génie électrique - Liaison - H24', 'Commis PDI',
'Business Analyst (Oracle Fusion Cloud)', 'What your team does:',
'Who you are:', 'What you’ll work on:', 'What you may have:',
'Serious bonus points if you have:', 'What you should have:',
'Serious bonus points if you:', '#LI-LI1',
'Manager, Manufacturing Quality Systems',
...
'Business Development Representative', 'Senior Account Executive',
'Registered Dietitian', 'Registered Social Worker/Psychotherapist',
'Registered Clinical Psychologist', 'Customer Success Specialist',
'Full Time Customer Support Representative - Ottawa West',
'Client Operations Assistant'], dtype=object)
# Test Automation Software Developer
**Job ID: 2023070104**
**Job Description**
You will design and develop an automated test framework designed for application layer and embedded software for bleeding-edge networking technologies, including ultra-fast network processors (up to 12.8Tbps). The products bring together Open Systems, Network Virtualization, and fully Programmable Network Logic to meet the needs of Data Centers, Network Service Providers, and researchers in Software Defined Networking technology.
**Responsibilities**
- Writing automated functional and performance test cases from requirements, executing them in the lab, troubleshooting issues to identify their root cause, and reporting results.
- Troubleshooting system and network problems and diagnosing and solving hardware or software faults.
- Reproducing issues in the lab and testing fixes and workarounds.
- Writing procedural documentation and event reports.
**Qualifications and Skills Required**
- Academic and ideally professional experience writing software and/or scripts.
- Experience with Python, BASH, and REST.
- Knowledge of some of the following areas and a drive to learn more: L2 to L4 of TCP/IP, traffic generators, hardware interfaces, chipset-based solutions.
- GIT, bug tracking software, and documentation.
- Excellent problem-solving and troubleshooting skills.
- Ability to write clear procedures and reports.
- Ability to perform under pressure in a deadline-driven environment.
- Excellent team player with a high level of self-motivation and initiative.
- Excellent communication skills, both verbal and written (multilingualism is an asset).
- B Sc. in Computer Science, Software Engineering, B.Elec.Eng. completed.
**Good to have**
- Networking certifications.
**Additional Information**
- Type: Full-time
- Location: Montreal, QC, Canada
---
For more information or to submit your resumé, please email NoviFlow at [careers@noviflow.com](https://mail.google.com/mail/?view=cm&fs=1&tf=1&to=careers@noviflow.com).
<code>
unique_tags = set()
for description in data['description']:
if not isinstance(description, str):
continue
soup = BeautifulSoup(description, 'html.parser')
for tag in soup.find_all(True):
unique_tags.add(tag.name)
unique_tags
</code>
<code>
def extract_heading_content_pairs(html_content):
soup = BeautifulSoup(html_content, 'html.parser')
heading_tags = {'h1', 'h2', 'h3', 'h4', 'h5', 'h6', 'strong'}
heading_content_pairs = {}
current_heading = None
for element in soup.find_all(True):
if element.name in heading_tags:
current_heading = element.get_text(strip=True)
heading_content_pairs[current_heading] = ''
elif current_heading:
heading_content_pairs[current_heading] += ' ' + element.get_text(strip=True)
return heading_content_pairs
description = data.iloc[123]['description']
heading_content_pairs = extract_heading_content_pairs(description)
print(heading_content_pairs)
</code>
<code>
def parse_html_based_on_tags(html_content, tags_to_extract):
if isinstance(html_content, str):
soup = BeautifulSoup(html_content, 'html.parser')
parsed_data = {}
for tag_name in tags_to_extract:
tag_data = []
for tag in soup.find_all(tag_name):
tag_text = tag.get_text(separator=' ').strip()
if tag_text:
tag_data.append(tag_text)
parsed_data[tag_name] = tag_data
max_length = max(len(data) for data in parsed_data.values())
for tag_name, tag_data in parsed_data.items():
if len(tag_data) < max_length:
parsed_data[tag_name] += [None] * (max_length - len(tag_data))
df = pd.DataFrame(parsed_data)
return df
else:
return pd.DataFrame()
tags_to_extract = unique_tags_df['Tag'].tolist()
combined_parsed_df = pd.DataFrame()
for description in data['description']:
parsed_df = parse_html_based_on_tags(description, tags_to_extract)
combined_parsed_df = pd.concat([combined_parsed_df, parsed_df], ignore_index=True)
combined_parsed_df
</code>
<code>
combined_parsed_df.count()
</code>
<code>
unique_h1_values = combined_parsed_df[combined_parsed_df['h1'].notnull()]['h1'].unique()
unique_h1_values
</code>
<code>
# BERT
# LLM - PEFT (parameter-efficient fine-tuning) is a category.
# Two methods: P-tuning & LoRA (low-rank adaptation).
# 4-bit quantization to fit in GPU memory,
# e.g. 8 GB NVIDIA (RTX 3050) or an M-series Mac.
</code>
<code>
# LLaMA - 7B.
# Use a 32B model or GPT-4 as a teacher for the 7B model.
# Manually label only ~100 items from the dataset, then use GPT-4 to do the rest of the labelling.
</code>
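These notes point towards parameter-efficient fine-tuning of an LLM on the labelled job descriptions. As a hedged illustration only (none of the code below appears in the original notebook; the checkpoint name, target modules and hyperparameters are placeholders), a LoRA setup with 4-bit loading via Hugging Face `transformers` and `peft` might look roughly like this:
<code>
# Hypothetical sketch: LoRA fine-tuning of a small causal LM with 4-bit weights.
# The model name, target modules and hyperparameters are placeholders, not choices
# made anywhere in this notebook.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder 7B checkpoint

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                    # 4-bit quantization so the weights fit in ~8 GB of GPU memory
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank adapters
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # typical attention projections in LLaMA-style models
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # only the LoRA adapter weights are trainable
</code>
Only the small adapter matrices are updated during training, which is what makes a modest GPU plausible for this kind of fine-tuning; a larger model (or GPT-4) would then only be needed to produce the labels, as the notes suggest.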
|
{
"filename": "project_231116_1.ipynb",
"repository": "sriku2412/dataraction",
"query": "transformed_from_existing",
"size": 141859,
"sha": ""
}
|
# pfizer_correlations_1.ipynb
Repository: rheashroff/Lobbying-and-the-Market
<code>
import os, sys, time
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import pandas as pd
from tqdm import tqdm
sns.set_style("whitegrid")
</code>
<code>
data_dir = 'LDA_data/Filings_2021/'
ldf = pd.read_csv(data_dir + 'filings_2021_Q1.csv')
ldf['dt_posted_proc'] = ldf['dt_posted'].apply(lambda x: x.split('T')[0])
ldf['dt_posted_proc'] = pd.to_datetime(ldf['dt_posted_proc'])
</code>
<code>
ldf['dt_posted_proc'].max()
</code>
<code>
# import pfizer stock data
import yfinance as yf
pfe_yf_tick = yf.Ticker('PFE')
ydf = pfe_yf_tick.history(start = ldf['dt_posted_proc'].min() - pd.Timedelta(weeks = 4)
, end = ldf['dt_posted_proc'].max() + pd.Timedelta(weeks = 4))
</code>
<code>
ydf
</code>
<code>
ydf_dates_unprocessed = ydf.index.tolist()
ydf_dates = []
for t in ydf_dates_unprocessed:
ydf_dates.append(
str(t.year) + '-' + str.format("{:02d}", t.month) + '-' + str.format("{:02d}", t.day)
)
ydf['Date'] = ydf_dates
ydf['Date'] = pd.to_datetime(ydf['Date'])
</code>
<code>
plt.plot(ydf['Date'], ydf['Close'])
</code>
<code>
data_dir = 'LDA_data/Filings_2020/'
df = pd.read_csv(data_dir + 'filings_2020_Q4.csv', parse_dates = ['dt_posted'])
</code>
<code>
# some column elements are formatted as strings containing a list object
# process it so that the elements are lists
# example
string = df['lobbying_activities'][0]
print(string)
print(type(string))
# use ast.literal_eval to convert string of list to list
import ast
list_from_string = ast.literal_eval(string) # convert string of list to list
print(list_from_string)
print(type(list_from_string))
</code>
<code>
# format columns whose elements are strings, but are used as list or dict objs
keys = ['lobbying_activities', 'client', 'registrant']
for k in tqdm(keys):
df[k] = df[k].apply(ast.literal_eval)
# CODE BELOW IS NOT GOOD; LOBBYING ACTIVITIES CAN REPORT ON MORE THAN ONE CATEGORY
# no reason for lobbying_activities to be a list with one element, extract dict inside list
if k == 'lobbying_activities':
df[k] = df['lobbying_activities'].apply(lambda x: x[0])
</code>
<code>
# format expenses and income NANs to be zero
keys = ['expenses', 'income']
df[keys] = df[keys].fillna(0.)
</code>
<code>
# let's start filtering the data
dfc = df.copy()
# given time period (covid-19 vaccine delivery), let's see lobbying related to pharmacy
is_pha = (df['lobbying_activities'].apply(lambda x: x['general_issue_code']) == 'PHA')
dfc = dfc[is_pha]
</code>
<code>
df.income.values
</code>
<code>
Nrow, Ncol = dfc.shape
expense_arr = np.array( [ dfc.expenses.values[n] for n in range(Nrow) if dfc.expenses.values[n] != 0. ] )
income_arr = np.array( [ dfc.income.values[n] for n in range(Nrow) if dfc.income.values[n] != 0. ] )
</code>
<code>
# the following plot shows the total reported expenses / income from lobbying activity from this group
sum_exp = expense_arr.sum()
sum_inc = income_arr.sum()
plt.hist(expense_arr, alpha = 0.5, label = 'Expense')
plt.hist(income_arr, alpha = 0.5, label = 'Income')
plt.legend()
plt.title('Cumulative Expenses / Income: = ' + str(sum_exp) + ' / ' + str(sum_inc) )
</code>
<code>
is_exp = (dfc['expenses'] != 0.)
dfc_e = dfc[is_exp] ; dfc_i = dfc[~is_exp]
</code>
<code>
dfc_e['name'] = dfc_e['client'].apply(lambda x: x['name'])
dfc_i['name'] = dfc_i['client'].apply(lambda x: x['name'])
</code>
<code>
print(dfc_i['name'].values)
</code>
<code>
# strange that we don't see any of the big names under this category
# let's add some more categories since Pfizer Inc. performs lobbying under
# other categories (more conditioning would be useful if we fix to a certain sector)
# let's start filtering the data
dfc = df.copy()
# given time period (covid-19), let's see lobbying related to pharmacy
is_code = (df['lobbying_activities'].apply(lambda x: (x['general_issue_code']) in ['PHA', 'HCR'] ))
dfc = dfc[is_code]
</code>
<code>
Nrow, Ncol = dfc.shape
expense_arr = np.array( [ dfc.expenses.values[n] for n in range(Nrow) if dfc.expenses.values[n] != 0. ] )
income_arr = np.array( [ dfc.income.values[n] for n in range(Nrow) if dfc.income.values[n] != 0. ] )
</code>
<code>
# the following plot shows the total reported expenses / income from lobbying activity from this group
sum_exp = expense_arr.sum()
sum_inc = income_arr.sum()
plt.hist(expense_arr, alpha = 0.5, label = 'Expense', log = True)
plt.hist(income_arr, alpha = 0.5, label = 'Income', log = True)
plt.legend()
plt.title('Cumulative Expenses / Income: = ' + str(sum_exp) + ' / ' + str(sum_inc) )
</code>
<code>
dfc['name'] = dfc['client'].apply(lambda x: x['name'])
dfc['name']
</code>
<code>
# given time period (covid-19), let's see lobbying related to pharmacy
is_pfizer = ( dfc['name'] == 'PFIZER, INC.')
dfc = dfc[is_pfizer]
</code>
<code>
dfc['expenses']
</code>
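The notebook stops after isolating Pfizer's own filings. As a hedged sketch only (none of the code below is in the original notebook, and the weekly resampling and the use of `dt_posted` as the alignment key are assumptions made for illustration), one way to put the filing amounts next to the PFE price series for a quick correlation check could be:
<code>
# Hypothetical sketch, not part of the original analysis: line up Pfizer's reported
# lobbying expenses with PFE closing prices. Note that ydf was downloaded around the
# 2021 Q1 filing window, so the overlap with these 2020 Q4 filings may be small.
exp_dates = pd.to_datetime(dfc['dt_posted'], utc=True).dt.tz_localize(None).dt.normalize()
weekly_expenses = dfc['expenses'].groupby(exp_dates).sum().resample('W').sum()
weekly_close = ydf.set_index('Date')['Close'].resample('W').last()

aligned = pd.concat({'expenses': weekly_expenses, 'close': weekly_close}, axis=1).dropna()
print(aligned.corr())
</code>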
|
{
"filename": "pfizer_correlations_1.ipynb",
"repository": "rheashroff/Lobbying-and-the-Market",
"query": "transformed_from_existing",
"size": 104746,
"sha": ""
}
|
# paresSL_blitzGSEA.ipynb
Repository: MartinSenPom/HNSCC
# Enrichment analysis with blitzGSEA
```
Author: Martín Sende Pombo (email: martinsendepombo@outlook.com)
ChatGPT 3.5 was used as a programming assistant to put together this code, which is based on the examples provided by the Ma'ayan Laboratory.
Created: 16-01-2024
Last modified: 13-08-2024
Purpose: This Jupyter notebook runs a gene set enrichment analysis (GSEA) using the blitzGSEA library.
```
Copyright (C) 2024 Martín Sende Pombo
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License,
or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty
of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
## Installation
<code>
!pip3 install blitzgsea
</code>
## Running the enrichment analysis with blitzGSEA
### Example in Python
<code>
import blitzgsea as blitz
import pandas as pd
# read signature as pandas dataframe
signature = pd.read_csv("https://github.com/MaayanLab/blitzgsea/raw/main/testing/ageing_muscle_gtex.tsv")
# list available gene set libraries in Enrichr
blitz.enrichr.print_libraries()
# use enrichr submodule to retrieve gene set library
library = blitz.enrichr.get_library("KEGG_2021_Human")
# run enrichment analysis
if __name__ == "__main__": # make sure process is main, when run in a script it can cause errors otherwise
result = blitz.gsea(signature, library)
</code>
### Selectable gene set libraries:
<code>
import blitzgsea as blitz
# list the gene set libraries available in Enrichr
blitz.enrichr.print_libraries()
</code>
Selected libraries:
* Cancer_Cell_Line_Encyclopedia
* MSigDB_Computational
### To analyze all gene signatures
<code>
import os
import blitzgsea as blitz
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
def generar_graficos_desde_excel(result_file_path, signature, library, result):
    """
    Generates and saves plots from an Excel file.
    Args:
        result_file_path (str): Path of the Excel file.
        signature: Your signature definition.
        library: Your library definition.
        result: Your result definition.
    """
    # Read the .xlsx file
    df = pd.read_excel(result_file_path)
    # Filter the terms that meet the specified criteria
    terms = []
    for index, row in df.iterrows():
        if pd.isna(row[0]) or row[5] > 0.05:
            break
        terms.append(row[0])
    # Folder in which to save the results
    output_folder = file_output_dir
    # Generate and save the plots for each term
    for term in terms:
        fig = blitz.plot.running_sum(signature, term, library, result=result, compact=False)
        fig.savefig(os.path.join(output_folder, f"running_sum_{term.replace(' ', '_')}.png"), bbox_inches='tight')
        plt.close(fig)
        fig_compact = blitz.plot.running_sum(signature, term, library, result=result, compact=True)
        fig_compact.savefig(os.path.join(output_folder, f"running_sum_compact_{term.replace(' ', '_')}.png"), bbox_inches='tight')
        plt.close(fig_compact)
    # Close all open figures (optional, for safety)
    plt.close('all')
# Input and output directories
input_dir = "Firmas_genes/CSV"
output_dir = "Resultados"
output_dir2 = "Hojas_resultados"
# Get the list of .csv files in the input directory
csv_files = [file for file in os.listdir(input_dir) if file.endswith(".csv")]
# Ask the user for the name of the gene set library
library_name = input("Enter the name of the library to use: ")
# Iterate over each .csv file in the input directory
for file in csv_files:
    # Get the file name without the extension
    file_name = os.path.splitext(file)[0]
    # Read the .csv file as a pandas DataFrame
    signature = pd.read_csv(os.path.join(input_dir, file))
    # Get the library specified by the user
    library = blitz.enrichr.get_library(library_name)
    # Run the enrichment analysis
    try:
        if __name__ == "__main__": # make sure process is main, when run in a script it can cause errors otherwise
            result = blitz.gsea(signature, library)
    except (ZeroDivisionError, ValueError) as e:
        print(f"Error in file '{file}': {e}. Skipping this file.")
        continue
    # Create a directory for the results of the current file
    file_output_dir = os.path.join(output_dir, file_name)
    os.makedirs(file_output_dir, exist_ok=True)
    # Save the results to an .xlsx file in the corresponding output directory
    result_file_path = os.path.join(file_output_dir, file_name + "_resultados_GSEA.xlsx")
    result.to_excel(result_file_path, index=True)
    result_file_path2 = os.path.join(output_dir2, file_name + "_resultados_GSEA.xlsx")
    result.to_excel(result_file_path2, index=True)
    # Generate and save the top-table plot as a .png
    try:
        fig_table = blitz.plot.top_table(signature, library, result, n=15)
        fig_table.savefig(os.path.join(file_output_dir, "top_table.png"), bbox_inches='tight')
        plt.close(fig_table)
    except IndexError as e:
        print(f"Error while generating the plot for file '{file}': {e}. The plot will not be saved.")
    # Call the function
    generar_graficos_desde_excel(result_file_path, signature, library, result)
    # Close all open figures (optional, for safety)
    plt.close('all')
print("Process completed.")
</code>
### To analyze a single gene expression signature
<code>
import blitzgsea as blitz
import pandas as pd
# read signature as pandas dataframe
signature = pd.read_csv("Firmas_genes/CSV/APC_2.csv")
# use enrichr submodule to retrieve gene set library
library = blitz.enrichr.get_library("MSigDB_Computational")
#library = blitz.enrichr.get_library("Cancer_Cell_Line_Encyclopedia")
# run enrichment analysis
if __name__ == "__main__": # make sure process is main, when run in a script it can cause errors otherwise
result = blitz.gsea(signature, library)
</code>
<code>
# Run this code block if you want to see a summary of the table produced by the blitzGSEA run.
result
</code>
<code>
result.to_excel("Resultados/Resultados_GSEA.xlsx", index=True)
</code>
<code>
# plot the enrichment results and save the top table as a PNG
fig_table = blitz.plot.top_table(signature, library, result, n=15)
fig_table.savefig("top_table.png", bbox_inches='tight')
</code>
<code>
import pandas as pd
import numpy as np
import os
import matplotlib.pyplot as plt
# Read the .xlsx file
file_path = 'Resultados/Resultados_GSEA.xlsx'
df = pd.read_excel(file_path)
# Filter the terms that meet the specified criteria
terms = []
for index, row in df.iterrows():
if pd.isna(row[0]) or row[5] > 0.49051:
break
terms.append(row[0])
# Folder in which to save the results
output_folder = 'Resultados'
# Generate and save the plots for each term
for term in terms:
fig = blitz.plot.running_sum(signature, term, library, result=result, compact=False)
fig.savefig(os.path.join(output_folder, f"running_sum_{term.replace(' ', '_')}.png"), bbox_inches='tight')
plt.close(fig)
fig_compact = blitz.plot.running_sum(signature, term, library, result=result, compact=True)
fig_compact.savefig(os.path.join(output_folder, f"running_sum_compact_{term.replace(' ', '_')}.png"), bbox_inches='tight')
plt.close(fig_compact)
</code>
## Saving the statistically significant GSEA results
<code>
import os
import pandas as pd
# Path of the folder that contains the .xlsx files
folder_path = 'Hojas_resultados'
# Get the list of all .xlsx files in the folder
files = [f for f in os.listdir(folder_path) if f.endswith('.xlsx')]
# Ask the user for the filtering method
print("The statistical methods available for filtering the results are:")
print("1: False discovery rate (FDR)")
print("2: Šidák correction")
print("3: The combination of both methods")
filter_choice = input("Enter the number corresponding to the chosen option (1, 2 or 3): ")
# Validate the user input
while filter_choice not in ['1', '2', '3']:
    filter_choice = input("Invalid input. Enter 1, 2 or 3: ")
for file in files:
    # Read the .xlsx file
    input_path = os.path.join(folder_path, file)
    df = pd.read_excel(input_path)
    # Filter the rows that meet the specified conditions
    if filter_choice == '1':
        filtered_df = df[df['fdr'] <= 0.05]
    elif filter_choice == '2':
        filtered_df = df[df['sidak'] <= 0.05]
    else:
        filtered_df = df[(df['fdr'] <= 0.05) & (df['sidak'] <= 0.05)]
    # Check whether the filtered DataFrame is empty
    if filtered_df.empty:
        print(f"File {file} contains no statistically significant enriched terms.")
        continue
    # Build the name of the output file
    output_file_name = file.replace('.xlsx', '_significativos.xlsx')
    output_path = 'Resultados_significativos'
    if not os.path.exists(output_path):
        os.makedirs(output_path)
    output_path = os.path.join(output_path, output_file_name)
    # Write the resulting DataFrame to a new .xlsx file
    filtered_df.to_excel(output_path, index=False)
    print(f"The filtered file has been saved to: {output_path}")
print("Process completed.")
</code>
## Filtering the results by functional relevance to the tumor type of interest
<code>
import os
import pandas as pd
# Define the folder paths
ruta_resultados_significativos = 'Resultados_significativos'
ruta_criterios_filtrado = 'Criterios_filtrado'
ruta_resultados_filtrados = 'Resultados_filtrados'
# Read the terms from Inclusion.txt and Exclusion.txt
with open(os.path.join(ruta_criterios_filtrado, 'Inclusion.txt'), 'r') as file:
    inclusion_terms = [line.strip() for line in file]
with open(os.path.join(ruta_criterios_filtrado, 'Exclusion.txt'), 'r') as file:
    exclusion_terms = [line.strip() for line in file]
# Make sure the filtered-results folder exists
os.makedirs(ruta_resultados_filtrados, exist_ok=True)
# Process each .xlsx file in the Resultados_significativos folder
for filename in os.listdir(ruta_resultados_significativos):
    if filename.endswith('.xlsx'):
        filepath = os.path.join(ruta_resultados_significativos, filename)
        df = pd.read_excel(filepath)
        # Filter the rows according to the inclusion and exclusion terms
        inclusion_mask = df['Term'].apply(lambda x: any(term in x for term in inclusion_terms))
        exclusion_mask = df['Term'].apply(lambda x: any(term in x for term in exclusion_terms))
        filtered_df = df[inclusion_mask & ~exclusion_mask]
        if not filtered_df.empty:
            # Save the new file if there are rows that meet the criteria
            new_filename = filename.replace('_resultados_GSEA_significativos.xlsx', '_resultados_GSEA_RF.xlsx')
            filtered_df.to_excel(os.path.join(ruta_resultados_filtrados, new_filename), index=False)
        else:
            # Warn the user if no rows meet the criteria
            print(f'No rows meet the criteria in file: {filename}')
print("Process completed.")
</code>
## Emptying the file folders
After copying the files you consider to be of interest to another location, you can run this code to delete every file in the folders listed in the "List of folders" below.
<code>
import os
# List of folders
carpetas = [
    "Hojas_resultados",
    "Resultados_significativos",
    "Resultados_filtrados"
]
def eliminar_archivos(carpeta):
    # List everything in the folder
    for filename in os.listdir(carpeta):
        file_path = os.path.join(carpeta, filename)
        try:
            # Check whether it is a file (or link) and delete it
            if os.path.isfile(file_path) or os.path.islink(file_path):
                os.unlink(file_path)
                print(f'File {file_path} deleted')
            # Directories are ignored
        except Exception as e:
            print(f'Could not delete {file_path}. Reason: {e}')
# Delete the files in each folder
for carpeta in carpetas:
    eliminar_archivos(carpeta)
print("Process completed.")
</code>
|
{
"filename": "paresSL_blitzGSEA.ipynb",
"repository": "MartinSenPom/HNSCC",
"query": "transformed_from_existing",
"size": 61015,
"sha": ""
}
|
# Project.ipynb
Repository: Nocnava/EmergingTechnologies
## **Deutsch's Algorithm**
##### By Conor Murphy
<br>
## **Introduction**
---
In the quantum computing field, constant advancements are being made in the realm of information computation. Among these advancements is Deutsch's algorithm, which was created by David Deutsch and has proven invaluable. As part of my studies for the fourth-year module, Emerging Technologies, taught by Ian McLoughlin, I have been tasked with exploring Deutsch's algorithm. Through this assignment, I intend to explain the basics of quantum computers, draw a comparison between quantum and classical computers, and dive into the topic of Deutsch's algorithm in greater detail.
<br><br><br>
## **Quantum Computer Basics**
---
### **What is Quantum Computing**
Quantum computing is a cutting-edge approach to processing and storing information that uses the fundamental principles of quantum mechanics. Quantum computing uses qubits (quantum bits), whereas classic computers use bits, i.e. 0 or 1. Qubits can exist in a superposition of states, meaning that a qubit can represent 0 and 1 simultaneously. This unique feature, together with entanglement, allows quantum computers to solve certain complex problems more efficiently.
#### **The Fundamental Principles of Quantum Computing**
Having briefly introduced the basic concept of quantum computing, we can now delve deeper into the core principles and components that let us harness its capabilities. These core principles and building blocks distinguish quantum computing from classic computing and offer a greater possibility of solving complex problems more efficiently. So, let us look at the key elements that allow us to harness the true potential of quantum computing.
1. **Qubits**<br>
Qubits, short for quantum bits, are the building blocks of quantum computing. Unlike classic computing bits, which can be either 0 OR 1, qubits have an ability called superposition, which means that a qubit can be 0 AND 1 simultaneously. This ability opens the door to carrying out many calculations simultaneously. Another ability of qubits is that they can become entangled, which is another way quantum bits differ from classic bits, as classic bits cannot entangle. The combination of superposition and entanglement allows quantum computers to solve complex problems more efficiently.
2. **Superposition**<br>
Superposition is a concept in quantum mechanics that forms the basis for the capabilities of quantum computing. It allows qubits to exist in multiple states at the same time, e.g. a qubit can represent both 0 and 1 simultaneously. Superposition changes how information is processed because it lets quantum computers explore many possible solutions to a problem at once. This allows quantum computations to work in parallel, which can speed up the completion of a specific task compared to what you could expect from a classic computer (the short Qiskit sketch just after this list makes these ideas concrete). Superposition forms the foundation of what quantum computing offers and highlights the difference between it and classical computing, whose bits cannot represent both values simultaneously.
3. **Entanglement**<br>
Entanglement in quantum mechanics is where two or more particles (electrons and photons, for example) become deeply interconnected, regardless of their physical distance. When particles become entangled, the state of one particle influences the state of the other, and it does so in a way that defies classical intuition. In quantum computing, entangled qubits allow for coordinated and synchronised operations that classical bits, and therefore classic computers, cannot perform.
4. **Quantum Gates & Quantum Algorithms**<br>
Quantum gates and quantum algorithms are responsible for the impressive computational power of quantum computing. Quantum gates manipulate the states of qubits to carry out specific quantum operations, creating effects such as superposition and entanglement. They are the building blocks from which quantum circuits are assembled, and these circuits underpin the computing capabilities of quantum computers. However, what about quantum algorithms? Quantum algorithms are the driving force behind quantum computers' computational power. They leverage the properties of qubits and are designed so that quantum computers can solve certain complex problems more efficiently than a classic computer could solve the same problems.
Quantum algorithms are created to solve specific problems efficiently, such as scientific simulations, cryptography, and other processes. Quantum algorithms harness the power of superposition, entanglement and quantum gates to help us unlock the full potential of quantum computing.
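To make superposition and entanglement a little more concrete, here is a short illustrative sketch (my own addition, not part of the original assignment text, and it assumes Qiskit with the `qiskit-aer` simulator is installed). A Hadamard gate puts one qubit into superposition, a CNOT entangles it with a second qubit, and sampling the circuit returns roughly half '00' and half '11'; the perfectly correlated outcomes are the signature of entanglement.
<code>
# Illustrative sketch: superposition plus entanglement (a Bell state).
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

bell = QuantumCircuit(2, 2)
bell.h(0)               # superposition: qubit 0 becomes an equal mix of |0> and |1>
bell.cx(0, 1)           # entanglement: qubit 1 is tied to qubit 0
bell.measure([0, 1], [0, 1])

counts = AerSimulator().run(bell, shots=1000).result().get_counts()
print(counts)           # roughly {'00': ~500, '11': ~500}; '01' and '10' should not appear
</code>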
With these principles, quantum computing has the potential to transform fields such as cryptography and drug discovery, and to tackle problems we would have thought computationally impossible. Unfortunately, quantum computing is still in its early days, and quantum computers are not yet at a level where they can perform practical tasks faster, cheaper, or more efficiently than a classical computer. So, what are quantum computers used for if that is the case? Well, one use of quantum computing is machine learning. Machine learning is when we analyse large amounts of data to help computers make better predictions and decisions. For example, we can create a model to detect and track cars, buses or bikes with machine learning, which in turn could help someone study traffic patterns or other areas of interest.
<br><br><br>
## **The Challenges of Quantum Computing**
---
Even though quantum computing is a cutting-edge approach with the potential to benefit many different fields, there are still many challenges and limitations to overcome before quantum computers can be adopted widely. Here are some of the main ones.
1. **Scalability**: The quantum computers we have today are quite difficult to scale compared to today's classical computers. This is because current quantum computers have a small number of qubits, which limits their processing power. To build larger quantum computers (vertical scaling), we would have to address the fragility of qubits and work out how to maintain quantum coherence, the property of a quantum system that allows it to maintain multiple states at the same time.
2. **Error Correction**: Quantum computers need reliable error correction so that they can provide reliable calculations. Implementing it is difficult, and, as a side effect, error correction causes quantum algorithms to require considerably more resources.
3. **Environmental Noise**: Another challenge you may never have thought about when it comes to quantum computing is environmental noise, e.g. thermal noise, photon noise and electronic noise. Factors such as stray photons or heat can cause errors when measuring a qubit, which in turn causes errors in the process or calculation.
4. **Decoherence**: Decoherence is a challenge in quantum computing that arises from the interaction of quantum computers with their surroundings. This interaction can cause the quantum states of qubits to become entangled with the environment, leading to a loss of coherence that ruins the quantum information the qubits represent. A quantum system's connection with its external surroundings shows why near-perfect conditions are required for quantum computers to perform at their best.
<br>
## **Solving the Challenges of Quantum Computing**
---
Of course, Quantum Computing has immense promise, but as discussed, it faces challenges that researchers are working tirelessly to overcome.
### **Scalability**:
One of the ways researchers are addressing the challenge of scaling in quantum computing is by improving the physical implementation and engineering of qubits and quantum systems. Doing so involves developing new technologies and finding new materials that can support more stable and coherent qubits, as well as enhancing the control and connectivity of qubits with technology such as microwave pulses and quantum dots. Another way researchers are trying to improve the scalability of quantum computers is by enhancing the design and programming of qubits and quantum algorithms. This involves finding new methods and tools that simplify quantum computation and make it more efficient through quantum gates, circuits and error-correcting codes.
### **Error Correction**:
At Princeton University, a team led by Jeff Thompson developed a method for dealing with errors in quantum computers. Rather than trying to prevent errors completely, they devised a way of spotting errors easily when they occur, using a special quantum computer built from neutral atoms. By watching these atoms closely during calculations, they can catch errors without ruining the computation in progress. This method essentially finds and fixes mistakes in a smarter way: it not only reduces the errors in the computer but also makes it easier to figure out how to correct them. [Princeton University: Illuminating errors creates a new paradigm for quantum computing October 11, 2023](https://engineering.princeton.edu/news/2023/10/11/illuminating-errors-creates-new-paradigm-quantum-computing#:~:text=Researchers%20have%20developed%20a%20method,computational%20problems%2C%20the%20researchers%20said.)
### **Environmental Noise**:
Researchers in the Pritzker School of Molecular Engineering at the University of Chicago have developed a new method of monitoring the noise around quantum systems and adjusting the qubits in real time to minimise errors. The method uses noise-cancelling qubits, called spectator qubits, which constantly measure the environmental noise around the system rather than storing information. The system detects changes in the noise and uses that information to cancel out noise in the critical data-processing qubits, an ingenious way to tackle the environmental-noise problem in quantum computing. Asst. Prof. Hannes Bernien, who led the research, offers a simple analogy: the method is like noise-cancelling headphones that constantly listen to the surrounding noise and cancel it out by emitting an opposing frequency. [PHYS.ORG - Researchers develop 'noise-cancelling' qubits to minimize errors in quantum computers. May 25, 2023](https://phys.org/news/2023-05-noise-canceling-qubits-minimize-errors-quantum.html)
<br><br>
## **Quantum Computing compared to Classical Computing**
---
Quantum computers and classic computers take two very different approaches to computation. Both aim to process information, but their principles and capabilities differ.
### **Computational Power**
Theoretically, quantum computers have an advantage over classic computers in computational power. Quantum algorithms can use quantum principles such as superposition and entanglement to perform specific processes and calculations faster and more efficiently than classical computers. For now, though, these advantages are mainly theoretical; quantum computers are still in the early stages of development, and their practicality faces many challenges, such as scalability, error correction, environmental noise and decoherence. In comparison, the computers we use every day, known as classic computers, are far more mature than quantum computers and are widely used by all kinds of people for all kinds of tasks. They are excellent at sequential algorithms and are optimized for classical computational workloads. So, despite quantum computers' potential, they cannot yet consistently outperform classic computers for everyday general-purpose computing tasks.
### **Information Representation**
As I've mentioned previously, the fundamental unit of information in quantum computing is a quantum bit, also known as a qubit. It can exist in a superposition, simultaneously representing a 0 and a 1. This qubit property allows quantum computers to perform various calculations and processes that may be impossible for classical computers. Classical computers, on the other hand, have what is simply known as a bit that can only represent itself as either a 0 or a 1, but not both like a quantum bit can.
### **Physical Requirements**
Quantum computers need extremely cold temperatures to operate correctly and keep their quantum coherence, i.e. to maintain the qubits' states; specifically, they must be kept close to absolute zero, around -273 degrees Celsius. Classical computers can be operated at room temperature without any real concern for the temperature rising or falling, within reason. Another difference between quantum and classical computers is the actual hardware inside them. Quantum processes and calculations are performed using specialised quantum processors that manipulate and control qubits; these processors are still under development and face scalability and error-correction challenges. Classical computers, on the other hand, have their processes carried out by central processing units, also known as CPUs. Compared to quantum processors, CPUs have been developed and refined over decades, resulting in the fast, efficient and reliable processors that power a huge variety of devices.
<br><br>
## **Deutsch's Algorithm**
---
### **Introduction to Deutsch's Algorithm**
Deutsch's algorithm was developed in 1985 by the British physicist David Deutsch. This groundbreaking algorithm pointed towards a new era of quantum problem-solving by showing how a quantum computer could outperform a classical computer on a specific task. But what was Deutsch's algorithm created to do? It was developed to solve a particular problem, the Deutsch problem.
### **The Deutsch Problem**
Deutsch's problem is a simple computational problem. We are presented with a black-box function (also known as an oracle) that takes a single-bit input of 0 or 1 and returns a single-bit output of 0 or 1. The function falls into one of two categories: constant or balanced. But what does this mean?
**Constant**: A constant function returns the same single-bit output every time, regardless of the input. For example, if we input 0 we get 1 back, and if we input 1 we also get 1 back.
**Balanced**: A balanced function's output depends on the input, so the two possible inputs (0 and 1) produce different outputs.
The objective of Deutsch's problem is to determine whether the black-box function is constant or balanced while making the fewest possible queries to it. Before Deutsch's algorithm, the worst case required at least two queries to the black-box function, one for each input, as the short classical sketch below illustrates.
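A minimal classical illustration in plain Python (no quantum library), added here for reference: the four possible one-bit functions and the classical test, which needs both f(0) and f(1), i.e. two queries to the black box.
<code>
# The four possible functions from one bit to one bit.
one_bit_functions = {
    "constant-0": lambda x: 0,
    "constant-1": lambda x: 1,
    "identity":   lambda x: x,      # balanced
    "negation":   lambda x: 1 - x,  # balanced
}

def classify_classically(f):
    # A classical algorithm needs two queries: one for each possible input.
    return "constant" if f(0) == f(1) else "balanced"

for name, f in one_bit_functions.items():
    print(name, "->", classify_classically(f))
</code>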
### **How does Deutsch's Algorithm work?**
Deutsch's Algorithm Step by Step:
1. **Initialisation**: In the first step of Deutsch's algorithm, we initialise two qubits in a known state (in the circuit below, the first qubit starts in |0⟩ and the second is flipped to |1⟩).
2. **Hadamard Transformation**: In this step we apply a Hadamard transformation to both qubits from the first step. This places each qubit in a superposition of states, meaning each has a 50 percent chance of being measured as 0 and a 50 percent chance of being measured as 1. To understand superposition in more detail, see "The Fundamental Principles of Quantum Computing" part of this assignment.
3. **Black-Box**: In this third step the algorithm queries the black-box function once. The oracle acts on the superposed qubits and, through interference and entanglement, encodes into the quantum state whether the function is constant or balanced.
4. **Measurement Basis**: In this step we apply Hadamard gates again to return the qubits to the standard measurement basis; this is required to extract the final result from the quantum state.
5. **Measurement and Solution**: In this fifth and final step the first qubit is measured, which tells us whether the black-box function is constant or balanced.
If the measurement result is 1, Deutsch's algorithm tells us the black-box function is balanced; if the result is 0, the function is constant.
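For reference, a compact version of the full circuit for a balanced (CNOT) oracle is sketched below; it assumes the same legacy Qiskit/Aer API (`Aer`, `execute`) used later in this notebook. In an ideal, noise-free simulation the measurement returns 1 for a balanced oracle and 0 for a constant one.
<code>
# A minimal sketch of the canonical Deutsch circuit with a balanced (CNOT) oracle,
# assuming the legacy Qiskit API (Aer, execute) used elsewhere in this notebook.
from qiskit import QuantumCircuit, Aer, execute

qc = QuantumCircuit(2, 1)
qc.x(1)           # step 1: prepare |0>|1>
qc.h(0)           # step 2: Hadamard on both qubits
qc.h(1)
qc.cx(0, 1)       # step 3: balanced oracle (CNOT)
qc.h(0)           # step 4: return the first qubit to the measurement basis
qc.measure(0, 0)  # step 5: measure the first qubit

backend = Aer.get_backend('qasm_simulator')
counts = execute(qc, backend=backend, shots=1024).result().get_counts()
print(counts)     # ideally all shots give '1' for this balanced oracle
</code>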
### **Plotting Deutsch Algorithm Circuits**
#### **Step One: Initialise a 2-qubit circuit**
<code>
from qiskit import QuantumCircuit
circ = QuantumCircuit(2,1)   # two qubits, one classical bit for the result
circ.i(0)                    # identity gate: leave the first qubit in |0>
circ.x(1)                    # X gate: flip the second qubit to |1>
print(circ)
</code>
#### **Step Two: Apply a Hadamard Gate to all qubits**
<code>
circ.h(0)
circ.h(1)
print(circ)
</code>
#### **Step Three: Apply CNOT Gate**
<code>
circ.cx(0,1)
print(circ)
</code>
#### **Step Four: Apply the Hadamard Gates to the Qubits Again**
<code>
circ.h(0)
circ.h(1)
print(circ)
</code>
#### **Step Five: Take the Measurement of the First Qubit of the Circuit**
<code>
circ.measure(0,0)
print(circ)
</code>
#### **Step Six: Run the Circuit with a Quantum Simulator (Qiskit Aer)**
<code>
from qiskit import *
from qiskit.visualization import plot_histogram
# Using Qiskit Aer quantum simulator for backend
backend=Aer.get_backend('qasm_simulator')
# Execute the quantum circuit on the backend with 1024 shots
result=execute(circ,backend=backend,shots=1024).result()
# Retrieve the counts of measurement outcomes from the results
counts=result.get_counts(circ)
print(counts)
# Plot a histogram of the measurement outcome for visualization
plot_histogram(counts)
</code>
#### **Step Seven: Run the Circuit with an IBM Quantum System**
<code>
from qiskit import *
from qiskit import IBMQ
from qiskit.visualization import plot_histogram
api_token = 'a91e700957697c7d4d163a9eaf667e5483550279433faed88d07d4f4ffa0fdbec5ab380ebd3de8fe846d2fd03a2a89f08a01545a17747c4c9844f9ed3f85d809'
</code>
<code>
# Save your account
IBMQ.save_account(api_token)
#Loads IBM credentials
IBMQ.load_account()
#Get provider for IBM Quantum devices
provide=IBMQ.get_provider('ibm-q')
# Gets specific quantum backend named ibm_brisbane
qcomp=provide.get_backend('ibm_brisbane')
# Executes the quantum circuit 'circ' on the backend 'qcomp' with 1024 shots, 'execute' function and sends quantum circuit for processing.
result=execute(circ,backend=qcomp,shots=1024).result()
# Retrieves the measurement results from the executed circuit. The counts represent the number of times each computational basis state
# was observed during the measurement
counts=result.get_counts(circ)
# Prints the outcomes
print(counts)
#Generates and displays a histogram plot of the counts
plot_histogram(counts)
</code>
### **Deutsch's Algorithm Code Example**
The following code implements Deutsch's algorithm in Qiskit, a quantum computing framework written in Python. The algorithm is designed to solve a particular problem, known as the Deutsch problem, more efficiently than classical algorithms. In classical computing, at least two queries are needed to determine whether a function is balanced or constant (unbalanced), whereas Deutsch's algorithm makes this distinction with a single query. The Qiskit code simulates the quantum circuits, giving insight into the capabilities of quantum computing.
### **Qiskit**
Qiskit is an open-source quantum computing software development framework created by IBM in 2017. It provides the user with tools and libraries for working with quantum computers. Quantum computers use the principles of quantum mechanics to perform computations that would be practically impossible for classical computers.
- Qiskit has several different components that cater to different aspects of quantum computing, which are outlined below:
- Qiskit Terra: This is the foundation of Qiskit and provides users with a set of tools for quantum circuit design and execution. It also allows users to create quantum circuits using a set of quantum gates and run them on different quantum systems or quantum simulators.
- Qiskit Aer: Aer focuses on quantum simulation and provides users with high-performance simulators that mimic the behaviour of real quantum devices. It also includes noise models to mimic the errors caused by environmental factors on real hardware.
- Qiskit Ignis: Ignis's job is error correction and mitigation. It allows users to understand and reduce the impact of errors on computations by implementing error correction codes and calibration techniques.
- Qiskit Aqua: This is a library of quantum algorithms. The algorithms can be used for various applications such as optimization and machine learning.
- Qiskit Nature: Qiskit Nature is a module included in Qiskit that is designed to solve problems in quantum chemistry and materials science.
Qiskit has been designed to be accessible both to experts in quantum computing and to people new to the field, such as students. It supports various quantum devices, such as IBM's quantum processors, and can be used for quantum simulations on classical computers by those without access to a real quantum computer. Being open source, Qiskit encourages the community to collaborate and build upon it.
<code>
## https://jan-czechowski.medium.com/implementing-deutschs-algorithm-in-qiskit-and-cirq-48949d60e59d
## pip install qiskit, pip install qiskit-aer
# Imported the required modules
import qiskit
from qiskit import *
</code>
<code>
regs = [QuantumRegister(2, 'q'), ClassicalRegister(1, 'c')]
# state initialization: Apply X on the second qubit, and Hadamard gates on both qubits
init = QuantumCircuit(*regs)
init.x(1)
init.h(0)
init.h(1)
init.barrier()
</code>
<code>
# Balanced Oracle: Apply a controlled-X (CNOT) gate, the output depends on the input
balanced = QuantumCircuit(*regs)
balanced.cx(0,1)
balanced.barrier()
# Unbalanced Oracle 0: Identity operation, no change to the input
unbalanced0 = QuantumCircuit(*regs)
unbalanced0.barrier()
# Unbalanced Oracle 1: Apply X gate on the second qubit, output flipped regardless of input
unbalanced1 = QuantumCircuit(*regs)
unbalanced1.x(1)
unbalanced1.barrier()
</code>
<code>
end = QuantumCircuit(*regs)
end.h(0)
end.measure(0, 0)
# Use Qiskit Aer simulator
sim = Aer.get_backend('qasm_simulator')
</code>
<code>
#Unbalanced can be seen as the same as constant
# Run simulation for each kind of oracle (balanced, unbalanced0, unbalanced1)
for kind, oracle in (('balanced', balanced),
('unbalanced-0', unbalanced0),
('unbalanced-1', unbalanced1)):
    # Compose the circuits with the oracles and the end circuit, and transpile.
    # Note: compose() concatenates circuits; front=True prepends the composed circuit,
    # which fixed the ordering issue present in the original example.
circuit = init.compose(oracle, front=True).compose(end, front=True)
transpiled = qiskit.transpile(circuit)
# Run the simulation and print the measurement outcomes
counts = sim.run(transpiled, shots=10).result().get_counts()
print(kind, counts)
</code>
### **Sample Output Explanation**
Sample output:<br>
<pre>
balanced {'0': 4, '1': 6}<br>
unbalanced-0 {'1': 8, '0': 2}<br>
unbalanced-1 {'0': 3, '1': 7}<br>
</pre>
**Balanced Output**: Out of 10 measurements, 4 resulted in '0' and 6 resulted in '1', i.e. the simulation saw a higher chance of '1' than of '0'. The balanced oracle creates an entangled state, and the measurements reflect the superposition of states.<br>
**Unbalanced Output 0**: Out of the 10 measurements, 2 resulted in '0' and 8 resulted in '1', so the simulation again saw a higher chance of '1', even though this oracle is constant (unbalanced).<br>
**Unbalanced Output 1**: Out of the 10 measurements, 3 resulted in '0' and 7 resulted in '1'. Similar to the balanced output, the simulation saw a greater chance of '1'. The unbalanced oracle flips the output, and the measurements reflect the superposition of states.<br>
Note that in an ideal, noise-free run of Deutsch's algorithm the measurement is deterministic: a constant oracle always yields 0 and a balanced oracle always yields 1, so a single query suffices. The mixed counts above therefore reflect how this particular circuit was composed rather than any inherent randomness of the algorithm, and they do not cleanly separate constant from balanced oracles. A classical algorithm, in contrast, would need two queries to the function to reach a conclusion.
<code>
import matplotlib.pyplot as plt
# Results from sample output
results = {
'balanced': {'0': 4, '1': 6},
'unbalanced-0': {'1': 8, '0': 2},
'unbalanced-1': {'0': 3, '1': 7}
}
# Extract data for plotting
labels = results.keys()
zeros = [value.get('0', 0) for value in results.values()]
ones = [value.get('1', 0) for value in results.values()]
# Plotting
width = 0.35
fig, ax = plt.subplots()
rects1 = ax.bar(labels, zeros, width, label='Outcome 0')
rects2 = ax.bar(labels, ones, width, bottom=zeros, label='Outcome 1')
# Add labels, title, and legend
ax.set_ylabel('Number of Measurements')
ax.set_title('Measurement Outcomes in Deutsch\'s Algorithm')
ax.legend()
# Show the plot
plt.show()
</code>
## **Conclusion**
---
After analyzing Deutsch's algorithm, it becomes evident that quantum systems have the potential to solve certain computational tasks more efficiently than classical computers. By determining the nature of a black-box function in a single query, Deutsch's algorithm exposes a limitation of classical computation and lets us glimpse the potential efficiency of quantum computers. David Deutsch's algorithmic breakthrough was a significant step towards a future in which quantum computers become more capable and more commonplace, and the ingenuity behind the algorithm shows the great potential that quantum computing holds.
## **References**
---
[classiq.io - How Does Deutsch's Algorithm Work?](https://www.classiq.io/insights/the-deutsch-jozsa-algorithm-explained#:~:text=Using%20the%20Deutsch%2DJozsa%20approach,or%20all%20outputs%20are%201.)
[What is Parallel Computing](https://en.wikipedia.org/wiki/Parallel_computing)
[AWS Quantum Computing](https://aws.amazon.com/what-is/quantum-computing/#:~:text=superposition%20of%20states.-,What%20are%20the%20principles%20of%20quantum%20computing%3F,superposition%2C%20entanglement%2C%20and%20decoherence)
[Medium Article on Deutsch-Jozsa Algorithm](https://anonymousket.medium.com/quantum-algo-deutsch-jozsa-algorithm-7181bd1e6a02)
[Medium Article - Deutsch's Algorithm Code Example](https://jan-czechowski.medium.com/implementing-deutschs-algorithm-in-qiskit-and-cirq-48949d60e59d)
[Cornell University - Error Correction of Quantum Algorithms: Arbitrarily Accurate Recovery Of Noisy Quantum Signal Processing](https://arxiv.org/abs/2301.08542#:~:text=While%20current%20error%2Dcorrecting%20strategies,quantum%20algorithms%20of%20increasing%20complexity)
[Nature - Quantum Computers: What are they good for?](https://www.nature.com/articles/d41586-023-01692-9#:~:text=The%20quantum%20rules%20of%20this,those%20do%20not%20yet%20exist)
[UC Chicago News - Noise-Cancelling qubits can minimize errors in quantum computers](https://news.uchicago.edu/story/noise-cancelling-qubits-can-minimize-errors-quantum-computers#:~:text=A%20daunting%20challenge,to%20high%20rates%20of%20error)
[Phys.org - Researchers develop 'noise-cancelling' qubits to minimize errors in quantum computers](https://phys.org/news/2023-05-noise-canceling-qubits-minimize-errors-quantum.html)
[Nature - Unfolding quantum computer readout noise](https://www.nature.com/articles/s41534-020-00309-7#:~:text=With%20active%20research%20and%20development,corrected%20to%20improve%20measurement%20fidelity)
[Qubit devices and the issue of quantum decoherence](https://www.sciencedirect.com/science/article/abs/pii/S0079672799000038#:~:text=Quantum%20decoherence%20arises%20from%20the,expressed%20in%20an%20appropriate%20basis)
[Paragraf - Quantum Computing](https://www.paragraf.com/quantum-computing/#:~:text=Quantum%20computers%20require%20a%20highly,fundamental%20units%20of%20quantum%20information)
[GitHub - qiskit-community-tutorials](https://github.com/qiskit-community/qiskit-community-tutorials/tree/master)
[Linkedin Scalability - Physical Solutions](https://www.linkedin.com/advice/0/how-do-qubits-affect-scalability-complexity-quantum#physical-solutions)
[Princeton University - Illuminating errors creates a new paradigm for quantum computing](https://engineering.princeton.edu/news/2023/10/11/illuminating-errors-creates-new-paradigm-quantum-computing#:~:text=Researchers%20have%20developed%20a%20method,computational%20problems%2C%20the%20researchers%20said)
|
{
"filename": "Project.ipynb",
"repository": "Nocnava/EmergingTechnologies",
"query": "transformed_from_existing",
"size": 103111,
"sha": ""
}
|
# Comparison in single-cell data.ipynb
Repository: cantinilab/momix-notebook
# SUB-BENCHMARK3: Comparing jDR methods on single-cell datasets
The performance of the 9 jDR methods is compared here based on their ability to cluster cells according to their cancer cell line of origin. The clustering is performed by jointly considering scRNA-seq and scATAC-seq data.
## Data preprocessing
First, the data are read in their original format and adapted so that they can be used as input to our runfactorization function.
<code>
# Load data and processing
# Load RNA-seq data
exp <- readRDS("../data/single-cell/CellLines_RNAseqCounts.RDS", refhook = NULL) #ENS for genes and counts
# Apply log2 on RNA-seq data
exp <- log2(exp+1)
# Load ATAC-seq data
atac_counts<-readRDS("../data/single-cell/CellLines_ATACseqCounts.RDS", refhook = NULL) # peaks counts
# Load metadata
metadata<-readRDS("../data/single-cell/CellLines_metadata.RDS", refhook = NULL)
# Rename columns from metadata
colnames(atac_counts) <- metadata[,1]
# Export RNA-seq data as tab-separated table
write.table(exp, "../data/single-cell/CellLines_RNAseqCounts.txt",
sep="\t", col.names=TRUE, row.names=TRUE)
# Add a name ("probe") to the first column
system("sed -i '1s/^/probe\t/' ../data/single-cell/CellLines_RNAseqCounts.txt")
# Export ATAC-seq data as tab-separated table
write.table(atac_counts, "../data/single-cell/CellLines_ATACseqCounts.txt",
sep="\t", col.names=TRUE, row.names=TRUE)
# Add a name ("probe") to the first column
system("sed -i '1s/^/probe\t/' ../data/single-cell/CellLines_ATACseqCounts.txt")
</code>
## Running comparison
Two factors are then computed for each jDR method, and the distribution of the cells with respect to Factor 1 and Factor 2 is plotted as a scatter plot. The obtained plots are saved in the results folder. The capability of the different jDR methods to cluster the cells according to their cell line of origin is finally evaluated through the C-index, whose values are also reported in the results folder.
<code>
library("ggplot2")
library("clusterCrit")
source("runfactorization.R")
# Parameters for the plots
dot_size <- 1.5
dot_alpha <- 1.0
xlabel <- "Factor 1"
ylabel <- "Factor 2"
# Load annotations from the metadata
sample_annot <- metadata[, c("sample.rna", "celltype")]
# Folder for results
results_folder <- "../results_single_cell/"
# Create output folder
dir.create(results_folder, showWarnings = FALSE)
# Run factorization methods
out <- runfactorization("../data/single-cell/",
c("CellLines_RNAseqCounts.txt", "CellLines_ATACseqCounts.txt"),
2,
sep="\t",
filtering="stringent")
c_index <- numeric(0)
# For each factorization method
for(i in 1:length(out$factorizations)){
# Get factorization result
factors <- out$factorizations[[i]][[1]]
# Delete NAs
factors <- factors[!is.na(factors[,1]) & !is.na(factors[,2]), ]
sample_annot <- sample_annot[!is.na(sample_annot[,1]) & !is.na(sample_annot[,2]), ]
# Data to be plotted
df <- data.frame(x = factors[,1], y = factors[,2], color_by = sample_annot[,2])
# Plot results
p <- ggplot(df, aes_string(x = "x", y = "y")) +
geom_point(aes_string(color = "color_by"), size=dot_size, alpha=dot_alpha) +
xlab(xlabel) + ylab(ylabel) +
# scale_shape_manual(values=c(19,1,2:18)[seq_along(unique(shape_by))]) +
theme(plot.margin = margin(20, 20, 10, 10),
axis.text = element_text(size = rel(1), color = "black"),
axis.title = element_text(size = 16),
axis.title.y = element_text(size = rel(1.1), margin = margin(0, 10, 0, 0)),
axis.title.x = element_text(size = rel(1.1), margin = margin(10, 0, 0, 0)),
axis.line = element_line(color = "black", size = 0.5),
axis.ticks = element_line(color = "black", size = 0.5),
panel.border = element_blank(),
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
panel.background = element_blank(),
legend.key = element_rect(fill = "white"),
legend.text = element_text(size = 16),
legend.title = element_text(size =16)
)
p + scale_color_manual(values=c("#0072B2", "#D55E00", "#CC79A7"))
# Export plot as JPEG image
ggsave(paste0(results_folder, "plot_",out$method[i],".jpg"))
# Encode cell type annotations by numeric codes
ann <- factor(sample_annot[,2], levels=c("HCT", "Hela", "K562"))
ann <- as.integer(ann)
# Compare factors and annotations
c_index <- c(c_index, intCriteria(factors, as.integer(ann), crit=c("C_index"))$c_index)
}
# Build output table
report_cindex <- data.frame(method=out$method, cindex=c_index)
# Export results as one tab-separated table
write.table(report_cindex, file = paste0(results_folder, "singlecell_cindex.txt"),
sep="\t", col.names=FALSE, row.names=FALSE, quote=FALSE)
</code>
|
{
"filename": "Comparison in single-cell data.ipynb",
"repository": "cantinilab/momix-notebook",
"query": "transformed_from_existing",
"size": 11968,
"sha": ""
}
|
# Project02_factoranalysis.ipynb
Repository: deeplife4eu/Lecture-materials
## Project: Factor analysis for multimodal data using pyro
### Introduction
Single-cell genomics allows profiling not only a single data modality (gene expression, chromatin accessibility, ...) but multiple modalities at once from the same cell ("multimodal data"). This makes it possible to gain a better understanding of the cellular state by looking at different biological processes within the same cell and to characterize the cell state more precisely. Examples include joint profiling of gene expression and chromatin accessibility (scRNA-seq and scATAC-seq) or of gene expression and DNA methylation. Another type of multimodal data is [CITE-seq](https://cite-seq.com/), in which gene expression is profiled simultaneously with the expression of proteins on the cell surface. This is, for example, used to profile immune cells, which are often characterized by a combination of surface proteins.
### Goal
The goal of this project is to build a pyro-based factor model to analyse CITE-seq data and couple the two data modalities, i.e. gene expression and surface protein expression. As a start, you can take the code from the FA class of the lab in week 6 to learn factor and weight matrices for each individual data modality. Based on this, you can then include a second weight matrix (for the second modality) with a shared factor matrix for both modalities, as used in the MOFA method (a minimal sketch of such a model is given after the questions below). After checking that your model works on simulated data, apply it to the CITE-seq data set described below. We then suggest investigating some of the following questions:
1. How does the performance of your pyro model compare to MOFA on the CITE-seq data? What additional priors and sparsity assumptions could you add to your model to make it more similar to MOFA? Try out some different prior distributions and model settings.
2. How could we leverage correspondence of transcripts and proteins in the data? Since we know which protein corresponds to which transcript, we could try to couple the corresponding values in the two weight matrices instead of having independent values in the model. Would you expect to have the same (or more similar) weights for a transcript and its corresponding protein? Can you think of a hierarchical model, in which a shared parameter for every transcript-protein pair is learnt and serves as a parameter for the distribution of the corresponding weight in each modality?
3. How does this approach compare to using a non-linear method such as a VAE? Compare your results with other groups working with a VAE model on this data set.
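To make the starting point concrete, here is a minimal, illustrative pyro sketch of a two-modality factor model with a shared factor matrix Z and modality-specific weight matrices. The names `Y_rna` and `Y_prot` are placeholders for your preprocessed matrices, and simple Normal/HalfCauchy priors are assumed for illustration rather than the full MOFA model.
<code>
import torch
import pyro
import pyro.distributions as dist

def shared_factor_model(Y_rna, Y_prot, K=10):
    """Illustrative two-modality factor model with a shared factor matrix Z."""
    N, D_rna = Y_rna.shape
    _, D_prot = Y_prot.shape
    # shared factors: one K-dimensional latent vector per cell
    Z = pyro.sample("Z", dist.Normal(torch.zeros(N, K), 1.0).to_event(2))
    # modality-specific loadings
    W_rna = pyro.sample("W_rna", dist.Normal(torch.zeros(K, D_rna), 1.0).to_event(2))
    W_prot = pyro.sample("W_prot", dist.Normal(torch.zeros(K, D_prot), 1.0).to_event(2))
    # per-modality noise scales
    s_rna = pyro.sample("s_rna", dist.HalfCauchy(torch.tensor(1.0)))
    s_prot = pyro.sample("s_prot", dist.HalfCauchy(torch.tensor(1.0)))
    # both likelihoods share the same Z, which couples the two modalities
    pyro.sample("obs_rna", dist.Normal(Z @ W_rna, s_rna).to_event(2), obs=Y_rna)
    pyro.sample("obs_prot", dist.Normal(Z @ W_prot, s_prot).to_event(2), obs=Y_prot)

# The model can then be fit with pyro's SVI and, e.g., an AutoNormal guide.
</code>
Coupling the weights of corresponding transcript-protein pairs (question 2) can be layered on top of this skeleton, for instance via a shared hierarchical prior per pair.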
### Data and model
The data to be used is a CITE-seq dataset provided by 10x Genomics. Check the [scanpy CITE-seq tutorial](https://scanpy-tutorials.readthedocs.io/en/multiomics/cite-seq/pbmc5k.html) to see how to obtain the dataset and convert it into the AnnData format.
Alternatively, you can also use the CITE-seq dataset provided [as part of the NeurIPS21 competition](https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE194122). This is the [link](https://www.ncbi.nlm.nih.gov/geo/download/?acc=GSE194122&format=file&file=GSE194122%5Fopenproblems%5Fneurips2021%5Fcite%5FBMMC%5Fprocessed%2Eh5ad%2Egz) to the CITE-seq data in anndata format.
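As a minimal sketch of getting started with the NeurIPS21 file, assuming the file name from the download link above (after decompression) and that the modalities are annotated in `adata.var['feature_types']` with values `'GEX'` and `'ADT'` (check the actual file and adapt if needed):
<code>
import scanpy as sc

# File name assumed from the download link above (after gunzip); adapt as needed.
adata = sc.read_h5ad("GSE194122_openproblems_neurips2021_cite_BMMC_processed.h5ad")

# Assumed annotation: 'GEX' = gene expression, 'ADT' = surface proteins.
rna = adata[:, adata.var["feature_types"] == "GEX"].copy()
protein = adata[:, adata.var["feature_types"] == "ADT"].copy()
print(rna.shape, protein.shape)
</code>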
You can use snippets of code from the lab of week 6, where we showed how to build a factor model with pyro, as a starting point and modify the architecture as needed. For details on pyro, also take a look at the documentation [here](https://docs.pyro.ai/en/stable/).
|
{
"filename": "Project02_factoranalysis.ipynb",
"repository": "deeplife4eu/Lecture-materials",
"query": "transformed_from_existing",
"size": 4901,
"sha": ""
}
|
# KSEA_example_1.ipynb
Repository: saezlab/kinact
# Protocol for Kinase-Substrate Enrichment Analysis (KSEA)
This IPython notebook accompanies the chapter 'Phosphoproteomics-based profiling of kinase activities in cancer cells' in the book 'Methods of Molecular Biology: Cancer Systems Biology' from Springer, 2016.
The script aims to demonstrate the methodology of KSEA, to facilitate grasping the operations performed in the provided code, and to enable reproduction of the implementation in other programming languages where required.
<code>
# Import useful libraries
import numpy as np
import pandas as pd
# Import required libraries for data visualisation
import matplotlib.pyplot as plt
import seaborn as sns
# Import the package
import kinact
# Magic
%matplotlib inline
</code>
## Quick Start
<code>
# import data
data_fc, data_p_value = kinact.get_example_data()
# import prior knowledge
adj_matrix = kinact.get_kinase_targets()
print data_fc.head()
print
print data_p_value.head()
</code>
<code>
# Perform ksea using the Mean method
score, p_value = kinact.ksea.ksea_mean(data_fc=data_fc['5min'].dropna(),
interactions=adj_matrix,
mP=data_fc['5min'].values.mean(),
delta=data_fc['5min'].values.std())
print pd.DataFrame({'score': score, 'p_value': p_value}).head()
</code>
<code>
# Perform ksea using the Alternative Mean method
score, p_value = kinact.ksea.ksea_mean_alt(data_fc=data_fc['5min'].dropna(),
p_values=data_p_value['5min'],
interactions=adj_matrix,
mP=data_fc['5min'].values.mean(),
delta=data_fc['5min'].values.std())
print pd.DataFrame({'score': score, 'p_value': p_value}).head()
</code>
<code>
# Perform ksea using the Delta method
score, p_value = kinact.ksea.ksea_delta(data_fc=data_fc['5min'].dropna(),
p_values=data_p_value['5min'],
interactions=adj_matrix)
print pd.DataFrame({'score': score, 'p_value': p_value}).head()
</code>
### 1. Loading the data
In order to perform the described kinase enrichment analysis, we load the data into a Pandas DataFrame. Here, we use the data from <em>de Graaf et al., 2014</em> for demonstration of KSEA. The data is available as supplemental material to the article online under http://mcponline.org/content/13/9/2426/suppl/DC1. The dataset of interest can be found in the Supplemental Table 2.
When downloading the dataset from the internet, it is provided as an Excel spreadsheet. For use in this script, it has to be saved as a csv-file, using the 'Save As' function in Excel.
In the accompanying github repository, we will provide an already processed csv-file together with the code for KSEA.
<code>
# Read data
data_raw = pd.read_csv('../kinact/data/deGraaf_2014_jurkat.csv', sep=',', header=0)
# Filter for those p-sites that were matched ambiguously
data_reduced = data_raw[~data_raw['Proteins'].str.contains(';')]
# Create identifier for each phosphorylation site, e.g. P06239_S59 for the Serine 59 in the protein Lck
data_reduced.loc[:, 'ID'] = data_reduced['Proteins'] + '_' + data_reduced['Amino acid'] + \
data_reduced['Positions within proteins']
data_indexed = data_reduced.set_index('ID')
# Extract only relevant columns
data_relevant = data_indexed[[x for x in data_indexed if x.startswith('Average')]]
# Rename columns
data_relevant.columns = [x.split()[-1] for x in data_relevant]
# Convert abundances into fold changes compared to control (0 minutes after stimulation)
data_fc = data_relevant.sub(data_relevant['0min'], axis=0)
data_fc.drop('0min', axis=1, inplace=True)
# Also extract the p-values for the fold changes
data_p_value = data_indexed[[x for x in data_indexed if x.startswith('p value') and x.endswith('vs0min')]]
data_p_value.columns = [x.split('_')[-1].split('vs')[0] + 'min' for x in data_p_value]
data_p_value = data_p_value.astype('float') # Excel saved the p-values as strings, not as floating point numbers
print data_fc.head()
print data_p_value.head()
</code>
### 2. Import prior-knowledge kinase-substrate relationships from PhosphoSitePlus
In the following example, we use the data from the PhosphoSitePlus database, which can be downloaded here: http://www.phosphosite.org/staticDownloads.action.
Note that the downloaded file contains a disclaimer at the top, which has to be removed before the file can be used as described below.
<code>
# Read data
ks_rel = pd.read_csv('../kinact/data/PhosphoSitePlus.txt', sep='\t')
# The data from the PhosphoSitePlus database is not provided as comma-separated value file (csv),
# but instead, a tab = \t delimits the individual cells
# Restrict the data on interactions in the organism of interest
ks_rel_human = ks_rel.loc[(ks_rel['KIN_ORGANISM'] == 'human') & (ks_rel['SUB_ORGANISM'] == 'human')]
# Create p-site identifier of the same format as before
ks_rel_human.loc[:, 'psite'] = ks_rel_human['SUB_ACC_ID'] + '_' + ks_rel_human['SUB_MOD_RSD']
# Create adjacency matrix (links between kinases (columns) and p-sites (rows) are indicated with a 1, 0 otherwise)
ks_rel_human.loc[:, 'value'] = 1
adj_matrix = pd.pivot_table(ks_rel_human, values='value', index='psite', columns='GENE', fill_value=0)
print adj_matrix.head()
print adj_matrix.sum(axis=0).sort_values(ascending=False).head()
</code>
# 3. KSEA
## 3.1 Quick start for KSEA
Together with this tutorial, we provide an implementation of KSEA as custom Python functions. As an example, applying these functions to the dataset by de Graaf et al. could look like this.
<code>
score, p_value = kinact.ksea.ksea_delta(data_fc=data_fc['5min'],
p_values=data_p_value['5min'],
interactions=adj_matrix,
)
print pd.DataFrame({'score': score, 'p_value': p_value}).head()
</code>
<code>
# Calculate the KSEA scores for all data with the ksea_mean method
activity_mean = pd.DataFrame({c: kinact.ksea.ksea_mean(data_fc=data_fc[c],
interactions=adj_matrix,
mP=data_fc.values.mean(),
delta=data_fc.values.std())[0]
for c in data_fc})
activity_mean = activity_mean[['5min', '10min', '20min', '30min', '60min']]
print activity_mean.head()
# Calculate the KSEA scores for all data with the ksea_mean method, using the median
activity_median = pd.DataFrame({c: kinact.ksea.ksea_mean(data_fc=data_fc[c],
interactions=adj_matrix,
mP=data_fc.values.mean(),
delta=data_fc.values.std(), median=True)[0]
for c in data_fc})
activity_median = activity_median[['5min', '10min', '20min', '30min', '60min']]
print activity_median.head()
# Calculate the KSEA scores for all data with the ksea_mean_alt method
activity_mean_alt = pd.DataFrame({c: kinact.ksea.ksea_mean_alt(data_fc=data_fc[c],
p_values=data_p_value[c],
interactions=adj_matrix,
mP=data_fc.values.mean(),
delta=data_fc.values.std())[0]
for c in data_fc})
activity_mean_alt = activity_mean_alt[['5min', '10min', '20min', '30min', '60min']]
print activity_mean_alt.head()
# Calculate the KSEA scores for all data with the ksea_mean method, using the median
activity_median_alt = pd.DataFrame({c: kinact.ksea.ksea_mean_alt(data_fc=data_fc[c],
p_values=data_p_value[c],
interactions=adj_matrix,
mP=data_fc.values.mean(),
delta=data_fc.values.std(),
median=True)[0]
for c in data_fc})
activity_median_alt = activity_median_alt[['5min', '10min', '20min', '30min', '60min']]
print activity_median_alt.head()
# Calculate the KSEA scores for all data with the ksea_delta method
activity_delta = pd.DataFrame({c: kinact.ksea.ksea_delta(data_fc=data_fc[c],
p_values=data_p_value[c],
interactions=adj_matrix)[0]
for c in data_fc})
activity_delta = activity_delta[['5min', '10min', '20min', '30min', '60min']]
print activity_delta.head()
</code>
<code>
sns.set(context='poster', style='ticks')
sns.heatmap(activity_mean_alt, cmap=sns.blend_palette([sns.xkcd_rgb['amber'],
sns.xkcd_rgb['almost black'],
sns.xkcd_rgb['bright blue']],
as_cmap=True))
plt.show()
</code>
In de Graaf et al., they associated (amongst others) the Casein kinase II alpha (CSNK2A1) with higher activity after prolonged stimulation with prostaglandin E2. Here, we plot the activity scores of CSNK2A1 for all three methods of KSEA, which are in good agreement.
<code>
kinase='CSNK2A1'
df_plot = pd.DataFrame({'mean': activity_mean.loc[kinase],
'delta': activity_delta.loc[kinase],
'mean_alt': activity_mean_alt.loc[kinase]})
df_plot['time [min]'] = [5, 10, 20, 30, 60]
df_plot = pd.melt(df_plot, id_vars='time [min]', var_name='method', value_name='activity score')
g = sns.FacetGrid(df_plot, col='method', sharey=False, size=3, aspect=1)
g = g.map(sns.pointplot, 'time [min]', 'activity score')
plt.subplots_adjust(top=.82)
plt.show()
</code>
## 3.2. KSEA in detail
In the following, we show in detail the computations that are carried out inside the provided functions. Let us concentrate on a single condition (60 minutes after stimulation with prostaglandin E2) and a single kinase (CDK1).
<code>
data_condition = data_fc['60min'].copy()
p_values = data_p_value['60min']
kinase = 'CDK1'
</code>
<code>
substrates = adj_matrix[kinase].replace(0, np.nan).dropna().index
detected_p_sites = data_fc.index
intersect = list(set(substrates).intersection(detected_p_sites))
</code>
### 3.2.1. Mean method
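The code below computes the z-score

$$ z = \frac{(m_S - m_P)\,\sqrt{m}}{\delta}, $$

where $m_S$ is the mean fold change of the kinase's substrate set, $m_P$ the mean over all measured fold changes, $m$ the number of substrates in the set and $\delta$ the standard deviation of all fold changes; the p-value is then obtained from the survival function of the standard normal distribution.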
<code>
mS = data_condition.loc[intersect].mean()
mP = data_fc.values.mean()
m = len(intersect)
delta = data_fc.values.std()
z_score = (mS - mP) * np.sqrt(m) * 1/delta
from scipy.stats import norm
p_value_mean = norm.sf(abs(z_score))
print mS, p_value_mean
</code>
### 3.2.2. Alternative Mean method
<code>
cut_off = -np.log10(0.05)
set_alt = data_condition.loc[intersect].where(p_values.loc[intersect] > cut_off).dropna()
mS_alt = set_alt.mean()
z_score_alt = (mS_alt - mP) * np.sqrt(len(set_alt)) * 1/delta
p_value_mean_alt = norm.sf(abs(z_score_alt))
print mS_alt, p_value_mean_alt
</code>
### 3.2.3. Delta Method
<code>
cut_off = -np.log10(0.05)
score_delta = len(data_condition.loc[intersect].where((data_condition.loc[intersect] > 0) &
(p_values.loc[intersect] > cut_off)).dropna()) -\
len(data_condition.loc[intersect].where((data_condition.loc[intersect] < 0) &
(p_values.loc[intersect] > cut_off)).dropna())
M = len(data_condition)
n = len(intersect)
N = len(np.where(p_values.loc[adj_matrix.index.tolist()] > cut_off)[0])
from scipy.stats import hypergeom
hypergeom_dist = hypergeom(M, n, N)
p_value_delta = hypergeom_dist.pmf(len(p_values.loc[intersect].where(p_values.loc[intersect] > cut_off).dropna()))
print score_delta, p_value_delta
</code>
|
{
"filename": "KSEA_example_1.ipynb",
"repository": "saezlab/kinact",
"query": "transformed_from_existing",
"size": 149932,
"sha": ""
}
|
# Tangram_osmFISH.ipynb
Repository: ericcombiolab/HarmoDecon
<code>
import scanpy as sc
import squidpy as sq
import numpy as np
import pandas as pd
import anndata as ad
from anndata import AnnData
import pathlib
import matplotlib.pyplot as plt
import matplotlib as mpl
import skimage
import os
import time
</code>
<code>
# import tangram for spatial deconvolution
import tangram as tg
</code>
<code>
stdir = "/home/comp/cszrwang/data/osmfish/"
stfile = 'osmfish.st.cnt.genexrow.tsv'
cellfile = "osmfish.cell_proportion.txt"
scdir = "/home/comp/cszrwang/data/osmfish/SSp_ref/external/"
reffile = "sc_cnt.33gene.5392cell.genexrow.tsv"
metafile = "sc_mta.5392cell.tsv"
resultdir = "./Tangram/"
resultfile = "osmfish.tangram.csv"
result_cellfile = "osmfish.tangram.csv"
if not os.path.exists(resultdir):
os.mkdir(resultdir)
</code>
<code>
# read in ST data
st = pd.read_csv(stdir + stfile, sep='\t', index_col=0)
st = st.transpose()
adata_st = AnnData(st)
adata_st
</code>
<code>
## append cell count for each spot
cell = pd.read_csv(stdir + cellfile, sep='\t', index_col=0)
adata_st.obs = adata_st.obs.merge(cell.sum(axis = 1).to_frame(name="cell_count"), how = 'outer', left_index = True, right_index = True)
## create spatial coordinates information
spatial_coord = adata_st.obs.reset_index()['index'].str.split('_', expand = True).to_numpy().astype(int)
spatial_coord[:,0] = spatial_coord[:,0] + spatial_coord[:,1]
spatial_coord = spatial_coord[:, 0:2]
adata_st.obsm['spatial'] = spatial_coord
centroid = pd.Series(index = adata_st.obs.index, dtype = "object")
for i in range(len(centroid)):
centroid[i] = np.tile(spatial_coord[i], (adata_st.obs.cell_count[i],1))
adata_st.obsm['image_features'] = cell.sum(axis = 1).to_frame(name="segmentation_label").merge(centroid.to_frame(name = "segmentation_centroid"),left_index = True, right_index = True)
</code>
<code>
spatial_coord
</code>
<code>
# Read in scRNA-seq data
scdat = pd.read_csv(scdir + reffile, sep='\t', index_col=0)
adata_sc = AnnData(scdat.T)
sc_meta = pd.read_csv(scdir + metafile, sep='\t')
sc_meta.set_index('sample_name', inplace = True)
sc_meta.index = sc_meta.index.astype('str')
adata_sc.obs = adata_sc.obs.merge(sc_meta, how = 'left', left_index=True, right_index=True)
adata_sc.obs["bio_celltype"] = pd.Categorical(adata_sc.obs['bio_celltype'])
adata_sc
start_time = time.time()
# preprocessing: find the common genes between sc and st
tg.pp_adatas(adata_sc, adata_st)
</code>
<code>
# Note: additional information is used here (the per-spot cell counts serve as the density prior)
# Deconvolution
ad_map = tg.map_cells_to_space(
adata_sc,
adata_st,
mode="constrained",
target_count=adata_st.obs.cell_count.sum(),
density_prior=np.array(adata_st.obs.cell_count) / adata_st.obs.cell_count.sum(),
num_epochs=1000,
device="cuda:0",
#device='cpu',
)
</code>
<code>
# Gather deconvolution results
## map the cell type information to the st AnnData object
## The output created is the unnormalized probability matrix
tg.project_cell_annotations(ad_map, adata_st, annotation="bio_celltype")
end_time = time.time()
print("--- %.2f seconds ---" % (time.time() - start_time))
</code>
<code>
## normalize the probability matrix and save as csv
prob_mat = adata_st.obsm["tangram_ct_pred"]
prob_mat = prob_mat.div(prob_mat.sum(axis=1), axis=0)
prob_mat.to_csv(resultdir + resultfile, sep = '\t')
## create cell-level mapping assignments
tg.create_segment_cell_df(adata_st)
tg.count_cell_annotations(
ad_map,
adata_sc,
adata_st,
annotation="bio_celltype",
)
adata_st.obsm["tangram_ct_count"].drop(columns = ['centroids']).to_csv(resultdir + result_cellfile)
</code>
|
{
"filename": "Tangram_osmFISH.ipynb",
"repository": "ericcombiolab/HarmoDecon",
"query": "transformed_from_existing",
"size": 10457,
"sha": ""
}
|
# log_reg_1.ipynb
Repository: RasmussenLab/njab
# Logistic regression model
Procedure:
Example: Alzheimer's mass spectrometry-based proteomics dataset
> Predict Alzheimer's disease based on proteomics measurements.
<code>
# Setup colab installation
# You need to restart the runtime after running this cell
%pip install njab heatmapz openpyxl plotly umap-learn
</code>
<code>
import itertools
import logging
from pathlib import Path
from typing import Optional
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import plotly.express as px
import seaborn
import sklearn
import sklearn.impute
import statsmodels.api as sm
import umap
from heatmap import corrplot
from IPython.display import display
from sklearn.metrics import log_loss, make_scorer
import njab.sklearn
from njab.plotting.metrics import plot_auc, plot_prc
from njab.sklearn import StandardScaler
from njab.sklearn import pca as njab_pca
from njab.sklearn.scoring import (
ConfusionMatrix,
get_lr_multiplicative_decomposition,
get_pred,
get_score,
get_target_count_per_bin,
)
from njab.sklearn.types import Splits
logger = logging.getLogger("njab")
logger.setLevel(logging.INFO)
njab.pandas.set_pandas_options()
pd.options.display.min_rows = 10
pd.options.display.max_columns = 20
njab.plotting.set_font_sizes("x-small")
seaborn.set_style("whitegrid")
njab.plotting.set_font_sizes(8)
</code>
## Set parameters
<code>
CLINIC: str = (
"https://raw.githubusercontent.com/RasmussenLab/njab/HEAD/docs/tutorial/data/alzheimer/clinic_ml.csv" # clincial data
)
fname_omics: str = (
"https://raw.githubusercontent.com/RasmussenLab/njab/HEAD/docs/tutorial/data/alzheimer/proteome.csv" # omics data
)
TARGET: str = "AD" # target column in CLINIC dataset (binary)
TARGET_LABEL: Optional[str] = None # optional: rename target variable
n_features_max: int = 5
freq_cutoff: float = 0.5 # Omics cutoff for sample completeness
VAL_IDS: str = "" #
VAL_IDS_query: str = ""
weights: bool = True
FOLDER = "alzheimer"
model_name = "all"
</code>
## Setup
### Load data
<code>
clinic = pd.read_csv(CLINIC, index_col=0).convert_dtypes()
cols_clinic = njab.pandas.get_colums_accessor(clinic)
omics = pd.read_csv(fname_omics, index_col=0)
</code>
Data shapes
<code>
omics.shape, clinic.shape
</code>
See how common the omics features are and remove features below the chosen frequency cutoff
<code>
ax = omics.notna().sum().sort_values().plot(rot=45)
</code>
<code>
M_before = omics.shape[1]
omics = omics.dropna(thresh=int(len(omics) * freq_cutoff), axis=1)
M_after = omics.shape[1]
msg = (
f"Removed {M_before-M_after} features with more than {freq_cutoff*100}% missing values."
f"\nRemaining features: {M_after} (of {M_before})"
)
print(msg)
# keep a map of all proteins in protein group, but only display first protein
# proteins are unique to protein groups
pg_map = {k: k.split(";")[0] for k in omics.columns}
omics = omics.rename(columns=pg_map)
# log2 transform raw intensity data:
omics = np.log2(omics + 1)
omics
</code>
## Clinical data
View clinical data
<code>
clinic
</code>
## Target
Tabulate target and check for missing values
<code>
njab.pandas.value_counts_with_margins(clinic[TARGET])
</code>
<code>
target_counts = clinic[TARGET].value_counts()
if target_counts.sum() < len(clinic):
print(
"Target has missing values."
f" Can only use {target_counts.sum()} of {len(clinic)} samples."
)
mask = clinic[TARGET].notna()
clinic, omics = clinic.loc[mask], omics.loc[mask]
</code>
<code>
if TARGET_LABEL is None:
TARGET_LABEL = TARGET
y = clinic[TARGET].rename(TARGET_LABEL).astype(int)
clinic_for_ml = clinic.drop(TARGET, axis=1)
</code>
## Test IDs
Select some test samples:
<code>
olink_val, clinic_val = None, None
if not VAL_IDS:
if VAL_IDS_query:
logging.warning(f"Querying index using: {VAL_IDS_query}")
VAL_IDS = clinic.filter(like=VAL_IDS_query, axis=0).index.to_list()
logging.warning(f"Found {len(VAL_IDS)} Test-IDs")
else:
logging.warning("Create train and test split.")
_, VAL_IDS = sklearn.model_selection.train_test_split(
clinic.index, test_size=0.2, random_state=123, stratify=clinic[TARGET]
)
VAL_IDS = list(VAL_IDS)
elif isinstance(VAL_IDS, str):
VAL_IDS = VAL_IDS.split(",")
else:
raise ValueError("Provide IDs in csv format as str: 'ID1,ID2'")
VAL_IDS
</code>
## Combine clinical and omics data
<code>
# in case you need to subselect
feat_to_consider = clinic_for_ml.columns.to_list()
feat_to_consider += omics.columns.to_list()
feat_to_consider
</code>
View data for training
<code>
X = clinic_for_ml.join(omics)[feat_to_consider]
X
</code>
## Data Splits
Separate train and test split
<code>
TRAIN_LABEL = "train"
TEST_LABEL = "test"
if VAL_IDS:
diff = pd.Index(VAL_IDS)
VAL_IDS = X.index.intersection(VAL_IDS)
    if len(VAL_IDS) < len(diff):
        logging.warning(
            "Some requested validation IDs are not in the data: "
            + ", ".join(str(x) for x in diff.difference(VAL_IDS))
        )
X_val = X.loc[VAL_IDS]
X = X.drop(VAL_IDS)
use_val_split = True
y_val = y.loc[VAL_IDS]
y = y.drop(VAL_IDS)
</code>
## Output folder
<code>
FOLDER = Path(FOLDER)
FOLDER.mkdir(exist_ok=True, parents=True)
print(f"Output folder: {FOLDER}")
</code>
### Outputs
Save outputs to excel file:
<code>
# out
files_out = {}
fname = FOLDER / "log_reg.xlsx"
files_out[fname.stem] = fname
writer = pd.ExcelWriter(fname)
print(f"Excel-file for tables: {fname}")
</code>
## Collect test predictions
<code>
predictions = y_val.to_frame("true")
</code>
## Fill missing values with training median
<code>
feat_w_missings = X.isna().sum()
feat_w_missings = feat_w_missings.loc[feat_w_missings > 0]
feat_w_missings
</code>
<code>
row_w_missing = X.isna().sum(axis=1).astype(bool)
col_w_missing = X.isna().sum(axis=0).astype(bool)
X.loc[row_w_missing, col_w_missing]
</code>
Impute using median of training data
<code>
median_imputer = sklearn.impute.SimpleImputer(strategy="median")
X = njab.sklearn.transform_DataFrame(X, median_imputer.fit_transform)
X_val = njab.sklearn.transform_DataFrame(X_val, median_imputer.transform)
assert X.isna().sum().sum() == 0
X.shape, X_val.shape
</code>
## Principal Components
on standard normalized training data:
<code>
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
PCs, pca = njab_pca.run_pca(X_scaled, n_components=None)
files_out["var_explained_by_PCs.pdf"] = FOLDER / "var_explained_by_PCs.pdf"
ax = njab_pca.plot_explained_variance(pca)
ax.locator_params(axis="x", integer=True)
njab.plotting.savefig(ax.get_figure(), files_out["var_explained_by_PCs.pdf"])
X_scaled.shape
</code>
Plot the first 5 PCs with the binary target label annotating each sample:
<code>
files_out["scatter_first_5PCs.pdf"] = FOLDER / "scatter_first_5PCs.pdf"
fig, axes = plt.subplots(5, 2, figsize=(6, 8), layout="constrained")
PCs.columns = [s.replace("principal component", "PC") for s in PCs.columns]
PCs = PCs.join(y.astype("category"))
up_to = min(PCs.shape[-1], 5)
# https://github.com/matplotlib/matplotlib/issues/25538
# colab: old pandas version and too new matplotlib version (2023-11-6)
for (i, j), ax in zip(itertools.combinations(range(up_to), 2), axes.flatten()):
PCs.plot.scatter(i, j, c=TARGET_LABEL, cmap="Paired", ax=ax)
_ = PCs.pop(TARGET_LABEL)
njab.plotting.savefig(fig, files_out["scatter_first_5PCs.pdf"])
</code>
## UMAP
of training data:
<code>
reducer = umap.UMAP()
embedding = reducer.fit_transform(X_scaled)
files_out["umap.pdf"] = FOLDER / "umap.pdf"
embedding = pd.DataFrame(
embedding, index=X_scaled.index, columns=["UMAP 1", "UMAP 2"]
).join(y.astype("category"))
ax = embedding.plot.scatter("UMAP 1", "UMAP 2", c=TARGET_LABEL, cmap="Paired")
njab.plotting.savefig(ax.get_figure(), files_out["umap.pdf"])
</code>
## Baseline Model - Logistic Regression
Based on parameters, use weighting:
<code>
if weights:
weights = "balanced"
cutoff = 0.5
else:
cutoff = None
weights = None
</code>
## Logistic Regression
Procedure:
1. Select best set of features from entire feature set selected using CV on train split
2. Retrain best model configuration using entire train split and evalute on test split
Define splits and models:
<code>
splits = Splits(
X_train=X_scaled, X_test=scaler.transform(X_val), y_train=y, y_test=y_val
)
model = sklearn.linear_model.LogisticRegression(penalty="l2", class_weight=weights)
</code>
<code>
scoring = [
"precision",
"recall",
"f1",
"balanced_accuracy",
"roc_auc",
"average_precision",
]
scoring = {k: k for k in scoring}
# do not average log loss for AIC and BIC calculations
scoring["log_loss"] = make_scorer(log_loss, greater_is_better=True, normalize=False)
cv_feat = njab.sklearn.find_n_best_features(
X=splits.X_train,
y=splits.y_train,
model=model,
name=TARGET_LABEL,
groups=splits.y_train,
n_features_max=n_features_max,
scoring=scoring,
return_train_score=True,
# fit_params=dict(sample_weight=weights)
)
cv_feat = cv_feat.drop("test_case", axis=1).groupby("n_features").agg(["mean", "std"])
cv_feat
</code>
Add AIC and BIC for model selection
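For reference, the standard definitions of the two criteria, with $k$ the number of features, $n$ the number of samples and $\hat{L}$ the maximized likelihood, are

$$ \mathrm{AIC} = 2k - 2\ln\hat{L}, \qquad \mathrm{BIC} = k\ln(n) - 2\ln\hat{L}. $$

The cell below reports negated values based on the cross-validated log loss, so that, like the other metrics, larger is better.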
<code>
# AIC and BIC on train and test data, negated so that bigger is better
IC_criteria = pd.DataFrame()
N_split = {
"train": round(len(splits.X_train) * 0.8),
"test": round(len(splits.X_train) * 0.2),
}
for _split in ("train", "test"):
IC_criteria[(f"{_split}_neg_AIC", "mean")] = -(
2 * cv_feat.index.to_series() - 2 * cv_feat[(f"{_split}_log_loss", "mean")]
)
IC_criteria[(f"{_split}_neg_BIC", "mean")] = -(
cv_feat.index.to_series() * np.log(N_split[_split])
- 2 * cv_feat[(f"{_split}_log_loss", "mean")]
)
IC_criteria.columns = pd.MultiIndex.from_tuples(IC_criteria.columns)
IC_criteria
</code>
All cross-validation metrics:
<code>
cv_feat = cv_feat.join(IC_criteria)
cv_feat = cv_feat.filter(regex="train|test", axis=1).style.highlight_max(
axis=0, subset=pd.IndexSlice[:, pd.IndexSlice[:, "mean"]]
)
cv_feat
</code>
Save:
<code>
cv_feat.to_excel(writer, sheet_name="CV", float_format="%.3f")
cv_feat = cv_feat.data
</code>
Optimal number of features to use based on cross-validation by metric:
<code>
mask = cv_feat.columns.levels[0].str[:4] == "test"
scores_cols = cv_feat.columns.levels[0][mask]
n_feat_best = cv_feat.loc[:, pd.IndexSlice[scores_cols, "mean"]].idxmax()
n_feat_best.name = "best"
n_feat_best.to_excel(writer, sheet_name="n_feat_best")
n_feat_best
</code>
Retrain model with best number of features by selected metric:
<code>
results_model = njab.sklearn.run_model(
model=model,
splits=splits,
n_feat_to_select=n_feat_best.loc["test_roc_auc", "mean"],
)
results_model.name = model_name
</code>
## Receiver Operating Curve of final model
<code>
ax = plot_auc(
results_model, label_train=TRAIN_LABEL, label_test=TEST_LABEL, figsize=(4, 2)
)
files_out["ROAUC"] = FOLDER / "plot_roauc.pdf"
njab.plotting.savefig(ax.get_figure(), files_out["ROAUC"])
</code>
## Precision-Recall Curve for final model
<code>
ax = plot_prc(
results_model, label_train=TRAIN_LABEL, label_test=TEST_LABEL, figsize=(4, 2)
)
files_out["PRAUC"] = FOLDER / "plot_prauc.pdf"
njab.plotting.savefig(ax.get_figure(), files_out["PRAUC"])
</code>
## Coefficients with/out std. errors
<code>
pd.DataFrame(
{
"coef": results_model.model.coef_.flatten(),
"name": results_model.model.feature_names_in_,
}
)
</code>
<code>
results_model.model.intercept_
</code>
## Selected Features
<code>
des_selected_feat = splits.X_train[results_model.selected_features].describe()
des_selected_feat.to_excel(writer, sheet_name="sel_feat", float_format="%.3f")
des_selected_feat
</code>
### Heatmap of correlations
<code>
fig = plt.figure(figsize=(6, 6))
files_out["corr_plot_train.pdf"] = FOLDER / "corr_plot_train.pdf"
_ = corrplot(X[results_model.selected_features].join(y).corr(), size_scale=300)
njab.plotting.savefig(fig, files_out["corr_plot_train.pdf"])
</code>
## Plot training data scores
<code>
N_BINS = 20
score = get_score(
clf=results_model.model, X=splits.X_train[results_model.selected_features], pos=1
)
ax = score.hist(bins=N_BINS)
files_out["hist_score_train.pdf"] = FOLDER / "hist_score_train.pdf"
njab.plotting.savefig(ax.get_figure(), files_out["hist_score_train.pdf"])
pred_bins = get_target_count_per_bin(score, y, n_bins=N_BINS)
ax = pred_bins.plot(kind="bar", ylabel="count")
files_out["hist_score_train_target.pdf"] = FOLDER / "hist_score_train_target.pdf"
njab.plotting.savefig(ax.get_figure(), files_out["hist_score_train_target.pdf"])
# pred_bins
</code>
## Test data scores
<code>
score_val = get_score(
clf=results_model.model, X=splits.X_test[results_model.selected_features], pos=1
)
predictions["score"] = score_val
ax = score_val.hist(bins=N_BINS) # list(x/N_BINS for x in range(0,N_BINS)))
ax.set_ylabel("count")
ax.set_xlim(0, 1)
files_out["hist_score_test.pdf"] = FOLDER / "hist_score_test.pdf"
njab.plotting.savefig(ax.get_figure(), files_out["hist_score_test.pdf"])
pred_bins_val = get_target_count_per_bin(score_val, y_val, n_bins=N_BINS)
ax = pred_bins_val.plot(kind="bar", ylabel="count")
ax.locator_params(axis="y", integer=True)
files_out["hist_score_test_target.pdf"] = FOLDER / "hist_score_test_target.pdf"
njab.plotting.savefig(ax.get_figure(), files_out["hist_score_test_target.pdf"])
# pred_bins_val
</code>
## Performance evaluations
Check if the cutoff can be adapted to maximize the F1 score
between precision and recall:
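For reference, the F1 score being maximized is the harmonic mean of precision and recall:

$$ F_1 = \frac{2 \cdot \mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}} $$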
<code>
prc = pd.DataFrame(results_model.train.prc, index="precision recall cutoffs".split())
prc
</code>
<code>
prc.loc["f1_score"] = (
2
* (prc.loc["precision"] * prc.loc["recall"])
/ (1 / prc.loc["precision"] + 1 / prc.loc["recall"])
)
f1_max = prc[prc.loc["f1_score"].argmax()]
f1_max
</code>
Cutoff set
<code>
cutoff = float(f1_max.loc["cutoffs"])
cutoff
</code>
<code>
y_pred_val = njab.sklearn.scoring.get_custom_pred(
clf=results_model.model,
X=splits.X_test[results_model.selected_features],
cutoff=cutoff,
)
predictions[model_name] = y_pred_val
predictions["dead"] = y_val
_ = ConfusionMatrix(y_val, y_pred_val).as_dataframe()
_.columns = pd.MultiIndex.from_tuples(
[(t[0] + f" - {cutoff:.3f}", t[1]) for t in _.columns]
)
_.to_excel(writer, sheet_name="CM_test_cutoff_adapted")
_
</code>
<code>
y_pred_val = get_pred(
clf=results_model.model, X=splits.X_test[results_model.selected_features]
)
predictions[model_name] = y_pred_val
predictions["dead"] = y_val
_ = ConfusionMatrix(y_val, y_pred_val).as_dataframe()
_.columns = pd.MultiIndex.from_tuples([(t[0] + f" - {0.5}", t[1]) for t in _.columns])
_.to_excel(writer, sheet_name="CM_test_cutoff_0.5")
_
</code>
## Multiplicative decomposition
Decompose the model into its components for both splits:
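For a logistic regression model the predicted odds factor multiplicatively, which is what the tables below list per sample (a standard property of the model, stated here for reference):

$$ \mathrm{odds} = \frac{p}{1-p} = \exp(\beta_0)\prod_i \exp(\beta_i x_i), \qquad p = \frac{\mathrm{odds}}{1+\mathrm{odds}}. $$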
<code>
components = get_lr_multiplicative_decomposition(
results=results_model, X=splits.X_train, prob=score, y=y
)
components.to_excel(writer, sheet_name="decomp_multiplicative_train")
components.to_excel(
writer, sheet_name="decomp_multiplicative_train_view", float_format="%.5f"
)
components.head(10)
</code>
<code>
components_test = get_lr_multiplicative_decomposition(
results=results_model, X=splits.X_test, prob=score_val, y=y_val
)
components_test.to_excel(writer, sheet_name="decomp_multiplicative_test")
components_test.to_excel(
writer, sheet_name="decomp_multiplicative_test_view", float_format="%.5f"
)
components_test.head(10)
</code>
## Plot TP, TN, FP and FN on UMAP and PCA plots
### UMAP
<code>
reducer = umap.UMAP(random_state=42)
# Note: UMAP needs at least two features, so the embedding is skipped if only one feature was selected.
M_sel = len(results_model.selected_features)
if M_sel > 1:
embedding = reducer.fit_transform(X_scaled[results_model.selected_features])
embedding = pd.DataFrame(
embedding,
index=X_scaled.index,
columns=["UMAP dimension 1", "UMAP dimension 2"],
).join(y.astype("category"))
display(embedding.head(3))
else:
embedding = None
</code>
Annotate using target variable and predictions:
<code>
predictions["label"] = predictions.apply(
lambda x: njab.sklearn.scoring.get_label_binary_classification(
x["true"], x[model_name]
),
axis=1,
)
mask = predictions[["true", model_name]].sum(axis=1).astype(bool)
predictions.loc[mask].sort_values("score", ascending=False)
</code>
<code>
X_val_scaled = scaler.transform(X_val)
if embedding is not None:
embedding_val = pd.DataFrame(
reducer.transform(X_val_scaled[results_model.selected_features]),
index=X_val_scaled.index,
columns=["UMAP dimension 1", "UMAP dimension 2"],
)
embedding_val.sample(3)
</code>
<code>
pred_train = (
y.to_frame("true")
# .join(get_score(clf=results_model.model, X=splits.X_train[results_model.selected_features], pos=1))
.join(score.rename("score")).join(
get_pred(
results_model.model, splits.X_train[results_model.selected_features]
).rename(model_name)
)
)
pred_train["label"] = pred_train.apply(
lambda x: njab.sklearn.scoring.get_label_binary_classification(
x["true"], x[model_name]
),
axis=1,
)
pred_train.sample(5)
</code>
<code>
colors = seaborn.color_palette(n_colors=4)
colors
</code>
<code>
if embedding is not None:
fig, axes = plt.subplots(1, 2, figsize=(8, 4), sharex=True, sharey=True)
for _embedding, ax, _title, _model_pred_label in zip(
[embedding, embedding_val],
axes,
[TRAIN_LABEL, TEST_LABEL],
[pred_train["label"], predictions["label"]],
): # noqa: E129
ax = seaborn.scatterplot(
x=_embedding.iloc[:, 0],
y=_embedding.iloc[:, 1],
hue=_model_pred_label,
hue_order=["TN", "TP", "FN", "FP"],
palette=[colors[0], colors[2], colors[1], colors[3]],
ax=ax,
)
ax.set_title(_title)
# files_out['pred_pca_labeled'] = FOLDER / 'pred_pca_labeled.pdf'
# njab.plotting.savefig(fig, files_out['pred_pca_labeled'])
files_out["umap_sel_feat.pdf"] = FOLDER / "umap_sel_feat.pdf"
njab.plotting.savefig(ax.get_figure(), files_out["umap_sel_feat.pdf"])
</code>
### Interactive UMAP plot
> Not displayed in online documentation
<code>
if embedding is not None:
embedding = embedding.join(X[results_model.selected_features])
embedding_val = embedding_val.join(X_val[results_model.selected_features])
embedding["label"], embedding_val["label"] = (
pred_train["label"],
predictions["label"],
)
embedding["group"], embedding_val["group"] = TRAIN_LABEL, TEST_LABEL
combined_embeddings = pd.concat([embedding, embedding_val])
combined_embeddings.index.name = "ID"
</code>
<code>
if embedding is not None:
cols = combined_embeddings.columns
TEMPLATE = "none"
defaults = dict(width=800, height=400, template=TEMPLATE)
fig = px.scatter(
combined_embeddings.round(3).reset_index(),
x=cols[0],
y=cols[1],
color="label",
facet_col="group",
hover_data=["ID"] + results_model.selected_features,
**defaults,
)
fig.for_each_annotation(lambda a: a.update(text=a.text.split("=")[1]))
fname = FOLDER / "umap_sel_feat.html"
files_out[fname.name] = fname
fig.write_html(fname)
print(fname)
display(fig)
</code>
### PCA
<code>
PCs_train, pca = njab_pca.run_pca(
X_scaled[results_model.selected_features], n_components=None
)
ax = njab_pca.plot_explained_variance(pca)
ax.locator_params(axis="x", integer=True)
fname = FOLDER / "feat_sel_PCA_var_explained_by_PCs.pdf"
files_out[fname.name] = fname
njab.plotting.savefig(ax.get_figure(), fname)
</code>
Applied to the test split:
<code>
PCs_val = pca.transform(X_val_scaled[results_model.selected_features])
PCs_val = pd.DataFrame(PCs_val, index=X_val_scaled.index, columns=PCs_train.columns)
PCs_val
</code>
<code>
if M_sel > 1:
fig, axes = plt.subplots(1, 2, figsize=(6, 3), sharex=True, sharey=True)
for _embedding, ax, _title, _model_pred_label in zip(
[PCs_train, PCs_val],
axes,
[TRAIN_LABEL, TEST_LABEL],
[pred_train["label"], predictions["label"]],
): # noqa: E129
ax = seaborn.scatterplot(
x=_embedding.iloc[:, 0],
y=_embedding.iloc[:, 1],
hue=_model_pred_label,
hue_order=["TN", "TP", "FN", "FP"],
palette=[colors[0], colors[2], colors[1], colors[3]],
ax=ax,
)
ax.set_title(_title)
fname = FOLDER / "pca_sel_feat.pdf"
files_out[fname.name] = fname
njab.plotting.savefig(ax.get_figure(), fname)
</code>
<code>
if M_sel > 1:
max_rows = min(3, len(results_model.selected_features))
fig, axes = plt.subplots(
max_rows, 2, figsize=(6, 8), sharex=False, sharey=False, layout="constrained"
)
for axes_col, (_embedding, _title, _model_pred_label) in enumerate(
zip(
[PCs_train, PCs_val],
[TRAIN_LABEL, TEST_LABEL],
[pred_train["label"], predictions["label"]],
)
):
_row = 0
axes[_row, axes_col].set_title(_title)
for i, j in itertools.combinations(range(max_rows), 2):
ax = seaborn.scatterplot(
x=_embedding.iloc[:, i],
y=_embedding.iloc[:, j],
hue=_model_pred_label,
hue_order=["TN", "TP", "FN", "FP"],
palette=[colors[0], colors[2], colors[1], colors[3]],
ax=axes[_row, axes_col],
)
_row += 1
fname = FOLDER / f"pca_sel_feat_up_to_{max_rows}.pdf"
files_out[fname.name] = fname
njab.plotting.savefig(ax.get_figure(), fname)
</code>
### Features
- top 3 scaled selected features, up to n_features_max (scatter)
- or the unscaled single feature (swarmplot)
<code>
if M_sel > 1:
max_rows = min(3, len(results_model.selected_features))
fig, axes = plt.subplots(
max_rows, 2, figsize=(6, 8), sharex=False, sharey=False, layout="constrained"
)
for axes_col, (_embedding, _title, _model_pred_label) in enumerate(
zip(
[
X_scaled[results_model.selected_features],
X_val_scaled[results_model.selected_features],
],
[TRAIN_LABEL, TEST_LABEL],
[pred_train["label"], predictions["label"]],
)
):
_row = 0
axes[_row, axes_col].set_title(_title)
for i, j in itertools.combinations(range(max_rows), 2):
ax = seaborn.scatterplot(
x=_embedding.iloc[:, i],
y=_embedding.iloc[:, j],
hue=_model_pred_label,
hue_order=["TN", "TP", "FN", "FP"],
palette=[colors[0], colors[2], colors[1], colors[3]],
ax=axes[_row, axes_col],
)
_row += 1
fname = FOLDER / f"sel_feat_up_to_{max_rows}.pdf"
files_out[fname.name] = fname
njab.plotting.savefig(ax.get_figure(), fname)
else:
fig, axes = plt.subplots(1, 1, figsize=(6, 2), layout="constrained")
single_feature = results_model.selected_features[0]
data = pd.concat(
[
X[single_feature]
.to_frame()
.join(pred_train["label"])
.assign(group=TRAIN_LABEL),
X_val[single_feature]
.to_frame()
.join(predictions["label"])
.assign(group=TEST_LABEL),
]
)
ax = seaborn.swarmplot(data=data, x="group", y=single_feature, hue="label", ax=axes)
fname = FOLDER / f"sel_feat_{single_feature}.pdf"
files_out[fname.name] = fname
njab.plotting.savefig(ax.get_figure(), fname)
</code>
## Save annotation of errors for manual analysis
Saved to excel table.
<code>
X[results_model.selected_features].join(pred_train).to_excel(
writer, sheet_name="pred_train_annotated", float_format="%.3f"
)
X_val[results_model.selected_features].join(predictions).to_excel(
writer, sheet_name="pred_test_annotated", float_format="%.3f"
)
</code>
## Outputs
<code>
writer.close()
files_out
</code>
|
{
"filename": "log_reg_1.ipynb",
"repository": "RasmussenLab/njab",
"query": "transformed_from_existing",
"size": 50499,
"sha": ""
}
|
# index.ipynb
Repository: yoavram/SciComPy
# Scientific Computing with Python
## Yoav Ram
## [scicompy.yoavram.com](http://scicompy.yoavram.com)
## Tutorials
- [Python](notebooks/python.ipynb)
- [NumPy](notebooks/numpy.ipynb)
- [Matplotlib](notebooks/matplotlib.ipynb)
## Lectures
1. [Pandas & Seaborn](notebooks/pandas-seaborn.ipynb)
1. [Statistical Inference](notebooks/statistics.ipynb)
1. [Bayesian Inference](notebooks/bayesian.ipynb)
1. [Generalized Linear Models 1: linear model](notebooks/linear-model.ipynb)
1. [Generalized Linear Models 2: logistic model](notebooks/logistic-model.ipynb)
1. [Population Genetics](notebooks/population-genetics.ipynb)
1. [Population Dynamics 1: Growth](notebooks/population-growth.ipynb)
1. [Population Dynamics 2: Interaction](notebooks/lotka-volterra.ipynb)
1. [Population Dynamics 3: Stochastic](notebooks/gillespie.ipynb)
1. [Approximate Bayesian Computation](notebooks/ABC.ipynb)
1. [Feed Forward Neural networks](notebooks/FFN.ipynb) | [Softmax Model](notebooks/softmax-model.ipynb)
1. [Density Estimation](notebooks/density-estimation.ipynb)
## Jupyter help
- Use autocompletion by pressing `Tab`.
- In the middle of a word it will try to finish the variable name.
- Just after a dot (`.`) it will try to bring up a menu of methods and attributes; the variable before the dot must already be defined.
- Use documentation by pressing `Shift+Tab`; this is especially useful inside a function's parentheses as it will show the function arguments, but it can be used anywhere. Again, variables must already be defined.
## Terminal
To open a terminal inside Jupyter, choose `File->New...->Terminal` in the top menu.
## Update git
- Open terminal
- Change directory to the repository (`library` on Azure)
- Update to latest version by running `git pull`
- Note: if any files were changed you would have to discard the changes using `git checkout -- <filename>` or [stash](https://www.git-scm.com/docs/git-stash) them
|
{
"filename": "index.ipynb",
"repository": "yoavram/SciComPy",
"query": "transformed_from_existing",
"size": 3434,
"sha": ""
}
|
# Metabolomics_Shannon.ipynb
Repository: PriceLab/ShannonMets
<code>
# Run order - 1
# Needed input files: 'second_genome_2.csv', 'data_discovery.csv'
# Generated output files: '_40_coefs.csv', 'top_11_mets.csv', 'coeff_validation.csv'
</code>
<code>
# Load libraries
from sklearn.preprocessing import StandardScaler
import scipy.stats as stats
import matplotlib.pyplot as plt
from sklearn import model_selection
import seaborn as sns
from string import ascii_letters
import numpy as np
import pandas as pd
import sklearn
from sklearn.model_selection import train_test_split
from sklearn.linear_model import RidgeCV,LassoCV
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score
from sklearn.model_selection import KFold
import warnings
warnings.filterwarnings("ignore")
%matplotlib inline
</code>
<code>
#Importing Data
second_genome=pd.read_csv('second_genome_2.csv',
index_col = 'public_client_id')
second_genome.index=second_genome.index.astype('float64')
print (second_genome.shape)
discovery_mets=pd.read_csv('data_discovery.csv',
index_col = 'public_client_id')
discovery_mets.index=discovery_mets.index.astype('float64')
print (discovery_mets.shape)
</code>
# Discovery Cohort Analysis
<code>
#Scale and standardize metabolites
X = discovery_mets[discovery_mets.columns[0:659]]
y = (discovery_mets['shannon'])
scaler = StandardScaler(copy=True, with_mean=True, with_std=True)
Xcolumns=X.columns
X = scaler.fit_transform(X)
X=pd.DataFrame(data=X,columns=Xcolumns)
print (X.shape)
</code>
<code>
## run cross_val_score on ridge and lasso to get out-of-sample R2 scores across 10-CV
#defining L2 parameters to be tested
alphas = np.linspace(1,1000,200)
#Defining LASSO and Ridge parameters
lassocv=LassoCV(eps=0.175, n_alphas=200, alphas=None, fit_intercept=True, normalize=False, precompute='auto', cv=10)
ridgecv=RidgeCV(alphas=alphas,fit_intercept=True,normalize=False,cv=10)
#Running 10-fold CV score function to get mean out-of-sample R2
discovery_score=cross_val_score(lassocv,X,y,cv=10)
print ('mean out-of-sample R2 LASSO',np.mean(discovery_score))
discovery_score_ridge=cross_val_score(ridgecv,X,y,cv=10)
print ('mean out-of-sample R2 Ridge',np.mean(discovery_score_ridge))
</code>
<code>
#Run Cross-validation and extract Beta_coefficients for each model
#Save predictions from each test set
lassocv=LassoCV(eps=0.175, n_alphas=200, alphas=None, fit_intercept=True, normalize=False, precompute='auto', cv=10)
y=discovery_mets['shannon']
y=y.reset_index()
y.drop(['public_client_id'],1,inplace=True)
X_folds = np.array_split(X, 10)
y_folds = np.array_split(y, 10)
coefficients=pd.DataFrame(index=X.columns).astype('float64')
predictions=[]
alphas= []
score= []
for k in range(10):
X_train = list(X_folds)
X_test = X_train.pop(k)
X_train = np.concatenate(X_train)
y_train = list(y_folds)
y_test = y_train.pop(k)
y_test=[ x[0] for x in list(y_test.values)]
y_train = np.concatenate(y_train)
lassocv.fit(X_train, y_train)
predictions.append(lassocv.predict(X_test).flatten())
coef=list(lassocv.coef_)
coefficients[k]=coef
alphas.append(lassocv.alpha_)
score.append(r2_score(y_test,lassocv.predict(X_test)))
#The L1 penalty for each model
print (alphas)
predictions_lasso=[item for sublist in predictions for item in sublist]
#Checking r2 score and pearson r
print ('mean R2 Score LASSO',np.mean(score))
print ('std. deviation for R2 Score',np.std(score))
print ('S.E.M',np.std(score)/np.sqrt(10))
print ('observed v predicted pearson r',stats.pearsonr(discovery_mets['shannon'],predictions_lasso))
</code>
<code>
#Identifying all metabolites with non-zero Beta Coefficients for figures 1B&C
for x in coefficients.index.tolist():
if (coefficients.loc[x] == 0.0).sum()==10:
coefficients.drop([x],inplace=True)
print (coefficients.shape)
#calculating mean beta-coefficient for each metabolite and counting no. of times each metabolite had a 0 beta-coefficient.
means=[]
std=[]
zeroes=[]
for x in coefficients.index.tolist():
means.append((np.mean(coefficients.loc[x])))
std.append((np.std(coefficients.loc[x])))
zeroes.append((coefficients.loc[x] == 0.0).astype(int).sum())
coefficients['mean']=means
coefficients['std_dev']=std
coefficients['zeroes']=zeroes
#save table as csv
coefficients.to_csv('_40_coefs.csv')
coefficients.sort_values(by='mean',ascending=False).head()
</code>
<code>
#save top 11 metabolites to csv for classification analysis
coefficients[coefficients['zeroes']==0].to_csv('top_11_mets.csv')
</code>
# PD whole tree and Chao1 predictions
<code>
#Using all metabolites to predict PD whole tree and Chao1
lassocv=LassoCV(eps=0.175, n_alphas=200, alphas=None, fit_intercept=True, normalize=False, precompute='auto', cv=10)
discovery_PD_all=cross_val_score(lassocv,X,discovery_mets['PD_whole_tree'],cv=10)
print ('PD whole tree all 659 mets mean out-of-sample R2',np.mean(discovery_PD_all))
discovery_Chao_all=cross_val_score(lassocv,X,discovery_mets['chao1'],cv=10)
print ('Chao1 all 659 mets mean out-of-sample R2',np.mean(discovery_Chao_all))
</code>
<code>
#Testing prediction of other diversity metrics using just the 40 mets identified.
W=pd.DataFrame()
for x in discovery_mets.columns.tolist():
if x in coefficients.index.tolist():
W[x]=X[x]
print (W.shape)
lassocv=LassoCV(eps=0.01, n_alphas=200, alphas=None, fit_intercept=True, normalize=False, precompute='auto', cv=10)
discovery_PD40_=cross_val_score(lassocv,W,discovery_mets['PD_whole_tree'],cv=10)
print ('PD whole tree 40 mets mean out-of-sample R2',np.mean(discovery_PD40_))
discovery_Chao40_=cross_val_score(lassocv,W,discovery_mets['chao1'],cv=10)
print ('Chao1 all 40 mean out-of-sample R2',np.mean(discovery_Chao40_))
</code>
# Validation Cohort Analysis
<code>
#Metabolomics Validation
#Scaling and standardizing the validation cohort
y_validation = (second_genome['shannon'])
vendor = second_genome[second_genome.columns[0:659]]
scaler = StandardScaler(copy=True, with_mean=True, with_std=True)
Xcolumns=vendor.columns
Xindex=vendor.index
vendor = scaler.fit_transform(vendor)
vendor=pd.DataFrame(data=vendor,columns=Xcolumns)
</code>
<code>
#Run LASSO using all 659 Mets
## run cross_val_score on ridge and lasso to get out-of-sample R2 scores across 10-CV
lassocv=LassoCV(eps=0.175, n_alphas=200, alphas=None, fit_intercept=True, normalize=False, precompute='auto', cv=10)
validation_score=cross_val_score(lassocv,vendor,y_validation,cv=10)
print ('mean out-of-sample R2 LASSO',np.mean(validation_score))
print ('mean out-of-sample STD LASSO',np.std(validation_score))
</code>
<code>
#Predict shannon using just the 40 metabolites identified in the discovery cohort
for x in vendor.columns.tolist():
if x not in coefficients.index.tolist():
vendor.drop([x],1,inplace=True)
print (vendor.shape)
lassocv=LassoCV(eps=0.05, n_alphas=200, alphas=None, fit_intercept=True, normalize=False, precompute='auto', cv=10)
validation40_score=cross_val_score(lassocv,vendor,y_validation,cv=10)
print ('mean out-of-sample R2 LASSO 40 mets validation',np.mean(validation40_score))
print ('std deviation of R2 score',np.std(validation40_score))
</code>
<code>
#Assessing whether performance is significantly different across the 10-CVs between whole metabolome model and the 40 metabolite model
print ('ttest 40 mets vs. 659 mets',stats.ttest_ind(validation_score,validation40_score))
</code>
<code>
#Extracting Beta coefficients from 10-fold cv using only 40 mets in the validation set
lassocv=LassoCV(eps=0.05, n_alphas=200, alphas=None, fit_intercept=True, normalize=False, precompute='auto', cv=10)
y=y_validation
y=y.reset_index()
y.drop(['public_client_id'],1,inplace=True)
from sklearn.model_selection import KFold
X_folds = np.array_split(vendor, 10)
y_folds = np.array_split(y, 10)
coefficients_validation=pd.DataFrame(index=vendor.columns).astype('float64')
predictions_validation=[]
alphas= []
score_validation= []
for k in range(10):
X_train = list(X_folds)
X_test = X_train.pop(k)
X_train = np.concatenate(X_train)
y_train = list(y_folds)
y_test = y_train.pop(k)
y_test=[ x[0] for x in list(y_test.values)]
y_train = np.concatenate(y_train)
lassocv.fit(X_train, y_train)
predictions_validation.append(lassocv.predict(X_test).flatten())
coef=list(lassocv.coef_)
coefficients_validation[k]=coef
alphas.append(lassocv.alpha_)
score_validation.append(r2_score(y_test,lassocv.predict(X_test)))
print (lassocv.alpha_)
print (alphas)
predictions_validation=[item for sublist in predictions_validation for item in sublist]
#Identifying all metabolites with non-zero Beta Coefficients
means=[]
std=[]
zeroes=[]
for x in coefficients_validation.index.tolist():
means.append((np.mean(coefficients_validation.loc[x])))
std.append((np.std(coefficients_validation.loc[x])))
zeroes.append((coefficients_validation.loc[x] == 0).astype(int).sum())
coefficients_validation['mean']=means
coefficients_validation['std_dev']=std
coefficients_validation['zeroes']=zeroes
coefficients_validation.sort_values(by='mean')
coefficients_validation.to_csv('coeff_validation.csv')
</code>
<code>
#comparing beta coefficients across discovery and validation sets (Figure 6B)
top_11_=coefficients[coefficients['zeroes']==0].index.tolist()
correlating_coef=pd.DataFrame(index=top_11_)
correlating_coef['mean_discovery']=coefficients['mean']
correlating_coef['mean_validation']=coefficients_validation['mean']
correlating_coef['std_validation']=coefficients_validation['std_dev']
correlating_coef['std_discovery']=coefficients['std_dev']
#running pearson and spearman on the mean model beta-coefficients across cohorts
spearman=stats.spearmanr(correlating_coef['mean_discovery'],correlating_coef['mean_validation'])
print ('spearman rho=',spearman)
pearson=stats.pearsonr(correlating_coef['mean_discovery'],correlating_coef['mean_validation'])
print ('Pearson R=',pearson)
</code>
|
{
"filename": "Metabolomics_Shannon.ipynb",
"repository": "PriceLab/ShannonMets",
"query": "transformed_from_existing",
"size": 27681,
"sha": ""
}
|
# Introduction_to_Epigenetics.ipynb
Repository: Tseehay/Standford-Data-Ocean
<img src="materials/images/introduction-to-epigenetics-cover.png"/>
# **Introduction to Epigenetics**
`🕒 This module should take less than 1 hour to complete.`
`✍️ This notebook is written using Python.`
Epigenetics is a field of study focused on changes in DNA that do not involve alterations to the underlying sequence. In comparison to the static nature of the genome, epigenetic marks are dynamic and change substantially during development, aging, cancer, and in response to environmental exposure.
Chemical modification of DNA and of the proteins that interact with DNA can change the degree to which genes are switched **ON** or **OFF**, affecting the phenotype. The most common forms of epigenetic regulation include DNA methylation, histone modifications, and non-coding RNAs.
The collection of all epigenetic changes in a genome is called an epigenome. In this module, you will learn approaches to study epigenomes, and interpret them.
<img src="materials/images/epigenetic-regulation.png"/>
<div class="alert alert-block alert-info">
<h3>⌨️ Keyboard shortcut</h3>
These common shortcuts could save you time going through this notebook:
- Run the current cell: **`Shift + Enter`**.
- Add a cell above the current cell: Press **`A`**.
- Add a cell below the current cell: Press **`B`**.
- Change a code cell to markdown cell: Select the cell, and then press **`M`**.
- Delete a cell: Press **`D`** twice.
Need more help with keyboard shortcut? Press **`H`** to look it up.
</div>
---
## **Epigenetic regulation in development and disease**
Here we have two examples around epigenetics regulation:
1. The first example shows how a stem cell uses epigenetic factors in the cell differentiation process.
<img src="materials/images/epigenetic-regulation-development.png"/>
2. The second example indicates our lifestyle can switch ON or OFF genes. Studies show smoking and drinking turn on genes that are associated with the development of addiction.
<img src="materials/images/epigenetic-regulation-disease.png"/>
---
## Epigenetic assays
**DNA methylation analysis**
Cytosine methylation (5-methylcytosine, 5mC) is one of the main covalent base modifications in eukaryotic genomes, generally observed on CpG dinucleotides. Once methyl compounds are present on a gene, the gene is muted, or turned off. As a result, no protein is generated. That is how DNA methylation regulates gene expression.
Genome-wide DNA methylation can be mapped using Whole Genome Bisulphite Sequencing (WGBS), Reduced-Representation Bisulfite Sequencing (RRBS), Methylation-sensitive Restriction Enzyme (MRE) assays, or immunoprecipitation-based assays.
<img src="materials/images/dna-methylation.png"/>
**Histone modification analysis**
DNA is wrapped around a protein complex called the histone complex. Histones form a chain of beads along the DNA. Histone modifications at specific locations (e.g., lysine acetylation and methylation) can affect whether a gene is wrapped or unwrapped ("ON" versus "OFF"). This alters the expression of genes.
Proteins that read genes cannot reach DNA wrapped tightly around histones. Consequently, this mechanism switches some genes off because the DNA is wrapped around the histone proteins, and is inaccessible to the proteins that read DNA to make RNA, whereas other genes get expressed because they are not wrapped around histones.
Genome-wide histone modifications can be measured using ChIP-Sequencing. ChIP-Seq, a combination of chromatin immunoprecipitation (ChIP) and massively parallel sequencing, delves into the interactions between proteins, DNA, and RNA, revealing critical regulatory events in many biological processes and disease states. ChIP-Seq is used to identify transcription factor binding sites, track histone modifications across the genome, and characterize chromatin structure and function.
<img src="materials/images/histone-modification.png"/>
---
## The data
In this module we will take a look at the epigenome data (DNA methylation and histone modifications) from human reference epigenomes generated as part of **NIH Roadmap Epigenomics Program**. You can learn more at: https://egg2.wustl.edu/roadmap/web_portal/index.html
**Differentially Methylated Region (DMR) calls across reference epigenomes**
As a general resource for epigenomic comparisons across all epigenomes, Differentially Methylated Regions (DMRs) were defined using the Lister et al. method (Lister et al., 2013), combining all Differentially Methylated Sites (DMSs) within 250 bp of one another into a single DMR, and excluding any DMR with fewer than 3 DMSs.
For each DMR in each sample, average methylation level was computed, weighted by the number of reads overlapping it (Schultz et al., 2012). This resulted in a methylation level matrix with rows of DMRs and columns of samples.
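As a toy illustration of the coverage-weighted average described above (the numbers are made up and not taken from the Roadmap data):
<code>
import numpy as np
import pandas as pd

# Hypothetical per-CpG methylation calls for one DMR in one sample.
cpgs = pd.DataFrame({
    "meth_level": [0.90, 0.75, 0.60],  # fraction methylated at each CpG
    "coverage":   [12,   4,    30],    # reads overlapping each CpG
})

# Average methylation of the DMR, weighted by read coverage
dmr_methylation = np.average(cpgs["meth_level"], weights=cpgs["coverage"])
print(round(dmr_methylation, 3))
</code>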
**ChIP-seq peak calls**
For each gene, a region of 10,000 bp around the transcription start site of the gene is extracted (5,000 bp upstream and 5,000 bp downstream). This region is binned into 100 bins of 100 bp. For each bin, five core histone modification marks are counted.
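A small sketch of the binning scheme just described, using a hypothetical transcription start site:
<code>
# Illustrative only: 100 bins of 100 bp spanning +/- 5,000 bp around a TSS.
tss = 1_000_000  # hypothetical transcription start site (0-based coordinate)
flank = 5_000
bin_size = 100

bins = [(start, start + bin_size) for start in range(tss - flank, tss + flank, bin_size)]
print(len(bins))          # 100
print(bins[0], bins[-1])  # (995000, 995100) (1004900, 1005000)
</code>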
**Whole Genome Bisulphite Sequencing data from 111 reference epigenomes**
<code>
import sys
import os
# import pyathena
import pandas as pd
</code>
<code>
import gzip
wgbs = pd.read_csv('https://egg2.wustl.edu/roadmap/data/byDataType/dnamethylation/DMRs/WGBS_DMRs_v2.tsv.gz',sep='\t')
</code>
<code>
! wget https://egg2.wustl.edu/roadmap/data/byDataType/dnamethylation/DMRs/WGBS_DMRs_v2.tsv.gz
</code>
<code>
print(wgbs.head(10))
</code>
The data matrix shows rows of DMRs and columns of samples including chromosome number, location 'start' and 'end'.
1. `chr`: Chromosome.
2. `start`: Start of the location.
3. `end`: End of the location.
Metadata on the samples used in the analysis are available here. https://docs.google.com/spreadsheets/d/1yikGx4MsO9Ei36b64yOy9Vb6oPC5IBGlFbYEt-N6gOM/edit#gid=15
**Querying DNA methylation and histone modification states of genes in UCSC genome Browser**
The `FractionMethylation.tar.gz` archive is used for visualization of DNA methylation states, and provides fractional methylation calls at CpGs. It contains 25 files, one for each chromosome.
Format of each file: Tab separated table CpGs (rows) X epigenomes (columns)
Methylation calls are round to two decimal digits.
Each file has the same matrix format:
- The first column is a position of C or G in the CpG
- The rest of the columns are epigenomes.
Only CpG info is present (as it came from EDACC files); for CpGs where coverage was <=3, both coverage and methylation are treated as missing data.
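A hedged sketch of how one of these per-chromosome tables might be read with pandas; the file name after extracting the archive is an assumption, and any non-numeric missing-data markers are coerced to NaN:
<code>
import pandas as pd

# Assumed file name; adjust to whatever the extracted archive actually contains.
chr_table = pd.read_csv("FractionMethylation/chr1.txt", sep="\t", index_col=0)

# Rows: CpG positions (first column); columns: epigenomes.
chr_table = chr_table.apply(pd.to_numeric, errors="coerce")
chr_table.head()
</code>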
For ChIP-Seq data visualization, the negative log10 of the Poisson p-value of ChIP-seq counts relative to expected local background counts was used. These signal confidence scores provide a measure of the statistical significance of the observed enrichment.
The NCBI RefSeq Genes composite track shows human protein-coding and non-protein-coding genes taken from the NCBI RNA reference sequences collection (RefSeq) [hg19 refGene]. You could learn more at the following:
- https://hgdownload.cse.ucsc.edu/goldenPath/hg19/database/
- https://genome.ucsc.edu/cgi-bin/hgTables?db=hg19&hgta_group=genes&hgta_track=refSeqComposite&hgta_table=refGene&hgta_doSchema=describe+table+schema (Schema for NCBI RefSeq - RefSeq genes from NCBI)
Visualize DNA methylation and histone modification (an active marker-H3K4me3) here. http://genome.ucsc.edu/cgi-bin/hgTracks?db=hg19&lastVirtModeType=default&lastVirtModeExtraState=&virtModeType=default&virtMode=0&nonVirtPosition=&position=chr6%3A31128153%2D31142413&hgsid=1467202695_p1HPrLZa2tMzJREfrrGplnR1tNsc
Compare Pou5f1 promoter methylation and H3K4me3 levels between ESCs and neuronal progenitor cultured cells. You could query other crucial genes in ESC maintenance, e.g., Nanog, Sox2, TDGF1, LEFTY1, GDF3, FOXD3, and, in neuronal progenitor cells, PAX6, SIX3, LHX2, OTX2, PLZF, SOX1, FOXG1.
<img src="materials/images/visualization.png"/>
---
**Reference**
- Roadmap Epigenomics Consortium., Kundaje, A., Meuleman, W. et al. Integrative analysis of 111 reference human epigenomes. Nature 518, 317–330 (2015). https://doi.org/10.1038/nature14248.
https://www.illumina.com/content/dam/illumina-marketing/documents/products/appnotes/appnote-methylseq-wgbs.pdf
---
# Contributions & acknowledgment
Thanks to the following team for working on this module:
- **Module Content:** Abtin Tondar, Mohan Babu
- **Engineering:** Amit Dixit
- **UX/UI Design & Illustration:** Kexin Cha
- **Video Production:** Francesca Goncalves
- **Project Management:** Amir Bahmani, Kexin Cha
---
Copyright (c) 2022 Stanford Data Ocean (SDO)
All rights reserved.
|
{
"filename": "Introduction_to_Epigenetics.ipynb",
"repository": "Tseehay/Standford-Data-Ocean",
"query": "transformed_from_existing",
"size": 17510,
"sha": ""
}
|
# Phylo_1.ipynb
Repository: mkborregaard/JuliaWorkshopIBS
Let's do some analyses combining trees and map objects
<code>
using Phylo # phylogenetics
using SpatialEcology #spatial ecology, duh
using Plots # plotting
using JLD2, SparseArrays, DataFrames #jld2 is to load our files. Due to a bug we need the other two
</code>
#### Loading and curating the data
<code>
@load "African_mammals.jld"
</code>
Load the tree
<code>
tree = open(t->parsenewick(t, NamedPolytomousTree), "../Data/Mammals.tree")
</code>
We always start with a plot - it'll take a little time
<code>
plot(tree, treetype = :fan, tipfont = (2,))
</code>
We want to align the tree and the dataset. We do that by dropping all species that don't overlap. It becomes a little trickier because the names in the tree use an underscore.
<code>
names = getleafnames(tree)
</code>
<code>
spnames = replace.(speciesnames(mammals), Ref(" "=>"_"))
</code>
<code>
not_in = setdiff(names, spnames)
</code>
<code>
droptips!(tree, not_in)
</code>
Here is the new tree
<code>
plot(tree, treetype = :fan, tipfont = (2,), fmt = :png)
</code>
#### Node-based analysis
We want to use this new tree to compare the richness of monophyletic sister groups, which is interesting to see the spatial signature of evolution
Find the names of all internal nodes
<code>
collect(nodenamefilter(!isleaf, tree))
</code>
Let's pick a random node, 791, and find all species descending from that
<code>
desc = getdescendants(tree, "Node 791")
ds = filter(x->isleaf(tree, x), desc)
</code>
We'll make a view containing only these species
<code>
n791 = view(mammals, species = replace.(ds, Ref("_"=>" ")))
plot(n791)
</code>
Now this was interesting, we'd want to be able to do this many times. So let's write a function to do it
<code>
getnode(nodenumber::Int, asm) = getnode("Node $nodenumber", asm)
function getnode(nodename, asm)
desc = getdescendants(tree, nodename)
ds = filter(x->isleaf(tree, x), desc)
view(asm, species = replace.(ds, Ref("_"=>" ")))
end
</code>
This will call the function on random clades and show their richness
<code>
randnode = rand(collect(nodenamefilter(!isleaf, tree)))
n = getnode(randnode, mammals)
plot(n, title = randnode)
</code>
Now, let's compare the richness of the two clades descending from that node
<code>
gc = getchildren(tree, "Node 791")
</code>
<code>
ch = getnode.(gc, mammals)
</code>
<code>
plot(plot.(ch)..., title = permutedims(gc))
</code>
We can see that they look quite different. So something interesting must have happened at this point in history. But is the difference significant from a random expectation? Let's run a null model on this.
#### Randomizing occurrences for null model analysis
<code>
using RandomBooleanMatrices, Random
</code>
Gives us a method for randomizing matrices while maintaining marginal totals
<code>
m = matrixrandomizer(occurrences(n791))
plot(
heatmap(rand!(m)),
heatmap(rand!(m))
)
</code>
Due to Julia's dispatch system, we also have a method to randomize our occurrences data sets
<code>
m = matrixrandomizer(n791)
</code>
<code>
randm = rand!(m)
plot(randm)
</code>
<code>
# Empirical species richness of the two daughter nodes
plot( plot.(getnode.(gc, Ref(mammals)))..., title = permutedims(gc))
</code>
<code>
# Random species richness of the two daughter nodes
plot( plot.(getnode.(gc, Ref(randm)))..., title = permutedims(gc))
</code>
Let's get the empirical richness of one of the clades
<code>
dec2 = getnode(790, mammals)
emprich = collect(richness(dec2))
</code>
Now let's do 100 randomizations of the total clade. This is slow because the `getnode` function is slow (it will be much faster in the future)
<code>
randrich = [collect(richness(getnode(790, rand!(m)))) for i in 1:100]
</code>
We now have an array of 100 richness values. Let's calculate the mean and standard deviation, and use this to calculate standardized effect size
<code>
using Statistics
temp = reduce(hcat,randrich)
sdt, meant = vec(mapslices(std, temp, dims = 2)), vec(mean(temp, dims = 2))
</code>
<code>
z = (emprich .- meant) ./ sdt
</code>
A plot tells us that deviations are much larger than the ~2 expected from random chance in certain areas
<code>
plot(z, n791, clim = (-5,5), c = :RdYlBu_r)
</code>
|
{
"filename": "Phylo_1.ipynb",
"repository": "mkborregaard/JuliaWorkshopIBS",
"query": "transformed_from_existing",
"size": 10817,
"sha": ""
}
|
# Notes_3.ipynb
Repository: hekaplex/HSPC
# Open Problems - Multimodal Single-Cell Integration
While splitting the CITEseq RNA expression data by day-donor, I noticed that day2-donor32606 from train_cite_inputs.h5 and day2-donor27678 from test_cite_inputs.h5 had the same number of cells (7476). I got two separate expression matrices from these two donors, but they seem to present the same gene expression patterns even though they were extracted from different files (32606 from train and 27678 from test data) with different barcode information.
Is this intended or released by mistake?
<code>
%pip install nbconvert
</code>
<code>
%pip install --upgrade pip
</code>
<code>
import os
for dirname, _, filenames in os.walk('C:\\HSPC\\prior_work'):
for filename in filenames:
print(os.path.join(dirname, filename))
</code>
|
{
"filename": "Notes_3.ipynb",
"repository": "hekaplex/HSPC",
"query": "transformed_from_existing",
"size": 17128,
"sha": ""
}
|
# COMO_2.ipynb
Repository: HelikarLab/COMO
# COMO: Constraint-based Optimization of Metabolic Objectives
COMO is used to build computational models that simulate the biochemical and physiological processes that occur in a cell or organism, known as constraint-based metabolic models. The basic idea behind a constraint-based metabolic model is to use a set of constraints to place boundaries on the system being modeled. These constraints may include (but are not limited to) limits on the availability of nutrients, energy requirements, and the maximum rates of metabolic reactions. COMO imposes these constraints within a specific context. This context includes the cell or tissue type being modeled, along with its disease state. In addition to creating metabolic models, COMO serves as a platform to identify (1) drug targets and (2) repurposable drugs for metabolism-impacting diseases.
This pipeline has everything necessary to build a model from any combination of the following sources:
- Bulk RNA-seq (total and mRNA)
- Single-cell RNA-seq
- Proteomics
COMO does not require programming experience to create models. However, every step of the pipeline is easily accessible to promote modification, addition, or replacement of analysis steps. In addition, this Docker container comes pre-loaded with popular R and Python libraries; if you would like to use a library and cannot install it for any reason, please [request it on our GitHub page](https://github.com/HelikarLab/COMO)!
<h2>
<font color='red'>⚠️ WARNING ⚠️</font>
</h2>
If you terminate your session after running Docker, any changes you make *will <ins>**not**</ins> be saved*. Please mount a local directory to the docker image, [as instructed on the GitHub page](https://helikarlab.github.io/COMO/#choosing-a-tag), to prevent data loss.
# Before Starting
## Input Files
The proper input files, dependent on the types of data you are using, must be loaded before model creation. Some example files are included to build metabolic models of naive, Th1, Th2, and Th17 T-cell subtypes, and identify targets for rheumatoid arthritis.
### RNA-seq
A correctly formatted folder named "COMO inputs" in the data directory. Proper inputs can be generated using our Snakemake pipeline, [FastqToGeneCounts](https://github.com/HelikarLab/FastqToGeneCounts), which is specifically designed for use with COMO. RNA sequencing data can be single-cell, or bulk, but the provided Snakemake pipeline does not process single-cell data as of now. If you are processing RNA-seq data with an alternate procedure or importing a pre-made gene count matrix, follow the instructions [listed under Step 1](#Importing-a-Pre-Generated-Counts-Matrix)
### Proteomics
A matrix of measurement values, where rows are protein names in Entrez format and columns are sample names
## Configuration Information
You should upload configuration files (in Excel format, `.xlsx`) to `data/config_sheets`. The sheet names in these configuration files should correspond to the context (tissue name, cell name, etc.). The data in each sheet contains the sample names to include in that context-specific model. These sample names should correspond to the column name in the source data matrix, which will be output (or uploaded, if you have your own data) to `data/data_matrices/MODEL-NAME`
# Drug Target Identification
1. Preprocess Bulk RNA-seq data
1. Convert STAR-output gene count files into a unified matrix
2. Fetch necessary information about each gene in the matrix
3. Generate a configuration file
2. Analyze any combination of RNA-seq or proteomics data, and output a list of active genes for each strategy
3. Check for a consensus amongst strategies according to a desired rigor and merge into a singular set of active genes
4. Create a tissue-specific model based on the list of active genes (from Step 3)
5. Identify differential gene expression from disease datasets using RNA-seq transcriptomics information
6. Identify drug targets and repurposable drugs. This step consists of four substeps:
1. Map drugs to models
2. Knock-out simulation
3. Compare results between perturbed and unperturbed models (i.e., knocked-out models vs non-knocked-out models)
4. Integrate with disease genes and create a score of drug targets
# Step 1: Data Preprocessing and Analysis
The first step of COMO will perform processing and analysis on each of the following data:
- Total RNA sequencing
- mRNA sequencing
- Proteomics
## RNA-seq Data
RNA sequencing data is read by COMO as a count matrix, where each column is a different sample or replicate named "tissueName_SXRYrZ", where:
- "`X`" represents the study (or batch) number. Each study represents a new experiment
- "`Y`" represents the replicate number
- "`Z`" represents the run number. If the replicate does not contain multiple runs for a single replicate, then "`rZ`" should not be included.
- "`tissueName`" represents the name of the model that will be built from this data. It should be consistent with other data sources if you would like them to be integrated.
❗The `tissueName` identifier should not contain any special characters, including `_`. Doing so may interfere with parsing throughout this pipeline.
Replicates should come from the same study or batch group. Different studies/batches can come from different published studies, as long as the tissue/cell was under similar enough conditions for your personal modeling purposes. "Run numbers" in the same replicate will be summed together.
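As a quick illustration, here is a minimal parser for the `tissueName_SXRYrZ` convention described above; the regular expression is an assumption based on this description and is not part of COMO:
<code>
import re

# "rZ" is optional, matching the convention described above.
SAMPLE_RE = re.compile(r"^(?P<tissue>[^_]+)_S(?P<study>\d+)R(?P<replicate>\d+)(?:r(?P<run>\d+))?$")

for name in ["m0Macro_S1R2", "naiveB_S2R1r3"]:
    print(name, SAMPLE_RE.match(name).groupdict())
</code>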
### Example
Pretend `S1` represents a study done by Margaret and `S2` represents a different study done by John. Margaret's experiment contains three replicates, while John's only contains two. Each of these studies comes from m0 Macrophages. Using this cell name, we will set our tissue name to `m0Macro`. The studies were conducted in different labs, by different researchers, at different points in time, even using different preparation kits. Using this information, we have the following samples:
<table style="border: 1px solid black; border-collapse: collapse;">
<thead>
<tr>
<th colspan="1000" style="text-align: center;">m0 Macrophage Data</th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="3" style="padding: 10px; text-align: center; border-bottom: 1px solid black;">Margaret's Data</td>
<td colspan="3" style="padding: 10px; text-align: center; border-left: 1px solid black; border-bottom: 1px solid black;">John's Data</td>
</tr>
<tr>
<td style="padding: 10px; text-align: center;">Study</td>
<td style="padding: 10px; text-align: center;">Replicate</td>
<td style="padding: 10px; text-align: center;">Resulting Name</td>
<td style="padding: 10px; text-align: center; border-left: 1px solid black;">Study</td>
<td style="padding: 10px; text-align: center;">Replicate</td>
<td style="padding: 10px; text-align: center;">Resulting Name</td>
</tr>
<tr>
<td style="padding: 10px; text-align: center;">S1</td>
<td style="padding: 10px; text-align: center;">R1</td>
<td style="padding: 10px; text-align: center;">m0Macro_S1R1</td>
<td style="padding: 10px; text-align: center; border-left: 1px solid black;">S2</td>
<td style="padding: 10px; text-align: center;">R1</td>
<td style="padding: 10px; text-align: center;">m0Macro_S2R1</td>
</tr>
<tr>
<td style="padding: 10px; text-align: center;">S1</td>
<td style="padding: 10px; text-align: center;">R2</td>
<td style="padding: 10px; text-align: center;">m0Macro_S1R2</td>
<td style="padding: 10px; text-align: center; border-left: 1px solid black;">S2</td>
<td style="padding: 10px; text-align: center;">R2</td>
<td style="padding: 10px; text-align: center;">m0Macro_S2R2</td>
</tr>
<tr>
<td style="padding: 10px; text-align: center;">S1</td>
<td style="padding: 10px; text-align: center;">R3</td>
<td style="padding: 10px; text-align: center;">m0Macro_S1R3</td>
<td style="padding: 10px; text-align: center; border-left: 1px solid black;">-</td>
<td style="padding: 10px; text-align: center;">-</td>
<td style="padding: 10px; text-align: center;">-</td>
</tr>
</tbody>
</table>
From the `Resulting Name` column, the `m0Macro_S1R1`, `m0Macro_S1R2`, and `m0Macro_S1R3` samples (Margaret's data) will be checked for gene expression consensus to generate a list of active genes in all three replicates. The same will be done for `m0Macro_S2R1` and `m0Macro_S2R2` (John's data). Once these two *separate* lists of active genes have been generated, expression *between* lists will be checked for additional consensus between the studies. This system is used not only to help maintain organization throughout COMO, but because most types of normalized gene counts cannot undergo direct comparisons across replicates. This is especially true for comparisons between different experiments. Therefore, COMO will convert normalized gene counts into a boolean list of active genes. These lists will be compared at the level of replicates in a study, and then again at the level of all provided studies. Finally, the active genes will be merged with the outputs of proteomics and various RNA-sequencing strategies if provided. The rigor used at each level is easily modifiable.
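To make the consensus idea concrete, here is a minimal sketch using made-up boolean calls and a 0.75 replicate ratio; it illustrates the logic described above rather than COMO's actual implementation:
<code>
import pandas as pd

# Hypothetical per-replicate activity calls for study S1 (Margaret's data).
replicate_calls = pd.DataFrame(
    {"m0Macro_S1R1": [True, True, False],
     "m0Macro_S1R2": [True, False, False],
     "m0Macro_S1R3": [True, True, True]},
    index=["geneA", "geneB", "geneC"],
)

rep_ratio = 0.75  # fraction of replicates that must agree within a study
study_active = replicate_calls.mean(axis=1) >= rep_ratio
print(study_active)  # geneA True, geneB False, geneC False
</code>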
### Initializing RNA-seq Data
Please choose an option below:
1. Importing a `COMO inputs` directory
1. [Initialization using the Snakemake Pipeline](https://github.com/HelikarLab/FastqToGeneCounts)
2. [Creating your own Inputs](#Creating-a-Properly-Formatted-COMO-inputs-Folder)
2. [Importing a pre-generated gene counts file](#Importing-a-Pre-Generated-Counts-Matrix)
#### Snakemake Pipeline
It is recommended you use the available Snakemake pipeline to align to create a properly formatted `COMO inputs` folder. The pipeline also runs a series of quality control steps to help determine if any of the provided samples are not suitable for model creation. This pipeline can be found at https://github.com/HelikarLab/FastqToGeneCounts.
The folder output from the snakemake pipeline can be uploaded directly to the folder `data/COMO inputs` in this pipeline
Once this is done, continue to the code block at the end of this section
#### Creating a Properly Formatted `COMO inputs` Folder
If you are using your own alignment protocol, follow this section to create a properly formatted `COMO inputs` folder.
The top-level of the directory will have separate tissue/cell types that models should be created from. The next level must have a folder called `geneCounts`, and optionally a `strandedness` folder. If you are using zFPKM normalization, two additional folders must be included: `layouts` and `fragmentSizes`. Inside each of these folders should be folders named `SX`, where `X` is a number that replicates are associated with.
<br>
<ins>Gene Counts</ins>
Create a folder named `geneCounts`. The outputs of the STAR aligner using the `--quantMode GeneCounts` option should be included inside the "study-number" folders (`SX`) of `geneCounts`. To help you (and COMO!) stay organized, these outputs should be renamed `tissueName_SXRYrZ.tab`. Just like above, `X` is the study number, `Y` is the replicate number, and (if present), `Z` is the run number. If the replicate does not contain multiple runs, the `rZ` should be excluded from the name. Replicates should come from the same study/sample group. Different samples can come from different published studies as long as the experiments were performed under similar enough conditions for your modeling purposes.
<ins>Strandedness</ins>
Create a folder named `strandedness`. This folder should contain files named `tissueName_SXRYrZ_strandedness.txt`. These files must tell the strandedness of the RNA-sequencing method used. It should contain one of the following texts (and nothing else):
- `NONE`: If you don't know the strandedness
- `FIRST_READ_TRANSCRIPTION_STRAND`: If this RNA-sequencing sample originates from the first strand of cDNA, or the "antisense" strand
- `SECOND_READ_TRANSCRIPTION_STRAND`: If this RNA-sequencing sample originates from the second strand of cDNA, or the "sense" strand
<ins>Layouts</ins>
Create a folder named `layouts`. Files should be named `tissueName_SXRYrZ_layout.txt`, where each file tells the layout of the library used. It must contain one of the following texts, and nothing else:
- `paired-end`: Paired-end reads were generated
- `single-end`: Single-end reads were generated
<ins>Fragment Sizes</ins>
Create a folder named `fragmentSizes`. Files should be named `tissueName_SXRYrZ_fragment_sizes.txt` and contain the output of [RSeQC](https://rseqc.sourceforge.net/)'s `como/RNA_fragment_size.py` function.
<ins>Preparation Methods</ins>
Create a folder named `prepMethods`. Files should be named `tissueName_SXRYrZ_prep_method.txt`. Each file should tell the library preparation strategy. It must contain one of the following texts, and nothing else:
- `total`: All mRNA expression was measured (mRNA, ncRNA, rRNA, etc.)
- `mRNA`: Only polyA mRNA expression was measured
It should be noted that these strategies only serve to differentiate the methods in the event that both are used to build a model. If a different library strategy is desired, you have two options:
1. Replace one of these with a placeholder. If you only have polyA mRNA expression, you only have to enter data for those samples. Do not fill out any samples with `total`.
2. With a little Python knowledge, a new strategy can easily be added to the `como/merge_xomics.py` file. If you would like to do so, the file is located under `como/merge_xomics.py` in this Jupyter Notebook
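As a short sketch, the folder layout described in this section could be built for one context and one study like this; the exact paths and file names are assumptions based on the conventions above:
<code>
from pathlib import Path

context, study = "exampleTissue", "S1"
root = Path("data/COMO inputs") / context

# Create the per-study sub-folders described above.
for sub in ["geneCounts", "strandedness", "layouts", "fragmentSizes", "prepMethods"]:
    (root / sub / study).mkdir(parents=True, exist_ok=True)

# Example: declare the strandedness and layout of replicate R1.
(root / "strandedness" / study / f"{context}_{study}R1_strandedness.txt").write_text("NONE")
(root / "layouts" / study / f"{context}_{study}R1_layout.txt").write_text("paired-end")
</code>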
#### Importing a Pre-Generated Counts Matrix
Import a properly formatted counts matrix to `data/data_matrices/exampleTissue/gene_counts_matrix_exampleTissue.csv`. The columns should be named `exampleTissue_SXRY` (note the lack of a run number (`rZ`); runs should be summed into each replicate). If you are providing the count matrix this way, instead of generating one using the Snakemake pipeline mentioned above, you must create a configuration file that has each sample's name, study number, and, if using zFPKM, layout and mean fragment length. Use the provided template below to create yours. Once you have created this file and placed it under the `data/data_matrices/exampleTissue` directory, run the `como/rnaseq_preprocess.py` file with `preprocess-mode` set to `provide-matrix`.
This method is best if you are downloading a premade count matrix, or using single-cell data that has already been batch corrected, clustered, and sorted into only the cell type of interest!
<table style="border: 1px solid black; border-collapse: collapse;">
<thead>
<tr>
<th colspan="1000" style="text-align: center; border-bottom: 1px solid black;">Example Gene Count Table</th>
</tr>
</thead>
<tbody>
<tr>
<td style="padding: 10px; text-align: center; border-bottom: 1px solid black;">genes</td>
<td style="padding: 10px; text-align: center; border-left: 1px solid black; border-bottom: 1px solid black;">exampleTissue_S1R1</td>
<td style="padding: 10px; text-align: center; border-left: 1px solid black; border-bottom: 1px solid black;">exampleTissue_S1R2</td>
<td style="padding: 10px; text-align: center; border-left: 1px solid black; border-bottom: 1px solid black;">exampleTissue_S2R1</td>
<td style="padding: 10px; text-align: center; border-left: 1px solid black; border-bottom: 1px solid black;">exampleTissue_S2R2</td>
</tr>
<tr>
<td style="padding: 10px; text-align: center;">ENSG00000000003</td>
<td style="padding: 10px; text-align: center;">20</td>
<td style="padding: 10px; text-align: center;">29</td>
<td style="padding: 10px; text-align: center;">52</td>
<td style="padding: 10px; text-align: center;">71</td>
</tr>
<tr>
<td style="padding: 10px; text-align: center;">ENSG00000000005</td>
<td style="padding: 10px; text-align: center;">0</td>
<td style="padding: 10px; text-align: center;">0</td>
<td style="padding: 10px; text-align: center;">0</td>
<td style="padding: 10px; text-align: center;">0</td>
</tr>
<tr>
<td style="padding: 10px; text-align: center;">ENSG00000000419</td>
<td style="padding: 10px; text-align: center;">1354</td>
<td style="padding: 10px; text-align: center;">2081</td>
<td style="padding: 10px; text-align: center;">1760</td>
<td style="padding: 10px; text-align: center;">3400</td>
</tr>
</tbody>
</table>
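As a sketch, the example table above could be written to the expected location like this (values and gene IDs are the ones shown in the table; the path follows the convention described earlier in this section):
<code>
from pathlib import Path

import pandas as pd

out = Path("data/data_matrices/exampleTissue")
out.mkdir(parents=True, exist_ok=True)

counts = pd.DataFrame(
    {
        "exampleTissue_S1R1": [20, 0, 1354],
        "exampleTissue_S1R2": [29, 0, 2081],
        "exampleTissue_S2R1": [52, 0, 1760],
        "exampleTissue_S2R2": [71, 0, 3400],
    },
    index=pd.Index(["ENSG00000000003", "ENSG00000000005", "ENSG00000000419"], name="genes"),
)
counts.to_csv(out / "gene_counts_matrix_exampleTissue.csv")
</code>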
### RNA-seq Preprocessing Parameters
- `context_names`: The tissue/cell types to use. This is a simple space-separated list of items, such as "naiveB regulatoryTcell"
- `gene_format`: The format of input genes; accepts `"Entrez"`, `"Ensembl"`, or `"Symbol"`
- `taxon_id`: The [NCBI Taxon ID](https://www.ncbi.nlm.nih.gov/taxonomy) to use
- `preprocess_mode`: This should be set to `"create-matrix"` if you are **not** providing a matrix, otherwise set it to `"provide-matrix"`
<code>
context_names = "naiveB"
taxon_id = "human" # accepts integer (bioDBnet taxon id) or "human" or "mouse"
preprocess_mode = "create" # "create" or "provide"
# fmt: off
cmd = " ".join(
[
"python3", "como/rnaseq_preprocess.py",
"--context-names", context_names,
"--taxon-id", taxon_id,
"--mode", preprocess_mode,
"--input-format", "Ensembl"
]
)
# fmt: on
!{cmd}
</code>
## Identification of Gene Activity in Transcriptomic and Proteomic Datasets
This part of Step 1 will identify gene activity in the following data sources:
- RNA-seq (total, mRNA, or single cell)
- Proteomics
Only one source is required for model generation, but multiple sources can be helpful for additional validation if they are of high enough quality
### Filtering Raw Counts
Regardless of the normalization technique used, or of the files provided for RNA-seq, preprocessing is required to fetch relevant gene information needed for harmonization and normalization, such as the Entrez ID and the start and end positions. Currently, COMO can filter raw RNA-sequencing counts using one of the following normalization techniques:
#### Transcripts Per Million Quantile
TPM Quantile. Each replicate is normalized with Transcripts-Per-Million, and an upper quantile is taken to create a boolean list of active genes for the replicate (i.e., `R1`). Replicates are compared for consensus within the study, and then studies are compared between one another for additional consensus. The strictness of the consensus can easily be set using the appropriate option within the `rnaseq_gen.py` code-block.
This method is recommended if you want more control over the size of the model; smaller models can include only the most expressed reactions, and larger models can encompass less essential reactions
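A rough sketch of the idea on made-up counts and gene lengths (this is not COMO's implementation):
<code>
import numpy as np
import pandas as pd

counts = pd.Series({"geneA": 500, "geneB": 10, "geneC": 3000})      # raw counts for one replicate
lengths_kb = pd.Series({"geneA": 2.0, "geneB": 1.5, "geneC": 4.0})  # gene lengths in kilobases

rpk = counts / lengths_kb            # reads per kilobase
tpm = rpk / rpk.sum() * 1e6          # transcripts per million

cutoff = np.quantile(tpm, 0.75)      # e.g. keep the upper quartile
active = tpm >= cutoff               # boolean "active gene" call for this replicate
print(active)
</code>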
#### zFPKM
This method is outlined by [Hart et. al](https://pubmed.ncbi.nlm.nih.gov/24215113/). Counts will be normalized using zFPKM and genes > -3 will be considered "expressed" per Hart's recommendation. Expressed genes will be checked for consensus at the replicate and study level.
This method is recommended if you want less control over which genes are essential, and instead use the most standardized method of active gene determination. This method is more "hands-off" than the above TPM Quantile method.
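For intuition, here is a very rough stand-in for the zFPKM call on hypothetical FPKM values. Hart et al. fit a Gaussian to the main peak of the log2-FPKM distribution; this sketch simply z-scores log2(FPKM), so it is an approximation and not COMO's implementation:
<code>
import numpy as np
import pandas as pd

fpkm = pd.Series({"geneA": 0.01, "geneB": 5.0, "geneC": 80.0, "geneD": 12.0})
log2_fpkm = np.log2(fpkm + 1e-6)

zfpkm = (log2_fpkm - log2_fpkm.mean()) / log2_fpkm.std()
expressed = zfpkm > -3               # Hart et al.'s recommended cutoff
print(expressed)
</code>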
#### Counts Per Million
This is a flat cutoff value of counts per million normalized values. Gene expression will be checked for consensus at the replicate and study level.
This method is not recommended, as zFPKM is much more robust for a similar level of "hands-off" model building
### RNA Sequencing Analysis
#### Bulk RNA Sequencing
This has multiple strategies of library preparation (total, polyA-mRNA). If you are using public data, you may encounter a situation where you would like to use a combination of bulk RNA sequencing data produced using two different library preparation strategies.
COMO currently supports the two most common strategies, mRNA polyA-enriched RNA sequencing and total RNA sequencing. Because of the expected differences in distribution of transcripts, COMO is written to handle each strategy separately before the integration step. The recommended Snakemake alignment pipeline is designed to work with COMO's preprocessing step ([Step 1, above](Step-1:-Initialize-and-Preprocess-RNA-seq-data)) to split RNA sequencing data from GEO into separate input matrices and configuration files.
To create a gene expression file for total RNA sequencing data, use `"total"` for the "`--library-prep`" argument.
To create a gene expression file for mRNA polyA enriched data, use `"mRNA"` for the "`--library-prep`" argument.
The analysis of each strategy is identical. Specifying the type of analysis (total vs mRNA) only serves to ensure COMO analyzes them separately.
#### Single Cell RNA Sequencing
While the Snakemake pipeline does not yet support single-cell alignment, and COMO does not yet support automated configuration file and counts matrix file creation for single-cell alignment output from STAR, it is possible to use single-cell data to create a model with COMO. Because normalization strategies can be applied to single-cell data in the same way it is applied to bulk RNA sequencing, `como/rnaseq_gen.py` can be used with a provided counts matrix and configuration file, from [Step 1](Step-1:-Initialize-and-Preprocess-RNA-seq-data), above. Just like `"total"` and `"mRNA"`, `como/rnaseq_gen.py` can be executed with `"SC"` as the "`--library-prep`" argument to help COMO differentiate it from any bulk RNA sequencing data if multiple strategies are being used.
### Total RNA Sequencing Generation
#### Parameters
- `trnaseq_config_file`: The configuration filename for total RNA. This file is found under the `data/config_sheets` folder
- `rep_ratio`: The proportion of replicates before a gene is considered "active" in a study
- `group_ratio`: The proportion of studies with expression required for a gene to be considered "active"
- `rep_ratio_h`: The proportion of replicates that must express a gene before that gene is considered "high-confidence"
- `group_ratio_h`: The proportion of studies that must express a gene before that gene is considered "high-confidence"
- `technique`: The technique to use. Options are: `"quantile"`, `"cpm"`, or `"zfpkm"`. The difference in these options is discussed above
- `quantile`: The cutoff Transcripts-Per-Million quantile for filtering
- `min_zfpkm`: The minimum zFPKM cutoff for zFPKM filtering (set via `minimum_cutoff` in the code below)
- `prep_method`: The library method used for preparation. Options are: `"total"`, `"mRNA"`, or `"SC"`,
<code>
# step 2.2 RNA-seq Analysis for Total RNA-seq library preparation
trnaseq_config_file = "trnaseq_data_inputs_auto.xlsx"
rep_ratio = 0.75
group_ratio = 0.75
rep_ratio_h = 1.0
group_ratio_h = 1.0
technique = "zFPKM"
minimum_cutoff = -3
taxon_id = "9606"
# fmt: off
cmd = " ".join(
[
"python3", "como/rnaseq_gen.py",
"--config-file", trnaseq_config_file,
"--replicate-ratio", str(rep_ratio),
"--batch-ratio", str(group_ratio),
"--high-replicate-ratio", str(rep_ratio_h),
"--high-batch-ratio", str(group_ratio_h),
"--minimum-cutoff", str(minimum_cutoff),
"--filt-technique", f"{technique}",
"--library-prep", "total",
"--taxon-id", taxon_id
]
)
# fmt: on
!{cmd}
</code>
## mRNA Sequencing Generation
These parameters are identical to the ones listed for [total RNA sequencing](#Total-RNA-Sequencing-Generation), but they are listed again here for ease of reference
### Parameters
- `mrnaseq_config_file`: The configuration filename for polyA mRNA sequencing. This file is found under the `data/config_sheets` folder
- `rep_ratio`: The proportion of replicates before a gene is considered "active" in a study
- `group_ratio`: The proportion of studies with expression required for a gene to be considered "active"
- `rep_ratio_h`: The proportion of replicates that must express a gene before that gene is considered "high-confidence"
- `group_ratio_h`: The proportion of studies that must express a gene before that gene is considered "high-confidence"
- `technique`: The technique to use. Options are: `"quantile"`, `"cpm"`, or `"zfpkm"`. The difference in these options is discussed above
- `quantile`: The cutoff Transcripts-Per-Million quantile for filtering
- `min_zfpkm`: The minimum zFPKM cutoff for zFPKM filtering (set via `minimum_cutoff` in the code below)
- `prep_method`: The library method used for preparation. Options are: `"total"`, `"mRNA"`, or `"SC"`,
<code>
mrnaseq_config_file = "mrnaseq_data_inputs_auto.xlsx"
rep_ratio = 0.75
group_ratio = 0.75
rep_ratio_h = 1.0
group_ratio_h = 1.0
technique = "zfpkm"
minimum_cutoff = -3
taxon_id = "9606"
# fmt: off
cmd = " ".join(
[
"python3", "como/rnaseq_gen.py",
"--config-file", mrnaseq_config_file,
"--replicate-ratio", str(rep_ratio),
"--batch-ratio", str(group_ratio),
"--high-replicate-ratio", str(rep_ratio_h),
"--high-batch-ratio", str(group_ratio_h),
"--minimum-cutoff", str(minimum_cutoff),
"--filt-technique", f"{technique}",
"--library-prep", "mrna",
"--taxon-id", taxon_id
]
)
# fmt: on
!{cmd}
</code>
## Single-Cell RNA Sequencing Generation
These parameters are identical to the ones listed for [total RNA sequencing](#Total-RNA-Sequencing-Generation), but they are listed again here for ease of reference
### Parameters
- `scrnaseq_config_file`: The configuration filename for single-cell RNA sequencing. This file is found under the `data/config_sheets` folder
- `rep_ratio`: The proportion of replicates before a gene is considered "active" in a study
- `group_ratio`: The proportion of studies with expression required for a gene to be considered "active"
- `rep_ratio_h`: The proportion of replicates that must express a gene before that gene is considered "high-confidence"
- `group_ratio_h`: The proportion of studies that must express a gene before that gene is considered "high-confidence"
- `technique`: The only option offered for single-cell RNA sequencing is `"umi"`
- `quantile`: The cutoff Transcripts-Per-Million quantile for filtering
- `min_zfpkm`: The minimum zFPKM cutoff (set via `minimum_cutoff` in the code below)
- `prep_method`: The library method used for preparation. Options are: `"total"`, `"mRNA"`, or `"scrna"`,
<code>
scrnaseq_config_file = "scrnaseq_data_inputs_auto.xlsx"
rep_ratio = 0.75
group_ratio = 0.75
rep_ratio_h = 1.0
group_ratio_h = 1.0
quantile = 50
minimum_cutoff = -3
taxon_id = "human"
# fmt: off
cmd = " ".join(
[
"python3", "como/rnaseq_gen.py",
"--config-file", scrnaseq_config_file,
"--replicate-ratio", str(rep_ratio),
"--batch-ratio", str(group_ratio),
"--high-replicate-ratio", str(rep_ratio_h),
"--high-batch-ratio", str(group_ratio_h),
"--minimum-cutoff", str(minimum_cutoff),
"--filt-technique", "umi",
"--library-prep", "scrna",
"--taxon-id", taxon_id
]
)
# fmt: on
!{cmd}
</code>
## Proteomics Analysis
The parameters here are mostly the same as for total RNA and mRNA sequencing analysis, and are listed here for easier reference
### Parameters
- `proteomics_config_file`: The file path to the proteomics configuration file
- `rep_ratio`: The ratio required before a gene is considered active in the replicate
- `batch_ratio`: The ratio required before a gene is considered active in the study
- `high_rep_ratio`: The ratio required before a gene is considered "high-confidence" in the replicate
- `high_batch_ratio`: The ratio required before a gene is considered "high-confidence" in the study
- `quantile`: The cutoff Transcripts-Per-Million quantile for filtering
<code>
proteomics_config_file = "proteomics_data_inputs_paper.xlsx"
rep_ratio = 0.75
batch_ratio = 0.75
high_rep_ratio = 1.0
high_batch_ratio = 1.0
quantile = 25
# fmt: off
cmd = " ".join(
[
"python3", "como/proteomics_gen.py",
"--config-file", proteomics_config_file,
"--replicate-ratio", str(rep_ratio),
"--high-replicate-ratio", str(high_rep_ratio),
"--batch-ratio", str(batch_ratio),
"--high-batch-ratio", str(high_batch_ratio),
"--quantile", str(quantile),
]
)
# fmt: on
!{cmd}
</code>
# Cluster Sample Data (Optional)
This step is used to cluster the samples based on their expression values. This can be used to determine which samples are more similar to each other. In a perfect world, one cluster would be created for each context type used. This is done using the `como/cluster_rnaseq.py` script.
To see more about clustering, please visit the [Wikipedia article](https://en.wikipedia.org/wiki/Cluster_analysis)
The parameters for this script are as follows:
- `context_names`: The tissue/cell name of models that should be clustered. This was defined in the first code block, so it is not redefined here
- `filt_technique`: The filtering technique to use; options are: `"zfpkm"`, `"quantile"`, or `"cpm"`
- `cluster_algorithm`: The clustering algorithm to use. Options are: `"mca"` or `"umap"`
- `label`: Should the samples be labeled in the plot? Options are: `"True"` or `"False"`
- `min_dist`: The minimum distance for UMAP clustering. Must be between 0 and 1. Default value is 0.01
- `replicate_ratio`: The ratio of active genes in replicates for a batch/study to be considered active. The default is 0.9
- `batch_ratio`: The ratio of active genes in batches/studies for a context to be considered active. The default is 0.9
- `min_count`: The minimum count cutoff used for count-based filtering. The default is `"default"`
- `quantile`: The quantile cutoff used when `filt_technique` is `"quantile"`. The default is 0.5
- `n_neighbors_rep`: N nearest neighbors for replicate clustering. The default is `"default"`, which is the total number of replicates
- `n_neighbors_batch`: N nearest neighbors for batch clustering. The default is `"default"`, which is the total number of batches
- `n_neighbors_context`: N nearest neighbors for context clustering. The default is `"default"`, which is the total number of contexts
- `seed`: The random seed for clustering algorithm initialization. If not specified, `np.random.randint(0, 100000)` is used
<code>
filt_technique = "zfpkm"
cluster_algorithm = "umap"
label = True
min_dist = 0.01
replicate_ratio = 0.9
batch_ratio = 0.9
min_count = "default"
quantile = 50
n_neighbors_rep = "default"
n_neighbors_batch = "default"
n_neighbors_context = "default"
seed = -1
# fmt: off
cmd = " ".join(
[
"python3", "como/cluster_rnaseq.py",
"--context-names", context_names,
"--filt-technique", filt_technique,
"--cluster-algorithm", cluster_algorithm,
"--label", label,
"--min-dist", str(min_dist),
"--replicate-ratio", str(replicate_ratio),
"--batch-ratio", str(batch_ratio),
"--n-neighbors-rep", str(n_neighbors_rep),
"--n-neighbors-batch", str(n_neighbors_batch),
"--n-neighbors-context", str(n_neighbors_context),
"--min-count", str(min_count),
"--quantile", str(quantile),
"--seed", str(seed),
]
)
# fmt: on
!{cmd}
</code>
## Merge Expression from Different Data Sources
Thus far, active genes have been determined for at least one data source. If multiple data sources are being used, we can merge the active genes from these sources to make a list of active genes that is more comprehensive (or strict!) than any data source on its own.
`como/merge_xomics.py` takes each data source discussed so far as an argument. The other arguments to consider are:
- `--expression-requirement`: The number of data sources with expression required for a gene to be considered active, if the gene is not "high-confidence" for any source. (default: total number of input sources provided)
- `--requirement-adjust`: This is used to adjust the expression requirement argument in the event that tissues have a different number of provided data sources. This does nothing if there is only one tissue type in the configuration files.
- `"progressive"`: The expression requirement applies to tissue(s) with the lowest number of data sources. Tissues with more than this value will require its genes to be expressed in 1 additional source before it is "active" in the model
- `"regressive"` (default): The expression requirement applies to the tissue(s) with the largest number of data sources. Tissues with less than this value will require its genes to be expressed in 1 fewer sources before the gene is considered "active" in the model.
- `"flat"`: The expression requirement is used regardless of differences in the number of data sources provided for different tissues
- `--no-hc`: This flag should be set to prevent high-confidence genes from overriding the expression requirement set.
- If this flag is not used, any gene that was determined to be "high-confidence" in any input source will cause the gene to be active in the final model, regardless of agreement with other sources
- `--no-na-adjustment`: This flag should be used to prevent genes that are not present in one data source, but are present in others, from subtracting one from the expression requirement.
- If this flag is not used, any time a gene is "NA" in a source, meaning it was not tested for in the library of that data source but <ins>was</ins> tested in the library of another source, it will subtract one from the expression requirement.
The adjusted expression requirement will never resolve to be less than one or greater than the number of data sources for a given tissue
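To make these rules concrete, below is a minimal sketch of how an expression requirement could be adjusted per tissue. It reflects one plausible reading of the description above, not COMO's actual `merge_xomics.py` logic; the function name and the tissue source counts are hypothetical.
```python
# Hypothetical illustration of the requirement-adjust modes described above (not COMO's code).
def adjusted_requirement(base, n_sources, all_source_counts, mode):
    """Adjust `base` for a tissue that has `n_sources` data sources."""
    if mode == "flat":
        adj = base
    elif mode == "progressive":
        # base applies to the tissue(s) with the fewest sources; more sources -> stricter
        adj = base + (n_sources - min(all_source_counts))
    elif mode == "regressive":
        # base applies to the tissue(s) with the most sources; fewer sources -> more lenient
        adj = base - (max(all_source_counts) - n_sources)
    else:
        raise ValueError(f"unknown mode: {mode}")
    # never below one or above the number of sources for that tissue
    return min(max(adj, 1), n_sources)

# Hypothetical example: naiveB has 3 data sources, smB has 2, base requirement is 3
counts = {"naiveB": 3, "smB": 2}
for tissue, n in counts.items():
    print(tissue, adjusted_requirement(3, n, list(counts.values()), "regressive"))
# regressive: naiveB keeps a requirement of 3, smB drops to 2
```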
### Parameters
The three parameters listed here were used in RNA sequencing generation, and should not need to be redefined. If you did **not** use one of these, simply comment out the corresponding line in the command below by placing a "`#`" at the beginning of it (and uncomment the lines for any sources you did use)
- `trnaseq_config_file`: The file name used in the [total RNA Sequencing](#Total-RNA-Sequencing-Generation) section of the notebook
- `mrnaseq_config_file`: The file name used in the [mRNA Sequencing](#mRNA-Sequencing-Generation) section of the notebook
- `proteomics_config_file`: The file name used in the [proteomics generation](#Proteomics-Analysis) section of the notebook
The following parameters have not been used in a previous section of the notebook, so they are defined in the below code block
- `expression_requirement`: This is the number of sources a gene must be active in for it to be considered active
- `requirement_adjust`: The technique to adjust expression requirement based on differences in number of provided data source types
- `total_rna_weight`: Total RNA-seq weight for merging zFPKM distribution
- `mrna_weight`: mRNA weight for merging zFPKM distribution
- `single_cell_weight`: Single-cell weight for merging zFPKM distribution
- `proteomics_weight`: Proteomic weight for merging zFPKM distribution
Each of the "weights" (`total_rna_weight`, `mrna_weight`, etc.) are used to place a significance on each method. Becuase there are many steps in the Dogma from transcription to translation, the gene expression as seen by total RNA or mRNA sequencing may not be representative of the gene's protein expression, and this its metabolic impact. Because of this, you are able to weight each source more (or less) than another.
<code>
expression_requirement = 3
requirement_adjust = "regressive"
total_rna_weight = 6
mrna_weight = 6
single_cell_weight = 6
proteomics_weight = 10
# fmt: off
cmd = " ".join(
[
"python3", "como/merge_xomics.py",
"--merge-zfpkm-distribution",
"--total-rnaseq-config-file", trnaseq_config_file,
"--mrnaseq-config-file", mrnaseq_config_file,
# "--scrnaseq-config-file", scrnaseq_config_file, # If using single-cell data, uncomment the start of this line
# "--proteomics-config-file", proteomics_config_file, # If using proteomics data, uncomment the start of this line
"--requirement-adjust", requirement_adjust,
"--expression-requirement", str(expression_requirement),
"--total-rnaseq-weight", str(total_rna_weight),
"--mrnaseq-weight", str(mrna_weight),
"--single-cell-rnaseq-weight", str(single_cell_weight),
"--protein-weight", str(proteomics_weight),
"--no-high-confidence",
]
)
# fmt: on
!{cmd}
</code>
# Step 2: Create Tissue/Cell-Type Specific Models
## Boundary Reactions
To create a metabolic model, the following information about each metabolite or reaction involved is required:
- **Reaction Type**
- Exchange
- Demand
- Sink
- **Metabolic/Reaction Abbreviation**
- You can use the [Virtual Metabolic Human](https://www.vmh.life/#home) to look up your metabolite and reaction abbreviations
- **Compartments**
- Cytosol
- Extracellular
- Golgi Apparatus
- Internal Membranes
- Lysosome
- Mitochondria
- Nucleus
- Endoplasmic Reticulum
- Unknown
- **Minimum Reaction Rate**
- **Maximum Reaction Rate**
*Below is an example of a properly formatted table of metabolic and reaction information*
| Reaction | Abbreviation | Compartment | Minimum Reaction Rate | Maximum Reaction Rate |
|:--------:|:------------:|:------------------:|:---------------------:|:---------------------:|
| Exchange | glc_D | Extracellular | -100 | 1000 |
| Demand | 15HPETATP | Cytosol | -1 | 1000 |
| Sink | met_L | Internal Membranes | -1000 | 1 |
These reactions should be placed into a CSV file; a template can be found at `data/boundary_rxns/default_force_rxns.csv`. Append your reactions to this file, and remove any that are not required. COMO will load this file in during model creation
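As an illustration, the example table above could be written out with pandas. This is only a sketch: the column headers simply mirror the table, and the context name `naiveB` is taken from the parameters used later in this notebook, so verify the exact layout against the shipped template before using it with COMO.
```python
import pandas as pd

# Sketch only: write the example boundary reactions above to a context-specific CSV.
# Column names mirror the example table; check them against the template in
# data/boundary_rxns/ before relying on this exact layout.
boundary_rxns = pd.DataFrame(
    [
        ["Exchange", "glc_D", "Extracellular", -100, 1000],
        ["Demand", "15HPETATP", "Cytosol", -1, 1000],
        ["Sink", "met_L", "Internal Membranes", -1000, 1],
    ],
    columns=["Reaction", "Abbreviation", "Compartment",
             "Minimum Reaction Rate", "Maximum Reaction Rate"],
)
boundary_rxns.to_csv("data/boundary_rxns/naiveB_boundary_rxns.csv", index=False)
```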
## Force Reactions
Force reactions are reactions that should **always** be included in the model, no matter their flux value in the metabolic data provided. In contrast to the boundary reaction list, this is simply a list of reaction names that should be "forced" through the model. Append your force reactions to the `data/force_rxns/default_force_rxns.csv` file, and remove any that are not required. COMO will load this file during model creation
*Below is an example of a properly formatted table of force reactions*
| Reaction |
|:--------:|
| glc_D |
| met_L |
## Adding Reference Models
This Jupyter notebook uses Recon3D's [Virtual Metabolic Human](https://www.vmh.life/) as a base to map reactions onto, and is included with the Jupyter notebook. If you would like to include other reference models, simply upload them to the `data` folder, and set the name of the `general_model_file` below to the name of your reference model.
## Parameters
The following is a list of parameters and their function in this section of the pipeline
- `low_thres`: If you are using the `IMAT` reconstruction algorithm, gene expression above this value will be placed in the "mid-expression" bin
- `high_thres`: If you are using the `IMAT` reconstruction algorithm, gene expression above this value will be placed in the "high-expression" bin
- `output_filetypes`: These are the file types you would like to save your model as. It should be one (or multiple) of the following: `"xml"`, `"mat"`, `"json"`
- `objective_dict`: This is an objective the model should be solved for. Popular options are `"biomass_reaction"` or `"biomass_maintenance"`
- `general_model_file`: This is the reference model file to load
- `recon_algorithm`: The troppo reconstruction algorithm to use. This should be one of the following: `"FastCORE"`, `"CORDA"`, `"GIMME"`, `"tINIT"`, `"IMAT"`
- `solver`: The solver to use for optimizing the model. Options are: `"GUROBI"` or `"GLPK"`
- `boundary_reactions_filename`: The filename of boundary reactions that should be used
- `force_reactions_filename`: The filename of the force reactions to be used. Force reactions will (as the name implies) force the optimizer to use these reactions, **no matter their expression**
- `exclude_reactions_filename`: The filename of reactions to exclude from the model, no matter their expression
<code>
# Set your objectives before running!
objective_dict = {"naiveB": "biomass_maintenance", "smB": "biomass_maintenance"}
# -----------------
low_threshold = -5
high_threshold = -3
output_filetypes = "xml mat json"
general_model_file = "GeneralModelUpdatedV2.mat"
recon_algorithms = ["IMAT"]
solver = "GUROBI"
import json
import os
from pathlib import Path
from como.project import Config
config = Config()
# Load the output of step 1, which is a dictionary that specifies the merged list of active Gene IDs for each tissue
step1_results_file = os.path.join(config.data_dir, "results", "step1_results_files.json")
with open(step1_results_file) as json_file:
context_gene_exp = json.load(json_file)
for recon_algorithm in recon_algorithms:
for context in context_gene_exp.keys():
objective = objective_dict[context]
if recon_algorithm.upper() in ["IMAT", "TINIT"]:
active_genes_filepath = os.path.join(config.data_dir, "results", context, f"model_scores_{context}.csv")
else:
gene_expression_file = context_gene_exp[context]
active_genes_filename = Path(gene_expression_file).name
active_genes_filepath = os.path.join(config.data_dir, "results", context, active_genes_filename)
general_model_filepath = os.path.join(config.data_dir, "GeneralModelUpdatedV2.mat")
boundary_reactions_filepath = os.path.join(config.data_dir, "boundary_rxns", f"{context}_boundary_rxns.csv")
force_reactions_filepath = os.path.join(config.data_dir, "force_rxns", f"{context}_force_rxns.csv")
exclude_reactions_filepath = os.path.join(config.data_dir, "exclude_rxns", f"{context}_exclude_rxns.csv")
# fmt: off
cmd = " ".join(
[
"python3", "como/create_context_specific_model.py",
"--context", context,
"--reference-model-filepath", general_model_filepath,
"--active-genes-filepath", active_genes_filepath,
"--objective", objective,
"--boundary-reactions-filepath", boundary_reactions_filepath,
# "--exclude-reactions-filepath", exclude_reactions_filepath,
"--force-reactions-filepath", force_reactions_filepath,
"--algorithm", recon_algorithm,
"--low-threshold", str(low_threshold),
"--high-threshold", str(high_threshold),
"--solver", solver,
"--output-filetypes", output_filetypes,
]
)
# fmt: on
!{cmd}
</code>
# Generate MEMOTE Reports
> NOTE: This step is entirely optional
MEMOTE is an open-source tool that automates the testing and reporting of metabolic models. Its report is a detailed summary of the tests performed by MEMOTE on a given metabolic model (i.e., the one you just generated), along with the results and recommendations for improving the model. The code below also renders Escher pathway maps for each model, which requires a metabolic "map" file. Several of these are included in COMO, found under `data/maps/RECON1`. If you would like to add your own maps, they can be included in multiple places:
1. If you have mapped a `local_files` directory to the container before starting, you can simply copy-and-paste them into the `local_files/maps` directory using the file browser of your computer. This is the most robust solution because the files will not be deleted by the container after it stops, or if it is updated in the future
2. You can upload them to the Jupyter notebook under the `data/maps` directory. The code block below will search for any `.json` files that are not already included in the `map_dict` dictionary
The resulting Escher maps will be saved to `data/results/exampleTissue/figures/mapName_map_exampleTissue_ALGORITHM.html`, and the MEMOTE report for each model is written to `data/results/exampleTissue/memote_report_exampleTissue_ALGORITHM.html`.
- `mapName`: This is the name of the map file. In the `map_dict` dictionary below, this value would be `trypto`, `retinol`, etc.
- `exampleTissue`: This is the name of the tissue context
- `ALGORITHM`: This is the algorithm (`recon_algorithm`) used in the above model creation step
<code>
import os
from pathlib import Path
import cobra
from como.project import Config
from escher import Builder
config = Config()
user_map_dir = Path(f"{config.data_dir}/local_files/maps/")
map_dict = {
"trypto": f"{config.data_dir}/maps/RECON1/RECON1.tryptophan_metabolism.json",
# "lipid": f"{config.data_dir}/maps/RECON1/RECON1.", # Not present in COMO by default yet
"retinol": f"{config.data_dir}/maps/RECON1/RECON1.inositol_retinol_metabolism.json",
"glyco": f"{config.data_dir}/maps/RECON1/RECON1.glycolysis_TCA_PPP.json",
"combined": f"{config.data_dir}/maps/RECON1/RECON1.combined.json",
"carbo": f"{config.data_dir}/maps/RECON1/RECON1.carbohydrate_metabolism.json",
"amino": f"{config.data_dir}/maps/RECON1/RECON1.amino_acid_partial_metabolism.json",
}
# Collect files from user-input json maps
index = 1
for file in user_map_dir.glob("**/*.json"):
map_dict[file.stem] = file
index += 1
# Collect any additional maps under the `{config.data_dir}/maps/` directory
for file in Path(f"{config.data_dir}/maps").glob("**/*.json"):
if file not in map_dict.values():
map_dict[file.stem] = file
for recon_algorithm in recon_algorithms:
for context in context_gene_exp.keys():
# for context in ["naiveB", "smB"]:
print(f"Starting {context}")
model_json = os.path.join(config.data_dir, "results", context, f"{context}_SpecificModel_{recon_algorithm}.json")
print(f"Loading '{context}', this may take some time...")
model = cobra.io.load_json_model(model_json)
for key in map_dict.keys():
print(f"Running with: {key}")
builder = Builder(map_json=str(map_dict[key]))
builder.model = model
solution = cobra.flux_analysis.pfba(model)
builder.reaction_data = solution.fluxes
builder.reaction_scale = [
{"type": "min", "color": "#ff3300", "size": 12},
{"type": "q1", "color": "#ffc61a", "size": 14},
{"type": "median", "color": "#ffe700", "size": 16},
{"type": "q3", "color": "#4ffd3c", "size": 18},
{"type": "max", "color": "#3399ff", "size": 20},
]
builder.reaction_no_data_color = "#8e8e8e"
builder.save_html(os.path.join(config.data_dir, "results", context, "figures", f"{key}_map_{context}_{recon_algorithm}.html"))
out_dir = os.path.join(config.data_dir, "results", context)
# for algorithm in ["GIMME", "IMAT", "FASTCORE", "tINIT"]:
report_file = os.path.join(out_dir, f"memote_report_{context}_{recon_algorithm}.html")
model_file = os.path.join(out_dir, f"{context}_SpecificModel_{recon_algorithm}.xml")
log_dir = os.path.join(out_dir, "memote")
log_file = os.path.join(log_dir, f"{context}_{recon_algorithm}_memote.log")
if not os.path.exists(log_dir):
os.mkdir(log_dir)
cmd = " ".join(["memote", "report", "snapshot", "--filename", f"{report_file}", f"{model_file}", ">", f"{log_file}"])
!{cmd}
</code>
# Step 3: Disease-related Gene Identification
This step identifies disease-related genes by analyzing patients' transcriptomic data
In the `data/config_sheets` folder, create another folder called `disease`. Add an Excel file for each tissue/cell type called `disease_data_inputs_<TISSUE_NAME>`, where `<TISSUE_NAME>` is the name of the tissue you are interested in. Each sheet of this file should correspond to a separate disease to analyze using differential gene analysis. The file is formatted in the same fashion as described in the [final part of Step 1](#Importing-a-Pre-Generated-Counts-Matrix). The sheet names should be in the following format: `<DISEASE_NAME>_bulk`
- `<DISEASE_NAME>`: This is the name of the disease you are analyzing.
For example, if the disease we are interested in is lupus, and the source of the data is bulk RNA sequencing, the name of the first sheet would be `lupus_bulk`. If you are using bulk RNA sequencing, there should be a gene counts matrix file located at `data/data_matrices/<tissue_name>/<disease>` called `BulkRNAseqDataMatrix<DISEASE_NAME>_<TISSUE_NAME>`
## Parameters
- `disease_names`: The diseases you are using. This should match the first section of the sheet name in the Excel file
- `data_source`: The data source you are using for disease analysis. This should be `"rnaseq"`
- `taxon_id`: The [NCBI Taxon ID](https://www.ncbi.nlm.nih.gov/taxonomy) to use for disease analysis
<code>
disease_names = ["arthritis", "lupus_a", "lupus_b"]
data_source = "rnaseq"
taxon_id = "human"
from como.utils import stringlist_to_list
for context_name in stringlist_to_list(context_names):
disease_config_file = f"disease_data_inputs_{context_name}.xlsx"
# fmt: off
cmd = " ".join(
[
"python3", "como/disease_analysis.py",
"--context-name", context_name,
"--config-file", disease_config_file,
"--data-source", data_source,
"--taxon-id", str(taxon_id),
]
)
# fmt: on
!{cmd}
</code>
# Step 4: Drug Targets & Repurposable Drug Identification
This step performs a series of tasks:
1. Maps drug targets in metabolic models
2. Performs knock-out simulations
3. Compares simulation results with "disease genes"
4. Identifies drug targets and repurposable drugs
## Execution Steps
### Drug Database
A processed drug-target file is included in the `data` folder, called `Repurposing_Hub_export.txt`. If you would like to include an additional drug-target file, please model your own file after the included one. Alternatively, if you would like to update to a newer version of the database, simply export from the [Drug Repurposing Hub](https://clue.io/repurposing-app). If you do this, remove all `activators`, `agonists`, and `withdrawn` drugs. Replace the `data/Repurposing_Hub_export.txt` file.
### Using Automatically Created Models
This step will use the models generated in Step 2, above. It is **highly** recommended to use refined and validated models for further analysis (i.e., before running this step of the pipeline). If you would like to use a custom model instead of the one created by COMO, edit the `model_files` dictionary. An example is shown here:
```python
model_files = {
"exampleTissueModel": "/home/jovyan/main/data/myModels/exampleTissueModel.mat",
"anotherTissueModel": "/home/jovyan/main/data/myModels/anotherTissueModel.json",
"thirdTissueModel": "/home/jovyan/main/data/myModels/thirdTissueModel.xml"
}
```
❗The path `/home/jovyan/main/` **<ins>MUST</ins>** stay the same. If it does not, your model **will not be found**
## Parameters
Other than the `model_files` parameter (if required), the only other parameter for this section is the `solver` option
- `solver`: The solver you would like to use. Available options are `"gurobi"` or `"glpk"`
<code>
# Knock out simulation for the analyzed tissues and diseases
model_files = {
# "context_name": "/path/to/model.mat"
# EXAMPLE -> "Treg": "/home/jovyan/main/data/results/naiveB/naiveB_SpecificModel_IMAT.mat"
}
sovler = "gurobi"
import json
import os
from como.utils import stringlist_to_list
from como.project import Config
config = Config()
drug_raw_file = "Repurposing_Hub_export.txt"
for context in stringlist_to_list(context_names):
for recon_algorithm in recon_algorithms:
for disease in disease_names:
disease_path = os.path.join(config.data_dir, "results", context, disease)
out_dir = os.path.join(config.data_dir, "results", context, disease)
tissue_gene_folder = os.path.join(config.data_dir, context)
os.makedirs(tissue_gene_folder, exist_ok=True)
if not os.path.exists(disease_path):
print(f"Disease path doesn't exist! Looking for {disease_path}")
continue
# load the results of step 3 to dictionary "disease_files"
step3_results_file = os.path.join(config.data_dir, "results", context, disease, "step2_results_files.json")
with open(step3_results_file) as json_file:
disease_files = json.load(json_file)
down_regulated_disease_genes = disease_files["down_regulated"]
up_regulated_disease_genes = disease_files["up_regulated"]
if context in model_files.keys():
tissue_specific_model_filepath = model_files[context]
else:
tissue_specific_model_filepath = os.path.join(config.data_dir, "results", context, f"{context}_SpecificModel_{recon_algorithm}.mat")
# fmt: off
cmd = [
"python3", "como/knock_out_simulation.py",
"--context-model", tissue_specific_model_filepath,
"--context-name", context,
"--disease-name", disease,
"--disease-up", up_regulated_disease_genes,
"--disease-down", down_regulated_disease_genes,
"--raw-drug-file", drug_raw_file,
"--solver", sovler,
# "--test-all"
]
# fmt: on
if recon_algorithm == "IMAT":
cmd.extend(["--reference-flux-file", os.path.join(config.data_dir, "results", context, "IMAT_flux.csv")])
cmd = " ".join(cmd)
!{cmd}
</code>
|
{
"filename": "COMO_2.ipynb",
"repository": "HelikarLab/COMO",
"query": "transformed_from_existing",
"size": 117565,
"sha": ""
}
|
# PXRD_1.ipynb
Repository: molmod/gpxrdpy
<code>
# Import statements
import numpy as np
import matplotlib.pyplot as pt
import glob
import os
from ase.io import read
from pyiron import Project, ase_to_pyiron
from molmod.units import *
from molmod.constants import *
from collections import namedtuple
from dataclasses import dataclass # replaces namedtuple with mutable attributes
%matplotlib inline
</code>
### Functions
<code>
# Gather structures
'''@dataclass
class s_object:
structure: object
ffatypes: object
ffatype_ids: object
bonds : object'''
class Sobject(object):
def __init__(self,structure,ffatypes,ffatype_ids,bonds):
self.structure = structure
self.ffatypes = ffatypes
self.ffatype_ids = ffatype_ids
self.bonds = bonds
</code>
### Execution
<code>
# NOTE: `pr` below is assumed to be an existing pyiron Project (defined earlier in the workflow)
structures_database = {}
for block in glob.glob('./input_files/PXRD/Structures_Database/*/*.chk'):
block_name = block.split('/')[-2]
print(block_name)
tmp = pr.create_job(pr.job_type.Yaff,'tmp',delete_existing_job=True)
tmp.load_chk(block)
structures_database[block_name] = Sobject(tmp.structure,tmp.ffatypes,tmp.ffatype_ids,tmp.bonds)
</code>
<code>
pr_database = Project('PXRD/Juul_database')
pr_database_uff = Project('PXRD/Juul_database_uff')
</code>
#### Optimizations
<code>
# yaff job function
def yaff_opt_job(pr, name, sobject, ffpars):
job = pr.create_job(pr.job_type.Yaff, name, delete_existing_job=True)
job.calc_minimize(max_iter=10000, cell=True)
job.set_ffpars(fnames=ffpars)
job.structure = sobject.structure
job.ffatypes = sobject.ffatypes
job.ffatype_ids = sobject.ffatype_ids
job.bonds = sobject.bonds
job.input['rcut'] = 11.0*angstrom
job.input['alpha_scale'] = 2.86
job.input['gcut_scale'] = 1.0
job.input['tailcorrections'] = True
job.executable.version = '2020'
job.server.queue = 'slaking'
job.server.cores = 1
job.server.run_time = 5*60*60 # in seconds
job.run()
</code>
<code>
for bn,s in structures_database.items():
ffpars = os.path.join('./input_files/PXRD/Structures_Database', bn, 'pars_cluster.txt')
bn = bn.replace('-','_')
yaff_opt_job(pr_database, bn, s, ffpars)
</code>
<code>
for bn,s in structures_database.items():
ffpars = os.path.join('./input_files/PXRD/Structures_Database', bn, 'pars_uff.txt')
bn = bn.replace('-','_')
yaff_opt_job(pr_database_uff, bn, s, ffpars)
</code>
#### Molecular Dynamics
<code>
pr_database_md = Project('PXRD/Juul_database_md')
pr_database_uff_md = Project('PXRD/Juul_database_uff_md')
</code>
<code>
def md_job_v2(pr,name,sobject,structure,ffpars,temp=300*kelvin,press=1e5*pascal,nsteps=400000,
time_step=0.5*femtosecond,n_print=200,timecon_thermo=100.0*femtosecond,timecon_baro=1000.0*femtosecond, repeat=(1,1,1), log_lammps=False):
# MD code
job = pr.create_job(pr.job_type.Yaff, name, delete_existing_job=True)
job.calc_md(temperature=temp, pressure=press, nsteps=nsteps, time_step=time_step, n_print=n_print,
timecon_thermo=timecon_thermo, timecon_baro=timecon_baro)
job.set_ffpars(ffpars)
job.input['rcut'] = 11.0*angstrom
job.input['alpha_scale'] = 2.86
job.input['gcut_scale'] = 1.0
job.input['tailcorrections'] = True
if log_lammps:
job.input['log_lammps'] = 'lammps.log'
# Load structure and atomtypes
job.structure = structure.repeat(repeat)
job.ffatypes = sobject.ffatypes
job.ffatype_ids = np.tile(sobject.ffatype_ids,np.prod(repeat))
# bonds will automatically be detected in the supercell
# assume that optimization has taken care of structural irregularities
job.enable_lammps(executable='2020_lammps')
job.server.queue = 'slaking'
job.server.cores = 8
job.server.run_time = 72*60*60 # in seconds
job.run()
</code>
<code>
for bn,s in structures_database.items():
ffpars = os.path.join('./input_files/PXRD/Structures_Database', bn, 'pars_cluster.txt')
bn = bn.replace('-','_')
opt_structure = pr_database.load(bn).get_structure()
# if the structure is 2D make a 1x1x5 supercell, if it is very small, use a 2x2x5 supercell
if any(bn.startswith(k) for k in ['hcb','kgm','sql']):
if len(opt_structure)<280: # 140 atoms per layer
repeat = (2,2,5)
else:
repeat = (1,1,5)
else:
repeat = (1,1,1)
md_job_v2(pr_database_md,bn,s,opt_structure,ffpars,repeat=repeat)
</code>
<code>
for bn,s in structures_database.items():
ffpars = os.path.join('./input_files/PXRD/Structures_Database', bn, 'pars_uff.txt')
bn = bn.replace('-','_')
opt_structure = pr_database_uff.load(bn).get_structure()
# if the structure is 2D make a 1x1x5 supercell, if it is very small, use a 2x2x5 supercell
if any(bn.startswith(k) for k in ['hcb','kgm','sql']):
if len(opt_structure)<280: # 140 atoms per layer
repeat = (2,2,5)
else:
repeat = (1,1,5)
else:
repeat = (1,1,1)
md_job_v2(pr_database_uff_md,bn,s,opt_structure,ffpars,repeat=repeat)
</code>
#### PXRD
##### Static
<code>
pr_database_pxrd = pr_database.create_group('PXRD')
pr_database_uff_pxrd = pr_database_uff.create_group('PXRD')
</code>
<code>
def static_PXRD_job(pr,name,structure,refpattern=None):
job = pr.create_job(pr.job_type.GPXRD, name, delete_existing_job=True)
job.structure = structure
job.input['jobtype'] = 'static'
if refpattern is not None:
job.set_reference_pattern(refpattern)
job.server.queue = 'slaking'
job.server.cores = 1
job.server.run_time = 15*60 # in seconds
job.run()
#return job
</code>
<code>
for bn in structures_database.keys():
name = bn.replace('-','_')
refpattern = glob.glob('./input_files/PXRD/Structures_Database/'+bn+'/*.dat')[0]
job = pr_database.load(name)
job_uff = pr_database_uff.load(name)
static_PXRD_job(pr_database_pxrd,name,job.get_structure(),refpattern=refpattern)
static_PXRD_job(pr_database_uff_pxrd,name,job_uff.get_structure(),refpattern=refpattern)
</code>
<code>
from matplotlib.ticker import MaxNLocator
for bn in structures_database.keys():
name = bn.replace('-','_')
job = pr_database_pxrd.load(name)
job_uff = pr_database_uff_pxrd.load(name)
ttheta_calc = job.get("output/ttheta_calc")
int_calc = job.get("output/int_calc")
ttheta_uff_calc = job_uff.get("output/ttheta_calc")
int_uff_calc = job_uff.get("output/int_calc")
ttheta_ref = job.get("output/ttheta_ref")
int_ref = job.get("output/int_ref")
stat_res = job.compare(np.array([ttheta_calc,int_calc]).T, np.array([ttheta_ref,int_ref]).T,scale='optimal',verbose=False,plot=False)
stat_res_uff = job.compare(np.array([ttheta_uff_calc,int_uff_calc]).T, np.array([ttheta_ref,int_ref]).T,scale='optimal',verbose=False,plot=False)
print(name)
pt.figure()
pt.plot(ttheta_calc, (int_calc-int_calc.min())*stat_res['scalefactor'],lw=1,label='QuickFF, {:3.2f}'.format(stat_res['R_wp']))
pt.plot(ttheta_uff_calc,(int_uff_calc-int_uff_calc.min())*stat_res_uff['scalefactor'],lw=1,label='UFF, {:3.2f}'.format(stat_res_uff['R_wp']))
pt.plot(ttheta_ref, (int_ref-int_ref.min()),lw=1,label='reference')
ax1 = pt.gca()
ax1.set_xlabel('2θ (°)')
ax1.set_ylabel('Intensity (a.u.)')
ax1.tick_params(
axis='y', # changes apply to the y-axis
which='both', # both major and minor ticks are affected
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelleft=False) # labels along the left edge are off
ax1.ticklabel_format(axis='y', style='plain', useOffset=False) # avoid scientific notation and offsets
ax1.legend(bbox_to_anchor=(1.1,.5), loc='center left',frameon=False)
ax1.xaxis.set_major_locator(MaxNLocator(integer=True))
pt.show()
</code>
##### Background filtering
<code>
from matplotlib.ticker import MaxNLocator
def remove_background(reference,locs=None,bkg_range=None,bkg_points=10,uniform=True,plot=False,fname=None,skiprows=0):
"""
Remove the background from the provided reference pattern
***Args***
reference
file name of reference data (file with two columns, no header)
or a 2D array with (ttheta,intensity)
skiprows
number of rows to skip in reading file (when reference is a filename)
locs
node locations in degrees, default linspace in range with bkg_points
bkg_range
range for background points, default full ttheta range of reference pattern
bkg_points
set number of background points in fit
uniform
if false, the grid points are locally optimized towards the minima for a better interpolation
plot
plot the analysis and difference plot
fname
file location for the plot
***Returns***
a new reference pattern (ttheta,intensity) array
"""
# Interactively use the pyobjcryst code here, it should not be necessary to run a separate job for this
try:
import pyobjcryst
except ImportError:
raise ImportError("The pyobjcryst package is required for this")
pp = pyobjcryst.powderpattern.PowderPattern()
# Add the reference data
if isinstance(reference,str) and os.path.exists(reference):
pp.ImportPowderPattern2ThetaObs(reference,skiprows) # skip the first `skiprows` lines of the file
elif isinstance(reference,np.ndarray):
pp.SetPowderPatternX(reference[:,0]*deg)
pp.SetPowderPatternObs(reference[:,1])
ttheta = pp.GetPowderPatternX()/deg
reference = pp.GetPowderPatternObs()
if ttheta.shape == (0,):
raise ValueError('Did not succeed in loading reference data. Try explicitly loading your data as an array instead.')
# Background
if locs is not None:
bx = np.array(locs)
# Keep first location index for clear plot
else:
if bkg_range is not None:
assert len(bkg_range)==2
bx=np.linspace(bkg_range[0],bkg_range[1],bkg_points)
else:
bx=np.linspace(ttheta.min(),ttheta.max(),bkg_points)
# Keep first location index for clear plot
idx = [np.argmin(np.abs(ttheta-bxi)) for bxi in bx]
reference_idx = idx[0]
# Adapt bx to minima of reference pattern in each neighbourhood (optional)
if locs is None and not uniform:
idx = [np.argmin(np.abs(ttheta-bxi)) for bxi in bx]
step = (idx[1] - idx[0])//4
for n in range(len(bx)):
mn = -step if idx[n]>step else 0
mx = step if n<(len(idx)-1) else 0
bx[n] = ttheta[idx[n]+ mn + np.argmin(reference[idx[n]+mn:idx[n]+mx])]
if n==0: reference_idx = idx[n]+ mn + np.argmin(reference[idx[n]+mn:idx[n]+mx])
bx*=deg
by=np.zeros(bx.shape)
b=pp.AddPowderPatternBackground()
b.SetInterpPoints(bx,by)
b.UnFixAllPar()
b.OptimizeBayesianBackground()
no_bg = pp.GetPowderPatternObs()-pp.GetPowderPatternCalc()
no_bg -= np.min(no_bg)
# Plot the difference
if plot:
# Consider the difference for the delta plot
height = no_bg.max()-no_bg.min()
fig = pt.figure(figsize=(10,6))
ax1 = pt.gca()
background = pp.GetPowderPatternCalc() - pp.GetPowderPatternCalc().min()
reference_pattern = reference - reference.min()
ax1.plot(ttheta,background - (background[reference_idx]- reference_pattern[reference_idx]),lw=1,label='background')
ax1.plot(ttheta,reference_pattern,lw=1,label='reference')
ax1.plot(ttheta,no_bg-height*0.1,color='g',lw=1,label=r'$\Delta$')
ax1.set_xlabel('2θ (°)')
ax1.set_ylabel('Intensity (a.u.)')
# Format the plot
lims = pt.xlim()
ax1.hlines(0,lims[0],lims[1],lw=0.1)
ax1.set_xlim(lims)
lims = pt.ylim()
ax1.vlines(bx/deg,lims[0],lims[1],lw=0.1)
ax1.tick_params(
axis='y', # changes apply to the y-axis
which='both', # both major and minor ticks are affected
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelleft=False) # labels along the left edge are off
ax1.ticklabel_format(axis='y', style='plain', useOffset=False) # avoid scientific notation and offsets
ax1.legend(bbox_to_anchor=(1.1,.5), loc='center left',frameon=False)
ax1.xaxis.set_major_locator(MaxNLocator(integer=True))
if fname is None:
fig.tight_layout()
pt.show()
else:
pt.savefig(fname+'.pdf',bbox_inches='tight')
pt.close()
if fname is not None:
with open(fname + '.tsv','w') as f:
for i in range(len(ttheta)):
f.write("{:4.3f}\t{:10.8f}\n".format(ttheta[i],no_bg[i]))
return np.array([ttheta, no_bg]).T
</code>
<code>
# Order of the keys in structures_database (used for the per-structure cells below):
# 'dia_30-03-12_06-12-12'
# 'kgm_31-03-04_01-02-04'
# 'hcb_34-11-10_01-06-10'
# 'ctn_33-11-02_29-01-02_None'
# 'bor_18-08-01_28-01-01_None'
# 'hcb_11-02-06_None'
# 'sql_24-01-01_10-08-01'
</code>
<code>
# 'dia_30-03-12_06-12-12'
bn = list(structures_database.keys())[0]
name = bn.replace('-','_')
job = pr_database_pxrd.load(name)
refpattern = np.array([job['output/ttheta_ref'],job['output/int_ref']]).T
refpattern_1 = remove_background(refpattern, bkg_range=(2,10), bkg_points=2, uniform=True,plot=False)
refpattern_2 = remove_background(refpattern_1, bkg_points=3, uniform=True,plot=False)
refpattern_3 = remove_background(refpattern_2, bkg_range=(20,40), bkg_points=3, uniform=True,plot=False,fname='./input_files/PXRD/Structures_Database/'+bn+'/background_filtered')
#job.compare(refpattern,refpattern_3,scale=False,plot=True)
</code>
<code>
# 'kgm_31-03-04_01-02-04'
bn = list(structures_database.keys())[1]
name = bn.replace('-','_')
print(name)
job = pr_database_pxrd.load(name)
#refpattern = np.array([job['output/ttheta_ref'],job['output/int_ref']]).T
#refpattern_1 = remove_background(refpattern, bkg_range=(1,8), bkg_points=4, uniform=False,plot=True)
#refpattern_2 = remove_background(refpattern_1, bkg_points=3, uniform=True,plot=False)
#refpattern_3 = remove_background(refpattern_2, bkg_range=(20,40), bkg_points=3, uniform=True,plot=False,fname='./input_files/PXRD/Structures_Database/'+bn+'/background_filtered')
#job.compare(refpattern,refpattern_3,scale=False,plot=True)
</code>
<code>
# 'hcb_34-11-10_01-06-10'
bn = list(structures_database.keys())[2]
name = bn.replace('-','_')
job = pr_database_pxrd.load(name)
refpattern = np.array([job['output/ttheta_ref'],job['output/int_ref']]).T
plot=False
refpattern_1 = remove_background(refpattern, bkg_points=5, uniform=True,plot=plot)
refpattern_2 = remove_background(refpattern_1, bkg_range=(10,40), bkg_points=2, uniform=True,plot=plot)
refpattern_3 = remove_background(refpattern_2, bkg_points=5, uniform=True,plot=plot)
refpattern_4 = remove_background(refpattern_3, bkg_range=(10,40), bkg_points=3, uniform=True,plot=plot)
refpattern_5 = remove_background(refpattern_4, bkg_range=(35,40), bkg_points=3, uniform=True,plot=plot,fname='./input_files/PXRD/Structures_Database/'+bn+'/background_filtered')
#job.compare(refpattern,refpattern_5,scale=False,plot=True)
</code>
<code>
# 'ctn_33-11-02_29-01-02_None'
bn = list(structures_database.keys())[3]
name = bn.replace('-','_')
job = pr_database_pxrd.load(name)
refpattern = np.array([job['output/ttheta_ref'],job['output/int_ref']]).T
plot=True
# no fit necessary
#job.compare(refpattern,refpattern_5,scale=False,plot=True)
</code>
<code>
# 'bor_18-08-01_28-01-01_None'
bn = list(structures_database.keys())[4]
name = bn.replace('-','_')
job = pr_database_pxrd.load(name)
refpattern = np.array([job['output/ttheta_ref'],job['output/int_ref']]).T
plot=False
refpattern_1 = remove_background(refpattern, bkg_points=3, uniform=True,plot=plot)
refpattern_2 = remove_background(refpattern_1, bkg_range=(9,35), bkg_points=4, uniform=True,plot=plot)
refpattern_3 = remove_background(refpattern_2, bkg_points=4, uniform=True,plot=plot,fname='./input_files/PXRD/Structures_Database/'+bn+'/background_filtered')
#job.compare(refpattern,refpattern_3,scale=False,plot=True)
</code>
<code>
# 'hcb_11-02-06_None'
bn = list(structures_database.keys())[5]
name = bn.replace('-','_')
job = pr_database_pxrd.load(name)
refpattern = np.array([job['output/ttheta_ref'],job['output/int_ref']]).T
plot=False
refpattern_1 = remove_background(refpattern, bkg_points=4, uniform=True,plot=plot)
refpattern_2 = remove_background(refpattern_1, bkg_points=2, uniform=True,plot=plot)
refpattern_3 = remove_background(refpattern_2, bkg_points=4, uniform=True,plot=plot)
refpattern_4 = remove_background(refpattern_3, bkg_range=(16,60), bkg_points=2, uniform=True,plot=plot)
refpattern_5 = remove_background(refpattern_4, bkg_points=4, uniform=False,plot=plot)
refpattern_6 = remove_background(refpattern_5, bkg_range=(38,60), bkg_points=3, uniform=False,plot=plot,fname='./input_files/PXRD/Structures_Database/'+bn+'/background_filtered')
#job.compare(refpattern,refpattern_6,scale=False,plot=True)
</code>
<code>
# 'sql_24-01-01_10-08-01'
bn = list(structures_database.keys())[6]
name = bn.replace('-','_')
job = pr_database_pxrd.load(name)
refpattern = np.array([job['output/ttheta_ref'],job['output/int_ref']]).T
plot=False
refpattern_1 = remove_background(refpattern, bkg_range=(2,13), bkg_points=3, uniform=False,plot=plot)
refpattern_2 = remove_background(refpattern_1,bkg_range=(2,50), bkg_points=2, uniform=True,plot=plot)
refpattern_3 = remove_background(refpattern_2, bkg_points=4, uniform=True,plot=plot)
refpattern_4 = remove_background(refpattern_3, bkg_points=7, uniform=False,plot=plot)
refpattern_5 = remove_background(refpattern_4, bkg_range=(15,50), bkg_points=7, uniform=False,plot=plot)
refpattern_6 = remove_background(refpattern_5, bkg_points=4, uniform=False,plot=plot,fname='./input_files/PXRD/Structures_Database/'+bn+'/background_filtered')
#job.compare(refpattern,refpattern_6,scale=False,plot=True)
</code>
##### Static w background filtering
<code>
pr_database_pxrd_bg = pr_database.create_group('PXRD_bg')
pr_database_uff_pxrd_bg = pr_database_uff.create_group('PXRD_bg')
</code>
<code>
for bn in structures_database.keys():
name = bn.replace('-','_')
background_filtering = glob.glob('./input_files/PXRD/Structures_Database/'+bn+'/background_filtered.tsv')
if len(background_filtering)==0:
refpattern = glob.glob('./input_files/PXRD/Structures_Database/'+bn+'/*.dat')[0]
else:
refpattern = background_filtering[0]
job = pr_database.load(name)
job_uff = pr_database_uff.load(name)
static_PXRD_job(pr_database_pxrd_bg,name,job.get_structure(),refpattern=refpattern)
static_PXRD_job(pr_database_uff_pxrd_bg,name,job_uff.get_structure(),refpattern=refpattern)
</code>
<code>
from matplotlib.ticker import MaxNLocator
for bn in structures_database.keys():
name = bn.replace('-','_')
job = pr_database_pxrd_bg.load(name)
job_uff = pr_database_uff_pxrd_bg.load(name)
ttheta_calc = job.get("output/ttheta_calc")
int_calc = job.get("output/int_calc")
ttheta_uff_calc = job_uff.get("output/ttheta_calc")
int_uff_calc = job_uff.get("output/int_calc")
ttheta_ref = job.get("output/ttheta_ref")
int_ref = job.get("output/int_ref")
stat_res = job.compare(np.array([ttheta_calc,int_calc]).T, np.array([ttheta_ref,int_ref]).T,scale='optimal',verbose=False,plot=False)
stat_res_uff = job.compare(np.array([ttheta_uff_calc,int_uff_calc]).T, np.array([ttheta_ref,int_ref]).T,scale='optimal',verbose=False,plot=False)
print(name)
pt.figure()
pt.plot(ttheta_calc, (int_calc-int_calc.min())*stat_res['scalefactor'],lw=1,label='QuickFF, {:3.2f}'.format(stat_res['R_wp']))
pt.plot(ttheta_uff_calc,(int_uff_calc-int_uff_calc.min())*stat_res_uff['scalefactor'],lw=1,label='UFF, {:3.2f}'.format(stat_res_uff['R_wp']))
pt.plot(ttheta_ref, (int_ref-int_ref.min()),lw=1,label='reference')
ax1 = pt.gca()
ax1.set_xlabel('2θ (°)')
ax1.set_ylabel('Intensity (a.u.)')
ax1.tick_params(
axis='y', # changes apply to the y-axis
which='both', # both major and minor ticks are affected
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelleft=False) # labels along the left edge are off
ax1.ticklabel_format(axis='y', style='plain', useOffset=False) # avoid scientific notation and offsets
ax1.legend(bbox_to_anchor=(1.1,.5), loc='center left',frameon=False)
ax1.xaxis.set_major_locator(MaxNLocator(integer=True))
pt.show()
</code>
##### Dynamic
<code>
pr_database_md_pxrd = pr_database_md.create_group('PXRD')
pr_database_uff_md_pxrd = pr_database_uff_md.create_group('PXRD')
</code>
<code>
def dynamic_PXRD_job_v2(pr,name,md_job,refpattern=None,start=1000,num_frames=50,cores=1,cluster='victini',reservation_tag=None):
job = pr.create_job(pr.job_type.GPXRD, name, delete_existing_job=True)
job.load_trajectory(md_job,start=start,num_frames=num_frames)
if refpattern is not None:
job.set_reference_pattern(refpattern)
# We dont want individual fhkl indices
job.input['save_fhkl'] = False
job.server.queue = cluster
job.server.cores = cores
job.server.run_time = 5*60*60 # in seconds
job.server.reservation_tag = reservation_tag
job.run()
#return job
</code>
<code>
for bn in structures_database.keys():
name = bn.replace('-','_')
refpattern = glob.glob('./input_files/PXRD/Structures_Database/'+bn+'/*.dat')[0]
job = pr_database_md.load(name)
job_uff = pr_database_uff_md.load(name)
if job.status.finished:
dynamic_PXRD_job_v2(pr_database_md_pxrd,name,job,refpattern=refpattern,cluster='slaking',cores=1)
if job_uff.status.finished:
dynamic_PXRD_job_v2(pr_database_uff_md_pxrd,name,job_uff,refpattern=refpattern,cluster='slaking',cores=1)
</code>
<code>
from matplotlib.ticker import MaxNLocator
for bn in structures_database.keys():
name = bn.replace('-','_')
try:
job = pr_database_md_pxrd.load(name)
job_uff = pr_database_uff_md_pxrd.load(name)
ttheta_calc = job.get("output/ttheta_calc")
int_calc = job.get("output/int_calc")
ttheta_uff_calc = job_uff.get("output/ttheta_calc")
int_uff_calc = job_uff.get("output/int_calc")
ttheta_ref = job.get("output/ttheta_ref")
int_ref = job.get("output/int_ref")
except AttributeError:
continue
stat_res = job.compare(np.array([ttheta_calc,int_calc]).T, np.array([ttheta_ref,int_ref]).T,scale='optimal',verbose=False,plot=False)
stat_res_uff = job.compare(np.array([ttheta_uff_calc,int_uff_calc]).T, np.array([ttheta_ref,int_ref]).T,scale='optimal',verbose=False,plot=False)
print(name)
pt.figure(figsize=(20,10))
pt.plot(ttheta_calc, (int_calc-int_calc.min())*stat_res['scalefactor'],lw=1,label='QuickFF, {:3.2f}'.format(stat_res['R_wp']))
pt.plot(ttheta_uff_calc,(int_uff_calc-int_uff_calc.min())*stat_res_uff['scalefactor'],lw=1,label='UFF, {:3.2f}'.format(stat_res_uff['R_wp']))
pt.plot(ttheta_ref, (int_ref-int_ref.min()),lw=1,label='reference')
ax1 = pt.gca()
ax1.set_xlabel('2θ (°)')
ax1.set_ylabel('Intensity (a.u.)')
ax1.tick_params(
axis='y', # changes apply to the y-axis
which='both', # both major and minor ticks are affected
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelleft=False) # labels along the left edge are off
ax1.ticklabel_format(axis='y', style='plain', useOffset=False) # avoid scientific notation and offsets
ax1.legend(bbox_to_anchor=(1.1,.5), loc='center left',frameon=False)
ax1.xaxis.set_major_locator(MaxNLocator(integer=True))
pt.savefig(name+'.png',bbox_inches='tight')
pt.show()
</code>
##### Dynamic w background filtering
<code>
pr_database_md_pxrd_bg = pr_database_md.create_group('PXRD_bg')
pr_database_uff_md_pxrd_bg = pr_database_uff_md.create_group('PXRD_bg')
</code>
<code>
for bn in structures_database.keys():
name = bn.replace('-','_')
background_filtering = glob.glob('./input_files/PXRD/Structures_Database/'+bn+'/background_filtered.tsv')
if len(background_filtering)==0:
refpattern = glob.glob('./input_files/PXRD/Structures_Database/'+bn+'/*.dat')[0]
else:
refpattern = background_filtering[0]
job = pr_database_md.load(name)
job_uff = pr_database_uff_md.load(name)
if job.status.finished:
if pr_database_md_pxrd_bg.load(name) is None:
dynamic_PXRD_job_v2(pr_database_md_pxrd_bg,name,job,refpattern=refpattern,cluster='slaking',cores=1)
elif pr_database_md_pxrd_bg.load(name).status.aborted:
dynamic_PXRD_job_v2(pr_database_md_pxrd_bg,name,job,refpattern=refpattern,cluster='slaking',cores=8)
if job_uff.status.finished:
if pr_database_uff_md_pxrd_bg.load(name) is None:
dynamic_PXRD_job_v2(pr_database_uff_md_pxrd_bg,name,job_uff,refpattern=refpattern,cluster='slaking',cores=1)
elif pr_database_uff_md_pxrd_bg.load(name).status.aborted:
dynamic_PXRD_job_v2(pr_database_uff_md_pxrd_bg,name,job_uff,refpattern=refpattern,cluster='slaking',cores=8)
</code>
<code>
from matplotlib.ticker import MaxNLocator
for bn in structures_database.keys():
name = bn.replace('-','_')
try:
job = pr_database_md_pxrd_bg.load(name)
job_uff = pr_database_uff_md_pxrd_bg.load(name)
ttheta_calc = job.get("output/ttheta_calc")
int_calc = job.get("output/int_calc")
ttheta_uff_calc = job_uff.get("output/ttheta_calc")
int_uff_calc = job_uff.get("output/int_calc")
ttheta_ref = job.get("output/ttheta_ref")
int_ref = job.get("output/int_ref")
except AttributeError:
continue
stat_res = job.compare(np.array([ttheta_calc,int_calc]).T, np.array([ttheta_ref,int_ref]).T,scale='optimal',verbose=False,plot=False)
stat_res_uff = job.compare(np.array([ttheta_uff_calc,int_uff_calc]).T, np.array([ttheta_ref,int_ref]).T,scale='optimal',verbose=False,plot=False)
print(name)
pt.figure(figsize=(20,10))
pt.plot(ttheta_calc, (int_calc-int_calc.min())*stat_res['scalefactor'],lw=1,label='QuickFF, {:3.2f}'.format(stat_res['R_wp']))
pt.plot(ttheta_uff_calc,(int_uff_calc-int_uff_calc.min())*stat_res_uff['scalefactor'],lw=1,label='UFF, {:3.2f}'.format(stat_res_uff['R_wp']))
pt.plot(ttheta_ref, (int_ref-int_ref.min()),lw=1,label='reference')
ax1 = pt.gca()
ax1.set_xlabel('2θ (°)')
ax1.set_ylabel('Intensity (a.u.)')
ax1.tick_params(
axis='y', # changes apply to the y-axis
which='both', # both major and minor ticks are affected
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelleft=False) # labels along the left edge are off
ax1.ticklabel_format(axis='y', style='plain', useOffset=False) # avoid scientific notation and offsets
ax1.legend(bbox_to_anchor=(1.1,.5), loc='center left',frameon=False)
ax1.xaxis.set_major_locator(MaxNLocator(integer=True))
#pt.savefig(name+'.png',bbox_inches='tight')
pt.show()
</code>
|
{
"filename": "PXRD_1.ipynb",
"repository": "molmod/gpxrdpy",
"query": "transformed_from_existing",
"size": 42368,
"sha": ""
}
|
# Assignments_Regression_5_1.ipynb
Repository: MayankG001/PW
Q1. What is Elastic Net Regression and how does it differ from other regression techniques?
Elastic Net Regression is a type of linear regression that combines the penalties of Lasso (L1) and Ridge (L2) methods. It aims to improve model accuracy and prevent overfitting by applying both L1 and L2 regularization simultaneously. This helps in situations where there are multiple features that are correlated.
* Lasso Regression: Adds L1 regularization, which can shrink some coefficients to zero, effectively performing feature selection.
* Ridge Regression: Adds L2 regularization, which shrinks the coefficients but doesn’t zero them out.
* Elastic Net Regression: Combines both L1 and L2 regularization, allowing it to handle more complex data scenarios where Lasso or Ridge alone might not perform well.
Q2. How do you choose the optimal values of the regularization parameters for Elastic Net Regression?
The optimal values for the regularization parameters in Elastic Net Regression are typically chosen using cross-validation. Here’s a common approach (a short sketch follows the list):
1. Grid Search: Define a grid of potential values for the parameters (α, λ).
2. Cross-Validation: For each combination of parameters, perform cross-validation to evaluate the model performance.
3. Select Best Parameters: Choose the combination of parameters that provides the best cross-validation performance (e.g., lowest mean squared error).
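As a brief, self-contained illustration of this grid-search plus cross-validation workflow (the data is synthetic and the parameter grid is arbitrary):
<code>
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import GridSearchCV

# Synthetic data purely for illustration
X, y = make_regression(n_samples=200, n_features=20, noise=10, random_state=0)

# Candidate regularization parameters: alpha = overall strength, l1_ratio = L1/L2 mix
param_grid = {"alpha": [0.01, 0.1, 1.0, 10.0], "l1_ratio": [0.1, 0.5, 0.9]}

search = GridSearchCV(
    ElasticNet(max_iter=10000),
    param_grid,
    scoring="neg_mean_squared_error",
    cv=5,
)
search.fit(X, y)
print(search.best_params_)  # combination with the lowest cross-validated MSE
</code>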
Q3. What are the advantages and disadvantages of Elastic Net Regression?
`Advantages:`
* Feature Selection: Can perform feature selection by shrinking some coefficients to zero.
* Handles Multicollinearity: Better handles situations where features are highly correlated.
* Stability: Provides a more stable model by combining L1 and L2 penalties.
`Disadvantages:`
* Complexity: More complex to tune compared to Lasso or Ridge due to the need to select two parameters (α and λ).
* Interpretability: The model can be less interpretable due to the combined regularization effects.
Q4. What are some common use cases for Elastic Net Regression?
* Genomics: Used in genetics to handle high-dimensional data with correlated variables.
* Finance: Predicting stock prices where features are often correlated.
* Healthcare: Modeling disease progression where multiple biomarkers may be correlated.
* Marketing: Predicting customer behavior where demographic and behavioral data may be correlated.
Q5. How do you interpret the coefficients in Elastic Net Regression?
* Magnitude: The magnitude of the coefficients indicates the strength of the relationship between the feature and the response variable.
* Sign: The sign (positive or negative) indicates the direction of the relationship.
* Zero Coefficients: Coefficients shrunk to zero imply that those features are not important for the model.
* Regularization Effect: Coefficients are adjusted based on the regularization terms, which means they are not purely indicative of the relationship as they would be in ordinary least squares regression.
Q6. How do you handle missing values when using Elastic Net Regression?
* Imputation: Common techniques like mean, median, or mode imputation, or more sophisticated methods like K-nearest neighbors (KNN) imputation (a simple sketch follows this list).
* Model-Based Imputation: Use regression models to predict and fill missing values.
* Removal: In cases with a large amount of data, rows with missing values can be removed if the missingness is random.
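For example, a simple mean-imputation workflow with scikit-learn (tiny synthetic data, purely illustrative):
<code>
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import ElasticNet

# Tiny synthetic matrix with missing values, purely for illustration
X = np.array([[1.0, 2.0], [np.nan, 3.0], [7.0, np.nan], [4.0, 5.0]])
y = np.array([1.0, 2.0, 3.0, 4.0])

# Replace each NaN with the column mean, then fit Elastic Net on the imputed data
X_imputed = SimpleImputer(strategy="mean").fit_transform(X)
model = ElasticNet(alpha=0.1).fit(X_imputed, y)
print(model.coef_)
</code>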
Q7. How do you use Elastic Net Regression for feature selection?
Elastic Net performs feature selection by shrinking some coefficients to zero. This effectively removes the influence of these features from the model. To use it explicitly for feature selection (a short sketch follows these steps):
* Train Elastic Net Model: Train the model with a chosen set of α and λ.
* Identify Non-zero Coefficients: Identify features with non-zero coefficients.
* Subset Data: Use these features to create a reduced dataset.
* Refit Model: Optionally, refit a new model using the reduced set of features.
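A short sketch of these four steps on synthetic data (the `alpha` and `l1_ratio` values are arbitrary):
<code>
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet

# Synthetic data: only a few of the 30 features are truly informative
X, y = make_regression(n_samples=200, n_features=30, n_informative=5, noise=5, random_state=0)

# 1. Train Elastic Net with a chosen alpha and l1_ratio
model = ElasticNet(alpha=1.0, l1_ratio=0.9).fit(X, y)

# 2. Identify features with non-zero coefficients
selected = np.flatnonzero(model.coef_)
print("kept features:", selected)

# 3. Subset the data to those features and 4. optionally refit a model on them
X_reduced = X[:, selected]
refit = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X_reduced, y)
</code>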
Q8. How do you pickle and unpickle a trained Elastic Net Regression model in Python?
<code>
# Pickle a Model:
import pickle
from sklearn.linear_model import ElasticNet
# Train your model (X_train and y_train are assumed to be defined)
model = ElasticNet()
model.fit(X_train, y_train)
# Save the model
with open('elastic_net_model.pkl', 'wb') as f:
pickle.dump(model, f)
</code>
<code>
# Unpickle a Model:
import pickle
# Load the model
with open('elastic_net_model.pkl', 'rb') as f:
loaded_model = pickle.load(f)
# Use the model (X_test is assumed to be defined)
predictions = loaded_model.predict(X_test)
</code>
Q9. What is the purpose of pickling a model in machine learning?
Pickling a model in machine learning serves several purposes:
* Persistence: Save a trained model to disk for later use without retraining.
* Portability: Transfer models between different systems or environments.
* Deployment: Deploy models in production environments for making predictions.
* Version Control: Maintain different versions of models for comparison or rollback.
|
{
"filename": "Assignments_Regression_5_1.ipynb",
"repository": "MayankG001/PW",
"query": "transformed_from_existing",
"size": 9211,
"sha": ""
}
|
# bottleneck_Phylogenetic-Analysis_1.ipynb
Repository: jbloomlab/SARS-CoV-2
## Phylogenetic Analysis
The goal of this notebook is to perform phylogenetic inference on the samples from the boat as well as other genomes sampled from around the same time as the boat outbreak.
**Requirements:**
Make sure you have `Biopython`, `MAFFT`, `BLAST`, and `ETE3` installed in your environment.
<code>
# Change the directory if running interactivley
%cd /fh/fast/bloom_j/computational_notebooks/whannon/2020/SARS-CoV-2_bottleneck
</code>
<code>
import os
import math
from io import StringIO
from itertools import combinations, product
from collections import Counter, defaultdict
from subprocess import call
import pandas as pd
import numpy as np
from ete3 import Tree, TreeStyle, NodeStyle, TextFace, SequenceFace, CircleFace, faces, AttrFace
from Bio import SeqIO, AlignIO, SeqRecord, Seq, Alphabet
from Bio.Align.Applications import MafftCommandline
</code>
<code>
# Path to output directory
outpath = "results/phylogeny"
if not os.path.exists(outpath):
os.mkdir(outpath)
</code>
## Functions for Phylogenetic Trees
These functions are used to make the phylogenetic trees with `MAFFT` for the alignment and `IQtree` for the phylodynamic inference.
<code>
def mask_positions(record, positions = [265, 29674]):
"""
Positions to exclude from the ends of the genome for alignments and phylogenetics.
For now, the default will be to exclude the 3' and 5' UTRs.
"""
record.seq = record.seq[positions[0]:positions[1]]
return record
def make_tree(
fastapath, outpath, prefix, referencepath = "config/index/samtools/SARS2.fa", large = True, mask = True
):
"""
Function to align sequences and build a phylogenetic tree. Alignment is
performed using MAFFT and tree is built with IQtree.
"""
# Path for the output aligned fasta
alignfasta = f"{outpath}/{prefix}.aligned.fa"
print(f"Aligning the fasta file: {fastapath}\n")
if large:
# When there are lots of sequences, align to a reference. -- this is default behavior
call(f"mafft --6merpair --thread {4} --keeplength --addfragments {fastapath} {referencepath} > {alignfasta}", shell=True)
else:
mafft_cline = MafftCommandline(input=fastapath)
stdout, stderr = mafft_cline()
with open(alignfasta, "w") as handle:
handle.write(stdout)
print(f"Finshed aligning file. Alignment is located at {alignfasta}\n")
# Default behavior is to make the untranslated regions of SARS-CoV-2 genome
if mask:
SeqIO.write([mask_positions(record) for record in SeqIO.parse(alignfasta, "fasta")], alignfasta, "fasta")
print(f"Building treefile: {alignfasta}.treefile\n")
# Build the phylogeny with IQtree: 1000 ultrafast bootstrap replicates (-bb), the GTR+I+G model (invariable sites plus a discrete Gamma), and ancestral state reconstruction (-asr)
call(f"iqtree -s {alignfasta} -m GTR+I+G -bb 1000 -asr -st DNA -redo", shell=True)
print(f"Finished building treefile.\n")
def ancestral_snps(tree, states, nodes, reference_list, offset = 265):
"""
Function to get the SNPs (differences from a reference) for the common ancestor of a given list of nodes.
Params
------
tree: TreeNode
A tree from IQtree with -asr flag.
states: str
A path to the *.state file from IQtree -asr
nodes: list
A list of nodes to find the common ancestor of
reference_list: list
A list of the reference bases to call SNPs against
offset: int
The offset of the state relative to reference from sequence masking
Return
------
pd.DataFrame
A table of the SNPs relative to the reference provided for an inferred ancestral state
"""
# Import the internal states from IQtree `-asr` .state file
states_df = (
pd.read_csv(states,
sep='\t',
comment='#',
usecols=['Node', 'Site', 'State'])
.assign(Site = lambda x: x['Site'] + offset)
)
# Get the common ancestor to the nodes provided
mrca = tree.get_common_ancestor(nodes).name.split("/")[0]
# Get the ancestral state of the MRCA
ancestral_state = states_df[states_df.Node == mrca]
# Fill the missing sequence with the reference
ancestral_state_dict = defaultdict(int, zip(ancestral_state.Site, ancestral_state.State))
for position, base in list(enumerate(reference_list)):
if position+1 not in ancestral_state_dict.keys():
ancestral_state_dict[position+1] = base
ancestral_state_list = sorted((position, base) for position, base in ancestral_state_dict.items())
ancestral_state_list_with_ref = [[tup[0], reference_list[i], tup[1]] for i, tup in enumerate(ancestral_state_list)]
# Get just the differences from the reference as a dataframe
consensus_snps_df = pd.DataFrame(ancestral_state_list_with_ref, columns = ["POS", "REF", "ALT"])
consensus_snps_df['SNP'] = np.where(consensus_snps_df['REF'] == consensus_snps_df['ALT'], False, True)
return consensus_snps_df[consensus_snps_df.SNP]
</code>
## Boat Genome Phylodynamics
This first part of the notebook contains an analysis of the genomes collected from the boat and **re-sequenced** from two different reverse transcription experiments.
I made the consensus sequences with `workflow/scripts/make-consensus-sequence.py`. To do this, I took an aligned `BAM` file and counted the occurrences of each nucleotide at every position. If fewer than 100 reads covered a given site (with `BQ > 25`), I coded the site as an `N`. Then, using both replicates, I filled in missing (`N`) nucleotides with the calls from the more highly covered positions in the other replicate.
**This method doesn't take into account insertions and deletions**. Since the analysis is mostly focused on SNPs in the intrahost population, I thought this was reasonable.
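The actual logic lives in `workflow/scripts/make-consensus-sequence.py`; the toy sketch below only illustrates the replicate-filling step described above (the function name and example strings are hypothetical).
<code>
# Toy sketch of the replicate-filling step: positions coded 'N' in one replicate
# (too few reads with BQ > 25) are filled with the call from the other replicate.
def fill_missing(primary, secondary):
    """Fill 'N' sites in the primary consensus with calls from the secondary replicate."""
    return "".join(b2 if b1 == "N" and b2 != "N" else b1
                   for b1, b2 in zip(primary, secondary))

# Example: each replicate rescues the site the other could not call.
fill_missing("ACNTG", "ACGTN")  # -> 'ACGTG'
</code>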
<code>
# Path to the aligned sequences - aligned in the snakemake pipeline
boat_genomes_path = "results/consensus/aligned_consensus.fa"
boat_genomes = {record.id : record.seq for record in SeqIO.parse(boat_genomes_path, "fasta")}
</code>
#### Edit Distance
Here is the edit distance (Hamming distance) between each of the genomes. The most divergent sample is `10136`. We know that this sample probably wasn't infected by the same virus that spread on the boat. It also doesn't seem like any of the cases on the boat came from this person. On average, there are about 2 nucleotide differences between all samples, except for `10136`, which averages ~9 differences per sample.
<code>
# Anonymous function for calculating edit distance
distance = lambda x,y : sum(c1 != c2 for c1, c2 in zip(x, y))
# Distance with every re-sequenced boat sample
edit_distance = {f"{g1}-{g2}": distance(boat_genomes[g1], boat_genomes[g2]) for g1, g2 in combinations(boat_genomes.keys(), 2)}
# Distance of every sample to the non-boat sample, 10136
outlier_distance = {f"{g1}-{g2}": distance(boat_genomes[g1], boat_genomes[g2]) for g1, g2 in combinations(boat_genomes.keys(), 2) if g1 == '10136' or g2 == '10136'}
# Distance between every sample on the boat, not including the re-sequenced sample 10136.
wout_outlier_distance = {f"{g1}-{g2}": distance(boat_genomes[g1], boat_genomes[g2]) for g1, g2 in combinations(boat_genomes.keys(), 2) if g1 != '10136' and g2 != '10136'}
print(f"The mean pairwise distance between samples is {sum(v for v in edit_distance.values()) / len(edit_distance):.2f}.\n")
print(f"However, this includes a sample (10136) that is highly diverged from the rest with a mean edit distance of {sum(v for v in outlier_distance.values()) / len(outlier_distance):.2f} from any other sample.\n")
print(f"The mean pairwise distance between samples excluding 10136 is {sum(v for v in wout_outlier_distance.values()) / len(wout_outlier_distance):.2f} from any other sample.")
</code>
#### Consensus Differences
Here are all the differences between the 24 genomes and the reference (Wuhan-1). I collapsed patients with the same consensus sequences into a single row.
Of the 24 specimens, there are two clusters of identical genomes that make up roughly half of all the resequenced samples.
<code>
# Wuhan-1 reference
reference = [base for record in SeqIO.parse("config/ref/SARS2.fa", "fasta") for base in record.seq]
# Get a list of differences from the reference for each boat genome.
consesus = {}
for patient, genome in boat_genomes.items():
differences = []
for i, bases in enumerate(zip(reference, genome.upper())):
if len(set(bases)) > 1:
differences.append((i, bases))
consesus[patient] = differences
# Get only the unique differences
consensus_differences = set(snp for snp_list in consesus.values() for snp in snp_list)
# Fill in the missing SNPs in the dict
for patient, snps in consesus.items():
for snp in consensus_differences:
if snp not in snps:
consesus[patient].append((snp[0], (snp[1][0], snp[1][0])))
# Condense the identical sequences
unique_consensus = {}
for patient, snps in consesus.items():
con = ' '.join([snp[1][1] for snp in sorted(snps, key = (lambda s: s[0]))])
if con in unique_consensus.keys():
unique_consensus[con].append(patient)
else:
unique_consensus[con] = [patient]
print(f"Position: {' '.join([str(snp[0] + 1) for snp in sorted(consensus_differences, key = (lambda s: s[0]))])}")
print(f"\nReference: \t\t\t\t{' '.join([snp[1][0] for snp in sorted(consensus_differences, key = (lambda s: s[0]))])}")
for snps, patients in unique_consensus.items():
print(f"\n{'/'.join(patients)}: \n\t\t\t\t\t{snps}")
</code>
#### Boat 'Consensus'
These are the mutations that are different from the reference, but shared between all 24 people we've **re-sequenced** on the boat.
<code>
# Save a list of the mutations shared in all 24 genomes
all_shared_variants = []
for k,v in Counter(snp for snps in consesus.values() for snp in snps).items():
if v == 24:
all_shared_variants.append(k)
# Save a list of the mutations shared in all 24 genomes -- *EXCLUDING 10136
shared_variants_without_10136 = []
for k,v in Counter(snp for patient, snps in consesus.items() if patient != "10136" for snp in snps).items():
if v == 23:
shared_variants_without_10136.append(k)
print(f"Mutations found in every specimen including 10136:\n\t{[''.join([alleles[0], str(pos+1), alleles[1]]) for pos, alleles in all_shared_variants]}\n")
print(f"Mutations found in every specimen EXCLUDING 10136:\n\t{[''.join([alleles[0], str(pos+1), alleles[1]]) for pos, alleles in shared_variants_without_10136]}\n")
</code>
<code>
variable_sites = []
for k,v in Counter(snp for snps in consesus.values() for snp in snps).items():
if k[1][0] != k[1][1]:
if v < 24:
variable_sites.append(k[0] + 1)
print("Consensus mutations that are not shared among all re-sequenced samples:", (*sorted(variable_sites)))
</code>
#### Boat Phylogenetic Tree
Here I make a tree from the consensus samples deep-sequenced from the boat. I annotated the tree with the differences from the reference. I rooted the tree using the midpoint. You can see that `10136` is the outgroup as expected. This figure shows the phylogenetic relationship between the samples on the boat along with the nucleotide differences from the reference.
**With all genomes:** This even includes the genomes that are identical to one another.
<code>
# Only make the tree if it hasn't already been made - otherwise this notebook takes quite a while to run.
if not os.path.exists("results/phylogeny/merged.consenus.aligned.fa.treefile"):
make_tree(fastapath = "results/consensus/merged.consenus.fa", outpath = outpath, prefix = "merged.consenus", large = False, mask = False)
## == Make the figure == ##
t = Tree("results/phylogeny/merged.consenus.aligned.fa.treefile", format = 1)
# Simple midpoint rooting is sufficient
root_point = t.get_midpoint_outgroup()
t.set_outgroup(root_point)
seq_annotation = {}
for patient, snps in consesus.items():
con = ''.join([snp[1][1] for snp in sorted(snps, key = (lambda s: s[0])) if snp not in all_shared_variants])
seq_annotation[patient] = con
# Tree style - applies to the entire tree
ts = TreeStyle()
ts.show_branch_support = True
ts.branch_vertical_margin = 10
ts.draw_guiding_lines = True
# Node styles - only applies to select nodes
leafstyle = NodeStyle()
leafstyle["shape"] = "circle"
leafstyle["size"] = 7
leafstyle["fgcolor"] = "darkred"
leafstyle["hz_line_width"] = 2
leafstyle["vt_line_width"] = 2
internalstyle = NodeStyle()
internalstyle["shape"] = "circle"
internalstyle["size"] = 3
internalstyle["fgcolor"] = "blue"
internalstyle["hz_line_width"] = 2
internalstyle["vt_line_width"] = 2
# Apply the node styles
for n in t.traverse():
if n.is_leaf():
n.set_style(leafstyle)
else:
n.set_style(internalstyle)
# Apply faces to the leaves to demonstrate differences from consensus sequence
for leaf in t.iter_leaves():
if leaf.name == "10136":
seqface = SequenceFace(seq_annotation[leaf.name], seqtype= "nt")
seqface.margin_left = 5
#refface = SequenceFace(''.join([snp[1][1] for snp in sorted(consensus_differences, key = (lambda s: s[0]))]), seqtype= "nt")
#refface.margin_left = 5
#refface.margin_bottom = 5
#leaf.add_face(refface, 1, "aligned")
leaf.add_face(seqface, 1, "aligned")
#leaf.add_face(TextFace("Reference"), 0, "aligned")
else:
seqface = SequenceFace(seq_annotation[leaf.name], seqtype= "nt")
seqface.margin_left = 5
leaf.add_face(seqface, 1, "aligned")
t.render("results/phylogeny/custom-boat-sequences.svg", w=10, units="in", tree_style = ts)
print(f"Positions of differences in order: {' '.join([str(snp[0] + 1) for snp in sorted(snps, key = (lambda s: s[0])) if snp not in all_shared_variants])}")
t.ladderize()
t.render("%%inline", w=10, units="in", tree_style = ts)
</code>
**Collapsed nodes:** Collapses the same sequence into a single node with the area of the circle proportional to the number of sequences.
I also have a version of this plot in `R`. This is a main paper figure. This version is more compact, but not as information dense as the `ggtree` version.
<code>
# Collapse identical sequences into a single node and write into a fasta file
collapsed_consensus = {genome[0]: len(genome) for genome in unique_consensus.values()}
collapsed_records = [SeqRecord.SeqRecord((boat_genomes[spid]), id=spid) for spid in collapsed_consensus.keys()]
SeqIO.write(collapsed_records, "results/consensus/condensed-boat-sequences.fa", "fasta")
# Only make the tree if it hasn't already been made - otherwise this notebook takes quite a while to run.
if not os.path.exists("results/phylogeny/condensed-boat-sequences.aligned.fa.treefile"):
make_tree(fastapath = "results/consensus/condensed-boat-sequences.fa", outpath = outpath, prefix = "condensed-boat-sequences", large = False, mask = False)
## == Make the figure == ##
t = Tree("results/phylogeny/condensed-boat-sequences.aligned.fa.treefile", format = 1)
# Simple midpoint rooting is sufficient
root_point = t.get_midpoint_outgroup()
t.set_outgroup(root_point)
# Tree style -- applies to the whole tree
ts = TreeStyle()
ts.show_branch_support = False
ts.branch_vertical_margin = 10
ts.draw_guiding_lines = True
# Node styles -- applies to nodes only
internalstyle = NodeStyle()
internalstyle["shape"] = "circle"
internalstyle["size"] = 1
internalstyle["fgcolor"] = "black"
internalstyle["hz_line_width"] = 2
internalstyle["vt_line_width"] = 2
for n in t.traverse():
if n.is_leaf():
leafstyle = NodeStyle()
leafstyle["shape"] = "circle"
leafstyle["hz_line_width"] = 2
leafstyle["vt_line_width"] = 2
if n.name == "10136":
leafstyle["size"] = 5
leafstyle["fgcolor"] = "darkred"
elif collapsed_consensus[n.name] == 3:
leafstyle["size"] = 5 * math.sqrt(collapsed_consensus[n.name])
leafstyle["fgcolor"] = "blue"
elif collapsed_consensus[n.name] == 10:
leafstyle["size"] = 5 * math.sqrt(collapsed_consensus[n.name])
leafstyle["fgcolor"] = "blue"
else:
leafstyle["size"] = 5
leafstyle["fgcolor"] = "blue"
n.set_style(leafstyle)
else:
n.set_style(internalstyle)
# Apply faces to the leaves to demonstrate differences from consensus sequence
for leaf in t.iter_leaves():
if leaf.name == "10136":
seqface = SequenceFace(seq_annotation[leaf.name], seqtype= "nt")
seqface.margin_left = 5
#refface = SequenceFace(''.join([snp[1][1] for snp in sorted(consensus_differences, key = (lambda s: s[0]))]), seqtype= "nt")
#refface.margin_left = 5
#refface.margin_bottom = 5
#leaf.add_face(refface, 1, "aligned")
leaf.add_face(seqface, 1, "aligned")
#leaf.add_face(TextFace("Reference"), 0, "aligned")
else:
seqface = SequenceFace(seq_annotation[leaf.name], seqtype= "nt")
seqface.margin_left = 5
leaf.add_face(seqface, 1, "aligned")
if collapsed_consensus[leaf.name] > 1:
leaf.name = ""
t.ladderize()
t.render("results/phylogeny/condensed-custom-boat-sequences.svg", w=10, units="in", tree_style = ts)
t.render("%%inline", w=10, units="in", tree_style = ts)
</code>
<code>
# Write out the annotations for making this plot in R
pd.DataFrame.from_dict(
{spid: [nt for nt in seq_annotation] for spid, seq_annotation in seq_annotation.items()},
orient='index',
columns= [str(snp[0] + 1) for snp in sorted(snps, key = (lambda s: s[0])) if snp not in all_shared_variants],
).to_csv("results/phylogeny/sequence_annotation_matrix.csv")
</code>
## Global Analysis
This portion of the analysis includes additional genomes from GISAID. This is used for two main types of phylogenies:
1. Boat sequences that we **did not** re-sequence that we wanted to include in our analysis.
2. Genomes from GISAID that are 'representative' of those circulating at the time of the outbreak and close genomes from BLAST
Here is the metadata for all GISAID sequences as of `08-02-2021`:
<code>
# Path to all of the fasta sequences in GISAID as of 08/04/21
all_fastas_path = "config/gisaid/2021-08-04_GISAID_sequences.fasta"
# Import all of the metadata from GISAID
GISAID_metadata = pd.read_table("config/gisaid/2021-08-02-GISAID-metadata.tsv", low_memory=False)
GISAID_Epi_metadata = pd.read_table("config/gisaid/2021-08-03-GISAID-Epi-Metadata.tsv", low_memory=False)
</code>
#### GISAID Boat Comparison
These are just the sequences on the boat that we have replicate deep-sequencing runs for. It's also important to look at the remaining sequences (**15 samples with Ct > 20**). Also, I want to see where these sequences fall in the global phylogeny as well as the local Washington phylogeny.
To get as many of the relevant boat sequences as possible, I searched GISAID for all strains relevant to the boat. I took the 72 patient samples that we had data for from Pavitra and converted the internal SpID into a strain name to search in the GISAID metadata. -- **Apply the same quality standards as the main sequences**
<code>
# Read in the SpIDs from the supplement of https://doi.org/10.1128/JCM.02107-20
boat_metadata = pd.read_csv("config/data/Boat_Sample_Metadata.csv")
strains_to_search = [f"hCoV-19/USA/WA-UW-{spid}/2020" for spid in boat_metadata["SpID"]]
# Get the GISAID IDs from these strains
GISAID_metadata_boat = GISAID_metadata[GISAID_metadata['Virus name'].isin(strains_to_search)]
GISAID_metadata_boat.to_csv("config/gisaid/boat.csv")
GISAID_boat_accessions = GISAID_metadata_boat['Accession ID'].tolist()
# Go through all of the fasta files to get the sequences for each of the samples on the boat.
GISAID_boat_sequences = {record.name.split('/')[2].split("-")[-1]: record.seq for record in SeqIO.parse(all_fastas_path, "fasta") if (record.id).split("|")[0] in strains_to_search}
</code>
How similar are the consensus sequences on GISAID to their counterparts from my analysis?
<code>
for spid, genome in boat_genomes.items():
# Get the gisaid version of the custom genome
gisaid_genome = GISAID_boat_sequences[spid]
# Make a record object for the custom genome
custom_record = SeqRecord.SeqRecord(genome, id=f"custom")
# Make a record object for the GISAD genome
gisaid_record = SeqRecord.SeqRecord(gisaid_genome, id=f"gisaid")
# Export the sequences to a fasta file for alignment
SeqIO.write([custom_record, gisaid_record], "comparison.fa", "fasta")
# Align the two sequences with Mafft
mafft_cline = MafftCommandline(input="comparison.fa")
stdout, stderr = mafft_cline()
# Make a dictionary to compare the alignment
alignment_dict = {record.name: "".join(base if base in 'atcgn-' else 'n' for base in record.seq) for record in SeqIO.parse(StringIO(stdout), 'fasta')}
# Get the non gap or 'n' differences
differences = [(pos, alleles[0], alleles[1]) for pos, alleles in enumerate(zip(alignment_dict['custom'], alignment_dict['gisaid'])) if alleles[0] != alleles[1] and alleles[0] not in "-n" and alleles[1] not in "-n"]
print(f"\nFor patient {spid}, the differences between the custom and GISAID genome are:\n")
for dif in differences:
print(f"Position: {dif[0]}, Custom: {dif[1]}, GISAID: {dif[2]}")
# When finished, remove the temporary fasta file.
out = os.system("rm -f comparison.fa")
if out == 0:
print("\nRemoved temp fasta.")
</code>
It seems like all of the single-nucleotide differences between the GISAID sequences and the custom sequences are located where the poly-A sequence starts. I can simply mask those bases when doing any kind of phylogenetic analysis with the extra sequences.
In addition, there is a `c` at position `13` that shows up in the GISAID sequences. This is a difference with respect to the reference, and I don't have it in the custom sequences; it would be a `T13C` change in some samples. There is one other discrepancy, in sample `10091` at position `36`: I have the reference base `C` where they call a polymorphism, `C36T`.
It seems possible that the consensus at position `13` is `C`, but there is never enough coverage to annotate this. As for the `C36T` mutation in `10091`, it doesn't seem to exist (I looked in IGV for that sample).
Here, I assembled a fasta file with all of the sequences from our analysis and the remaining sequences from the boat. I masked the untranslated regions, **although it might make sense to only mask the regions from the above analysis where there are discrepancies** (a hypothetical helper for that is sketched below).
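If one wanted to mask only those discrepant sites instead of trimming the whole UTRs, a hypothetical helper along these lines (not part of the pipeline) could be applied to each record before alignment:
<code>
# Hypothetical helper: replace specific 0-based positions with 'N' instead of trimming the UTRs.
def mask_sites(record, sites):
    """Mask the given 0-based positions of a SeqRecord with 'N'."""
    seq = list(str(record.seq))
    for pos in sites:
        seq[pos] = "N"
    record.seq = Seq.Seq("".join(seq))
    return record

# e.g. mask position 13 (0-based index 12) before alignment
# masked_records = [mask_sites(rec, [12]) for rec in SeqIO.parse("results/consensus/all-boat-sequences.fa", "fasta")]
</code>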
<code>
# Check if this phylogeny already exists -- if not, make the appropriate files
if not os.path.exists("results/phylogeny/all-boat-sequences.aligned.fa.treefile"):
records = []
for spid in set(list(boat_genomes.keys()) + list(GISAID_boat_sequences.keys())):
if spid == "10136": # not from the boat
continue
# Try to see if the patient has a genome record in the illumina assemblies. If not, get GISAID.
try:
record = SeqRecord.SeqRecord((boat_genomes[spid]), id=spid)
except:
record = SeqRecord.SeqRecord((GISAID_boat_sequences[spid]), id=spid)
records.append(record)
SeqIO.write(records, "results/consensus/all-boat-sequences.fa", "fasta")
# Tree with all the boat sequences and our custom genomes.
make_tree(fastapath = "results/consensus/all-boat-sequences.fa", outpath = outpath, prefix = "all-boat-sequences", mask = True, large = True)
</code>
Here is the tree with **all of the sequences from the boat**. For samples with duplicate sequencing runs, I included our assembled genome; for the rest, I included the assembled genome from GISAID.
*What's the best way to root this phylogenetic tree?*
I exclude the 5' and 3' UTRs from this tree because I'm unsure of the coverage of the samples that we didn't resequence.
The most distant sample is `10115`, so I could do some outgroup rooting using this sample.
<code>
alignment = "results/phylogeny/all-boat-sequences.aligned.fa"
site_offset = 265 + 1 # Start of ORF1ab
# Get the genome sequences for each tip
boat_seqs = {record.id: list(str(record.seq).upper()) for record in SeqIO.parse(alignment, 'fasta')}
reference_differences = {}
for spid, sequence in boat_seqs.items():
if spid != 'NC_045512.2':
reference_differences[spid] = [(i + site_offset, bases) for i, bases in enumerate(zip(boat_seqs['NC_045512.2'], sequence)) if "N" not in set(bases) and len(set(bases)) != 1]
# Get only the unique differences
all_boat_differences = set(snp for snps in reference_differences.values() for snp in snps)
# SNPs in all crew
snp_in_all_crew = [snp for snp, count in Counter(snp for snps in reference_differences.values() for snp in snps).items() if count == 39]
# Fill in the missing SNPs in the dict
for spid, snps in reference_differences.items():
for snp in all_boat_differences:
if snp not in snps:
reference_differences[spid].append((snp[0], (snp[1][0], snp[1][0])))
all_boat_annotation = {}
for spid, snps in reference_differences.items():
con = ''.join([snp[1][1] for snp in sorted(snps, key = (lambda s: s[0])) if snp not in snp_in_all_crew])
pos = [snp[0] for snp in sorted(snps, key = (lambda s: s[0])) if snp not in snp_in_all_crew]
all_boat_annotation[spid] = con
# Write out the annotations for making this plot in R
pd.DataFrame.from_dict(
{spid: [nt for nt in all_boat_annotation] for spid, all_boat_annotation in all_boat_annotation.items()},
orient='index',
columns = pos
).to_csv("results/phylogeny/all-boat-substitutions-matrix.csv")
</code>
<code>
# Condense the identical sequences
identical_seqs = defaultdict(list)
for k,v in all_boat_annotation.items():
identical_seqs[v].append(k)
# Collapse identical sequences into a single node and write into a fasta file
all_boat_collapsed_consensus = {genome[0]: len(genome) for genome in identical_seqs.values()}
# Two groups
print("10101 represents", len(identical_seqs['CGTCCCCCCCGTTCA']), "sequences")
print("10107 represents", len(identical_seqs['CGTTCCCCCCGTTCA']), "sequences")
all_collapsed_records = [SeqRecord.SeqRecord(Seq.Seq("".join(boat_seqs[spid]), alphabet= Alphabet.SingleLetterAlphabet()), id=spid) for spid in all_boat_collapsed_consensus.keys()]
SeqIO.write(all_collapsed_records, "results/consensus/collapsed-all-boat-sequences.fa", "fasta")
# Only make the tree if it hasn't already been made - otherwise this notebook takes quite a while to run.
if not os.path.exists("results/phylogeny/collapsed-all-boat-sequences.aligned.fa.treefile"):
make_tree(fastapath = "results/consensus/collapsed-all-boat-sequences.fa", outpath = outpath, prefix = "collapsed-all-boat-sequences", large = False, mask = True)
</code>
<code>
identical_seqs
</code>
### SNP annotated Boat tree
Here is the phylogenetic relationship between the boat sequences - those that we resequenced and those that were previously published - where the branches are annotated with inferred SNPs from the ancestral state reconstruction function of `IQtree`.
**Importantly**, this state reconstruction is also used to infer the sequence of the boat consensus. All 'fixed' mutations in the paper will be called relative to this boat consensus.
<code>
## === Make the substitution matrix for internal nodes and tips === ##
# Adapted from J. Bloom
alignment = "results/phylogeny/all-boat-sequences.aligned.fa"
state_file = "results/phylogeny/all-boat-sequences.aligned.fa.state"
site_offset = 265 # Start of ORF1ab
# Get the genome sequences for each tip
tip_to_seq = {record.id: list(str(record.seq).upper()) for record in SeqIO.parse(alignment, 'fasta')}
# Convert the sequences into the same format of IQtree `-asr` .state file
tip_states = (
pd.DataFrame.from_dict(tip_to_seq, orient='index')
.rename_axis('Node')
.reset_index()
.melt(id_vars='Node',
var_name='Site',
value_name='State',
)
.assign(Site=lambda x: x['Site'] + 1)
)
# Import the internal states from IQtree `-asr` .state file
internal_states = (
pd.read_csv(state_file,
sep='\t',
comment='#',
usecols=['Node', 'Site', 'State'])
)
# Combine and format the internal and tip states
states = (
internal_states
.append(tip_states)
.assign(Site = lambda x: x['Site'] + site_offset, # to get the actual position
n_states_at_site = lambda x: x.groupby('Site')['State'].transform('nunique'),
)
.query('n_states_at_site > 1')
.drop(columns='n_states_at_site')
)
states_dict = states.set_index(['Node', 'Site'])['State'].to_dict()
nodes = states['Node'].unique().tolist()
sites = sorted(states['Site'].unique())
# Find all pairwise differences between the nodes/tips
subs_matrix = {} # keyed by (parent, descendant)
for n1, n2 in product(nodes, nodes):
subs = []
for site in sites:
nt1 = states_dict[(n1, site)]
nt2 = states_dict[(n2, site)]
if nt1 != nt2:
if nt1 in {'A', 'C', 'G', 'T'} and nt2 in {'A', 'C', 'G', 'T'}:
subs.append(f"{nt1}{site}{nt2}")
subs_matrix[(n1, n2)] = ', '.join(subs)
</code>
<code>
tip_states.to_csv("results/phylogeny/tip_sequence_matrix.csv")
tip_states
</code>
<code>
# With midpoint rooting
t = Tree("results/phylogeny/all-boat-sequences.aligned.fa.treefile", format = 1)
for n in t.get_descendants():
if not n.is_leaf():
name, support = n.name.split("/")
n.name = name
n.support = support
# if n.support <= 50:
# n.delete()
# Tree style
ts = TreeStyle()
ts.show_branch_length = False
ts.show_branch_support = False
ts.branch_vertical_margin = 3
ts.scale = 10
ts.show_scale = False
nstyle_dict = {'hz_line_width': 1,
'vt_line_width': 1,
'hz_line_color': 'black',
'vt_line_color': 'black',
'size': 0}
# label nodes
for n in t.traverse():
if n != t:
subs = subs_matrix[(n.up.name.split("/")[0], n.name.split("/")[0])]
icol = 0
for sub in subs.split(', '):
if sub:
n.add_face(TextFace(f"{sub} ",
fsize=4,
fgcolor="blue",),
column=icol,
position='branch-top',
)
icol += 1
nstyle = NodeStyle(**nstyle_dict)
n.set_style(nstyle)
refnode = t.search_nodes(name='NC_045512.2')[0]
refnode.delete()
root_point = t.search_nodes(name='10115')[0]
t.set_outgroup(root_point)
t.ladderize()
t.render("results/phylogeny/all-boat-sequences.png", w=15, units="in", tree_style = ts)
t.render("%%inline", w=5, units="in", tree_style = ts)
</code>
### Context of the Outbreak
The next thing that I wanted to do was establish the context of the outbreak. I wanted to do this for two reasons. First, I want to demonstrate that the outbreak was monophyletic and stemmed from a single introduction of SARS-CoV-2 on the boat that spread among passengers. Second, I want to see if intrahost mutations that arose on the boat showed up elsewhere in the local Washington phylogeny.
#### Global Phylogeny
Here, I want to show where the boat samples sit on the global phylogeny. To do this I included all circulating clades and samples that were close to the boat sequences specifically. To help with the visualization, I only included a limited number of samples from each clade.
*Circulating clades up to the date of collection `05/03/2020` were: 19A, 19B, 20A, 20C, 20B, 20D, and 20F*
<code>
# Clades circulating at the time of, or before, the outbreak on the boat.
clades_of_interest = ['19A','19B','20A','20B','20C','20D','20E (EU1)','20F']
# Max number to sample from each clade
n_sample = 25
# List to contain the clade specific samples
clade_dfs = []
for clade in clades_of_interest:
# Get the accessions for every sequence in a given clade
accessions = GISAID_Epi_metadata[GISAID_Epi_metadata.Nextstrain_clade == clade].gisaid_epi_isl
# Subtract the boat accessions
accessions = set(accessions) - set(GISAID_boat_accessions)
# Subset the metadata based on these sequences
clade_subset = GISAID_metadata[GISAID_metadata['Accession ID'].isin(accessions)]
# Filter based on quality
# high coverage
clade_subset = clade_subset[clade_subset['Is high coverage?'] == True]
# less than 5% N's
clade_subset = clade_subset[clade_subset['N-Content'] <= 0.05]
# complete genome
clade_subset = clade_subset[clade_subset['Is complete?'] == True]
# host is human
clade_subset = clade_subset[clade_subset['Host'] == "Human"]
if len(clade_subset) > n_sample:
clade_subset = clade_subset.sample(n = n_sample, random_state = 7) # Set the seed for reproducibility
clade_subset['clade'] = clade
clade_dfs.append(clade_subset)
# Combine all of these representative sequences
clade_df = pd.concat(clade_dfs)
</code>
#### BLAST Database
After I got representative sequences from each clade, I needed to get similar sequences from `BLAST`. I think it makes the most sense to constrain the BLAST search to sequences from around the time and region of the outbreak.
<code>
# Check if this phylogeny already exists
if not os.path.exists("results/phylogeny/blast_database.fasta"):
print("Making the BLAST databse\n")
# Get sequences from around the time of the outbreak ~~ '2020-05-30'
blast_sequence_metadata = GISAID_metadata[(GISAID_metadata['Collection date'] >= '2020-04-30') & (GISAID_metadata['Collection date'] <= '2020-06-30')]
# Location is Washington State
blast_sequence_metadata = blast_sequence_metadata[blast_sequence_metadata['Location'] == "North America / USA / Washington"]
# high coverage
blast_sequence_metadata = blast_sequence_metadata[blast_sequence_metadata['Is high coverage?'] == True]
# less than 5% N's
blast_sequence_metadata = blast_sequence_metadata[blast_sequence_metadata['N-Content'] <= 0.05]
# complete genome
blast_sequence_metadata = blast_sequence_metadata[blast_sequence_metadata['Is complete?'] == True]
# host is human
blast_sequence_metadata = blast_sequence_metadata[blast_sequence_metadata['Host'] == "Human"]
# not including the boat sequences
blast_sequence_metadata = blast_sequence_metadata[~blast_sequence_metadata['Accession ID'].isin(GISAID_boat_accessions)]
# Get the virus names for the sequences to make the blast database and remove boat samples
blast_virus_names = set(blast_sequence_metadata['Virus name'].tolist()) - set(GISAID_metadata_boat['Virus name'].to_list())
# Get these sequences in a dictionary
blast_virus_sequences = [record for record in SeqIO.parse(all_fastas_path, "fasta") if (record.id).split("|")[0] in blast_virus_names]
# Write these out to a fasta file
SeqIO.write(blast_virus_sequences, "results/phylogeny/blast_database.fasta", "fasta")
print("(1/3) Finished downloading the eligible samples for the BLAST database.\n")
# Build the BLAST database
call(f"makeblastdb -in results/phylogeny/blast_database.fasta -dbtype nucl", shell=True)
print("(2/3) Finished making the BLAST databse with the eligible samples.\n")
# For each boat genome, make the query fasta
all_matches_list = []
colnames = ["SpID", "Subject", "Perc_Identity", "Alignment_Length", "Mismatches", "Gap_Opens", "Q_start", "Q_end", "S_start", "S_end", "Evalue", "Bit_Score"]
for spid, genome in boat_genomes.items():
custom_record = SeqRecord.SeqRecord(genome, id=spid)
SeqIO.write(custom_record, "results/phylogeny/query.fasta", "fasta")
# Query the database
print(f"Querying the BLAST database for sample {spid}\n")
call(f"blastn -db results/phylogeny/blast_database.fasta -query results/phylogeny/query.fasta -out results/phylogeny/results.out -outfmt 7", shell=True)
# Import the table of matches
match_df = pd.read_table("results/phylogeny/results.out", comment='#', names=colnames)
match_df = match_df.head(25)
# add to a list of dataframes
all_matches_list.append(match_df)
print("(3/3) Finished BLASTing each sample.\n")
# Combine all of these and remove the duplicates
closest_sequences = pd.concat(all_matches_list).drop_duplicates(['Subject'])
# Remove the temp files
os.system("rm -f results/phylogeny/results.out results/phylogeny/query.fasta")
</code>
#### Representative Phylogeny
This phylogeny contains sequences sampled from each clade circulating globally at the time of the outbreak. The phylogeny also contains the boat sequences as well as a non-redundant list of the top 25 matches to each sample by `BLASTN` (see above code).
<code>
# Check if this phylogeny already exists
if not os.path.exists("results/phylogeny/global_phylogeny.aligned.fa.treefile"):
# Check if this fasta already exists
if not os.path.exists("results/phylogeny/global_phylogeny.fa"):
# Parse the sequence names from the dataframe
blast_virus_names = [virus.split('|')[0] for virus in closest_sequences.Subject.tolist()]
# Get these sequences from the GISAID metadata
blast_df = GISAID_metadata[GISAID_metadata['Virus name'].isin(blast_virus_names)]
# Concat with the clade sequences and boat sequences
global_phylogeny_metadata = pd.concat([GISAID_metadata_boat, blast_df, clade_df])
# 1. Get the virus names for the sequences to make the blast database and remove boat samples
virus_names = set(global_phylogeny_metadata['Virus name'].tolist())
# Get these sequences in a dictionary
virus_sequences = [record for record in SeqIO.parse(all_fastas_path, "fasta") if (record.id).split("|")[0] in virus_names]
# Write these out to a fasta file
SeqIO.write(virus_sequences, "results/phylogeny/global_phylogeny.fa", "fasta")
print("(1/3) Finished downloading the eligible samples for the global phylogeny.\n")
make_tree(fastapath = "results/phylogeny/global_phylogeny.fa", outpath = outpath, prefix = "global_phylogeny")
</code>
### Ancestral Boat Sequence
Here, I determine the sequence of the boat consensus. This corresponds to the most likely sequence of the common ancestor of all genomes sampled from the boat.
There is some uncertainty about whether to include `10115` in this reconstruction.
<code>
t = Tree("results/phylogeny/global_phylogeny.aligned.fa.treefile", format = 1)
state_file = "results/phylogeny/global_phylogeny.aligned.fa.state"
# Get only the genomes that belong to the same clade all of the boat samples
boat_minimum_clade = [x for x in GISAID_metadata_boat['Virus name'].tolist() if x.split("/")[2].split("-")[-1] not in ('10136', '10115')]
boat_all_clade = [x for x in GISAID_metadata_boat['Virus name'].tolist()]
boat_minimum_names = [n.name for n in t.get_leaves() if n.name.split("|")[0] in boat_minimum_clade]
boat_all_names = [n.name for n in t.get_leaves() if n.name.split("|")[0] in boat_all_clade]
# Get the samples that break the monophyletic relationship
non_boat_samples = [n.name.split("|")[0] for n in t.check_monophyly(boat_minimum_names, "name")[-1]]
print(f"Samples that violate monophyletic boat clade: {non_boat_samples}")
# Ancestral State = without 10115 and 10136
boat_consensus_snps = ancestral_snps(t, state_file, boat_minimum_names, reference, offset = 265)
boat_consensus_snps.to_csv("results/phylogeny/boat_consensus.csv")
print("\n", boat_consensus_snps)
</code>
#### Closer look at the boat clade.
From the above image, it's clear that sample `10136` belongs somewhere else on the phylogeny and was perhaps infected elsewhere, rather than on the boat.
Another interesting observation is that two samples collected on the same day from the UW virology lab also show up as part of the boat clade. The branch length for one sample looks especially long, while the other sample sits closer to the main boat clade. These don't appear in the sample sheet for what was collected on the boat, but that doesn't mean they weren't resampled at a later date.
What are the names of the two samples in this part of the subtree?
<code>
print(non_boat_samples)
[n.name for n in t.check_monophyly(boat_minimum_names, "name")[-1]]
</code>
Two of the top 5 samples that weren't part of the boat have the longest branch lengths. However, one of these has a branch length comparable to the other samples.
How different are these samples from the deep-sequenced samples from the boat? The number of consensus differences can inform us about how likely these were to have resulted from direct transmission from any one of our samples.
<code>
# Check if this phylogeny already exists
if not os.path.exists("results/phylogeny/with_outliers.fa"):
# Get the records for the non-boat genomes
non_boat_genomes = [record for record in SeqIO.parse(all_fastas_path, "fasta") if (record.id).split("|")[0] in non_boat_samples]
# Make a dictionary by adding the new samples to the boat dictionary.
for record in non_boat_genomes:
SpID = record.name.split("/")[2].split('-')[-1]
if SpID not in boat_genomes.keys():
boat_genomes[SpID] = record.seq
# Get a list of records from the updated boat dictionary
records = [SeqRecord.SeqRecord(seq, id=spid) for spid, seq in boat_genomes.items()]
# Add the reference genome
records.append(SeqRecord.SeqRecord(Seq.Seq("".join(reference).lower()), id = "reference"))
# Write out a new fasta with the non-boat outliers
SeqIO.write(records, "results/phylogeny/with_outliers.fa", "fasta")
make_tree(fastapath = "results/phylogeny/with_outliers.fa", outpath = outpath, prefix = "with_outliers", large = False, mask = False)
</code>
<code>
# Hamming distance not including n's or gaps
distance2 = lambda x,y : sum(c1 != c2 for c1, c2 in zip(x, y) if "-" not in {c1, c2} and "n" not in {c1, c2})
# Get all of the aligned records as a dictionary.
aligned_records = {record.id: str(record.seq) for record in SeqIO.parse("results/phylogeny/with_outliers.aligned.fa", "fasta")}
# Calculate all of the edit distances.
edit_distance = {f"{g1}-{g2}": distance2(aligned_records[g1], aligned_records[g2]) for g1, g2 in combinations(aligned_records.keys(), 2) if "10136" not in {g1, g2}}
print("10510 average edit distance:", sum(v for k,v in edit_distance.items() if '10510' in k.split("-"))/ len([v for k,v in edit_distance.items() if '10510' in k.split("-")]))
print("10521 average edit distance:", sum(v for k,v in edit_distance.items() if '10521' in k.split("-"))/len([v for k,v in edit_distance.items() if '10510' in k.split("-")]))
</code>
<code>
# Reference Sequence
aligned_records = {record.id: str(record.seq) for record in SeqIO.parse("results/phylogeny/with_outliers.aligned.fa", "fasta")}
reference = [base.upper() for base in aligned_records['reference']]
aligned_records.pop("reference")
# Save a dict of the consensus SNPs for each patient
consesus = {}
# Populate the dict
for patient, genome in aligned_records.items():
differences = []
for i, bases in enumerate(zip(genome.upper(), reference)):
if len(set(bases)) > 1 and "-" not in bases and "N" not in bases:
differences.append((i, bases))
consesus[patient] = differences
# Get a set of all unique SNPs
consensus_differences = set(snp for snp_list in consesus.values() for snp in snp_list)
# Fill in the missing SNPs in the dict
for patient, snps in consesus.items():
for snp in consensus_differences:
if snp not in snps:
consesus[patient].append((snp[0], (snp[1][1], snp[1][1])))
# Condense the identical sequences
unique_consensus = {}
for patient, snps in consesus.items():
con = ' '.join([snp[1][0] for snp in sorted(snps, key = (lambda s: s[0]))])
if con in unique_consensus.keys():
unique_consensus[con].append(patient)
else:
unique_consensus[con] = [patient]
# Print the differences for each patient
print(f"Position: {' '.join([str(snp[0]) for snp in sorted(consensus_differences, key = (lambda s: s[0]))])}")
print(f"\nReference: \t\t\t\t{' '.join([snp[1][1] for snp in sorted(consensus_differences, key = (lambda s: s[0]))])}")
for snps, patients in unique_consensus.items():
print(f"\n{'/'.join(patients)}: \n\t\t\t\t\t{snps}")
</code>
Interestingly, there are 0 consensus differences between `10510` and three other samples `10129/10028/10107` if you disregard the first and last 100 nucleotides as well as n's or gaps in the sequence. Essentially, they seem to have identical SNPs. I'm not fully sure what this means for the samples on the boat.
Unfortunately, there is a 7-nucleotide unresolved stretch at positions `23298 - 23304`. Otherwise, the genome is identical outside of the non-coding UTRs.
<code>
make_tree(fastapath = "results/phylogeny/with_outliers.fa", outpath = outpath, prefix = "with_outliers", large = True, mask = False)
aligned_records = {record.id: str(record.seq) for record in SeqIO.parse("results/phylogeny/with_outliers.aligned.fa", "fasta")}
[(i+1, nts) for i, nts in enumerate(zip(aligned_records['10510'], aligned_records['10107'])) if nts[0] != nts[1]]
</code>
Interestingly, there is one SNP difference between some of the sequences on the boat and this sample at position 13. However, this position is so close to the beginning of the genome that it's hard to give it any weight as a real mutation.
# END
|
{
"filename": "bottleneck_Phylogenetic-Analysis_1.ipynb",
"repository": "jbloomlab/SARS-CoV-2",
"query": "transformed_from_existing",
"size": 317835,
"sha": ""
}
|
# map_citation_map_app_1.ipynb
Repository: lyuzhuoqi/citation
<code>
import pandas as pd
</code>
<code>
node_labels = {0: 'Law, Politics',
1: 'Geography & Environment',
2: 'Computing',
3: 'Dentistry, Ophthalmology, Dermatology',
4: 'Oncology',
5: 'Electrical & Electronic Engineering',
6: 'Physics',
7: 'Cardiology',
8: 'Ecology & Zoology',
9: 'Psychology',
10: 'Information Engineering',
11: 'Chemistry & Materials',
12: 'Geology',
13: 'History & Literature & Philosophy',
14: 'Mechanic Engineering',
15: 'Mathematics',
16: 'Animal',
17: 'Molecular & Cell Biology',
18: 'Infectious Diseases',
19: 'Linguistics',
20: 'Nursing',
21: 'Agriculture',
22: 'Rehabilitation & Sports',
23: 'Sociology & Culture',
24: 'Economics',
25: 'Education'}
</code>
<code>
import pandas as pd
from io import StringIO
data = """cluster,inner_citations,outer_citations,total_citations,inner_pct
0,711800,548049,1259849,56.498834384120634
1,2825306,4350637,7175943,39.37191251379784
2,545215,1038813,1584028,34.41953046284535
3,3032857,2105357,5138214,59.02550964206629
4,8835127,7723805,16558932,53.355657236831455
5,4895488,3669681,8565169,57.15576657039691
6,12930483,7209477,20139960,64.20312155535562
7,11827200,10544479,22371679,52.86684115215492
8,5666184,4822988,10489172,54.019363968862365
9,7727996,5622399,13350395,57.88589775808132
10,20884,181174,202058,10.33564620059587
11,42393393,17034051,59427444,71.33638963169946
12,10201614,5963339,16164953,63.109456612710225
13,175760,216233,391993,44.83753536415191
14,10137104,7893760,18030864,56.22084443651729
15,3357277,2073105,5430382,61.82395639938406
16,1788271,2342619,4130890,43.290211068317
17,32475251,23904513,56379764,57.600899145303266
18,1828191,3769129,5597320,32.66189890876348
19,372329,422513,794842,46.84314618502797
20,5412666,6430978,11843644,45.70101904447652
21,5092710,6579800,11672510,43.62994762908749
22,3578910,3525015,7103925,50.379332552075084
23,915814,1025574,1941388,47.17315652512532
24,4814264,1825075,6639339,72.5111942619589
25,1303886,980294,2284180,57.08332968505109
"""
node_stats_df = pd.read_csv(StringIO(data))
</code>
<code>
data = """source,target,weight
1,0,66149
9,0,71487
13,0,23908
23,0,138078
24,0,146312
0,1,120324
8,1,408321
12,1,800313
14,1,562147
23,1,218507
24,1,699784
5,2,402426
17,2,108435
24,2,97128
4,3,299181
7,3,428572
17,3,622684
7,4,1994114
17,4,3630339
20,4,666158
2,5,498139
6,5,525336
10,5,22793
14,5,618541
15,5,499998
10,6,22375
11,6,3025458
15,6,597620
17,6,1345166
3,7,423672
4,7,1963211
17,7,3720848
18,7,699751
20,7,1762585
22,7,927212
12,8,727240
17,8,1993749
21,8,699519
0,9,73416
7,9,672196
17,9,1869135
19,9,170920
20,9,1119340
22,9,413857
23,9,166780
25,9,236031
5,10,18378
6,10,14412
11,10,51329
14,10,60540
6,11,4299903
8,11,652449
10,11,41463
12,11,1167841
14,11,2781998
17,11,4779059
21,11,1271460
1,12,768398
8,12,1012860
10,12,29281
11,12,1294923
13,12,25792
0,13,20819
8,13,21134
9,13,24415
12,13,24099
23,13,47977
1,14,500750
2,14,85965
5,14,841689
6,14,984505
10,14,80114
11,14,3330667
15,14,342352
5,15,331958
6,15,772570
8,16,275029
17,16,933612
18,16,370272
21,16,269749
3,17,657476
4,17,3800112
6,17,1079789
7,17,4784463
8,17,2025234
9,17,1780143
11,17,2987257
16,17,557236
18,17,1455812
20,17,1001315
21,17,1524990
22,17,1117967
4,18,301010
7,18,740642
16,18,207663
17,18,1233237
20,18,544621
9,19,195756
25,19,56815
4,20,653182
7,20,1799023
9,20,1121270
17,20,756893
18,20,562629
22,20,459085
23,20,125828
8,21,951619
11,21,1261571
16,21,322731
17,21,2391614
18,21,312984
7,22,995276
17,22,887859
20,22,470379
0,23,170152
1,23,142231
9,23,177747
13,23,52370
20,23,123757
24,23,138902
0,24,160567
1,24,424907
2,24,93550
23,24,165142
25,24,145669
9,25,296731
19,25,65181
23,25,115073
24,25,157661
"""
edge_df = pd.read_csv(StringIO(data))
</code>
<code>
node_stats_df['color'] = (node_stats_df['inner_citations']-node_stats_df['inner_citations'].min())/(node_stats_df['inner_citations'].max()-node_stats_df['inner_citations'].min())
edge_df['normalized_weight'] = (edge_df['weight']-edge_df['weight'].min())/(edge_df['weight'].max()-edge_df['weight'].min())
</code>
<code>
from bqplot import Graph, ColorScale, Figure
import ipywidgets as widgets
import numpy as np
# 转换节点和边数据
node_data = []
for _, row in node_stats_df.iterrows():
r = np.sqrt(row.total_citations)*0.005
label_text = node_labels[row.cluster]
label_loc = 'center'
if r < 15 or len(label_text) > 25:
label_loc = 'outside'
node_data.append({
'label': label_text,
'label_display': label_loc,
'shape': 'circle',
'color': row.color,
'shape_attrs': {'r': r},
})
link_data = []
for _, row in edge_df.iterrows():
link_data.append({
'source': row.source,
'target': row.target,
'value': row.normalized_weight,
})
# 创建颜色比例尺
node_color_scale = ColorScale(min=node_stats_df.color.min(),
max=node_stats_df.color.max(),
colors=['#ffeda0', '#f03b20'])
link_color_scale = ColorScale(min=edge_df.normalized_weight.min(),
mid=edge_df.normalized_weight.mean(),
max=edge_df.normalized_weight.max(),
colors=['#f7fbff', '#6baed6', '#08306b'])
# 创建图形标记
graph = Graph(
node_data=node_data,
link_data=link_data,
static=False,
directed=True,
link_type='arc',
scales={
'color': node_color_scale,
'link_color': link_color_scale
},
charge=-1500,
)
# 创建图形并添加图例
figure = Figure(
marks=[graph],
layout=widgets.Layout(width='1200px', height='1200px'),
)
figure
</code>
<code>
from IPython.display import display, HTML
import numpy as np
# 计算实际显示参数
# 节点大小参数
node_sizes = np.sqrt(node_stats_df['total_citations']) * 0.005
min_size, max_size = node_sizes.min(), node_sizes.max()
size_legend_values = np.linspace(min_size, max_size, 5)
size_labels = [f"{(s/0.01)**2:.0f}" for s in size_legend_values]
# 节点颜色参数
inner_citations_min = node_stats_df['inner_citations'].min()
inner_citations_max = node_stats_df['inner_citations'].max()
# 边颜色参数
edge_min = edge_df['weight'].min()
edge_max = edge_df['weight'].max()
legend_html = f"""
<div style="width: 14%; padding: 15px; background: white; border-radius: 6px; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
<h7 style="color: #333;">Node Size: Total Citations</h7>
<div>
<div style="display: flex; align-items: flex-end; height: 100px;">
{''.join([f'<div style="display: flex; flex-direction: column; align-items: center; margin: 0 8px;">'
f'<div style="width: {s*2}px; height: {s*2}px; border-radius: 50%; '
f'background: #ffeda0; border: 1.75px solid #0b0b0b;"></div>'
f'<span style="margin-top: 8px; font-size: 12px; color: #333">{label}</span></div>'
for s, label in zip(size_legend_values, size_labels)])}
</div>
</div>
<h7 style="margin-bottom: 5px; color: #333;">Node Color: Intradisciplinary Citations</h7>
<div style="height: 10px; width: 100%; background: linear-gradient(to right, #ffeda0, #f03b20); border-radius: 4px;"></div>
<div style="margin-bottom: -0px; display: flex; justify-content: space-between; font-size: 12px; color: #333">
<span>{inner_citations_min}</span>
<span>{inner_citations_min+(inner_citations_max-inner_citations_min)/4:.0f}</span>
<span>{inner_citations_min+(inner_citations_max-inner_citations_min)/2:.0f}</span>
<span>{inner_citations_min+(inner_citations_max-inner_citations_min)*3/4:.0f}</span>
<span>{inner_citations_max}</span>
</div>
<h7 style="margin-bottom: 5px; color: #333;">Edge Weight: Interdisciplinary Citations</h7>
<div style="height: 10px; width: 100%; background: linear-gradient(to right, #f7fbff, #6baed6, #08306b); border-radius: 4px;"></div>
<div style="margin-bottom: -10px; display: flex; justify-content: space-between;">
<span style="font-size: 12px; color: #333;">{edge_min}</span>
<span style="font-size: 12px; color: #333;">{edge_min+(edge_max-edge_min)/4:.0f}</span>
<span style="font-size: 12px; color: #333;">{edge_min+(edge_max-edge_min)/2:.0f}</span>
<span style="font-size: 12px; color: #333;">{edge_min+(edge_max-edge_min)*3/4:.0f}</span>
<span style="font-size: 12px; color: #333;">{edge_max}</span>
</div>
</div>
"""
display(HTML(legend_html))
</code>
|
{
"filename": "map_citation_map_app_1.ipynb",
"repository": "lyuzhuoqi/citation",
"query": "transformed_from_existing",
"size": 18754,
"sha": ""
}
|
# vuegen_basic_case_study_1.ipynb
Repository: Multiomics-Analytics-Group/vuegen
# Predefined Directory Case Study - Notebook
[![Open In Colab][colab_badge]][colab_link]
This notebook is a basic demo of the Vuegen Python library. This software automates the creation of reports based on a directory with plots, dataframes, and other files in different formats. A YAML configuration file is generated from the directory to define the structure of the report. Users can customize the report by modifying the configuration file, or they can create their own configuration file instead of passing a directory as input.
The configuration file specifies the structure of the report, including sections, subsections, and various components such as plots, dataframes, markdown, html, and API calls. Reports can be generated in various formats, including documents (PDF, HTML, DOCX, ODT), presentations (PPTX, Reveal.js), notebooks (Jupyter) or Streamlit web applications.
An overview of the VueGen workflow is shown in the figure below:
![Vuegen graphical abstract][abstractfig_vuegen]
This introductory case study familiarizes users with the tool, its report types, file formats, and other features. In this example, a directory with plots, dataframes, Markdown, and HTML components is provided. An advanced example can be found [here][advanced_notebook].
## Notebook structure
First, we will set up the work environment by installing the necessary packages and importing the required libraries. Next, we will create various reports using the example directory. Finally, we will extend the report by modifying the configuration file to include additional components.
0. [Work environment setup](#0-work-environment-setup)
1. [Report generation](#1-report-generation)
2. [Extending the report](#2-extending-the-report)
## Credits and Contributors
- This notebook was created by Sebastián Ayala-Ruano under the supervision of Henry Webel and Alberto Santos, head of the [Multiomics Network Analytics Group (MoNA)][Mona] at the [Novo Nordisk Foundation Center for Biosustainability (DTU Biosustain)][Biosustain].
- You can find more details about the project in this [GitHub repository][githubrepo].
[colab_badge]: https://colab.research.google.com/assets/colab-badge.svg
[colab_link]: https://colab.research.google.com/github/Multiomics-Analytics-Group/vuegen/blob/main/docs/vuegen_basic_case_study.ipynb
[abstractfig_vuegen]: https://raw.githubusercontent.com/Multiomics-Analytics-Group/vuegen/main/docs/images/vuegen_graph_abstract.png
[emp_repo]: https://github.com/biocore/emp/tree/master
[emp_paper]: https://www.nature.com/articles/nature24621
[Mona]: https://multiomics-analytics-group.github.io/
[Biosustain]: https://www.biosustain.dtu.dk/
[githubrepo]: https://github.com/Multiomics-Analytics-Group/vuegen
[advanced_notebook]: https://github.com/Multiomics-Analytics-Group/vuegen/blob/main/docs/vuegen_case_study_earth_microbiome.ipynb
## 0. Work environment setup
### 0.1. Installing libraries and creating global variables for platform and working directory
To run this notebook locally, you should create a virtual environment with the required libraries. If you are running this notebook on Google Colab, everything should be set.
<code>
# Vuegen library
%pip install vuegen
</code>
<code>
import os
IN_COLAB = "COLAB_GPU" in os.environ
</code>
<code>
# Set working directory
if IN_COLAB:
# Clone the repository in Colab
!git clone --depth=1 https://github.com/Multiomics-Analytics-Group/vuegen.git
base_output_dir = "vuegen/docs/example_data/Basic_example_vuegen_demo_notebook/"
else:
# Output directory for local execution
base_output_dir = "./example_data/Basic_example_vuegen_demo_notebook/"
</code>
<code>
# Optional library to launch a streamlit app from colab
if IN_COLAB:
!npm install localtunnel
</code>
### 0.2. Importing libraries
<code>
# Imports
import yaml
from vuegen import report_generator
from vuegen.utils import load_yaml_config
if IN_COLAB:
import urllib
</code>
## 1. Report generation
To generate different report types, just modify the report_type variable. The available types are:
* streamlit
* html
* pdf
* docx
* odt
* revealjs
* pptx
* jupyter
### 1.1. Streamlit report
To launch the Streamlit web application from Colab, open the generated URL and copy the localtunnel entry point IP into the corresponding field on the opened page. Once submitted, you will be redirected to your Streamlit web application.
<code>
# Generate the report
report_type = "streamlit"
report_dir, config_path = report_generator.get_report(
dir_path=base_output_dir, report_type=report_type, logger=None
)
print(f"\nReport generated in {report_dir}")
print(f"\nConfig file generated in {config_path}")
</code>
<code>
run_streamlit = False
# run_streamlit = True # uncomment line to run the streamlit report
# Launch the Streamlit report depending on the platform
if not IN_COLAB and run_streamlit:
!streamlit run streamlit_report/sections/report_manager.py
elif run_streamlit:
# see: https://discuss.streamlit.io/t/how-to-launch-streamlit-app-from-google-colab-notebook/42399
print(
"Password/Enpoint IP for localtunnel is:",
urllib.request.urlopen("https://ipv4.icanhazip.com")
.read()
.decode("utf8")
.strip("\n"),
)
# Run the Streamlit app in the background
!streamlit run streamlit_report/sections/report_manager.py --server.address=localhost &>/content/logs.txt &
# Expose the Streamlit app on port 8501
!npx localtunnel --port 8501 --subdomain vuegen-demo
else:
print("Streamlit report not executed, set run_streamlit to True to run the report")
</code>
### 1.2. HTML and other report types
<code>
# Generate the report
report_type = "html"
report_dir, config_path = report_generator.get_report(
dir_path=base_output_dir, report_type=report_type, logger=None
)
print(f"Report generated at: {report_dir}")
</code>
## 2. Extending the report
Now, we will extend the report by modifying the configuration file to include a logo and graphical abstract on the main page, a description for a section and a subsection, and a new plot from a URL. We are modifying this file from the notebook, but it is also possible to do it directly in the configuration file with a text editor. A rough sketch of the configuration structure is shown below.
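The sketch below is hand-written for orientation only: it mirrors the keys (`report`, `sections`, `subsections`, `components`) that the following cells edit, while the concrete titles and file paths are hypothetical and the real YAML generated by VueGen may contain additional fields.
<code>
# Hedged sketch of the configuration structure as a Python dict (hypothetical values)
example_config = {
    "report": {
        "title": "Basic example vuegen demo notebook",
        "logo": None,
        "graphical_abstract": None,
    },
    "sections": [
        {
            "title": "Plots",
            "description": "",
            "subsections": [
                {
                    "title": "Static Plots",
                    "description": "",
                    "components": [
                        {
                            "title": "Example static plot",
                            "file_path": "path/to/plot.png",
                            "caption": "",
                            "component_type": "plot",
                            "plot_type": "static",
                        }
                    ],
                }
            ],
        }
    ],
}
</code>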
### 2.1. Adding a logo and graphical abstract
<code>
vuegen_logo_path = "https://raw.githubusercontent.com/Multiomics-Analytics-Group/vuegen/main/docs/images/vuegen_logo.svg"
# Load the YAML file
print(
f"Loading the YAML config file from: {config_path}"
) # generated based on directory path above
config = load_yaml_config(config_path)
# Update the logo and graphical abstract with the URL
config["report"].update(
{"logo": vuegen_logo_path, "graphical_abstract": vuegen_logo_path}
)
</code>
### 2.2. Including a description for a section and a subsection
<code>
# Update the description for the Plots section
for section in config["sections"]:
if section["title"] == "Plots":
section["description"] = "This section contains example plots"
# Update the description for the All Formats subsection of the Dataframes section
for section in config["sections"]:
if section["title"] == "Dataframes":
for subsection in section["subsections"]:
if subsection["title"] == "All Formats":
subsection["description"] = (
"This subsection contains example dataframes."
)
</code>
### 2.3. Adding a new plot from a url
<code>
# Define new plot with a URL as the file path
vuegen_abst_fig = {
"title": "Graphical overview of VueGen’s workflow and components",
"file_path": "https://raw.githubusercontent.com/Multiomics-Analytics-Group/vuegen/main/docs/images/vuegen_graph_abstract.png",
"description": "",
"caption": "The diagram illustrates the processing pipeline of VueGen, starting from either a directory or a YAML configuration file. Reports consist of hierarchical sections and subsections, each containing various components such as plots, dataframes, Markdown, HTML, and data retrieved via API calls.",
"component_type": "plot",
"plot_type": "static",
}
# Add the plot to the Sample Provenance subsection in the EDA section
for section in config["sections"]:
if section["title"] == "Plots":
for subsection in section["subsections"]:
if subsection["title"] == "Static Plots":
subsection["components"].append(vuegen_abst_fig)
# Save the modified YAML file
with open(config_path, "w") as file:
yaml.dump(config, file, default_flow_style=False)
</code>
### 2.5. Streamlit report with the extended configuration file
To launch the Streamlit web application from Colab, open the generated URL and copy the localtunnel entry point IP into the corresponding field on the opened page. Once submitted, you will be redirected to your Streamlit web application.
<code>
# Test the changes by generating the report from the modified YAML file
report_type = "streamlit"
_ = report_generator.get_report(
config_path=config_path, report_type=report_type, logger=None
)
</code>
<code>
run_streamlit = False
# run_streamlit = True # uncomment line to run the streamlit report
# Launch the Streamlit report depending on the platform
if not IN_COLAB and run_streamlit:
!streamlit run streamlit_report/sections/report_manager.py
elif run_streamlit:
# see: https://discuss.streamlit.io/t/how-to-launch-streamlit-app-from-google-colab-notebook/42399
print(
"Password/Enpoint IP for localtunnel is:",
urllib.request.urlopen("https://ipv4.icanhazip.com")
.read()
.decode("utf8")
.strip("\n"),
)
# Run the Streamlit app in the background
!streamlit run streamlit_report/sections/report_manager.py --server.address=localhost &>/content/logs.txt &
# Expose the Streamlit app on port 8501
!npx localtunnel --port 8501 --subdomain vuegen-demo
else:
print("Streamlit report not executed, set run_streamlit to True to run the report")
</code>
### 2.6. HTML and other report types with the extended configuration file
<code>
# Test the changes by generating the report from the modified YAML file
report_type = "html"
_ = report_generator.get_report(
config_path=config_path, report_type=report_type, logger=None
)
</code>
|
{
"filename": "vuegen_basic_case_study_1.ipynb",
"repository": "Multiomics-Analytics-Group/vuegen",
"query": "transformed_from_existing",
"size": 17120,
"sha": ""
}
|
# BioEmu.ipynb
Repository: pokynmr/POKY
# **Biomolecular Emulator (BioEmu) in ColabFold**
<img src="https://github.com/microsoft/bioemu/raw/main/assets/emu.png" height="130" align="right" style="height:240px">
[BioEmu](https://github.com/microsoft/bioemu) is a framework for emulating biomolecular dynamics and integrating structural prediction tools to accelerate research in structural biology and protein engineering. This notebook uses BioEmu with ColabFold to generate the MSA and identify cluster conformations using Foldseek.
For more details, please read the [BioEmu Preprint](https://www.biorxiv.org/content/10.1101/2024.12.05.626885v2).
## To run
Either run each cell sequentially, or click on `Runtime -> Run All` after choosing the desired sampling config
<code>
#@title Sample with following config
#@markdown - `sequence`: Monomer sequence to sample
sequence = "MADQLTEEQIAEFKEAFSLFDKDGDGTITTKELGTVMRSLGQNPTEAELQDMINEVDADGNGTIDFPEFLTMMARKMKDTDSEEEIREAFRVFDKDGNGYISAAELRHVMTNLGEKLTDEEVDEMIREADIDGDGQVNYEEFVQMMTAK" #@param {type:"string"}
#@markdown - `num_samples`: Number of samples requested
num_samples = 100 #@param {type:"integer"}
#@markdown - `jobname`: Name assigned to this job
jobname = "calmodulin" #@param {type:"string"}
#@markdown - `filter_samples`: Whether to filter unphysical samples (e.g., those containing chain breaks) from the written samples
filter_samples = True #@param {type:"boolean"}
# #@param {type:"boolean"}
# ------------------------
# Copied logic from ColabFold
# ------------------------
import os
import re
import hashlib
def add_hash(x, seq):
"""Append a short SHA-1 hash of seq to x."""
return x + "_" + hashlib.sha1(seq.encode()).hexdigest()[:5]
def folder_is_free(folder):
"""Return True if folder doesn't exist."""
return not os.path.exists(folder)
jobname_clean = re.sub(r'\W+', '', jobname)
sequence = "".join(sequence.split())
jobname = add_hash(jobname_clean, sequence)
if not folder_is_free(jobname):
n = 0
while not folder_is_free(f"{jobname}_{n}"):
n += 1
jobname = f"{jobname}_{n}"
output_dir = os.path.join("/content", jobname)
os.makedirs(output_dir, exist_ok=True)
</code>
<code>
#@title Install dependencies
import os
import sys
_is_bioemu_setup_file = '/content/.BIOEMU_SETUP'
if not os.path.exists(_is_bioemu_setup_file):
os.system('wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh')
os.system('chmod +x Miniconda3-latest-Linux-x86_64.sh')
os.system('./Miniconda3-latest-Linux-x86_64.sh -b -f -p /usr/local')
os.system('conda install -q -y --prefix /usr/local python=3.11')
os.system('uv pip install bioemu[md]')
os.system('wget https://mmseqs.com/foldseek/foldseek-linux-avx2.tar.gz; tar xvzf foldseek-linux-avx2.tar.gz')
sys.path.append('/usr/local/lib/python3.11/site-packages/')
os.environ['CONDA_PREFIX'] = '/usr/local/'
os.environ['CONDA_PREFIX_1'] = '/usr/local/envs/myenv'
os.environ['CONDA_DEFAULT_ENV'] = 'base'
os.system(f"touch {_is_bioemu_setup_file}")
os.unlink('Miniconda3-latest-Linux-x86_64.sh')
</code>
<code>
#@title Run BioEmu
from bioemu.sample import main as sample
output_dir = f'/content/{jobname}'
sample(sequence=sequence, num_samples=num_samples, output_dir=output_dir, filter_samples=filter_samples)
</code>
<code>
#@title Write samples and run `foldseek`
#@markdown - `n_write_samples`: Number of samples to randomly select for clustering. Set to `-1` to select all available samples
#@markdown - `tmscore_threshold`: TM-score threshold used for foldseek clustering
#@markdown - `coverage_threshold`: Coverage threshold used for foldseek clustering
#@markdown - `seq_id`: Sequence identity threshold used for foldseek clustering
n_write_samples = -1 #@param {type:"integer"}
tmscore_threshold = 0.6 #@param {type: "number"}
coverage_threshold = 0.7 #@param {type: "number"}
seq_id = 0.95 #@param {type: "number"}
import numpy as np
import mdtraj
_py3dmol_installed_file = '/content/.py3dmol'
if not os.path.exists(_py3dmol_installed_file):
os.system('uv pip install py3Dmol')
os.system(f"touch {_py3dmol_installed_file}")
import py3Dmol
pdb_sample_dir = os.path.join('/content', 'pdb_samples')
os.makedirs(pdb_sample_dir, exist_ok=True)
def write_some_samples(topology_file: str, trajectory_file: str, output_dir:str, n_samples: int) -> None:
traj = mdtraj.load(trajectory_file, top=topology_file)
assert traj.n_frames >= n_samples
if n_samples == -1:
sample_indices = np.arange(traj.n_frames)
else:
sample_indices = np.random.choice(np.arange(traj.n_frames), size=n_samples, replace=False)
for idx in sample_indices:
traj[idx].save_pdb(os.path.join(output_dir, f'sample_{idx}.pdb'))
topology_file = os.path.join(output_dir, "topology.pdb")
trajectory_file = os.path.join(output_dir, "samples.xtc")
write_some_samples(topology_file=topology_file,
trajectory_file=trajectory_file,
output_dir=pdb_sample_dir,
n_samples=n_write_samples)
# Foldseek
import os
import subprocess
import tempfile
import pandas as pd
def parse_foldseek_cluster_results(cluster_table_path: str) -> dict[int, list[str]]:
"""
Parses the result of foldseek clustering
Args:
cluster_table: path of the output cluster table from foldseek
Returns:
Dictionary mapping cluster indices to members
"""
cluster_table = pd.read_csv(cluster_table_path, sep=r"\s+", header=None)
cluster_idx_to_members = {}
for index, group in enumerate(cluster_table.groupby(0)):
cluster_idx_to_members[index] = sorted(list(group[1][1]))
return cluster_idx_to_members
def foldseek_cluster(
input_dir: str,
out_prefix: str | None = None,
tmscore_threshold: float = 0.7,
coverage_threshold: float = 0.9,
seq_id: float = 0.7,
coverage_mode: int = 1,
) -> dict[int, set[str]]:
"""
Runs foldseek easy cluster
Args:
input_dir (str): input directory with .cif or .pdb files
out_prefix (str | None): the prefix of the output files, if None a temporary directory will be used
tmscore_threshold (float): the tm-score threshold used for clustering
coverage_threshold (float): the coverage threshold used for clustering
seq_id (float): the sequence identity threshold used for clustering
coverage_mode (int): mode used by mmseqs/foldseek to compute coverage
Returns:
Dictionary mapping cluster indices to members
"""
with tempfile.TemporaryDirectory() as temp_dir:
with tempfile.TemporaryDirectory() as temp_out_dir:
if out_prefix is None:
out_prefix = os.path.join(temp_out_dir, "output")
res = subprocess.run(
"/content/foldseek/bin/foldseek easy-cluster "
+ input_dir
+ " "
+ out_prefix
+ " "
+ temp_dir
+ " -c "
+ str(coverage_threshold)
+ " --min-seq-id "
+ str(seq_id)
+ " --tmscore-threshold "
+ str(tmscore_threshold)
+ " --cov-mode "
+ str(coverage_mode)
+ " --single-step-clustering",
shell=True,
)
assert res.returncode == 0, "Something went wrong with foldseek"
cluster_idx_to_members = parse_foldseek_cluster_results(out_prefix + "_cluster.tsv")
return cluster_idx_to_members
!chmod +x '/content/foldseek/bin/foldseek'
# Get foldseek clusters
clusters = foldseek_cluster(input_dir=pdb_sample_dir, tmscore_threshold=tmscore_threshold,
coverage_threshold=coverage_threshold, seq_id=seq_id)
n_clusters = len(clusters)
print(f'{n_clusters} clusters detected')
# Write foldseek clusters to output dir
import json
with open(os.path.join(output_dir, 'foldseek_clusters.json'), 'w') as json_handle:
json.dump(clusters, json_handle)
# Write XTC with one sample per cluster only
cluster_trajs = []
for _cluster_idx, samples in clusters.items():
sample = list(samples)[0] # Choose first sample in cluster
pdb_file = os.path.join(pdb_sample_dir, f"{sample}.pdb")
traj = mdtraj.load_pdb(pdb_file)
cluster_trajs.append(traj)
joint_traj = mdtraj.join(cluster_trajs)
cluster_topology_file = os.path.join(output_dir, "clustered_topology.pdb")
cluster_trajectory_file = os.path.join(output_dir, "clustered_samples.xtc")
joint_traj[0].save_pdb(cluster_topology_file)
joint_traj.save_xtc(cluster_trajectory_file)
</code>
<code>
#@title Display structure
import os
import ipywidgets as widgets
import py3Dmol
from IPython.display import display, clear_output
# Create interactive widgets for cluster and sample selection.
cluster_slider = widgets.IntSlider(
value=0,
min=0,
max=n_clusters - 1,
step=1,
description='Cluster No:',
continuous_update=False
)
sample_slider = widgets.IntSlider(
value=0,
min=0,
max=0, # will update based on the selected cluster
step=1,
description='Sample Idx:',
continuous_update=False
)
display(cluster_slider, sample_slider)
# Function to visualize a PDB file using py3Dmol.
def show_pdb(pdb_file: str, show_sidechains: bool = False, show_mainchains: bool = True):
view = py3Dmol.view(js='https://3dmol.org/build/3Dmol.js')
try:
with open(pdb_file, 'r') as f:
pdb_content = f.read()
except FileNotFoundError:
print(f"File not found: {pdb_file}")
return None
view.addModel(pdb_content, 'pdb')
view.setStyle({'cartoon': {'color': 'spectrum'}})
if show_sidechains:
BB = ['C', 'O', 'N']
view.addStyle({'and': [{'resn': ["GLY", "PRO"], 'invert': True}, {'atom': BB, 'invert': True}]},
{'stick': {'colorscheme': "WhiteCarbon", 'radius': 0.3}})
view.addStyle({'and': [{'resn': "GLY"}, {'atom': 'CA'}]},
{'sphere': {'colorscheme': "WhiteCarbon", 'radius': 0.3}})
view.addStyle({'and': [{'resn': "PRO"}, {'atom': ['C', 'O'], 'invert': True}]},
{'stick': {'colorscheme': "WhiteCarbon", 'radius': 0.3}})
if show_mainchains:
BB = ['C', 'O', 'N', 'CA']
view.addStyle({'atom': BB}, {'stick': {'colorscheme': "WhiteCarbon", 'radius': 0.3}})
view.zoomTo()
return view
# Helper to update the sample slider's maximum value based on the selected cluster.
def update_sample_slider(cluster_no):
available_samples = list(clusters[cluster_no])
sample_slider.max = max(len(available_samples) - 1, 0)
# Reset sample_slider's value if it's out of range.
if sample_slider.value > sample_slider.max:
sample_slider.value = 0
# Main function to update the viewer whenever widget values change.
def update_view(change=None):
cluster_no = cluster_slider.value
update_sample_slider(cluster_no)
available_samples = list(clusters[cluster_no])
sample_idx = sample_slider.value
clear_output(wait=True)
display(cluster_slider, sample_slider)
if sample_idx >= len(available_samples):
print(f"Only {len(available_samples)} samples available in cluster {cluster_no}")
return
chosen_sample = available_samples[sample_idx]
pdb_path = os.path.join("pdb_samples", f"{chosen_sample}.pdb")
# Check if the file exists before attempting to open it.
if not os.path.exists(pdb_path):
print(f"File not found: {pdb_path}")
return
print(f"Displaying sample {sample_idx} from cluster {cluster_no}")
view = show_pdb(pdb_path)
if view:
view.show()
# Observe changes to the slider values.
cluster_slider.observe(update_view, names='value')
sample_slider.observe(update_view, names='value')
# Trigger an initial update.
update_view()
</code>
<code>
#@title (Optional) Reconstruct sidechains + Run MD relaxation
#@markdown - `reconstruct_sidechains`: whether to reconstruct sidechains via `hpacker`
#@markdown - `run_md`: check to run MD after sidechain reconstruction, otherwise only sidechain reconstruction is performed
#@markdown - `md_protocol`: `LOCAL_MINIMIZATION`: fast but only resolves local problems ; `NVT_EQUIL`: slow but might resolve more severe issues
#@markdown - `one_per_cluster`: Reconstruct sidechains / optionally run MD for only one sample within each foldseek cluster
#@markdown **WARNING**: this step can be quite expensive depending on how many samples you have requested / sequence length. You may want to check the `one_per_cluster` option.
reconstruct_sidechains = False #@param {type: "boolean"}
run_md = True #@param {type:"boolean"}
one_per_cluster = True #@param {type:"boolean"}
md_protocol = "LOCAL_MINIMIZATION" #@param ["LOCAL_MINIMIZATION", "NVT_EQUIL"] {type:"string"}
import bioemu.sidechain_relax
bioemu.sidechain_relax.HPACKER_PYTHONBIN = '/usr/local/envs/hpacker/bin/python'
from bioemu.sidechain_relax import main as sidechainrelax
from bioemu.sidechain_relax import MDProtocol
md_protocol = MDProtocol[md_protocol]
os.environ['CONDA_PREFIX_1'] = '/usr/local/'
if one_per_cluster:
topology_file = cluster_topology_file
trajectory_file = cluster_trajectory_file
prefix = 'hpacker-openmm'
if reconstruct_sidechains:
relaxed_dir = os.path.join(output_dir, prefix)
os.makedirs(relaxed_dir, exist_ok=True)
sidechainrelax(pdb_path=topology_file, xtc_path=trajectory_file,
outpath=relaxed_dir, prefix=prefix, md_protocol=md_protocol,
md_equil=run_md)
if run_md:
os.system(f'touch {relaxed_dir}/.RELAXED')
</code>
<code>
#@title Package and download results
from google.colab import files
import tarfile
from glob import glob
# Delete bioemu .npz batch files
npz_files = glob(os.path.join(output_dir, "*.npz"))
[os.unlink(npz) for npz in npz_files]
# Add sidechain reconstruction files to output (#82)
import shutil
sidechain_topology = '/content/hpacker-openmm_sidechain_rec.pdb'
sidechain_trajectory = '/content/hpacker-openmm_sidechain_rec.xtc'
if os.path.exists(sidechain_topology) and os.path.exists(sidechain_trajectory):
shutil.copyfile(sidechain_topology, os.path.join(relaxed_dir, os.path.basename(sidechain_topology)))
shutil.copyfile(sidechain_trajectory, os.path.join(relaxed_dir, os.path.basename(sidechain_trajectory)))
# Add small README
README = """
# BioEmu Colab output:
`samples.xtc` and `topology.pdb`: Trajectory and topology files of all drawn samples.
`clustered_samples.xtc` and `clustered_topology.pdb`: Trajectory and topology files of clustered samples via
foldseek using the parameters specified in the notebook.
`foldseek_clusters.json`: Foldseek cluster assignment of all drawn samples
`sequence.fasta`: FASTA file containing the sequence that was sampled
`hpacker-openmm/`
|- `hpacker-openmm_sidechain_rec.pdb` and `hpacker-openmm_sidechain_rec.xtc`: Contain sidechain
reconstructed samples via `hpacker`.
|- `hpacker-openmm_md_equil.pdb` and `hpacker-openmm_md_equil.xtc`: Contain MD-equilibrated samples
after sidechain reconstruction.
For issues, please visit the [`bioemu` GitHub repository](https://github.com/microsoft/bioemu)
"""
with open(os.path.join(output_dir, "README.md"), "w") as readme_handle:
readme_handle.write(README)
citations = {
"Lewis2024": """@article{Lewis2024,
author = {Lewis, Sarah and Hempel, Tim and Jim{\'e}nez-Luna, Jos{\'e} and Gastegger, Michael and Xie, Yu and Foong, Andrew Y. K. and Satorras, Victor Garc{\'\i}a and Abdin, Osama and Veeling, Bastiaan S. and Zaporozhets, Iryna and Chen, Yaoyi and Yang, Soojung and Schneuing, Arne and Nigam, Jigyasa and Barbero, Federico and Stimper, Vincent and Campbell, Andrew and Yim, Jason and Lienen, Marten and Shi, Yu and Zheng, Shuxin and Schulz, Hannes and Munir, Usman and Tomioka, Ryota and Clementi, Cecilia and No{\'e}, Frank},
doi = {10.1101/2024.12.05.626885},
journal = {bioRxiv},
title = {{Scalable emulation of protein equilibrium ensembles with generative deep learning}},
year = {2025},
comment = {BioEmu prediction}
}""",
"Mirdita2021": """@article{Mirdita2022,
author= {Mirdita, Milot and Schütze, Konstantin and Moriwaki, Yoshitaka and Heo, Lim and Ovchinnikov, Sergey and Steinegger, Martin },
doi = {10.1038/s41592-022-01488-1},
journal = {Nature Methods},
title = {{ColabFold: Making Protein folding accessible to all}},
year = {2022},
comment = {ColabFold MMseqs2 MSA server}
}""",
"VanKempen2023": """@article{VanKempen2023,
author = {van Kempen, Michel and Kim, Stephanie S and Tumescheit, Charlotte and Mirdita, Milot and Lee, Jeongjae and Gilchrist, Cameron L M and S{\"{o}}ding, Johannes and Steinegger, Martin},
doi = {10.1038/s41587-023-01773-0},
journal = {Nature Biotechnology},
title = {{Fast and accurate protein structure search with Foldseek}},
year = {2023},
comment = {Clustering structures}
}""",
}
from pathlib import Path
def write_bibtex(
result_dir: Path,
bibtex_file: str = "cite.bibtex",
) -> Path:
to_cite = ["Lewis2024"]
to_cite += ["Mirdita2021"]
to_cite += ["VanKempen2023"]
bibtex_file = result_dir.joinpath(bibtex_file)
with bibtex_file.open("w", encoding="utf-8") as writer:
for i in to_cite:
writer.write(citations[i])
writer.write("\n")
print(f"Found {len(to_cite)} citations for tools or databases")
return bibtex_file
write_bibtex(Path(output_dir))
output_tarfile = f'/content/{jobname}.tar.gz'
with tarfile.open(output_tarfile, 'w:gz') as tar_handle:
tar_handle.add(name=f'{output_dir}', arcname=os.path.basename(output_dir), recursive=True)
files.download(output_tarfile)
</code>
|
{
"filename": "BioEmu.ipynb",
"repository": "pokynmr/POKY",
"query": "transformed_from_existing",
"size": 25576,
"sha": ""
}
|
# mpf_1.ipynb
Repository: Doulos/ESE24-python
# Mit Python Fliegen
Copyright 2024 by Doulos
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at:
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
[](https://colab.research.google.com/github/doulos/ESE24-python/blob/main/mpf.ipynb)
<code>
import antigravity
</code>
<br/>
<div style="background-color: #FFD8B2 ; font-size: 24px; ">
1. Python Basic
</div>
### Q01*- What's the type of 1+2j ?
<code>
print( type(1+2j) )
</code>
### Q02*- What's the result of 5/2 ?
<code>
print("5/2 = ", 5/2)
</code>
<code>
print("5//2 =", 5//2)
</code>
### Q03*- When does an integer overflow?
Run the program below for different values of $n$ and discover when we get an integer overflow.
<code>
n = 5
x = 2
for i in range(n):
print(i,":",x)
x=x**2
</code>
<br/>
<div style="background-color: #FFD8B2 ; font-size: 24px; ">
2. Class Recap
</div>
### Q04*- Fix the error
<code>
class Point:
def setxy(x, y):
self.x = x
self.y = y
def display():
print( (self.x, self.y) )
p1 = Point()
p1.setxy(1,2)
p1.display()
</code>
### Q05**- What happens if we run the code below?
- Do you have any explanation?
<code>
class Coord:
def setxy(self, x, y):
self.x = x
self.y = y
def getxy(self):
return (self.x, self.y)
</code>
<code>
c3 = Coord()
print("c3: ", c3.getxy() )
</code>
### Q06*- What happens if you run the code below?
<code>
class Coord:
def __init__(self, x, y):
self.setxy(x,y)
def setxy(self, x, y):
self.x = x
self.y = y
def getxy(self):
return (self.x, self.y)
def display(self):
print("(x,y) = ", self.getxy())
</code>
<code>
c1 = Coord(1,1)
c1.display()
Coord.display(c1)
type(c1).display(c1)
</code>
### Q07***- Any idea why the following code does not work?
- Hint: print the value of `A.count` and `a1.count` between lines 10 and 11
<code>
class A:
count = 0
def __init__(self, inst_name):
print("init",inst_name)
print(f" was: {self.count=}")
self.count+=1
print(f" now: {self.count=}")
a1 = A("a1")
a2 = A("a2")
</code>
### Q08**- Run the program below.
- Can you explain it?
- How is it achieved in C++?
<code>
class Animal:
def make_sound(self):
self.sound()
def sound(self):
print("<...>")
class Dog(Animal):
def sound(self):
print("Woaf!")
class Cat(Animal):
def sound(self):
print("Meow!")
felix = Cat()
rex = Dog()
rex.make_sound()
felix.make_sound()
</code>
### Q09**- Run the program below
- What's the value of r1, r2, r3, r4? Is it expected?
- How would you implement this function in C++?
<code>
def add(a,b):
return a+b
r1 = add(1,2)
r2 = add(1.0, 2.0)
r3 = add("hello", " world!")
r4 = add(["hoooo"], [42])
print(f"{r1=} {r2=} {r3=} {r4=}")
</code>
<br/>
<div style="background-color: #FFD8B2 ; font-size: 24px; ">
3. Magic Methods
</div>
### Q10*- Run the program below.
- Have you seen some of the name before?
<code>
s = 'ESE 2024'
dir(s)
</code>
### Q11*- Run the program below.
- Any observations?
- What's the other (or more natural) way to get s3 and sz?
<code>
s1 = "Mit Python fliegen "
s2 = "beim ESE Kongress"
s3 = s1.__add__(s2)
print(f"{s3 = }")
s4 = "abcdef"
sz = s4.__len__()
print(f"{sz = }")
</code>
### Q12**- Using some magic methods, make the output nicer:
Expected output: \
`(-1,4)` \
`Coord(-1,4)`
<code>
class Coord:
def __init__(self, x, y):
self.setxy(x,y)
def setxy(self, x, y):
self.x = x
self.y = y
def getxy(self):
return (self.x, self.y)
# do not modify the lines below
c1 = Coord(-1,4)
print(c1)
print(f"{c1 = }")
</code>
<br/>
<div style="background-color: #F0E68C ; font-size: 24px; text-align: center;">
Coffee Break ☕
</div>
<code>
from time import sleep
from but_better import but_better # 3rd party package
coffee_break = but_better("jKCm4IpvF5E")
@coffee_break
def pause(minutes):
sleep(minutes*60)
pause(20)
</code>
<br/>
<div style="background-color: #FFD8B2 ; font-size: 24px; ">
4. Python Data Model
</div>
### Q13***- Look at the program below
- What do you expect? Run it and verify your claim!
- Uncomment line 5. What do you expect?
- Run it. Any explanations?
<code>
x = 0
def the_answer():
print(f"{x = }")
# x=42
the_answer()
print(f"now: {x=}")
</code>
### Q14**- Are arguments passed by value or by reference?
- Use the program below to figure out.
- Does the output make any sense to you?
<code>
def f(arg):
arg *= 2
return arg
a = 2
fa = f(a)
print(f'a=2, after f: {a=}')
b = [2]
fb = f(b)
print(f'b=[2], after f: {b=}')
</code>
### Q15***- How many objects do you spot in the following code:
<code>
import math
def area(r):
return math.pi*r**2
class Void:
pass
s = area(1.0)
nothing = Void()
</code>
### Q16*- Run the following code.
- Explain what you see
<code>
def f(x):
return 2*x
double = f
print(f"{double(4) = }")
</code>
### Q17**- Challenge!
- We want to run `mytest` with `new_A` (the new, API-compatible implementation of class `A`)
- We should not modify the `mytest` function
Hint: add a statement on line 14
<code>
class A:
def __init__(self):
print("A")
# don't touch this mytest function
def mytest():
print("run test")
inst = A()
class new_A:
def __init__(self):
print("super fancy A version 2.0")
# add a line here
mytest()
</code>
### Q18**- Investigate!
- What's the type of `wf`?
- call `wf`. What's printed out?
- What happens if we write `f=welcomer(f)` and call `f`?
<code>
def welcomer(client):
def wrapper():
print('welcome!')
client()
return wrapper
def f():
print('f is called')
wf = welcomer(f)
</code>
<br/>
<div style="background-color: #FFD8B2 ; font-size: 24px; ">
5. Iterators and Generators
</div>
### Q19**- Run the following code:
- why is nothing printed out?
- fix the code!
<code>
L = [2,3,5,7,11,13,17,19]
it = iter(L)
t = tuple(it)
for e in it:
print(e)
</code>
### Q20**- Run the code below
- What happens when you run the program?
- what is the type of `g`?
- call `next()` on g one, two, three times. Observations?
<code>
def my_generator():
for i in range(5):
yield i**2
print(f'resuming, {i=}...')
g = my_generator()
# v = next(g)
# print(f'{v=}')
</code>
### Q21*- Make it pythonic!
- Use a list comprehension to make the code below more pythonic.
<code>
import random
random.seed(42)
L = []
for i in range(10):
L.append(i*random.randrange(1,7))
print(L)
</code>
### Q22**- Use Generator Expression
- Same as previous exercise, but use a generator expression instead.
- How do you print all values?
<code>
import random
random.seed(42)
G = ...
</code>
<br/>
<div style="background-color: #FFD8B2 ; font-size: 24px; ">
6. Decorator
</div>
### Q23**- Investigate!
- Run the program below. Is the output expected?
- Add the following on line 7: `@welcomer`
- Re-run the program. Any observations?
<code>
def welcomer(client):
def wrapper():
print('welcome!')
client()
return wrapper
# change this line
def f():
print('f is called')
f()
</code>
### Q24***- Investigate
- change the definition of `g()` on line 8 to take one argument `x`
- change line 11 accordingly.
- does our `welcomer` decorator work? Why not?
- change the definition of `wrapper` on line 2 to fix `welcomer` for functions with one parameter.
- any idea how to make `welcomer` work for any function?
<code>
def welcomer(client):
def wrapper():
print('welcome!')
client()
return wrapper
@welcomer
def g():
print('g is called')
g()
</code>
### Q25**- Investigate!
- What are the type of `by_2` and `by_3` ?
- What is printed out?
- When we return the inner object, what else needs to be retained for this to work?
<code>
def multiply_by_(n):
def inner(x):
return x*n
return inner
by_2 = multiply_by_(2)
by_3 = multiply_by_(3)
print(f"{by_2(5) = }")
print(f"{by_3(5) = }")
</code>
<br/>
<br/>
<div style="font-family: 'Candy Cane', cursive; font-size: 42px; text-align: left; text-shadow: 2px 2px 5px #fff;">
✈️ Have fun flying!
</div>
<code>
from time import sleep
from but_better import but_better
mit_python = but_better("fkaldvt3EOw")
@mit_python
def fliegen():
sleep(19)
print("🐍 Mit Python Fliegen ✈️")
fliegen()
</code>
<div style="font-family: 'Candy Cane', cursive; font-size: 42px; color: red; text-align: center; text-shadow: 2px 2px 5px #fff;">
🎄 Have a wonderful Advent season 🎄
</div>
|
{
"filename": "mpf_1.ipynb",
"repository": "Doulos/ESE24-python",
"query": "transformed_from_existing",
"size": 26244,
"sha": ""
}
|
# 02_preprocess_peak_data.ipynb
Repository: morris-lab/CellOracle
# Overview
Before building the base GRN, we need to annotate the coaccessible peaks and filter them to keep only active promoter/enhancer elements. First, we will identify the peaks around transcription start sites (TSS). We will then merge the Cicero data with the TSS peak information and filter out any peaks with weak connections to the TSS peaks. As such, the filtered peak data will only include TSS peaks and peaks with strong TSS connections. These will be our active promoter/enhancer elements for our base GRN.
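For orientation, the whole workflow of this notebook can be condensed into a few calls. The sketch below only previews the steps; the cells that follow walk through the same calls with explanations and checks, and the file names and the 0.8 coaccessibility threshold are the ones used later in this notebook.
<code>
# Condensed preview of this notebook's workflow (details and QC in the cells below).
import pandas as pd
from celloracle import motif_analysis as ma

peaks = pd.read_csv("all_peaks.csv", index_col=0).x.values               # scATAC-seq peaks
cicero_connections = pd.read_csv("cicero_connections.csv", index_col=0)  # Cicero coaccessibility scores
tss_annotated = ma.get_tss_info(peak_str_list=peaks, ref_genome="mm10")  # annotate TSS peaks
integrated = ma.integrate_tss_peak_with_cicero(tss_peak=tss_annotated,
                                               cicero_connections=cicero_connections)
peak = integrated[integrated.coaccess >= 0.8][["peak_id", "gene_short_name"]]
peak.reset_index(drop=True).to_csv("processed_peak_file.csv")
</code>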
### Notebook file
Notebook file is available on CellOracle GitHub page.
https://github.com/morris-lab/CellOracle/blob/master/docs/notebooks/01_ATAC-seq_data_processing/option1_scATAC-seq_data_analysis_with_cicero/02_preprocess_peak_data.ipynb
# 0. Import libraries
<code>
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import os, sys, shutil, importlib, glob
from tqdm.notebook import tqdm
</code>
<code>
from celloracle import motif_analysis as ma
import celloracle as co
co.__version__
</code>
<code>
%config InlineBackend.figure_format = 'retina'
plt.rcParams['figure.figsize'] = [6, 4.5]
plt.rcParams["savefig.dpi"] = 300
</code>
# 1. Load scATAC peak data and peak connection data made with Cicero
## 1.0. Download data
In this notebook, we will annotate and filter output from Cicero. Please refer to the previous step to learn about data preparation with Cicero.
https://morris-lab.github.io/CellOracle.documentation/tutorials/base_grn.html#step1-scatac-seq-analysis-with-cicero
Here, we will use the preprocessed fetal brain scATAC-seq data from step 1.
You can download the demo file by running the following command.
Note: If the download fails, please manually download and unzip the data.
https://raw.githubusercontent.com/morris-lab/CellOracle/master/docs/demo_data/all_peaks.csv
https://raw.githubusercontent.com/morris-lab/CellOracle/master/docs/demo_data/cicero_connections.csv
<code>
# Download file.
!wget https://raw.githubusercontent.com/morris-lab/CellOracle/master/docs/demo_data/all_peaks.csv
!wget https://raw.githubusercontent.com/morris-lab/CellOracle/master/docs/demo_data/cicero_connections.csv
# If you are using macOS, please try the following command.
#!curl -O https://raw.githubusercontent.com/morris-lab/CellOracle/master/docs/demo_data/all_peaks.csv
#!curl -O https://raw.githubusercontent.com/morris-lab/CellOracle/master/docs/demo_data/cicero_connections.csv
</code>
## 1.1. Load data
<code>
# Load scATAC-seq peak list.
peaks = pd.read_csv("all_peaks.csv", index_col=0)
peaks = peaks.x.values
peaks
</code>
<code>
# Load Cicero coaccessibility scores.
cicero_connections = pd.read_csv("cicero_connections.csv", index_col=0)
cicero_connections.head()
</code>
# 2. Annotate transcription start sites (TSSs)
## IMPORTANT: Please make sure that you are setting the correct reference genome.
If your scATAC-seq data was generated with the mm10 reference genome, please set `ref_genome="mm10"`.
You can check the supported reference genomes using `ma.SUPPORTED_REF_GENOME`.
If your reference genome is not in the list, please send a request to us through the CellOracle GitHub issue page.
<code>
ma.SUPPORTED_REF_GENOME
</code>
<code>
##!! Please make sure to specify the correct reference genome here
tss_annotated = ma.get_tss_info(peak_str_list=peaks, ref_genome="mm10")
# Check results
tss_annotated.tail()
</code>
# 3. Integrate TSS info and cicero connections
The output file after the integration process has three columns: `["peak_id", "gene_short_name", "coaccess"]`.
- "peak_id" is either a TSS peak or a peak that has a connection to a TSS peak.
- "gene_short_name" is the gene name associated with the TSS site.
- "coaccess" is the coaccessibility score between the peak and a TSS peak. If the score is 1, the peak is a TSS itself.
<code>
integrated = ma.integrate_tss_peak_with_cicero(tss_peak=tss_annotated,
cicero_connections=cicero_connections)
print(integrated.shape)
integrated.head()
</code>
# 4. Filter peaks
Remove peaks with weak coaccessibility scores.
<code>
peak = integrated[integrated.coaccess >= 0.8]
peak = peak[["peak_id", "gene_short_name"]].reset_index(drop=True)
</code>
<code>
print(peak.shape)
peak.head()
</code>
# 5. Save data
Save the promoter/enhancer peaks.
<code>
peak.to_csv("processed_peak_file.csv")
</code>
**Please go to the next step: Transcription factor motif scan**
https://morris-lab.github.io/CellOracle.documentation/tutorials/motifscan.html
|
{
"filename": "02_preprocess_peak_data.ipynb",
"repository": "morris-lab/CellOracle",
"query": "transformed_from_existing",
"size": 32413,
"sha": ""
}
|
# RNAseq.ipynb
Repository: hosseinshn/MOLI
<code>
from __future__ import print_function
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from scipy.stats import zscore
import seaborn as sns
import sys,os
from mapper import expand, parse_mapping_table, apply_mappers
%matplotlib inline
</code>
<code>
gene_id = "ENTREZID"
raw_data_dir = "/home/olya/SFU/Hossein/PDX/"
preprocessed_data_dir = "/home/olya/SFU/Hossein/v2/preprocessed/exprs/"
root_dir = "/home/olya/SFU/Hossein/v2/"
# wget https://media.nature.com/original/nature-assets/nm/journal/v21/n11/extref/nm.3954-S2.xlsx
# download Entrez ID mapping file
</code>
# PDX
- Gene symbols are converted to Entrez Gene IDs for 22665 genes
- gene expression profiles (FPKM) for 399 samples are converted to log2(TPM+1)
<code>
exprs = pd.read_excel(raw_data_dir+"nm.3954-S2.xlsx","RNAseq_fpkm")
exprs.set_index("Sample",inplace=True,drop=True)
print(exprs.shape)
exprs.head()
</code>
# Mapping of gene symbols to EntrezID using the current gene_info file provided by NCBI:
* Download and unzip the file
\# wget ftp://ftp.ncbi.nih.gov/gene/DATA/GENE_INFO/Mammalia/Homo_sapiens.gene_info.gz
\# gunzip Homo_sapiens.gene_info.gz;
* Specify the *hgnc_file* variable in this notebook
* Mapping strategy
1). Unknown genes and genes belonging to organisms other than H.sapiens were excluded.
2). First, each query symbol was matched directly against the current "Symbol" column. If the query symbol matched a current symbol that has no Gene ID, the query symbol was marked as not mapped.
3). If the query symbol matched none of the current symbols, we tried to match it against the "Synonyms" column. Genes that matched no synonym, or that matched ambiguous synonyms corresponding to more than one Gene ID, were considered not mapped.
4). At this point, many of the unrecognized symbols had the LOCXXXXXXXXX format.
According to the documentation provided by NCBI: "Symbols beginning with LOC. When a published symbol is not available, and orthologs have not yet been determined, Gene will provide a symbol that is constructed as 'LOC' + the GeneID. This is not retained when a replacement symbol has been identified, although queries by the LOC term are still supported. In other words, a record with the symbol LOC12345 is equivalent to GeneID = 12345. So if the symbol changes, the record can still be retrieved on the web using LOC12345 as a query, or from any file using GeneID = 12345" e.g.:
- LOC100093631 -> 100093631
- LOC100129726 -> 100129726
- etc.
Therefore, all genes starting with LOC were converted to Gene IDs by removing "LOC" from the query term. If the resulting Gene ID matched none of the current Gene IDs, the symbol was considered not mapped.
5). Several pairs of query gene symbols matched the current symbol and a synonym of the same Entrez Gene ID, e.g. AGAP8 and AGAP4, ANXA8L1 and ANXA8L2, etc.
Expression values of these genes were summed, because such genes were merged into a single gene in newer assembly versions.
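To make the LOC rule above concrete, here is a small standalone sketch. The actual mapping in this notebook is performed by the `apply_mappers` helper imported from the local `mapper` module; the function below is hypothetical and only illustrates step 4.
<code>
# Hypothetical helper illustrating the LOC-prefix rule described above
# (the real mapping is done by apply_mappers from the mapper module).
def loc_symbol_to_geneid(symbol, current_gene_ids):
    """Return the Gene ID for 'LOC<GeneID>' symbols if the ID is still current, else None."""
    if not symbol.startswith("LOC"):
        return None
    try:
        gene_id = int(symbol[3:])
    except ValueError:
        return None
    return gene_id if gene_id in current_gene_ids else None

# Example with the symbols quoted above
current_ids = {100093631, 100129726}
print(loc_symbol_to_geneid("LOC100093631", current_ids))  # -> 100093631
print(loc_symbol_to_geneid("LOC000000001", current_ids))  # -> None (not a current Gene ID)
</code>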
<code>
hgnc_file = root_dir+"HGNC_5.10.2018.txt"
hgnc = pd.read_csv(hgnc_file, sep ="\t",index_col=0)#
#print(hgnc.shape, len(set(hgnc.index.values)))
approved = hgnc.loc[hgnc["Status"] == "Approved",:]
hgnc_prev = expand(approved[["Previous Symbols","Entrez Gene ID"]],column="Previous Symbols",sep=", ")
hgnc_prev = parse_mapping_table(hgnc_prev, "Previous Symbols","Entrez Gene ID")
</code>
<code>
hgnc_syn = expand(approved[["Synonyms","Entrez Gene ID"]],column="Synonyms",sep=", ")
hgnc_syn = parse_mapping_table(hgnc_syn, "Synonyms","Entrez Gene ID")
</code>
<code>
NCBI = pd.read_csv(root_dir+"Homo_sapiens.gene_info",sep = "\t")
NCBI = NCBI[["#tax_id","GeneID","Symbol","Synonyms","type_of_gene"]]
NCBI = NCBI.loc[NCBI["#tax_id"] == 9606]
NCBI = NCBI.loc[NCBI["type_of_gene"] != "unknown"]
ncbi_symbols = parse_mapping_table(NCBI, "Symbol","GeneID")
</code>
<code>
ncbi_synonyms = expand(NCBI[["Synonyms","GeneID"]],column="Synonyms",sep="|")
ncbi_synonyms = parse_mapping_table(ncbi_synonyms, "Synonyms","GeneID")
</code>
<code>
exprs = apply_mappers(exprs, ncbi_symbols, ncbi_synonyms, verbose = True,handle_duplicates = "sum")
exprs.head(5)
</code>
### FPKM to TPM conversion
Let $X_i$ be the number of fragments mapped to a transcript, $N$ the total number of fragments sequenced (and assigned to any transcript), and $\widetilde{l_i}$ the effective length of the transcript (i.e. how many fragments of average length $\mu_{frag}$ a transcript of length $l_i$ can generate: $\widetilde{l_i} = l_i - \mu_{frag}+1$).
*FPKM* - fragments per kilobase of exon (i.e. effective length) per million reads mapped
$FPKM_i = \frac{X_i}{\frac{\widetilde{l_i}}{10^3}\cdot\frac{N}{10^6}} = \frac{X_i}{\widetilde{l_i}\,N}\cdot 10^9$
*TPM* - transcripts per million of transcripts.
In turn, $\frac{X_i}{\widetilde{l_i}}$ is the estimated number of transcripts.
$TPM_i = \frac{\frac{X_i}{\widetilde{l_i}}}{\sum_j{\frac{X_j}{\widetilde{l_j}}}}*10^6 $
### How to convert FPKM to TPM
Divide both the numerator and denominator by $N$ and multiply by $10^9$:
$TPM_i = \frac{\frac{X_i}{N\widetilde{l_i}}*10^9}{\sum_j{\frac{X_j}{N\widetilde{l_j}}*10^9}}*10^6 = \frac{FPKM_i}{\sum_j{FPKM_j}} * 10^6$
Sources:
- What the FPKM? https://haroldpimentel.wordpress.com/2014/05/08/what-the-fpkm-a-review-rna-seq-expression-units/
- https://www.biostars.org/p/160989/
- Lior Pachter https://arxiv.org/pdf/1104.3889.pdf
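As a quick sanity check (a sketch, not from the original analysis), the identity $TPM_i = \frac{FPKM_i}{\sum_j{FPKM_j}}\cdot 10^6$ can be verified on a toy vector: after conversion, the TPM values of a sample always sum to $10^6$.
<code>
# Toy check of the FPKM -> TPM identity derived above: TPM values sum to 1e6 per sample.
import numpy as np
fpkm = np.array([5.0, 20.0, 75.0])   # made-up FPKM values for one sample
tpm = fpkm / fpkm.sum() * 1e6
print(tpm)        # [ 50000. 200000. 750000.]
print(tpm.sum())  # 1000000.0
</code>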
<code>
## FPKM convert to log2(TPM+1)
sum_fpkm = exprs.apply(sum,axis=0)
sum_fpkm.head()
</code>
<code>
tpm = exprs / sum_fpkm *1000000 +1
tpm.head()
</code>
<code>
tpm = tpm.applymap(np.log2)
#tpm.to_csv(preprocessed_data_dir + "/PDX.FPKM2TPMplus1log2.Expr.tsv",sep="\t")
print(tpm.shape)
tpm.head()
</code>
# TCGA
* From http://gdac.broadinstitute.org/runs/stddata__2016_01_18/data/ download the RSEM files ("scaled estimate" per gene).
* The RSEM scaled estimate is the abundance of a transcript divided by the sum of abundances over all transcripts. Therefore $TPM_i = ScaledEstimate_i \cdot 10^6$.
* The resulting TPM values were log2-transformed.
<code>
# replace with downloading
tcga_tmp_dir = "/home/olya/SFU/Hossein/TCGA/expression__2016_01_28/"
</code>
<code>
f_ext = ".rnaseqv2__illuminahiseq_rnaseqv2__unc_edu__Level_3__RSEM_genes__data.data.txt"
for fpath in os.listdir(tcga_tmp_dir):
if fpath.startswith("gdac.broadinstitute.org") and not fpath.endswith(".tar.gz") :
cohort = fpath.split("_")[1].replace(".Merge","")
#print(fpath, cohort)
fname = cohort + f_ext
try:
exprs = pd.read_csv(tcga_tmp_dir+"/"+fpath+"/"+fname,sep="\t",index_col=0)
# drop "gene_id" and keep only "scaled_estimate" columns
exprs = exprs.loc[:,exprs.T.loc[exprs.T["gene_id"]=="scaled_estimate",:].index]
exprs = exprs.iloc[1:,]
exprs.rename(index = lambda x : int(x.split("|")[1]),
columns = lambda x : x.replace(".1",""),inplace = True)
exprs.index.name = "ENTREZID"
# convert scaled_extimates to log2(TPM+1)
exprs = exprs.applymap(lambda x : np.log2(float(x)*1000000+1))
exprs = exprs.sort_index()
exprs.to_csv(preprocessed_data_dir +"TCGA-"+cohort+"_exprs.RSEMscaled_est2TPMplus1log2.tsv",sep ="\t")
print(cohort,exprs.shape)
except:
print(cohort,"No expression data.")
</code>
<code>
exprs
</code>
|
{
"filename": "RNAseq.ipynb",
"repository": "hosseinshn/MOLI",
"query": "transformed_from_existing",
"size": 263232,
"sha": ""
}
|
# scRNAseq_Analysis_PartI_sample6_2.ipynb
Repository: SchoberLab/YF
# Analysis Part I - Preprocessing Sample 6
<code>
%load_ext autoreload
</code>
<code>
%matplotlib inline
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
warnings.filterwarnings(action='ignore')
</code>
<code>
import os
import scanpy as sc
import scirpy as ir
import anndata as ann
import numpy as np
import pandas as pd
import seaborn as sb
import matplotlib.pyplot as plt
from matplotlib import rcParams
from mudata import MuData
import mudata
import tarfile
import warnings
from glob import glob
import muon as mu
</code>
<code>
%autoreload 2
import sys
sys.path.append('..')
import utility.annotation as utils_annotation
import utility.representation as utils_representation
import utility.visualisation as utils_vis
</code>
<code>
sc.settings.set_figure_params(dpi=150)
sc.settings.verbosity = 3
sc.set_figure_params(vector_friendly=True, color_map='viridis', transparent=True)
sb.set_style('whitegrid')
</code>
Samples:
- Sample 2:
- D5_d7 #HA1
- D5_d11 #HA2
- D5_d14 #HA3
- D5_d21 #HA4
- D5_d28
- D5_d49 #d50 in reality
- D5_d90
- B42_d11
Dextrameres:
- NS4B214-222 -- LLWNGPMAV (A*02:01) -- TTGGCGATTCCTCCA
<code>
#Define the lists for later
hashtags = [f'sample{i}' for i in range(1, 9)]
epitope_ids = ['NS4B214']
cite_seqs = ['CD45RA', 'CCR7-1', 'CD95', 'CD62L']
feature_barcode_ids = hashtags + epitope_ids + cite_seqs
</code>
<code>
##Read data
# GEX data
datafile = "/media/agschober/HDD12/3_scRNA-Seq_Sina/Cellranger_output/2nd_Experiment/run2/outs/per_sample_outs/run2/count/sample_filtered_feature_bc_matrix.h5"
adata = sc.read_10x_h5(datafile, gex_only=False)
adata.var_names_make_unique()
# VDJ data
adata_vdj = ir.io.read_10x_vdj("/media/agschober/HDD12/3_scRNA-Seq_Sina/Cellranger_output/2nd_Experiment/run2/outs/per_sample_outs/run2/vdj_t/filtered_contig_annotations.csv")
#ir.pp.merge_with_ir(adata, adata_vdj)
# Epitope data
adata.uns['epitopes'] = epitope_ids
for e in epitope_ids:
adata.obs[e] = adata[:, e].X.A.copy()
# Hashtag data
adata.uns['hashtags'] = hashtags
for h in hashtags:
adata.obs[h] = adata[:, h].X.A.copy()
# CiteSeq Data
adata.uns['cite_ids'] = cite_seqs
for c in cite_seqs:
adata.obs[c] = adata[:, c].X.A.copy()
# Remove Barcodes from counts
adata = adata[:, [gene for gene in adata.var_names if gene not in feature_barcode_ids]]
adata.obs['sample'] = f'sample6'
adata.shape
</code>
<code>
#fuse the information of gene expression and tcr
adata = mu.MuData({"gex": adata, "airr": adata_vdj})
</code>
<code>
adata.shape
</code>
### Quality control
Basic analysis of counts, number of genes, and fraction of mitochondrial genes
<code>
adata["gex"].obs['n_counts'] = adata["gex"].X.A.sum(axis=1)
adata["gex"].obs['log_counts'] = np.log10(adata["gex"].obs['n_counts'])
adata["gex"].obs['n_genes'] = (adata["gex"].X.A > 0).sum(axis=1)
adata["gex"].obs['log_genes'] = np.log10(adata["gex"].obs['n_genes'])
mt_gene_mask = [gene.startswith('MT-') for gene in adata.var_names]
mt_gene_idx = np.where(mt_gene_mask)[0]
adata["gex"].obs['mt_frac'] = adata["gex"].X.A[:, mt_gene_idx].sum(1) / adata["gex"].X.A.sum(axis=1)
</code>
<code>
print('Mean # Genes: ', adata["gex"].obs['n_genes'].mean())
print('Median # Genes: ', adata["gex"].obs['n_genes'].median())
print('Mean # Counts: ', adata["gex"].obs['n_counts'].mean())
print('Median # Counts: ', adata["gex"].obs['n_counts'].median())
print('Mean % MT: ', adata["gex"].obs['mt_frac'].mean())
print('Median % MT: ', adata["gex"].obs['mt_frac'].median())
</code>
<code>
rcParams['figure.figsize'] = (4, 4)
sc.pl.violin(adata["gex"], ['n_counts'], size=1, log=False, rotation=90)
sc.pl.violin(adata["gex"], ['n_genes'], size=1, log=False, rotation=90)
sc.pl.violin(adata["gex"], ['mt_frac'], size=1, log=False, rotation=90)
</code>
- counts up to 15000, but mostly below 10000
- number of genes up to 6000, but mostly below 4000
- mitochondrial fraction up to 0.04
<code>
rcParams['figure.figsize'] = (8, 8)
sc.pl.scatter(adata["gex"], y='n_genes', x='n_counts', color ='mt_frac', size=10, show=False)
sc.pl.scatter(adata["gex"][np.logical_and(adata["gex"].obs['n_genes']<1500, adata["gex"].obs['n_counts']<8000)],
y='n_genes', x='n_counts', color='mt_frac', size=10, show=False)
plt.show()
</code>
<code>
b = ((adata['gex'].obs['n_counts']).sort_values()).to_list()
c = ((adata['gex'].obs['n_genes']).sort_values()).to_list()
</code>
<code>
plt.plot(b)
plt.ylabel('counts')
plt.xlabel('barcode')
</code>
<code>
plt.plot(c)
plt.ylabel('genes')
plt.xlabel('barcode')
</code>
<code>
plt.plot(b)
plt.ylabel('counts')
plt.xlabel('barcode')
plt.ylim((0,3000))
plt.xlim((0,1000))
</code>
<code>
plt.plot(c)
plt.ylabel('genes')
plt.xlabel('barcode')
plt.ylim((0,1000))
plt.xlim((0,200))
</code>
- remove cells with more than 4000 genes and more than 13000 counts
- remove cells with more than 0.1 mt_fraction
- remove cells with fewer than 600 genes and fewer than 1200 counts
- we use these threshold values from the other experiment because it is hard to see a clear drop in the curves here
### Filtering of the cells
<code>
params_filter = { 'mt_frac': 0.1,
'n_counts_min': 1200,
'n_counts_max': 13000,
'n_genes_min': 600,
}
</code>
<code>
print(f'Size before filtering: {len(adata)}')
adata = adata[adata["gex"].obs['mt_frac'] < params_filter['mt_frac']]
adata = adata[adata["gex"].obs['n_counts'] > params_filter['n_counts_min']]
adata = adata[adata["gex"].obs['n_counts'] < params_filter['n_counts_max']]
adata = adata[adata["gex"].obs['n_genes'] > params_filter['n_genes_min']].copy()
print(f'Size after filtering: {len(adata)}')
adata.shape
</code>
### QC after filtering
<code>
rcParams['figure.figsize'] = (4, 4)
sc.pl.violin(adata["gex"], ['n_counts'], size=1, log=False, rotation=90)
sc.pl.violin(adata["gex"], ['n_genes'], size=1, log=False, rotation=90)
sc.pl.violin(adata["gex"], ['mt_frac'], size=1, log=False, rotation=90)
rcParams['figure.figsize'] = (8, 8)
sc.pl.scatter(adata["gex"], y='n_genes', x='n_counts', color ='mt_frac', size=10, show=False)
sc.pl.scatter(adata["gex"][np.logical_and(adata["gex"].obs['n_genes']<1500, adata["gex"].obs['n_counts']<8000)],
y='n_genes', x='n_counts', color='mt_frac', size=10, show=False)
plt.show()
</code>
### TCR stats
<code>
ir.pp.index_chains(adata)
ir.tl.chain_qc(adata)
</code>
<code>
adata.obs['airr:chain_pairing'].loc[(adata.obs['airr:chain_pairing']).isna()] = 'no_IR'
</code>
<code>
adata.obs['airr:chain_pairing'].value_counts()
</code>
<code>
def get_percentages_tcr(data):
df = ir.get.airr(data, "junction_aa", ["VJ_1", "VDJ_1", "VJ_2", "VDJ_2"])
p_alpha = df['VJ_1_junction_aa'].notnull().mean()
p_beta = df['VDJ_1_junction_aa'].notnull().mean()
p_paired = (df['VDJ_1_junction_aa'].notnull() & df['VJ_1_junction_aa'].notnull()).mean()
return [p_alpha, p_beta, p_paired]
chains = ['Alpha', 'Beta', 'Paired']
percentages = get_percentages_tcr(adata)
df_tcr_fractions = {
'chain': chains,
'percentage': percentages
}
df_tcr_fractions = pd.DataFrame(df_tcr_fractions)
g = sb.barplot(data=df_tcr_fractions, y='percentage', x='chain')
_ = g.set_xticklabels(rotation=30, labels=chains)
</code>
### Normalise
<code>
sc.pp.normalize_total(adata["gex"], target_sum=1e4)
sc.pp.log1p(adata["gex"])
</code>
### Quick Visual Sanity Check
<code>
utils_representation.calculate_umap(adata["gex"], n_high_var=5000, remove_tcr_genes=True)
</code>
<code>
adata["gex"].obs['chain_pairing'] = adata.obs['airr:chain_pairing']
</code>
<code>
sc.pl.umap(adata["gex"])
</code>
<code>
rcParams['figure.figsize'] = (6, 6)
sc.pl.umap(adata["gex"], color=['chain_pairing'])
</code>
<code>
sc.pl.umap(adata["gex"], color=['n_counts', 'log_counts', 'n_genes', 'mt_frac'], ncols=2)
</code>
### Separate the samples
<code>
utils_vis.distributions_over_columns(adata["gex"], hashtags, 2, 4)
</code>
<code>
def hash_solo_by_sample(hashtag_cols, col_name, n_noise_barcodes):
adata["gex"].obs[col_name] = 'NaN'
dfs_donor = []
adata["gex"].obs = adata["gex"].obs.drop(col_name, axis=1)
sc.external.pp.hashsolo(adata["gex"], hashtag_cols, number_of_noise_barcodes=n_noise_barcodes)
adata["gex"].obs = adata["gex"].obs.rename(columns={'Classification': col_name})
hash_solo_by_sample(hashtags, 'pool', 3)
adata["gex"].obs['pool'].value_counts()
</code>
<code>
hash_solo_by_sample(hashtags, 'pool', 5)
adata["gex"].obs['pool'].value_counts()
</code>
<code>
rcParams['figure.figsize'] = (16, 4)
for h in hashtags:
adata["gex"].obs[f'log_{h}'] = np.log(adata["gex"].obs[h].values+1)
sb.violinplot(data=adata["gex"].obs[[f'log_{h}' for h in hashtags]], scale='area')
</code>
<code>
utils_vis.adt_counts_by_condition(adata["gex"], hashtags, 'pool', 8, 4, do_log=True)
</code>
<code>
rcParams['figure.figsize'] = (8, 8)
sc.pl.umap(adata["gex"], color='pool')
</code>
<code>
rcParams['figure.figsize'] = (8, 8)
adata_ha = ann.AnnData(X=adata["gex"].obs[adata["gex"].uns['hashtags']], obs=adata["gex"].obs[['pool']])
adata_ha.var_names = adata["gex"].uns['hashtags']
sc.pp.log1p(adata_ha)
sc.pp.neighbors(adata_ha)
sc.tl.umap(adata_ha)
sc.pl.umap(adata_ha, color=['pool'] + [f'sample{i}' for i in range(1, 9)], ncols=3,
save=f'sample6_hashtag_umap.pdf')
</code>
<code>
adata = adata[~adata["gex"].obs['pool'].isin(['Doublet', 'Negative'])]
</code>
### Remove Epitope Counts
<code>
epitope_2_sample = {'NS4B214': ['sample1', 'sample2', 'sample3', 'sample4', 'sample5', 'sample6', 'sample7', 'sample8'],}
</code>
<code>
for e, samples in epitope_2_sample.items():
adata["gex"].obs.loc[~adata["gex"].obs['pool'].isin(samples), e] = np.nan
</code>
### Remove Totalseq Counts
<code>
samples_full_totalseq = ['sample1', 'sample2', 'sample3', 'sample4', 'sample5', 'sample6', 'sample7', 'sample8']
</code>
<code>
for c in cite_seqs:
adata["gex"].obs.loc[~adata["gex"].obs['pool'].isin(samples_full_totalseq), c] = np.nan
</code>
### Save
<code>
adata["gex"].obs['pool'] = f'sample6' + adata["gex"].obs['pool'].astype(str)
adata.write(filename="/media/agschober/HDD12/3_scRNA-Seq_Sina/Preprocessing/data6.h5mu")
</code>
<code>
import session_info
session_info.show()
</code>
|
{
"filename": "scRNAseq_Analysis_PartI_sample6_2.ipynb",
"repository": "SchoberLab/YF",
"query": "transformed_from_existing",
"size": 22368,
"sha": ""
}
|
# Hierarchical Clustering using Euclidean Distance.ipynb
Repository: galkinc/Hierarchical-Clustering
# Hierarchical Clustering using Euclidean Distance
# Task 1: Introduction
## - Extending Skew Analysis
Six skews of different combinations of two nucleotides: CA-, GA-, UA-, UG-, UC-, and CG-skew are used to draw what is known as the **skew profile**. The skew profile is plotted using **cumulative** skew values and is determined by nucleotide **composition**, not the sequence. Despite the usefulness of this method for characterizing a genome's profile, one of its flaws is that it does not provide a quantitative comparison between more than one genome. This can be overcome with additional techniques such as [Euclidean distance](https://en.wikipedia.org/wiki/Euclidean_distance) matrices and [Neighbor-joining](https://en.wikipedia.org/wiki/Neighbor_joining) trees. Our target in the current project is to explore these options.
The following twenty skew profile graphs represent different strains of seven viruses: Corona and SARS viruses from the [Coronaviridae](https://en.wikipedia.org/wiki/Coronaviridae) family, Dengue, Zika, and West Nile viruses from the [Flaviviridae](https://en.wikipedia.org/wiki/Flaviviridae) family, Enterovirus from the [Picornaviridae](https://en.wikipedia.org/wiki/Picornavirus) family, and HIV from [Retroviridae](https://en.wikipedia.org/w/index.php?title=Retroviridae&redirect=yes). All these viruses belong to the realm [Riboviria](https://en.wikipedia.org/wiki/Riboviria).
<table><tr><td><img src='images/Corona_HCoV-NL63.png'></td><td><img src='images/Corona_MN988668_China.png'></td></tr></table>
<table><tr><td><img src='images/Corona_MT755827_Bangladesh.png'></td><td><img src='images/Corona_MT759582.1_India.png'></td></tr></table>
<table><tr><td><img src='images/Corona_MT766907.1_USA.png'></td><td><img src='images/Corona_NC_045512_China.png'></td></tr></table>
<table><tr><td><img src='images/SARS_BJ01.png'></td><td><img src='images/SARS_BJ02.png'></td></tr></table>
<table><tr><td><img src='images/SARS_BJ03.png'></td><td><img src='images/SARS_BJ04.png'></td></tr></table>
<table><tr><td><img src='images/Dengue_MT862858.png'></td><td><img src='images/Dengue_MT862893.png'></td></tr></table>
<table><tr><td><img src='images/EnterovirusA_NC038306.png'></td><td><img src='images/EnterovirusB_NC038307.png'></td></tr></table>
<table><tr><td><img src='images/EnterovirusD_NC038308.png'></td><td><img src='images/EnterovirusH_NC038309.png'></td></tr></table>
<table><tr><td><img src='images/HIV_NC001802.1.png'></td><td><img src='images/WestNile_NC009942.png'></td></tr></table>
<table><tr><td><img src='images/Zika_AY632535.png'></td><td><img src='images/Zika_MN101548.png'></td></tr></table>
If you've tried to compare the previous 20 graphs with each other, you know how difficult that is to do by eye. Instead, we will start from the skew profile data, like the values shown in the next table, compute the Euclidean distances among them, and build a [dendrogram](https://en.wikipedia.org/wiki/Dendrogram) based on the neighbor-joining tree algorithm, which shows the differences and similarities between the viruses.
No. | virus | strain |CA-Skew | GA-Skew | UA-Skew | UG-Skew | UC-Skew | CG-Skew*
----| :-------- | :-- | -- | -- | -- | -- |----------| ---------
1. | **Corona** | HCoV-NL63 |-8892.72|-13454.38|-5947.28|8392.05|3165.32 |5417.59
2. | **SARS** | BJ01 |-4673.66|-5852.78|-1008.29|4887.03|3697.07 |1216.47
3. | **Dengue** | MT862858 |816.63|-351.01|1728.46|2067.33|926.05 |1164.02
4. | **Enterovirus A** | NC038306 |33.56|201.33|429.91|229.41|397.55 |-165.63
5. | **HIV** | NC001802 |447.21|-939.66|2171.98|3040.84|1749.07 |1384.04
6. | **West Nile** | NC009942 |1324.54|-57.23|1054.20|1111.13|-273.42 |1382.03
7. | **Zika** | AY632535 |1381.19|-283.89|1147.27|1432.18|-237.85 |1663.35
*It should be noted that, in skew language, CG does not represent a CG base pair but a comparison of the C with the G nucleotide proportions.*
Our target in this project is to create a dendrogram like the next dendrograms created using [**MEGA** *X* (**M**olecular **E**volutionary **G**enetics **A**nalysis software )](https://www.megasoftware.net/). Please note that this dendrogram was constructed by relying on [pairwise distances](https://en.wikipedia.org/wiki/Distance_matrix) and [multiple sequence alignments](https://en.wikipedia.org/wiki/Multiple_sequence_alignment). The dendrogram that we will create will depend on the cumulative skew profile, which in turn depends on the nucleotide **composition**, not the sequence.
<table><tr><td><img src='images/dendogram_H.png'></td><td><img src='images/dendogram.png'></td></tr></table>
| Dendrogram (1) | Dendrogram (2) |
| --- | --- |
| **Vertical lines show the distances** | **Horizontal lines show the distances** |
The dendrogram can be drawn in any direction. In dendrogram (1), the lengths of the vertical lines show the distances between the strains, while the lengths of the horizontal lines in this form of the tree do not signify anything; they are merely drawn to space the strains conveniently. The opposite holds in dendrogram (2): there, the horizontal lines show the distances.
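As a preview (a minimal sketch; the notebook builds the full skew matrix from the FASTA files in Task 2), the cumulative skew values tabulated above can already be fed to the `linkage` and `dendrogram` functions imported below. Note that scipy performs agglomerative hierarchical clustering on pairwise Euclidean distances rather than MEGA's neighbor-joining, and the 'average' linkage method here is an assumption.
<code>
# Minimal sketch: hierarchical clustering of the cumulative skew values tabulated above.
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage

labels = ["Corona", "SARS", "Dengue", "Enterovirus A", "HIV", "West Nile", "Zika"]
skews = np.array([
    [-8892.72, -13454.38, -5947.28, 8392.05, 3165.32, 5417.59],
    [-4673.66, -5852.78, -1008.29, 4887.03, 3697.07, 1216.47],
    [816.63, -351.01, 1728.46, 2067.33, 926.05, 1164.02],
    [33.56, 201.33, 429.91, 229.41, 397.55, -165.63],
    [447.21, -939.66, 2171.98, 3040.84, 1749.07, 1384.04],
    [1324.54, -57.23, 1054.20, 1111.13, -273.42, 1382.03],
    [1381.19, -283.89, 1147.27, 1432.18, -237.85, 1663.35],
])
Z = linkage(skews, method="average", metric="euclidean")  # pairwise Euclidean distances
dendrogram(Z, labels=labels, orientation="left")          # horizontal tree, like dendrogram (2)
plt.xlabel("Euclidean distance")
plt.show()
</code>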
# Task 2:
## 2.1. Importing Libraries
First of all, we will need to import some libraries. These include
- [os](https://docs.python.org/3/library/os.html) - Miscellaneous operating system interfaces,
- [statistics](https://docs.python.org/3/library/statistics.html) - Mathematical statistics functions
- [numpy](https://numpy.org/) - Package for scientific computing,
- [pandas](https://pandas.pydata.org/) - data analysis and manipulation tool,
- [matplotlib](https://matplotlib.org/users/index.html) - Visualization with Python,
- [pyplot](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.html#module-matplotlib.pyplot) - Interactive plots (*MATLAB-like way of plotting*),
- [scipy](https://www.scipy.org/) - Python-based ecosystem of open-source software for mathematics, science, and engineering.
- [dendrogram](https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.dendrogram.html)
- [linkage](https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.linkage.html)
<code>
import os
import statistics
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
</code>
## 2.2. Locate the Data Files
All viruses that we will be analyzing are [RNA viruses](https://en.wikipedia.org/wiki/RNA_virus). In contrast to [DNA viruses](https://en.wikipedia.org/wiki/DNA_virus), which can reach **300** kilobases (kb) in size, RNA viruses have a size of about 30 kb. In late 2018, the [planarian nidovirus](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6211748/), with a genome size of **41** kb, was discovered, setting a new record for the length of RNA genomes.
**Classification & Genome sizes**:
- Riboviria
- Coronaviridae family,
- [Coronavirus](https://en.wikipedia.org/wiki/Severe_acute_respiratory_syndrome_coronavirus_2) ≈ 30 Kb
- [SARS Coronavirus](https://en.wikipedia.org/wiki/Severe_acute_respiratory_syndrome_coronavirus) ≈ 30 Kb
- Flaviviridae family,
- [Dengue virus](https://en.wikipedia.org/wiki/Dengue_virus) ≈ 11 Kb
- [Zika](https://en.wikipedia.org/wiki/Zika_virus) ≈ 11 Kb
- [West Nile virus](https://en.wikipedia.org/wiki/West_Nile_virus) ≈ 11 Kb
- Picornaviridae
- [Enterovirus](https://en.wikipedia.org/wiki/Enterovirus) ≈ 8 Kb
- Retroviridae.
- [HIV](https://en.wikipedia.org/wiki/HIV) ≈ 10 Kb
*N.B. Keep an eye on the [Corona_HCoV-NL63](https://en.wikipedia.org/wiki/Human_coronavirus_NL63) strain. It is one of the first strains discovered back in 2004 (doi: [10.1038/nm1024. Epub 2004 Mar 21](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7095789/)). It is interesting to know the differences and similarities between this strain and the newly discovered ones.*
<code>
# List all data files
# You need to change the path
data_path = 'C:/Users/Administrator/Documents/data'
for file in os.listdir(data_path):
print(file)
</code>
## 2.3. Calculate the Cumulative and Mean Skew Values.
<code>
# define 'bases_skew' function to calculate the skew values.
def bases_skew(A, B):
try: return (A - B) / (A + B)
except ZeroDivisionError: return 0
</code>
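As a quick sanity check of `bases_skew`, the running skew of a short made-up sequence can be computed directly; the toy string below is purely illustrative and is not part of the data set.

```python
# Toy example: running and mean CG skew of a short, made-up RNA string
toy_seq = 'CCGGCCAUCG'
C_count = G_count = 0
cg_skew = []
for base in toy_seq:
    if base == 'C':
        C_count += 1
    elif base == 'G':
        G_count += 1
    cg_skew.append(bases_skew(C_count, G_count))
print(np.cumsum(cg_skew))        # cumulative CG skew along the toy sequence
print(statistics.mean(cg_skew))  # mean CG skew of the toy sequence
```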
<code>
mat = np.array([]) # Cumulative skew values
mat2 = np.array([]) # Mean skew values
virus_names = list()
for file in os.listdir(data_path):
input_file_path = data_path + '/' + file
#print(input_file_path)
counter = 0; A_count = 0; C_count = 0; G_count = 0; U_count = 0
ca_skew = []; ga_skew = []; ua_skew = []; uc_skew = []; ug_skew = []; cg_skew = []
temp_DNA = '' # One line of Template DNA Sequence
with open(input_file_path,'r') as input_data:
header = input_data.readline().strip()
for line in input_data:
temp_DNA = line.strip()
for base in temp_DNA:
counter += 1
if base == "A":
U_count +=1
elif base == "C":
G_count +=1
elif base == "G":
C_count +=1
elif base == "T":
A_count +=1
ca_skew.insert(counter, bases_skew(C_count, A_count))
ga_skew.insert(counter, bases_skew(G_count, A_count))
ua_skew.insert(counter, bases_skew(U_count, A_count))
ug_skew.insert(counter, bases_skew(U_count, G_count))
uc_skew.insert(counter, bases_skew(U_count, C_count))
cg_skew.insert(counter, bases_skew(C_count, G_count))
#print('File name ', file)
#print('Total bases: ', counter)
#print('Cumulative ca_skew = ', np.cumsum(ca_skew)[len(ca_skew)-1])
#print('Cumulative ga_skew = ',np.cumsum(ga_skew)[len(ga_skew)-1])
#print('Cumulative ua_skew = ',np.cumsum(ua_skew)[len(ua_skew)-1])
#print('Cumulative ug_skew = ',np.cumsum(ug_skew)[len(ug_skew)-1])
#print('Cumulative uc_skew = ',np.cumsum(uc_skew)[len(uc_skew)-1])
#print('Cumulative cg_skew = ',np.cumsum(cg_skew)[len(cg_skew)-1])
#print('==============================================')
#print('Mean ca_skew = ', statistics.mean(ca_skew))
#print('Mean ga_skew = ', statistics.mean(ga_skew))
#print('Mean ua_skew = ', statistics.mean(ua_skew))
#print('Mean ug_skew = ', statistics.mean(ug_skew))
#print('Mean uc_skew = ', statistics.mean(uc_skew))
#print('Mean cg_skew = ', statistics.mean(cg_skew))
# Get the virus name from the file name
virus_Name = os.path.split(input_file_path)[1].split(".")[0]
# Insert the virus name into the virus names list
virus_names.append(virus_Name)
    c = 10 # scale factor applied to the mean skew values (divided out again later when displaying)
if mat.shape == (0,):
#print('The mat is empty, use hstack.')
mat = np.hstack((mat, np.array([np.cumsum(ca_skew)[len(ca_skew)-1],
np.cumsum(ga_skew)[len(ga_skew)-1],
np.cumsum(ua_skew)[len(ua_skew)-1],
np.cumsum(ug_skew)[len(ug_skew)-1],
np.cumsum(uc_skew)[len(uc_skew)-1],
np.cumsum(cg_skew)[len(cg_skew)-1]
])))
mat2 = np.hstack((mat2, np.array([(statistics.mean(ca_skew))*c,
(statistics.mean(ga_skew))*c,
(statistics.mean(ua_skew))*c,
(statistics.mean(ug_skew))*c,
(statistics.mean(uc_skew))*c,
(statistics.mean(cg_skew))*c
])))
else:
#print('The mat has at least one row, use vstack.')
mat = np.vstack((mat, np.array([np.cumsum(ca_skew)[len(ca_skew)-1],
np.cumsum(ga_skew)[len(ga_skew)-1],
np.cumsum(ua_skew)[len(ua_skew)-1],
np.cumsum(ug_skew)[len(ug_skew)-1],
np.cumsum(uc_skew)[len(uc_skew)-1],
np.cumsum(cg_skew)[len(cg_skew)-1]
])))
mat2 = np.vstack((mat2, np.array([(statistics.mean(ca_skew))*c,
(statistics.mean(ga_skew))*c,
(statistics.mean(ua_skew))*c,
(statistics.mean(ug_skew))*c,
(statistics.mean(uc_skew))*c,
(statistics.mean(cg_skew))*c
])))
#plt.figure(figsize=(9,6))
#plt.plot(np.cumsum(ca_skew), label="Cumulative CA Skew")
#plt.plot(np.cumsum(ga_skew), label="Cumulative GA Skew")
#plt.plot(np.cumsum(ua_skew), label="Cumulative UA Skew")
#plt.plot(np.cumsum(ug_skew), label="Cumulative UG Skew")
#plt.plot(np.cumsum(uc_skew), label="Cumulative UC Skew")
#plt.plot(np.cumsum(cg_skew), label="Cumulative CG Skew")
#plt.title(virus_Name + " Skew Profiles")
#plt.legend()
#plt.grid()
### You can use 'plt.savefig()' to save the plot, but that will slow down the code even more!
##plt.savefig(virus_Name + '.png', dpi=72, bbox_inches='tight')
#plt.show()
#plt.close()
input_data.close()
</code>
<code>
linkage(mat2)[0,]
</code>
# Task 3: Understand the data set
<code>
print(len(virus_names))
virus_names[:]
</code>
<code>
print('mat shape = ', mat.shape)
print('mat2 shape = ', mat2.shape)
</code>
<code>
print(mat[0,0])
print("{:.2f}".format(mat[0,0]))
</code>
<code>
print(mat[0,])
</code>
<code>
np.set_printoptions(linewidth=150)
print('0 ', mat[0,])
print('1 ', mat[1,])
print('2 ', mat[2,])
print('...', '...')
print('...', '...')
print('19 ', mat[19,])
</code>
<code>
print('0 ', mat2[0,])
print('1 ', mat2[1,])
print('2 ', mat2[2,])
print('...', '...')
print('...', '...')
print('19 ', mat2[19,])
</code>
<code>
print(list(map('{:.1f}'.format, mat[0,])))
print(list(map('{:.1f}'.format, mat[1,])))
print(list(map('{:.1f}'.format, mat[2,])))
print(list(map('{:.1f}'.format, mat[3,])))
</code>
<code>
print(list(map('{:.3f}'.format, [i/10 for i in mat2[0,]])))
print(list(map('{:.3f}'.format, [i/10 for i in mat2[1,]])))
print(list(map('{:.3f}'.format, [i/10 for i in mat2[2,]])))
print('...', '...')
print('...', '...')
print(list(map('{:.3f}'.format, [i/10 for i in mat2[19,]])))
</code>
<code>
df = pd.DataFrame(np.array([
list(map('{:.2f}'.format, mat[0,])), list(map('{:.2f}'.format, mat[1,])),
list(map('{:.2f}'.format, mat[2,])), list(map('{:.2f}'.format, mat[3,])),
list(map('{:.2f}'.format, mat[4,])), list(map('{:.2f}'.format, mat[5,])),
list(map('{:.2f}'.format, mat[6,])), list(map('{:.2f}'.format, mat[7,])),
list(map('{:.2f}'.format, mat[8,])), list(map('{:.2f}'.format, mat[9,])),
list(map('{:.2f}'.format, mat[10,])), list(map('{:.2f}'.format, mat[11,])),
list(map('{:.2f}'.format, mat[12,])), list(map('{:.2f}'.format, mat[13,])),
list(map('{:.2f}'.format, mat[14,])), list(map('{:.2f}'.format, mat[15,])),
list(map('{:.2f}'.format, mat[16,])), list(map('{:.2f}'.format, mat[17,])),
list(map('{:.2f}'.format, mat[18,])), list(map('{:.2f}'.format, mat[19,]))
]),
columns=['ca_skew', 'ga_skew', 'ua_skew','ug_skew', 'uc_skew', 'cg_skew'],
index=virus_names[:])
df
</code>
<code>
df2 = pd.DataFrame(np.array([
list(map('{:.2f}'.format, [i/10 for i in mat2[0,]])),
list(map('{:.2f}'.format, [i/10 for i in mat2[1,]])),
list(map('{:.2f}'.format, [i/10 for i in mat2[2,]])),
list(map('{:.2f}'.format, [i/10 for i in mat2[3,]])),
list(map('{:.2f}'.format, [i/10 for i in mat2[4,]])),
list(map('{:.2f}'.format, [i/10 for i in mat2[5,]])),
list(map('{:.2f}'.format, [i/10 for i in mat2[6,]])),
list(map('{:.2f}'.format, [i/10 for i in mat2[7,]])),
list(map('{:.2f}'.format, [i/10 for i in mat2[8,]])),
list(map('{:.2f}'.format, [i/10 for i in mat2[9,]])),
list(map('{:.2f}'.format, [i/10 for i in mat2[10,]])),
list(map('{:.2f}'.format, [i/10 for i in mat2[11,]])),
list(map('{:.2f}'.format, [i/10 for i in mat2[12,]])),
list(map('{:.2f}'.format, [i/10 for i in mat2[13,]])),
list(map('{:.2f}'.format, [i/10 for i in mat2[14,]])),
list(map('{:.2f}'.format, [i/10 for i in mat2[15,]])),
list(map('{:.2f}'.format, [i/10 for i in mat2[16,]])),
list(map('{:.2f}'.format, [i/10 for i in mat2[17,]])),
list(map('{:.2f}'.format, [i/10 for i in mat2[18,]])),
list(map('{:.2f}'.format, [i/10 for i in mat2[19,]]))
]),
columns=['ca_skew', 'ga_skew', 'ua_skew','ug_skew', 'uc_skew', 'cg_skew'],
index=virus_names[:])
df2
</code>
# Task 4: Hierarchical Clustering - Metric
> Wikipedia
>> In [Cartesian coordinates](https://en.wikipedia.org/wiki/Cartesian_coordinate_system) if $p = (p_1, p_2,\cdots, p_n)$ and $q = (q_1, q_2,\cdots, q_n)$ are two points in [Euclidean n-space](https://en.wikipedia.org/wiki/Euclidean_space), then the **Euclidean distance** (d) from **p** to **q**, or from **q** to **p**, is given by the [Pythagorean formula](https://en.wikipedia.org/wiki/Pythagorean_theorem):
>>$d(p, q) = d(q, p)= \sqrt{\sum_{i=1}^n (q_i - p_i)^2}$
>>$=\sqrt{(q_1-p_1)^2+(q_2-p_2)^2+\cdots+(q_n-p_n)^2}$
$d(\text{Zika AY632535}, \text{Zika MN101548}) =
\sqrt{(1381.19-1352.11)^2+(-283.89-(-295.92))^2+(1147.27-1152.43)^2+
(1432.18-1444.86)^2+(-237.85-(-202.89))^2+(1663.35-1644.59)^2}\approx 52.46$
>```python
import math
dist = 0
for p, q in zip(mat[18,], mat[19,]):
    dist += math.pow((p - q), 2)
dist = math.sqrt(dist)
print(dist)  # 52.45662363913553
>```
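The same value can be obtained in one line with NumPy; this is just a cross-check of the loop above.

```python
import numpy as np
# Euclidean distance between rows 18 and 19 of `mat` (the two Zika strains)
print(np.linalg.norm(mat[18, :] - mat[19, :]))
```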
```python
from scipy.cluster.hierarchy import linkage
linkage(mat)
```
<code>
linkage_matrix = linkage(mat,
method='single',
metric='euclidean',
optimal_ordering=False)
print(linkage_matrix.shape)
print(linkage_matrix[0:3])
print('...', '...')
print(linkage_matrix[18])
</code>
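Each row of a SciPy linkage matrix describes one merge in the form `[cluster_i, cluster_j, distance, n_observations]`; indices below the number of original observations (20 here) refer to single strains, while larger indices refer to clusters formed in earlier rows. A small sketch to unpack the first merge:

```python
# Unpack the first merge recorded in the linkage matrix
i, j, dist, n_obs = linkage_matrix[0]
print(f'merged clusters {int(i)} and {int(j)} at distance {dist:.2f}, '
      f'forming a cluster of {int(n_obs)} observations')
```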
<code>
y = 0
for x in range(len(linkage_matrix)):
y+=1
print(len(linkage_matrix)+y, linkage_matrix[x])
</code>
<code>
df.iloc[[2, 1,5], :]
</code>
# Task 5: Hierarchical Clustering - Ordering & Methods
## 5.1: Optimal Ordering
> **Higgs and Attwood (2005)**:
>>*It is often said that trees (the phylogenetic trees) are like hanging mobiles. You can imagine suspending them from the root and allowing the horizontal lines to swing around.*
<table><tr><td><img src='images/p_tree_1.png'></td><td><img src='images/p_tree_2.png'></td></tr></table>
Dendrogram (a) | | Dendrogram (b) |
---- | |:-------- |
**Vertical lines show the distances** | | **Vertical lines show the distances** |
Rooted trees. Tree (a) can be converted to tree (b) by swinging around the horizontal branches like mobiles. Hence (a) and
(b) are equivalent to one another. (*modified from Higgs and Attwood - 2005*)
**Paul G. Higgs and Teresa K. Attwood** (2005). Bioinformatics and Molecular Evolution. Wiley-Blackwell. Print ISBN: 9781405106832 | Online ISBN: 9781118697078 | [DOI:10.1002/9781118697078](https://onlinelibrary.wiley.com/doi/book/10.1002/9781118697078)
<code>
linkage_matrix = linkage(mat,
method='single',
metric='euclidean',
optimal_ordering=True)
print(linkage_matrix.shape)
for x in range(len(linkage_matrix)):
print(linkage_matrix[x])
</code>
## 5.2: Clustering Methods
```Python
methods = ('single', 'complete', 'average', 'weighted', 'centroid', 'median', 'ward')
```
- **single** $^a$
  - $d(u,v) = \min(dist(u[i], v[j]))$
- [Nearest Point Algorithm](https://en.wikipedia.org/wiki/Nearest_neighbor_search)
- **complete** $^a$
  - $d(u,v) = \max(dist(u[i], v[j]))$
- [Complete-linkage clustering](https://en.wikipedia.org/wiki/Complete-linkage_clustering) known also as Farthest Point Algorithm or Voor Hees Algorithm
- **average** $^b$
- $d(u,v) = \sum_{ij}\frac{d(u[i], v[j])}{(|u| * |v|)}$
- [UPGMA](https://en.wikipedia.org/wiki/UPGMA) **U**nweighted **P**air **G**roup **M**ethod with **A**rithmetic Mean algorithm.
- **weighted** $^c$
- $d(u,v) = \frac{dist(s,v) + dist(t,v)}{2}$
- [WPGMA](https://en.wikipedia.org/wiki/WPGMA) (**W**eighted **P**air **G**roup **M**ethod with **A**rithmetic Mean)
- **centroid** $^d$
  - $dist(s,t) = \|c_s - c_t\|_2$
- [UPGMC](https://en.wikipedia.org/wiki/Hierarchical_clustering) (**U**nweighted **P**air **G**roup **M**ethod with **C**entroid linkage)
- **median**
- $d{(i\cup j),k} = \frac{d_{i,k} + d_{j,k}}{2}$
  - [WPGMC](https://en.wikipedia.org/wiki/Hierarchical_clustering) (**W**eighted **P**air **G**roup **M**ethod with **C**entroid linkage)
- **ward** $^e$
- $d(u,v) = \sqrt{\frac{|v| + |s|}{T}d(v,s)^2 + \frac{|v| + |t|}{T}d(v,t)^2 - \frac{|v|}{T}d(s,t)^2}$
- [Ward's minimum variance method](https://en.wikipedia.org/wiki/Ward%27s_method)
$^a)$ $d(u,v)$ is the Euclidean distance between clusters $u$ and $v$.
$^a)$ $u[i], v[j]$ are all points $i$ in cluster $u$ and $j$ in cluster $v$.
$^b)$ $|u|$ and $|v|$ are the cardinalities of clusters $u$ and $v$, respectively.
$^c)$ $u, s, v, t$: cluster $u$ was formed by merging clusters $s$ and $t$, and $v$ is a remaining cluster in the forest.
$^d)$ $c_s$ and $c_t$ are the centroids of clusters $s$ and $t$.
$^e)$ $T = |v| + |s| + |t|$
[Centroid](https://en.wikipedia.org/wiki/Centroid) or geometric center.
| | Arithmetic Mean | Geometric Center |
| :---- | :-------- |:-------- |
| **Unweighted Pair Group** | average | centroid |
| **Weighted Pair Group** | weighted | median |
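A quick way to get a feel for how the method choice changes the tree is to compare the height of the final merge (the distance in the last linkage row) across methods. This is only an illustration; the cophenetic correlation in Section 5.3 below is the more principled comparison.

```python
# Height of the very last merge for each linkage method
for m in ('single', 'complete', 'average', 'weighted', 'centroid', 'median', 'ward'):
    top_height = linkage(mat, method=m, metric='euclidean')[-1, 2]
    print(f'{m:>9}: final merge distance = {top_height:.2f}')
```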
<code>
df.iloc[[6,7,18,19], :]
</code>
[Time Complexity](http://en.wikipedia.org/wiki/Time_complexity) (Big O Notation)
- $O(n^2)$
- single
- complete
- average
- weighted
- ward
- $O(n^3)$
- centroid
- median
## 5.3: Cophenetic correlation
You can use the [cophenet](https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.cophenet.html) function to calculate the cophenetic coefficient. The [cophenetic correlation](https://en.wikipedia.org/wiki/Cophenetic_correlation) coefficient is a measure of how faithfully a dendrogram preserves the pairwise distances between the original unmodeled data points.
<code>
from scipy.cluster.hierarchy import cophenet
from scipy.spatial.distance import pdist
c, coph_dists = cophenet(linkage_matrix, pdist(mat, metric='euclidean'))
c
</code>
<code>
#coph_dists
#pdist(mat, metric='euclidean')
</code>
<code>
import operator
cophenetic_results = {}
methods = ('single', 'complete', 'average', 'weighted', 'centroid', 'median', 'ward')
for m in methods:
linkage_matrix = linkage(mat, method = m, metric='euclidean', optimal_ordering=True)
#for x in range(len(linkage_matrix)):
# print(linkage_matrix[x])
c, coph_dists = cophenet(linkage_matrix, pdist(mat))
cophenetic_results[m] = c
#print(c)
#print('======================================================')
print(max(cophenetic_results.items(), key=operator.itemgetter(1))[0])
print(cophenetic_results)
</code>
# Task 6: Dendrogram
<code>
linkage_matrix = linkage(mat, method = 'single', metric='euclidean', optimal_ordering=True)
dendrogram(linkage_matrix)
plt.show()
plt.close()
</code>
<code>
linkage_matrix = linkage(mat, method = 'single', metric='euclidean', optimal_ordering=False)
dendrogram(linkage_matrix)
plt.show()
plt.close()
</code>
<code>
orientations = ('top', 'bottom', 'left', 'right')
linkage_matrix = linkage(mat, method = 'single', metric='euclidean', optimal_ordering=False)
for orien in orientations:
dendrogram(linkage_matrix, orientation = orien)
plt.show()
plt.close()
</code>
<code>
linkage_matrix = linkage(mat, method = 'single', metric='euclidean', optimal_ordering=True)
dendrogram(linkage_matrix, labels=virus_names[:])
plt.show()
plt.close()
</code>
<code>
linkage_matrix = linkage(mat, method = 'single', metric='euclidean', optimal_ordering=True)
dendrogram(linkage_matrix, labels=virus_names[:], leaf_rotation=90, leaf_font_size=10)
plt.show()
plt.close()
</code>
<code>
plt.figure(figsize=(10, 12))
dendrogram(linkage_matrix, labels=virus_names[:], orientation="left", color_threshold=6230)
plt.show()
plt.close()
</code>
<code>
0.7*max(linkage_matrix[:,2])
</code>
**Line style and color options**
```python
linestyles = ('solid', 'dashed', 'dashdot', 'dotted')
colors_abbreviation = ('b', 'g', 'r', 'c', 'm', 'y', 'k', 'w')
colors_names = ('blue', 'orange', 'green', 'red', 'purple', 'brown', 'pink', 'gray', 'olive', 'cyan')
colors_hexa = ('#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd', '#8c564b', '#e377c2', '#7f7f7f', '#bcbd22', '#17becf')
```
*You can read more about matplotlib colors in the [documentation](https://matplotlib.org/api/colors_api.html#matplotlib.colors.Colormap).*
<code>
plt.figure(figsize=(10, 12))
plt.title('20 Viruses Hierarchical \n Clustering Dendrogram', fontsize=16)
plt.xlabel('Euclidean Distance', fontsize=14)
plt.ylabel('The length of the cluster in this direction has no meaning whatsoever!', fontsize=14)
dendrogram(linkage_matrix, labels=virus_names[:], orientation="left", color_threshold=0.2*max(linkage_matrix[:,2]))
plt.vlines(x=2000, ymin=1.0, ymax=200., linestyles='dashdot', color='r', lw=2)
plt.vlines(x=[4000,2000, 1000], ymin=[0, 2, 8], ymax=198., linestyles='dashdot', color='#7f7f7f')
#plt.hlines(y=2000, xmin=1.0, xmax=200., linestyles='dashdot', color='r')
#plt.hlines(y=[4000,2000, 1000], xmin=[0, 2, 8], xmax=198., linestyles='dashdot', label='Multiple Lines', color='#7f7f7f')
plt.text(2000, 190, '2000', size=12, ha='center', va='center')
plt.show()
plt.close()
</code>
# Task 7: Dendrogram
- Riboviria
- Coronaviridae family,
- Corona
- SARS
- Flaviviridae family,
- Dengue
- Zika
- West Nile
- Picornaviridae
- Enterovirus
- Retroviridae
- HIV
| | Virus Count | Strain Count |
| :---- | :--------: |:--------: |
| *Coronaviridae* | **2** | **10** |
| *Flaviviridae* | **3** | **5** |
| *Picornaviridae* | **1** | **5** |
| *Retroviridae* | **1** | **1** |
| **Total** | **7** | **20** |
<code>
print(cophenetic_results)
</code>
<code>
print("{:.3f}".format(cophenetic_results['single'] - cophenetic_results['ward']))
</code>
<code>
linkage_matrix = linkage(mat, method = 'single', metric='euclidean', optimal_ordering=False)
plt.figure(figsize=(10, 12))
plt.title('Hierarchical Clustering \n Single Method \n', fontsize=16)
dendrogram(linkage_matrix, labels=virus_names[:], orientation="left", color_threshold=0.12*max(linkage_matrix[:,2]))
plt.vlines(x=[2000, 4000, 9000], ymin=[0, 0, 0], ymax=200., linestyles='dashdot', color='#7f7f7f')
plt.text(2000, 190, '2000', size=12, ha='center', va='center')
plt.text(4000, 190, '4000', size=12, ha='center', va='center')
plt.text(9000, 190, '9000', size=12, ha='center', va='center')
plt.show()
</code>
<code>
linkage_matrix = linkage(mat, method = 'complete', metric='euclidean', optimal_ordering=False)
plt.figure(figsize=(10, 12))
plt.title('Hierarchical Clustering \n Complete Method \n', fontsize=16)
dendrogram(linkage_matrix, labels=virus_names[:], orientation="left", color_threshold=0.12*max(linkage_matrix[:,2]))
plt.vlines(x=[5000, 11000, 20000], ymin=[0, 0, 0], ymax=200., linestyles='dashdot', color='#7f7f7f')
plt.text(5000, 190, '5000', size=12, ha='center', va='center')
plt.text(11000, 190, '11000', size=12, ha='center', va='center')
plt.text(20000, 190, '20000', size=12, ha='center', va='center')
plt.show()
</code>
<code>
linkage_matrix = linkage(mat, method = 'ward', metric='euclidean', optimal_ordering=False)
plt.figure(figsize=(10, 12))
plt.title('Hierarchical Clustering \n Ward\'s Method \n', fontsize=16)
dendrogram(linkage_matrix, labels=virus_names[:], orientation="left", color_threshold=0.12*max(linkage_matrix[:,2]))
plt.vlines(x=[5000, 13000, 39000], ymin=[0, 0, 0], ymax=200., linestyles='dashdot', color='#7f7f7f')
plt.text(5000, 190, '5000', size=12, ha='center', va='center')
plt.text(13000, 190, '13000', size=12, ha='center', va='center')
plt.text(39000, 190, '39000', size=12, ha='center', va='center')
plt.show()
</code>
<code>
linkage_matrix = linkage(mat[:,[0,3]], method = 'single', metric='euclidean', optimal_ordering=False)
plt.figure(figsize=(10, 12))
plt.title('Hierarchical Clustering \n Single Method \n ', fontsize=16)
dendrogram(linkage_matrix, labels=virus_names[:], orientation="left", color_threshold=0.12*max(linkage_matrix[:,2]))
plt.show()
</code>
<code>
methods = ('single', 'complete', 'average', 'weighted', 'centroid', 'median', 'ward')
for m in methods:
linkage_matrix = linkage(mat, method = m, metric='euclidean', optimal_ordering=False)
y = 0
plt.figure(figsize=(8, 9))
plt.title('Hierarchical Clustering Dendrogram using ' + str(m) + ' Method', fontsize=16)
dendrogram(linkage_matrix, labels=virus_names[:], orientation="left", color_threshold=0.2*max(linkage_matrix[:,2]))
plt.show()
</code>
<code>
methods = ('single', 'complete', 'average', 'weighted', 'centroid', 'median', 'ward')
for m in methods:
linkage_matrix = linkage(mat2, method = m, metric='euclidean', optimal_ordering=False)
y = 0
plt.figure(figsize=(8, 9))
plt.title('Hierarchical Clustering Dendrogram using ' + str(m) + ' Method', fontsize=16)
dendrogram(linkage_matrix, labels=virus_names[:], orientation="left", color_threshold=0.2*max(linkage_matrix[:,2]))
plt.show()
</code>
<table><tr><td><img src='images/dendogram_H.png'></td><td><img src='images/dendogram.png'></td></tr></table>
|
{
"filename": "Hierarchical Clustering using Euclidean Distance.ipynb",
"repository": "galkinc/Hierarchical-Clustering",
"query": "transformed_from_existing",
"size": 82890,
"sha": ""
}
|
# Chi_Cuadrada_1.ipynb
Repository: OsmarVar/Unidad-1-Simulacion
<a href="https://colab.research.google.com/github/OsmarVar/Unidad-1-Simulacion/blob/main/Chi_Cuadrada.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<code>
import numpy as np
from scipy.stats import chi2
def chi_square_test(data, num_intervals):
observed_frequencies, _ = np.histogram(data, bins=num_intervals)
expected_frequency = len(data) / num_intervals
chi_square_statistic = np.sum((observed_frequencies - expected_frequency)**2 / expected_frequency)
degrees_of_freedom = num_intervals - 1
    critical_value = chi2.ppf(0.95, degrees_of_freedom) # 95% confidence level
    if chi_square_statistic <= critical_value:
        return True, chi_square_statistic, critical_value
    else:
        return False, chi_square_statistic, critical_value
# Generate a sequence of 1000 random numbers between 0 and 1
random_numbers = np.random.uniform(0, 1, 1000)
# Run the Chi-Square test with 10 intervals
is_uniform, chi_square_stat, crit_val = chi_square_test(random_numbers, 10)
if is_uniform:
    print("The sequence of random numbers follows a uniform distribution.")
else:
    print("The sequence of random numbers does not follow a uniform distribution.")
print("Chi-Square statistic:", chi_square_stat)
print("Critical value:", crit_val)
</code>
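As a cross-check, `scipy.stats.chisquare` computes the same statistic together with a p-value directly from the observed frequencies. The sketch below assumes the `random_numbers` array from the cell above; with no expected frequencies given, `chisquare` assumes a uniform distribution, which matches the test implemented here.

```python
from scipy.stats import chisquare
import numpy as np

observed, _ = np.histogram(random_numbers, bins=10)
stat, p_value = chisquare(observed)  # default expectation: equal counts per bin
print("Chi-Square statistic:", stat)
print("p-value:", p_value)
```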
|
{
"filename": "Chi_Cuadrada_1.ipynb",
"repository": "OsmarVar/Unidad-1-Simulacion",
"query": "transformed_from_existing",
"size": 2893,
"sha": ""
}
|
# Website_GetFinalGCFData_1.ipynb
Repository: gnick18/FungalICS
## Item 1: The list of species in a given GCF
<code>
import os
import pandas as pd
import pdb
import json
gcfTable_rootDir = r'/Users/gnickles/Desktop/FungalICS_Website/Data/GCFTables'
speciesInGCFs = {}
#looping over each GCF table's summary tsv
for file in os.listdir(gcfTable_rootDir):
filePath = os.path.join(gcfTable_rootDir, file)
if not file.startswith(".") and file.endswith('.tsv'):
gcfDF = pd.read_csv(filePath, sep='\t')
#getting the gcf name
if "Refined" in file:
gcfName = file.split("_Re")[0]
else:
gcfName = file.split('.t')[0]
#
gcfDF['Species'] = gcfDF['Cluster_Name'].str.split("_", n=1).str[1]
speciesList = gcfDF['Species'].unique().tolist()
speciesInGCFs[gcfName] = speciesList
#saving this as a json file
output_file = r'/Users/gnickles/Desktop/FungalICS_Website/Data/GCFSpecies.json'
with open(output_file, "w") as file:
json.dump(speciesInGCFs, file)
</code>
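To verify the export, the JSON file can be read back and queried for any GCF key; the snippet below only inspects the first key, whichever that happens to be.

```python
import json

with open(r'/Users/gnickles/Desktop/FungalICS_Website/Data/GCFSpecies.json') as infile:
    species_by_gcf = json.load(infile)

example_gcf = sorted(species_by_gcf)[0]  # pick an arbitrary GCF key for inspection
print(example_gcf, '->', len(species_by_gcf[example_gcf]), 'species')
```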
## Item 2: Adding in the extra plots for the refined GCFs
## Item 3: Adding in the consensus prediction information for the GCFs that have it
# Taxonomy for GCFs
<code>
import os
import pandas as pd
baseDir = r'/Users/gnickles/Desktop/FungalICS_Website/Data/GCFTables'
taxDF = pd.DataFrame(columns=["NCBI_Accession","Cluster_Name",'FinalGCFs','Species'])
for file in os.listdir(baseDir):
filePath = os.path.join(baseDir, file)
if os.path.isfile(filePath) and not file.startswith("."):
gcfDF = pd.read_csv(filePath, sep='\t')
gcfDF = gcfDF[['NCBI_Accession',"Cluster_Name",'FinalGCFs']]
gcfDF.drop_duplicates(inplace=True)
gcfDF['Species'] = gcfDF['Cluster_Name'].str.replace(r'^\d+_', '', regex=True)
taxDF = pd.concat([taxDF, gcfDF], ignore_index=True)
#getting the taxonomy information
taxonomyOverlay = pd.read_csv(r"/Volumes/T7/ICSProject/ConsolidatedResults_3300Genomes/Trees_Phylogenetics/ASTRAL_SpeciesTree/FinalAstralSpeciesTree/STree_TaxOverlay.tsv", sep='\t')
taxonomyOverlay.rename(columns={"ID":"Species"}, inplace=True)
taxDF = pd.merge(left=taxDF, right=taxonomyOverlay, how='left',on='Species')
taxDF.sort_values(by="Genus", inplace=True)
taxDF.to_csv('Data/Taxonomy_Final.tsv', sep='\t', index=False)
</code>
<code>
gcfs[gcfs['FinalGCFs'] == 160]
</code>
<code>
import pandas as pd
tax = pd.read_csv(r"/Users/gnickles/Desktop/FungalICS_Website/Data/AllICSPredictions.tsv", sep='\t')
gcfs = pd.read_csv(r"/Users/gnickles/Desktop/FungalICS_Website/Data/GCFs_Keys.tsv", sep='\t')
tax.drop(columns=['GCF', 'GCC','Protein','Domain','Domain_Accession', 'E-value_hmmer','NCBI_Accession'], inplace=True, axis=1)
tax.drop_duplicates(inplace=True)
tax.reset_index(inplace=True)
merged = pd.merge(tax, gcfs, how="left")
merged.drop(columns=['index'], inplace=True, axis=1)
merged.to_csv('Data/Taxonomy.tsv', sep='\t', index=False)
</code>
<code>
merged
</code>
<code>
gcfs
</code>
<code>
tax
</code>
|
{
"filename": "Website_GetFinalGCFData_1.ipynb",
"repository": "gnick18/FungalICS",
"query": "transformed_from_existing",
"size": 36236,
"sha": ""
}
|
# Evaluation.ipynb
Repository: sithvincent/Biomedical-Information-Retrieval
<code>
import helper.pubmed_search as pubs
from helper.pubmed_search import QueryExpansionManager
from sklearn.metrics.pairwise import cosine_similarity
import pandas as pd
import random
import time
import json
import math
import csv
import os
def precision_at_k(retrieved_docs, relevant_docs, k=10):
# k = min(k, len(retrieved_docs)) # Handle case where retrieved_docs < k
    k = len(relevant_docs)  # Evaluate at k = number of relevant documents for this query
relevant_in_top_k = [doc for doc in retrieved_docs[:k] if doc in relevant_docs]
return 100*len(relevant_in_top_k) / k if k > 0 else 0.0
def average_precision_at_k(retrieved_docs, relevant_docs, k=10):
# k = min(k, len(retrieved_docs)) # Handle case where retrieved_docs < k
    k = len(relevant_docs)  # Evaluate at k = number of relevant documents for this query
if k == 0:
return 0.0
num_relevant = 0
precision_sum = 0
if len(retrieved_docs) != 0:
for i in range(1, k + 1):
if retrieved_docs[i - 1] in relevant_docs:
num_relevant += 1
precision_sum += num_relevant / i
# Use the smaller of k or total number of relevant documents
# return 100*precision_sum / min(len(relevant_docs), k) if num_relevant > 0 else 0.0
return 100*precision_sum / k if num_relevant > 0 else 0.0
def dcg(retrieved_docs, relevant_docs, k):
dcg_value = 0.0
if len(retrieved_docs)!=0:
for i in range(k):
if retrieved_docs[i] in relevant_docs:
dcg_value += 1 / math.log2(i + 2) # i + 2 because of 0-based indexing
return dcg_value
def idcg(relevant_docs, k):
idcg_value = 0.0
# for i in range(min(len(relevant_docs), k)):
for i in range(k):
idcg_value += 1 / math.log2(i + 2)
return idcg_value
def ndcg_at_k(retrieved_docs, relevant_docs, k=10):
# k = min(k, len(retrieved_docs)) # Handle case where retrieved_docs < k
    k = len(relevant_docs)  # Evaluate at k = number of relevant documents for this query
if k == 0:
return 0.0
# def dcg(retrieved_docs, relevant_docs, k):
# dcg_value = 0.0
# for i in range(k):
# if retrieved_docs[i] in relevant_docs:
# dcg_value += 1 / math.log2(i + 2) # i + 2 because of 0-based indexing
# return dcg_value
# def idcg(relevant_docs, k):
# idcg_value = 0.0
# for i in range(min(len(relevant_docs), k)):
# idcg_value += 1 / math.log2(i + 2)
# return idcg_value
dcg_value = dcg(retrieved_docs, relevant_docs, k)
idcg_value = idcg(relevant_docs, k)
return 100*dcg_value / idcg_value if idcg_value > 0 else 0.0
def extract_subset_of_evaluation_data(evaluation_data, filename, fraction = 0.5, random_seed = 31):
random.seed(random_seed)
subset = random.sample(evaluation_data, int(len(evaluation_data)*fraction))
with open (filename, 'w') as outfile:
json.dump(subset, outfile)
print('Extracted', len(subset), 'documents out of', len(evaluation_data))
def output_full_metrics_to_file(retrieved_ids_list, ground_truth_list, name, file_name):
# precision at k values
metric_list = []
for i in range (len(retrieved_ids_list)):
metric_list.append(precision_at_k(retrieved_ids_list[i], ground_truth_list[i]))
mean_precision_at_k = sum(metric_list)/len(metric_list)
# average precision at k values
metric_list = []
for i in range (len(retrieved_ids_list)):
metric_list.append(average_precision_at_k(retrieved_ids_list[i], ground_truth_list[i]))
mean_average_precision_at_k = sum(metric_list)/len(metric_list)
# ndcg at k values
metric_list = []
for i in range (len(retrieved_ids_list)):
metric_list.append(ndcg_at_k(retrieved_ids_list[i], ground_truth_list[i]))
mean_ndcg_at_k = sum(metric_list)/len(metric_list)
output_list = [name, mean_precision_at_k, mean_average_precision_at_k, mean_ndcg_at_k]
# Check if file exists
file_exists = os.path.isfile(file_name)
# Open the file in append mode ('a') and create it if it doesn't exist
with open(file_name, mode='a', newline='') as file:
writer = csv.writer(file)
# Write the header if the file is new
if not file_exists:
# Customize the header if needed, here assuming just column numbers
header = [f"Column {i+1}" for i in range(len(output_list))]
writer.writerow(header)
# Write the data list
writer.writerow(output_list)
print(f"List successfully written to {file_name}")
def length_check(retrieved_ids_list, ground_truth_list):
if len(retrieved_ids_list)!=len(ground_truth_list):
print('Different length!')
inconsistent_indexes = 0
for i in range (len(retrieved_ids_list)):
retrieved_ids_length = len(retrieved_ids_list[i])
ground_truth_length = len(ground_truth_list[i])
if retrieved_ids_length!=ground_truth_length:
print('Inconsistency at index:', i, 'where retrieved ids have length:', retrieved_ids_length, 'and ground truth has length', ground_truth_length)
inconsistent_indexes +=1
return inconsistent_indexes
</code>
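A quick toy run of the three metrics on made-up document IDs helps confirm that they behave as expected before launching the full evaluation (three of the four relevant documents are retrieved here):

```python
# Made-up example: 3 of the 4 relevant documents appear in the retrieved list
retrieved = ['d1', 'd9', 'd2', 'd3', 'd7']
relevant = ['d1', 'd2', 'd3', 'd4']

print('P@k   :', precision_at_k(retrieved, relevant))
print('AP@k  :', average_precision_at_k(retrieved, relevant))
print('nDCG@k:', ndcg_at_k(retrieved, relevant))
```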
Extract a subset (25%) of the full test-set data and store it in a JSON file. This has already been done, so the code below is commented out.
<code>
# with open ('evaluation/BioASQ-training11b/training11b.json') as training_file:
# full_evaluation_data = json.load(training_file)
# extract_subset_of_evaluation_data(full_evaluation_data['questions'], 'evaluation/testing_script.json', 0.002)
</code>
# 2. Evaluation For Initial Retrieval
This part measures the accuracy of the results returned from the initial query without any query expansion. It covers:
1. Testing immediate results from a PubMed search.
2. Testing the results after reranking by an embedding model that compares article content (title and abstract) with the query.
Extract the subset evaluation data and run evaluation on it.
<code>
# Method to rank relevance of documents
def get_initial_retrieved_and_ground_truth_docs(evaluation_set, remove_stop_words = False, articles_to_retrieve = 20, qe_manager = None):
retrieved_ids_list = []
ground_truth_list = []
# Run it through the system:
for idx, entry in enumerate(evaluation_set):
# # Include this for diagnostics only, otherwise comment it out
# if idx > 0:
# continue
# Extract out the query
query = entry['body']
        # When running the evaluation, the connection sometimes drops, so we keep retrying until we connect
runs = 0
while True:
if qe_manager is None:
time.sleep(1) # rest for 1 second so as not to overwhelm PubMed's API if there is no re-ranking
# query the question through the database and extract the relevant documents
# try:
print('Query number', idx, 'is: ', query)
if qe_manager is not None:
query_embedding = qe_manager.embed_mesh_headings_preloaded_model(query)
query_response = pubs.get_query_response(query, remove_stop_words=remove_stop_words, articles_to_retrieve=articles_to_retrieve)
ground_truth = [x.split('/')[-1] for x in entry['documents']]
ground_truth_length = len (ground_truth)
ground_truth_list.append(ground_truth)
# Checks if there is idlist in the results, then append it to the list
# If not (usually because of pubmed search errors or the preprocessing removed all words from the query), append an empty list
if 'idlist' in query_response['esearchresult']:
retrieved_ids = query_response['esearchresult']['idlist']
if retrieved_ids == []:
retrieved_ids_list.append([])
break
# If we decide to rank the retrieval before returning the list
if qe_manager is not None:
ranking_list = []
retrieved_article_details = pubs.get_article_details_from_id(retrieved_ids)
for key, value in retrieved_article_details.items():
title = value.get('Title') or ""
abstract = value.get('Abstract') or ""
article_title_abstract = title + abstract
article_title_abstract_embeddings = qe_manager.embed_mesh_headings_preloaded_model(article_title_abstract)
similarity = cosine_similarity(query_embedding, article_title_abstract_embeddings)[0][0]
ranking_list.append({'Article': key, 'Similarity': similarity})
# If the ranking list is shorter than ground truth, need to pad it
if len(ranking_list) < ground_truth_length:
entries_to_add = ground_truth_length - len(ranking_list)
for i in range(entries_to_add):
ranking_list.append({'Article': 'None', 'Similarity': 0})
ranking_list_df = pd.DataFrame(ranking_list).sort_values(by=['Similarity'], ascending=False)
ranking_list_df.to_csv('evaluation/simple_retrieval_articles.csv')
# If the ranking list is longer than ground truth, need to trim it
retrieved_ids_list.append(ranking_list_df.head(ground_truth_length)['Article'].to_list())
else:
if len(retrieved_ids) < ground_truth_length:
entries_to_add = ground_truth_length - len(retrieved_ids)
for i in range(entries_to_add):
retrieved_ids.append('')
else: # for the case where the lengths are equal or ground truth is shorter
retrieved_ids = retrieved_ids[0:ground_truth_length]
retrieved_ids_list.append(retrieved_ids)
else:
retrieved_ids_list.append([])
break # once information is extracted, stop the rerun
            # # This is just to pre-empt anything that could go wrong above during long evaluation runs.
# except Exception as e:
# print('Error occured when querying pubmed:', e)
# runs +=1
# if runs > 3: # after 3 rounds of rerun due to errors, just move on
# print('This entry is not counted:', entry['body'])
# break
return retrieved_ids_list, ground_truth_list
</code>
### 2.1 Immediate Results from PubMed Entrez Search
<code>
with open ('evaluation/small subset.json') as infile:
evaluation = json.load(infile)
# pure retrieval on a fixed dataset (takes 31.7s compared to retrieval + p@k ranking at 33.3s)
retrieved_ids_list, ground_truth_list = get_initial_retrieved_and_ground_truth_docs(evaluation, remove_stop_words=False)
output_full_metrics_to_file(retrieved_ids_list, ground_truth_list, 'baseline (initial retrieval, 10 articles retrieved, no reranking, 233 documents) trimming', 'evaluation/evaluation_metrics.csv')
</code>
### 2.2 Reranking of Initial Results from PubMed Entrez Search
In this section, we will first retrieve the documents, embed their contents and the query, do a reranking of their order using cosine similarity between the embedded query and embedded document content, then output the re-ranked list and ground truth list.
You can select whichever model you want to evaluate the reranking, but if you choose 'openai', please pass your api key as argument (api_key = 'your api key') into QueryExpansionManager(model_name, 'helper/descriptors.json').
Note that there is no query expansion or suggestion at this stage.
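As a minimal sketch of the reranking idea, with made-up three-dimensional vectors standing in for the real query and article embeddings, documents are simply sorted by cosine similarity to the query vector:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

query_vec = np.array([[0.9, 0.1, 0.0]])          # toy query embedding
doc_vecs = np.array([[0.8, 0.2, 0.1],            # toy embedding of doc 'A'
                     [0.1, 0.9, 0.3],            # toy embedding of doc 'B'
                     [0.7, 0.0, 0.2]])           # toy embedding of doc 'C'
doc_ids = ['A', 'B', 'C']

sims = cosine_similarity(query_vec, doc_vecs)[0]
reranked = [doc for _, doc in sorted(zip(sims, doc_ids), reverse=True)]
print(reranked)  # document IDs ordered by decreasing similarity to the query
```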
<code>
from helper.config import api_key
# model_name = "sentence-transformers/all-mpnet-base-v2"
# model_name = "w601sxs/b1ade-embed"
# model_name = "dmis-lab/biobert-v1.1"
model_name = 'openai'
blade_embed_qe_manager = QueryExpansionManager(model_name, 'helper/descriptors.json', api_key=api_key)
with open ('evaluation/small subset.json') as infile:
evaluation = json.load(infile)
retrieved_ids_list, ground_truth_list = get_initial_retrieved_and_ground_truth_docs(evaluation, remove_stop_words=False, articles_to_retrieve=10, qe_manager=blade_embed_qe_manager)
output_full_metrics_to_file(retrieved_ids_list, ground_truth_list, 'initial retrieval with Blade reranking, 10 articles, trimming', 'evaluation/evaluation_metrics.csv')
</code>
# 3. Evaluation for Query Expansion and Suggestion
<code>
import pandas as pd
import helper.pubmed_search as pubs
def get_reretrieval_and_ground_truth_docs(evaluation_set, qe_manager):
retrieved_ids_list = []
ground_truth_list = []
# Run it through the system:
for idx, entry in enumerate(evaluation_set):
# # for diagnostics only
# if idx > 0:
# continue
query = entry['body']
# When running the evaluation, there is a tendency for the internet to break so we need to continuously try to connnect
# rerun = True
runs = 0
while True:
# query the question through the database and extract the relevant documents
try:
print('Query number', idx, 'is: ', query)
ground_truth = [x.split('/')[-1] for x in entry['documents']]
ground_truth_length = len(ground_truth)
# Get the initial round of documents
heading_entries, article_entries = qe_manager.get_entries_from_query(query)
title_abstract_threshold = 20
heading_threshold = 40
print('Initial article length:', len(article_entries))
# Deal with articles
if article_entries!=[]:
article_entries_df = pd.DataFrame(article_entries).drop_duplicates(['name']).sort_values(by=['suitability'], ascending=False)
# print('Initial article length:', len(article_entries_df))
article_entries_df.to_csv('evaluation/article_entries.csv')
filtered_article_entries = article_entries_df[article_entries_df['suitability'] > title_abstract_threshold].to_dict(orient='records')
else:
retrieved_ids_list.append([])
print('Benchmark length is:', ground_truth_length)
ground_truth_list.append(ground_truth)
break # if no articles returned, move on
if heading_entries!=[]:
heading_entries_df = pd.DataFrame(heading_entries).drop_duplicates(['name']).sort_values(by=['suitability'], ascending=False)
heading_entries_df.to_csv('evaluation/heading_entries.csv')
filtered_heading_entries = heading_entries_df[heading_entries_df['suitability'] > heading_threshold].to_dict(orient='records')
requery = pubs.create_semantic_neighbourhood_query(filtered_heading_entries, filtered_article_entries)
print('Re-Query is: ', requery)
if requery != '':
_, requeried_article_entries = qe_manager.get_entries_from_query(requery, requery = True, articles_to_retrieve = 20)
else:
requeried_article_entries = []
print('Initial requery length is:', len(requeried_article_entries))
# And finally, get the list of retrieved_ids to be returned
if (requeried_article_entries != []) and (article_entries != []):
requeried_article_entries_df = pd.DataFrame(requeried_article_entries)
requeried_article_entries_df.to_csv('evaluation/requeried_article_entries.csv')
combined_article_entries_df = pd.concat([article_entries_df, requeried_article_entries_df]).drop_duplicates(['name']).sort_values(by=['suitability'], ascending=False)
combined_article_entries_df.to_csv('evaluation/combined_article_entries_df.csv')
retrieved_ids = combined_article_entries_df.head(ground_truth_length)['name'].to_list()
# The following code ensures length of retrieved ids are same as ground truth
if len(retrieved_ids) < ground_truth_length:
entries_to_add = ground_truth_length - len(retrieved_ids)
for i in range (entries_to_add):
retrieved_ids.append('')
elif article_entries != []: # if there are no articles in initially retrieved set, there won't be requeried articles as well
retrieved_ids = article_entries_df.head(ground_truth_length)['name'].to_list()
if len(retrieved_ids) < ground_truth_length:
entries_to_add = ground_truth_length - len(retrieved_ids)
for i in range (entries_to_add):
retrieved_ids.append('')
else: # this one should be handled earlier but we put another here just for safety
retrieved_ids = []
retrieved_ids_list.append(retrieved_ids)
ground_truth_list.append(ground_truth)
print('Length of retrieved list: ', len(retrieved_ids))
print('Ground truth length is:', ground_truth_length)
break # once information is extracted, stop the rerun
            # This is just to pre-empt anything that could go wrong above during long evaluation runs.
except Exception as e:
                print('Error occurred when querying PubMed:', e)
                runs += 1
                if runs > 3: # after 3 failed retries, just move on
print('This entry is not counted:', entry['body'])
break
return retrieved_ids_list, ground_truth_list
</code>
<code>
from helper.pubmed_search import QueryExpansionManager
import json
# model_name1 = "sentence-transformers/all-mpnet-base-v2"
# model_name = "w601sxs/b1ade-embed"
model_name = "dmis-lab/biobert-v1.1"
blade_embed_qe_manager = QueryExpansionManager(model_name, 'helper/descriptors.json')
# from helper.config import api_key
# blade_embed_qe_manager = QueryExpansionManager('openai', 'helper/descriptors.json', api_key)
# with open ('evaluation/subset.json') as infile:
# evaluation = json.load(infile)
with open ('evaluation/small subset.json') as infile:
evaluation = json.load(infile)
# pure retrieval on a fixed dataset (takes 31.7s compared to retrieval + p@k ranking at 33.3s)
retrieved_ids_list, ground_truth_list = get_reretrieval_and_ground_truth_docs(evaluation, blade_embed_qe_manager)
output_full_metrics_to_file(retrieved_ids_list, ground_truth_list, 'biobert (re retrieval, 10 articles retrieved, no reranking, 233 documents) trimming', 'evaluation/evaluation_metrics.csv')
</code>
<code>
len(ground_truth_list)
</code>
<code>
first_half = ground_truth_list[1:][0:187]
</code>
<code>
length_check(retrieved_ids_list, ground_truth_list[1:])
</code>
|
{
"filename": "Evaluation.ipynb",
"repository": "sithvincent/Biomedical-Information-Retrieval",
"query": "transformed_from_existing",
"size": 271348,
"sha": ""
}
|
# Assignment1_Assignment1_2023.ipynb
Repository: newtonharry/BINF7000
# SCIE3100/BINF7000 Assignment 1
## Probability, motif discovery, and ancestral sequence reconstruction
* **Due:** 2PM Friday 18/8/2023 (Discussion board contributions), 2PM Friday 1/9/2023 (Part A and B solutions)
* **Revision:** 2023 v1
* **Marks:** 20% of course
### Objectives
Below are a number of exercises that aim to guide you through issues and help you understand concepts related primarily to:
* Probability (introduced in week 1)
* Motif discovery (introduced in week 2)
* Graphical models and phylogenetics (introduced in week 4)
### Format
This assignment consists of two parts, each containing a number of problems of the two types discussed below. You are expected to work your way through this over a number of weeks, in practicals and in your own time, supported by topics covered in course materials and tutorials. You are also able to interact with tutors in practicals and with your fellow classmates.
The assignment has a series of “A” problems to which short responses are assessed. Some responses are fixed-format and automatically marked as either correct (pass) or not (fail); these can be attempted multiple times before a final submission. Other responses are evaluated by a marker, after the submission. Solutions to “A” problems are based on individual work (such as research and experimentation, involving programming, data processing/analysis and interpretation). **Note:** Automatic marking is used in some instances to ensure an appropriate answer is reached which will allow completion of subsequent questions.
The assignment also has more open-ended “B” problems for which short text responses are submitted and assessed. Solutions to “B” problems involve mostly *individual*, but also *collaborative* work based on exchange/discussion amongst members of the class. That said, your submission *must* report on your *own* work. So, you may *not* post your answers to the discussion board. You may *not* use somebody's text in your submission (see more on this below). The exchange and discussion are recorded via the online discussion forum. To acknowledge the exchange, posts that you
have made and used information from *must* be listed in the submission. (A field will be available.)
Note: the content to complete Part B has not been covered at the time of this assignment's release. Don't worry - we will be discussing the necessary concepts at length in upcoming lecture videos and tutorials!
### Marking
The assignment is worth 20% of the course, marked out of 20 marks.
Marks are awarded as per the schedule below.
#### “A” problems (10 marks total)
| Marks | Criteria |
| ----- | -------- |
| 0 – 10| In proportion to number of *correct* responses|
Marks are given for <span style="color:red">*fixed-format questions*</span>, <span style="color:green">*short text questions*</span>, and <span style="color:blue">*submitted portions of your code*</span>. Note that it is *not* sufficient to attempt a question to get a mark.
#### “B” problems (sum of two parts, capped at 10 marks in total)
| Marks | Criteria |
| ----- | -------- |
| 0 – 3 | Responses are inaccurate or absent |
| 4 – 5 | Responses are incomplete or unclear, contain some inaccuracies, lack evidence of research and experimentation |
| 6 – 7 | Responses are informative and reflect insights, contain limited inaccuracies, contain some evidence of research and experimentation |
| 8 – 10| Responses are accurate, informative, insightful, and contain clear evidence of research and experimentation|
| + | |
| 0 | Listed no contributions to discussion forum |
| 1 | Listed one or more well-informed questions posed in discussion forum |
| 2 | Listed one or more contributions to discussion forum, in response to questions |
As indicated, marks are given based on the quantity and quality of constructive interaction in class and in the online forum associated with the course: questions *and* answers. We learn from one another, and this should be acknowledged. To ensure the recording of forum activity, please refer to your posts in the submission, and posts that assisted you.
Formative feedback on submissions should be actively sought in the timetabled practical and tutorial sessions from course staff. Awarded marks will be published on Blackboard Grade Centre.
### Workflow and submission
You will submit your responses to [Coder Quiz](https://coderquiz.scmb.uq.edu.au/). You may submit as many times as you would like, and your last response before the due date will be graded. Please ensure that everything has submitted correctly to Coder Quiz by clicking on the 'View Submissions' link and verifying that all of your answers and code display correctly.
Coder Quiz has the ability to check correctness of provided answers to **some** <span style="color:red">*fixed-format questions*</span> prior to submission. This means that as you progress through the practical, you can check whether you are on the right track or not by clicking *Check Answers* button. Auto-marked questions are found in Part A only, and are indicated on the submission form.
*Some* questions may ask you to provide the code you used to reach your answer. In this case, you need to save the relevant section of code. Once you're ready to submit, condense all required code into a **single .py file**, and upload this file to Coder Quiz. **Please ensure that code is *only provided for questions where it is explicitly requested*, and that all questions are labelled appropriately. If the marker cannot determine which code corresponds to each question, marks will not be awarded for those problems.** Submitted code is for visual inspection of your attempt, so it does not need to run (i.e. you do not need to include import statements) but it should be appropriately commented and understandable by a tutor. Marking criteria will consider whether you demonstrate an understanding of the underlying concepts. A separate Coder Quiz form is provided for code submission.
Coder Quiz **does not** save or retrieve partial attempts, so we recommend storing your work and answers in a separate file; we strongly recommend you use this Jupyter notebook with additional markdown cells to save your ongoing work, then use Coder Quiz to validate and submit once you are complete.
Remember to use the discussion board if you are unsure about how to approach a question or you are not able to get the correct result. That said, your submission must be the result of *your* understanding; if your answer contains anything that you are unable to explain or reproduce without the help of somebody else, you *must* acknowledge this. There is a separate prompt at the end of your submission to list posts on the discussion board that you have made, and posts that you have benefitted from. (In Ed Discussion, "... / Copy Link" for each such post.)
### Resources
* Course materials are available via Blackboard; pay attention to the weekly Python notebooks, in particular
* Quick link to [How-to install binfpy](#howto_install_binfpy)
* The UQ Bioinformatics Python Guide (on Blackboard)
* The [Python 3 documentation]. For those unfamiliar with Python the [official tutorial] is recommended
* The Software Carpentry [novice Python lessons]
* [IPython's own notebook tutorial](http://nbviewer.jupyter.org/github/ipython/ipython/blob/3.x/examples/Notebook/Index.ipynb)
* [Markdown cheatsheet] (Markdown is the syntax you use to write formatted text into cells in a notebook.)
[Practical 1 ECP]: https://course-profiles.uq.edu.au/student_section_loader/section_5/108015#407455
[Python 3 documentation]: https://docs.python.org/3/
[official tutorial]: https://docs.python.org/3/tutorial/index.html
[novice python lessons]: http://swcarpentry.github.io/python-novice-inflammation/
[Markdown cheatsheet]: https://github.com/adam-p/markdown-here/wiki/Markdown-Here-Cheatsheet
## Part A: motif discovery in proteins
### Introduction
We will be using the custom `binfpy` Python modules for this assignment. Instructions for accessing these modules are found in [How-to install binfpy](#howto_install_binfpy). Data files required for this practical are found in the same folder as this notebook.
Import the following modules into Python:
<code>
import sequence
import prob
import sym
</code>
The classes you will need to complete this assignment are introduced in the weekly Python notebooks. Some simple examples of their use are provided below; however, you can refer back to these notebooks for more comprehensive demonstrations.
### Sequence data
We will be working with biological sequences (`sequence.Sequence`), which are defined by a series of characters from an alphabet (which in turn specifies all valid characters; `sym.Alphabet`). To understand sequences, it is informative to analyse their composition, and do so *probabilistically*. The Python module `prob.py` in `binfpy` has a number of useful classes, e.g. `prob.Distrib`.
<code>
myDNA = sym.Alphabet('ACGT') # Define an alphabet, but many are pre-defined in sym.py, e.g. sym.Protein_Alphabet
d = prob.Distrib(myDNA) # Create a probability distribution for our alphabet
d.observe('A') # Count a single observation of a character
d.observe('T', 2) # Count an observation of a character twice
print(d)
</code>
### Background distributions
Here, you will construct a “background distribution” of amino acids that is suited for scoring motifs in human protein sequences. First, consider the background used for constructing BLOSUM62 (a popular substitution matrix) by viewing `blosum62.distrib`. This file can be read using the method `readDistrib` in `prob.py`.
<code>
bg = prob.readDistrib('blosum62.distrib')
bg['S']
</code>
The file `up_bac.fa` contains a random sample of bacterial sequences from [Uniprot](https://www.uniprot.org/) for which protein-level evidence is available. Construct your own background distribution using the sequences in `up_bac.fa`.
Read through the `Distrib` class (in `prob.py`) to see how to construct a `Distrib` object and how to use its methods.
**<span style="color:red"> Problem A1: Report the probability of Serine in both the BLOSUM62 and bacterial background distributions. Enter to three decimal places. </span>**
Enter your code in the cell below. A few lines are provided to get you started.
#### Tips:
* Consider what a background distribution represents and how it could be generated
* An instance of the class `Distrib` can be inspected like so:
```python
print(bg) # refers to __str__ defined for Distrib in prob.py
```
* Once you've generated your bacterial background distribution, save it in a file named `bac_bg.distrib` using the `writeDistrib` method:
```python
bac_distrib.writeDistrib('bac_bg.distrib')
```
* A function `prob.writeDistribs` is also available, and will be useful later.
* Note: the BLOSUM62 background is provided to you in the file `blosum62.distrib`.
* If you are not already familiar with the `binfpy` library of Python code, please browse `sequence.py`, in particular the code to read FASTA files, and then access sequences' contents.
<code>
# Write code to solve Problem A1 here
bac_seqs = sequence.readFastaFile('up_bac.fa', sequence.Protein_Alphabet)
print(len(bac_seqs), 'sequences loaded')
# The following lines will create and print a clean-slate distribution
bac_distrib = prob.Distrib(sequence.Protein_Alphabet)
print(bac_distrib) # print out the distribution BEFORE any data have been looked-at
# Your code to construct bacterial background below
</code>
<code>
for seq in bac_seqs:
for pos in seq:
bac_distrib.observe(pos)
bac_distrib['S']
print(bac_distrib)
</code>
### Gibbs sampling
Two methods for motif discovery are discussed in week 2 content: Gibbs sampling and expectation maximisation. This assignment makes use of the former. A key distinction of Gibbs sampling is its *stochastic* nature, as opposed to expectation maximisation which is *deterministic*.
An overly simplistic example of Gibbs sampling is provided below. This is to ensure that you understand the concepts before we introduce some real data. For further clarification, refer to the lecture videos and relevant Python notebook.
Here, we imagine a sequence alphabet which corresponds to the standard English alphabet.
```python
my_alpha = sym.Alphabet('ABCDEFGHIJKLMNOPQRSTUVWXYZ')
```
The file `example_seqs.fa` contains 150 sequences of length 80. We are told that a *motif* of length 7 is expected to appear in each of these sequences. In this case, the motif is a word relevant to this assignment's context. Given the relatively small length and number of sequences, you could probably identify this word by eye with relative ease. However, let's see if we can instead recover it using Gibbs sampling.
For a more in-depth discussion of this approach in the context of biological sequence analysis, take a look at the publication which first described it: [Lawrence et al. 1993](http://dx.doi.org/10.1126/science.8211139)
```python
import gibbs
seqs = sequence.readFastaFile('example_seqs.fa', my_alpha)
W = ? # the width of the motif sought
g = gibbs.GibbsMotif(seqs, W)
q = g.discover(niter = 1000)
```
The above code can be found in a cell below. Extra print statements have been included to display the consensus sequence and the background distribution.
**<span style="color:blue">Problem A2: In the cell below, add comments *in your own words* explaining what the variables `seqs`, `g`, `q`, `p` and `a` represent (as an example, a comment has already been added to the variable `W`). Export this cell or copy and paste the commented code into a separate Python file and upload to Coder Quiz.</span>**
**Note: For an example on how to easily save a Jupyter notebook cell to a Python file, [click here](#export).**
Run Gibbs sampling several times using different window sizes (`W`), bearing in mind that we know the motif is 7 characters long. The size and number of iterations will influence the ability of the algorithm to discover anything significant. Compare the outcomes of different runs. You should do this by visually comparing motifs using [WebLogo], which takes an alignment like that printed by the example code. Determine approximately, on average, how many iterations are required for convergence. (Note: 'convergence' in the case of a stochastic algorithm does not mean that the log-likelihood score will not change between iterations. Rather, the score will gradually rise and then appear to 'level off' and fluctuate around a (possibly local) optimum). Additionally, examine the log-likelihood of the final model and alignment. Can you distinguish between runs where Gibbs sampling has found the global optimum, and those where it gets stuck in a local optimum?
The main motif discovery method is in `gibbs.discover`. By adjusting positions at which sequences are aligned, it essentially tries to maximise the (log) ratio of the foreground over background probabilities for observed sequences. It prints a sum of these log ratios. (The sum of $\log x_1, \log x_2, ..., \log x_n$ is equal to the logarithm of the product of $x_1, x_2, ..., x_n$, where $x_1, x_2, ..., x_n$ are the ratios of foreground to background probabilities.)
Naively your comparison could be based on the sums above but if you think about it, and as discussed in the paper by [Lawrence et al.](http://dx.doi.org/10.1126/science.8211139), they will inevitably be greater for longer motifs (at least as long as there is a modicum of conservation in included positions). So this score should not be relied on in isolation.
[WebLogo]: http://weblogo.threeplusone.com
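As a starting point for organising these comparisons, here is a minimal sketch (assuming the `gibbs`/`sequence` objects introduced above): it simply repeats discovery for a few candidate widths and prints each run's consensus, so that the printed log-likelihoods and the WebLogo plots of the resulting alignments can be compared.
```python
# Minimal sketch: repeat Gibbs discovery over a few candidate widths and runs.
# Assumes `seqs` has been loaded with sequence.readFastaFile as shown above;
# gibbs.discover prints its log-likelihood sum as it runs.
for W_try in (5, 7, 9):
    for run in range(3):
        g = gibbs.GibbsMotif(seqs, W_try)
        q = g.discover(niter=1000)
        consensus = ''.join(pos.getmax() for pos in q)
        print(f"W={W_try}, run {run}: consensus {consensus}")
```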
**<span style="color:red">Problem A3: Identify the 7-residue mystery sub-motif in these example sequences. </span>**
**<span style="color:green">Problem A4: Examine the background disribution obtained from Gibbs sampling, and suggest how these example sequences were likely generated. Is it likely that a similar phenomenon would be observed in real biological sequences?</span>**
Due to the stochastic nature of this approach, the path the algorithm takes through the sequence-probability space varies between runs. In this case, a perfect copy of the 'motif' is present in each of the input sequences, and hence Gibbs sampling converges on this intended sub-sequence in a decent proportion of runs. In subsequent sections of this assignment, consider whether such strict conservation is biologically realistic, and how this will impact the algorithm's behaviour.
<code>
import gibbs
import sym
import sequence
my_alpha = sym.Alphabet('ABCDEFGHIJKLMNOPQRSTUVWXYZ')
seqs = sequence.readFastaFile('example_seqs.fa', my_alpha)
W = 7 # the width of the motif sought
g = gibbs.GibbsMotif(seqs, W)
q = g.discover(niter = 2000)
print('Consensus: ', end='')
for pos in q:
    print(pos.getmax(), end='')
print()
p = g.getBackground()
print('Background distribution:', p)
print()
a = gibbs.getAlignment(seqs, q, p)
k = 0
print('Identified sub-sequences of length: ', W, 'in input sequences.')
for seq in seqs:
    #print("%s \t%s" % (seq.name, seq[a[k]:a[k]+W]))
    print(f"{seq.name}\t{seq[a[k]:a[k]+W]}")
    k += 1
</code>
Before moving on, run Gibbs sampling with W=7 until you find the desired motif. Save the foreground and background probability distributions in case you need to re-load them later.
<code>
# Save your foreground (q) and background (p) distributions
</code>
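One simple way to do this is with Python's standard `pickle` module; a minimal sketch is shown below (the file name is arbitrary, and it assumes the distribution objects returned by `gibbs` are picklable).
```python
# Sketch: persist the foreground (q) and background (p) distributions
import pickle

with open('example_motif_distributions.pkl', 'wb') as fh:
    pickle.dump({'q': q, 'p': p}, fh)

# ...and later, to restore them:
with open('example_motif_distributions.pkl', 'rb') as fh:
    saved = pickle.load(fh)
q, p = saved['q'], saved['p']
```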
### Motif searching
Above, we identified our mystery 'motif'. Now, you are going to construct position specific scoring matrices (log-odds matrices), otherwise called “position weight matrices” (PWMs) which represent this motif and can be used to search for occurrences of the motif. We need to be able to search for a motif because it may not always be in the same position of a sequence.
One of the variables that you commented on above contains the probabilities defining your motif that you need to construct PWMs. We have also previously defined two backgrounds, as well as the background generated in Gibbs sampling (based on the training sequences excluding the motifs). Both the probabilities defining the motif and a background distribution are required to generate a PWM. The class `PWM` is defined in the `sequence.py` module. An example of constructing a PWM is included below, but its use is also demonstrated in the weekly notebooks.
<code>
pwm = sequence.PWM(q, p)
print(pwm)
</code>
The PWM we've just created uses the background from the sequences in `example_seqs.fa`. In the following exercise, assume that the new sequences' backgrounds are similar to that of `example_seqs.fa`. Why is this an important consideration?
There are a couple of ways of using a PWM to score sequences as exemplified below. Here, we define four sequences in which we'll search for our motif.
<code>
my_alpha = sym.Alphabet('ABCDEFGHIJKLMNOPQRSTUVWXYZ')
seq1 = sequence.Sequence('XZGUIZMVHVEVZRHJJUCXKBOCBXEUMIXQZITGTAMLPTJSNWRFDDOBANZLGQQZHONFVHKIKJYXCWCUIYOV', my_alpha)
seq2 = sequence.Sequence('SMATNQFJYWFHPRUTYINWLNAAOZXWUQKWCVOZSIPPLCONJRYVKBOBTIAIUVOBUNUANFVSEQGMHMIORBRC', my_alpha)
seq3 = sequence.Sequence('OQUPKMLEZXELCMIVMVUDBPEAAPFJPROTEINFNBYNLDXTNYLCLUCHROCOOHFWQYCIHVVKLAEASAZXUINI', my_alpha)
seq4 = sequence.Sequence('JUYROTEMNHKREGNRPVSHLKCUUSWIMWRLRATIPROTEINFMEHMFMXIGFUVJEEWQLGHUOLHLVALELRGDYMS', my_alpha)
</code>
`seq1` does not contain a copy of the motif. Searching it with the PWM class method `PWM.search` therefore won't return any hits, as the default threshold score is 0. Given the highly specific nature of our motif, we would expect random sub-sequences of length W to score poorly (<0). This can be understood by viewing the PWM (printed above). Characters which are not the consensus character in a given position have very negative log-odds, providing little flexibility for variation in the motif.
<code>
# Search for the motif in the above sequences
pwm.search(seq1, 0) # A search in seq1 returns no hits for our motif
</code>
#### Try searching the other three sequences using your PWM, and understand what `PWM.search` outputs. You should also see examples of hits where a slightly 'mutated' version of the motif is present.
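For instance, a short loop over all four sequences (a sketch reusing the `PWM.search` call demonstrated above) makes the comparison straightforward:
```python
# Sketch: run the same PWM search over all four example sequences
for label, s in zip(['seq1', 'seq2', 'seq3', 'seq4'], [seq1, seq2, seq3, seq4]):
    print(label, pwm.search(s, 0))   # hits scoring above a threshold of 0
```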
**<span style="color:red">Problem A5: Which sequence defined above appears to contain two copies of the mystery motif? </span>**
A second method, `PWM.maxscore`, will return the position and score of the highest scoring window of length W, regardless of the score. See below for an example. Observe the output for each of the sequences above.
<code>
myseq = seq1
result = pwm.maxscore(myseq)
print('The maximum score', result[0], 'occurs at position', result[1])
threshold = 0
result = pwm.search(myseq, threshold)
print('All matches above', threshold, 'are:')
for r in result:
    print('\t', r)
</code>
### The metallo-beta-lactamase superfamily
The following information is relevant to the sequence data you will use throughout this assignment.
The term 'homologous protein' is often associated with different versions of the same gene across species - distinct in sequence, yet ultimately undertaking a highly similar function. This over-simplification conflates 'homologs' and 'orthologs', and fails to consider modes of molecular evolution beyond just speciation, such as gene duplication and subsequent functional specialisation.
Studying the composition of large and diverse protein families - often termed 'superfamilies' - is further complicated by the phenomenon of molecular 'promiscuity', whereby some proteins are seemingly capable of catalysing multiple distinct reaction types, often on different substrates. Tracing the evolution of function within such families is a complex undertaking, particularly where the level of sequence divergence is high and the last common ancestor is billions of years old.
The metallo-beta-lactamase (MBL) superfamily is an ancient group of related enzymes with an extraordinarily broad catalytic repertoire. Named for the first such activity to be discovered (degradation of beta-lactam antibiotics), these enzymes possess a highly conserved $\alpha\beta\beta\alpha$ fold.
**Figure 1: Conserved fold of the MBL superfamily**
<img src="mbl_fold.png" width="400" height="400" />
Despite this structural conservation, pairwise sequence similarities can be as low as 5%, meaning that alignment, let alone phylogenetic inference, is anything but straightforward. How then, can we identify MBL superfamily members without undertaking the laborious process of structural determination?
Incredibly, the sequence and structural positions implicated in catalysis are the same throughout the entire superfamily. These positions localise to two distinct sites in 3D space upon folding, each of which can coordinate metal ions (hence *metallo*-beta-lactamase) that participate in catalysis.
**Figure 2: Metal binding residues of the MBL superfamily**
<img src="mbl_metals.png" width="250" height="250" />
Residues present at these sites are well conserved, and invariably include histidine. Subtle differences occur at some positions, which have been linked to specific metal preferences and/or catalytic functions.
All protein entries in Uniprot are scanned for the presence of MBL domains, among many others, using a complementary database, [Interpro](https://www.ebi.ac.uk/interpro/). Take a look at entry [P52700](uniprot.org/uniprotkb/P52700/entry), and visualise the annotation of various protein domains by Interpro.
**<span style="color:red">Problem A6: Provide the start and end positions of the Interpro-annotated MBL domain for P52700.</span>**
Interpro also attempts to classify proteins into homologous superfamilies.
**<span style="color:red">Problem A7: What alternative name does Interpro use for the MBL superfamily?</span>**
Several different functions are attributed to proteins in the MBL superfamily. Those for which beta-lactamase is the primary activity are often referred to as *True MBLs* or *Class B MBLs*. These can be further divided into types B1, B2 and B3. The marked ability of *promiscuous* MBL superfamily members to catalyse multiple reaction types is of growing interest to researchers.
Evolutionarily distinct from these enzymes are Classes A, C and D, collectively termed *Serine beta-lactamases* (SBLs), which form part of another protein superfamily. While both are capable of degrading beta-lactam antibiotics, MBLs and SBLs gained this ability via *convergent evolution*.
### Datasets
You are provided with a number of datasets for use in the following exercises. These are briefly described here:
* `mbl_seqs.fa` - All bacterial sequences with an annotated MBL domain were extracted from Uniprot and clustered at 40% sequence identity. A representative from each cluster was placed in this file.
* `positives.fa` - A set of 20 proteins verified as Class B beta-lactamases.
* `negatives.fa` - A set of 20 random proteins from various model species known **not** to contain an MBL domain.
* `active.fa` - A set of proteins found to degrade beta-lactam antibiotics in a high-throughput screening experiment (further context provided below).
### Motif discovery in MBLs
The above examples provide the essential tools for linear motif discovery with Gibbs sampling, and can be generalised to any biological sequence type. You will now perform motif discovery and searching on MBL sequences. Note that there are no single correct solutions to the following problems, however your processes must be sufficiently justified.
#### Perform Gibbs sampling on the full set of bacterial MBL sequences
`mbl_seqs.fa` contains a representative sample of all bacterial MBL superfamily members in Uniprot. Using your knowledge of Gibbs sampling and MBL proteins, you will now carry out motif discovery on this set of sequences. While the seven highly conserved metal-binding residues do not all occur together in the *linear* sequence, you will notice in Figure 2 that four of these residues are in close proximity. Assume that the location of these positions represents the most highly conserved sequence region.
You should consider several factors while implementing your motif discovery method.
* What window size will you use?
* How many iterations are required?
* How will you determine the 'best' motif if different results are obtained between runs?
* Given your knowledge of conserved residues in MBLs, what may you expect to see in your consensus motif?
You will want to run Gibbs multiple times when considering the above parameters. Once happy with your motif, ensure that you save any relevant distributions for subsequent searching.
<code>
# Write code for motif discovery in MBLs here...
</code>
**<span style="color:green">Problem A8: State and justify the window size used in your motif discovery process.</span>**
**<span style="color:green">Problem A9: State and justify the number of iterations used.</span>**
**<span style="color:red">Problem A10: Provide the consensus sequence of your final motif.</span>**
**Please ensure Coder Quiz accepts your motif before moving on to subsequent sections.**
**<span style="color:blue">Problem A11: Provide the code used for motif discovery from the above cell.</span>**
#### Construct a PWM for motif searching
<code>
# Write code for constructing your PWM here...
</code>
**<span style="color:green">Problem A12: Describe and justify the background distribution you used.</span>**
`print` the PWM object to view the log-odds matrix.
**<span style="color:green">Problem A13: Which positions appear to have the highest levels of conservation? What is the significance of these residues?</span>**
#### Determine a score threshold for MBL domain-containing proteins
In order to search for MBL domains among unknown sequences, an appropriate score threshold is required. Use the two sequence sets, `positives.fa` and `negatives.fa` (described above) to select this threshold.
<code>
# Write code for threshold determination here...
</code>
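Purely as a structural sketch (the threshold choice and its justification are yours), one could score every positive and negative sequence with `PWM.maxscore` and compare the two score distributions. `Protein_Alphabet` is assumed here to be the amino-acid alphabet exposed by the `sym` module; substitute whichever alphabet object your copy of binfpy provides.
```python
# Structural sketch: compare PWM.maxscore distributions for positives vs negatives.
# sym.Protein_Alphabet is an assumed name for binfpy's amino-acid alphabet.
positives = sequence.readFastaFile('positives.fa', sym.Protein_Alphabet)
negatives = sequence.readFastaFile('negatives.fa', sym.Protein_Alphabet)

pos_scores = [pwm.maxscore(s)[0] for s in positives]   # maxscore returns (score, position)
neg_scores = [pwm.maxscore(s)[0] for s in negatives]
print('Lowest positive score :', min(pos_scores))
print('Highest negative score:', max(neg_scores))
```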
**<span style="color:green">Problem A14: State and justify the score threshold you have chosen, making reference to relevant observations from the positive and negative sequences.</span>**
**<span style="color:green">Problem A15: Given your observations and chosen threshold, comment generally on the likely sensitivity and specificity of your PWM for identifying MBL superfamily proteins. Calculations of these values are *not* required. </span>**
**<span style="color:blue">Problem A16: Provide the code used for threshold determination from the above cells.</span>**
#### Search a set of unknown proteins for your motif
A biochemist purified several proteins for which beta-lactamase activity was suspected, and high-throughput screening was performed to identify those capable of degrading at least one beta-lactam substrate. The genes corresponding to these active proteins were sequenced, and subsequently identified by BLAST search of the Uniprot database. These records were downloaded in FASTA format and are available in the file `active.fa`.
Check for the existence of your motif in each of these beta-lactamase sequences.
<code>
# Write code for searching the beta-lactam degrading enzymes here...
</code>
**<span style="color:red">Problem A17: How many of these proteins score above your chosen threshold? </span>**
**<span style="color:red">Problem A18: Provide the Uniprot accession for one of the highest scoring proteins. </span>**
**<span style="color:blue">Problem A19: Provide the code used for motif searching from the above cell(s).</span>**
The Sequence class has an attribute `.info` which may be helpful in answering the following questions.
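For example, a short loop over the sequences read from `active.fa` prints each record's description line (a sketch; as before, `sym.Protein_Alphabet` is an assumed name for the amino-acid alphabet).
```python
# Sketch: inspect the FASTA description stored on each Sequence via .info
active_seqs = sequence.readFastaFile('active.fa', sym.Protein_Alphabet)
for s in active_seqs:
    print(s.name, '-', s.info)
```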
**<span style="color:green">Problem A20: Explain why several of the proteins capable of beta-lactamase activity do not appear to contain your motif. </span>**
Identify at least one sequence among your hits with an unusual description, then take a look at this paper by [Fröhlick *et al.*](https://academic.oup.com/peds/article/doi/10.1093/protein/gzab013/6294778)
**<span style="color:green">Problem A21: Provide a likely explanation for the presence of your motif in this sequence, as well as the protein's apparent beta-lactamase activity.</span>**
**<span style="color:green">Problem A22: Suggest how the motif discovery processes undertaken in this section could be altered to more specifically identify *true* beta-lactam degrading enzymes from the MBL superfamily.</span>**
## Part B: inferring ancestral protein sequences using graphical models
### Introduction
Part B problems are less guided and require some choices to be made by you, informed by your understanding of the relevant probabilistic and biological concepts. Activity on the discussion board around these problems is encouraged, however ensure that no code or solutions are shared publicly. Marks are available for discussion board participation (details in *Marking* section above).
Import the following modules into Python:
<code>
!pip install sympy
</code>
<code>
import asr
import sequence
import prob
import sym
</code>
`asr.py` makes use of the `sympy` package for matrix operations. If you find that it is not available to import, it is easily installable via standard methods (see https://pypi.org/project/sympy/).
Given a set of evolutionarily related present day ('extant') biological sequences, we are often interested in the paths by which said sequences diverged from a common ancestor. In the absence of ancient DNA from fossils (or a time machine), such queries can only be answered with predictions. Techniques for doing so are commonly termed ancestral sequence reconstruction (ASR). Various paradigms exist for performing ASR. One such paradigm, Maximum Likelihood (ML), will be discussed and used in this assignment.
### Assumptions in ancestral sequence reconstruction
Several key assumptions underlie ML-ASR, and are discussed below.
The first of these is the phylogenetic topology describing the relationships between extant sequences. Inferring these relationships is a separate (but related) matter which we won't concern ourselves with in this assignment. Some are discussed in SCIE2100/BINF6000, so refer back to those notes if you'd like a refresher; however, all you need to know here is that ML-ASR takes a fixed phylogenetic tree as input. Extant sequences are represented as terminal, or leaf, nodes, while ancestral sequences are represented as internal nodes. Given a bifurcating tree with *m* leaf nodes, the total number of nodes, *n*, is *2m-1*. The below example shows a tree with 4 leaf nodes (A, B, C, and D), and 3 internal nodes (X, Y, and Z).
<img src="example_tree.png" width="600" height="600" />
In general, we have access to the sequences at extant nodes. A common motivation of ASR is to determine the most likely *joint* assignment of sequences to all internal nodes. The fixed nature of our input phylogenetic tree allows us to make some helpful assumptions of *conditional independence* between nodes. In fact, the structure of phylogenetic trees can be intuitively imagined as Bayesian networks in which we assume that sequence characters at nodes are dependent only on the characters at their direct parents, and the corresponding branch length. If the concept of Bayesian network representations for phylogenetic trees is unfamiliar, refer to week 4 lecture materials.
**<span style="color:green">Problem B1: Assuming a Bayesian network representation of the phylogenetic tree in the figure above, provide an expression for the joint probability of some arbitrary assignment of characters, $\{\alpha_A,\alpha_B,\alpha_C,\alpha_D,\alpha_X,\alpha_Y,\alpha_Z\}$, in terms of all relevant conditional and/or prior probabilities. </span>**
**<span style="color:green">Problem B2: Briefly explain why the assumptions of conditional independence underlying Bayesian networks are sensbile for inferring ancestral sequence characters. </span>**
Another major assumption often (although not always) made in ASR is that of *column independence*. It specifies that all aligned columns are considered separately when inferring ancestral states. Effectively, this means that for a given phylogenetic tree built from an alignment of length *L*, determining the maximally likely set of ancestral sequences involves inference on *L* independent Bayesian networks.
**<span style="color:green">Problem B3: Briefly explain why the assumption of column independence for ASR may be considered a statistical simplification, but biologically unrealistic. </span>**
### Time-dependent amino acid substitution models
In week 4 lectures, we discussed the concept of sequence evolution along branches as continuous-time Markov chains. Specifically, we considered the Jukes-Cantor model for DNA, which makes an assumption of equal mutation rates to any character other than the current one. While this model requires only one rate parameter, the key assumption is not well supported in the reality of evolution.
Time-dependent substitution models are generally represented by an *instantaneous rate matrix*, or IRM. This matrix contains parameters relating to the likelihood of transitioning between two characters over time. The DNA sequence alphabet contains only four characters, and hence only 12 character 'transitions' (**excluding** cases where the character does not change) are possible. Many biological substitution models are *time-reversible*, which means that the direction of evolution on a tree is unimportant to the model. This is useful when the *root* of a phylogenetic tree is not confidently known. For time-reversible models, only one rate parameter is required to determine both transition probabilities for a pair of characters. For a DNA alphabet, this reduces the number of required rate parameters to 6.
**<span style="color:red">Problem B4: Assuming a time-reversible substitution model, provide a mathematical expression for the number of rate parameters required given a sequence alphabet of size $N$. How many rate parameters are therefore required for a standard amino acid alphabet (excluding stop and gap characters)?</span>**
Substitution models also require an equilibrium probability distribution over the alphabet of characters. This can be thought of as the *prior* probability distribution of observing a character at any node.
For a given substitution model, probabilities of transitioning from one state to another over a specific time $t$ are determined through some basic linear algebra on the equilibrium and instantaneous rate matrices. You aren't required to understand the mathematics underlying these, however if you're interested, it is demonstrated in the `asr.py` code. This results in a matrix, $P(t)$, which directly states probabilities of a particular sequence character at a child node conditional on its immediate ancestor and preceding branch length.
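For the curious, the idea can be illustrated outside `asr.py` with a toy two-state, time-reversible model: the conditional probability matrix is the matrix exponential of the instantaneous rate matrix scaled by time, $P(t) = e^{Qt}$. The sketch below uses `scipy` and invented rate values purely for illustration; it is not the JTT model.
```python
# Illustration only: P(t) = expm(Q * t) for a toy 2-state, time-reversible model
import numpy as np
from scipy.linalg import expm

Q = np.array([[-0.3,  0.3],    # instantaneous rates out of state 0 (rows sum to zero)
              [ 0.3, -0.3]])   # instantaneous rates out of state 1

for t in (0.1, 1.0, 10.0):
    print(f"t={t}:\n{expm(Q * t)}\n")   # rows converge to the equilibrium distribution
```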
Luckily, several pre-defined substitution models for amino acid sequences are available. Currently, only the Jones-Taylor-Thornton (JTT) model - which we will use in this section - is implemented in `asr.py`. Let's take a look at this model.
<code>
JTT = asr.MODELS['JTT'] # The pre-defined JTT model in asr.py
</code>
First, we will investigate transition probabilities from a parent node to a single child node for a given sequence position. Hypothetically, we know that the ancestral character at this position was alanine (A). Let's take a look at the *equilibrium* frequency of A in the JTT model.
<code>
print(JTT.priorProb('A'))
</code>
Next, let's consider how likely it is (based on our model) that this A will transition to various amino acids over a branch length of $t=10$.
<code>
print('A to V:')
print(JTT.condProb('A', 'V', 10))
print()
print('A to L:')
print(JTT.condProb('A', 'L', 10))
print()
print('A to Y:')
print(JTT.condProb('A', 'Y', 10))
</code>
**<span style="color:green">Problem B5: Suggest a biological explanation for the differences in transition probabilities observed.</span>**
In addition to the probability of transitioning between two characters, we're also often interested in the probability of a character being maintained. Investigate the probability of an alanine residue being maintained over branch lengths of $t=1$, $t=5$, $t=10$, $t=50$, $t=100$, $t=500$, and $t=1000$.
<code>
# Write your code here
branch_lengths = [1, 5, 10, 50, 100, 500, 1000]
for t in branch_lengths:
    print(f"t={t}: {JTT.condProb('A','A',t)}")
</code>
**<span style="color:green">Problem B6: Describe what you observe with increasing time. What is happening to the transition probabilities in terms of the model?</span>**
### Ancestral inference in phylogenetic trees
We're generally faced with far more complex situations than a single parent-child relationship. When performing evolutionary inference on actual phylogenetic trees, we must consider many dependencies - and hence many conditional probabilities - simultaneously.
To demonstrate some further functionality of `asr.py`, we'll first consider a simple phylogenetic tree. It is represented below as a Bayesian network.
<img src="simple_bn.png" width="400" height="400" />
Network nodes can be represented as `PhyloBNode` objects which hold information such as the node's parent (if one exists), the preceding branch length, and the alphabet of allowable sequence characters. If the character state at a node is known, as is the case for the leaf nodes in the figure above, then it can also be annotated. From the code below, identify which variables represent each node in the example Bayesian network. Ensure that you understand each input to the PhyloBNode constructors.
<code>
JTT = asr.MODELS['JTT']
ancestor_1 = asr.PhyloBNode(JTT, label='ancestor_1')
ancestor_2 = asr.PhyloBNode(JTT, parent=ancestor_1, distance=1, label='ancestor_2')
child_1 = asr.PhyloBNode(JTT, parent=ancestor_1, distance=2, label='child_1', annot='K')
child_2 = asr.PhyloBNode(JTT, parent=ancestor_2, distance=1, label='child_2', annot='A')
child_3 = asr.PhyloBNode(JTT, parent=ancestor_2, distance=1, label='child_3', annot='V')
</code>
With our nodes constructed, we now need to connect them in a `PhyloBNet` object which is representative of the network shown above. We can do this by first initialising our network with the root node, and subsequently adding a list of all other nodes. It's important that the parent fields for all nodes are correct, and that no duplicate labels exist! Note that the instantiation of the network adds the root node to the network. You don't need to (and should not) add it again - this will cause unusual behaviour of the object's methods.
<code>
bn = asr.PhyloBNet(root=ancestor_1)
bn.addNodes([ancestor_2, child_1, child_2, child_3])
</code>
Provided that node instantiations are consistent with a valid phylogenetic tree, the network's structure is automatically resolved using the information provided in the nodes. To demonstrate this, we can extract the children of a given node, even though we haven't explicitly provided this information.
<code>
print([child.label for child in bn.getChildrenOf('ancestor_2')])
</code>
### Joint maximum likelihood ancestral inference
Ancestral reconstruction can be used to ask different questions about the evolution of related sequences. In some cases, we may be interested in the most likely sequence character at a specific node, without consideration of the assignments to all others. Such inference is referred to as a *marginal* reconstruction at the node in question. Conversely, we may want to determine the most likely joint assignment to all unknown nodes; that is, perform a *joint* reconstruction of the phylogenetic tree. This task demonstrates the latter.
The `PhyloBNet` method `getMLJoint` performs joint reconstruction on the tree. The implemented algorithm takes a 'brute force' approach. This means that it exhaustively calculates the likelihood of all joint assignments to unknown nodes, and finds that which gives rise to the most likely network. A 'shortcut' version of the algorithm can be used to dramatically reduce the number of joint assignments tested. By default the `reduced` parameter, which instructs the algorithm to take this shortcut, is set to `True`. Take a look at the code for `getMLJoint`.
**<span style="color:green">Problem B7: In your own words, describe the shortcut which is taken when `getMLJoint` is run in 'reduced' mode.</span>**
**<span style="color:green">Problem B8: For the example network above, calculate the number of joint assignments which `getMLJoint` would test using both the reduced and non-reduced modes. </span>**
Finally, we will perform joint reconstruction for our example network. As you should appreciate from Problem B8, this will take significantly less time than it would without running in 'reduced' mode!
<code>
print(bn.getMLJoint())
</code>
You will notice that two values are returned.
* A dictionary indicating the most likely sequence character at each unknown node
* The log-likelihood of the network when annotated with these characters
### Ancestral inference in the MBL protein superfamily
You will now perform some basic inference on MBL proteins. Candidates from some functionally characterised sub-families of the MBL superfamily were aligned, and a phylogenetic tree was subsequently inferred. The tree, including branch lengths, is shown below.
<img src="mbl_tree.png" width="800" height="800" />
A contiguous 9-residue segment of the alignment was extracted and is shown in the figure below. It is known that some conserved residues in this site are involved in the coordination of metal ion cofactors. The equivalent sequence segments are provided to you in the file `mbl_subseqs.fa`.
<img src="subseq_aln.png" width="600" height="600" />
**On the basis of the tree and sub-sequence alignment provided, perform joint maximum likelihood ancestral reconstruction. Use the result(s) to infer the most likely 9-residue subsequence of these sequences' last common ancestor.**
Some general pointers:
* All nodes (*including internal*) require a unique label
* The total log-likelihood of a subsequence reconstruction is simply the sum of the log-likelihood values for the individual columns.
<code>
# Cells provided here for your code. Add more as required
</code>
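Purely as a structural sketch of how the per-column inference might be organised (the node labels, star-shaped topology and branch lengths below are placeholders only; the real topology and distances must come from the tree figure, and the resulting answer is yours to derive):
```python
# Structural sketch only: per-column joint reconstruction, summing log-likelihoods.
# The star topology, node labels and branch lengths are placeholders, not the real tree;
# sym.Protein_Alphabet is an assumed name for binfpy's amino-acid alphabet (an alphabet
# including the gap character may be needed if the segments are gapped).
JTT = asr.MODELS['JTT']
subseqs = sequence.readFastaFile('mbl_subseqs.fa', sym.Protein_Alphabet)
aln_len = 9   # the extracted alignment segment is 9 residues long

total_loglik = 0
ancestor_chars = []
for col in range(aln_len):
    root = asr.PhyloBNode(JTT, label='N0')   # last common ancestor for this column
    leaves = [asr.PhyloBNode(JTT, parent=root, distance=1.0,   # placeholder distance
                             label=s.name, annot=s[col]) for s in subseqs]
    bn = asr.PhyloBNet(root=root)
    bn.addNodes(leaves)
    assignment, loglik = bn.getMLJoint()     # (dict of ML characters, log-likelihood)
    ancestor_chars.append(assignment['N0'])  # assumed to be keyed by node label
    total_loglik += loglik

print(''.join(ancestor_chars), total_loglik)
```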
**<span style="color:green">Problem B9: Briefly describe your approach to the problem in words.</span>**
**<span style="color:blue">Problem B10: Provide the code you used to determine the maximum likelihood ancestral sequence.</span>**
**<span style="color:red">Problem B11: Provide your inferred subsequence of the last common ancestor. What is the log-likelihood of this joint assignment of characters?</span>**
Much about the true evolutionary history of the MBL superfamily is unknown to date due to its vast sequence and functional diversity. Research of this nature requires comprehensive sequence curation to ensure sufficient coverage of the complete phylogenetic space. Such coverage is not achieved by five representative sequences - in fact, the number of identified sub-families is at least triple that which is represented here. It is always important to consider the limitations - both statistical and biological - of any bioinformatics workflow.
**<span style="color:green">Problem B12: Explain how insufficient sampling of sequences in the superfamily will bias ancestral inference.</span>**
As discussed above, an alternative approach to joint reconstruction is marginal reconstruction, where a single node is targeted by *marginalising* all others. Marginalisation in Bayesian networks was discussed in the week 4 material. These two reconstruction types can sometimes yield different inference results at a given node in a Bayesian network.
**<span style="color:green">Problem B13: Explain how the process of node marginalisation occurs by describing operations on a joint probability table.</span>**
**<span style="color:purple">Discussion Board Contributions: Provide a list of links to contributions you've made on the Ed discussion board.</span>**
## Appendix: How-to install binfpy
<a id='howto_install_binfpy'></a>
The `binfpy` library is available from our local git server [GitLab_binfpy]. You can use git to store the binfpy directory on your computer where you can easily update it if any changes are made (see instructions below). You can also use the link to download the files in binfpy and place them in a folder of your choice. If any changes are made you will have to repeat the download process. **You will have to update your path regardless of which method you choose.**
[GitLab_binfpy]: http://bioinf1.biosci.uq.edu.au/opensource/binfpy.git
To install binfpy on your own computer, you need to have a git client installed on your computer or retrieve files from the web site above.
**Mac OS X or Linux**
If you're on Linux or Mac OS X, you should be set already. Open the terminal and change to a directory of choice and type
```
git clone http://bioinf1.biosci.uq.edu.au/opensource/binfpy.git
```
That will create a new directory called `binfpy` with a bunch of Python files.
Add this directory, e.g. `/Users/johndoe/binfpy` to your PYTHONPATH, by adding the following line to your start-up file, e.g. `.profile`
```
export PYTHONPATH=/Users/johndoe/binfpy
```
This will be read next time you start a new shell, or you can activate immediately by
```
source .profile
```
**Windows**
Install [git_for_windows]. Open the windows command prompt, navigate to a directory of your choice and type
```
git clone http://bioinf1.biosci.uq.edu.au/opensource/binfpy.git
```
That will create a new directory called `binfpy` with a bunch of Python files.
Add this directory, e.g. `C:\Users\johndoe\binfpy`, to your PYTHONPATH using the following instructions:
Hit start and search for 'Environment variables'
Add or edit the variable PYTHONPATH to include the `binfpy` directory
[git_for_windows]: https://git-for-windows.github.io/
**Updating**
With all the above installed, you should be able to fire up your Python environment of choice, and `import` statements will be able to find the `binfpy` files. If there is an update, you can return to the binfpy directory and type
```
git pull
```
This will keep your files up-to-date.
<a id='export'></a>
## How to export a single cell as a Python file
If you are asked to save a cell as a Python file and upload it to Coder Quiz, you can do this easily by adding the line
`%%writefile test.py`
to the start of the Python cell, where test can be any name.
Try it below to see how you can save this simple Python cell to your current working directory
<code>
%%writefile test.py
animal = "dog"
if animal == "dog":
print ("Bark")
</code>
|
{
"filename": "Assignment1_Assignment1_2023.ipynb",
"repository": "newtonharry/BINF7000",
"query": "transformed_from_existing",
"size": 75994,
"sha": ""
}
|
# CustomDB_MTG_Taxa_Profiling_v1.0-checkpoint.ipynb
Repository: new-atlantis-labs/Metagenomics
# Re-formatting plankton-specific marker genes fetched from different sources to create a custom database (DB) compatible with the powerful metagenomics-based taxonomic profiling tool [Motus](https://www.nature.com/articles/s41467-019-08844-4).
See Motus' GitHub repo [here](https://github.com/motu-tool/mOTUs).
### NOTE: given that DB customization for Motus is not clearly explained in the Docs, we use a tool called [read_counter](https://github.com/AlessioMilanese/read_counter), which is a wrapper to run Motus using a customized DB.
Importantly, the default reference DB of marker genes used by Motus is not suitable for profiling marine planktonic ecosystems. Therefore we will create a plankton-specific marker gene DB to quantify relative abundance profiles across taxonomic groups. To achieve this, we will build on two well curated marker gene DBs:
- The huge catalog of phytoplankton psbO marker gene sequences, which encodes the manganese-stabilising polypeptide of the photosystem II oxygen evolving complex, reported in this [paper](https://onlinelibrary.wiley.com/doi/epdf/10.1111/1755-0998.13592) and accessible [here](https://www.ebi.ac.uk/biostudies/studies/S-BSST761?query=A%20robust%20approach%20to%20estimate%20relative%20phytoplankton%20cell%20abundances%20from%20metagenomes).
- The [MZGdb](https://metazoogene.org/MZGdb) database and most specifically the "All Plankton Combo" files contain all data from the All Zooplankton and the All Ichthyoplankton combined files. This database was described in this [paper](https://link.springer.com/article/10.1007/s00227-021-03887-y). Here we will focus on DNA sequences for the barcode region of mitochondrial cytochrome oxidase I (COI).
#### The code developed in this Notebook is meant for developing our first proof-of-concept (POC1) biodiversity data asset, which focuses on the taxonomic composition found in a given environmental sample. In a nutshell, we assess relative abundances across numerous plankton taxonomic groups from metagenomics (MTG) datasets.
</br>
Author: jay@newatlantis.io
<code>
# install for outside requirements
!pip3 install -r requirements.txt
</code>
<code>
# imports
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import cm
import seaborn as sns
import colorsys
from matplotlib.collections import PatchCollection
import Bio.SeqIO as bioseqio
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
#from Bio.Alphabet import IUPAC
from Bio import Entrez
from ete3 import NCBITaxa
from taxonomy_ranks import TaxonomyRanks
from subprocess import Popen, call, STDOUT, PIPE
import os
import shutil
import pandas as pd
import numpy as np
import matplotlib
import json
import glob
import re
import gzip
import sys
import csv
import time
import io
import pathlib
from collections import OrderedDict
import pickle
import bz2
from IPython.display import Image
from itertools import combinations
import itertools
</code>
<code>
# pandas setup
pd.set_option('mode.chained_assignment', None)
</code>
<code>
# matplotlib setup
matplotlib.rcParams['savefig.dpi'] = 1000
matplotlib.rcParams['figure.dpi'] = 1000
sns.set_style("whitegrid", {'axes.grid' : False})
sns.set_context("paper")
sns.set(font='serif')
sns.set_style('ticks')
</code>
<code>
# graphic
import plotly.graph_objects as go
import plotly.io as pio
import plotly.express as px
###Uncomment below if necessary
rendef = "png" #"pdf"
fig_renderer = pio.renderers[rendef]
fig_renderer.width = 1000
fig_renderer.height = 1000
pio.renderers.default = rendef
</code>
Utility functions
<code>
def format_tax_lbl(taxid='94617'):
    # Function to properly format taxonomic labels compatible with MetaPhlan
    tax_lvls_lbls = ['species', 'genus', 'family', 'order', 'class', 'phylum', 'superkingdom'][::-1]
    rank_taxon = TaxonomyRanks(taxid)
    rank_taxon.get_lineage_taxids_and_taxanames()
    rank_dict = list(rank_taxon.lineages.values())[0]
    tax_tree = list(rank_taxon.lineages.values())[0]
    # Parsing info on higher ranks is optional, but can be quite handy when low ranks are unclassified: for higher rank assignment purposes if needed
    tax_ranks_list = [tax_tree[t][0].replace("NA", "unclassified") for t in tax_lvls_lbls]
    ncbi_taxIDs_list = [str(tax_tree[t][1]) for t in tax_lvls_lbls]
    return ncbi_taxIDs_list, tax_ranks_list

def parse_lineage(lineage_str):
    # Function to correctly parse a lineage using the NCBI tax ID
    return (";".join(format_tax_lbl(lineage_str.split(';')[-1].replace('_', ' '))[-1])).replace(" ", "_")
</code>
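A quick usage example of these helpers (this assumes the local NCBI taxonomy database used by `taxonomy_ranks`/`ete3` has already been downloaded; taxid 9606, *Homo sapiens*, is used purely for illustration):
```python
# Example call: ranked lineage (taxids and names) for a given NCBI taxid
taxids, names = format_tax_lbl('9606')
print(taxids)
print(names)
```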
<code>
# *Always* tell NCBI who you are
Entrez.email = "REPLACE WITH EMAIL"
</code>
<code>
# Dumping our select marker gene set into this fasta
custom_db_fasta_fid = '../custom_db/CustomPhytoZooPlanktonMGs.fna'
</code>
### Processing/filtering COI (mitochondrial cytochrome oxidase I gene) sequences for Zooplankton species
<code>
coi_zooplankt_df = pd.read_csv("../data/MZGdata-coi__MZGdbALL__o00__A.csv", header=None)
</code>
A COI sequence from this highly curated gene sequence database looks like this
<code>
print(coi_zooplankt_df.iloc[1,30])
</code>
<code>
print("This COI DB contains a total number of {} sequences".format(coi_zooplankt_df.shape[0]))
</code>
The following chunks of code will perform filtering and reformatting of the headers of each sequence in the original COI DB
<code>
###Uncomment below to rebuild custom DB
filtered_coi_zooplankt_df = coi_zooplankt_df[coi_zooplankt_df.iloc[:,33].map(lambda s: isinstance(s, str))]
#Let's concentrate on organism name and corresponding COI sequence
min_coi_zooplankt_df = filtered_coi_zooplankt_df.iloc[:,[1,8,30]]
#Rename columns
#Use genebank accession to fetch a bunch of info needed to reformat headers
min_coi_zooplankt_df.columns = ["Species_name","Genebank_accession","Species_COI_seq"]
#Add full lineage
min_coi_zooplankt_df['Full_lineage'] = filtered_coi_zooplankt_df.iloc[:,33].map(lambda s: ";".join([l for l in s.split(';') if '_EXT' not in l and l!='']))
#Drop duplicate entries by species name
min_coi_zooplankt_df = min_coi_zooplankt_df.drop_duplicates(['Species_name'])
#Reset index
min_coi_zooplankt_df.reset_index(drop=True, inplace=True)
</code>
Iterating over each row in the filtered DF and creating a Seq object with formatted header, which is dumped into the dedicated file
<code>
###Uncomment below to rebuild custom DB
# for i,seq_rec in min_coi_zooplankt_df.iterrows():
# record = SeqRecord(
# Seq(seq_rec['Species_COI_seq']),
# id = seq_rec['Genebank_accession'] + '__' + seq_rec['Full_lineage'],
# name="",
# description="")
# with open(custom_db_fasta_fid, "a") as output_handle:
# bioseqio.write(record, output_handle, "fasta")
</code>
### Processing/filtering psbO sequences for Phytoplankton species.
Adding to fasta file already created above
<code>
#Original psbO DB
psbO_db_fid = '../data/psbO_20210825.fna'
</code>
<code>
print("An entry in the psbO DB (fasta format) looks as follows:\n")
!head $psbO_db_fid -n 10
</code>
<code>
#Uncomment below to rebuild custom DB
with open(psbO_db_fid, "r") as handle:
for (i,record) in enumerate(bioseqio.parse(handle, "fasta")):
if(i>10):
#Fetch standard header components and reformat full header
seq_id, tax_lin = record.description.split(' ')[:2]
record.description=''
record.id = "{}__{}".format(seq_id, tax_lin)
#Dump to fasta
with open(custom_db_fasta_fid, "a") as output_handle:
bioseqio.write(record, output_handle, "fasta")
</code>
### Our final DB (concatenated gene markers for both Phyto and Zooplankton species)
<code>
print("Total number of marker genes included in our customized DB of marker genes is:")
!grep -c '>' $custom_db_fasta_fid
</code>
### With the customized marker gene DB (for both zooplankton --COI sequences-- & phytoplankton --psbO sequences--) created above we can screen across metagenomic datasets in order to assess the taxonomic composition of the plankton community sequenced.
#### A short read in a MTG file looks like this:
```
@ERR1719507.4222 H2:D1NNJACXX:6:1101:5303:2333/1
AGCGAGCCCACTGTGTTCCCGGGGGACTGGGGGCCATTAGCGGCGTCAGACACGGGGGGGAGCGGGGTCTGACCATCCTGGGCCGGGACCCGGCCGTCCAGTTTGTCCAGCATGGCCCGGGCCGCCCCGTGCTTGGCCTGCTTCTTG
+
CCCFFFFFHHHHGGIIJJJJIJJJJGHIJJJJHDDDDDDDDDDCDBJJJHIGJJJJJJJJJJIJJJJJJJJIJJJJJJJJJJJJJJJEJJJJJJJJJJJJJDDDDDDDDDDDDDDDDDFFHJJIIJJJJJIJJJHGHHHFFFFFCCC
```
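Such reads can be iterated over directly with Biopython before mapping; a minimal sketch (the file name is illustrative only):
```python
# Sketch: peek at the first few reads of a gzipped FASTQ file with Biopython
import gzip
from Bio import SeqIO

with gzip.open('ERR1719507_1.fastq.gz', 'rt') as handle:   # illustrative file name
    for i, read in enumerate(SeqIO.parse(handle, 'fastq')):
        print(read.id, len(read.seq))
        if i >= 2:
            break
```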
### Analysis
The following is a simple visualization illustrating the relative abundances across a range of plankton taxonomic groups (down to the species level) that we obtained from profiling a given TARA Ocean sample (ID: [ERR1719507](https://www.ebi.ac.uk/ena/browser/view/ERR1719507)) collected in the North Atlantic Ocean (offshore Cadiz, Spain, [Location 36.5533 N 6.5669 W](https://www.google.com/maps/place/36%C2%B033'11.9%22N+6%C2%B034'00.8%22W/@36.5534368,-6.5668384,17z/data=!4m5!3m4!1s0x0:0x9aa20881883fdb5f!8m2!3d36.5533!4d-6.5669)), on date/time 2009-09-15T18:00, using a PUMP (High Volume Peristaltic Pump). The sample material (particulate matter, including plankton (ENVO:xxxxxxxx)) was collected at a depth of 38-42 m, targeting a deep chlorophyll maximum layer (ENVO:xxxxxxxx) in the marine biome (ENVO:00000447). The sample was size-fractionated (0.8-5 micrometres), and stored in liquid nitrogen for later detection of unicellular eukaryote (protist) nucleic acid sequences by pyrosequencing methods, and for later metagenomics/transcriptomics analysis. This sample has replicate sample(s): TARA_X000000407.
<code>
with open('../results/ERR1719507_mapped_reads.map','r') as fid:
    lines = fid.readlines()
#Parsing lines
data = [re.split('\t|__',l.strip()) for l in lines if ';' in l]
#Pick only those with a given format to avoid noise
data = [l for l in data if len(l)==3]
</code>
<code>
#Cast data into a DF
abund_df = pd.DataFrame(data, columns = ['GeneID','Lineage','Abundance'])
#Change data type
abund_df['Abundance'] = abund_df['Abundance'].astype(float)
#Sort data by Abundance
abund_df.sort_values('Abundance', ascending=False, inplace=True)
#Reset index for tractability purpose
abund_df.reset_index(drop=True, inplace=True)
#Reformatting lineage using NCBI taxID
abund_df['Lineage'] = abund_df['Lineage'].map(parse_lineage)
#Count total number of observations/hits across unique taxa
unique_taxa_abund_df = abund_df.groupby('Lineage')['Abundance'].sum().sort_values(ascending=False)
#Take log10 and make df
unique_taxa_log_abund_df = unique_taxa_abund_df.map(np.log10).reset_index()
#Cut off by a certain value
thresholded_df = unique_taxa_log_abund_df[unique_taxa_log_abund_df['Abundance']>=1]
</code>
Make a new DF with columns = taxonomic level, and then append at the end the abundance observed in the sample analyzed
<code>
tax_enumeration_df_filtered = pd.DataFrame.from_records(thresholded_df['Lineage'].map(lambda s: s.split(';')).values)
#Name columns
tax_enumeration_df_filtered.columns = ['species','genus','family', 'order', 'class', 'phylum', 'superkingdom'][::-1]
#Add log-transformed abundance column
tax_enumeration_df_filtered['log_abundance'] = thresholded_df['Abundance'].values
</code>
<code>
#Peek at the new DF
tax_enumeration_df_filtered.head()
</code>
<code>
fig = px.sunburst(tax_enumeration_df_filtered,#.query('superkingdom == "Eukaryota"'),
path=['superkingdom','phylum', 'class', 'order', 'family', 'genus'],
values='log_abundance', color='order')
fig.update_layout(
title={
'text': "Species richness in TARA Ocean's sample (ID = ERR1719507)",
'y':0.985,
'x':0.5,
'xanchor': 'center',
'yanchor': 'top',
'font_size':30,
'font_color':"black"})
# fig.update_yaxes(automargin=True)
# fig.update_xaxes(automargin=True)
fig.update_layout(
autosize=False,
# width=500,
# height=500,
margin=dict(
l=1,
r=1,
b=4,
t=50,
pad=2
),
paper_bgcolor="White",
)
fig.show(width=1000, height=1000)
# pio.write_image(fig, "CustomProkEukDB/SunburstTaxDist_DB_v2.3.png", width=1.5*1000, height=1*1000, scale =1.25)
</code>
### Observations
- Zooplankton is clearly the numerically dominant taxonomic group in this sample.
- Based on the test performed above, one can conclude that the taxonomic profiling tool is quite effective at characterizing the taxonomic composition of MTG datasets using our customized DB.
- The tool is ready for large-scale testing (using more TARA Ocean MTG datasets) to better assess the ability of the computational pipeline to characterize taxonomic diversity across samples collected from a great variety of oceanic provinces.
- Once testing is achieved, we can confidently deploy the tool to characterize our future in-house collected MTG datasets.
|
{
"filename": "CustomDB_MTG_Taxa_Profiling_v1.0-checkpoint.ipynb",
"repository": "new-atlantis-labs/Metagenomics",
"query": "transformed_from_existing",
"size": 22671,
"sha": ""
}
|
# ae_7.ipynb
Repository: CKolland/Research-Internship-SchulzLab
# Main
Autoencoders are powerful neural network architectures used for unsupervised learning, enabling the extraction of meaningful features from high-dimensional datasets such as single-cell RNA sequencing (scRNA-seq) data. When applied to scRNA-seq data from the Heart Cell Atlas, autoencoders can efficiently compress and reconstruct gene expression profiles, aiding in the identification of cellular heterogeneity and underlying biological patterns in cardiac tissues.
> Generated by ChatGPT
In this notebook the whole preprocessing, training, and evaluation will take place.
***
## Loading Libraries
Library | Version | Channel
--- | --- | ---
anndata | 0.7.0 | bioconda?
PyTorch | 2.2.2 | pytorch
Torchvision | 0.17.2 | pytorch
Tensorboard | / | conda-forge
<code>
# Built-in libraries
from datetime import datetime
import os
import sys
# Third-party libraries
import anndata as ad
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as opt
from torch.utils.tensorboard import SummaryWriter
# Get the absolute path of the 'notebooks' directory
notebooks_dir = os.path.dirname(os.path.abspath("__file__"))
# Construct the path to the 'src' directory
src_path = os.path.abspath(os.path.join(notebooks_dir, "..", "src"))
# Add the 'src' directory to the Python path
if src_path not in sys.path:
    sys.path.append(src_path)
# Self-build modules
import autoencoder.ae_model as ae
import autoencoder.ae_training as T
import utils.data_utils as data_utils
</code>
## Hyperparameters
Load model structure and hyperparameters from JSON file.
<code>
file_path = "../config/autoencoder_test.json"
model_params = data_utils.import_model_params(file_path)
model_architecture = model_params["model"]
model_training = model_params["training"]
</code>
<code>
# Batch size
batch_size = model_training["batch_size"] # Power of 2 is optimized in many libraries
# Training
num_epochs = model_training["training_epochs"]
</code>
### Device Specification
The CUDA architecture from NVIDIA enables high-performance parallel computing on GPUs, optimizing tasks through concurrent execution and accelerating applications like deep learning and scientific simulations.
> Generated by ChatGPT
<code>
## Established the type of device used for model processing
device = "cuda" if torch.cuda.is_available() else "cpu"
cuda = True if device == "cuda" else False
</code>
## Loading Data
Annotated data in the h5ad format is widely used for efficiently storing and accessing large-scale single-cell RNA sequencing (scRNA-seq) datasets. Leveraging the anndata library, researchers can seamlessly manipulate and analyze these datasets, facilitating tasks such as data integration, preprocessing, and visualization, thereby enhancing insights into complex biological systems.
> Generated by ChatGPT
<code>
file_path = "../data/adata_normalized_sample.h5ad"
# file_path = "../data/adata_30kx10k_normalized_sample.h5ad"
adata = ad.read_h5ad(filename=file_path)
</code>
<code>
adata
</code>
<code>
count_data = adata.layers["min_max_normalized"]
</code>
<code>
count_data
</code>
## Data Split
Split data in training and testing data.
<code>
train_size = int(0.8 * count_data.shape[0])
test_size = count_data.shape[0] - train_size
torch.manual_seed(2406)
perm = torch.randperm(count_data.shape[0])
train_split, test_split = perm[:train_size], perm[train_size:]
</code>
<code>
# NOTE: SparseDataset is not imported above; it is presumably provided by one of the
# self-built modules (e.g. utils.data_utils) and must be imported for this cell to run.
train_data = SparseDataset(count_data[train_split, :])
test_data = SparseDataset(count_data[test_split, :])
</code>
<code>
# Create data loaders
train_loader = torch.utils.data.DataLoader(
train_data,
batch_size=batch_size,
shuffle=True,
)
test_loader = torch.utils.data.DataLoader(
test_data,
batch_size=batch_size,
shuffle=False,
)
</code>
<code>
train_loader.dataset
</code>
<code>
test_loader.dataset
</code>
## Model Structure
The **autoencoder** is comprised of two primary components: the **encoder** and the **decoder**. The encoder is responsible for reducing the dimensionality of the input tensor. The decoder, in turn, attempts to reconstruct the original input data from the reduced representation generated by the encoder.
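In this notebook the layer definitions are read from the JSON config; purely to illustrate the idea, a hand-written encoder/decoder pair (with made-up layer sizes) might look like the sketch below.
```python
# Illustration only: a hand-written encoder/decoder pair with made-up sizes.
# The actual architecture used in this notebook is loaded from the JSON config.
import torch.nn as nn

n_genes = 2000     # hypothetical input dimensionality (number of genes)
latent_dim = 32    # hypothetical bottleneck size

encoder = nn.Sequential(
    nn.Linear(n_genes, 256), nn.ReLU(),
    nn.Linear(256, latent_dim),
)
decoder = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, n_genes), nn.Sigmoid(),   # outputs in [0, 1] to match the min-max scaling
)
```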
<code>
encoder_layers, decoder_layers = data_utils.import_model_architecture(
    forward=model_architecture["layers"]["encoder"],
    backward=model_architecture["layers"]["decoder"],
)
loss_function = data_utils.import_loss_function(model_architecture["loss_function"])
model = ae.Autoencoder(encoder_layers, decoder_layers, loss_function=loss_function)
</code>
<code>
model
</code>
## Training
<code>
optimizer = data_utils.import_optimizer(
    model.parameters(),
    model_architecture["optimization"]["optimizer"],
    learning_rate=model_architecture["optimization"]["learning_rate"],
    weight_decay=model_architecture["optimization"]["weight_decay"],
)
writer = SummaryWriter(
f'../runs/hca/ae_{num_epochs}_{datetime.now().strftime("%Y%m%d-%H%M%S")}'
)
prev_updates = 0
for epoch in range(num_epochs):
    print(f"Epoch {epoch+1}/{num_epochs}")
    prev_updates = T.train(
        model, train_loader, optimizer, prev_updates, device, writer=writer
    )
    T.test(model, test_loader, prev_updates, device=device, writer=writer)
</code>
|
{
"filename": "ae_7.ipynb",
"repository": "CKolland/Research-Internship-SchulzLab",
"query": "transformed_from_existing",
"size": 16023,
"sha": ""
}
|
# S2.ipynb
Repository: yackermann/udemy-langchain
<code>
from dotenv import load_dotenv
load_dotenv(dotenv_path='.env')
</code>
# LLMs
<code>
from langchain.llms import OpenAI
llm = OpenAI()
llm.predict("How are you?")
</code>
<code>
from langchain.chat_models import ChatOpenAI
chat_model = ChatOpenAI()
chat_model.predict("How are you?")
chat_model.predict("What was my previous question?")
</code>
# Chains
<code>
from langchain.chains import ConversationChain
chain = ConversationChain(
llm=chat_model,
verbose=True
)
chain.run("How are you today?")
</code>
<code>
chain.run("What was my previous question?")
</code>
# Prompt Template
<code>
from langchain.prompts import PromptTemplate
template = """
Return all subcategories of a given category.
Category: {category}
"""
prompt = PromptTemplate(
template=template,
input_variables=["category"],
)
from langchain.chains import LLMChain
llm_chain = LLMChain(
llm=llm,
prompt=prompt,
verbose=True,
)
llm_chain.run(category="Computer science")
</code>
<code>
from langchain.prompts import ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate, ChatMessagePromptTemplate
system_template = """
You are a helpful assistant who generates comma separated lists.
A user will only pass a category and you should generate a list of subcategories.
ONLY return comma separated values and nothing else!
"""
prompt = ChatPromptTemplate.from_messages([
SystemMessagePromptTemplate.from_template(system_template),
HumanMessagePromptTemplate.from_template("{category}"),
])
chain = LLMChain(
llm=llm,
prompt=prompt,
verbose=True,
)
chain.run("Machine learning")
</code>
# Output parser
<code>
from langchain.schema import BaseOutputParser
class CommaSeparatedParser(BaseOutputParser):
    def parse(self, text):
        output = text.strip().split(",")
        output = [x.strip() for x in output]
        return output
chain = LLMChain(
llm=llm,
prompt=prompt,
output_parser=CommaSeparatedParser(),
verbose=True,
)
chain.run("Machine learning")
</code>
<code>
input_list = [
{"category": "food"},
{"category": "country"},
{"category": "colours"},
]
response = chain.apply(input_list)
print(response)
</code>
# Simple Sequence
<code>
title_template = """
You are a writer
Given a subject, your job is to return a fun
title for a play.
Subject {subject}
Title:
"""
title_chain = LLMChain.from_string(
llm=llm,
template=title_template,
)
title_chain.run(subject="Machine learning")
</code>
<code>
synopsis_template = """
You are a writer
Given a title, write synopsis for a play.
Title: {title}
Synopsis:
"""
synopsis_chain = LLMChain.from_string(
llm=llm,
template=synopsis_template,
)
synopsis_chain.run(title="The Learning Machine: A Journey Through Artificial Intelligence")
</code>
<code>
from langchain.chains import SimpleSequentialChain
chain = SimpleSequentialChain(
chains=[title_chain, synopsis_chain],
verbose=True,
)
result = chain.run("Machine learning.")
</code>
|
{
"filename": "S2.ipynb",
"repository": "yackermann/udemy-langchain",
"query": "transformed_from_existing",
"size": 19982,
"sha": ""
}
|
# Abstract_notebook_final.ipynb
Repository: atlantisq/PolymerDay
<code>
import os
import pandas
import re
directory = os.getcwd()
print(directory)
pandas.set_option('display.max_rows', None)
pandas.set_option('display.max_columns', None)
pandas.set_option('display.width', None)
pandas.set_option('display.max_colwidth', None)
file = (os.path.join(directory, 'reg.xlsx'))
df = pandas.read_excel(file, sheet_name = 'Posters', names = ['Title','Filename','Presenter','AuthorList','Affil','Abst'], usecols = [15,16,17,18,19,20])
#df = df.loc[lambda df: df['Choice'] == 'Yes']
df['Filename'] = df['Filename'].fillna(0)
print(df)
</code>
<code>
def download_image(url, filename):
    # Importing required libraries
    import urllib.request
    # Adding information about user agent
    opener = urllib.request.build_opener()
    opener.addheaders = [('User-Agent','Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1941.0 Safari/537.36')]
    urllib.request.install_opener(opener)
    # Calling urlretrieve to download the resource to `filename`
    urllib.request.urlretrieve(url, filename)
</code>
<code>
for ind in df.index:
    url = df['Filename'][ind]
    if url != 0:
        ext = '.' + url[-3:]                     # file extension, e.g. '.jpg'
        file = str(ind) + ext
        filename = os.path.join(directory, 'toc', file)
        print(filename)
        download_image(url, filename)
        df['Filename'][ind] = filename
</code>
<code>
toc_dict = {}
directory = os.getcwd()
toc_directory = (os.path.join(directory, 'toc'))
print(toc_directory)
for filename in os.listdir(toc_directory):
    #print(filename)
    file = os.path.join(directory, 'toc', filename)
    key = os.path.splitext(filename)[0]
    toc_dict[key] = file
for key in toc_dict:
    print(key, toc_dict[key])
</code>
<code>
from docx import Document
from docx.shared import Inches
from docx.shared import Pt
from docx.enum.text import WD_PARAGRAPH_ALIGNMENT
from docx.shared import Pt
document = Document()
for ind in df.index:
    hd = document.add_heading(str(ind + 1) + '. ', level=1)
    hd.alignment = WD_PARAGRAPH_ALIGNMENT.CENTER
    hd.add_run(df['Title'][ind])
    p1 = document.add_paragraph()
    p1.alignment = WD_PARAGRAPH_ALIGNMENT.CENTER
    p1.paragraph_format.space_before = Pt(3)
    p1.paragraph_format.space_after = Pt(3)
    run = p1.add_run(df['AuthorList'][ind])
    run.font.size = Pt(11)
    p2 = document.add_paragraph()
    p2.alignment = WD_PARAGRAPH_ALIGNMENT.CENTER
    p1.paragraph_format.space_before = Pt(3)
    p1.paragraph_format.space_after = Pt(3)
    run = p2.add_run(df['Affil'][ind])
    run.font.size = Pt(10)
    abst = document.add_paragraph(df['Abst'][ind])
    abst.alignment = WD_PARAGRAPH_ALIGNMENT.JUSTIFY
    #file = str(ind) + '.jpg'
    #toc = (os.path.join(directory, 'toc', file ))
    #print(toc)
    file = df['Filename'][ind]
    if file != 0:
        toc_img = toc_dict[str(ind)]
        print(toc_img)
        img = document.add_picture(toc_img, width=Inches(4.5))
        last_paragraph = document.paragraphs[-1]
        last_paragraph.alignment = WD_PARAGRAPH_ALIGNMENT.CENTER
    p3 = document.add_page_break()
    #p2.alignment = None
document.save('Abst.docx')
</code>
|
{
"filename": "Abstract_notebook_final.ipynb",
"repository": "atlantisq/PolymerDay",
"query": "transformed_from_existing",
"size": 222197,
"sha": ""
}
|
# old_example_2.ipynb
Repository: saeyslab/ViVAE
# *ViVAE* and *ViScore* usage example
In this Jupyter notebook, we download a single-cell dataset from Zenodo, run basic pre-processing on it and make a simple 2-dimensional layout of the data using *ViVAE*.
*(It takes around 4 minutes to run this on an M1 MacBook Air with GPU acceleration via Metal.)*
<code>
import numpy as np, matplotlib.pyplot as plt
import ViVAE, ViScore, copy
</code>
### **1.** Import data
We start by importing a pre-processed [scRNA-seq dataset](https://singlecell.broadinstitute.org/single_cell/study/SCP162) from `./data` (make sure this has been downloaded via [Git LFS](https://docs.github.com/en/repositories/working-with-files/managing-large-files/installing-git-large-file-storage)).
For scRNA-seq datasets, we typically recommend using the first 50 components of the count data.
For cytometry data, use the pre-processed protein expression matrix (*i.e.* post-compensation, transformation and batch effect correction, or whichever applies to your use case).
In order to smooth (de-noise) the data, we use a k-NN graph: a pre-computed one is provided here, but a new one can be computed using the *PyNNDescent* nearest-neighbour search algorithm with `ViVAE.make_knn`.
**(If you want to use the *SQuadVAE* model without de-noising the input data, there is no need for the *k*-NNG.)**
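If you do need to build the *k*-NN graph yourself, the snippet below is a minimal sketch of doing so with *PyNNDescent* directly; the exact signature of `ViVAE.make_knn` is not shown in this notebook, so the parameters here are illustrative assumptions rather than the ViVAE API.
<code>
# A sketch only: building an approximate k-NN graph with PyNNDescent directly.
# ViVAE.make_knn wraps this idea; treat the parameter choices below as assumptions.
import numpy as np
from pynndescent import NNDescent

pc_example = np.load('./data/Shekhar_pc.npy', allow_pickle=True)   # the PCs loaded in the next cell
index = NNDescent(pc_example, n_neighbors=50, metric='euclidean', random_state=42)
knn_indices, knn_distances = index.neighbor_graph                  # each of shape (n_cells, 50)
</code>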
<code>
load = lambda dataset : [np.load(f'./data/{dataset}_{x}.npy', allow_pickle=True) for x in ['pc', 'knn', 'annot']]
pc, knn, annot = load('Shekhar')
</code>
### **2.** De-noise inputs
Nearest-neighbour-based de-noising of inputs (pre-smoothing) is done using the approximate *k*-NN graph computed earlier (see *Methods* section of the paper).
A single iteration with $\lambda$=1 and $k$=50 is applied here.
This is the default pre-smoothing set-up proposed for flow cytometry, CyTOF and scRNA-seq data.
If small populations are present, decrease $k$.
If working with less noisy (non-biological even?) data, experiment with smaller values for $\lambda$.
(For post-smoothing, if used, multiple iterations with $\lambda$ around 0.01 are recommended.
For a quantitative evaluation of how these settings work with your dataset, use *ViScore*!)
<code>
pc_s = ViVAE.smooth(x=pc, knn=knn, k=50, coef=1., n_iter=1)
</code>
### **3.** Train an *SQuadVAE* model
We train *SQuadVAE* (VAE with a quartet loss regularisation term) on the PCs.
<code>
model = ViVAE.ViVAE(full_dim=pc_s.shape[1], latent_dim=2)
</code>
<code>
model.fit(pc_s)
</code>
### **4.** Create embedding and plot it
The trained model is then used to create the lower-dimensional embedding of the dataset we trained on (or, alternatively, a similar enough or extended dataset).
*ViVAE* also has a plotting function that quickly visualises the embedding (or its first two components) with cell population annotation when available.
<code>
ld = model.transform(pc_s)
</code>
<code>
palette = [
'grey', '#1CE6FF', '#FF34FF', '#FF4A46', '#008941', '#006FA6', '#A30059', '#7A4900', '#dedb8c', '#63FFAC', '#B79762', '#004D43', '#8FB0FF', '#997D87',
'#5A0007', '#809693', '#1B4400', '#4FC601', '#3B5DFF', '#4A3B53', '#FF2F80', '#61615A', '#BA0900', '#6B7900', '#00C2A0', '#FFAA92', '#FF90C9', '#B903AA',
'#D16100', '#DDEFFF', '#000035', '#7B4F4B', '#A1C299', '#300018', '#0AA6D8', '#013349', '#00846F', '#372101', '#FFB500', '#C2FFED', '#A079BF', '#CC0744',
'#C0B9B2', '#C2FF99', '#001E09', '#00489C', '#6F0062', '#0CBD66', '#EEC3FF', '#456D75', '#B77B68', '#7A87A1', '#788D66', '#885578', '#FAD09F', '#FF8A9A',
'#D157A0', '#BEC459', '#456648', '#0086ED', '#886F4C', '#34362D', '#B4A8BD', '#00A6AA', '#452C2C', '#636375', '#A3C8C9', '#FF913F', '#938A81', '#575329',
'#00FECF', '#B05B6F', '#8CD0FF', '#3B9700', '#04F757', '#C8A1A1', '#1E6E00', '#7900D7', '#A77500', '#6367A9', '#A05837', '#6B002C', '#772600', '#D790FF',
'#9B9700', '#549E79', '#FFF69F', '#201625', '#72418F', '#BC23FF', '#99ADC0', '#3A2465', '#922329', '#5B4534', '#FDE8DC', '#404E55', '#0089A3', '#CB7E98',
'#A4E804', '#324E72', '#6A3A4C'
]
</code>
<code>
ViVAE.plot(proj=ld, annot=annot, unassigned='nan', figsize=(6,5), dpi=80, point_size=.01, title='Shekhar retina dataset embedding', palette=palette)
</code>
### **5.** Use structure-preservation metrics as unsupervised score
Using *ViScore*, we can calculate the local and global structure-preservation index ($S_{L}$ and $S_{G}$, respectively).
For reference, we can compare the *ViVAE* embedding with the first two PCs of the original data.
(To compare to alternative non-linear dimensionality reduction methods, use their resulting embeddings instead.)
<code>
score_vivae = ViScore.score(hd=pc, ld=ld)
score_pca = ViScore.score(hd=pc, ld=pc[:,range(2)])
</code>
<code>
print(f'ViVAE embedding scores\n\tLocal:\t{score_vivae["Sl"]:.3f}\n\tGlobal:\t{score_vivae["Sg"]:.3f}\nFirst 2 PCs scores\n\tLocal:\t{score_pca["Sl"]:.3f}\n\tGlobal:\t{score_pca["Sg"]:.3f}')
</code>
### **6.** Use supervised evaluation to describe population-wise embedding errors
*ViScore* can also help qualify and quantify the nature of embedding distortion as it pertains to any given population, to limit misinterpretation of dimensionality reduction.
<code>
nc_hd = ViScore.neighbourhood_composition(X=pc, pop='BC6', annot=annot, exclude='nan')
nc_ld = ViScore.neighbourhood_composition(X=ld, pop='BC6', annot=annot, exclude='nan')
</code>
<code>
palette_without_bc6 = copy.deepcopy(palette)
del palette_without_bc6[11]
</code>
<code>
plot_hd = ViScore.neighbourhood_composition_plot(nc=nc_hd, palette=palette_without_bc6)
plt.show(plot_hd)
</code>
<code>
plot_ld = ViScore.neighbourhood_composition_plot(nc=nc_ld, palette=palette_without_bc6)
plt.show(plot_ld)
</code>
|
{
"filename": "old_example_2.ipynb",
"repository": "saeyslab/ViVAE",
"query": "transformed_from_existing",
"size": 9621,
"sha": ""
}
|
# Analysis of negative control data.ipynb
Repository: vals/Blog
<code>
%pylab inline
import pandas as pd
import plotnine as p
p.theme_set(p.theme_classic())
</code>
## The effect of Poisson zeros on OLS regression results
In a [previous post](http://www.nxn.se/valent/2018/1/30/count-depth-variation-makes-poisson-scrna-seq-data-negative-binomial) I wrote about the Poisson distribution seeming like a good error model for scRNA-seq counts. This suggests using GLM with Poisson likelihood to analyse your data, as long as the offset due to count depth variation is taken into consideration.
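For concreteness, such a Poisson GLM with the log count depth as an offset could look like the following simulated sketch (using `statsmodels`; this is an illustration, not part of the original post):
<code>
# Simulated sketch (not from the original post): a Poisson GLM with the log
# total count per droplet as an offset, which absorbs count depth variation.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
total_count = rng.integers(1_000, 10_000, size=500)   # count depth per droplet
true_fraction = 1e-3                                   # gene's expression fraction
y = rng.poisson(true_fraction * total_count)           # observed counts for one gene

X = np.ones((len(y), 1))                               # intercept-only design
fit = sm.GLM(y, X, family=sm.families.Poisson(),
             offset=np.log(total_count)).fit()
print(np.exp(fit.params[0]))                           # recovers roughly 1e-3
</code>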
An alternative strategy could be to transform the counts to roughly normal, and perform analysis in that setting. This is effectively what the vast majority of studies do for unsupervised analysis: counts are transformed, then PCA is used to find a low-dimensional representation for further analysis such as clustering.
What if we try to adjust for the count depth variation in a supervised setting assuming Gaussian noise?
A huge benefit of assuming Gaussian noise is that linear regression has an extremely efficient solution, usually referred to as [OLS regression](https://en.wikipedia.org/wiki/Ordinary_least_squares). A couple of years ago I made a [simple Python package](https://github.com/Teichlab/NaiveDE), `NaiveDE`, to perform OLS regression on gene expression matrices. I don't recommend anyone use it for final analysis; indeed, I called it "Naive DE" because it is a baseline. Literally every other DE test will be better than it by design, in particular with regards to false positive P-values. (Well, maybe not [according to a recent study](https://www.nature.com/articles/nmeth.4612); the `NaiveDE` test should be equivalent to the t-test.) It is nevertheless convenient during exploratory analysis to iterate through models.
Alternative and null models are specified by [Patsy formulas](https://patsy.readthedocs.io/en/latest/overview.html), and significance is calculated with a [likelihood ratio test](https://en.wikipedia.org/wiki/Likelihood-ratio_test). A [Bonferroni corrected](https://en.wikipedia.org/wiki/Bonferroni_correction) version of the P-value is also reported.
For every gene $ g $ where we have a design matrix $ X $ and observed counts $ y_g $ we look at
$$
y_g \sim \mathcal{N}\left( \alpha^T_g X, \sigma^2_g \right).
$$
The weights $ \alpha $ are calculated by OLS, and $ \sigma^2 $ is reflected in the residual errors. For flexibility, intercept is optionally part of the design matrix.
### Negative control data
In the negative control 10X dataset from Svensson et al 2017, the only variation in observed expression should (in theory) be due technical effects, in particular the count depth variation. Here we are using 2,000 cells with 24,000 genes. The most common variance stabilizing transformation of scRNA-seq data is $ \log(Y + 1) $, so we will investigate how this affects regression.
If the gene counts are scaled per cells, we would want
$$
\log\left( \frac{y_g}{\text{counts}} \right) = \log(y_g) - 1.0 \cdot \log(\text{counts}) \sim \mathcal{N}(0, \sigma^2)
$$
We set up a model where the design matrix $ X $ have the log total counts, and an intercept. Ideally the weights for the log counts should be found to be 1, and the intercept 0. Note that in practice we are always using $ \log(y_g + 1) $.
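Before loading the data, here is a minimal per-gene sketch of that regression and the likelihood ratio test, assuming the Gaussian model above with design matrix $[1, \log(\text{total count})]$; it illustrates the idea and is not the `NaiveDE` implementation.
<code>
# Sketch only: Gaussian OLS fit plus likelihood ratio test for a single gene.
# This mirrors the idea described above; it is not the NaiveDE implementation.
import numpy as np
from scipy import stats

def lr_test_one_gene(y, log_total_count):
    """OLS fit of y on [1, log(total_count)] versus an intercept-only null model."""
    n = len(y)
    X_alt = np.column_stack([np.ones(n), log_total_count])
    X_null = np.ones((n, 1))

    def fit_rss(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return beta, resid @ resid

    beta_alt, rss_alt = fit_rss(X_alt)
    _, rss_null = fit_rss(X_null)

    # Gaussian likelihood ratio statistic; one extra parameter => 1 degree of freedom
    lr_stat = n * np.log(rss_null / rss_alt)
    pval = stats.chi2.sf(lr_stat, df=1)
    return {'Intercept': beta_alt[0], 'np.log(total_count)': beta_alt[1], 'pval': pval}
</code>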
<code>
long_counts = pd.read_csv('Svensson%2Fsvensson_long_counts.csv.gz')
</code>
<code>
counts = long_counts.pivot(index='cell', columns='gene', values='count').fillna(0)
</code>
<code>
counts = counts[counts.index.str.slice(0, 5) == '20311'].copy()
</code>
<code>
counts.shape
</code>
<code>
sample_info = pd.DataFrame(index=counts.index)
</code>
<code>
import NaiveDE
</code>
<code>
sample_info['total_count'] = counts.sum(1)
</code>
<code>
%%time
lr_results = NaiveDE.lr_tests(sample_info, np.log1p(counts.T),
alt_model='~ np.log(total_count) + 1',
null_model='~ 1')
</code>
<code>
lr_results.pval = lr_results.pval.clip(lower=lr_results.query('pval != 0')['pval'].min())
lr_results.qval = lr_results.qval.clip(lower=lr_results.query('qval != 0')['qval'].min())
</code>
The test produces a table with the weights from the alternative model and the hypothesis test results.
<code>
print(lr_results.sort_values('np.log(total_count)', ascending=False).head(25))
</code>
<code>
img = p.qplot('np.log(total_count)', '-np.log10(pval)', lr_results) + p.labs(title='Negative control data')
img.save('1.png', verbose=False)
img
</code>
In this plot `np.log(total_count)` does not refer to the value, but to the _weight_ for this variable. Each dot is a gene rather than a droplet. The P-value comes from comparing the model with one that does not consider the count depth.
The majority of genes are found to have total count weights much smaller than 1.
<code>
p.qplot('Intercept', 'np.log(total_count)', lr_results)
</code>
It turns out that lowly abundant genes have deflated total count slopes.
<code>
img = \
p.qplot(counts.sum(0).clip(lower=1), lr_results['np.log(total_count)'],
log='x') \
+ p.labs(x='Gene count across dataset', y='np.log(total_count)',
title='Negative control data')
img.save('2.png', verbose=False)
img
</code>
<code>
top_results = \
lr_results \
.sort_values('np.log(total_count)', ascending=False) \
.groupby(pd.cut(lr_results['np.log(total_count)'], 6)) \
.head(2)
print(top_results)
</code>
<code>
xx = np.linspace(np.log(sample_info.total_count.min()),
np.log(sample_info.total_count.max()))
</code>
<code>
def linres(gene):
yy = \
top_results.loc[gene, 'np.log(total_count)'] * xx \
+ top_results.loc[gene, 'Intercept']
yy = np.exp(yy)
return yy
</code>
<code>
plt.rcParams['axes.spines.top'] = False
plt.rcParams['axes.spines.right'] = False
</code>
We can look at a few examples of genes with different count depth weights
<code>
figsize(11, 3)
plt.subplot(141)
plt.loglog()
gene = 'ENSG00000198938'
plt.scatter(sample_info.total_count, counts[gene] + 1, c='k');
yy = linres(gene)
plt.plot(np.exp(xx), yy, c='w', lw=5)
plt.plot(np.exp(xx), yy, c='r', lw=3)
plt.title(gene)
plt.ylabel('Counts + 1')
plt.xlabel('Total counts')
ax = plt.gca()
plt.subplot(142, sharey=ax)
plt.loglog()
gene = 'ENSG00000197971'
plt.scatter(sample_info.total_count, counts[gene] + 1, c='k');
yy = linres(gene)
plt.plot(np.exp(xx), yy, c='w', lw=5)
plt.plot(np.exp(xx), yy, c='r', lw=3)
plt.title(gene)
plt.xlabel('Total counts')
plt.subplot(143, sharey=ax)
plt.loglog()
gene = 'ENSG00000167526'
plt.scatter(sample_info.total_count, counts[gene] + 1, c='k');
yy = linres(gene)
plt.plot(np.exp(xx), yy, c='w', lw=5)
plt.plot(np.exp(xx), yy, c='r', lw=3)
plt.title(gene)
plt.xlabel('Total counts')
plt.subplot(144, sharey=ax)
plt.loglog()
gene = 'ENSG00000008988'
plt.scatter(sample_info.total_count, counts[gene] + 1, c='k');
yy = linres(gene)
plt.plot(np.exp(xx), yy, c='w', lw=5)
plt.plot(np.exp(xx), yy, c='r', lw=3)
plt.title(gene)
plt.xlabel('Total counts')
ax.set_ylim(0.76, 75);
plt.tight_layout()
plt.savefig('3.png', bbox_inches='tight')
</code>
From this, it is clear that the increased number of observations at low count values, in particular 0, is responsible for the decrease in the total count weight.
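To see this effect in isolation, here is a small simulation (not part of the original post) that recomputes the same slope for genes with decreasing expression levels:
<code>
# Simulation sketch (not from the original post): the OLS slope of log1p(counts)
# on log(total_count) shrinks below 1 as expression drops and zeros dominate.
import numpy as np

rng = np.random.default_rng(1)
total = rng.integers(1_000, 10_000, size=2_000)

def depth_slope(expression_fraction):
    y = rng.poisson(expression_fraction * total)
    X = np.column_stack([np.ones(len(y)), np.log(total)])
    beta, *_ = np.linalg.lstsq(X, np.log1p(y), rcond=None)
    return beta[1]

for frac in [1e-2, 1e-3, 1e-4]:
    print(frac, round(depth_slope(frac), 2))
</code>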
|
{
"filename": "Analysis of negative control data.ipynb",
"repository": "vals/Blog",
"query": "transformed_from_existing",
"size": 161680,
"sha": ""
}
|
# Tutorial 5_Batch-learning on large-scale dataset_2.ipynb
Repository: Hgy1014/scAGDE
# Tutorial 5: Batch-learning on large-scale dataset
Here we will use the scATAC-seq dataset `10XBlood` as an example to illustrate how to train on large-scale scATAC-seq data with a batch-learning strategy in an end-to-end style.
## 1. Read and preprocess data
We first read the `.h5ad` data file using the [Scanpy](https://github.com/scverse/scanpy) package
<code>
import scanpy as sc
adata = sc.read_h5ad("data/10XBlood.h5ad")
</code>
We can use Scanpy to further filter the data. In our case, we skip this step because the loaded dataset has already been preprocessed. Some code for filtering is copied below for easy reference:
<code>
# sc.pp.filter_cells(adata, min_genes=100)
# min_cells = int(adata.shape[0] * 0.01)
# sc.pp.filter_genes(adata, min_cells=min_cells)
</code>
<code>
adata
</code>
## 2. Setup and train scAGDE model
Now we can initialize the trainer with the AnnData object, which ensures the model settings are in place for training.
We can set `outdir` to the directory path where we want to save the output files (mainly the model weights file).
`n_centroids` is the number of clusters in the dataset. If this is unknown, we can set `n_centroids=None`, in which case scAGDE will estimate the optimal cluster number for the initialization of its cluster layer. Here, we set `n_centroids=9`.
We can train scAGDE on a specified device by setting `gpu`. For example, train scAGDE on the CPU with `gpu=None` and on GPU #0 with `gpu="0"`.
To stop early once the model converges, we set `early_stopping=True` and `patience=50`, the number of epochs to wait for improvement before stopping.
<code>
import scAGDE
trainer = scAGDE.Trainer_scale(adata,outdir="output",n_centroids=9,gpu="1",early_stopping=True,patience=50)
</code>
Now we can train the scAGDE model in an end-to-end style. The pipeline behind `fit()` consists of three main stages:
1. scAGDE first trains a chromatin accessibility-based autoencoder to measure the importance of the peaks and select the key peaks. The number of selected peaks is 10,000 by default, or you can change it by setting `top_n`. In the meantime, the initial cell representations for cell graph construction are stored in `adata.obs[embed_init_key]`, which is `"latent_init"` by default.
2. scAGDE then constructs a cell graph and trains the GCN-based embedding model to extract essential structural information from both the count data and the cell graph.
3. scAGDE finally yields robust and discriminative cell embeddings, which are stored in `adata.obsm[embed_key]`, `"latent"` by default. scAGDE also supports an imputation task if `impute_key` is not None, and the imputed data will be stored in `adata.obsm[impute_key]`, `"impute"` by default.
scAGDE performs clustering on the final embeddings if `cluster_key` is not None, and the cluster assignments will be stored in `adata.obs[cluster_key]`, `"cluster"` by default. The number of clusters is the value of `n_centroids`, or the estimated cluster number if estimation was used.
<code>
embed_key = "latent"
adata = trainer.fit(topn=10000,embed_key=embed_key)
print(adata)
</code>
## 3. Visualization and evaluation
We can now use Scanpy to visualize our latent space.
<code>
sc.pp.neighbors(adata, use_rep="latent")
sc.tl.umap(adata, min_dist=0.2)
sc.pl.umap(adata,color=["celltype","cluster"])
</code>
We can evaluate the clustering performance with multiple metrics as below:
<code>
y = adata.obs["celltype"].astype("category").cat.codes.values
res = scAGDE.utils.cluster_report(y, adata.obs["cluster"].astype(int))
</code>
|
{
"filename": "Tutorial 5_Batch-learning on large-scale dataset_2.ipynb",
"repository": "Hgy1014/scAGDE",
"query": "transformed_from_existing",
"size": 303489,
"sha": ""
}
|
# publications.ipynb
Repository: xuesoso/xuesoso.github.io
# Publications markdown generator for academicpages
Takes a TSV of publications with metadata and converts them for use with [academicpages.github.io](academicpages.github.io). This is an interactive Jupyter notebook ([see more info here](http://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/what_is_jupyter.html)). The core python code is also in `publications.py`. Run either from the `markdown_generator` folder after replacing `publications.tsv` with one containing your data.
TODO: Make this work with BibTex and other databases of citations, rather than Stuart's non-standard TSV format and citation style.
## Data format
The TSV needs to have the following columns: pub_date, title, venue, excerpt, citation, site_url, and paper_url, with a header at the top.
- `excerpt` and `paper_url` can be blank, but the others must have values.
- `pub_date` must be formatted as YYYY-MM-DD.
- `url_slug` will be the descriptive part of the .md file and the permalink URL for the page about the paper. The .md file will be `YYYY-MM-DD-[url_slug].md` and the permalink will be `https://[yourdomain]/publications/YYYY-MM-DD-[url_slug]`
This is how the raw file looks (it doesn't look pretty, use a spreadsheet or other program to edit and create).
## Import pandas
We are using the very handy pandas library for dataframes.
<code>
import pandas as pd
import numpy as np
</code>
## Import TSV
Pandas makes this easy with the read_csv function. The original template reads a TSV with the separator specified as a tab (`\t`); here the data is stored in `publications.csv`, so a comma separator is used instead.
I found it important to put this data in a tab-separated values format, because there are a lot of commas in this kind of data and comma-separated values can get messed up. However, you can modify the import statement, as pandas also has read_excel(), read_json(), and others.
<code>
publications = pd.read_csv("publications.csv", sep=",", header=0)
publications
</code>
## Escape special characters
YAML is very picky about how it takes a valid string, so we are replacing single and double quotes (and ampersands) with their HTML encoded equivalents. This makes them look not so readable in raw format, but they are parsed and rendered nicely.
<code>
html_escape_table = {
"&": "&",
'"': """,
"'": "'"
}
def html_escape(text):
"""Produce entities within text."""
return "".join(html_escape_table.get(c,c) for c in text)
</code>
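For example, on a hypothetical citation string:
<code>
# Hypothetical input, just to show the escaping behaviour
print(html_escape('Smith & Jones, "A study of \'quotes\'"'))
# Smith &amp; Jones, &quot;A study of &#39;quotes&#39;&quot;
</code>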
## Creating the markdown files
This is where the heavy lifting is done. This loops through all the rows in the TSV dataframe, then starts to concatenate a big string (```md```) that contains the markdown for each type. It does the YAML metadata first, then does the description for the individual page.
<code>
import os
for row, item in publications.iterrows():
md_filename = str(item.pub_date) + "-" + item.url_slug + ".md"
html_filename = str(item.pub_date) + "-" + item.url_slug
year = item.pub_date
## YAML variables
md = "---\ntitle: \"" + item.title + '"\n'
md += """collection: publications"""
md += """\npermalink: /publication/""" + html_filename
if len(str(item.excerpt)) > 5:
md += "\nexcerpt: '" + html_escape(item.excerpt) + "'"
md += "\nyear: " + str(item.pub_date)
md += "\nvenue: '" + html_escape(item.venue) + "'"
split_paper_url = item.paper_url.split(';')
if len(str(split_paper_url[0])) > 5:
md += "\npaperurl: '" + split_paper_url[0] + "'"
if len(split_paper_url) > 1:
md += "\nbiorxiv_url: '" + split_paper_url[-1] + "'"
md += "\ncitation: '" + html_escape(item.citation) + "'"
md += "\n---"
## Markdown description for individual page
if len(str(item.excerpt)) > 5:
md += "\n" + html_escape(item.excerpt) + "\n"
if len(str(split_paper_url[0])) > 5 and item.venue != 'BioRxiv':
md += "\n[Article available here](" + split_paper_url[0] + ")\n"
if item.venue == 'BioRxiv':
md += "\n[Preprint available here](" + split_paper_url[0] + ")\n"
if len(split_paper_url) > 1:
md += "\n[Preprint available here](" + split_paper_url[-1] + ")\n"
# md += "\nRecommended citation: " + item.citation
md_filename = os.path.basename(md_filename)
with open("../_publications/" + md_filename, 'w') as f:
f.write(md)
</code>
These files are in the publications directory, one directory below where we're working from.
<code>
!ls ../_publications/
</code>
<code>
!cat ../_publications/2009-10-01-paper-title-number-1.md
</code>
|
{
"filename": "publications.ipynb",
"repository": "xuesoso/xuesoso.github.io",
"query": "transformed_from_existing",
"size": 18441,
"sha": ""
}
|
# p4.ipynb
Repository: satuelisa/DataScience
**Practical 4: Information visualization with plotly**
Now we are going to plot everything that, in the previous practical, looked like it needed graphing.
For the plots to be interactive with plotly in jupyter,
we first have to extract the data to be plotted, *without any personal information* about the students,
into separate CSV files that can be shared online.
We will keep only the columns that were analysed in the previous practical
and only the students of Moisés and Elisa, since we have not yet incorporated the grades of the others.
<code>
import pandas as pd
from numpy import NaN
d = pd.read_csv("casos.csv")
d['CF1op'] = d['1ra']
d.CF1op = [int(v) if str(v).isdigit() else NaN for v in d.CF1op]
d['CF2op'] = d['2da']
d.CF2op = [int(v) if str(v).isdigit() else NaN for v in d.CF2op]
d.ingreso = ["AD" + v[-2:] if "dic" in v else "EJ" + v[-2:] if "nero" in v else NaN for v in [str(v) for v in d.ingreso]]
d.ingreso = d.ingreso.replace('ADre', NaN)
d['inicio'] = ['enero' if 'EJ' in v else 'agosto' if 'AD' in v else NaN for v in [str(v) for v in d.ingreso]]
hrs = {"No tengo un trabajo": 0, \
"Menos de 10 horas": 5, \
"Entre 10 y 20 horas": 15, \
"Entre 20 y 40 horas": 30, \
"Más de 40 horas": 50}
d['hrsNum'] = [hrs.get(v, NaN) for v in d.hrsTrabajo]
d['sabePromedio'] = [s != 'nan' for s in [str(v) for v in d.prom]]
d['sabeCreditos'] = d.creditos == 'Tres.'
d['sabeHoras'] = d['corr'] == '30'
d['sabeAmbos'] = d.sabeCreditos & d.sabeHoras
d['noSabeCreditos'] = d.creditos != "Tres."
d['noSabeHoras'] = d['corr'] != '30'
d['noSabeNinguno'] = d.noSabeCreditos & d.noSabeHoras
viejos = ["segunda", "70-79", "80-89", "90-100", "2da", "3ra", \
"No creo aprobar en este semestre", "Estimo aprobar en segunda oportunidad", "Creo que aprobaré en segunda oportunidad"]
nuevos = [50, 75, 85, 95, 50, 20, 20, 50, 50] # the price of inconsistency is redundancy
nn = ['I', 'M', 'F']
for s in ["_ini", "_mcu", ""]:
d['e' + nn.pop(0)] = d["califEsp" + s].replace(viejos, nuevos)
d['fueAses'] = ["NA" if v is NaN else False if str(v) == "No" else True for v in d.asesorias]
d['cuantasTemas'] = [0 if len(t) < 1 else t.count(';') + 1 for t in [str(v) if v is not NaN else '' for v in d.temasGral]]
d['formasApoyo'] = [0 if len(t) < 1 else t.count(';') + 1 for t in [str(v) if v is not NaN else '' for v in d.apoyo]]
d['cuantosMedios'] = [0 if len(t) < 1 else t.count(';') + 1 for t in [str(v) if v is not NaN else '' for v in d.medios]]
d['analizable'] = [p in ['elisa', 'moi'] for p in d.profe]
pedazo = d.query('analizable')
keep = ['profe', 'grupo', 'sem', 'PE', 'CF1op', 'CF2op', 'ingreso', 'inicio', 'hrsNum', 'sabePromedio', \
'sabeCreditos', 'sabeHoras', 'sabeAmbos', 'noSabeCreditos', 'noSabeHoras', 'noSabeNinguno', \
'eI', 'eM', 'eF', 'fueAses', 'temas', 'cuantasTemas', 'formasApoyo', 'cuantosMedios', \
'hrsEstudio_ini', 'hrsEstudio_mcu', 'hrsEstudio']
extracto = pd.DataFrame(pedazo, columns = keep)
extracto.to_csv("graficar.csv")
</code>
<code>
python3 extract.py
wc -l graficar.csv
580 graficar.csv
wc -l casos.csv
1023 casos.csv
</code>
Now that graficar.csv file can be placed on the web,
which lets us plot it with plotly right here in the jupyter notebook.
What we do have to tell it is not to bother verifying the SSL certificate for HTTPS.
<code>
import plotly.plotly as py
import plotly.graph_objs as go
import pandas as pd
import ssl
if getattr(ssl, '_create_unverified_context', None):
ssl._create_default_https_context = ssl._create_unverified_context
d = pd.read_csv("https://elisa.dyndns-web.com/teaching/comp/datasci/graficar.csv")
datos = []
for sem in d.ingreso.unique():
if sem != NaN:
datos.append(go.Box(y = d.loc[d['ingreso'] == sem].CF1op, name = sem))
g = py.iplot(datos, filename='jupyter-semestre_primera')
print(g.embed_code)
</code>
<iframe id="igraph" scrolling="no" style="border:none;" seamless="seamless" src="https://plot.ly/~satuelisa/6.embed" height="525px" width="100%"></iframe>
The result, like all the others, is also available on __[a separate page](https://elisa.dyndns-web.com/teaching/comp/datasci/p4.html#f1)__ where you can interact with it directly.
The order of the semesters on the horizontal axis is not convincing.
There has to be a way to reorder them.
The trick is to reorder the data frame so that the semesters appear in the logical order:
first by year, and then with EJ coming before AD.
Let's first check which is the oldest and which is the most recent year to appear:
<code>
import pandas as pd
from numpy import NaN
import ssl
if getattr(ssl, '_create_unverified_context', None):
ssl._create_default_https_context = ssl._create_unverified_context
d = pd.read_csv("https://elisa.dyndns-web.com/teaching/comp/datasci/graficar.csv")
tmp = [NaN if v == 'nan' else v[2:] for v in [str(v) for v in d.ingreso]]
d['aIng'] = [NaN if v == 'nan' else '20' + v if int(v) < 20 else '19' + v for v in [str(v) for v in tmp]]
print(d.aIng.unique())
</code>
OK, we have the years. Now the semesters.
Let's use .1 for EJ and .2 for AD to get a numerical encoding.
That way it will be easy to sort.
<code>
import pandas as pd
from numpy import NaN
import ssl
if getattr(ssl, '_create_unverified_context', None):
ssl._create_default_https_context = ssl._create_unverified_context
d = pd.read_csv("https://elisa.dyndns-web.com/teaching/comp/datasci/graficar.csv")
tmp = [NaN if v == 'nan' else v[2:] for v in [str(v) for v in d.ingreso]]
d['aIng'] = [NaN if v == 'nan' else '20' + v if int(v) < 20 else '19' + v for v in [str(v) for v in tmp]]
d['sIng'] = [NaN if v == 'nan' else v[:2] for v in [str(v) for v in d.ingreso]]
d['saIng'] = [NaN if x is NaN else int(x) + (0.1 if y == "AD" else 0.2) for x, y in zip(d.aIng, d.sIng)]
print(d.saIng.unique())
</code>
<code>
import pandas as pd
from numpy import NaN
import ssl
import plotly.plotly as py
import plotly.graph_objs as go
if getattr(ssl, '_create_unverified_context', None):
ssl._create_default_https_context = ssl._create_unverified_context
d = pd.read_csv("https://elisa.dyndns-web.com/teaching/comp/datasci/graficar.csv")
tmp = [NaN if v == 'nan' else v[2:] for v in [str(v) for v in d.ingreso]]
d['aIng'] = [NaN if v == 'nan' else '20' + v if int(v) < 20 else '19' + v for v in [str(v) for v in tmp]]
d['sIng'] = [NaN if v == 'nan' else v[:2] for v in [str(v) for v in d.ingreso]]
d['saIng'] = [NaN if x is NaN else int(x) + (0.1 if y == "EJ" else 0.2) for x, y in zip(d.aIng, d.sIng)]
ordenado = d.sort_values(by = "saIng")
datos = []
for sem in ordenado.ingreso.unique():
if sem != NaN:
datos.append(go.Box(y = d.loc[d['ingreso'] == sem].CF1op, name = sem))
g = py.iplot(datos, filename='jupyter-semestre_primera_v2')
print(g.embed_code)
</code>
<iframe id="igraph" scrolling="no" style="border:none;" seamless="seamless" src="https://plot.ly/~satuelisa/8.embed" height="525px" width="100%"></iframe>
__[The result](https://elisa.dyndns-web.com/teaching/comp/datasci/p4.html#f2)__ is now what I was looking for.
At least with these students, the urban legend that those who start in January do worse does not hold.
No systematic alternating pattern is noticeable in the first-opportunity grades.
Let's do something similar by degree programme,
with boxplots of one colour for the 1st opportunity and another colour for the 2nd opportunity.
<code>
import pandas as pd
from numpy import NaN
import ssl
import plotly.plotly as py
import plotly.graph_objs as go
from random import randint
if getattr(ssl, '_create_unverified_context', None):
ssl._create_default_https_context = ssl._create_unverified_context
d = pd.read_csv("https://elisa.dyndns-web.com/teaching/comp/datasci/graficar.csv")
datos = []
rojo = 69
for car in d.PE.unique():
if car != NaN:
pedazo = d.loc[d['PE'] == car]
verde = {"color": 'rgb({:d}, {:d}, {:d})'.format(rojo, randint(170, 255), randint(0, 50))}
azul = {"color": 'rgb({:d}, {:d}, {:d})'.format(rojo, randint(0, 50), randint(170, 255))}
rojo = (rojo + 41) % 255
if pedazo.CF1op.count() > 0:
datos.append(go.Box(y = d.loc[d['PE'] == car].CF1op, name = str(car) + " 1ra", marker = verde))
if pedazo.CF2op.count() > 0:
datos.append(go.Box(y = d.loc[d['PE'] == car].CF2op, name = str(car) + " 2da", marker = azul))
g = py.iplot(datos, filename='jupyter-carrera')
print(g.embed_code)
</code>
<iframe id="igraph" scrolling="no" style="border:none;" seamless="seamless" src="https://plot.ly/~satuelisa/10.embed" height="525px" width="100%"></iframe>
In the __[result](https://elisa.dyndns-web.com/teaching/comp/datasci/p4.html#f3)__ it is a bit hard to compare the degree programmes because the first and second opportunities have very different variation.
Let's make another version, ordered first by opportunity and then by programme.
<code>
import pandas as pd
from numpy import NaN
import ssl
import plotly.plotly as py
import plotly.graph_objs as go
from random import randint
if getattr(ssl, '_create_unverified_context', None):
ssl._create_default_https_context = ssl._create_unverified_context
d = pd.read_csv("https://elisa.dyndns-web.com/teaching/comp/datasci/graficar.csv")
datos = []
colores = dict()
rojo = 69
for car in d.PE.unique():
if car != NaN:
pedazo = d.loc[d['PE'] == car]
verde = {"color": 'rgb({:d}, {:d}, {:d})'.format(rojo, randint(170, 255), randint(0, 50))}
azul = {"color": 'rgb({:d}, {:d}, {:d})'.format(rojo, randint(0, 50), randint(170, 255))}
rojo = (rojo + 41) % 255
if pedazo.CF1op.count() > 0:
datos.append(go.Box(y = d.loc[d['PE'] == car].CF1op, name = str(car) + " 1ra", marker = verde))
colores[car] = (verde, azul)
for car in d.PE.unique():
if car != NaN:
if car not in colores:
colores[car] = ({"color": 'rgb({:d}, {:d}, {:d})'.format(rojo, randint(170, 255), randint(0, 50))}, \
{"color": 'rgb({:d}, {:d}, {:d})'.format(rojo, randint(0, 50), randint(170, 255))})
rojo = (rojo + 41) % 255
(verde, azul) = colores[car]
pedazo = d.loc[d['PE'] == car]
if pedazo.CF2op.count() > 0:
datos.append(go.Box(y = d.loc[d['PE'] == car].CF2op, name = str(car) + " 2da", marker = azul))
g = py.iplot(datos, filename='jupyter-carrera_v2')
print(g.embed_code)
</code>
<iframe id="igraph" scrolling="no" style="border:none;" seamless="seamless" src="https://plot.ly/~satuelisa/12.embed" height="525px" width="100%"></iframe>
This __[result](https://elisa.dyndns-web.com/teaching/comp/datasci/p4.html#f4)__ is already easier to interpret.
Apparently Moisés's second-opportunity grades for the materials students are either 70 or nothing.
We were also going to plot how many hours they planned to study and how many hours they said they studied in the first and the second half of the semester.
<code>
import pandas as pd
from numpy import NaN
import ssl
import plotly.plotly as py
import plotly.graph_objs as go
if getattr(ssl, '_create_unverified_context', None):
ssl._create_default_https_context = ssl._create_unverified_context
d = pd.read_csv("https://elisa.dyndns-web.com/teaching/comp/datasci/graficar.csv")
datos = []
for hrs in d.hrsEstudio_ini.unique():
if hrs != NaN:
pedazo = d.loc[d['hrsEstudio_ini'] == hrs]
if pedazo.CF1op.count() > 0:
datos.append(go.Box(y = d.loc[d['hrsEstudio_ini'] == hrs].CF1op, name = hrs))
l = go.Layout(
title='Encuesta al inicio del semestre',
xaxis=dict(
title='Horas planeadas de estudio por semana',
titlefont=dict(
size=14,
color='#000000'
)
),
yaxis=dict(
title='Calificación en primera oportunidad',
titlefont=dict(
size=14,
color='#000000'
)
)
)
f = go.Figure(data = datos, layout = l)
g = py.iplot(f, filename='jupyter-hrs_ini')
print(g.embed_code)
datos = []
for hrs in d.hrsEstudio_mcu.unique():
if hrs != NaN:
pedazo = d.loc[d['hrsEstudio_mcu'] == hrs]
if pedazo.CF1op.count() > 0:
datos.append(go.Box(y = d.loc[d['hrsEstudio_mcu'] == hrs].CF1op, name = hrs))
l = go.Layout(
title='Encuesta después del examen de medio curso',
xaxis=dict(
title='Horas reportadas de estudio por semana',
titlefont=dict(
size=14,
color='#000000'
)
),
yaxis=dict(
title='Calificación en primera oportunidad',
titlefont=dict(
size=14,
color='#000000'
)
)
)
f = go.Figure(data = datos, layout = l)
g = py.iplot(f, filename='jupyter-hrs_mcu')
print(g.embed_code)
datos = []
for hrs in d.hrsEstudio.unique():
if hrs != NaN:
pedazo = d.loc[d['hrsEstudio'] == hrs]
if pedazo.CF1op.count() > 0:
datos.append(go.Box(y = d.loc[d['hrsEstudio'] == hrs].CF1op, name = hrs))
l = go.Layout(
title='Encuesta antes del examen de ordinario',
xaxis=dict(
title='Horas reportadas de estudio por semana en la segunda mitad',
titlefont=dict(
size=14,
color='#000000'
)
),
yaxis=dict(
title='Calificación en primera oportunidad',
titlefont=dict(
size=14,
color='#000000'
)
)
)
f = go.Figure(data = datos, layout = l)
g = py.iplot(f, filename='jupyter-hrs_ord')
print(g.embed_code)
</code>
<iframe id="igraph" scrolling="no" style="border:none;" seamless="seamless" src="https://plot.ly/~satuelisa/14.embed" height="525px" width="100%"></iframe>
<iframe id="igraph" scrolling="no" style="border:none;" seamless="seamless" src="https://plot.ly/~satuelisa/16.embed" height="525px" width="100%"></iframe>
<iframe id="igraph" scrolling="no" style="border:none;" seamless="seamless" src="https://plot.ly/~satuelisa/18.embed" height="525px" width="100%"></iframe>
In figures __[5](https://elisa.dyndns-web.com/teaching/comp/datasci/p4.html#f5)__, __[6](https://elisa.dyndns-web.com/teaching/comp/datasci/p4.html#f6)__ and __[7](https://elisa.dyndns-web.com/teaching/comp/datasci/p4.html#f7)__
the order and wording of the options vary from one figure to the next, and therefore so do the colours,
which makes comparison difficult. The labels also overlap.
Better to give them consistent labels, always in the same order and with the same colour.
<code>
import plotly.plotly as py
import plotly.graph_objs as go
import pandas as pd
from numpy import NaN
import ssl
if getattr(ssl, '_create_unverified_context', None):
ssl._create_default_https_context = ssl._create_unverified_context
d = pd.read_csv("https://elisa.dyndns-web.com/teaching/comp/datasci/graficar.csv")
d.hrsEstudio_ini = d.hrsEstudio_ini.replace("Menos de una hora", "< 1 h")
d.hrsEstudio_ini = d.hrsEstudio_ini.replace("Entre 1 y 2 horas", "1-2 h")
d.hrsEstudio_ini = d.hrsEstudio_ini.replace("Entre 2 y 3 horas", "2-3 h")
d.hrsEstudio_ini = d.hrsEstudio_ini.replace("Entre 3 y 5 horas", "3-5 h")
d.hrsEstudio_ini = d.hrsEstudio_ini.replace("Más de 5 horas", "> 5 h")
print(d.hrsEstudio_ini.unique())
d.hrsEstudio_mcu = d.hrsEstudio_mcu.replace("Una hora o menos", "< 1 h")
d.hrsEstudio_mcu = d.hrsEstudio_mcu.replace("Menos de una hora", "< 1 h") # hubo dos formas decir esencialmente lo mismo
d.hrsEstudio_mcu = d.hrsEstudio_mcu.replace("Entre una y dos horas", "1-2 h")
d.hrsEstudio_mcu = d.hrsEstudio_mcu.replace("Entre dos y tres horas", "2-3 h")
d.hrsEstudio_mcu = d.hrsEstudio_mcu.replace("Entre tres y cinco horas", "3-5 h")
d.hrsEstudio_mcu = d.hrsEstudio_mcu.replace("Más de cinco horas", "> 5 h")
print(d.hrsEstudio_mcu.unique()) # "Nada" should remain unchanged
d.hrsEstudio = d.hrsEstudio.replace("Menos de una hora", "< 1 h")
d.hrsEstudio = d.hrsEstudio.replace("Entre una y dos horas", "1-2 h")
d.hrsEstudio = d.hrsEstudio.replace("Entre dos y tres horas", "2-3 h")
d.hrsEstudio = d.hrsEstudio.replace("Entre tres y cinco horas", "3-5 h")
d.hrsEstudio = d.hrsEstudio.replace("Más de cinco horas", "> 5 h")
print(d.hrsEstudio.unique())
orden = ["Nada", "< 1 h", "1-2 h", "2-3 h", "3-5 h", "> 5 h"]
colores = dict()
r = 29
g = 41
b = 67
for o in orden:
colores[o] = {"color": 'rgb({:d}, {:d}, {:d})'.format(r, g, b)}
r = (r + 29) % 255
g = (g + 41) % 255
b = (b + 67) % 255
datos = []
for hrs in orden:
if hrs != NaN:
pedazo = d.loc[d['hrsEstudio_ini'] == hrs]
if pedazo.CF1op.count() > 0:
datos.append(go.Box(y = d.loc[d['hrsEstudio_ini'] == hrs].CF1op, name = hrs, marker = colores[hrs]))
l = go.Layout(
title='Encuesta al inicio del semestre',
xaxis=dict(
title='Horas planeadas de estudio por semana',
titlefont=dict(
size=14,
color='#000000'
)
),
yaxis=dict(
title='Calificación en primera oportunidad',
titlefont=dict(
size=14,
color='#000000'
)
)
)
f = go.Figure(data = datos, layout = l)
g = py.iplot(f, filename='jupyter-hrs_ini_v2')
print(g.embed_code)
datos = []
for hrs in orden:
if hrs != NaN:
pedazo = d.loc[d['hrsEstudio_mcu'] == hrs]
if pedazo.CF1op.count() > 0:
datos.append(go.Box(y = d.loc[d['hrsEstudio_mcu'] == hrs].CF1op, name = hrs, marker = colores[hrs]))
l = go.Layout(
title='Encuesta después del examen de medio curso',
xaxis=dict(
title='Horas reportadas de estudio por semana',
titlefont=dict(
size=14,
color='#000000'
)
),
yaxis=dict(
title='Calificación en primera oportunidad',
titlefont=dict(
size=14,
color='#000000'
)
)
)
f = go.Figure(data = datos, layout = l)
g = py.iplot(f, filename='jupyter-hrs_mcu_v2')
print(g.embed_code)
datos = []
for hrs in orden:
if hrs != NaN:
pedazo = d.loc[d['hrsEstudio'] == hrs]
if pedazo.CF1op.count() > 0:
datos.append(go.Box(y = d.loc[d['hrsEstudio'] == hrs].CF1op, name = hrs, marker = colores[hrs]))
l = go.Layout(
title='Encuesta antes del examen de ordinario',
xaxis=dict(
title='Horas reportadas de estudio por semana en la segunda mitad',
titlefont=dict(
size=14,
color='#000000'
)
),
yaxis=dict(
title='Calificación en primera oportunidad',
titlefont=dict(
size=14,
color='#000000'
)
)
)
f = go.Figure(data = datos, layout = l)
g = py.iplot(f, filename='jupyter-hrs_ord_v2')
print(g.embed_code)
</code>
<iframe id="igraph" scrolling="no" style="border:none;" seamless="seamless" src="https://plot.ly/~satuelisa/20.embed" height="525px" width="100%"></iframe>
<iframe id="igraph" scrolling="no" style="border:none;" seamless="seamless" src="https://plot.ly/~satuelisa/22.embed" height="525px" width="100%"></iframe>
<iframe id="igraph" scrolling="no" style="border:none;" seamless="seamless" src="https://plot.ly/~satuelisa/24.embed" height="525px" width="100%"></iframe>
Figures __[8](https://elisa.dyndns-web.com/teaching/comp/datasci/p4.html#f8)__, __[9](https://elisa.dyndns-web.com/teaching/comp/datasci/p4.html#f9)__ and __[10](https://elisa.dyndns-web.com/teaching/comp/datasci/p4.html#f10)__ are much easier to compare and interpret.
At the start nobody says they are not going to study. Then it turns out that some did not study, and on average they did worse than those who say they did. At mid-course it seems that having studied *something* already makes them come out a bit better, while for the ordinary exam it looks like studying more than one hour a week improves things even further.
We were also going to look at a scatter plot of the hours worked per week (quantified to the discrete levels that represented the answer options), since it showed a slight negative correlation.
<code>
import plotly.plotly as py
import plotly.graph_objs as go
import pandas as pd
import ssl
if getattr(ssl, '_create_unverified_context', None):
ssl._create_default_https_context = ssl._create_unverified_context
d = pd.read_csv("https://elisa.dyndns-web.com/teaching/comp/datasci/graficar.csv")
datos = [go.Scatter(x = d.hrsNum, y = d.CF1op, mode = 'markers')]
g = py.iplot(datos, filename='hrs_de_trabajo')
print(g.embed_code)
</code>
<iframe id="igraph" scrolling="no" style="border:none;" seamless="seamless" src="https://plot.ly/~satuelisa/26.embed" height="525px" width="100%"></iframe>
According to figure __[11](https://elisa.dyndns-web.com/teaching/comp/datasci/p4.html#f11)__ we cannot conclude anything.
This will have to be settled with the statistical tests.
What remained to be plotted for this practical (we will keep drawing later on) was the topics to reinforce and the support media they were interested in, in terms of how many they selected.
Let's graduate to violin plots, since we used box-and-whisker diagrams so much in the previous figures.
<code>
>>> d.temas.unique()
array([nan, 'Unos pocos.', 'Bastantes.',
'Creo tener que estudiarlos todos luego.',
'Solo continuar ampliando ', 'hacer mas facil el examen', 'Ninguno.'], dtype=object)
>>> d.temas.value_counts()
Unos pocos. 146
Bastantes. 93
Creo tener que estudiarlos todos luego. 28
Ninguno. 3
Solo continuar ampliando 1
hacer mas facil el examen 1
Name: temas, dtype: int64
</code>
It makes sense to concentrate on the three levels that have many responses: Pocos (few) / Bastantes (quite a few) / Todos (all).
<code>
import plotly.plotly as py
import plotly.graph_objs as go
import pandas as pd
import ssl
if getattr(ssl, '_create_unverified_context', None):
ssl._create_default_https_context = ssl._create_unverified_context
d = pd.read_csv("https://elisa.dyndns-web.com/teaching/comp/datasci/graficar.csv")
f = {"data": [{"type": 'violin', "y": d.loc[d['temas'] == "Unos pocos."].CF1op, \
"box": {"visible": True}, "line": {"color": 'black'}, "meanline": {"visible": True}, \
"fillcolor": '#8dd3c7',"opacity": 0.6,"x0": 'Pocos'}, \
{"type": 'violin', "y": d.loc[d['temas'] == "Bastantes."].CF1op, \
"box": {"visible": True}, "line": {"color": 'black'}, "meanline": {"visible": True}, \
"fillcolor": '#d38dc7',"opacity": 0.6,"x0": 'Bastantes'}, \
{"type": 'violin', "y": d.loc[d['temas'] == "Creo tener que estudiarlos todos luego."].CF1op, \
"box": {"visible": True}, "line": {"color": 'black'}, "meanline": {"visible": True}, \
"fillcolor": '#d3c78d',"opacity": 0.6,"x0": 'Todos'}],
"layout" : {
"title": "Cuántas temas de la unidad de aprendizaje tendrá que reforzar en el futuro",
"showlegend": False,
"yaxis": {
"title": "Calificación en primera oportunidad",
"zeroline": False,
}
}
}
g = py.iplot(f, filename = 'temas_pendientes', validate = False)
print(g.embed_code)
</code>
<iframe id="igraph" scrolling="no" style="border:none;" seamless="seamless" src="https://plot.ly/~satuelisa/28.embed" height="525px" width="100%"></iframe>
No big difference between those three cases is noticeable in figure __[12](https://elisa.dyndns-web.com/teaching/comp/datasci/p4.html#f12)__.
Now for how many topics in general they feel they still need to reinforce with support from the faculty.
Let's go with a scatter plot again.
We can in fact put the topics to reinforce, the number of forms of support, and the number of media they indicated they would be interested in using all in the same figure.
So that they do not overlap as much, let's apply a small horizontal offset to two of the three sets.
<code>
import plotly.plotly as py
import plotly.graph_objs as go
import pandas as pd
import ssl
if getattr(ssl, '_create_unverified_context', None):
ssl._create_default_https_context = ssl._create_unverified_context
d = pd.read_csv("https://elisa.dyndns-web.com/teaching/comp/datasci/graficar.csv")
m1 = {'size': 5, 'color': 'rgba(170, 0, 0, .7)'}
m2 = {'size': 6, 'color': 'rgba(0, 170, 0, .6)'}
m3 = {'size': 7, 'color': 'rgba(0, 0, 170, .5)'}
delta = 0.2
datos = [go.Scatter(x = d.cuantasTemas - delta, y = d.CF1op, mode = 'markers', marker = m1, name="Temas"),\
go.Scatter(x = d.formasApoyo, y = d.CF1op, mode = 'markers', marker = m2, name="Apoyos"), \
go.Scatter(x = d.cuantosMedios + delta, y = d.CF1op, mode = 'markers', marker = m3, name="Medios")]
f = {"data": datos,
"layout" : {
"title": "Interés en apoyo en temas generales",
"showlegend": True,
"xaxis": {
"title": "Número de opciones seleccionadas",
"zeroline": False,
},
"yaxis": {
"title": "Calificación en primera oportunidad",
"zeroline": False,
}
}
}
g = py.iplot(f, filename='apoyo')
print(g.embed_code)
</code>
<iframe id="igraph" scrolling="no" style="border:none;" seamless="seamless" src="https://plot.ly/~satuelisa/30.embed" height="525px" width="100%"></iframe>
The result is figure __[13](https://elisa.dyndns-web.com/teaching/comp/datasci/p4.html#f13)__, which tells us that people with every imaginable grade answer zero, mostly because now even those who did not respond show up as zeros due to how this was coded in the previous practical.
The only thing I can conclude from the plot is that it narrows a bit towards the larger numbers: those who choose six or more options already perform a bit worse than those who chose between one and five.
This could be tested with a statistical test in a future practical.
In your report, include at least three different types of plots and try to conclude something about your data.
|
{
"filename": "p4.ipynb",
"repository": "satuelisa/DataScience",
"query": "transformed_from_existing",
"size": 41765,
"sha": ""
}
|
# PathwayEnrichmentOfModules_3.ipynb
Repository: XiaYangLabOrg/SCING
<code>
library('enrichR')
library('tidyverse')
</code>
<code>
input_dir <- '../intermediate_data/'
gene_modules <- paste0(input_dir,'gene.membership.csv.gz')
</code>
<code>
modules <- read.table(gene_modules,sep=',',header=TRUE)
</code>
<code>
for (t in listEnrichrDbs()$libraryName){
print(t)
}
</code>
<code>
dbs <- c("GO_Biological_Process_2021",
'DisGeNet',
"Reactome_2016",
"BioCarta_2016",
"KEGG_2021_Human",
'GWAS_Catalog_2019'
)
</code>
<code>
# Run enrichR on every gene module with at least 5 genes and keep only the
# databases that return at least one enriched term
pathway_results <- list()
significant_mods <- c()
for(m in unique(modules$cluster_membership)){
module_genes <- modules[modules$cluster_membership == m,]$genes
print(length(module_genes))
if(length(module_genes) > 4){
enriched <- enrichr(module_genes, dbs)
for(database in names(enriched)){
database_results <- enriched[[database]]
if (dim(database_results)[1] > 0){
if (!(m %in% significant_mods)){
print(m)
pathway_results[[as.character(m)]] <- list()
significant_mods <- c(significant_mods, m)
}
database_results$module <- m
pathway_results[[as.character(m)]][[database]] <- database_results
}
}
}
}
</code>
<code>
# Split the "Overlap" column ("k/N") into the number of overlapping genes
# and the total number of genes in each pathway
for( m in names(pathway_results)){
print(m)
for (database in names(pathway_results[[m]])){
pathway_results[[m]][[database]]$module <- m
pathway_results[[m]][[database]]$database <- database
noverlap <- sapply(pathway_results[[m]][[database]]$Overlap, function(x) as.numeric(unlist(strsplit(x,"/"))[1]))
genes_in_pathway <- sapply(pathway_results[[m]][[database]]$Overlap, function(x) as.numeric(unlist(strsplit(x,"/"))[2]))
pathway_results[[m]][[database]]$noverlap <- noverlap
pathway_results[[m]][[database]]$genes_in_pathway <- genes_in_pathway
}
}
</code>
<code>
temp_list <- list()
for (m in names(pathway_results)){
temp_list[[m]] <- bind_rows(pathway_results[[m]])
}
# Benjamini-Hochberg adjustment of the enrichment p-values within each module
for (m in names(pathway_results)){
temp_list[[m]]$adj_p <- p.adjust(temp_list[[m]]$P.value, 'BH')
}
final_pathway_results <- bind_rows(temp_list)
</code>
<code>
# Keep pathways with adjusted p-value < 0.05 and more than 4 overlapping genes
final_pathway_results <- final_pathway_results[final_pathway_results$Adjusted.P.value < 0.05,]
final_pathway_results <- final_pathway_results[final_pathway_results$noverlap > 4,]
</code>
<code>
final_pathway_results <- final_pathway_results[,c('Term','Odds.Ratio','P.value','Combined.Score','Genes','module','database','noverlap','genes_in_pathway','adj_p')]
final_pathway_results <- final_pathway_results[,c('database','module','Term','P.value','adj_p','noverlap','genes_in_pathway','Odds.Ratio','Combined.Score','Genes')]
colnames(final_pathway_results) <- c('Database',
'Module',
'Pathway',
'pvalue',
'pvalue_adj',
'NumberOfOverlap',
'NumberOfGenesInPathway',
'Odds.Ratio',
'Combined.Score',
'Genes')
final_pathway_results <- final_pathway_results[order(final_pathway_results$pvalue_adj),]
</code>
<code>
for(m in sort(unique(final_pathway_results$Module))){
temp <- final_pathway_results[final_pathway_results$Module==m,]
print(m)
for(p in temp$Pathway){
print(p)
}
}
</code>
<code>
write.table(final_pathway_results,
'../intermediate_data/pathway.csv',
quote=FALSE,sep=',')
</code>
|
{
"filename": "PathwayEnrichmentOfModules_3.ipynb",
"repository": "XiaYangLabOrg/SCING",
"query": "transformed_from_existing",
"size": 176555,
"sha": ""
}
|
# data_wrangling_te_1.ipynb
Repository: xavier-orcutt/TrialTranslator-notebooks
# Flatiron Health mCRC: Data Wrangling Test Set
**OBJECTIVE: Create a dataframe of relevant variables for test cohort patients, which will be used to validate the machine learning survival models.**
**BACKGROUND: The 11 CSV Flatiron files will be cleaned in the exact same fashion for the test set patients as for the training set patients. For more information on the cleaning process refer to Notebook: Data Wrangling Training Set.**
**OUTLINE:**
1. **File cleaning for patients in training set**
2. **Merge files to create master test dataframe**
<code>
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import datetime as dt
</code>
<code>
# Function that returns number of rows and count of unique PatientIDs for a dataframe.
def row_ID(dataframe):
row = dataframe.shape[0]
ID = dataframe['PatientID'].nunique()
return row, ID
</code>
<code>
#Import test IDs saved from Data Wrangling Training Set file.
test_IDs = pd.read_csv('test_IDs.csv')
</code>
<code>
# Array of PatientIDs in the test set.
test_IDs = test_IDs['PatientID'].to_numpy()
</code>
<code>
len(test_IDs)
</code>
## Part 1: Data wrangling
**Relevant CSV files will be imported and processed. A file is considered processed when each row corresponds to a unique patient from the test set and each column is a relevant variable for mortality prognostication. The eligibility window for collecting variables is typically defined as -90 days to +30 days from the index date. The index date is the time of metastatic diagnosis. Plus 30 was selected as the upper bound of the eligibility window given that the median time to start of first-line treatment is about 30 days from metastatic diagnosis.**
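As a minimal illustrative sketch of such an eligibility-window filter (a hypothetical helper with assumed column names, not the notebook's actual implementation):
<code>
# Hypothetical sketch of a -90/+30 day eligibility window around the index date
# (metastatic diagnosis); not this notebook's actual implementation.
import pandas as pd

def window_filter(df, index_dates, date_col, lower_days=-90, upper_days=30):
    """Keep rows whose date_col falls within the eligibility window per patient."""
    merged = df.merge(index_dates[['PatientID', 'MetDiagnosisDate']], on='PatientID')
    delta = (pd.to_datetime(merged[date_col]) -
             pd.to_datetime(merged['MetDiagnosisDate'])).dt.days
    return merged[(delta >= lower_days) & (delta <= upper_days)]
</code>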
**The following 11 CSV files from Flatiron will be cleaned:**
1. **Demographics**
2. **Enhanced_MetastaticCRC**
3. **Enhanced_Mortality_V2**
4. **MedicationAdministration**
5. **Enhanced_MetCRCBiomarkers**
6. **Insurance**
7. **ECOG**
8. **Vitals**
9. **Labs**
10. **Diagnosis**
11. **SocialDeterminantsOfHealth**
### 1. Demographics
<code>
demographics = pd.read_csv('Demographics.csv')
</code>
<code>
demographics = demographics[demographics['PatientID'].isin(test_IDs)]
</code>
<code>
row_ID(demographics)
</code>
#### Race and Ethnicity
<code>
# If race value is 'Hispanic or Latino', code as unknown, otherwise value unchanged.
demographics['race'] = (
np.where(demographics['Race'] == 'Hispanic or Latino', 'unknown', demographics['Race'])
)
</code>
<code>
# Missing race value will be recoded as Unknown
demographics['race'] = demographics['race'].fillna('unknown')
</code>
<code>
demographics['race'].value_counts().sum()
</code>
<code>
# If race value is equal to 'Hispanic or Latino', code ethnicity as 'Hispanic or Latino', otherwise unchanged.
demographics['ethnicity'] = (
np.where(demographics['Race'] == 'Hispanic or Latino', 'hispanic_latino', demographics['Ethnicity'])
)
</code>
<code>
demographics['ethnicity'] = demographics['ethnicity'].fillna('unknown')
</code>
<code>
demographics['ethnicity'] = demographics['ethnicity'].replace({'Hispanic or Latino': 'hispanic_latino'})
</code>
<code>
demographics = demographics.drop(columns = ['Race', 'Ethnicity'])
</code>
#### BirthYear
<code>
enhanced_met = pd.read_csv('Enhanced_MetastaticCRC.csv')
</code>
<code>
demographics = pd.merge(demographics, enhanced_met[['PatientID', 'MetDiagnosisDate']], on = 'PatientID')
</code>
<code>
demographics.loc[:, 'MetDiagnosisDate'] = pd.to_datetime(demographics['MetDiagnosisDate'])
</code>
<code>
demographics.loc[:, 'age'] = demographics['MetDiagnosisDate'].dt.year - demographics['BirthYear']
</code>
<code>
demographics = demographics.drop(columns = ['BirthYear', 'MetDiagnosisDate'])
</code>
#### PracticeType
<code>
practice = pd.read_csv('Practice.csv')
</code>
<code>
practice = practice[practice['PatientID'].isin(test_IDs)]
</code>
<code>
row_ID(practice)
</code>
<code>
practice_unique_count = (
practice.groupby('PatientID')['PracticeType'].agg('nunique')
.to_frame()
.reset_index()
.rename(columns = {'PracticeType': 'n_type'})
)
</code>
<code>
practice_n = pd.merge(practice, practice_unique_count, on = 'PatientID')
</code>
<code>
practice_n['p_type'] = (
np.where(practice_n['n_type'] == 1, practice_n['PracticeType'], 'BOTH')
)
</code>
<code>
practice_n = (
practice_n.drop_duplicates(subset = ['PatientID'], keep = 'first')
.filter(items = ['PatientID', 'p_type'])
)
</code>
<code>
demographics = pd.merge(demographics, practice_n, on = 'PatientID')
</code>
#### Gender
<code>
# Impute missing gender as M, the most common value.
demographics['Gender'] = demographics['Gender'].fillna('M')
</code>
<code>
demographics = demographics.rename(columns = {'Gender': 'gender'})
</code>
#### State
<code>
# Group states into Census-Bureau regions
state_dict = {
'ME': 'northeast',
'NH': 'northeast',
'VT': 'northeast',
'MA': 'northeast',
'CT': 'northeast',
'RI': 'northeast',
'NY': 'northeast',
'NJ': 'northeast',
'PA': 'northeast',
'IL': 'midwest',
'IN': 'midwest',
'MI': 'midwest',
'OH': 'midwest',
'WI': 'midwest',
'IA': 'midwest',
'KS': 'midwest',
'MN': 'midwest',
'MO': 'midwest',
'NE': 'midwest',
'ND': 'midwest',
'SD': 'midwest',
'DE': 'south',
'FL': 'south',
'GA': 'south',
'MD': 'south',
'NC': 'south',
'SC': 'south',
'VA': 'south',
'DC': 'south',
'WV': 'south',
'AL': 'south',
'KY': 'south',
'MS': 'south',
'TN': 'south',
'AR': 'south',
'LA': 'south',
'OK': 'south',
'TX': 'south',
'AZ': 'west',
'CO': 'west',
'ID': 'west',
'MT': 'west',
'NV': 'west',
'NM': 'west',
'UT': 'west',
'WY': 'west',
'AK': 'west',
'CA': 'west',
'HI': 'west',
'OR': 'west',
'WA': 'west',
'PR': 'unknown'
}
demographics['region'] = demographics['State'].map(state_dict)
</code>
<code>
demographics['region'] = demographics['region'].fillna('unknown')
</code>
<code>
demographics = demographics.drop(columns = ['State'])
</code>
<code>
# Final test set demographics table.
demographics.sample(5)
</code>
<code>
%whos DataFrame
</code>
<code>
# Keep demographics and enhanced_met
del practice
del practice_n
del practice_unique_count
</code>
### 2. Enhanced_MetastaticCRC
<code>
enhanced_met = enhanced_met[enhanced_met['PatientID'].isin(test_IDs)]
</code>
<code>
row_ID(enhanced_met)
</code>
#### GroupStage
<code>
# Dictionary for regrouping stages
stage_dict = {
'0': '0',
'I': 'I',
'II': 'II',
'IIA': 'II',
'IIB': 'II',
'IIC': 'II',
'III': 'III',
'IIIA': 'III',
'IIIB': 'III',
'IIIC': 'III',
'IV': 'IV',
'IVA': 'IV',
'IVB': 'IV'
}
enhanced_met['stage'] = enhanced_met['GroupStage'].map(stage_dict)
</code>
<code>
enhanced_met['stage'] = enhanced_met['stage'].fillna('unknown')
</code>
<code>
enhanced_met = enhanced_met.drop(columns = ['GroupStage'])
</code>
#### CRCSite
**Refer to Diagnosis section for further cleaning of variable.**
#### MetDiagnosisDate
<code>
enhanced_met = enhanced_met.rename(columns = {'MetDiagnosisDate': 'met_date'})
</code>
<code>
enhanced_met.loc[:, 'met_date'] = pd.to_datetime(enhanced_met['met_date'])
</code>
<code>
enhanced_met.loc[:, 'met_year'] = enhanced_met['met_date'].dt.year
</code>
#### DiagnosisDate
<code>
enhanced_met = enhanced_met.rename(columns = {'DiagnosisDate': 'diagnosis_date'})
</code>
<code>
# Missing diagnosis_date will be replaced with met_date; other dates will be left untouched.
enhanced_met['diagnosis_date'] = (
np.where(enhanced_met['diagnosis_date'].isna(), enhanced_met['met_date'], enhanced_met['diagnosis_date'])
)
</code>
<code>
enhanced_met['diagnosis_date'] = pd.to_datetime(enhanced_met['diagnosis_date'])
</code>
#### Time from diagnosis date to metastatic date
<code>
enhanced_met.loc[:, 'delta_met_diagnosis'] = (enhanced_met['met_date'] - enhanced_met['diagnosis_date']).dt.days
</code>
<code>
enhanced_met.sample(5)
</code>
<code>
%whos DataFrame
</code>
### 3. Enhanced_Mortality_V2
<code>
mortality = pd.read_csv('Enhanced_Mortality_V2.csv')
</code>
<code>
mortality = mortality[mortality['PatientID'].isin(test_IDs)]
</code>
<code>
row_ID(mortality)
</code>
<code>
mortality = mortality.rename(columns = {'DateOfDeath': 'death_date'})
</code>
<code>
# For patients with year granularity, impute the middle of the year (i.e., July 1).
mortality['death_date'] = (
np.where(mortality['death_date'].str.len() == 4, mortality['death_date'] + '-07-01', mortality['death_date'])
)
</code>
<code>
# For patients with month granularity, impute 15th of the month.
mortality['death_date'] = (
np.where(mortality['death_date'].str.len() == 7, mortality['death_date'] + '-15', mortality['death_date'])
)
</code>
<code>
mortality['death_date'] = pd.to_datetime(mortality['death_date'])
</code>
#### Censoring
**For patients for whom a date of death is not known, the censor date can be defined either as the data cutoff date or as the last confirmed activity date. The last confirmed activity date is broadly defined as the last date at which there is evidence in the EHR that a patient is alive. Evidence of a record in at least one of the items listed below qualifies as patient-level confirmed activity (a short sketch follows this list):**
* **Visit: VisitDate**
* **Enhanced_MetCRC_Orals: StartDate or EndDate**
* **Enhanced_MetCRCBiomarkers: SpecimenCollectedDate**
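**As a minimal sketch of this rule (the patient IDs and dates below are made up for illustration only), the last confirmed activity date is simply the row-wise maximum over the candidate dates from each source; the cells that follow build those candidates from the actual tables.**
<code>
import pandas as pd

# Toy example: one candidate "last activity" date per source and patient.
candidates = pd.DataFrame({
    'PatientID': ['A', 'B'],
    'visit_max': pd.to_datetime(['2019-03-01', '2020-01-15']),
    'orals_max': pd.to_datetime(['2019-05-20', pd.NaT]),
    'biomarkers_max': pd.to_datetime([pd.NaT, '2019-12-01'])
})

# The last confirmed activity date is the row-wise maximum across the sources;
# NaT entries are skipped by DataFrame.max.
candidates['last_activity'] = (
    candidates[['visit_max', 'orals_max', 'biomarkers_max']].max(axis = 1)
)
candidates[['PatientID', 'last_activity']]
# Patient A: 2019-05-20; patient B: 2020-01-15.
</code>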
<code>
visit = pd.read_csv('Visit.csv')
telemedicine = pd.read_csv('Telemedicine.csv')
orals = pd.read_csv('Enhanced_MetCRC_Orals.csv')
biomarkers = pd.read_csv('Enhanced_MetCRCBiomarkers.csv')
</code>
##### Visit and Telemedicine
<code>
visit_tele = (
visit
.drop(columns = ['VisitType', 'IsVitalsVisit', 'IsTreatmentVisit', 'IsLabVisit'])
.append(telemedicine)
)
</code>
<code>
visit_tele.loc[:,'VisitDate'] = pd.to_datetime(visit_tele['VisitDate'])
</code>
<code>
# Select max VisitDate from combined Visit and Telemedicine table.
visit_tele_max = (
visit_tele
[visit_tele['PatientID'].isin(test_IDs)]
.groupby('PatientID')['VisitDate'].max()
.to_frame(name = 'visit_max')
.reset_index()
)
</code>
##### Orals
<code>
orals = orals[orals['PatientID'].isin(test_IDs)]
</code>
<code>
orals.loc[:, 'StartDate'] = pd.to_datetime(orals['StartDate'])
</code>
<code>
orals.loc[:, 'EndDate'] = pd.to_datetime(orals['EndDate'])
</code>
<code>
orals_max = (
orals
.assign(max_date = orals[['StartDate', 'EndDate']].max(axis = 1))
.groupby('PatientID')['max_date'].max()
.to_frame(name = 'orals_max')
.reset_index()
)
</code>
##### Biomarkers
<code>
biomarkers = biomarkers[biomarkers['PatientID'].isin(test_IDs)]
</code>
<code>
biomarkers.loc[:, 'SpecimenCollectedDate'] = pd.to_datetime(biomarkers['SpecimenCollectedDate'])
</code>
<code>
biomarkers_max = (
biomarkers
.groupby('PatientID')['SpecimenCollectedDate'].max()
.to_frame(name = 'biomarkers_max')
.reset_index()
)
</code>
##### Max date merge
<code>
last_activity = pd.merge(visit_tele_max, orals_max, on = 'PatientID', how = 'outer')
</code>
<code>
last_activity = pd.merge(last_activity, biomarkers_max, on = 'PatientID', how = 'outer')
</code>
<code>
row_ID(last_activity)
</code>
<code>
# Find max of each row.
last_activity = (
last_activity
.assign(last_activity = last_activity[['visit_max', 'orals_max', 'biomarkers_max']].max(axis = 1))
.filter(items = ['PatientID', 'last_activity'])
)
</code>
<code>
# Append missing test IDs.
mortality = (
mortality
.append(
pd.Series(test_IDs)[~pd.Series(test_IDs).isin(mortality['PatientID'])].to_frame(name = 'PatientID'),
sort = False
)
)
</code>
<code>
row_ID(mortality)
</code>
<code>
mortality = pd.merge(mortality, enhanced_met[['PatientID', 'met_date']], on = 'PatientID')
</code>
<code>
mortality = pd.merge(mortality, last_activity, on = 'PatientID')
</code>
<code>
mortality.loc[:, 'death_status'] = np.where(mortality['death_date'].isna(), 0, 1)
</code>
<code>
# timerisk_activity is time from advanced disease diagnosis to death or last activity if no death date.
mortality.loc[:, 'timerisk_activity'] = (
np.where(mortality['death_date'].isna(),
(mortality['last_activity'] - mortality['met_date']).dt.days,
(mortality['death_date'] - mortality['met_date']).dt.days)
)
</code>
<code>
# If timerisk_activity is less than 0, set to 0; otherwise it remains unchanged.
mortality['timerisk_activity'] = np.where(mortality['timerisk_activity'] < 0, 0, mortality['timerisk_activity'])
</code>
<code>
mortality.sample(5)
</code>
<code>
mortality = pd.merge(mortality, enhanced_met[['PatientID', 'diagnosis_date']], on = 'PatientID', how = 'outer')
</code>
<code>
# timerisk_activity_first is time from first diagnosis (metastatic or not) to death or last activity if no death date.
mortality.loc[:, 'timerisk_activity_first'] = (
np.where(mortality['death_date'].isna(),
(mortality['last_activity'] - mortality['diagnosis_date']).dt.days,
(mortality['death_date'] - mortality['diagnosis_date']).dt.days)
)
</code>
<code>
# If timerisk_activity_first is less than 0, set to 0; otherwise it remains unchanged.
mortality['timerisk_activity_first'] = np.where(
mortality['timerisk_activity_first'] < 0, 0, mortality['timerisk_activity_first'])
</code>
<code>
mortality.to_csv('mortality_cleaned_te.csv', index = False, header = True)
</code>
<code>
mortality = mortality.filter(items = ['PatientID', 'death_status', 'timerisk_activity'])
</code>
<code>
mortality.sample(5)
</code>
<code>
%whos DataFrame
</code>
<code>
# Keep demographics, enhanced_met, and mortality
del biomarkers
del biomarkers_max
del last_activity
del orals
del orals_max
del telemedicine
del visit
del visit_tele
del visit_tele_max
</code>
### 4. MedicationAdministration
<code>
med_admin = pd.read_csv('MedicationAdministration.csv')
</code>
<code>
med_admin = med_admin[med_admin['PatientID'].isin(test_IDs)]
</code>
<code>
row_ID(med_admin)
</code>
<code>
med_admin.shape
</code>
**An indicator variable will be created for key medications (e.g., steroids, opioids, other pain medications, antibiotics, anticoagulation, diabetic medications) around the time of metastatic diagnosis. The eligibility window runs from 90 days before metastatic diagnosis to the start of first-line therapy or to +30 days after diagnosis, whichever comes first. First-line therapy is included as an upper bound because steroids are frequently administered to treat chemotherapy-induced nausea, so steroids might inadvertently capture chemotherapy treatment if the upper bound were set after first-line therapy.**
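**Before applying this rule to the actual tables below, here is a minimal sketch of the upper-bound logic on made-up dates (column names mirror the cells that follow; the values are illustrative only).**
<code>
import numpy as np
import pandas as pd

# Toy example: met_date is the metastatic diagnosis date and StartDate is the
# start of first-line therapy (NaT if no treatment was received).
toy = pd.DataFrame({
    'met_date': pd.to_datetime(['2019-01-01', '2019-01-01', '2019-01-01']),
    'StartDate': pd.to_datetime([pd.NaT, '2019-03-01', '2019-01-20'])
})

days_to_tx = (toy['StartDate'] - toy['met_date']).dt.days

# Upper bound: +30 days if untreated or treated more than 30 days out;
# otherwise one day before first-line therapy starts.
conditions = [toy['StartDate'].isna() | (days_to_tx > 30), days_to_tx <= 30]
choices = [30, days_to_tx - 1]
toy['upper_bound'] = np.select(conditions, choices)
toy
# Expected upper bounds: 30, 30, and 18 days.
</code>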
<code>
line_therapy = pd.read_csv('LineOfTherapy.csv')
</code>
<code>
line_therapy = line_therapy[line_therapy['PatientID'].isin(med_admin['PatientID'])]
</code>
<code>
line_therapy_1 = (
line_therapy
.query('LineNumber == 1 and IsMaintenanceTherapy == False')
)
</code>
<code>
# If a patient has two first-line therapies, select the earliest.
line_therapy_1 = line_therapy_1.drop_duplicates(subset = ['PatientID'], keep = 'first')
</code>
<code>
med_admin = pd.merge(med_admin, line_therapy_1[['PatientID', 'StartDate']], on = 'PatientID', how = 'left')
</code>
<code>
med_admin = pd.merge(med_admin, enhanced_met[['PatientID', 'met_date']], on = 'PatientID', how = 'left')
</code>
<code>
med_admin.loc[:, 'AdministeredDate'] = pd.to_datetime(med_admin['AdministeredDate'])
</code>
<code>
med_admin.loc[:, 'StartDate'] = pd.to_datetime(med_admin['StartDate'])
</code>
<code>
med_admin['AdministeredDate'].isna().sum()
</code>
<code>
# New variable upper_bound defines the upper bound of the eligibility window:
# If there is no StartDate (i.e., no treatment received), the upper bound is +30 days from metastatic diagnosis.
# If StartDate is more than 30 days after metastatic diagnosis, the upper bound is +30 days from metastatic diagnosis.
# If StartDate is 30 days or fewer from metastatic diagnosis, the upper bound is one day before StartDate.
conditions = [
(med_admin['StartDate'].isna()) | ((med_admin['StartDate'] - med_admin['met_date']).dt.days > 30),
((med_admin['StartDate'] - med_admin['met_date']).dt.days <= 30)]
choices = [30, (med_admin['StartDate'] - med_admin['met_date']).dt.days - 1]
med_admin.loc[:, 'upper_bound'] = np.select(conditions, choices)
</code>
<code>
med_admin.loc[:, 'upper_bound_date'] = (
np.where(med_admin['upper_bound'] != 30,
med_admin['StartDate'] - pd.DateOffset(days = 1),
med_admin['met_date'] + pd.DateOffset(days = 30))
)
</code>
<code>
# Select the window from -90 days before metastatic diagnosis up to the upper bound, and remove clinical study drugs.
med_admin_win = (
med_admin
[((med_admin['AdministeredDate'] - med_admin['met_date']).dt.days >= -90) &
(med_admin['AdministeredDate'] <= med_admin['upper_bound_date']) &
(med_admin['CommonDrugName'] != 'Clinical study drug')]
)
</code>
<code>
row_ID(med_admin_win)
</code>
<code>
med_admin_win.DrugCategory.value_counts()
</code>
#### Antineoplastic
<code>
# Select window before date of metastatic diagnosis.
med_admin_win_chemo = (
med_admin
[med_admin['AdministeredDate'] < med_admin['met_date']]
)
</code>
<code>
(
med_admin_win_chemo.query('DrugCategory == "antineoplastic"').DrugName.value_counts().head(20)
)
</code>
<code>
med_admin_win_chemo.loc[:, 'adjuv'] = (
np.where((med_admin_win_chemo['DrugName'] == 'fluorouracil') |
(med_admin_win_chemo['DrugName'] == 'oxaliplatin') |
(med_admin_win_chemo['DrugName'] == 'capecitabine'), 1, 0)
)
</code>
<code>
med_admin_adjuv = (
med_admin_win_chemo
.query('adjuv == 1')
.drop_duplicates(subset = ['PatientID'], keep = 'first')
[['PatientID', 'adjuv']]
)
</code>
#### Antiemetic
**No indicator variable created.**
#### Solution-fluid
**No indicator variable created.**
#### Steroid
<code>
med_admin_win.loc[:, 'steroid_diag'] = (
np.where((med_admin_win['DrugCategory'] == 'steroid') &
((med_admin_win['Route'] == 'Intravenous') |
(med_admin_win['Route'] == 'Oral') |
(med_admin_win['Route'] == 'Intrajejunal')), 1, 0)
)
</code>
#### Pain
##### Opioid PO
<code>
# List of available opioids in the US.
opioid_list = [
'buprenorphine',
'codeine',
'fentanyl',
'hydrocodone',
'hydromorphone',
'methadone',
'morphine',
'oxycodone',
'oxymorphone',
'tapentadol',
'tramadol'
]
</code>
<code>
med_admin_win.loc[:, 'opioid_PO_diag'] = (
np.where(((med_admin_win['Route'] == 'Oral') |
(med_admin_win['Route'] == 'Transdermal') |
(med_admin_win['Route'] == 'Sublingual')) &
(med_admin_win['CommonDrugName'].str.contains('|'.join(opioid_list))), 1, 0)
)
</code>
##### Nonopioid PO
<code>
med_admin_win.loc[:, 'nonopioid_PO_diag'] = (
np.where((med_admin_win['DrugCategory'] == 'pain agent') &
(med_admin_win['Route'] == 'Oral') &
(~med_admin_win['CommonDrugName'].str.contains('|'.join(opioid_list))), 1, 0)
)
</code>
##### Pain IV
<code>
med_admin_win.loc[:, 'pain_IV_diag'] = (
np.where((med_admin_win['DrugCategory'] == 'pain agent') &
(med_admin_win['Route'] == 'Intravenous'), 1, 0)
)
</code>
#### Hematologic agent
##### Heparin and other parenteral agents
<code>
med_admin_win.loc[:, 'heparin_diag'] = (
np.where(((med_admin_win['CommonDrugName'].str.contains('heparin')) &
(med_admin_win['AdministeredUnits'] == 'unit/kg/hr')) |
(med_admin_win['CommonDrugName'].str.contains('bivalirudin')) |
(med_admin_win['CommonDrugName'].str.contains('argatroban')), 1, 0)
)
</code>
###### Enoxaparin and other subcutaneous agents
<code>
med_admin_win.loc[:, 'enoxaparin_diag'] = (
np.where(((med_admin_win['CommonDrugName'].str.contains('enoxaparin')) &
(med_admin_win['AdministeredAmount'] > 40)) |
((med_admin_win['CommonDrugName'].str.contains('dalteparin')) &
(med_admin_win['AdministeredAmount'] > 5000)) |
((med_admin_win['CommonDrugName'].str.contains('fondaparinux')) &
(med_admin_win['AdministeredAmount'] > 2.5)), 1, 0)
)
</code>
##### DOAC
<code>
med_admin_win.loc[:, 'doac_diag'] = (
np.where((med_admin_win['CommonDrugName'].str.contains('apixaban')) |
(med_admin_win['CommonDrugName'].str.contains('rivaroxaban')) |
(med_admin_win['CommonDrugName'].str.contains('dabigatran')) |
(med_admin_win['CommonDrugName'].str.contains('edoxaban')), 1, 0)
)
</code>
##### Warfarin
<code>
med_admin_win.loc[:, 'warfarin_diag'] = np.where((med_admin_win['CommonDrugName'].str.contains('warfarin')), 1, 0)
</code>
##### Anticoagulation merge
<code>
# Combine heparin, enoxaparin, DOAC, and warfarin columns into a single anticoagulation indicator variable.
med_admin_win['ac_diag'] = (
med_admin_win['heparin_diag'] + med_admin_win['enoxaparin_diag'] + med_admin_win['doac_diag'] + med_admin_win['warfarin_diag']
)
</code>
<code>
# Drop heparin, enoxaparin, DOAC, and warfarin columns.
med_admin_win = med_admin_win.drop(columns = ['heparin_diag', 'enoxaparin_diag', 'doac_diag', 'warfarin_diag'])
</code>
#### Anti-infective
##### Anti-infective IV
<code>
med_admin_win.loc[:, 'antiinfective_IV_diag'] = (
np.where((med_admin_win['DrugCategory'] == 'anti-infective') &
(med_admin_win['Route'] == 'Intravenous'), 1, 0)
)
</code>
##### Anti-infective PO
<code>
med_admin_win.loc[:, 'antiinfective_diag'] = (
np.where((med_admin_win['DrugCategory'] == 'anti-infective') &
(med_admin_win['Route'] == 'Oral'), 1, 0)
)
</code>
#### Anesthetic
**No indicator variable created.**
#### Cytoprotective
**No indicator variable created.**
#### Antihyperglycemic
<code>
med_admin_win.loc[:, 'antihyperglycemic_diag'] = np.where(med_admin_win['DrugCategory'] == 'antihyperglycemic', 1, 0)
</code>
#### Proton pump inhibitor
<code>
med_admin_win.loc[:, 'ppi_diag'] = np.where(med_admin_win['DrugCategory'] == 'proton pump inhibitor', 1, 0)
</code>
#### Antidepressant
<code>
med_admin_win.loc[:, 'antidepressant_diag'] = np.where(med_admin_win['DrugCategory'] == 'antidepressant', 1, 0)
</code>
#### Bone therapy agent
<code>
med_admin_win.loc[:, 'bta_diag'] = np.where(med_admin_win['DrugCategory'] == 'bone therapy agent (bta)', 1, 0)
</code>
#### Hormone
<code>
med_admin_win.loc[:, 'thyroid_diag'] = np.where(med_admin_win['CommonDrugName'] == 'levothyroxine', 1, 0)
</code>
#### Gout and hyperuricemia agent
**No indicator variable created.**
#### Immunosuppressive
<code>
med_admin_win.loc[:, 'is_diag'] = np.where(med_admin_win['DrugCategory'] == 'immunosuppressive', 1, 0)
</code>
#### Sedative agent
**No indicator variable created.**
#### Endocrine
**No indicator variable created.**
#### Antidote and reversal agent
**No indicator variable created.**
#### Hyperglycemic
**No indicator variable created.**
#### Antithyroid agent
**No indicator variable created.**
#### Anticholinergic
**No indicator variable created.**
#### Calcimimetic
**No indicator variable created.**
#### Targeted therapy
**No indicator variable created.**
#### Condensing
<code>
# Select columns with indicator variables and PatientID, then collapse rows by PatientID and sum columns.
med_admin_wide = (
med_admin_win
[med_admin_win.columns[med_admin_win.columns.str.contains('diag|PatientID')]]
.groupby('PatientID').sum()
)
</code>
<code>
# Replace numbers greater than 1 with 1; 0 remains unchanged.
med_admin_wide = (
med_admin_wide.mask(med_admin_wide > 1, 1)
.reset_index()
)
</code>
<code>
row_ID(med_admin_wide)
</code>
<code>
# Append missing test IDs.
med_admin_wide = (
med_admin_wide.append(
pd.Series(test_IDs)[~pd.Series(test_IDs).isin(med_admin_wide['PatientID'])].to_frame(name = 'PatientID'),
sort = False
)
.fillna(0)
)
</code>
<code>
row_ID(med_admin_wide)
</code>
<code>
med_admin_wide = pd.merge(med_admin_wide, med_admin_adjuv, on = 'PatientID', how = 'left').fillna(0)
</code>
<code>
%whos DataFrame
</code>
<code>
# Keep demographics, enhanced_met, med_admin_wide, and mortality
del line_therapy
del line_therapy_1
del med_admin
del med_admin_adjuv
del med_admin_win
del med_admin_win_chemo
</code>
### 5. Enhanced_MetCRCBiomarkers
<code>
biomarkers = pd.read_csv('Enhanced_MetCRCBiomarkers.csv')
</code>
<code>
biomarkers = biomarkers[biomarkers['PatientID'].isin(test_IDs)]
</code>
<code>
biomarkers.shape
</code>
<code>
biomarkers.loc[:, 'ResultDate'] = pd.to_datetime(biomarkers['ResultDate'])
</code>
**The Biomarkers dataframe is in a long format. The goal is to build a single-row-per-patient dataframe with columns reflecting a patient's biomarker status within a predefined eligibility window. For this project, the eligibility window is defined as negative infinity to +30 days from the time of diagnosis of metastatic disease (i.e., the index date).**
**Regarding biomarker date information, result date is the date the biomarker result was first reported, and so represents the date on which the clinician would be expected to have information about the patient’s biomarker status to inform the course of treatment. Flatiron recommends using result date as the relevant biomarker test date and using specimen received date as the proxy when result date is not available. The gaps between collected date and either received or result date are substantially more variable.**
**We'll begin by imputing the specimen received date when the result date is missing. Then, we'll select all biomarkers that fall within the eligibility window.**
<code>
biomarkers.loc[:, 'SpecimenReceivedDate'] = pd.to_datetime(biomarkers['SpecimenReceivedDate'])
</code>
<code>
# Replace missing result date with specimen received date.
biomarkers.loc[:, 'result_date'] = (
np.where(biomarkers['ResultDate'].isna(), biomarkers['SpecimenReceivedDate'], biomarkers['ResultDate'])
)
</code>
<code>
biomarkers = pd.merge(biomarkers, enhanced_met[['PatientID', 'met_date']], on = 'PatientID', how = 'left')
</code>
<code>
# Create new variable that captures difference in days between result date and metastatic diagnosis.
biomarkers.loc[:, 'bio_date_diff'] = (biomarkers['result_date'] - biomarkers['met_date']).dt.days
</code>
<code>
# Select all biomarker results up to +30 days from metastatic diagnosis (no lower bound).
biomarker_win = biomarkers[biomarkers['bio_date_diff'] <= 30]
</code>
**The next step is defining positive and negative status for each biomarker. Patients with at least one confirmed positive test result for the biomarker of interest within the eligibility window will be considered “ever-positive”. This includes patients who may have confirmed negative results before and/or after a positive result within the eligibility window. A patient with an "ever-positive" biomarker during the eligibility window will have that biomarker labeled as positive.**
**In contrast, patients with at least one confirmed negative test result for the biomarker of interest, and no confirmed positive test results for the same biomarker within the eligibility window, may be considered “only-negative”. A patient with an "only-negative" biomarker during the eligibility window will have that biomarker labeled as negative.**
**Lastly, if the biomarker is neither positive nor negative, it will be labeled as unknown.**
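**The cells below encode each result as 2 (positive), 1 (negative), or 0 (unknown), so that keeping the highest status per patient and biomarker implements this precedence. A minimal sketch on made-up data illustrates the idea; the actual cells reach the same result by sorting on the status and keeping the first row per patient and biomarker before pivoting.**
<code>
import pandas as pd

# Toy example: patient A has a negative and a positive KRAS result within the
# window; patient B has only a negative result.
toy = pd.DataFrame({
    'PatientID': ['A', 'A', 'B'],
    'BiomarkerName': ['KRAS', 'KRAS', 'KRAS'],
    'bio_status': [1, 2, 1]  # 2 = positive, 1 = negative, 0 = unknown
})

# Taking the maximum status per patient and biomarker implements
# "ever-positive" > "only-negative" > "unknown".
toy.groupby(['PatientID', 'BiomarkerName'])['bio_status'].max().reset_index()
# Patient A is ever-positive (2); patient B is only-negative (1).
</code>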
<code>
# Create indicator variable: 2 if positive, 1 if negative, and 0 if unknown or missing.
conditions = [
(biomarker_win['BiomarkerStatus'] == 'Mutation positive') |
(biomarker_win['BiomarkerStatus'] == 'Loss of MMR protein expression (MMR protein deficiency found)') |
(biomarker_win['BiomarkerStatus'] == 'MSI-H'),
(biomarker_win['BiomarkerStatus'] == 'Mutation negative') |
(biomarker_win['BiomarkerStatus'] == 'Normal MMR protein expression (No loss of nuclear expression of MMR protein)') |
(biomarker_win['BiomarkerStatus'] == 'MSS') |
(biomarker_win['BiomarkerStatus'] == 'MSI-L') |
(biomarker_win['BiomarkerStatus'] == 'MSS-Ambiguous') |
(biomarker_win['BiomarkerStatus'] == 'Equivocal')
]
choices = [2,1]
biomarker_win.loc[:, 'bio_status'] = np.select(conditions, choices, default = 0)
</code>
<code>
# Select the highest biomarker status among duplicates (one row per patient and biomarker), then pivot to wide format.
biomarker_wide = (
biomarker_win
.sort_values(by = ['PatientID', 'BiomarkerName','bio_status'], ascending = False)
.drop_duplicates(subset = ['PatientID', 'BiomarkerName'], keep = 'first')
.pivot(index = 'PatientID', columns = 'BiomarkerName', values = 'bio_status')
.reset_index()
)
biomarker_wide.columns.name = None
biomarker_wide = biomarker_wide.rename(columns = {'MMR/MSI': 'dMMR_MSIh'})
</code>
<code>
row_ID(biomarker_wide)
</code>
<code>
biomarker_wide = (
biomarker_wide
.append(
pd.Series(test_IDs)[~pd.Series(test_IDs).isin(biomarker_wide['PatientID'])].to_frame(name = 'PatientID'),
sort = False)
)
</code>
<code>
row_ID(biomarker_wide)
</code>
<code>
biomarker_wide['BRAF'] = (
biomarker_wide['BRAF'].replace({
2: 'mutated',
1: 'wild-type',
0: 'unknown',
np.nan: 'unknown'})
)
</code>
<code>
biomarker_wide['KRAS'] = (
biomarker_wide['KRAS'].replace({
2: 'mutated',
1: 'wild-type',
0: 'unknown',
np.nan: 'unknown'})
)
</code>
<code>
biomarker_wide['dMMR_MSIh'] = (
biomarker_wide['dMMR_MSIh'].replace({
2: 'yes',
1: 'no',
0: 'unknown',
np.nan: 'unknown'})
)
</code>
<code>
biomarker_wide['NRAS'] = (
biomarker_wide['NRAS'].replace({
2: 'mutated',
1: 'wild-type',
0: 'unknown',
np.nan: 'unknown'})
)
</code>
<code>
# IDs of patients with mutation positive BRAF that is V600E.
v600e_id = (
biomarker_win
.query('BiomarkerName == "BRAF"')
.query('BiomarkerStatus == "Mutation positive"')
.query('BiomarkerDetail == "V600E BRAF mutation"')
.PatientID
)
</code>
<code>
# Identify patients with V600E mutated BRAF vs. other.
conditions = [
(biomarker_wide['PatientID'].isin(v600e_id)),
(~biomarker_wide['PatientID'].isin(v600e_id)) & (biomarker_wide['BRAF'] == 'mutated')
]
choices = ['mutated V600E', 'mutated other']
biomarker_wide.loc[:, 'BRAF_n'] = np.select(conditions, choices, default = biomarker_wide['BRAF'])
</code>
<code>
biomarker_wide = (
biomarker_wide
.drop(columns = ['BRAF'])
.rename(columns = {'BRAF_n': 'BRAF'}))
</code>
<code>
%whos DataFrame
</code>
<code>
# Keep biomarker_wide, demographics, enhanced_met, med_admin_wide, and mortality
del biomarker_win
del biomarkers
</code>
### 6. Insurance
<code>
insurance = pd.read_csv('Insurance.csv')
</code>
<code>
insurance = insurance[insurance['PatientID'].isin(test_IDs)]
</code>
<code>
row_ID(insurance)
</code>
**The insurance table contains patient insurance/payer information. Patients may have multiple payer categories concurrently. Start date is populated roughly 80% of the time, while end date is populated about 20% of the time. This multiple-row-per-patient table will be transformed into a single-row-per-patient table. Indicator variables for each payer category active at the time of metastatic diagnosis will be made as columns. Insurance will be considered active if the start date is no more than 30 days after metastatic diagnosis, regardless of end date.**
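**The collapse from a multiple-row table of 0/1 indicators to a single row per patient is a groupby-sum followed by capping values above 1 back to 1. A minimal sketch on made-up data (toy patient IDs and two toy payer columns):**
<code>
import pandas as pd

# Toy example: two active payer rows for patient A, one for patient B.
toy = pd.DataFrame({
    'PatientID': ['A', 'A', 'B'],
    'medicare': [1, 1, 0],
    'commercial': [0, 1, 1]
})

# Sum the indicators per patient, then cap anything above 1 at 1 so that the
# result is a 0/1 flag per payer category.
wide = toy.groupby('PatientID').sum()
wide.mask(wide > 1, 1).reset_index()
# Patient A: medicare = 1, commercial = 1; patient B: medicare = 0, commercial = 1.
</code>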
<code>
insurance.loc[:, 'StartDate'] = pd.to_datetime(insurance['StartDate'])
</code>
<code>
insurance = pd.merge(insurance, enhanced_met[['PatientID', 'met_date']], on = 'PatientID', how = 'left')
</code>
<code>
# Remove rows with start dates before 1900, which are likely coding errors.
insurance = insurance[(insurance['StartDate']).dt.year >= 1900]
</code>
<code>
insurance.loc[:, 'insurance_date_diff'] = (insurance['StartDate'] - insurance['met_date']).dt.days
</code>
<code>
insurance_win = insurance[insurance['insurance_date_diff'] <= 30]
</code>
<code>
row_ID(insurance)
</code>
<code>
# Recode payer category
conditions = [
(insurance_win['IsMedicareAdv'] == 'Yes') |
(insurance_win['IsPartAOnly'] == 'Yes') |
(insurance_win['IsPartBOnly'] == 'Yes') |
(insurance_win['IsPartAandPartB'] == 'Yes') |
(insurance_win['IsPartDOnly'] == 'Yes'),
(insurance_win['IsManagedGovtPlan'] == 'Yes'),
(insurance_win['IsManagedMedicaid'] == 'Yes'),
(insurance_win['IsMedicareMedicaid'] == 'Yes')]
choices = ['Medicare', 'Other Government Program', 'Medicaid', 'medicare_medicaid']
insurance_win.loc[:, 'payer_category'] = np.select(conditions, choices, insurance_win['PayerCategory'])
</code>
#### Medicare
<code>
insurance_win.loc[:, 'medicare'] = np.where(insurance_win['payer_category'] == 'Medicare', 1, 0)
</code>
#### Medicaid
<code>
insurance_win.loc[:, 'medicaid'] = np.where(insurance_win['payer_category'] == 'Medicaid', 1, 0)
</code>
#### Medicare/Medicaid
<code>
insurance_win.loc[:, 'medicare_medicaid'] = np.where(insurance_win['payer_category'] == 'medicare_medicaid', 1, 0)
</code>
#### Commercial
<code>
insurance_win.loc[:, 'commercial'] = np.where(insurance_win['payer_category'] == 'Commercial Health Plan', 1, 0)
</code>
#### Patient Assistance Programs
<code>
insurance_win.loc[:, 'patient_assistance'] = np.where(insurance_win['payer_category'] == 'Patient Assistance Program', 1, 0)
</code>
#### Other Government Program
<code>
insurance_win.loc[:, 'other_govt'] = np.where(insurance_win['payer_category'] == 'Other Government Program', 1, 0)
</code>
#### Self Pay
<code>
insurance_win.loc[:, 'self_pay'] = np.where(insurance_win['payer_category'] == 'Self Pay', 1, 0)
</code>
#### Other Payer
<code>
insurance_win.loc[:, 'other'] = np.where(insurance_win['payer_category'] == 'Other Payer - Type Unknown', 1, 0)
</code>
#### Condense
<code>
# After dropping 'insurance_date_diff', sum the indicator columns by PatientID.
insurance_wide = (
insurance_win
.drop(columns = ['insurance_date_diff'])
.groupby('PatientID').sum()
)
</code>
<code>
# Set any value greater than 1 to 1; leave 0 unchanged.
insurance_wide = (
insurance_wide
.mask(insurance_wide > 1, 1)
.reset_index()
)
</code>
<code>
row_ID(insurance_wide)
</code>
<code>
# Append missing test IDs.
insurance_wide = (
insurance_wide
.append(
pd.Series(test_IDs)[~pd.Series(test_IDs).isin(insurance_wide['PatientID'])].to_frame(name = 'PatientID'),
sort = False)
)
</code>
<code>
row_ID(insurance_wide)
</code>
<code>
insurance_wide = insurance_wide.fillna(0)
</code>
<code>
%whos DataFrame
</code>
<code>
# Keep biomarker_wide, demographics, enhanced_met, insurance_wide, med_admin_wide, and mortality
del insurance
del insurance_win
</code>
### 7. ECOG
<code>
ecog = pd.read_csv('ECOG.csv')
</code>
<code>
ecog = ecog[ecog['PatientID'].isin(test_IDs)]
</code>
<code>
row_ID(ecog)
</code>
**The ECOG table is a longitudinal record of structured ECOG scores captured in the EHR for each patient. Many patients have multiple ECOG scores reported. A new dataframe will be built where one ECOG score is assigned to each patient. The index date will be the date of advanced diagnosis, with an eligibility window of -90 to +30 days from advanced diagnosis. The ECOG score closest to the index date will be assigned to the patient. In the case of two ECOG scores on the same day, or equidistant but on opposite sides of the index date, the higher ECOG score (worse performance) will be selected.**
**BaselineECOG is a composite table that selects one ECOG score within +7 days and -30 days of a line of therapy. Patients might have two baseline ECOG values for line number 1 due to maintenance therapy. BaselineECOG will not be used for creating baseline models.**
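**A minimal sketch of this tie-breaking rule on made-up data: sorting by absolute distance to the index date (ascending) and by ECOG value (descending), then keeping the first row per patient, picks the nearest score and, for ties, the worse one.**
<code>
import pandas as pd

# Toy example: patient A has two ECOG scores recorded equidistant from the
# index date; patient B has a single score.
toy = pd.DataFrame({
    'PatientID': ['A', 'A', 'B'],
    'ecog_date_diff': [10, 10, 25],  # absolute days from the index date
    'EcogValue': [1, 2, 0]
})

(
    toy
    .sort_values(by = ['PatientID', 'ecog_date_diff', 'EcogValue'], ascending = [True, True, False])
    .drop_duplicates(subset = ['PatientID'], keep = 'first')
)
# Patient A is assigned ECOG 2 (the worse of the tie); patient B keeps ECOG 0.
</code>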
<code>
ecog = pd.merge(ecog, enhanced_met[['PatientID', 'met_date']], on = 'PatientID', how = 'left')
</code>
<code>
ecog.loc[:, 'EcogDate'] = pd.to_datetime(ecog['EcogDate'])
</code>
<code>
ecog.loc[:, 'ecog_date_diff'] = (ecog['EcogDate'] - ecog['met_date']).dt.days
</code>
<code>
ecog_win = ecog[(ecog['ecog_date_diff'] >= -90) & (ecog['ecog_date_diff'] <= 30)]
</code>
<code>
row_ID(ecog_win)
</code>
<code>
# Time from metastatic diagnosis to ECOG date will be converted to an absolute value.
ecog_win.loc[:, 'ecog_date_diff'] = ecog_win['ecog_date_diff'].abs()
</code>
<code>
# Sort so the ECOG nearest to the time of diagnosis is the top row (largest ECOG first if there are multiple that day), then select the top row per patient.
ecog_diagnosis_wide = (
ecog_win
.sort_values(by = ['PatientID', 'ecog_date_diff', 'EcogValue'], ascending = [True, True, False])
.drop_duplicates(subset = ['PatientID'], keep = 'first' )
.filter(items = ['PatientID', 'EcogValue'])
.rename(columns = {'EcogValue': 'ecog_diagnosis'})
)
</code>
<code>
row_ID(ecog_diagnosis_wide)
</code>
<code>
# Append missing test IDs.
ecog_diagnosis_wide = (
ecog_diagnosis_wide
.append(
pd.Series(test_IDs)[~pd.Series(test_IDs).isin(ecog_diagnosis_wide['PatientID'])].to_frame(name = 'PatientID'),
sort = False)
.fillna('unknown')
)
</code>
<code>
row_ID(ecog_diagnosis_wide)
</code>
<code>
%whos DataFrame
</code>
<code>
# Keep biomarker_wide, demographics, ecog_diagnosis_wide, enhanced_met, insurance_wide, med_admin_wide, and mortality
del ecog
del ecog_win
</code>
### 8. Vitals
<code>
vitals = pd.read_csv('Vitals.csv')
</code>
<code>
vitals = vitals[vitals['PatientID'].isin(test_IDs)]
</code>
<code>
row_ID(vitals)
</code>
**The Vitals table is a longitudinal record of vitals captured in the EHR for each patient. A weight and BMI variable at the time of advanced diagnosis will be created. The eligibility window will be -90 days to +30 days from advanced diagnosis. Average height from all visits will be used to calculate BMI. In the case of two weights on the same day, or equidistant but on opposite sides of the index date, the lowest weight will be selected. Percent change in weight and weight slope within 3 months of advanced diagnosis will be calculated as in the LCPI model. Patients must have at least two weight recordings to calculate percent change in weight or weight slope.**
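**A minimal sketch of the two weight summaries on made-up weights for a single patient: the percent change between the earliest and latest recording, and a least-squares slope in kg/day via scipy.stats.linregress (the same routine used in the cells below).**
<code>
import pandas as pd
from scipy.stats import linregress

# Toy example: three weight recordings (kg) for one patient.
toy = pd.DataFrame({
    'TestDate': pd.to_datetime(['2019-01-01', '2019-02-01', '2019-03-01']),
    'weight': [80.0, 78.0, 75.0]
})

# Percent change between the earliest and latest recorded weight.
pct_change = toy['weight'].iloc[-1] / toy['weight'].iloc[0] - 1

# Least-squares slope of weight over time, in kg/day.
ordinal = toy['TestDate'].map(pd.Timestamp.toordinal)
slope = linregress(ordinal, toy['weight']).slope

(round(pct_change, 4), round(slope, 4))
# Roughly -0.0625 (a 6.25% loss) and about -0.0844 kg/day.
</code>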
#### Weight and BMI
<code>
# Create weight dataframe; remove weight values that are empty or equal to zero.
weight = (
vitals
.query('Test == "body weight"')
.filter(items = ['PatientID', 'TestDate', 'TestResultCleaned'])
.rename(columns = {'TestResultCleaned': 'weight'})
.dropna(subset = ['weight'])
.query('weight != 0')
)
</code>
<code>
weight.loc[:, 'TestDate'] = pd.to_datetime(weight['TestDate'])
</code>
<code>
weight = pd.merge(weight, enhanced_met[['PatientID', 'met_date']], on = 'PatientID', how = 'left')
</code>
<code>
# Weight eligibility window is -90 to +30 days from metastatic diagnosis.
weight_win_bmi = (
weight
.assign(weight_date_diff = (weight['TestDate'] - weight['met_date']).dt.days)
.query('weight_date_diff >= -90 and weight_date_diff <= 30')
)
</code>
<code>
weight_win_bmi.loc[:, 'weight_date_diff'] = weight_win_bmi['weight_date_diff'].abs()
</code>
<code>
# Select weight closest to date of metastatic diagnosis; lowest weight selected in the event of two weights on same day or equidistant.
weight_bmi_wide = (
weight_win_bmi
.sort_values(by = ['PatientID', 'weight_date_diff', 'weight'], ascending = [True, True, True])
.drop_duplicates(subset = ['PatientID'], keep = 'first')
.filter(items = ['PatientID', 'weight'])
.rename(columns = {'weight': 'weight_diag'})
)
</code>
<code>
# Dataframe of average height for each patient.
height_avg = (
vitals
.query('Test == "body height"')
.filter(items = ['PatientID', 'TestResultCleaned'])
.groupby('PatientID')['TestResultCleaned'].mean()
.to_frame()
.reset_index()
.rename(columns = {'TestResultCleaned': 'height_avg'})
)
</code>
<code>
weight_bmi_wide = pd.merge(weight_bmi_wide, height_avg, on = 'PatientID', how = 'left')
</code>
<code>
# Create BMI column.
weight_bmi_wide = (
weight_bmi_wide
.assign(bmi_diag = lambda x: (x['weight_diag']/(x['height_avg']*x['height_avg']))*10000)
.drop(columns = ['height_avg'])
)
</code>
<code>
# Append missing test IDs and create a missing-value indicator for those without BMI at diagnosis.
weight_bmi_wide = (
weight_bmi_wide
.append(
pd.Series(test_IDs)[~pd.Series(test_IDs).isin(weight_bmi_wide['PatientID'])].to_frame(name = 'PatientID'),
sort = False)
)
</code>
<code>
row_ID(weight_bmi_wide)
</code>
<code>
weight_bmi_wide.loc[:, 'bmi_diag_na'] = np.where(weight_bmi_wide['bmi_diag'].isna(), 1, 0)
</code>
#### Percent change
<code>
# Select eligibility window of -90 to +90 days from advanced diagnosis.
weight_win_summary = (
weight
.assign(weight_date_diff = (weight['TestDate'] - weight['met_date']).dt.days)
.query('weight_date_diff >= -90 and weight_date_diff <= 90')
)
</code>
<code>
# Select patients with more than 1 weight recording within the eligibility window.
weight_win_summary = weight_win_summary[weight_win_summary.duplicated(subset = ['PatientID'], keep = False)]
</code>
<code>
# Select weight from the earliest time within the eligibility window.
weight_tmin = weight_win_summary.loc[weight_win_summary.groupby('PatientID')['weight_date_diff'].idxmin()]
</code>
<code>
# Select weight from the latest time within the eligibility window.
weight_tmax = weight_win_summary.loc[weight_win_summary.groupby('PatientID')['weight_date_diff'].idxmax()]
</code>
<code>
# Combine above two dataframes and sort from earliest recorded weight to latest recorded weight for each patient.
weight_tcomb = (
pd.concat([weight_tmin, weight_tmax])
.sort_values(by = ['PatientID', 'weight_date_diff'], ascending = True)
)
</code>
<code>
row_ID(weight_tcomb)
</code>
<code>
weight_tcomb.loc[:, 'weight_pct_change'] = weight_tcomb.groupby('PatientID')['weight'].pct_change()
</code>
<code>
weight_tcomb.loc[:, 'diff_date_diff'] = weight_tcomb['weight_date_diff'].diff()
</code>
<code>
# Drop empty rows for weight_pct_change.
weight_pct_wide = (
weight_tcomb
.dropna(subset = ['weight_pct_change'])
.filter(items = ['PatientID', 'weight_pct_change', 'diff_date_diff'])
)
</code>
<code>
row_ID(weight_pct_wide)
</code>
<code>
# Append missing test IDs and create a missing-value indicator for those without weight_pct_change.
weight_pct_wide = (
weight_pct_wide
.append(
pd.Series(test_IDs)[~pd.Series(test_IDs).isin(weight_pct_wide['PatientID'])].to_frame(name = 'PatientID'),
sort = False)
.drop(columns = ['diff_date_diff'])
)
</code>
<code>
row_ID(weight_pct_wide)
</code>
<code>
weight_pct_wide.loc[:, 'weight_pct_na'] = np.where(weight_pct_wide['weight_pct_change'].isna(), 1, 0)
</code>
#### Weight slope
<code>
from scipy.stats import linregress
</code>
<code>
weight_win_summary.loc[:, 'date_ordinal'] = weight_win_summary['TestDate'].map(dt.datetime.toordinal)
</code>
<code>
# Dataframe of slope for weight recordings within window period (kg/day).
weight_slope_wide = (
weight_win_summary
.groupby('PatientID')
.apply(lambda x: pd.Series(linregress(x['date_ordinal'], x['weight'])))
.rename(columns = {0: 'weight_slope'})
.reset_index()
.filter(items = ['PatientID', 'weight_slope']))
</code>
<code>
row_ID(weight_slope_wide)
</code>
<code>
# Append missing test IDs.
weight_slope_wide = (
weight_slope_wide
.append(
pd.Series(test_IDs)[~pd.Series(test_IDs).isin(weight_slope_wide['PatientID'])].to_frame(name = 'PatientID'),
sort = False)
)
</code>
<code>
row_ID(weight_slope_wide)
</code>
#### Weight merge
<code>
weight_wide = pd.merge(weight_bmi_wide, weight_pct_wide, on = 'PatientID')
</code>
<code>
weight_wide = pd.merge(weight_wide, weight_slope_wide, on = 'PatientID')
</code>
<code>
row_ID(weight_wide)
</code>
<code>
%whos DataFrame
</code>
<code>
# Keep biomarker_wide, demographics, ecog_diagnosis_wide, enhanced_met, insurance_wide, med_admin_wide, mortality,
# and weight_wide
del height_avg
del vitals
del weight
del weight_bmi_wide
del weight_pct_wide
del weight_slope_wide
del weight_tcomb
del weight_tmax
del weight_tmin
del weight_win_bmi
del weight_win_summary
</code>
### 9. Labs
<code>
lab = pd.read_csv('Lab.csv')
</code>
<code>
lab = lab[lab['PatientID'].isin(test_IDs)]
</code>
<code>
row_ID(lab)
</code>
**The Lab table is a longitudinal record of labs captured in the EHR, with multiple rows per patient. A single-row-per-patient table will be built focusing on the following NCCN-recommended labs:**
* **Creatinine -- (LOINC: 2160-0 and 38483-4)**
* **Hemoglobin -- (LOINC: 718-7 and 20509-6)**
* **White blood cell count -- (LOINC: 26464-8 and 6690-2)**
* **Neutrophil count -- (LOINC: 26499-4, 751-8, 30451-9, and 753-4)**
* **Albumin, serum -- (LOINC: 1751-7)**
* **Total bilirubin -- (LOINC: 42719-5 and 1975-2)**
* **Sodium — (LOINC: 2947-0 and 2951-2)**
* **Bicarb — (LOINC: 1963-8, 1959-6, 14627-4, 1960-4, and 2028-9)**
* **Calcium — (LOINC: 17861-6 and 49765-1)**
* **AST — (LOINC: 1920-8)**
* **ALT — (LOINC: 1742-6, 1743-4, and 1744-2)**
* **Platelet -- (LOINC: 26515-7, 777-3, 778-1, and 49497-1)**
* **Potassium -- (LOINC: 6298-4 and 2823-3)**
* **Chloride -- (LOINC: 2075-0)**
* **BUN -- (LOINC: 3094-0)**
* **ALP -- (LOINC: 6768-6)**
* **CEA -- (LOINC: 2039-6)**
**The index date will be the time of advanced diagnosis with an eligibility window of -90 days to +30 days. The lab value closest to the index date will be selected for each patient. The following summary statistics, using an eligibility window of negative infinity to +30 days from advanced diagnosis, will also be created for the above variables (a short sketch follows this list):**
* **Max**
* **Min**
* **Mean**
* **Standard deviation**
* **Slope**
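**For reference, the same per-patient, per-lab summaries can be computed in a single pass with groupby().agg, shown here on made-up data; the cells below instead build each statistic in its own pivot table, which keeps the column renaming explicit.**
<code>
import pandas as pd

# Toy example: long-format lab results for two patients.
toy = pd.DataFrame({
    'PatientID': ['A', 'A', 'A', 'B', 'B'],
    'lab_name': ['sodium', 'sodium', 'albumin', 'sodium', 'sodium'],
    'test_result_cleaned': [140.0, 136.0, 4.1, 132.0, 135.0]
})

# One pass gives mean, max, min, and std per patient and lab; unstacking the
# lab level yields one row per patient with flattened column names.
summary = (
    toy
    .groupby(['PatientID', 'lab_name'])['test_result_cleaned']
    .agg(['mean', 'max', 'min', 'std'])
    .unstack('lab_name')
)
summary.columns = [f'{lab}_{stat}' for stat, lab in summary.columns]
summary.reset_index()
</code>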
#### Baseline lab values
<code>
lab = pd.merge(lab, enhanced_met[['PatientID', 'met_date']], on = 'PatientID', how = 'left')
</code>
<code>
lab.loc[:, 'ResultDate'] = pd.to_datetime(lab['ResultDate'])
</code>
<code>
# Select rows with clinically relevant labs (LOINC codes listed above).
core_loinc = [
    '2160-0', '38483-4',                                  # creatinine
    '718-7', '20509-6',                                   # hemoglobin
    '26464-8', '6690-2',                                  # white blood cell count
    '26499-4', '751-8', '30451-9', '753-4',               # neutrophil count
    '1751-7',                                             # albumin
    '42719-5', '1975-2',                                  # total bilirubin
    '2947-0', '2951-2',                                   # sodium
    '1963-8', '1959-6', '14627-4', '1960-4', '2028-9',    # bicarb
    '17861-6', '49765-1',                                 # calcium
    '1920-8',                                             # AST
    '1742-6', '1743-4', '1744-2',                         # ALT
    '26515-7', '777-3', '778-1', '49497-1',               # platelet
    '6298-4', '2823-3',                                   # potassium
    '2075-0',                                             # chloride
    '3094-0',                                             # BUN
    '6768-6',                                             # ALP
    '2039-6']                                             # CEA
lab_core = (
    lab[lab['LOINC'].isin(core_loinc)]
    .filter(items = ['PatientID',
                     'ResultDate',
                     'LOINC',
                     'LabComponent',
                     'TestUnits',
                     'TestUnitsCleaned',
                     'TestResult',
                     'TestResultCleaned',
                     'met_date'])
)
</code>
<code>
conditions = [
((lab_core['LOINC'] == '2160-0') | (lab_core['LOINC'] == '38483-4')),
((lab_core['LOINC'] == '718-7') | (lab_core['LOINC'] == '20509-6')),
((lab_core['LOINC'] == '26464-8') | (lab_core['LOINC'] == '6690-2')),
((lab_core['LOINC'] == '26499-4') | (lab_core['LOINC'] == '751-8') | (lab_core['LOINC'] == '30451-9') | (lab_core['LOINC'] == '753-4')),
(lab_core['LOINC'] == '1751-7'),
((lab_core['LOINC'] == '42719-5') | (lab_core['LOINC'] == '1975-2')),
((lab_core['LOINC'] == '2947-0') | (lab_core['LOINC'] == '2951-2')),
((lab_core['LOINC'] == '1963-8') | (lab_core['LOINC'] == '1959-6') | (lab_core['LOINC'] == '14627-4') | (lab_core['LOINC'] == '1960-4') | (lab_core['LOINC'] == '2028-9')),
((lab_core['LOINC'] == '17861-6') | (lab_core['LOINC'] == '49765-1')),
(lab_core['LOINC'] == '1920-8'),
((lab_core['LOINC'] == '1742-6') | (lab_core['LOINC'] == '1743-4') | (lab_core['LOINC'] == '1744-2')),
((lab_core['LOINC'] == '26515-7') | (lab_core['LOINC'] == '777-3') | (lab_core['LOINC'] == '778-1') | (lab_core['LOINC'] == '49497-1')),
((lab_core['LOINC'] == '6298-4') | (lab_core['LOINC'] == '2823-3')),
(lab_core['LOINC'] == '2075-0'),
(lab_core['LOINC'] == '3094-0'),
(lab_core['LOINC'] == '6768-6'),
(lab_core['LOINC'] == '2039-6')]
choices = ['creatinine',
'hemoglobin',
'wbc',
'neutrophil_count',
'albumin',
'total_bilirubin',
'sodium',
'bicarb',
'calcium',
'ast',
'alt',
'platelet',
'potassium',
'chloride',
'bun',
'alp',
'cea']
lab_core.loc[:, 'lab_name'] = np.select(conditions, choices)
</code>
<code>
# Remove missing lab values.
lab_core = lab_core.dropna(subset = ['TestResultCleaned'])
</code>
<code>
conditions = [
((lab_core['lab_name'] == 'wbc') | (lab_core['lab_name'] == 'neutrophil_count') | (lab_core['lab_name'] == 'platelet')) &
(lab_core['TestUnits'] == '10*3/L'),
(lab_core['lab_name'] == 'hemoglobin') & (lab_core['TestUnits'] == 'g/uL')]
choices = [lab_core['TestResultCleaned'] * 1000000,
lab_core['TestResultCleaned'] / 100000]
lab_core.loc[:, 'test_result_cleaned'] = np.select(conditions, choices, default = lab_core['TestResultCleaned'])
</code>
<code>
# Eligibility window is -90 to +30 days from advanced diagnosis.
lab_core_win = (
lab_core
.assign(lab_date_diff = (lab_core['ResultDate'] - lab_core['met_date']).dt.days)
.query('lab_date_diff >= -90 and lab_date_diff <= 30')
.filter(items = ['PatientID', 'ResultDate', 'TestResultCleaned', 'lab_name', 'met_date', 'test_result_cleaned', 'lab_date_diff'])
)
</code>
<code>
lab_core_win.loc[:, 'lab_date_diff'] = lab_core_win['lab_date_diff'].abs()
</code>
<code>
# Select lab closest to date of advanced diagnosis and pivot to a wide table.
lab_diag_wide = (
lab_core_win
.loc[lab_core_win.groupby(['PatientID', 'lab_name'])['lab_date_diff'].idxmin()]
.pivot(index = 'PatientID', columns = 'lab_name', values = 'test_result_cleaned')
.reset_index()
.rename(columns = {
'albumin': 'albumin_diag',
'creatinine': 'creatinine_diag',
'hemoglobin': 'hemoglobin_diag',
'neutrophil_count': 'neutrophil_count_diag',
'total_bilirubin': 'total_bilirubin_diag',
'wbc': 'wbc_diag',
'sodium': 'sodium_diag',
'bicarb': 'bicarb_diag',
'calcium': 'calcium_diag',
'ast': 'ast_diag',
'alt': 'alt_diag',
'platelet': 'platelet_diag',
'potassium': 'potassium_diag',
'chloride': 'chloride_diag',
'bun': 'bun_diag',
'alp': 'alp_diag',
'cea': 'cea_diag'})
)
lab_diag_wide.columns.name = None
</code>
<code>
row_ID(lab_diag_wide)
</code>
<code>
lab_diag_wide = (
lab_diag_wide
.append(
pd.Series(test_IDs)[~pd.Series(test_IDs).isin(lab_diag_wide['PatientID'])].to_frame(name = 'PatientID'),
sort = False)
)
</code>
<code>
row_ID(lab_diag_wide)
</code>
<code>
# Create missing variables for labs at time of diagnosis.
for x in range (1, len(lab_diag_wide.columns)):
lab_diag_wide.loc[:, lab_diag_wide.columns[x]+'_na'] = np.where(lab_diag_wide[lab_diag_wide.columns[x]].isna(), 1, 0)
</code>
<code>
list(lab_diag_wide.columns)
</code>
#### Mean, max, min, and standard deviation
<code>
# Eligibility window is negative infinity to +30 days from advanced diagnosis.
lab_core_win_summ = (
lab_core
.assign(lab_date_diff = (lab_core['ResultDate'] - lab_core['met_date']).dt.days)
.query('lab_date_diff <= 30')
.filter(items = ['PatientID', 'ResultDate', 'TestResultCleaned', 'lab_name', 'met_date', 'test_result_cleaned', 'lab_date_diff'])
)
</code>
<code>
# Pivot table of average values for core labs during the eligibility period of negative infinity to +30 days from advanced diagnosis.
lab_avg_wide = (
lab_core_win_summ
.groupby(['PatientID', 'lab_name'])['test_result_cleaned'].mean()
.to_frame()
.reset_index()
.pivot(index = 'PatientID', columns = 'lab_name', values = 'test_result_cleaned')
.reset_index()
.rename(columns = {
'albumin': 'albumin_avg',
'creatinine': 'creatinine_avg',
'hemoglobin': 'hemoglobin_avg',
'neutrophil_count': 'neutrophil_count_avg',
'total_bilirubin': 'total_bilirubin_avg',
'wbc': 'wbc_avg',
'sodium': 'sodium_avg',
'bicarb': 'bicarb_avg',
'calcium': 'calcium_avg',
'ast': 'ast_avg',
'alt': 'alt_avg',
'platelet': 'platelet_avg',
'potassium': 'potassium_avg',
'chloride': 'chloride_avg',
'bun': 'bun_avg',
'alp': 'alp_avg',
'cea': 'cea_avg'})
)
lab_avg_wide.columns.name = None
</code>
<code>
row_ID(lab_avg_wide)
</code>
<code>
# Pivot table of maximum values for core labs during the eligibility period of negative infinity to +30 days from advanced diagnosis.
lab_max_wide = (
lab_core_win_summ
.groupby(['PatientID', 'lab_name'])['test_result_cleaned'].max()
.to_frame()
.reset_index()
.pivot(index = 'PatientID', columns = 'lab_name', values = 'test_result_cleaned')
.reset_index()
.rename(columns = {
'albumin': 'albumin_max',
'creatinine': 'creatinine_max',
'hemoglobin': 'hemoglobin_max',
'neutrophil_count': 'neutrophil_count_max',
'total_bilirubin': 'total_bilirubin_max',
'wbc': 'wbc_max',
'sodium': 'sodium_max',
'bicarb': 'bicarb_max',
'calcium': 'calcium_max',
'ast': 'ast_max',
'alt': 'alt_max',
'platelet': 'platelet_max',
'potassium': 'potassium_max',
'chloride': 'chloride_max',
'bun': 'bun_max',
'alp': 'alp_max',
'cea': 'cea_max'})
)
lab_max_wide.columns.name = None
</code>
<code>
row_ID(lab_max_wide)
</code>
<code>
# Pivot table of minimum values for core labs during the eligibility period of negative infinity to +30 days from advanced diagnosis.
lab_min_wide = (
lab_core_win_summ
.groupby(['PatientID', 'lab_name'])['test_result_cleaned'].min()
.to_frame()
.reset_index()
.pivot(index = 'PatientID', columns = 'lab_name', values = 'test_result_cleaned')
.reset_index()
.rename(columns = {
'albumin': 'albumin_min',
'creatinine': 'creatinine_min',
'hemoglobin': 'hemoglobin_min',
'neutrophil_count': 'neutrophil_count_min',
'total_bilirubin': 'total_bilirubin_min',
'wbc': 'wbc_min',
'sodium': 'sodium_min',
'bicarb': 'bicarb_min',
'calcium': 'calcium_min',
'ast': 'ast_min',
'alt': 'alt_min',
'platelet': 'platelet_min',
'potassium': 'potassium_min',
'chloride': 'chloride_min',
'bun': 'bun_min',
'alp': 'alp_min',
'cea': 'cea_min'})
)
lab_min_wide.columns.name = None
</code>
<code>
row_ID(lab_min_wide)
</code>
<code>
# Pivot table of standard deviations for core labs during the eligibility period of negative infinity to +30 days from advanced diagnosis.
lab_std_wide = (
lab_core_win_summ
.groupby(['PatientID', 'lab_name'])['test_result_cleaned'].std()
.to_frame()
.reset_index()
.pivot(index = 'PatientID', columns = 'lab_name', values = 'test_result_cleaned')
.reset_index()
.rename(columns = {
'albumin': 'albumin_std',
'creatinine': 'creatinine_std',
'hemoglobin': 'hemoglobin_std',
'neutrophil_count': 'neutrophil_count_std',
'total_bilirubin': 'total_bilirubin_std',
'wbc': 'wbc_std',
'sodium': 'sodium_std',
'bicarb': 'bicarb_std',
'calcium': 'calcium_std',
'ast': 'ast_std',
'alt': 'alt_std',
'platelet': 'platelet_std',
'potassium': 'potassium_std',
'chloride': 'chloride_std',
'bun': 'bun_std',
'alp': 'alp_std',
'cea': 'cea_std'})
)
lab_std_wide.columns.name = None
</code>
<code>
row_ID(lab_std_wide)
</code>
<code>
lab_summary_wide = pd.merge(lab_avg_wide, lab_max_wide, on = 'PatientID', how = 'outer')
</code>
<code>
lab_summary_wide = pd.merge(lab_summary_wide, lab_min_wide, on = 'PatientID', how = 'outer')
</code>
<code>
lab_summary_wide = pd.merge(lab_summary_wide, lab_std_wide, on = 'PatientID', how = 'outer')
</code>
<code>
row_ID(lab_summary_wide)
</code>
<code>
lab_summary_wide = (
lab_summary_wide
.append(
pd.Series(test_IDs)[~pd.Series(test_IDs).isin(lab_summary_wide['PatientID'])].to_frame(name = 'PatientID'),
sort = False)
)
</code>
<code>
row_ID(lab_summary_wide)
</code>
#### Slope
<code>
lab_core_win_summ.loc[:, 'result_date_ordinal'] = lab_core_win_summ['ResultDate'].map(dt.datetime.toordinal)
</code>
<code>
lab_slope_wide = (
lab_core_win_summ
.groupby(['PatientID', 'lab_name'])
.apply(lambda x: pd.Series(linregress(x['result_date_ordinal'], x['test_result_cleaned'])))
.rename(columns = {0: 'slope'})
.reset_index()
.filter(items = ['PatientID', 'lab_name', 'slope'])
.pivot(index = 'PatientID', columns = 'lab_name', values = 'slope')
.reset_index()
.rename(columns = {
'albumin': 'albumin_slope',
'creatinine': 'creatinine_slope',
'hemoglobin': 'hemoglobin_slope',
'neutrophil_count': 'neutrophil_count_slope',
'total_bilirubin': 'total_bilirubin_slope',
'wbc': 'wbc_slope',
'sodium': 'sodium_slope',
'bicarb': 'bicarb_slope',
'calcium': 'calcium_slope',
'ast': 'ast_slope',
'alt': 'alt_slope',
'platelet': 'platelet_slope',
'potassium': 'potassium_slope',
'chloride': 'chloride_slope',
'bun': 'bun_slope',
'alp': 'alp_slope',
'cea': 'cea_slope'})
)
lab_slope_wide.columns.name = None
</code>
<code>
row_ID(lab_slope_wide)
</code>
<code>
lab_slope_wide = (
lab_slope_wide
.append(
pd.Series(test_IDs)[~pd.Series(test_IDs).isin(lab_slope_wide['PatientID'])].to_frame(name = 'PatientID'),
sort = False)
)
</code>
<code>
# Create missing variables for lab slope.
for x in range (1, len(lab_slope_wide.columns)):
lab_slope_wide.loc[:, lab_slope_wide.columns[x]+'_na'] = np.where(lab_slope_wide[lab_slope_wide.columns[x]].isna(), 1, 0)
</code>
<code>
row_ID(lab_slope_wide)
</code>
#### Merge
<code>
lab_wide = pd.merge(lab_diag_wide, lab_summary_wide, on = 'PatientID')
</code>
<code>
lab_wide = pd.merge(lab_wide, lab_slope_wide, on = 'PatientID')
</code>
<code>
row_ID(lab_wide)
</code>
<code>
list(lab_wide.columns)
</code>
<code>
%whos DataFrame
</code>
<code>
# Keep biomarker_wide, demographics, ecog_diagnosis_wide, enhanced_met, insurance_wide, lab_wide, med_admin_wide,
# mortality, and weight_wide
del lab
del lab_avg_wide
del lab_core
del lab_core_win
del lab_core_win_summ
del lab_diag_wide
del lab_max_wide
del lab_min_wide
del lab_slope_wide
del lab_std_wide
del lab_summary_wide
</code>
### 10. Diagnosis
<code>
diagnosis = pd.read_csv('Diagnosis.csv')
</code>
<code>
diagnosis = diagnosis[diagnosis['PatientID'].isin(test_IDs)]
</code>
<code>
row_ID(diagnosis)
</code>
<code>
diagnosis.sample(5)
</code>
**The Diagnosis table is in long format and has close to 12,000 unique ICD-9 and ICD-10 codes. The median number of ICD codes per patient is 8 with a standard deviation of 106, which shows the variability in the number of ICD codes per patient.**
**ICD codes recorded before metastatic diagnosis and up to 30 days past diagnosis will be mapped to the Elixhauser comorbidity index. ("Coding Algorithms for Defining Comorbidities in ICD-9-CM and ICD-10 Administrative Data" by Quan et al. is used as a guide for linking ICD codes to Elixhauser comorbidities.) Indicators for the presence of a concurrent or prior cancer diagnosis that is not colorectal cancer or metastasis, and for sites of metastases at the time of diagnosis, will also be created.**
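**A minimal sketch of the mapping technique used below, on made-up codes: Series.str.match anchors at the start of the string, so once the decimal point is removed, a pattern fragment like '42(5[456789]|8)' flags every 425.4-425.9 and 428.x code. The pattern here is only a fragment of the full Quan CHF definition, for illustration.**
<code>
import numpy as np
import pandas as pd

# Toy example: ICD codes with the decimal point already removed.
codes = pd.Series(['4280', '42822', '41401', 'I110'])

# str.match tests the regex from the start of each code.
chf_flag = np.where(codes.str.match('39891|42(5[456789]|8)'), 1, 0)
list(zip(codes, chf_flag.tolist()))
# [('4280', 1), ('42822', 1), ('41401', 0), ('I110', 0)]
</code>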
#### Elixhauser
<code>
diagnosis = pd.merge(diagnosis, enhanced_met[['PatientID', 'met_date']], on = 'PatientID', how = 'left')
</code>
<code>
diagnosis.loc[:, 'DiagnosisDate'] = pd.to_datetime(diagnosis['DiagnosisDate'])
</code>
<code>
diagnosis.loc[:, 'diagnosis_date_diff'] = (diagnosis['DiagnosisDate'] - diagnosis['met_date']).dt.days
</code>
<code>
# Remove decimal to make mapping to Elixhauser easier.
diagnosis.loc[:, 'diagnosis_code'] = diagnosis['DiagnosisCode'].replace('\.', '', regex = True)
</code>
##### Elixhauser for ICD-9
<code>
# ICD-9 dataframe with unique codes for each patient.
diagnosis_elix_9 = (
diagnosis
.query('diagnosis_date_diff <= 30')
.query('DiagnosisCodeSystem == "ICD-9-CM"')
.drop_duplicates(subset = (['PatientID', 'DiagnosisCode']), keep = 'first')
.filter(items = ['PatientID', 'DiagnosisCode', 'diagnosis_code'])
)
</code>
<code>
row_ID(diagnosis_elix_9)
</code>
<code>
diagnosis_elix_9.loc[:, 'chf'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('39891|'
'402(01|11|91)|'
'404(01|03|[19][13])|'
'42(5[456789]|8)'), 1, 0)
)
</code>
<code>
diagnosis_elix_9.loc[:, 'cardiac_arrhythmias'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('426([079]|1[023])|'
'427[012346789]|'
'7850|'
'996(01|04)|'
'V450|'
'V533'), 1, 0)
)
</code>
<code>
diagnosis_elix_9.loc[:, 'valvular_disease'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('0932|'
'39[4567]|'
'424|'
'746[3456]|'
'V422|'
'V433'), 1, 0)
)
</code>
<code>
diagnosis_elix_9.loc[:, 'pulmonary_circulation'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('41(5[01]|6|7[089])'), 1, 0)
)
</code>
<code>
diagnosis_elix_9.loc[:, 'peripheral_vascular'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('0930|'
'4373|'
'44([01]|3[123456789]|71)|'
'557[19]|'
'V434'), 1, 0)
)
</code>
<code>
diagnosis_elix_9.loc[:, 'htn_uncomplicated'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('401'), 1, 0)
)
</code>
<code>
diagnosis_elix_9.loc[:, 'htn_complicated'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('40[2345]'), 1, 0)
)
</code>
<code>
diagnosis_elix_9.loc[:, 'paralysis'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('3341|'
'34([23]|4[01234569])'), 1, 0)
)
</code>
<code>
diagnosis_elix_9.loc[:, 'other_neuro_disorders'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('33(19|2[01]|3([45]|92)|[45]|62)|'
'34([015]|8[13])|'
'78[04]3'), 1, 0)
)
</code>
<code>
diagnosis_elix_9.loc[:, 'chronic_pulmonary'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('416[89]|'
'49|'
'50([012345]|64|8[18])'), 1, 0)
)
</code>
<code>
diagnosis_elix_9.loc[:, 'diabetes_uncomplicated'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('250[0123]'), 1, 0)
)
</code>
<code>
diagnosis_elix_9.loc[:, 'diabetes_complicated'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('250[456789]'), 1, 0)
)
</code>
<code>
diagnosis_elix_9.loc[:, 'hypothyroidism'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('2409|'
'24([34]|6[18])'), 1, 0)
)
</code>
<code>
diagnosis_elix_9.loc[:, 'renal_failure'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('403[019]1|'
'404[019][23]|'
'58([56]|80)|'
'V4(20|51)|'
'V56'), 1, 0)
)
</code>
<code>
diagnosis_elix_9.loc[:, 'liver_disease'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('070(2[23]|3[23]|44|54|6|9)|'
'456[012]|'
'57([01]|2[2345678]|3[3489])|'
'V427'), 1, 0)
)
</code>
<code>
diagnosis_elix_9.loc[:, 'peptic_ulcer_disease'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('53[1234][79]'), 1, 0)
)
</code>
<code>
diagnosis_elix_9.loc[:, 'aids_hiv'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('04[234]'), 1, 0)
)
</code>
<code>
diagnosis_elix_9.loc[:, 'lymphoma'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('20([012]|30)|'
'2386'), 1, 0)
)
</code>
<code>
diagnosis_elix_9.loc[:, 'metastatic_cancer'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('19[6789]'), 1, 0)
)
</code>
<code>
diagnosis_elix_9.loc[:, 'solid_tumor_wout_mets'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('1[456]|'
'17[012456789]|'
'18|'
'19([012345])'), 1, 0)
)
</code>
<code>
diagnosis_elix_9.loc[:, 'rheumatoid_arthritis'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('446|'
'7010|'
'71(0[0123489]|12|4|93)|'
'72([05]|85|889|930)'), 1, 0)
)
</code>
<code>
diagnosis_elix_9.loc[:, 'coagulopathy'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('28(6|7[1345])'), 1, 0)
)
</code>
<code>
diagnosis_elix_9.loc[:, 'obesity'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('2780'), 1, 0)
)
</code>
<code>
diagnosis_elix_9.loc[:, 'weight_loss'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('26[0123]|'
'7832|'
'7994'), 1, 0)
)
</code>
<code>
diagnosis_elix_9.loc[:, 'fluid_electrolyte'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('2(536|76)'), 1, 0)
)
</code>
<code>
diagnosis_elix_9.loc[:, 'blood_loss_anemia'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('2800'), 1, 0)
)
</code>
<code>
diagnosis_elix_9.loc[:, 'deficiency_anemia'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('28(0[123456789]|1)'), 1, 0)
)
</code>
<code>
diagnosis_elix_9.loc[:, 'alcohol_abuse'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('2652|'
'291[12356789]|'
'30(3[09]|50)|'
'3575|'
'4255|'
'5353|'
'571[0123]|'
'980|'
'V113'), 1, 0)
)
</code>
<code>
diagnosis_elix_9.loc[:, 'drug_abuse'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('292|'
'30(4|5[23456789])|'
'V6542'), 1, 0)
)
</code>
<code>
diagnosis_elix_9.loc[:, 'psychoses'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('2938|'
'296[0145]4|'
'29[578]'), 1, 0)
)
</code>
<code>
diagnosis_elix_9.loc[:, 'depression'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('296[235]|'
'3(004|09|11)'), 1, 0)
)
</code>
<code>
# Create variable that captures ICD-9 codes not included in Elixhauser.
diagnosis_elix_9.loc[:, 'elixhauser_other'] = (
np.where(diagnosis_elix_9.iloc[:, 3:].eq(0).all(1), 1, 0)
)
</code>
<code>
# Percentage of ICD-9 codes not captured by Elixhauser.
diagnosis_elix_9['elixhauser_other'].sum()/len(diagnosis_elix_9)
</code>
<code>
# Single-row-per-patient dataframe with columns as Elixhauser comorbidities.
diagnosis_elix_9_wide = (
diagnosis_elix_9
.drop(columns = ['DiagnosisCode', 'diagnosis_code'])
.groupby('PatientID').sum()
.reset_index()
)
</code>
<code>
row_ID(diagnosis_elix_9_wide)
</code>
##### Elixhauser for ICD-10
<code>
# ICD-10 dataframe with unique codes for each patient.
diagnosis_elix_10 = (
diagnosis
.query('diagnosis_date_diff <= 30')
.query('DiagnosisCodeSystem == "ICD-10-CM"')
.drop_duplicates(subset = (['PatientID', 'DiagnosisCode']), keep = 'first')
.filter(items = ['PatientID', 'DiagnosisCode', 'diagnosis_code'])
)
</code>
<code>
row_ID(diagnosis_elix_10)
</code>
<code>
diagnosis_elix_10.loc[:, 'chf'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('I099|'
'I1(10|3[02])|'
'I255|'
'I4(2[056789]|3)|'
'I50|'
'P290'), 1, 0)
)
</code>
<code>
diagnosis_elix_10.loc[:, 'cardiac_arrhythmias'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('I4(4[123]|5[69]|[789])|'
'R00[018]|'
'T821|'
'Z[49]50'), 1, 0)
)
</code>
<code>
diagnosis_elix_10.loc[:, 'valvular_disease'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('A520|'
'I0([5678]|9[18])|'
'I3[456789]|'
'Q23[0123]|'
'Z95[234]'), 1, 0)
)
</code>
<code>
diagnosis_elix_10.loc[:, 'pulmonary_circulation'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('I2([67]|8[089])'), 1, 0)
)
</code>
<code>
diagnosis_elix_10.loc[:, 'peripheral_vascular'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('I7([01]|3[189]|71|9[02])|'
'K55[189]|'
'Z95[89]'), 1, 0)
)
</code>
<code>
diagnosis_elix_10.loc[:, 'htn_uncomplicated'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('I10'), 1, 0)
)
</code>
<code>
diagnosis_elix_10.loc[:, 'htn_complicated'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('I1[1235]'), 1, 0)
)
</code>
<code>
diagnosis_elix_10.loc[:, 'paralysis'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('G041|'
'G114|'
'G8(0[12]|[12]|3[012349])'), 1, 0)
)
</code>
<code>
diagnosis_elix_10.loc[:, 'other_neuro_disorders'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('G1[0123]|'
'G2([012]|5[45])|'
'G3(1[289]|[2567])|'
'G4[01]|'
'G93[14]|'
'R470|'
'R56'), 1, 0)
)
</code>
<code>
diagnosis_elix_10.loc[:, 'chronic_pulmonary'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('I27[89]|'
'J4[01234567]|'
'J6([01234567]|84)|'
'J70[13]'), 1, 0)
)
</code>
<code>
diagnosis_elix_10.loc[:, 'diabetes_uncomplicated'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('E1[01234][019]'), 1, 0)
)
</code>
<code>
diagnosis_elix_10.loc[:, 'diabetes_complicated'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('E1[01234][2345678]'), 1, 0)
)
</code>
<code>
diagnosis_elix_10.loc[:, 'hypothyroidism'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('E0[0123]|'
'E890'), 1, 0)
)
</code>
<code>
diagnosis_elix_10.loc[:, 'renal_failure'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('I1(20|31)|'
'N1[89]|'
'N250|'
'Z49[012]|'
'Z9(40|92)'), 1, 0)
)
</code>
<code>
diagnosis_elix_10.loc[:, 'liver_disease'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('B18|'
'I8(5|64)|'
'I982|'
'K7(0|1[13457]|[234]|6[023456789])|'
'Z944'), 1, 0)
)
</code>
<code>
diagnosis_elix_10.loc[:, 'peptic_ulcer_disease'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('K2[5678][79]'), 1, 0)
)
</code>
<code>
diagnosis_elix_10.loc[:, 'aids_hiv'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('B2[0124]'), 1, 0)
)
</code>
<code>
diagnosis_elix_10.loc[:, 'lymphoma'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('C8[123458]|'
'C9(0[02]|6)'), 1, 0)
)
</code>
<code>
diagnosis_elix_10.loc[:, 'metastatic_cancer'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('C(7[789]|80)'), 1, 0)
)
</code>
<code>
diagnosis_elix_10.loc[:, 'solid_tumor_wout_mets'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('C[01]|'
'C2[0123456]|'
'C3[01234789]|'
'C4[01356789]|'
'C5[012345678]|'
'C6|'
'C7[0123456]|'
'C97'), 1, 0)
)
</code>
<code>
diagnosis_elix_10.loc[:, 'rheumatoid_arthritis'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('L94[013]|'
'M0[568]|'
'M12[03]|'
'M3(0|1[0123]|[2345])|'
'M4(5|6[189])'), 1, 0)
)
</code>
<code>
diagnosis_elix_10.loc[:, 'coagulopathy'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('D6([5678]|9[13456])'), 1, 0)
)
</code>
<code>
diagnosis_elix_10.loc[:, 'obesity'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('E66'), 1, 0)
)
</code>
<code>
diagnosis_elix_10.loc[:, 'weight_loss'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('E4[0123456]|'
'R6(34|4)'), 1, 0)
)
</code>
<code>
diagnosis_elix_10.loc[:, 'fluid_electrolyte'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('E222|'
'E8[67]'), 1, 0)
)
</code>
<code>
diagnosis_elix_10.loc[:, 'blood_loss_anemia'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('D500'), 1, 0)
)
</code>
<code>
diagnosis_elix_10.loc[:, 'deficiency_anemia'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('D5(0[89]|[123])'), 1, 0)
)
</code>
<code>
diagnosis_elix_10.loc[:, 'alcohol_abuse'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('F10|'
'E52|'
'G621|'
'I426|'
'K292|'
'K70[039]|'
'T51|'
'Z502|'
'Z7(14|21)'), 1, 0)
)
</code>
<code>
diagnosis_elix_10.loc[:, 'drug_abuse'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('F1[12345689]|'
'Z7(15|22)'), 1, 0)
)
</code>
<code>
diagnosis_elix_10.loc[:, 'psychoses'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('F2[0234589]|'
'F3([01]2|15)'), 1, 0)
)
</code>
<code>
diagnosis_elix_10.loc[:, 'depression'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('F204|'
'F3(1[345]|[23]|41)|'
'F4[13]2'), 1, 0)
)
</code>
<code>
# Create variable that captures ICD-10 codes not included in Elixhauser.
diagnosis_elix_10.loc[:, 'elixhauser_other'] = (
np.where(diagnosis_elix_10.iloc[:, 3:].eq(0).all(1), 1, 0)
)
</code>
<code>
# Percentage of ICD-10 codes not captured by Elixhauser.
diagnosis_elix_10['elixhauser_other'].sum()/len(diagnosis_elix_10)
</code>
<code>
diagnosis_elix_10_wide = (
diagnosis_elix_10
.drop(columns = ['DiagnosisCode', 'diagnosis_code'])
.groupby('PatientID').sum()
.reset_index()
)
</code>
<code>
row_ID(diagnosis_elix_10_wide)
</code>
<code>
# Merge Elixhauser 9 and 10 and sum by PatientID.
diagnosis_elixhauser = (
pd.concat([diagnosis_elix_9_wide, diagnosis_elix_10_wide])
.groupby('PatientID').sum()
)
</code>
<code>
# Create unique ICD count for each patient.
diagnosis_elixhauser['icd_count'] = diagnosis_elixhauser.sum(axis = 1)
</code>
<code>
# Other than unique ICD count, values greater than 1 are set to 1; 0 remains unchanged.
diagnosis_elixhauser.iloc[:, :-1] = (
diagnosis_elixhauser.iloc[:, :-1].mask(diagnosis_elixhauser.iloc[:, :-1] >1, 1)
)
</code>
<code>
diagnosis_elixhauser = diagnosis_elixhauser.reset_index()
</code>
<code>
row_ID(diagnosis_elixhauser)
</code>
<code>
# Append missing test IDs.
diagnosis_elixhauser = (
diagnosis_elixhauser
.append(
pd.Series(test_IDs)[~pd.Series(test_IDs).isin(diagnosis_elixhauser['PatientID'])].to_frame(name = 'PatientID'),
sort = False)
.fillna(0)
)
</code>
<code>
row_ID(diagnosis_elixhauser)
</code>
#### Colon cancer location
**Right-sided colon cancer (i.e., proximal to the splenic flexure) is an independent poor prognostic factor for overall survival. Using ICD codes, we will characterize the colon cancer as left- vs. right-sided.**
<code>
diagnosis_elix_9.loc[:, 'right_colon'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('153[01456]'), 1, 0)
)
</code>
<code>
diagnosis_elix_9.loc[:, 'left_colon'] = (
np.where(diagnosis_elix_9['diagnosis_code'].str.match('153[237]'), 1, 0)
)
</code>
<code>
colon_location_9 = diagnosis_elix_9[['PatientID', 'right_colon', 'left_colon']].groupby('PatientID').sum()
</code>
<code>
diagnosis_elix_10.loc[:, 'right_colon'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('C18[01234]'), 1, 0)
)
</code>
<code>
diagnosis_elix_10.loc[:, 'left_colon'] = (
np.where(diagnosis_elix_10['diagnosis_code'].str.match('C18[567]'), 1, 0)
)
</code>
<code>
colon_location_10 = diagnosis_elix_10[['PatientID', 'right_colon', 'left_colon']].groupby('PatientID').sum()
</code>
<code>
colon_location = (
pd.concat([colon_location_9, colon_location_10])
.groupby('PatientID').sum()
.reset_index()
)
</code>
<code>
colon_location = pd.merge(enhanced_met[['PatientID', 'CrcSite']], colon_location, on = 'PatientID', how = 'left')
</code>
<code>
colon_location[['right_colon', 'left_colon']] = colon_location[['right_colon', 'left_colon']].fillna(0)
</code>
<code>
row_ID(colon_location)
</code>
<code>
# Create categorical variable for CRC site: colon_right, colon_left, colon_unknown, rectum, or unknown (0 if no condition matches).
conditions = [
(colon_location['CrcSite'] == 'Colon') & (colon_location['right_colon'] >= 1),
(colon_location['CrcSite'] == 'Colon') & (colon_location['right_colon'] == 0) & (colon_location['left_colon'] >= 1),
(colon_location['CrcSite'] == 'Colon') & (colon_location['right_colon'] == 0) & (colon_location['left_colon'] == 0),
(colon_location['CrcSite'] == 'Rectum'),
(colon_location['CrcSite'] == 'Colorectal NOS')]
choices = ['colon_right', 'colon_left', 'colon_unknown', 'rectum', 'unknown']
colon_location.loc[:, 'crc_site'] = np.select(conditions, choices)
</code>
<code>
enhanced_met = pd.merge(enhanced_met, colon_location[['PatientID', 'crc_site']], on = 'PatientID')
</code>
#### Other cancer
##### ICD-9 Cancer codes
<code>
# Select all ICD-9 cancer codes between 140-209.
# Exclude benign neoplasms: 210-229, carcinoma in situ: 230-234, and neoplasms of uncertain behavior or nature: 235-239.
cancer_9 = (
diagnosis_elix_9[diagnosis_elix_9['DiagnosisCode'].str.startswith(
('14','15', '16', '17', '18', '19', '20'))]
.filter(items = ['PatientID', 'DiagnosisCode', 'diagnosis_code'])
)
</code>
<code>
row_ID(cancer_9)
</code>
**Remove the following ICD-9 codes representing colorectal cancer, metastasis, ill-defined neoplasms, and benign neoplasms of skin (BCC and SCC):**
* **153 - Malignant neoplasm of colon**
* **154 - Malignant neoplasm of rectum rectosigmoid junction and anus**
* **155 - Malignant neoplasm of liver and intrahepatic bile ducts**
* **158 - Malignant neoplasm of retroperitoneum and peritoneum**
* **159 - Malignant neoplasm of other and ill-defined sites within the digestive organs and peritoneum**
* **173 - Other and unspecified malignant neoplasm of skin**
* **195.2 - Malignant neoplasm of abdomen**
* **196 - Secondary and unspecified malignant neoplasm of lymph nodes**
* **197 - Secondary malignant neoplasm of respiratory and digestive systems**
* **198 - Secondary malignant neoplasm of other specified sites**
* **199 - Malignant neoplasm without specification of site**
<code>
# Dataframe of ICD-9 neoplasm codes that exclude colorectal cancer, metastasis, or benign neoplasms.
other_cancer_9 = (
cancer_9[~cancer_9['diagnosis_code'].str.match('15([34589])|'
'173|'
'19(52|[6789])')]
)
</code>
<code>
other_cancer_9.loc[:,'other_cancer_9'] = 1
</code>
<code>
other_cancer_9 = (
other_cancer_9
.drop_duplicates(subset = 'PatientID', keep = 'first')
.filter(items = ['PatientID', 'other_cancer_9'])
)
</code>
<code>
row_ID(other_cancer_9)
</code>
<code>
other_cancer_9 = (
other_cancer_9
.append(
pd.Series(test_IDs)[~pd.Series(test_IDs).isin(other_cancer_9['PatientID'])].to_frame(name = 'PatientID'),
sort = False)
.fillna(0)
)
</code>
<code>
row_ID(other_cancer_9)
</code>
##### ICD-10 Cancer codes
<code>
# Select all ICD-10 codes between C00-D49
# Exclude in situ neoplasms: D00-D09, benign neoplasms: D10-D36, benign neuroendocrine tumor: D3A, and neoplasms of unspecified behavior: D37 and D49
cancer_10 = (
diagnosis_elix_10[diagnosis_elix_10['DiagnosisCode'].str.startswith(
('C0', 'C1', 'C2', 'C3', 'C4', 'C5', 'C6', 'C7', 'C8', 'C9', 'D38', 'D39', 'D4'))]
.filter(items = ['PatientID', 'DiagnosisCode', 'diagnosis_code'])
)
</code>
<code>
row_ID(cancer_10)
</code>
**Remove the following ICD-10 codes, which capture colorectal cancer, metastasis, and benign skin neoplasms (e.g., BCC and SCC):**
* **C18 - Malignant neoplasm of colon**
* **C19 - Malignant neoplasm of rectosigmoid junction**
* **C20 - Malignant neoplasm of rectum**
* **C21.8 - Malignant neoplasm of overlapping sites of rectum, anus and anal canal**
* **C22 - Malignant neoplasm of liver, not specified as primary or secondary**
* **C26 - Malignant neoplasm of other and ill-defined digestive organs**
* **C44 - Other and unspecified malignant neoplasm of skin**
* **C77 - Secondary and unspecified malignant neoplasm of lymph nodes**
* **C78 - Secondary malignant neoplasm of respiratory and digestive organs**
* **C79 - Secondary malignant neoplasm of other and unspecified sites**
* **C80 - Malignant neoplasm without specification of site**
* **D47.2 - Monoclonal gammopathy**
* **D48 - Neoplasm of uncertain behavior of other and unspecified sites**
* **D49 - Neoplasms of unspecified behavior**
<code>
# Dataframe of ICD-10 neoplasm codes that exclude colorectal cancer, metastasis, or benign neoplasms.
other_cancer_10 = (
cancer_10[~cancer_10['diagnosis_code'].str.match('C1[89]|'
'C2([06]|18|29)|'
'C44|'
'C7[789]|'
'C80|'
'D4(72|[89])')]
)
</code>
<code>
other_cancer_10.loc[:,'other_cancer_10'] = 1
</code>
<code>
# Drop duplicates.
other_cancer_10 = (
other_cancer_10
.drop_duplicates(subset = 'PatientID', keep = 'first')
.filter(items = ['PatientID', 'other_cancer_10'])
)
</code>
<code>
row_ID(other_cancer_10)
</code>
<code>
# Append missing test IDs.
other_cancer_10 = (
other_cancer_10
.append(
pd.Series(test_IDs)[~pd.Series(test_IDs).isin(other_cancer_10['PatientID'])].to_frame(name = 'PatientID'),
sort = False)
.fillna(0)
)
</code>
<code>
row_ID(other_cancer_10)
</code>
<code>
other_cancer = pd.merge(other_cancer_9, other_cancer_10, on = 'PatientID')
</code>
<code>
# Combine other_cancer_9 and other_cancer_10; replace values equal to 2 with 1.
other_cancer = (
other_cancer
.assign(other_cancer = other_cancer['other_cancer_9'] + other_cancer['other_cancer_10'])
.filter(items = ['PatientID', 'other_cancer'])
.replace(2, 1)
)
</code>
<code>
row_ID(other_cancer)
</code>
<code>
# Percentage of patients with a cancer other than colorectal or mets.
len(other_cancer[other_cancer['other_cancer'] == 1])/len(other_cancer)
</code>
#### Sites of metastases
##### ICD-9 sites of metastases
<code>
# Create dataframe containing patients with ICD-9 codes within -90 to +30 days from advanced diagnosis and remove duplicate codes.
diagnosis_mets_9 = (
diagnosis
.query('diagnosis_date_diff >= -90 and diagnosis_date_diff <= 30')
.query('DiagnosisCodeSystem == "ICD-9-CM"')
.drop_duplicates(subset = ['PatientID', 'DiagnosisCode'], keep = 'first')
.filter(items = ['PatientID', 'DiagnosisCode', 'diagnosis_code'])
)
</code>
**Sites of metastasis will be grouped into the following categories according to ICD-9 codes:**
* **Thorax - 197.0, 197.1, 197.2, and 197.3**
* **Peritoneum - 197.6**
* **Liver - 197.7**
* **Other GI - 197.4 and 197.8**
* **CNS - 198.3 and 198.4**
* **Bone - 198.5**
* **Other - 198.0, 198.1, 198.2, 198.6, 198.7, 198.8, and 196**
<code>
diagnosis_mets_9['thorax_met'] = np.where(diagnosis_mets_9['diagnosis_code'].str.match('197[0123]'), 1, 0)
</code>
<code>
diagnosis_mets_9['peritoneum_met'] = np.where(diagnosis_mets_9['diagnosis_code'].str.match('1976'), 1, 0)
</code>
<code>
diagnosis_mets_9['liver_met'] = np.where(diagnosis_mets_9['diagnosis_code'].str.match('1977'), 1, 0)
</code>
<code>
diagnosis_mets_9['other_gi_met'] = np.where(diagnosis_mets_9['diagnosis_code'].str.match('197[48]'), 1, 0)
</code>
<code>
diagnosis_mets_9['cns_met'] = np.where(diagnosis_mets_9['diagnosis_code'].str.match('198[34]'), 1, 0)
</code>
<code>
diagnosis_mets_9['bone_met'] = np.where(diagnosis_mets_9['diagnosis_code'].str.match('1985'), 1, 0)
</code>
<code>
diagnosis_mets_9['other_met'] = np.where(diagnosis_mets_9['diagnosis_code'].str.match('198[012678]|'
'196'), 1, 0)
</code>
<code>
# Collapse columns and sum.
diagnosis_mets_9 = (
diagnosis_mets_9
.drop(columns = ['DiagnosisCode', 'diagnosis_code'])
.groupby('PatientID').sum()
.reset_index()
)
</code>
##### ICD-10 sites of metastases
<code>
# Create dataframe containing patients with ICD-10 codes within -90 to +30 days from advanced diagnosis and remove duplicate codes.
diagnosis_mets_10 = (
diagnosis
.query('diagnosis_date_diff >= -90 and diagnosis_date_diff <= 30')
.query('DiagnosisCodeSystem == "ICD-10-CM"')
.drop_duplicates(subset = ['PatientID', 'DiagnosisCode'], keep = 'first')
.filter(items = ['PatientID', 'DiagnosisCode', 'diagnosis_code'])
)
</code>
**Sites of metastasis will be grouped into the following categories according to ICD-10 codes:**
* **Thorax - C78.0, C78.1, C78.2, and C78.3**
* **Peritoneum - C78.6**
* **Liver - C78.7**
* **Other GI - C78.4 and C78.8**
* **CNS - C79.3 and C79.4**
* **Bone - C79.5**
* **Other - C77, C79.0, C79.1, C79.2, C79.6, C79.7, C79.8, and C79.9**
<code>
diagnosis_mets_10['thorax_met'] = np.where(diagnosis_mets_10['diagnosis_code'].str.match('C78[0123]'), 1, 0)
</code>
<code>
diagnosis_mets_10['peritoneum_met'] = np.where(diagnosis_mets_10['diagnosis_code'].str.match('C786'), 1, 0)
</code>
<code>
diagnosis_mets_10['liver_met'] = np.where(diagnosis_mets_10['diagnosis_code'].str.match('C787'), 1, 0)
</code>
<code>
diagnosis_mets_10['other_gi_met'] = np.where(diagnosis_mets_10['diagnosis_code'].str.match('C78[48]'), 1, 0)
</code>
<code>
diagnosis_mets_10['cns_met'] = np.where(diagnosis_mets_10['diagnosis_code'].str.match('C79[34]'), 1, 0)
</code>
<code>
diagnosis_mets_10['bone_met'] = np.where(diagnosis_mets_10['diagnosis_code'].str.match('C795'), 1, 0)
</code>
<code>
diagnosis_mets_10['other_met'] = np.where(diagnosis_mets_10['diagnosis_code'].str.match('C77|'
'C79[0126789]'), 1, 0)
</code>
<code>
# Collapse columns and sum.
diagnosis_mets_10 = (
diagnosis_mets_10
.drop(columns = ['DiagnosisCode', 'diagnosis_code'])
.groupby('PatientID').sum()
.reset_index()
)
</code>
<code>
# Merge ICD-9 and ICD-10 mets tables; collapse and sum.
diagnosis_mets = (
pd.concat([diagnosis_mets_9, diagnosis_mets_10])
.groupby('PatientID').sum()
)
</code>
<code>
# All values >1 replaced by 1.
diagnosis_mets = (
diagnosis_mets.mask(diagnosis_mets > 1, 1)
.reset_index()
)
</code>
<code>
# Append missing test IDs.
diagnosis_mets = (
diagnosis_mets.append(
pd.Series(test_IDs)[~pd.Series(test_IDs).isin(diagnosis_mets['PatientID'])].to_frame(name = 'PatientID'),
sort = False)
.fillna(0)
)
</code>
<code>
row_ID(diagnosis_mets)
</code>
#### Merge
<code>
diagnosis_wide = pd.merge(diagnosis_elixhauser, other_cancer, on = 'PatientID')
</code>
<code>
diagnosis_wide = pd.merge(diagnosis_wide, diagnosis_mets, on = 'PatientID')
</code>
<code>
row_ID(diagnosis_wide)
</code>
<code>
list(diagnosis_wide.columns)
</code>
<code>
%whos DataFrame
</code>
<code>
# Keep biomarker_wide, demographics, diagnosis_wide, ecog_diagnosis_wide, enhanced_met, insurance_wide,
# lab_wide, med_admin_wide, mortality, and weight_wide
del cancer_10
del cancer_9
del colon_location
del colon_location_10
del colon_location_9
del diagnosis
del diagnosis_elix_10
del diagnosis_elix_10_wide
del diagnosis_elix_9
del diagnosis_elix_9_wide
del diagnosis_elixhauser
del diagnosis_mets
del diagnosis_mets_10
del diagnosis_mets_9
del other_cancer
del other_cancer_10
del other_cancer_9
</code>
### SocialDeterminantsOfHealth
<code>
sdoh = pd.read_csv('SocialDeterminantsOfHealth.csv')
</code>
<code>
sdoh = sdoh[sdoh['PatientID'].isin(test_IDs)]
</code>
<code>
row_ID(sdoh)
</code>
**Measures the area-level socioeconomic status (SES) of a patient between 2015 and 2019 based on their most recent address.**
<code>
conditions = [
(sdoh['SESIndex2015_2019'] == '5 - Highest SES'),
(sdoh['SESIndex2015_2019'] == '1 - Lowest SES')]
choices = ['5', '1']
sdoh.loc[:, 'ses'] = np.select(conditions, choices, default = sdoh['SESIndex2015_2019'])
</code>
<code>
sdoh = sdoh.drop(columns = ['PracticeID', 'SESIndex2015_2019'])
</code>
<code>
sdoh_wide = (
sdoh
.append(
pd.Series(test_IDs)[~pd.Series(test_IDs).isin(sdoh['PatientID'])].to_frame(name = 'PatientID'),
sort = False)
)
</code>
<code>
row_ID(sdoh_wide)
</code>
<code>
%whos DataFrame
</code>
<code>
# Keep biomarker_wide, demographics, diagnosis_wide, ecog_diagnosis_wide, enhanced_met, insurance_wide,
# lab_wide, med_admin_wide, mortality, sdoh_wide, and weight_wide
del sdoh
</code>
## Part 3: File merge
<code>
enhanced_met = enhanced_met.drop(columns = ['diagnosis_date', 'met_date', 'CrcSite'])
</code>
<code>
test_full = pd.merge(demographics, enhanced_met, on = 'PatientID')
</code>
<code>
test_full = pd.merge(test_full, mortality, on = 'PatientID')
</code>
<code>
test_full = pd.merge(test_full, med_admin_wide, on = 'PatientID')
</code>
<code>
test_full = pd.merge(test_full, biomarker_wide, on = 'PatientID')
</code>
<code>
test_full = pd.merge(test_full, insurance_wide, on = 'PatientID')
</code>
<code>
test_full = pd.merge(test_full, ecog_diagnosis_wide, on = 'PatientID')
</code>
<code>
test_full = pd.merge(test_full, weight_wide, on = 'PatientID')
</code>
<code>
test_full = pd.merge(test_full, lab_wide, on = 'PatientID')
</code>
<code>
test_full = pd.merge(test_full, diagnosis_wide, on = 'PatientID')
</code>
<code>
test_full = pd.merge(test_full, sdoh_wide, on = 'PatientID')
</code>
<code>
row_ID(test_full)
</code>
<code>
len(test_full.columns)
</code>
<code>
list(test_full.columns)
</code>
<code>
test_full.to_csv('test_full.csv', index = False, header = True)
</code>
|
{
"filename": "data_wrangling_te_1.ipynb",
"repository": "xavier-orcutt/TrialTranslator-notebooks",
"query": "transformed_from_existing",
"size": 279720,
"sha": ""
}
|
# analysis-simulation_s4.ipynb
Repository: Young-won/deepbiome
# Deep MicroBiome
Aug. 14. 2019
@ Youngwon (youngwon08@gmail.com)
<code>
import os
import json
import numpy as np
import pandas as pd
import copy
import logging
import sys
import keras.backend as k
import tensorflow as tf
import matplotlib.pyplot as plt
%matplotlib inline
os.environ['CUDA_VISIBLE_DEVICES']=''
</code>
<code>
from deepbiome.deepbiome import *
</code>
<code>
if not tf.__version__.startswith('2'):
config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))
k.set_session(tf.Session(config=config))
</code>
## Pick Models
<code>
save = False
kfold=1000
# kfold=20
network_model_keys = ['optimizer','lr','decay']
architecture_keys = ['weight_decay', 'weight_l1_penalty', #'weight_l2_penalty',
'tree_thrd', 'weight_initial',
'batch_normalization','drop_out']
network_training_keys = ['batch_size','epochs']
logging.basicConfig(format = '[%(name)-8s|%(levelname)s|%(filename)s:%(lineno)s] %(message)s',
level=logging.DEBUG)
log = logging.getLogger()
</code>
<code>
#######################################################################
# filenames = 'simulation_s0.Rmd'
# models = [
# 'simulation_s0/simulation_s0_deep',
# 'simulation_s0/simulation_s0_deep_l1',
# 'simulation_s0/simulation_s0_deepbiome',
# ]
# models_aka = [
# 'DNN',
# 'DNN+$\ell_1$',
# 'DeepBiome',
# ]
# num_classes = 0
########################################################################
# filenames = 'simulation_s1.Rmd'
# models = [
# 'simulation_s1/simulation_s1_deep',
# 'simulation_s1/simulation_s1_deep_l1',
# 'simulation_s1/simulation_s1_deepbiome',
# ]
# models_aka = [
# 'DNN',
# 'DNN+$\ell_1$',
# 'DeepBiome',
# ]
# num_classes = 0
# ########################################################################
# filenames = 'simulation_s2.Rmd'
# models = [
# 'simulation_s2/simulation_s2_deep',
# 'simulation_s2/simulation_s2_deep_l1',
# 'simulation_s2/simulation_s2_deepbiome',
# ]
# models_aka = [
# 'DNN',
# 'DNN+$\ell_1$',
# 'DeepBiome',
# ]
# num_classes = 1
# #######################################################################
# filenames = 'simulation_s3.Rmd'
# models = [
# 'simulation_s3/simulation_s3_deep',
# 'simulation_s3/simulation_s3_deep_l1',
# 'simulation_s3/simulation_s3_deepbiome',
# ]
# models_aka = [
# 'DNN',
# 'DNN+$\ell_1$',
# 'DeepBiome',
# ]
# num_classes = 3
# # ########################################################################
filenames = 'simulation_s4.Rmd'
models = [
'simulation_s4/simulation_s4_deep',
'simulation_s4/simulation_s4_deep_l1',
'simulation_s4/simulation_s4_deepbiome',
]
models_aka = [
'DNN',
'DNN+$\ell_1$',
'DeepBiome',
]
num_classes = 0
######################################################################
# filenames = 'simulation_s5.Rmd'
# models = [
# 'simulation_s5/simulation_s5_deep',
# 'simulation_s5/simulation_s5_deep_l1',
# 'simulation_s5/simulation_s5_deepbiome',
# ]
# models_aka = [
# 'DNN',
# 'DNN+$\ell_1$',
# 'DeepBiome',
# ]
# num_classes = 0
########################################################################
</code>
<code>
model_network_info = {}
model_path_info = {}
for model_path in models:
config_data = configuration.Configurator('%s/config/path_info.cfg' % model_path, log, verbose=False)
config_data.set_config_map(config_data.get_section_map())
config_network = configuration.Configurator('%s/config/network_info.cfg' % model_path, log, verbose=False)
config_network.set_config_map(config_network.get_section_map())
model_path_info[model_path] = config_data.get_config_map()
model_network_info[model_path] = config_network.get_config_map()
if num_classes == 0: y_names = ['loss','correlation_coefficient']
elif num_classes==1: y_names = ['loss','binary_accuracy','sensitivity','specificity','gmeasure', 'auc']
else: y_names=['loss','categorical_accuracy','precision','recall','f1', 'auc']
if num_classes == 0: measure_index = np.array([0,1])
elif num_classes==1: measure_index = np.array([2,3,4,1,5])
else: measure_index = np.array([1,2,3,4,5])
</code>
## Accuracy
<code>
results = []
# log.info('%20s & %s' % ('model', '& '.join(['%s ' % name for name in np.array(y_names)[[measure_index]]])))
# print('%10s & %s \\\\\ \hline' % ('model', '& '.join(['%7s & (sd) ' % name for name in np.array(y_names)[[measure_index]]])))
# for model, aka in zip(models, models_aka):
# evaluation = np.load('%s/eval.npy' % model)
# log.info('%20s: %s' % (aka, ''.join(['%10.4f (%10.4f)'%(mean, std) for mean, std in zip(np.mean(evaluation, axis=0),np.std(evaluation, axis=0))])))
# results.append(np.vstack([np.mean(evaluation, axis=0),np.std(evaluation, axis=0)]).transpose())
for model, aka in zip(models, models_aka):
train_evaluation = np.load('%s/train_eval.npy' % model)[:,measure_index]
train_res = '&'.join(['%7.3f & %7.3f'%(mean, std) for mean, std in zip(np.nanmean(train_evaluation, axis=0),np.nanstd(train_evaluation, axis=0))])
test_evaluation = np.load('%s/test_eval.npy' % model)[:,measure_index]
test_res = '&'.join(['%7.3f & %7.3f'%(mean, std) for mean, std in zip(np.nanmean(test_evaluation, axis=0),np.nanstd(test_evaluation, axis=0))])
# log.info('%s & %s & %s \\\\' % (aka, train_res, test_res))
print('%10s & %s & %s \\\\' % (aka, test_res, train_res))
# results.append(np.vstack([np.mean(evaluation, axis=0),np.std(evaluation, axis=0)]).transpose())
</code>
# Weight estimation of DeepBiome
We identify the largest weight estimates of neurons in the two hidden layers; by doing this, we can identify the strongest phylogenetic connections. We compute the True Positive Rate (``TPR``, sensitivity), True Negative Rate (``TNR``, specificity), and their geometric mean (i.e., ``g-Measure``). The false discovery rate (FDR) would be ``FDR = 1-TPR`` in our case.
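As a point of reference, the short sketch below shows how sensitivity, specificity, and the g-Measure can be computed from boolean vectors of true versus selected connections. It is an illustrative helper only, written under the standard definitions of these quantities; the results reported in this notebook come from `deepbiome_taxa_selection_performance`.
<code>
# Illustrative only: selection metrics from boolean masks of true vs. selected connections.
# The tables below are produced by deepbiome_taxa_selection_performance, not by this helper.
import numpy as np

def selection_metrics(true_mask, selected_mask):
    true_mask = np.asarray(true_mask, dtype=bool)
    selected_mask = np.asarray(selected_mask, dtype=bool)
    tp = np.sum(true_mask & selected_mask)    # truly associated and selected
    fn = np.sum(true_mask & ~selected_mask)   # truly associated but missed
    tn = np.sum(~true_mask & ~selected_mask)  # correctly left unselected
    fp = np.sum(~true_mask & selected_mask)   # selected but not truly associated
    tpr = tp / (tp + fn) if (tp + fn) else np.nan   # sensitivity (TPR)
    tnr = tn / (tn + fp) if (tn + fp) else np.nan   # specificity (TNR)
    return tpr, tnr, np.sqrt(tpr * tnr)             # g-Measure

# Toy example: 2 of 3 true connections recovered, with 1 false positive.
selection_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 1, 0, 0])
</code>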
## DNN + $\ell_1$
<code>
num=1
model_path = models[num]
model_aka = models_aka[num]
config_data = configuration.Configurator('%s/config/path_info.cfg' % model_path, log, verbose=False)
config_data.set_config_map(config_data.get_section_map())
config_network = configuration.Configurator('%s/config/network_info.cfg' % model_path, log, verbose=False)
config_network.set_config_map(config_network.get_section_map())
path_info = config_data.get_config_map()
network_info = config_network.get_config_map()
path_info['data_info']['data_path'] = '/'.join(path_info['data_info']['data_path'].split('/')[2:])
path_info['data_info']['tree_info_path'] = '/'.join(path_info['data_info']['tree_info_path'].split('/')[2:])
try: path_info['data_info']['count_list_path'] = '/'.join(path_info['data_info']['count_list_path'].split('/')[2:])
except: pass
try: path_info['data_info']['count_path'] = '/'.join(path_info['data_info']['count_path'].split('/')[2:])
except: pass
path_info['data_info']['idx_path'] = '/'.join(path_info['data_info']['idx_path'].split('/')[2:])
path_info['model_info']['model_dir'] = './%s/%s'%(model_path,path_info['model_info']['model_dir'])
log.info('%22s : %s' % ('model', model_path))
log.info('%22s : %s' % ('model_aka', model_aka))
for k in architecture_keys:
log.info('%22s : %s' % (k, network_info['architecture_info'].get(k, None)))
for k in network_model_keys:
log.info('%22s : %s' % (k, network_info['model_info'].get(k, None)))
for k in network_training_keys:
log.info('%22s : %s' % (k, network_info['training_info'].get(k, None)))
</code>
<code>
tw_1 = np.load('%s/tw_1.npy' % path_info['data_info']['data_path'])
tw_2 = np.load('%s/tw_2.npy' % path_info['data_info']['data_path'])
tw_3 = np.load('%s/tw_3.npy' % path_info['data_info']['data_path'])
tw_4 = np.load('%s/tw_4.npy' % path_info['data_info']['data_path'])
true_tree_weight_list = []
for fold in range(kfold):
true_tree_weight_list.append(np.array([tw_1[fold],tw_2[fold],tw_3[fold],tw_4[fold]]))
# true_tree_weight_list = np.array(true_tree_weight_list)
# np.save('../deepbiome/tests/data/true_weight_list.npy', true_tree_weight_list)
</code>
<code>
trained_weight_path_list = ['%s/weight/weight_%d.h5' % (path_info['model_info']['model_dir'], i) for i in range(kfold)]
</code>
<code>
summary = deepbiome_taxa_selection_performance(log, network_info, path_info, num_classes, true_tree_weight_list, trained_weight_path_list)
summary.iloc[0,0] = model_aka
</code>
<code>
summary
</code>
<code>
print('%7s & %7s & %12s & %s' % ('Model', 'PhyloTree', 'True (Total)', ' & '.join(summary.columns[4:])))
print('---------------------------------------------------------------------------------------------------------------')
for i in range(summary.shape[0]):
print('%10s & %7s & %7d (%d) & ' % tuple(summary.iloc[i,:4]) + ' &'.join(['%6.3f' % val for val in summary.iloc[i,4:]]) + ' \\\\')
# if save:
# # filenametexa = '.'.join(["%s_select_texa_1" % filename.split('.')[0], filename.split('.')[1]])
# colname = ['Tree','True (Total)','Selected','Sensitivity','Specificity','gMeasure','Accuracy']
# with open('%s/%s' % (analysis_dir, filename), mode='a') as f:
# # f.write('---\ntitle: "%s texa selection ver.1"\noutput: html_document\n---\n\n' % filename.split('.')[0])
# f.write('\n## Texa Selection Preformance (ver 1): %s\n\n' % model_aka)
# f.write('| %s |\n' % ('|'.join([v for v in colname])))
# f.write('|'+'---|'*len(colname)+'\n')
# for value in values:
# f.write('| %s |\n' % ('|'.join(value)))
</code>
## DeepBiome
<code>
num=2
model_path = models[num]
model_aka = models_aka[num]
config_data = configuration.Configurator('%s/config/path_info.cfg' % model_path, log, verbose=False)
config_data.set_config_map(config_data.get_section_map())
config_network = configuration.Configurator('%s/config/network_info.cfg' % model_path, log, verbose=False)
config_network.set_config_map(config_network.get_section_map())
path_info = config_data.get_config_map()
network_info = config_network.get_config_map()
path_info['data_info']['data_path'] = '/'.join(path_info['data_info']['data_path'].split('/')[2:])
path_info['data_info']['tree_info_path'] = '/'.join(path_info['data_info']['tree_info_path'].split('/')[2:])
try: path_info['data_info']['count_list_path'] = '/'.join(path_info['data_info']['count_list_path'].split('/')[2:])
except: pass
try: path_info['data_info']['count_path'] = '/'.join(path_info['data_info']['count_path'].split('/')[2:])
except: pass
path_info['data_info']['idx_path'] = '/'.join(path_info['data_info']['idx_path'].split('/')[2:])
path_info['model_info']['model_dir'] = './%s/%s'%(model_path,path_info['model_info']['model_dir'])
log.info('%22s : %s' % ('model', model_path))
log.info('%22s : %s' % ('model_aka', model_aka))
for k in architecture_keys:
log.info('%22s : %s' % (k, network_info['architecture_info'].get(k, None)))
for k in network_model_keys:
log.info('%22s : %s' % (k, network_info['model_info'].get(k, None)))
for k in network_training_keys:
log.info('%22s : %s' % (k, network_info['training_info'].get(k, None)))
</code>
### Performance
<code>
tw_1 = np.load('%s/tw_1.npy' % path_info['data_info']['data_path'])
tw_2 = np.load('%s/tw_2.npy' % path_info['data_info']['data_path'])
tw_3 = np.load('%s/tw_3.npy' % path_info['data_info']['data_path'])
tw_4 = np.load('%s/tw_4.npy' % path_info['data_info']['data_path'])
true_tree_weight_list = []
for fold in range(kfold):
true_tree_weight_list.append(np.array([tw_1[fold],tw_2[fold],tw_3[fold],tw_4[fold]]))
# true_tree_weight_list = np.array(true_tree_weight_list)
# np.save('../deepbiome/tests/data/true_weight_list.npy', true_tree_weight_list)
</code>
<code>
trained_weight_path_list = ['%s/weight/weight_%d.h5' % (path_info['model_info']['model_dir'], i) for i in range(kfold)]
</code>
<code>
summary = deepbiome_taxa_selection_performance(log, network_info, path_info, num_classes, true_tree_weight_list, trained_weight_path_list)
summary.iloc[0,0] = model_aka
</code>
<code>
summary
</code>
<code>
print('%7s & %7s & %12s & %s' % ('Model', 'PhyloTree', 'True (Total)', ' & '.join(summary.columns[4:])))
print('---------------------------------------------------------------------------------------------------------------')
for i in range(summary.shape[0]):
print('%10s & %7s & %7d (%d) & ' % tuple(summary.iloc[i,:4]) + ' &'.join(['%6.3f' % val for val in summary.iloc[i,4:]]) + ' \\\\')
# if save:
# # filenametexa = '.'.join(["%s_select_texa_1" % filename.split('.')[0], filename.split('.')[1]])
# colname = ['Tree','True (Total)','Selected','Sensitivity','Specificity','gMeasure','Accuracy']
# with open('%s/%s' % (analysis_dir, filename), mode='a') as f:
# # f.write('---\ntitle: "%s texa selection ver.1"\noutput: html_document\n---\n\n' % filename.split('.')[0])
# f.write('\n## Texa Selection Preformance (ver 1): %s\n\n' % model_aka)
# f.write('| %s |\n' % ('|'.join([v for v in colname])))
# f.write('|'+'---|'*len(colname)+'\n')
# for value in values:
# f.write('| %s |\n' % ('|'.join(value)))
</code>
|
{
"filename": "analysis-simulation_s4.ipynb",
"repository": "Young-won/deepbiome",
"query": "transformed_from_existing",
"size": 336263,
"sha": ""
}
|
# 03_filter_reviews.ipynb
Repository: NilsHellwig/exploring-absa-llm-augmentation
# Notebook: Filter Reviews from Collected HTMLs
## Packages
<code>
from bs4 import BeautifulSoup
import pandas as pd
import spacy
import json
import nltk
from nltk.tokenize import sent_tokenize
import re
</code>
## Settings
<code>
nltk.download('punkt')
</code>
<code>
%%capture
#!python -m spacy download de_core_news_lg
</code>
<code>
nlp = spacy.load("de_core_news_lg")
</code>
## Constants
<code>
RESTAURANT_URLS = "restaurant_metadata_with_highest_page_index.json"
REVIEWS_PATH = "reviews_dataset/reviews_urls.csv"
RANDOM_STATE = 43
</code>
## Code
### Load Dataset
<code>
reviews_df = pd.read_csv(REVIEWS_PATH)
</code>
### Load Reviews
<code>
columns = ['review_id', 'restaurant_id', 'page_index', 'title', 'date', 'author_name', 'author_location', 'text', 'rating', 'city', 'restaurant_name', 'language_code']
data_reviews = []
</code>
<code>
def load_review(review_soup):
review = {}
review["title"] = review_soup.find("div", attrs={"class": "quote"}).get_text()
review["date"] = review_soup.find(class_='ratingDate')['title']
review["author_name"] = review_soup.find(class_='scrname').get_text()
user_location_element = review_soup.find(class_='userLocation')
if user_location_element:
user_location = user_location_element.get_text()
else:
user_location = None
review["author_location"] = user_location
review["text"] = review_soup.find(class_='partial_entry').get_text()
review["rating"] = int(review_soup.find(class_='reviewItemInline').find('span', class_='ui_bubble_rating')['class'][1].split('_')[1]) / 10
return review
</code>
<code>
for index, row in reviews_df.iterrows():
path_review = "reviews_restaurants_html/restaurant_" + str(row['restaurant_id']) + "_review_" + str(row["review_id"]) + ".html"
with open(path_review, 'r', encoding='utf-8') as file:
html_content = file.read()
doc_soup = BeautifulSoup(html_content, 'html.parser')
review_soup = doc_soup.find(id="review_"+str(row["review_id"]))
review = load_review(review_soup)
try:
review["language_code"] = doc_soup.find("div", class_="prw_reviews_user_links_hsx").span["data-language"]
except:
review["language_code"] = "not defined"
review["restaurant_name"] = doc_soup.find("a", attrs={"class": "HEADING"}).get_text()[1:-1]
review["review_id"] = row["review_id"]
review["restaurant_id"] = row["restaurant_id"]
review["page_index"] = row["page_index"]
data_reviews.append(review)
</code>
<code>
df_reviews = pd.DataFrame(data_reviews, columns=columns)
df_reviews
</code>
### Add city
<code>
with open(RESTAURANT_URLS, 'r') as json_file:
restaurant_metadata = json.load(json_file)
</code>
<code>
restaurant_dict = {entry['id']: entry['city'] for entry in restaurant_metadata}
restaurant_dict_str = {int(k): v for k, v in restaurant_dict.items()}
df_reviews['city'] = df_reviews['restaurant_id'].map(restaurant_dict_str)
df_reviews
</code>
### Remove line breaks
<code>
df_reviews['text'] = df_reviews['text'].str.replace('\n', ' ')
</code>
### Check for Duplicates
<code>
duplicate_rows = df_reviews[df_reviews.duplicated(subset=['review_id'], keep=False)]
duplicate_rows
</code>
### Delete Examples without Data
In rare cases, the review text is not returned with the GET request for the restaurant's review page. These reviews are now excluded.
<code>
df_reviews = df_reviews.drop(df_reviews[(df_reviews['text'] == '') | (df_reviews['title'] == '')].index)
</code>
### Remove Reviews Posted Before October 2022
<code>
month_mapping = {
"Januar": 1, "Februar": 2, "März": 3, "April": 4, "Mai": 5, "Juni": 6,
"Juli": 7, "August": 8, "September": 9, "Oktober": 10, "November": 11, "Dezember": 12
}
def convert_date(date_string):
day, month_name, year = date_string.split()
day = day.replace(".", "")
month = month_mapping[month_name]
return pd.Timestamp(int(year), month, int(day))
df_reviews["date"] = df_reviews["date"].apply(convert_date)
</code>
<code>
df_reviews = df_reviews[df_reviews["date"] >= pd.Timestamp(2022, 10, 15)]
df_reviews = df_reviews[df_reviews["date"] < pd.Timestamp(2023, 10, 16)]
</code>
<code>
df_reviews.reset_index(drop=True, inplace=True)
</code>
### Anonymise Restaurant Chains
<code>
df_reviews["text_noanonymization"] = df_reviews["text"]
</code>
<code>
def anonymize_entities(text):
doc = nlp(text)
for ent in doc.ents:
if ent.label_ in ["LOC", "PERSON", "DATE"] and ent.label_ != "Essen":
text = text.replace(ent.text, f"{ent.label_}")
return text
df_reviews["text"] = df_reviews["text"].apply(anonymize_entities)
</code>
<code>
def anonymize_restaurant_name_chain(text):
restaurant_names = [
"vapiano",
"hans im glück",
"hans ins glück",
"dean&david",
"dean und david",
"dean & david",
"dean and david",
"losteria",
"l osteria",
"l'osteria",
"l‘osteria",
"l´osteria",
"Llosteria",
"L’Osteria",
"la osteria",
"L`Osteria",
"L’Hosteria",
"blockhouse",
"block house",
"block hause",
"blockhaus",
"blockouse",
"Block Houses",
"vapianos",
]
for name in restaurant_names:
text = re.sub(r'\b' + re.escape(name) + r'\b', "RESTAURANT_NAME", text, flags=re.IGNORECASE)
return text
df_reviews["text"] = df_reviews["text"].apply(anonymize_restaurant_name_chain)
</code>
### Anonymise Restaurant Name
<code>
def anonymize_restaurant_name(text, restaurant_name):
return re.sub(re.escape(restaurant_name), "RESTAURANT_NAME", text, flags=re.IGNORECASE)
df_reviews["text"] = df_reviews.apply(lambda row: anonymize_restaurant_name(row["text"], row["restaurant_name"]), axis=1)
</code>
### Anonymise Username
<code>
def anonymize_username(text, username):
return text.replace(username, "PERSON")
df_reviews["text"] = df_reviews.apply(lambda row: anonymize_username(row["text"], row["author_name"]), axis=1)
</code>
### Filter German reviews by language code
We only consider reviews written in German.
<code>
df_reviews = df_reviews.drop(df_reviews[(df_reviews['language_code'] != 'de')].index)
</code>
### Store as .csv
<code>
df_reviews.to_csv("reviews_dataset/reviews.csv")
</code>
<code>
df_reviews
</code>
|
{
"filename": "03_filter_reviews.ipynb",
"repository": "NilsHellwig/exploring-absa-llm-augmentation",
"query": "transformed_from_existing",
"size": 63137,
"sha": ""
}
|
# jupyter_1.ipynb
Repository: hmelberg/causal
# Jupyter Notebooks <img src="http://blog.jupyter.org/content/images/2015/02/jupyter-sq-text.png" width='150' align='right'>
## for Collaborative and Reproducible Research
## Reproducible Research
> reproducing conclusions from a single experiment based on the measurements from that experiment
The most basic form of reproducibility is a complete description of the data and associated analyses (including code!) so the results can be *exactly* reproduced by others.
Reproducing calculations can be onerous, even with one's own work!
Scientific data are becoming larger and more complex, making simple descriptions inadequate for reproducibility. As a result, most modern research is irreproducible without tremendous effort.
***Reproducible research is not yet part of the culture of science in general, or scientific computing in particular.***
## Scientific Computing Workflow
There are a number of steps to scientific endeavors that involve computing:

Many of the standard tools impose barriers between one or more of these steps. This can make it difficult to iterate and to reproduce work.
The Jupyter notebook [eliminates or reduces these barriers to reproducibility](http://www.nature.com/news/interactive-notebooks-sharing-the-code-1.16261).
Jupyter/IPython notebooks have already motivated the generation of [reproducible publications](https://github.com/ipython/ipython/wiki/A-gallery-of-interesting-IPython-Notebooks#reproducible-academic-publications) and an [open source statistics textbook](http://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/)
## Jupyter Notebook
The Jupyter Notebook is an **interactive computing environment** that enables users to author notebook documents that include:
- Live code
- Interactive widgets
- Plots
- Narrative text
- Equations
- Images
- Video
These documents provide a **complete and self-contained record of a computation** that can be converted to various formats and shared with others using email, [Dropbox](http://dropbox.com), version control systems (like git/[GitHub](http://github.com)) or [nbviewer.ipython.org](http://nbviewer.ipython.org).
### Components
The Jupyter Notebook combines three components:
* **The notebook web application**: An interactive web application for writing and running code interactively and authoring notebook documents.
* **Kernels**: Separate processes started by the notebook web application that runs users' code in a given language and returns output back to the notebook web application. The kernel also handles things like computations for interactive widgets, tab completion and introspection.
* **Notebook documents**: Self-contained documents that contain a representation of all content visible in the notebook web application, including inputs and outputs of the computations, narrative
text, equations, images, and rich media representations of objects. Each notebook document has its own kernel.
## Kernels
Through IPython's kernel and messaging architecture, the Notebook allows code to be run in a range of different programming languages. For each notebook document that a user opens, the web application starts a kernel that runs the code for that notebook. Each kernel is capable of running code in a single programming language and there are kernels available in the following languages:
* [Python](https://github.com/ipython/ipython)
* [Julia](https://github.com/JuliaLang/IJulia.jl)
* [R](https://github.com/takluyver/IRkernel)
* [Ruby](https://github.com/minrk/iruby)
* [Haskell](https://github.com/gibiansky/IHaskell)
* [Scala](https://github.com/Bridgewater/scala-notebook)
* [node.js](https://github.com/n-riesco/ijavascript)
* [Go](https://github.com/takluyver/igo)
The default kernel runs Python code. IPython 3.0 provides a simple way for users to pick which of these kernels is used for a given notebook.
Each of these kernels communicates with the notebook web application and web browser using a JSON over ZeroMQ/WebSockets message protocol that is described [here](http://ipython.org/ipython-doc/dev/development/messaging.html). Most users don't need to know about these details, but it helps to understand that "kernels run code."
## Notebook Documents
Notebook documents contain the **inputs and outputs** of an interactive session as well as **narrative text** that accompanies the code but is not meant for execution. **Rich output** generated by running code, including HTML, images, video, and plots, is embedded in the notebook, which makes it a complete and self-contained record of a computation.
When you run the notebook web application on your computer, notebook documents are just **files on your local filesystem with a `.ipynb` extension**. This allows you to use familiar workflows for organizing your notebooks into folders and sharing them with others.
Notebooks consist of a **linear sequence of cells**. There are three basic cell types:
* **Code cells:** Input and output of live code that is run in the kernel
* **Markdown cells:** Narrative text with embedded LaTeX equations
* **Raw cells:** Unformatted text that is included, without modification, when notebooks are converted to different formats using nbconvert
Internally, notebook documents are **[JSON](http://en.wikipedia.org/wiki/JSON) data** with **binary values [base64](http://en.wikipedia.org/wiki/Base64)** encoded. This allows them to be **read and manipulated programmatically** by any programming language. Because JSON is a text format, notebook documents are version control friendly.
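Because the on-disk format is plain JSON, a notebook can be inspected with nothing more than the standard library. Here is a minimal sketch; the filename is only an example, and it assumes the current nbformat 4 layout with a top-level `cells` list.
<code>
import json
from collections import Counter

# Any .ipynb file will do; this name is just an example.
with open("Introduction to Jupyter Notebooks.ipynb") as f:
    nb = json.load(f)

print(nb["nbformat"], nb["nbformat_minor"])
print(Counter(cell["cell_type"] for cell in nb["cells"]))
</code>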
**Notebooks can be exported** to different static formats including HTML, reStructuredText, LaTeX, PDF, and slide shows ([reveal.js](http://lab.hakim.se/reveal-js/#/)) using IPython's `nbconvert` utility.
Furthermore, any notebook document available from a **public URL** can be shared via [nbviewer](http://nbviewer.ipython.org). This service loads the notebook document from the URL and renders it as a static web page. The resulting web page may thus be shared with others **without their needing to install IPython**.
## Installation and Configuration
While Jupyter runs code in many different programming languages, Python is a prerequisite for installing Jupyter notebook.
Perhaps the easiest way to get a feature-complete version of Python on your system is to install the [Anaconda](http://continuum.io/downloads.html) distribution by Continuum Analytics. Anaconda is a completely free Python environment that includes almost 200 of the best Python packages for science and data analysis. It's simply a matter of downloading the installer (either graphical or command line) and running it on your system.
Be sure to download the Python 3.5 installer, by following the **Python 3.5 link** for your computing platform (Mac OS X example shown below).

Once Python is installed, installing Jupyter is a matter of running a single command:
conda install jupyter
If you prefer to install Jupyter from source, or you did not use Anaconda to install Python, you can also use `pip`:
pip install jupyter
## Installing Kernels
Individual language kernels must be installed from each respective language. We will show the R kernel installation as an example.
Setting up the R kernel involves two commands from within the R shell. The first installs the packages:
```r
install.packages(c('repr', 'IRkernel', 'IRdisplay'),
repos = c('http://irkernel.github.io/', getOption('repos')))
```
and the second links the kernel to Jupyter:
```r
IRkernel::installspec()
```
## Running Jupyter Notebooks
Once installed, a notebook session can be initiated from the command line via:
jupyter notebook
If you installed Jupyter via Anaconda, you will also have a graphical launcher available.
## IPython
**IPython** (Interactive Python) is an enhanced Python shell which provides a more robust and productive development environment for users. There are several key features that set it apart from the standard Python shell.
* Interactive data analysis and visualization
* Python kernel for Jupyter notebooks
* Easy parallel computation
Over time, the IPython project grew to include several components, including:
* an interactive shell
* a REPL protocol
* a notebook document format
* a notebook document conversion tool
* a web-based notebook authoring tool
* tools for building interactive UI (widgets)
* interactive parallel Python
As each component has evolved, several have grown to the point that they warranted projects of their own. For example, pieces like the notebook and protocol are not even specific to Python. As a result, the IPython team created Project Jupyter, which is the new home of language-agnostic projects that began as part of IPython, such as the notebook in which you are reading this text.
The HTML notebook that is part of the Jupyter project supports **interactive data visualization** and easy high-performance **parallel computing**.
<code>
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
def f(x):
return (x-3)*(x-5)*(x-7)+85
import numpy as np
x = np.linspace(0, 10, 200)
y = f(x)
plt.plot(x,y)
</code>
The Notebook gives you everything that a browser gives you. For example, you can embed images, videos, or entire websites.
<code>
from IPython.display import IFrame
IFrame('http://biostat.mc.vanderbilt.edu/wiki', width='100%', height=350)
</code>
<code>
from IPython.display import YouTubeVideo
YouTubeVideo("rl5DaFbLc60")
</code>
# Running Code
First and foremost, the IPython Notebook is an interactive environment for writing and running code. IPython is capable of running code in a wide range of languages. However, this notebook, and the default kernel in IPython 3, runs Python code.
## Code cells allow you to enter and run Python code
Run a code cell using `Shift-Enter` or pressing the <button class='btn btn-default btn-xs'><i class="icon-play fa fa-play"></i></button> button in the toolbar above:
<code>
a = 10
</code>
<code>
print(a)
</code>
There are three keyboard shortcuts for running code:
* `Shift-Enter` runs the current cell, enters command mode, and select next cell.
* `Ctrl-Enter` runs the current cell and enters command mode.
* `Alt-Enter` runs the current cell and inserts a new one below, enters edit mode.
These keyboard shortcuts work in both command and edit mode.
## Managing the IPython Kernel
Code is run in a separate process called the IPython Kernel. The Kernel can be interrupted or restarted. Try running the following cell and then hit the <button class='btn btn-default btn-xs'><i class='icon-stop fa fa-stop'></i></button> button in the toolbar above.
<code>
import time
time.sleep(10)
</code>
If the Kernel dies it will be automatically restarted up to 3 times.
If it cannot be restarted automatically you will be prompted to try again, or abort.
Here we call the low-level system libc.time routine with the wrong argument via
ctypes to segfault the Python interpreter:
<code>
import sys
from ctypes import CDLL
# This will crash a Linux or Mac system
# equivalent calls can be made on Windows
dll = 'dylib' if sys.platform == 'darwin' else 'so.6'
libc = CDLL("libc.%s" % dll)
libc.time(-1) # BOOM!!
</code>
## Cell menu
The "Cell" menu has a number of menu items for running code in different ways. These includes:
* Run
* Run and Select Below
* Run and Insert Below
* Run All
* Run All Above
* Run All Below
## Restarting the kernels
The kernel maintains the state of a notebook's computations. You can reset this state by restarting the kernel. This is done by clicking on the <button class='btn btn-default btn-xs'><i class='fa fa-repeat icon-repeat'></i></button> in the toolbar above, or by using the `00` (press 0 twice) shortcut in command mode.
## Output is asynchronous
All output is displayed asynchronously as it is generated in the Kernel. If you execute the next cell, you will see the output one piece at a time, not all at the end.
<code>
import time, sys
for i in range(8):
print(i)
time.sleep(0.5)
</code>
## Large outputs
To better handle large outputs, the output area can be collapsed. Run the following cell and then single- or double- click on the active area to the left of the output:
<code>
for i in range(50):
print(i)
</code>
Beyond a certain point, output will scroll automatically:
<code>
for i in range(500):
print(2**i - 1)
</code>
## Markdown cells
Markdown is a simple *markup* language that allows plain text to be converted into HTML.
The advantages of using Markdown over HTML (and LaTeX):
- its a **human-readable** format
- allows writers to focus on content rather than formatting and layout
- easier to learn and use
For example, instead of writing:
```html
<p>In order to create valid
<a href="http://en.wikipedia.org/wiki/HTML">HTML</a>, you
need properly coded syntax that can be cumbersome for
“non-programmers” to write. Sometimes, you
just want to easily make certain words <strong>bold
</strong>, and certain words <em>italicized</em> without
having to remember the syntax. Additionally, for example,
creating lists:</p>
<ul>
<li>should be easy</li>
<li>should not involve programming</li>
</ul>
```
we can write the following in Markdown:
```markdown
In order to create valid [HTML], you need properly
coded syntax that can be cumbersome for
"non-programmers" to write. Sometimes, you just want
to easily make certain words **bold**, and certain
words *italicized* without having to remember the
syntax. Additionally, for example, creating lists:
* should be easy
* should not involve programming
```
### Emphasis
Markdown uses `*` (asterisk) and `_` (underscore) characters as
indicators of emphasis.
*italic*, _italic_
**bold**, __bold__
***bold-italic***, ___bold-italic___
*italic*, _italic_
**bold**, __bold__
***bold-italic***, ___bold-italic___
### Lists
Markdown supports both unordered and ordered lists. Unordered lists can use `*`, `-`, or
`+` to define a list. This is an unordered list:
* Apples
* Bananas
* Oranges
* Apples
* Bananas
* Oranges
Ordered lists are numbered lists in plain text:
1. Bryan Ferry
2. Brian Eno
3. Andy Mackay
4. Paul Thompson
5. Phil Manzanera
1. Bryan Ferry
2. Brian Eno
3. Andy Mackay
4. Paul Thompson
5. Phil Manzanera
### Links
Markdown inline links are equivalent to HTML `<a href='foo.com'>`
links, they just have a different syntax.
[Biostatistics home page](http://biostat.mc.vanderbilt.edu "Visit Biostat!")
[Biostatistics home page](http://biostat.mc.vanderbilt.edu "Visit Biostat!")
### Block quotes
Block quotes are denoted by a `>` (greater than) character
before each line of the block quote.
> Sometimes a simple model will outperform a more complex model . . .
> Nevertheless, I believe that deliberately limiting the complexity
> of the model is not fruitful when the problem is evidently complex.
> Sometimes a simple model will outperform a more complex model . . .
> Nevertheless, I believe that deliberately limiting the complexity
> of the model is not fruitful when the problem is evidently complex.
### Images
Images look an awful lot like Markdown links, they just have an extra
`!` (exclamation mark) in front of them.


### Mathjax Support
MathJax is a JavaScript implementation of LaTeX that allows equations (e.g., $\alpha$) to be embedded into HTML. For example, this markup:
"""$$ \int_{a}^{b} f(x)\, dx \approx \frac{1}{2} \sum_{k=1}^{N} \left( x_{k} - x_{k-1} \right) \left( f(x_{k}) + f(x_{k-1}) \right). $$"""
becomes this:
$$
\int_{a}^{b} f(x)\, dx \approx \frac{1}{2} \sum_{k=1}^{N} \left( x_{k} - x_{k-1} \right) \left( f(x_{k}) + f(x_{k-1}) \right).
$$
## Running other Kernels
The kernel of a Jupyter session can be switched from the menu. [Here is an example of a notebook running R code](rtutorial.ipynb).
## IPython in Jupyter Notebooks
Running IPython within a Jupyter Notebook provides an enhanced interactive scientific computing environment.
### SymPy
SymPy is a Python library for symbolic mathematics. It supports:
* polynomials
* calculus
* solving equations
* discrete math
* matrices
<code>
from sympy import *
init_printing()
x, y = symbols("x y")
</code>
<code>
eq = ((x+y)**2 * (x+1))
eq
</code>
<code>
expand(eq)
</code>
<code>
(1/cos(x)).series(x, 0, 6)
</code>
<code>
limit((sin(x)-x)/x**3, x, 0)
</code>
<code>
diff(cos(x**2)**2 / (1+x), x)
</code>
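The cells above demonstrate polynomials and calculus; equation solving and matrices, also listed among SymPy's features above, look like this (a brief illustrative addition that reuses the `x` defined earlier):
<code>
# Equation solving and a symbolic matrix determinant; x comes from symbols() above.
roots = solve(x**2 - 2, x)
M = Matrix([[1, x], [x, 1]])
roots, M.det()
</code>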
### Magic functions
IPython has a set of predefined ‘magic functions’ that you can call with a command line style syntax. These include:
* `%run`
* `%edit`
* `%debug`
* `%timeit`
* `%paste`
* `%load_ext`
<code>
%lsmagic
</code>
IPython also creates aliases for a few common interpreters, such as bash, ruby, perl, etc.
These are all equivalent to `%%script <name>`
<code>
%%ruby
puts "Hello from Ruby #{RUBY_VERSION}"
</code>
<code>
%%bash
echo "hello from $BASH"
</code>
IPython has an `rmagic` extension that contains some magic functions for working with R via rpy2. This extension can be loaded using the `%load_ext` magic as follows:
<code>
%load_ext rpy2.ipython
</code>
If the above generates an error, it is likely that you do not have the `rpy2` module installed. You can install this now via:
<code>
!pip install rpy2
</code>
or, if you are running Anaconda, via `conda`:
<code>
!conda install rpy2
</code>
<code>
%R print(lm(rnorm(10)~rnorm(10)))
print('i am python')
</code>
<code>
import numpy as np
x,y = np.arange(10), np.random.normal(size=10)
</code>
<code>
%%R -i x,y -o XYcoef
lm.fit <- lm(y~x)
par(mfrow=c(2,2))
print(summary(lm.fit))
plot(lm.fit)
XYcoef <- coef(lm.fit)
</code>
<code>
XYcoef
</code>
### Remote Code
Use `%load` to add remote code
<code>
# %load http://matplotlib.org/mpl_examples/shapes_and_collections/scatter_demo.py
"""
Simple demo of a scatter plot.
"""
import numpy as np
import matplotlib.pyplot as plt
N = 50
x = np.random.rand(N)
y = np.random.rand(N)
colors = np.random.rand(N)
area = np.pi * (15 * np.random.rand(N))**2 # 0 to 15 point radiuses
plt.scatter(x, y, s=area, c=colors, alpha=0.5)
plt.show()
</code>
### Debugging and Profiling
The `%debug` magic can be used to trigger the IPython debugger (`ipdb`) for a cell that raises an exception. The debugger allows you to step through code line by line, inspect variables, and execute code.
<code>
import numpy
def abc(y, N, epsilon=[0.2, 0.8]):
trace = []
while len(trace) < N:
# Simulate from priors
mu = numpy.random.normal(0, 10)
sigma = numpy.random.uniform(0, 20)
x = numpy.random.normal(mu, sigma, 50)
#if (np.linalg.norm(y - x) < epsilon):
# NOTE: epsilon is a list here, so this float-vs-list comparison raises a TypeError
# (the %debug cell below relies on this cell raising an exception)
if ((abs(x.mean() - y.mean()) < epsilon) &
(abs(x.std() - y.std()) < epsilon[1])):
trace.append([mu, sigma])
return trace
</code>
<code>
y = numpy.random.normal(4, 2, 50)
abc(y, 10)
</code>
<code>
%debug
</code>
Timing the execution of code is easy with the `timeit` magic:
<code>
%timeit [i**2 for i in range(1000)]
</code>
<code>
%timeit numpy.arange(1000)**2
</code>
## Exporting and Converting Notebooks
In Jupyter, one can convert an `.ipynb` notebook document file into various static formats via the `nbconvert` tool. Currently, nbconvert is a command line tool, run as a script using Jupyter.
<code>
!jupyter nbconvert --to html "Introduction to Jupyter Notebooks.ipynb"
</code>
Currently, `nbconvert` supports HTML (default), LaTeX, Markdown, reStructuredText, Python and HTML5 slides for presentations. Some types can be post-processed, such as LaTeX to PDF (this requires [Pandoc](http://johnmacfarlane.net/pandoc/) to be installed, however).
<code>
!jupyter nbconvert --to pdf "Introduction to Jupyter Notebooks.ipynb"
</code>
A very useful online service is the [IPython Notebook Viewer](http://nbviewer.ipython.org) which allows you to display your notebook as a static HTML page, which is useful for sharing with others:
<code>
from IPython.display import IFrame  # ensure IFrame is available in this cell
IFrame("http://nbviewer.ipython.org/2352771", width='100%', height=350)
</code>
GitHub supports the [rendering of Jupyter Notebooks](https://gist.github.com/fonnesbeck/670e777406a2f2bfb67e) stored on its repositories.
## Parallel IPython
The IPython architecture consists of four components, which reside in the `ipyparallel` package:
1. **Engine** The IPython engine is a Python instance that accepts Python commands over a network connection. When multiple engines are started, parallel and distributed computing becomes possible. An important property of an IPython engine is that it blocks while user code is being executed.
2. **Hub** The hub keeps track of engine connections, schedulers, and clients, and persists all task requests and results in a database for later use.
3. **Schedulers** All actions that can be performed on the engine go through a Scheduler. While the engines themselves block when user code is run, the schedulers hide that from the user to provide a fully asynchronous interface to a set of engines.
4. **Client** The primary object for connecting to a cluster.

(courtesy Min Ragan-Kelley)
This architecture is implemented using the ØMQ messaging library and the associated Python bindings in `pyzmq`.
### Running parallel IPython
To enable the IPython Clusters tab in Jupyter Notebook, run:
`ipcluster nbextension enable`
When you then start a Jupyter session, you should see the following in your **IPython Clusters** tab:

Before running the next cell, make sure you have started your cluster; you can use the [clusters tab in the dashboard](/#tab2) to do so.
Select the number of IPython engines (nodes) that you want to use, then click **Start**.
<code>
from ipyparallel import Client
client = Client()
dv = client.direct_view()
</code>
<code>
len(dv)
</code>
<code>
def where_am_i():
import os
import socket
return "In process with pid {0} on host: '{1}'".format(
os.getpid(), socket.gethostname())
</code>
<code>
where_am_i_direct_results = dv.apply(where_am_i)
where_am_i_direct_results.get()
</code>
Let's now consider a useful function that we might want to run in parallel. Here is a version of the approximate Bayesian computing (ABC) algorithm.
<code>
import numpy
def abc(y, N, epsilon=[0.2, 0.8]):
trace = []
while len(trace) < N:
# Simulate from priors
mu = numpy.random.normal(0, 10)
sigma = numpy.random.uniform(0, 20)
x = numpy.random.normal(mu, sigma, 50)
#if (np.linalg.norm(y - x) < epsilon):
if ((abs(x.mean() - y.mean()) < epsilon[0]) &
(abs(x.std() - y.std()) < epsilon[1])):
trace.append([mu, sigma])
return trace
</code>
<code>
y = numpy.random.normal(4, 2, 50)
</code>
Let's try running this on one of the cluster engines:
<code>
dv0 = client[0]
dv0.block = True
dv0.apply(abc, y, 10)
</code>
This fails with a NameError because NumPy has not been imported on the engine to which we sent the task. Each engine has its own namespace, so we need to import whatever modules we will need prior to running our code:
<code>
dv0.execute("import numpy")
</code>
<code>
dv0.apply(abc, y, 10)
</code>
An easier approach is to use the parallel cell magic to import everywhere:
<code>
%%px
import numpy
</code>
This magic can be used to execute the same code on all nodes.
<code>
%%px
import os
print(os.getpid())
</code>
<code>
%%px
%matplotlib inline
import matplotlib.pyplot as plt
import os
tsamples = numpy.random.randn(100)
plt.hist(tsamples)
_ = plt.title('PID %i' % os.getpid())
</code>
## JupyterHub
[JupyterHub](https://github.com/jupyterhub/jupyterhub) is a server that gives multiple users access to Jupyter notebooks, running an independent Jupyter notebook server for each user.
To use JupyterHub, you need a Unix server (typically Linux) running somewhere that is accessible to your team on the network. The JupyterHub server can be on an internal network at your organisation, or it can run on the public internet (in which case, take care with security). Users access JupyterHub in a web browser, by going to the IP address or domain name of the server.
Three actors:
- multi-user Hub (tornado process)
- configurable http proxy (node-http-proxy)
- multiple single-user IPython notebook servers (Python/IPython/tornado)
Basic principles:
- Hub spawns proxy
- Proxy forwards requests to hub by default
- Hub handles login, and spawns single-user servers on demand
- Hub configures proxy to forward url prefixes to single-user servers
To start the server, run the command `jupyterhub` and then visit http://localhost:8000 and sign in with your unix credentials.
To allow multiple users to sign into the server, you will need to run the jupyterhub command as a privileged user, such as root. The wiki describes how to run the server as a less privileged user, which requires more configuration of the system.
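A minimal configuration sketch for illustration only (the file name `jupyterhub_config.py` and the specific settings below are assumptions; a full template can be generated with `jupyterhub --generate-config`):
```python
# jupyterhub_config.py -- a minimal sketch, not a production setup
c.JupyterHub.ip = '0.0.0.0'                # interface the public proxy listens on
c.JupyterHub.port = 8000                   # the port users visit in the browser
c.Authenticator.admin_users = {'alice'}    # hypothetical admin account
c.Spawner.notebook_dir = '~/notebooks'     # where each single-user server starts
```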

*(animation courtesy of Jessica Hamrick)*
## Links and References
* [IPython Notebook Viewer](http://nbviewer.ipython.org) Displays static HTML versions of notebooks, and includes a gallery of notebook examples.
* [A Reference-Free Algorithm for Computational Normalization of Shotgun Sequencing Data](http://ged.msu.edu/papers/2012-diginorm/) A landmark example of reproducible research in genomics: Git repo, iPython notebook, data and scripts.
* Jacques Ravel and K Eric Wommack. 2014. [All Hail Reproducibility in Microbiome Research](http://www.microbiomejournal.com/content/pdf/2049-2618-2-8.pdf). Microbiome, 2:8.
* Benjamin Ragan-Kelley et al.. 2013. [Collaborative cloud-enabled tools allow rapid, reproducible biological insights](http://www.nature.com/ismej/journal/v7/n3/full/ismej2012123a.html). The ISME Journal, 7, 461–464; doi:10.1038/ismej.2012.123;
|
{
"filename": "jupyter_1.ipynb",
"repository": "hmelberg/causal",
"query": "transformed_from_existing",
"size": 302628,
"sha": ""
}
|
# J_Resume_analyzer_1.ipynb
Repository: Aishwarya-127/Aishwarya
<a href="https://colab.research.google.com/github/Aishwarya-127/Aishwarya_J/blob/main/Resume_analyzer.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<code>
!pip install -U langchain langchain-community google-generativeai pypdf docx2txt
</code>
<code>
!pip install -U langchain langchain-community google-generativeai wikipedia
</code>
<code>
from google.colab import files
uploaded = files.upload() # Upload .pdf or .docx
file_path = list(uploaded.keys())[0] # Get uploaded file name
print("Uploaded:", file_path)
</code>
<code>
import os
import google.generativeai as genai
from langchain.llms.base import LLM
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain_community.document_loaders import PyPDFLoader, Docx2txtLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from typing import Optional, List
</code>
<code>
GEMINI_API_KEY = "YOUR_GEMINI_API_KEY"  # Replace with your own key; never hard-code a real key in a shared notebook
genai.configure(api_key=GEMINI_API_KEY)
</code>
<code>
class GeminiLLM(LLM):
model: str = "models/gemini-1.5-pro-latest"
temperature: float = 0.3
def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
response = genai.GenerativeModel(self.model).generate_content(prompt)
return response.text
@property
def _llm_type(self) -> str:
return "gemini-pro"
</code>
<code>
llm = GeminiLLM()
prompt = PromptTemplate(
input_variables=["resume_chunk"],
template="""
You are a professional resume reviewer. Analyze the following resume content and provide:
1. Summary of experience
2. Key strengths
3. Areas for improvement
4. Missing keywords
Resume Text:
{resume_chunk}
"""
)
review_chain = LLMChain(llm=llm, prompt=prompt)
</code>
<code>
def analyze_resume(file_path):
if file_path.endswith(".pdf"):
loader = PyPDFLoader(file_path)
elif file_path.endswith(".docx"):
loader = Docx2txtLoader(file_path)
else:
print("Unsupported file format.")
return
docs = loader.load()
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)
for i, chunk in enumerate(chunks):
print(f"\n=== Section {i+1} ===")
feedback = review_chain.run(resume_chunk=chunk.page_content)
print(feedback)
</code>
<code>
analyze_resume(file_path)
</code>
|
{
"filename": "J_Resume_analyzer_1.ipynb",
"repository": "Aishwarya-127/Aishwarya",
"query": "transformed_from_existing",
"size": 54509,
"sha": ""
}
|
# lesson2_1.ipynb
Repository: AlyssaRSchaefer/Neural-Engineering
# Week Two: What does our brain do?
## MONDAY
1. Review items 1 (quiz) and 2 (book Make it Stick), and summarize in itemized bullet form, limited to 1/4th a page each. The other items are optional for those interested in the topic, and you don't need to summarize them.
2. Society of Neuroscience has a very popular book for lay-persons titled 'BrainFacts' that is available at [this site](https://www.brainfacts.org/the-brain-facts-book).
Summarize the book by condensing the key points in each chapter in a page, in itemized bullet point form (1 page per chapter). You will have a total of 18 pages (can be shorter). Focus on highlighting the key points of each chapter.
### Science of Learning & Teaching Resources
1. Take this [QUIZ](https://www.npr.org/sections/ed/2017/03/22/520843457/you-probably-believe-some-learning-myths-take-our-quiz-to-find-out) on the myths of learning that you probably believe.
2. The book [Make it Stick by Brown, Roediger and McDaniel](https://www.amazon.com/Make-It-Stick-Successful-Learning/dp/0674729013) lists fascinating discoveries about learning and teaching revealed... and being continued to be revealed by cognitive neuroscience.
3. Extra articles and videos related to these topics you may want to check out:
[Right from the horse's mouth](https://www.youtube.com/watch?v=3DGmr0etxWI): Nice talk about the book by one of the authors <br>
[What works, What doesn't](resources/week2/What_works,_What_doesn't.pdf): Scientific American article about the topic <br>
[Top 20 Principles](resources/week2/Top%2020%20Principles.pdf): A Great Summary
4. OPTIONAL: The resources below are only for your reading; no need to summarize.<br>
A. [Do learners really know best?](resources/week2/Do%20learners%20really%20know%20best.pdf)<br>
B. [Self-Regulated Learning - Beliefs, Techniques, and Illusions](resources/week2/Self-Regulated%20Learning%20-%20Beliefs,%20Techniques%20and%20Illusions.pdf)<br>
C. [Strengthening the Student Toolbox](resources/week2/Strengthening%20the%20student%20toolbox.pdf)<br>
D. [The Influence of Experience and Deliberate Practice](resources/week2/Ericsson%202006%20Handbook%20chapter%20improved.pdf)<br>
E. [Cohen Shermon 2014 Self-Affirmation Research](resources/week2/Cohen%20Sherman%202014%20AnnualReview%20self-affirmation%20research-1.pdf)<br>
F. [The Need to Integrate Neuroscience and Learning](resources/week2/Integrating%20neuroscience%20and%20learning-%20now's%20the%20time(1).pdf)<br>
## WEDNESDAY
3. Summarize the article [Seven Challenges for Neuroscience](resources/week2/Seven%20challenges%20for%20neuroscience%20Markram%202013.pdf).
4. How do we measure brain activity for modeling purposes? Watch this video: [Human brain mapping and brain decoding (TedTalk, 17:26 min)](https://www.youtube.com/watch?v=Ecvv-EvOj8M)
Work on decoding and using these signals has already begun: [Tetraplegic can drink coffee](https://www.youtube.com/watch?v=ogBX18maUiM)
## FRIDAY
Homework #2 Assigned: Complete the tasks in the (i) Monday and (ii) Wednesday sections and upload as a pdf.
|
{
"filename": "lesson2_1.ipynb",
"repository": "AlyssaRSchaefer/Neural-Engineering",
"query": "transformed_from_existing",
"size": 4149,
"sha": ""
}
|
# rna_3D.ipynb
Repository: CompGenomeLab/uv-3d-ddr
## Libraries
<code>
import bioframe
import numpy as np
import pandas as pd
import gseapy as gp
import seaborn as sns
import matplotlib.pyplot as plt
import tqdm
import glob
import cooler
#bm = gp.Biomart()
pd.options.mode.chained_assignment = None # default='warn'
</code>
## Helper functions and files ops
<code>
# read gtf
gtf = "/home/carlos/oldies/manuscripts/notebooks/unibind/GRCh38.gtf"
genes_all = bioframe.read_table(gtf, schema='gtf').query('feature=="CDS"')
genes_all.start = genes_all.start.astype(int)
genes_all.end = genes_all.end.astype(int)
genes_all.sort_values(by=['chrom', 'start'], inplace=True)
</code>
<code>
# use appris or mane
</code>
<code>
df = pd.read_csv("/home/carlos/oldies/manuscripts/notebooks/RNA/appris_data.appris.txt", sep='\t')
df = df.loc[df["MANE"].isin(['MANE_Select'])] #, 'MANE_Plus_Clinical'])]
mane_tx_list = df['Transcript ID'].tolist()
</code>
<code>
genes = genes_all.copy()
genes['gene_id'] = [gene_id.split(".")[0] for gene_id in genes.attributes.str.extract(r'gene_id "(.*?)";', expand=False)]
genes['tx_id'] = [tx_id.split(".")[0] for tx_id in genes.attributes.str.extract(r'transcript_id "(.*?)";', expand=False)]
genes['external_gene_name'] = [gene_name.split(".")[0] for gene_name in genes.attributes.str.extract(r'gene_name "(.*?)";', expand=False)]
genes = genes.loc[genes['tx_id'].isin(mane_tx_list)]
genes.sort_values(by=['chrom', 'start'], inplace=True)
</code>
<code>
tx_adjusted = []
for tx_id, tx_df in genes.groupby('tx_id').__iter__():
if "+" in tx_df.strand.to_list():
tx_adjusted.append(tx_df.iloc[0, :])
elif "-" in tx_df.strand.to_list():
tx_adjusted.append(tx_df.iloc[-1, :])
genes = pd.concat(tx_adjusted, axis=1).T
genes.reset_index(drop=True, inplace=True)
genes.start = genes.start.astype(int)
genes.end = genes.end.astype(int)
</code>
<code>
human = pd.Series(gp.get_library_name(organism='Human'))
pathways = human.loc[human.str.contains("MSigDB_Hallmark_2020") | human.str.contains("GO_Biological_Process_2023") | human.str.contains("NCI-Nature_2016")].reset_index(drop=True)
#pathways = human.loc[human.str.contains("MSigDB_Hallmark_2020") | human.str.contains("NCI-Nature_2016") ].reset_index(drop=True)
pathways = {
pathway: gp.get_library(name=pathway, organism='Human')
for pathway in pathways
}
</code>
<code>
def enrichrrr(
degs_list: list,
pathways_dict: dict,
universe_list: list = None):
results = []
for pathway_name, gene_sets in pathways_dict.items():
enr = gp.enrichr(gene_list=degs_list,
gene_sets=gene_sets,
#gene_sets = pathway_name,
outdir=None,
background=universe_list,
verbose=False)
results.append(enr)
return results
</code>
<code>
def overlapper(
current_degs_df: pd.DataFrame,
my_regions_df: pd.DataFrame,
all_genes: pd.DataFrame,
tss_coord_only: bool = True, # True if you want to use only the TSS coordinates (point-wise), False if you want to create a window around the TSS
upstream: int =2000,
downstream: int =500,
returnNames = True):
strand_oriented_genes = all_genes.copy()
if tss_coord_only == True:
strand_oriented_genes['start'] = all_genes.apply(lambda x: x['start'] if x['strand'] == "+" else x['end'], axis=1)
strand_oriented_genes['end'] = strand_oriented_genes['start']
else:
strand_oriented_genes['start'] = all_genes.apply(lambda x: x['start'] - upstream if x['strand'] == "+" else x['end'] - downstream, axis=1)
strand_oriented_genes['end'] = all_genes.apply(lambda x: x['start'] + downstream if x['strand'] == "+" else x['end'] + upstream, axis=1)
my_regions_universe = bioframe.overlap(strand_oriented_genes, my_regions_df, how='inner')
degs_filter = current_degs_df.loc[current_degs_df['ensembl_gene_id'].isin(my_regions_universe['gene_id'])]
if returnNames == True:
return list(degs_filter.external_gene_name.dropna().unique()), my_regions_universe
else:
return degs_filter, my_regions_universe
</code>
<code>
def merge_enrs_into_common_df(res_1, res_2):
comparison_dfs = []
for i, (enr1, enr2) in enumerate(zip(res_1, res_2)):
df1 = enr1.results.sort_values('Adjusted P-value')
df2 = enr2.results.sort_values('Adjusted P-value')
sig_terms_df1 = df1.loc[df1['Adjusted P-value'] <= 0.05].Term
sig_terms_df2 = df2.loc[df2['Adjusted P-value'] <= 0.05].Term
df1['logPadj'] = -np.log10(df1['Adjusted P-value'])
df2['logPadj'] = -np.log10(df2['Adjusted P-value'])
# merge dfs
df = pd.merge(df1, df2, on='Term', suffixes=('_0_12', '_0_60'))
if i == 2:
df = df.loc[df['logPadj_0_60'] >= 1.3]
# keep columns term, logPadj_0_12, logPadj_0_60
df = df[['Term', 'logPadj_0_12', 'logPadj_0_60']]
df = df.loc[df.Term.isin(sig_terms_df1) | df.Term.isin(sig_terms_df2)]
df.reset_index(drop=True, inplace=True)
# sort df based on the mean of logPadj_0_12 and logPadj_0_60, without writing over df
df = df.loc[df.iloc[:,[1,2]].max(axis=1).sort_values(ascending=True).index]
if i == 0:
# remove " (GO):$" from term
df['Term'] = ["".join(term.split("(GO")[0]) for term in df.Term]
if i == 2:
df['Term'] = [" ".join(term.split(" ")[:-3]) for term in df.Term]
comparison_dfs.append(df)
return comparison_dfs
def merge_terms(df_list: list):
pathways = []
for df in df_list:
sig_terms = df.loc[df['Adjusted P-value'] <= 0.05].Term.unique()
pathways += list(sig_terms)
return list(set(pathways))
def merge_enrs_into_common_df_2(res_list : list, names : list = None):
comparison_dfs = []
n_pathways = len(res_list[0])
for i in range(n_pathways):
df_list_toMergeTerms = []
for res in res_list:
curr_df = res[i].results
curr_df = curr_df.loc[curr_df['Adjusted P-value'] <= 0.05]
df_list_toMergeTerms.append(curr_df)
merged_terms = merge_terms(df_list_toMergeTerms)
db_dfs = []
for resIdx, res in enumerate(res_list):
curr_df = res[i].results
curr_df = curr_df.loc[curr_df['Term'].isin(merged_terms)]
curr_df.reset_index(drop=True, inplace=True)
curr_df.loc[:, 'logPadj'] = -np.log10(curr_df['Adjusted P-value'])
curr_df.sort_values(by='logPadj', inplace=True, ascending=False)
if names is not None:
curr_df['whichRes'] = names[resIdx]
else:
curr_df['whichRes'] = resIdx
if i == 0:
curr_df['Term'] = ["".join(term.split("(GO")[0]) for term in curr_df.Term]
if i == list(range(n_pathways))[-1]:
curr_df['Term'] = [" ".join(term.split(" ")[:-3]) for term in curr_df.Term]
db_dfs.append(curr_df)
df = pd.concat(db_dfs, axis=0)
comparison_dfs.append(df)
# for comp_df in comparison_dfs:
# if len(list(set(comp_df.groupby('whichRes').count().Term.values))) != 1 and len(comp_df) != 0:
# print(comp_df)
# print("WARNING: different number of terms in different results")
# return None
comparison_dfs_reformat = []
for comp_df in comparison_dfs:
comp_df.sort_values(by=['Term', 'logPadj'], inplace=True, ascending=False)
comp_df = comp_df.pivot(index='Term', columns='whichRes', values='logPadj')
comp_df['Term'] = comp_df.index
# nColumns = len(comp_df.columns) - 1
# comp_df = comp_df.loc[comp_df.iloc[:,:nColumns].max(axis=1).sort_values(ascending=True).index]
comp_df.Term = [split_text_into_lines(term) for term in comp_df.Term]
comparison_dfs_reformat.append(comp_df)
return comparison_dfs_reformat
def split_text_into_lines(text, max_line_length=30):
# split text into lines, but it should not split words
lines = []
words = text.split(" ")
line = ""
for word in words:
if len(line) + len(word) <= max_line_length:
line += word + " "
else:
lines.append(line)
line = word + " "
lines.append(line) # append the last line
return "\n".join([line[:-1] for line in lines]) # remove last space from all
def write_results(res, out):
df= res.results.sort_values('Adjusted P-value')
df = df.loc[df['Adjusted P-value'] <= 0.05]
if len(df) != 0:
df.to_csv(out, sep='\t', index=False)
</code>
<code>
def get_anchors(regions_file):
regions_bedpe = pd.read_csv(regions_file, sep="\t")
fivePrime_anchors = regions_bedpe[['chrom1', 'start1', 'end1']]
fivePrime_anchors.columns = ['chrom', 'start', 'end']
threePrime_anchors = regions_bedpe[['chrom2', 'start2', 'end2']]
threePrime_anchors.columns = ['chrom', 'start', 'end']
regions = pd.concat([fivePrime_anchors, threePrime_anchors], axis=0).drop_duplicates().reset_index(drop=True)
regions.drop_duplicates(inplace=True)
return regions
</code>
We gather anchors from the loops because anchors can be shared between loops, and a gene might be regulated by another loop pair that is not present in the list of anchors unique to a timepoint.
However, we can still use the anchors unique to a timepoint to estimate differential TF binding, which is a separate analysis.
<code>
degs_0_12 = pd.read_csv("/home/carlos/oldies/manuscripts/notebooks/RNA/t0-t12.degs.tsv", sep="\t")
degs_0_60 = pd.read_csv("/home/carlos/oldies/manuscripts/notebooks/RNA/t0-t60.degs.tsv", sep="\t")
degs_0_30 = pd.read_csv("/home/carlos/oldies/manuscripts/notebooks/RNA/t0-t30.degs.tsv", sep="\t")
deseq_lrt = pd.read_csv("/home/carlos/oldies/manuscripts/notebooks/RNA/all_deseq_lrt.tsv", sep="\t")
deseq_lrt.rename(columns={'gene_id': 'ensembl_gene_id'}, inplace=True)
</code>
<code>
# search specific genes
gene_oi_ENS = "ENSG00000012061"
df_curr = genes.loc[genes.gene_id == gene_oi_ENS]
if df_curr.strand.to_list()[0] == "-":
chr_name = df_curr.chrom.to_list()[0]
start = int(df_curr.end.to_list()[0]) // 10_000 * 10_000
end = start + 10_000
elif df_curr.strand.to_list()[0] == "+":
chr_name = df_curr.chrom.to_list()[0]
start = int(df_curr.start.to_list()[0]) // 10_000 * 10_000
end = start + 10_000
comp_labels = ["comp1", "comp2", "comp3", "comp4"]
comp_files = ["t0_t12_results_0_0", "t0_t12_results_0_1", "t0_t12_results_1_0", "t0_t12_results_1_1"]
for label, comp in zip(comp_labels, comp_files):
regions = pd.read_csv(f"/home/carlos/oldies/manuscripts/notebooks/gnn/{comp}.tsv", sep="\t")
oi_df = regions.loc[(regions.chrom == chr_name) & (regions.start == start)]
if len(oi_df) > 0:
print(f"Processing {label}")
print(oi_df)
</code>
## gnn res
<code>
paths = [f"/home/carlos/oldies/manuscripts/notebooks/gnn/t0_t12_results_{comp_now}.tsv" for comp_now in ["0_0", "1_0", "0_1", "1_1"]]
all_dfs = [pd.read_csv(df, sep="\t") for df in paths]
all_regions = pd.concat(all_dfs) # This is the GNN evaluated regions
all_regions = all_regions.iloc[:, :3]
universe_3d_df = bioframe.overlap(all_regions, genes, how='inner')
universe_3d_ids = list(universe_3d_df.gene_id_.dropna().unique())
universe_3d_names = list(universe_3d_df.external_gene_name_.dropna().unique())
</code>
<code>
comp_labels = ["comp1", "comp2", "comp3", "comp4"]
comp_files = ["t0_t12_results_0_0", "t0_t12_results_0_1", "t0_t12_results_1_0", "t0_t12_results_1_1"]
filter_11 = False
for label, comp in zip(comp_labels, comp_files):
regions = pd.read_csv(f"/home/carlos/oldies/manuscripts/notebooks/gnn/{comp}.tsv", sep="\t")
filter_w_unibind = False
unibind_mapping = {
"0_1": "0_1_vs_1_0",
"1_0": "1_0_vs_0_1",
"0_0": "0_0_vs_1_1",
"1_1": "1_1_vs_0_0"
}
comp_name = comp[-3:]
if comp_name == "1_1" and filter_11:
regions = regions.loc[(regions["t0_q30-t12_q30"] == 0) & (regions["t12_q30-t0_q30"] == 0)]
if filter_w_unibind:
unibind_regions = pd.read_csv(f"/home/carlos/oldies/manuscripts/notebooks/unibind/gnn_res/gnn_{unibind_mapping[comp_name]}/extracted_regions_merged.bed", sep="\t", header=None).iloc[:, :3]
unibind_regions.columns = ['chrom', 'start', 'end']
unibind_regions.start = unibind_regions.start.astype(int)
unibind_regions.end = unibind_regions.end.astype(int)
regions = bioframe.overlap(regions, unibind_regions, how='inner')
degs_0_12_degs, uni = overlapper(degs_0_12, regions, genes)
degs_0_30_degs, uni = overlapper(degs_0_30, regions, genes)
degs_0_60_degs, uni = overlapper(degs_0_60, regions, genes)
uni.to_csv(f"gnn_enrichr_results_comp_wise/{label}_universe.tsv", sep="\t", index=False)
res_0_12 = enrichrrr(degs_0_12_degs, pathways, universe_3d_names)
res_0_30 = enrichrrr(degs_0_30_degs, pathways, universe_3d_names)
res_0_60 = enrichrrr(degs_0_60_degs, pathways, universe_3d_names)
comparison_dfs = merge_enrs_into_common_df_2([res_0_12, res_0_30, res_0_60], ["0_12", "0_30", "0_60"])
for res1, res2, res3, pathway in zip(res_0_12, res_0_30, res_0_60, pathways):
write_results(res1, f"gnn_enrichr_results_comp_wise/{label}_{pathway}_0_12.tsv")
write_results(res2, f"gnn_enrichr_results_comp_wise/{label}_{pathway}_0_30.tsv")
write_results(res3, f"gnn_enrichr_results_comp_wise/{label}_{pathway}_0_60.tsv")
database_names = ["Gene Ontology\nBiological Process", "MSigDB\nHallmark", "NCI-Nature\nPID"]
plot_count = 0
for i, (df, pathway) in enumerate(zip(comparison_dfs, pathways)):
fig, ax = plt.subplots(1, 1, figsize=(10, 8))
if len(df) != 0:
df = df.loc[df.iloc[:,[0,1,2]].max(axis=1).sort_values(ascending=True).index]
df = df.iloc[-20:, :]
b = df.plot.barh(x='Term', ax=ax, color=['#A63446', "#F5B841", '#9DBBAE'])
b.axvline(x=-np.log10(0.05), color='black', linestyle='--', alpha=0.6, linewidth=3)
b.set_xlabel(f'-log$_{{10}}$(Adjusted P-value)', fontsize=20)
b.axvline(x=-np.log10(0.05), color='black', linestyle='--', alpha=.9, linewidth=3)
b.set_xlabel(f'-log$_{{10}}$ Adjusted P-value', fontsize=12)
b.set_ylabel(database_names[i], fontsize=20)
# change font size of x and y ticks
b.tick_params(labelsize=8)
b.tick_params(axis='both', which='major', labelsize=10)
b.tick_params(axis='both', which='minor', labelsize=10)
db_name = database_names[i].replace("\n", "_")
#fig.suptitle(f"{label}", fontsize=30)
fig.set_tight_layout(True)
fig.savefig(f"gnn_enrichr_results_comp_wise/{label}_{db_name}_pathways.svg")
fig.savefig(f"gnn_enrichr_results_comp_wise/{label}_{db_name}_pathways.png", dpi=300, facecolor="white", edgecolor='none')
fig.clf()
</code>
<code>
# comp_files = ["t0_t12_results_0_0", "t0_t12_results_0_1", "t0_t12_results_1_0", "t0_t12_results_1_1"]
# #regions = pd.concat([pd.read_csv(f"/home/carlos/oldies/manuscripts/notebooks/gnn/{comp}.tsv", sep="\t") for comp in comp_files[:3]]) # combine comp1, comp2, comp3
# #label = "changed_regions"
# regions = pd.concat([pd.read_csv(f"/home/carlos/oldies/manuscripts/notebooks/gnn/{comp}.tsv", sep="\t") for comp in comp_files[3:]])
# label = "unchanged_regions"
# degs_0_12_degs, _ = overlapper(degs_0_12, regions, genes)
# degs_0_30_degs, _ = overlapper(degs_0_30, regions, genes)
# degs_0_60_degs, _ = overlapper(degs_0_60, regions, genes)
# res_0_12 = enrichrrr(degs_0_12_degs, pathways, universe_3d_names)
# res_0_30 = enrichrrr(degs_0_30_degs, pathways, universe_3d_names)
# res_0_60 = enrichrrr(degs_0_60_degs, pathways, universe_3d_names)
# comparison_dfs = merge_enrs_into_common_df_2([res_0_12, res_0_30, res_0_60], ["0_12", "0_30", "0_60"])
# for res1, res2, res3, pathway in zip(res_0_12, res_0_30, res_0_60, pathways):
# write_results(res1, f"gnn_enrichr_results_all_vs_all/{label}_{pathway}_0_12.tsv")
# write_results(res2, f"gnn_enrichr_results_all_vs_all/{label}_{pathway}_0_30.tsv")
# write_results(res3, f"gnn_enrichr_results_all_vs_all/{label}_{pathway}_0_60.tsv")
# fig, ax = plt.subplots(3, 1, figsize=(10, 20))
# plot_count = 0
# for i, (df, pathway) in enumerate(zip(comparison_dfs, pathways)):
# if len(df) != 0:
# df = df.loc[df.iloc[:,[0,1,2]].mean(axis=1).sort_values(ascending=True).index]
# df = df.iloc[-20:, :]
# plot_count += 1
# b = df.plot.barh(x='Term', ax=ax[i], color=['#A63446', "#F5B841", '#9DBBAE'])
# b.axvline(x=-np.log10(0.05), color='black', linestyle='--', alpha=0.6, linewidth=3)
# b.set_xlabel(f'-log$_{{10}}$(Adjusted P-value)', fontsize=20)
# b.set_ylabel(f'{pathway} Term', fontsize=20)
# if plot_count != 0:
# fig.suptitle(f"{label}", fontsize=30)
# fig.set_tight_layout(True)
# fig.savefig(f"gnn_enrichr_results_all_vs_all/{label}_pathways.svg")
# fig.savefig(f"gnn_enrichr_results_all_vs_all/{label}_pathways.png", dpi=300, facecolor="white", edgecolor='none')
# fig.clf()
</code>
### GNN plot all vs all / Uniq Common
<code>
comparison_dfs_all_vs_all = []
comp_files = ["t0_t12_results_0_0", "t0_t12_results_0_1", "t0_t12_results_1_0", "t0_t12_results_1_1"]
regions = pd.concat([pd.read_csv(f"/home/carlos/oldies/manuscripts/notebooks/gnn/{comp}.tsv", sep="\t") for comp in comp_files[:3]]) # combine comp1, comp2, comp3
label = "changed_regions"
degs_0_12_degs, _ = overlapper(degs_0_12, regions, genes)
degs_0_30_degs, _ = overlapper(degs_0_30, regions, genes)
degs_0_60_degs, _ = overlapper(degs_0_60, regions, genes)
res_0_12 = enrichrrr(degs_0_12_degs, pathways, universe_3d_names)
res_0_30 = enrichrrr(degs_0_30_degs, pathways, universe_3d_names)
res_0_60 = enrichrrr(degs_0_60_degs, pathways, universe_3d_names)
for res1, res2, res3, pathway in zip(res_0_12, res_0_30, res_0_60, pathways):
write_results(res1, f"gnn_enrichr_results_all_vs_all/{label}_{pathway}_0_12.tsv")
write_results(res2, f"gnn_enrichr_results_all_vs_all/{label}_{pathway}_0_30.tsv")
write_results(res3, f"gnn_enrichr_results_all_vs_all/{label}_{pathway}_0_60.tsv")
comparison_dfs = merge_enrs_into_common_df_2([res_0_12, res_0_30, res_0_60], ["0_12", "0_30", "0_60"])
comparison_dfs_all_vs_all.append(comparison_dfs)
</code>
<code>
regions = pd.concat([pd.read_csv(f"/home/carlos/oldies/manuscripts/notebooks/gnn/{comp}.tsv", sep="\t") for comp in comp_files[3:]])
label = "unchanged_regions"
degs_0_12_degs, _ = overlapper(degs_0_12, regions, genes)
degs_0_30_degs, _ = overlapper(degs_0_30, regions, genes)
degs_0_60_degs, _ = overlapper(degs_0_60, regions, genes)
res_0_12 = enrichrrr(degs_0_12_degs, pathways, universe_3d_names)
res_0_30 = enrichrrr(degs_0_30_degs, pathways, universe_3d_names)
res_0_60 = enrichrrr(degs_0_60_degs, pathways, universe_3d_names)
for res1, res2, res3, pathway in zip(res_0_12, res_0_30, res_0_60, pathways):
write_results(res1, f"gnn_enrichr_results_all_vs_all/{label}_{pathway}_0_12.tsv")
write_results(res2, f"gnn_enrichr_results_all_vs_all/{label}_{pathway}_0_30.tsv")
write_results(res3, f"gnn_enrichr_results_all_vs_all/{label}_{pathway}_0_60.tsv")
comparison_dfs = merge_enrs_into_common_df_2([res_0_12, res_0_30, res_0_60], ["0_12", "0_30", "0_60"])
comparison_dfs_all_vs_all.append(comparison_dfs)
</code>
<code>
database_names = ["Gene Ontology\nBiological Process", "MSigDB\nHallmark", "NCI-Nature\nPID"]
</code>
<code>
# Changed regions uniq terms
for idx, (changed, notChanged) in enumerate(zip(comparison_dfs_all_vs_all[0], comparison_dfs_all_vs_all[1])):
fig , ax = plt.subplots(1, 1, figsize=(10, 8))
# find common terms
common_terms = list(set(changed.index).intersection(set(notChanged.index)))
# remove common terms from changed
df = changed.loc[~changed.index.isin(common_terms)]
df = df.loc[df.iloc[:,[0,1,2]].max(axis=1).sort_values(ascending=True).index]
if len(df) != 0:
df = df.iloc[-15:, :]
df.columns = ['12min', '30min', '60min', 'Term']
b = df.plot.barh(x='Term', ax=ax, color=['#A63446', "#F5B841", '#9DBBAE'])
b.axvline(x=-np.log10(0.05), color='black', linestyle='--', alpha=.9, linewidth=3)
b.set_xlabel(f'-log$_{{10}}$ Adjusted P-value', fontsize=12)
b.set_ylabel(database_names[idx], fontsize=20)
# change font size of x and y ticks
b.tick_params(labelsize=8)
b.tick_params(axis='both', which='major', labelsize=10)
b.tick_params(axis='both', which='minor', labelsize=10)
db_name = database_names[idx].replace("\n", "_")
df.to_csv(f"gnn_enrichr_results_all_vs_all_uc/{db_name}_pathways_changed_uniq_terms.tsv", sep='\t', index=False)
fig.set_tight_layout(True)
fig.savefig(f"gnn_enrichr_results_all_vs_all_uc/{db_name}_pathways_changed_uniq_terms.svg")
fig.savefig(f"gnn_enrichr_results_all_vs_all_uc/{db_name}_pathways_changed_uniq_terms.png", dpi=300, facecolor="white", edgecolor='none')
fig.clf()
</code>
<code>
# Unchanged regions uniq terms
for idx, (changed, notChanged) in enumerate(zip(comparison_dfs_all_vs_all[0], comparison_dfs_all_vs_all[1])):
fig , ax = plt.subplots(1, 1, figsize=(10, 8))
common_terms = list(set(changed.index).intersection(set(notChanged.index)))
# remove common terms from changed
df = notChanged.loc[~notChanged.index.isin(common_terms)]
if len(df) != 0:
df = df.loc[df.iloc[:,[0,1,2]].max(axis=1).sort_values(ascending=True).index]
if len(df) != 0:
df = df.iloc[-15:, :]
df.columns = ['12min', '30min', '60min', 'Term']
b = df.plot.barh(x='Term', ax=ax, color=['#A63446', "#F5B841", '#9DBBAE'])
b.axvline(x=-np.log10(0.05), color='black', linestyle='--', alpha=.9, linewidth=3)
b.set_xlabel(f'-log$_{{10}}$ Adjusted P-value', fontsize=12)
b.set_ylabel(database_names[idx], fontsize=20)
# change font size of x and y ticks
b.tick_params(labelsize=8)
b.tick_params(axis='both', which='major', labelsize=10)
b.tick_params(axis='both', which='minor', labelsize=10)
db_name = database_names[idx].replace("\n", "_")
df.to_csv(f"gnn_enrichr_results_all_vs_all_uc/{db_name}_pathways_unchanged_uniq_terms.tsv", sep='\t', index=False)
fig.set_tight_layout(True)
fig.savefig(f"gnn_enrichr_results_all_vs_all_uc/{db_name}_pathways_unchanged_uniq_terms.svg")
fig.savefig(f"gnn_enrichr_results_all_vs_all_uc/{db_name}_pathways_unchanged_uniq_terms.png", dpi=300, facecolor="white", edgecolor='none')
fig.clf()
</code>
<code>
# Changed regions common terms
for idx, (changed, notChanged) in enumerate(zip(comparison_dfs_all_vs_all[0], comparison_dfs_all_vs_all[1])):
fig , ax = plt.subplots(1, 1, figsize=(10, 8))
common_terms = list(set(changed.index).intersection(set(notChanged.index)))
df = changed.loc[changed.index.isin(common_terms)]
if len(df) != 0:
df = df.loc[df.iloc[:,[0,1,2]].max(axis=1).sort_values(ascending=True).index]
if len(df) != 0:
df = df.iloc[-15:, :]
df.columns = ['12min', '30min', '60min', 'Term']
b = df.plot.barh(x='Term', ax=ax, color=['#A63446', "#F5B841", '#9DBBAE'])
b.axvline(x=-np.log10(0.05), color='black', linestyle='--', alpha=.9, linewidth=3)
b.set_xlabel(f'-log$_{{10}}$ Adjusted P-value', fontsize=12)
b.set_ylabel(database_names[idx], fontsize=20)
# change font size of x and y ticks
b.tick_params(labelsize=8)
b.tick_params(axis='both', which='major', labelsize=12)
b.tick_params(axis='both', which='minor', labelsize=12)
db_name = database_names[idx].replace("\n", "_")
df.to_csv(f"gnn_enrichr_results_all_vs_all_uc/{db_name}_pathways_changed_common_terms.tsv", sep='\t', index=False)
fig.set_tight_layout(True)
fig.savefig(f"gnn_enrichr_results_all_vs_all_uc/{db_name}_pathways_changed_common_terms.svg")
fig.savefig(f"gnn_enrichr_results_all_vs_all_uc/{db_name}_pathways_changed_common_terms.png", dpi=300, facecolor="white", edgecolor='none')
fig.clf()
</code>
<code>
# not Changed regions common terms
for idx, (changed, notChanged) in enumerate(zip(comparison_dfs_all_vs_all[0], comparison_dfs_all_vs_all[1])):
fig , ax = plt.subplots(1, 1, figsize=(10, 8))
common_terms = list(set(changed.index).intersection(set(notChanged.index)))
# remove common terms from changed
df = notChanged.loc[notChanged.index.isin(common_terms)]
if len(df) != 0:
df = df.loc[df.iloc[:,[0,1,2]].max(axis=1).sort_values(ascending=True).index]
if len(df) != 0:
df = df.iloc[-15:, :]
df.columns = ['12min', '30min', '60min', 'Term']
b = df.plot.barh(x='Term', ax=ax, color=['#A63446', "#F5B841", '#9DBBAE'])
b.axvline(x=-np.log10(0.05), color='black', linestyle='--', alpha=.9, linewidth=3)
b.set_xlabel(f'-log$_{{10}}$ Adjusted P-value', fontsize=12)
b.set_ylabel(database_names[idx], fontsize=20)
# change font size of x and y ticks
b.tick_params(labelsize=8)
b.tick_params(axis='both', which='major', labelsize=12)
b.tick_params(axis='both', which='minor', labelsize=12)
db_name = database_names[idx].replace("\n", "_")
fig.set_tight_layout(True)
df.to_csv(f"gnn_enrichr_results_all_vs_all_uc/{db_name}_pathways_unchanged_common_terms.tsv", sep='\t', index=False)
fig.savefig(f"gnn_enrichr_results_all_vs_all_uc/{db_name}_pathways_unchanged_common_terms.svg")
fig.savefig(f"gnn_enrichr_results_all_vs_all_uc/{db_name}_pathways_unchanged_common_terms.png", dpi=300, facecolor="white", edgecolor='none')
fig.clf()
</code>
### GNN res, expression profile
<code>
def map_geneID_to_name(geneID):
if geneID not in genes.gene_id.values:
return None
return genes.loc[genes.gene_id == geneID].external_gene_name.values[0]
def map_txID_to_name(txID):
if txID not in genes.tx_id.values:
return None
return genes.loc[genes.tx_id == txID].external_gene_name.values[0]
</code>
<code>
order = ['SU_100', 'SU_200', 'SU_300', 'SU_112', 'SU_212', 'SU_312', 'SU_130', 'SU_230', 'SU_330', 'SU_160', 'SU_260', 'SU_360']
df = pd.read_csv(f"/home/carlos/oldies/projects/rna-seq/quant/SU_100/quant.sf", sep="\t")
df.Name = df.Name.apply(lambda x: x.split(".")[0])
df = df.loc[df.Name.isin(genes.tx_id.values)]
mapped_names = [map_txID_to_name(txID.split(".")[0]) for txID in df.Name.values]
</code>
<code>
series = []
for name in order:
df = pd.read_csv(f"/home/carlos/oldies/projects/rna-seq/quant/{name}/quant.sf", sep="\t")
df.Name = df.Name.apply(lambda x: x.split(".")[0])
df = df.loc[df.Name.isin(genes.tx_id.values)]
df.rename(columns={'TPM': name}, inplace=True)
series.append(df[name])
tcounts_df = pd.DataFrame(series).T
tcounts_df['geneName'] = mapped_names
for i, name in zip([0, 3, 6, 9], ["Control", "12min", "30min", "60min"]):
tcounts_df[name] = tcounts_df.iloc[:, i : i + 3].mean(axis=1)
</code>
<code>
# which_Genes = [
# "ATF2", "ATF3", "ATF4",
# "JUN", "JUNB", "JUND",
# "FOS", "FOSL1", "FOSL2", "FOSB",
# "MAF", "MAFB",
# "TP53"]
which_Genes = "SURF1;POLH;GTF2B;ADCY6;BRF2;PRIM1;DGUOK;RNMT;SEC61A1;ZWINT;POLD1;RBX1;CDA;NELFE;RFC4".split(";")
plot_df = {
"geneName": [],
"Mean": [],
"time": []
}
for gene_oi in which_Genes:
for name in ["Control", "12min", "30min", "60min"]:
plot_df["geneName"].append(gene_oi)
plot_df["Mean"].append(tcounts_df.loc[tcounts_df.geneName == gene_oi, name].values[0])
plot_df["time"].append(name)
fig, ax = plt.subplots(figsize=(20, 10), ncols=len(which_Genes))
plot_df = pd.DataFrame(plot_df)
for i, gene_oi in enumerate(which_Genes):
sns.barplot(x="time", y="Mean", data=plot_df.loc[plot_df.geneName == gene_oi], ax=ax[i])
ax[i].set_title(gene_oi)
ax[i].set_ylabel("Mean TPM")
ax[i].set_xlabel("Time")
ax[i].set_xticklabels(ax[i].get_xticklabels(), rotation=45, horizontalalignment='right')
</code>
|
{
"filename": "rna_3D.ipynb",
"repository": "CompGenomeLab/uv-3d-ddr",
"query": "transformed_from_existing",
"size": 41148,
"sha": ""
}
|
# homework-5_4.ipynb
Repository: IB-ULFRI/homework-5
# Homework 5: Effect of SARS-CoV-2 on the host organism
We will learn about the basics of gene expression data analysis. Biologists have found a way to measure how much each gene is *expressed* in each cell in an experiment. We do this by counting the number of mRNA molecules in each cell. Remember, DNA holds instructions for building proteins but can't be turned into proteins directly. Transcription of DNA creates mRNA molecules, which ribosomes read to synthesize proteins.
If we measure the amount of mRNA in a cell, we can tell what proteins the cell is making and, indirectly, what the cell is doing as a whole.
<code>
# In order to import from the python file without hassle, we add the current
# directory to the python path
import sys; sys.path.append(".")
</code>
## Problem 1: Constructing the count matrix
Each single-cell gene-expression experiment takes a tissue sample containing many cells. We want to measure the amount of mRNA from a particular gene for each of those cells. We create a *gene-expression matrix*, where the rows correspond to individual cells, and the columns correspond to individual genes. So, our output will be an $N$ by $G$ matrix where $N$ is the number of cells, and $G$ is the number of genes.
A collection of cells forming a tissue must undergo special treatment before we can put it into a sequence. For instance, if we were to take all the cells, gather their mRNA material, and put this into a sequencer, we wouldn't be able to match mRNA molecules with their cell of origin. Therefore, we must attach a *cell barcode* to each cell. This barcode will be attached to all the mRNA reads coming from this cell. We will use this barcode to match mRNA molecules with their cell. The sequencer also needs to know which molecules to sequence. We only want it to sequence mRNA molecules and nothing else. Therefore, we use a special molecular primer that binds to the poly-A tail of mRNA molecules. Don't worry if you don't understand this, because it isn't crucial to us. The important thing is that each read comes with three pieces. First is the cell barcode, then the molecular primer, and then the actual mRNA fragment.
The first 12 bases of each read are the cell barcode. The following 24 bases are the oligo-dT primer, which we will discard since it carries no information. The remaining bases are the actual mRNA fragment of the gene of interest. You can find a more realistic schematic [here](https://training.galaxyproject.org/archive/2022-02-01/topics/transcriptomics/images/celseq2_schema.svg). If you want to find out more about this, [this tutorial](https://training.galaxyproject.org/training-material/topics/transcriptomics/tutorials/scrna-umis/tutorial.html) seems informative.
**[TASK]**
We have prepared a collection of reads (`data/reads.fastq`) in a FASTQ file containing Phred quality scores for each nucleotide (this comes from the sequencer). We will ignore these scores in this homework. You can easily read these files using Biopython.
Your job is to take each read and determine which gene and cell it corresponds to. The reads come from SARS-CoV-2 infected tissue, so we'll be interested in which cells SARS-CoV-2 genes are expressed. We won't use NCBI for SARS-CoV-2 gene annotations this time, but we will use a more standard approach. Two files are necessary: `data/sars-cov-2.fa` is a fasta file containing the reference SARS-CoV-2 genome, and `genes.gff` contains the gene annotations in GFF format. You must use these files in this homework, as we have removed some genes to make the exercise easier.
For each read in `reads.fastq`, you must extract the cell barcode and mRNA fragment (and drop the primer). Because sequencers make mistakes and introduce errors, we'll have to run local alignment to align the fragment to each SARS-CoV-2 gene and determine the origin. For alignment, you can either adapt your implementation from HW2 or use the [`pairwise2`](https://biopython.org/docs/1.76/api/Bio.pairwise2.html) module from Biopython.
Once you align a fragment, determine the gene region of this mRNA fragment. For instance, if we have gene XYZ ranging from positions 250 to 1250 on the reference genome, a fragment that maps into this region, e.g., 450-600, can be considered an expression of this gene. Since we are constructing a count matrix, we are just counting the fragments. For instance, if the barcode is AAACCCTTT and we've mapped the read to gene XYZ, we'd increase the cell in our count matrix at row AAACCCTTT and column XYZ by +1.
One more important thing we must account for is possible contamination. If the tissue sample contains contamination with cells from other organisms, we might get reads that map insufficiently to our reference genome. To circumvent this, we will apply a simple threshold to our aligned reads. We will calculate the Hamming similarity of the alignments and only keep the reads that map to our reference genome with a similarity of 0.95 or higher. The Hamming similarity is just Hamming distance but counting matches instead of mismatches.
Your task is to implement four functions: `split_read`, `map_read_to_gene`, `generate_count_matrix`, and `filter_matrix` in `helper_functions.py`. Each function is worth an equal number of points. Please go through the docstrings for implementation details. Once you've implemented these functions, create a count matrix from the reads provided in `data/reads.fastq`.
**[16 points]**
*Notes:*
To keep things simple, we won't deal with RNA sequences but with DNA sequences here. We also won't have to find the reverse complement of the mRNA fragment; you can take each sequence as is and align it to the reference genome. Finally, we will assume that the cell barcodes contain no sequencing errors; sequencing errors are limited to the mRNA portion of each read.
<code>
from helper_functions import split_read, map_read_to_gene, generate_count_matrix, filter_matrix
</code>
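As an illustration only, here is a rough sketch of how `split_read` and a Hamming-similarity helper could look, based solely on the read layout described above (12-base barcode, 24-base oligo-dT primer, remainder mRNA). The real function signatures are defined by the docstrings in `helper_functions.py`, so treat these names and return values as assumptions.
<code>
def split_read_sketch(read_sequence: str):
    """Split a raw read into (cell barcode, mRNA fragment), discarding the primer."""
    barcode = read_sequence[:12]          # first 12 bases: cell barcode
    fragment = read_sequence[12 + 24:]    # skip the 24-base oligo-dT primer
    return barcode, fragment

def hamming_similarity(seq_a: str, seq_b: str) -> float:
    """Fraction of matching positions between two equal-length sequences."""
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return matches / len(seq_a)

# Reads whose aligned fragment has similarity < 0.95 to the reference would be discarded
barcode, fragment = split_read_sketch("ACGTACGTACGT" + "T" * 24 + "ACGTTGCA")
print(barcode, fragment, hamming_similarity("ACGT", "ACGA"))
</code>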
Then, answer the following questions about constructing a count matrix:
- In the generated count matrix, which gene has the highest cumulative read count across all samples (i.e., the sum of all values in its column)? Write down the gene's name.
- Mapping reads to genes is the most time-consuming step in constructing a count matrix. What are your suggestions for improving the implementation of read mapping? How would you speed it up? One possible improvement is using multiprocessing. Can you suggest other algorithmic enhancements? We encourage innovative ideas.
- In our example of constructing a count matrix for the SARS-CoV-2 virus, what could be a potential source of non-viral genetic material?
- How would you find out the origin (source organism) of those non-viral reads? We might have implemented a similar method in one of the previous assignments.
Store your answers in the `highest_cumulative_sum`, `faster_mapping`, `non_viral_material`, and `non_viral_origin` variables, respectively.
**[4 points]**
<code>
highest_cumulative_sum = """
In the generated count matrix, which gene has the highest cumulative read count across all samples (i.e., the sum of all values in its column)? Write down the gene's name.
"""
</code>
<code>
faster_mapping = """
What are your suggestions for improving the implementation of read mapping? How would you speed it up?
"""
</code>
<code>
non_viral_material = """
In our example of constructing a count matrix for the SARS-CoV-2 virus, what could be a potential source of non-viral genetic material?
"""
</code>
<code>
non_viral_origin = """
How would you find out the origin (source organism) of those non-viral reads? We might have implemented a similar method in one of the previous assignments.
"""
</code>
## Problem 2: A realistic example
In the previous problem, we learned how to construct count matrices and what the matrix entries mean. However, this scenario is unrealistically small. In the real world, single-cell RNA-sequencing runs produce millions of reads, which we must map to the genome. There are also intronic regions to consider, which can further complicate our lives. Fortunately, researchers have already implemented these algorithms and created well-established pipelines that go through this entire process for us. For instance, RNA sequence alignment is usually done using the STAR aligner or bowtie2 (in case you ever run across these in the wild).
It makes little sense to align reads to the SARS-CoV-2 genome. After all, the virus has one goal -- to replicate. If we sequenced some infected human cells and looked at reads aligning to the SARS-CoV-2 genome, we would most likely see that all of the ten or so genes, whose sole purpose is replication, are expressed practically all the time. It would be much more interesting to investigate the effects of SARS-CoV-2 on the gene expression of the host organism instead. The infected human cells are highly diverse as they have to perform various wildly different tasks. They achieve this by activating different sets of genes for each of the different tasks that each cell needs to perform. And luckily for us, we can measure all of this activity using single-cell RNA sequencing. We can take some cells from a healthy individual and some cells from an individual infected with SARS-CoV-2. Then, we can play a game of spot-the-difference and find the differences in the gene expression profiles between the two individuals to determine how SARS-CoV-2 impacts the genetic programs inside the cell.
To find these differences, we will take a real-world, pre-assembled count matrix. Count matrices are often readily available in public repositories, e.g., NCBI GEO. We have provided you with one such count matrix -- `data/homework5.h5ad` -- which contains cells from several healthy and several SARS-CoV-2-infected individuals. The primary cells of interest in the matrix are cells from the peripheral immune system. By inspecting these cells, we can determine how the immune system responds to infection. The count matrix is provided in the H5AD format, a format built on top of HDF5, and is a standard within the gene-expression analysis ecosystem. You can easily load this data using `scanpy`, the standard single-cell data analysis toolkit in Python. Refer to the scanpy documentation for more information and see `sc.read_h5ad` in particular.
Unfortunately, a full-blown analysis of this data is out of scope for this subject. However, we can still look at some basic statistics to better understand what problems we may deal with when working with single-cell RNA-seq data.
### Problem 2a: Preliminary statistics
**Task:** Report the number of cells and the number of genes in the `num_cells` and `num_genes` variables.
For every gene, count the number of cells in which it is expressed (>0). Then, for every cell, count the number of expressed genes. Plot the distributions over all genes and cells, and save your plots into `realistic_gene_dist.svg` and `realistic_cell_dist.svg`, respectively.
**[4 points]**
The data was obtained from
> Wilk, A.J., Rustagi, A., Zhao, N.Q. et al. A single-cell atlas of the peripheral immune response in patients with severe COVID-19. Nat Med 26, 1070–1076 (2020).
<code>
num_cells = None
num_genes = None
</code>
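One way these statistics could be computed, shown only as a sketch: it assumes the file loads into an AnnData object whose `.X` is a cells-by-genes matrix (sparse or dense) and uses plain matplotlib histograms.
<code>
import numpy as np
import scanpy as sc
import matplotlib.pyplot as plt

adata = sc.read_h5ad("data/homework5.h5ad")
num_cells, num_genes = adata.shape

expressed = adata.X > 0                                     # boolean mask of non-zero entries
genes_per_cell = np.asarray(expressed.sum(axis=1)).ravel()  # expressed genes per cell
cells_per_gene = np.asarray(expressed.sum(axis=0)).ravel()  # cells each gene is expressed in

plt.hist(cells_per_gene, bins=50)
plt.xlabel("number of cells a gene is expressed in")
plt.savefig("realistic_gene_dist.svg")
plt.close()

plt.hist(genes_per_cell, bins=50)
plt.xlabel("number of expressed genes per cell")
plt.savefig("realistic_cell_dist.svg")
plt.close()
</code>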
### Problem 2b: Filtering and normalization
According to these distributions, some cells have only a handful of expressed genes. Furthermore, looking at the genes, a good number of them are expressed in only a few cells (if at all!). Does it make sense to perform any analysis on these cells/genes? How reliable will these results be? We'd most likely need to apply some filtering before proceeding with further analyses. How would you go about filtering this data?
Before continuing the analysis, we must filter the data to keep only reliable information. Thus, we will filter out some cells and some low-expressed genes.
Sequencing depth tells us how many reads of information we counted in our count matrix for a single cell. When comparing gene expression in cells with different sequencing depths, we must account for their total sum and normalize those counts. An easy but effective approach is to normalize expression counts in each cell, to sum up to a number. For single-cell RNAseq data, that number is 10,000. For bulk-RNAseq, that number is 1,000,000, and we know the unit as counts-per-million (CPM). There are more sophisticated methods for normalizing counts that account for mRNA lengths like TPM and other variants.
Observing the distribution of gene expression in different cells, we quickly see that they rarely follow a normal distribution but are heavily skewed. Therefore, we apply a logarithmic transformation to expression values. Using a natural logarithm is a standard procedure for RNAseq, whereas microarray data is already normal-like.
**Task:**
Filter cells based on the number of genes detected. Keep only 7000 cells.
Filter genes based on the number of cells in which they are detected. Keep only 5000 genes. First, determine which cells and genes to keep, and then create another expression matrix without the rest. Performing the filtering steps consecutively might give different results, so perform them independently.
Implement a function `normalize_expressions` in the `helper_functions.py`.
Normalize counts in a matrix by log-transforming the expressions. We will add 1 to our expression count and then use a natural logarithm. Pseudo count (+1) ensures that genes with 0 counts will map to 0 after the transformation. Lastly, normalize the gene expressions for each sample so they sum up to 10,000.
Apply filtering and normalization to the matrix from the previous subproblem and continue with the analysis.
**Note:** By using this filtering, we lose a lot of information. However, working with fewer genes is easier for this exercise while achieving the same results.
**[4 points]**
<code>
from helper_functions import normalize_expressions
</code>
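A sketch of the filtering and of `normalize_expressions`, following the order stated above (add a pseudo-count, take the natural logarithm, then scale each cell to a total of 10,000); the graded function's exact input type and order of operations are set by its docstring in `helper_functions.py`, so check that first. The filtering reuses `genes_per_cell` and `cells_per_gene` from the sketch in Problem 2a.
<code>
import numpy as np

def normalize_expressions_sketch(X):
    """log1p-transform a dense cells-by-genes array, then scale each cell to sum to 10,000."""
    logged = np.log1p(X)
    row_sums = logged.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1            # avoid division by zero for empty cells
    return logged / row_sums * 10_000

# Independent filtering sketch, both computed on the raw matrix:
# the 7000 cells with the most detected genes and the 5000 genes detected in the most cells
keep_cells = np.argsort(genes_per_cell)[::-1][:7000]
keep_genes = np.argsort(cells_per_gene)[::-1][:5000]
</code>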
### Problem 2c: Differential analysis
We want to know how our cells respond to SARS-CoV-2 infection. When a cell is infected, it produces a response by expressing genes that carry out that response, whatever it may be. It can trigger various reactions, such as recruiting other cells, internal signaling to remove the virus, or cell death. We can observe gene expression in healthy cells to find genes with higher or lower expression in COVID-19 patient cells. We will perform differential expression (DE) to confirm their statistical significance.
We will use a simple t-test for the differential expression. The test will give us a p-value for each gene, representing the probability of observing results as extreme or more extreme if the null hypothesis is true. In practice, more sophisticated approaches are used, like the Wilcoxon rank-sum test. Also, some bulk-RNA methods are used in single-cell analysis, such as DESeq2. Sometimes, p-values can give a misleading impression. Therefore, we couple them with information about fold change (FC), calculated as the ratio of the mean expression. In essence, p-values tell us how significant the difference is, while fold change tells us how big the difference in expression is. Plotting $log_2(FC)$ on the x-axis and $-log_{10}(p_{values})$ on the y-axis gives us a volcano plot.
<div>
<img src=https://training.galaxyproject.org/training-material/topics/transcriptomics/images/rna-seq-viz-with-volcanoplot/volcanoplot.png width=500>
</div>
Image source: [Galaxy Training!](https://training.galaxyproject.org/training-material/topics/transcriptomics/tutorials/rna-seq-viz-with-volcanoplot/tutorial.html)
**Task:**
Our null hypothesis states there is no differential expression of gene A between healthy and COVID-19 patients.
Use the t-test from the scipy library (*scipy.stats.ttest_ind*) to calculate a p-value for the hypothesis for each gene.
Because we are making a lot of t-test hypothesis tests, we must correct our p-values for false discovery. Use the false discovery rate (FDR) correction function from the statsmodels library (*statsmodels.stats.multitest.fdrcorrection*) to correct p-values.
Calculate the fold change for healthy and COVID-19 patients.
Plot a volcano plot as a scatter plot, where you put $log_2(FC)$ on the x-axis and $-log_{10}(p_{value})$ on the y-axis. Center the x-axis on the plot as shown in the example above. Use a threshold $\pm 2$ for $log_2(FC)$ and $50$ for $-log_{10}(p_{value})$. Color genes above both thresholds, as shown in the plot above. Save the plot in `volcano.svg`.
Colored genes represent differentially expressed genes. Report these genes as a list of strings in a `diff_expressed_genes` variable.
**[5 points]**
<code>
diff_expressed_genes = ["gene", "names"]  # list of strings with gene names
</code>
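The following is a hedged sketch of one way the computation above could look. The variables `expressions`, `gene_names`, and `is_covid` are hypothetical stand-ins for the filtered, normalized matrix and metadata from the previous subproblems; they are filled with synthetic data here only so the snippet runs on its own:
<code>
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import fdrcorrection

# Hypothetical stand-ins; in the homework these come from the earlier subproblems.
rng = np.random.default_rng(0)
expressions = rng.poisson(2.0, size=(100, 5000)).astype(float)   # cells x genes
gene_names = [f"gene_{i}" for i in range(expressions.shape[1])]
is_covid = rng.random(100) < 0.5                                 # condition mask per cell

covid, healthy = expressions[is_covid], expressions[~is_covid]

# One t-test per gene (column-wise), then FDR-correct the p-values.
_, pvals = ttest_ind(covid, healthy, axis=0)
_, pvals_corrected = fdrcorrection(pvals)

# Fold change as the ratio of mean expressions between the two groups.
log2_fc = np.log2(covid.mean(axis=0) / healthy.mean(axis=0))
neg_log10_p = -np.log10(pvals_corrected)

# Genes beyond both thresholds are reported as differentially expressed.
selected = (np.abs(log2_fc) > 2) & (neg_log10_p > 50)
diff_expressed_genes = list(np.array(gene_names)[selected])

# Volcano plot with the thresholds marked.
plt.scatter(log2_fc, neg_log10_p, c=np.where(selected, "tab:red", "lightgrey"), s=5)
plt.axvline(-2, ls="--"); plt.axvline(2, ls="--"); plt.axhline(50, ls="--")
plt.xlabel("$log_2(FC)$"); plt.ylabel("$-log_{10}(p_{value})$")
plt.savefig("volcano.svg")
</code>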
Then, answer the following questions:
- Why are we specifically interested in differentially expressed genes?
- In differential expression analysis, the t-test relies on several assumptions. Given these assumptions, would we still obtain meaningful results when using raw, unfiltered, and unnormalized data? Discuss the impacts of both filtering and normalization on the analysis.
Store your answers in the `why_diff_expressed_genes` and `meaningful_diff_expression` variables, respectively.
**[2 points]**
<code>
why_diff_expressed_genes = """
Why are we specifically interested in differentially expressed genes?
"""
</code>
<code>
meaningful_diff_expression = """
Given these assumptions, would we still obtain meaningful results when using raw, unfiltered, and unnormalized data? Discuss the impacts of both filtering and normalization on the analysis.
"""
</code>
### Problem 2d: Gene Enrichment Analysis
We found some genes that are differentially expressed in COVID-19 patient cells. We want to link them to biological terms so we can reason about the response of cells to infection. Gene enrichment analysis is a method for making that connection. But first, we need some biological terms to link our genes to.
[Gene Ontology](http://geneontology.org/) (GO) is a database that stores annotated gene sets related to some broader function in human cells. It is built hierarchically; therefore, some gene sets might contain only a handful of genes and others a few thousand. Check the resource for more information. We have already prepared a JSON file, `data/GO_genesets.json`, containing GO terms and their genes. We will use these as gene sets in the enrichment analysis.
**Task:**
Implement a function `hypergeometric_pval` in the file `helper_functions.py` that calculates the p-value according to the hypergeometric distribution as a part of Gene Enrichment Analysis.
You can use the *scipy* library in your implementation.
Calculate the p-value for each gene set from the Gene Ontology `data/GO_genesets.json` file. Use FDR correction to correct these values.
Sort the GO terms by their p-values and check the descriptions of the top few ontologies.
Save the description of the highest ranking ontology in the `enriched_GO_term` variable.
Then answer the following questions:
- Search on the internet (e.g., Wikipedia) and reason about the validity of the enriched term. Does your result make sense?
Store your answers in the `GO_term_comments` variable.
**[5 points]**
<code>
from helper_functions import hypergeometric_pval
</code>
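As a starting point, here is a minimal sketch of how `hypergeometric_pval` could be written with *scipy*; the argument names and the choice of the whole expression matrix as the gene universe are assumptions, not part of the assignment:
<code>
from scipy.stats import hypergeom

def hypergeometric_pval(diff_expressed_genes, gene_set, all_genes):
    """P(overlap >= observed) between our DE genes and one GO gene set."""
    M = len(all_genes)                                  # population size (gene universe)
    n = len(set(gene_set) & set(all_genes))             # genes belonging to the GO term
    N = len(diff_expressed_genes)                       # genes we "drew" (the DE genes)
    k = len(set(diff_expressed_genes) & set(gene_set))  # observed overlap
    return hypergeom.sf(k - 1, M, n, N)                 # survival function: P(X >= k)

# One would then call this for every GO term in data/GO_genesets.json
# and FDR-correct the resulting p-values, as described above.
</code>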
<code>
enriched_GO_term = "description" # description field of an enriched GO term
</code>
<code>
GO_term_comments = """
Search on the internet (e.g., Wikipedia) and reason about the validity of the enriched term. Does your result make sense?
"""
</code>
## Bonus problem: Single-cell data analysis
We now know what a count matrix is and how to create one. However, the real fun begins when we start working with this matrix and applying statistical methods to uncover some interesting facts about the tissue. The methods we learned about in this course have been very bioinformatics-specific. We learned about DNA, alignment algorithms, graph assembly algorithms, etc. But now we have a matrix, and we can reach into other fields of statistical analyses with a wide range of tools. Machine learning is one of the most powerful toolboxes for finding structure in these kinds of matrices.
Single-cell data analysis usually involves many predefined steps that include using a mix of bioinformatics-specific procedures and more general machine-learning techniques, e.g., dimensionality reduction and clustering. Of course, we won't go into machine learning here -- there are entire courses dedicated to machine learning -- but we'll follow a simple tutorial to get our feet wet and get a feeling for what can be done with the count matrices we've created here. You will repeat some of the steps done in Problem 2.
In this exercise, we'll continue exploring the SARS-CoV-2 count matrix we started working with in Problem 2 (`data/homework5.h5ad`) and run a standard analysis pipeline.
We'll be using scanpy. Scanpy is a Python library for single-cell data analysis that provides a friendly and easy interface for working with single-cell data. Scanpy also comes complete with several helpful tutorials that are very useful when getting started. Follow this beginner clustering tutorial at https://scanpy-tutorials.readthedocs.io/en/latest/pbmc3k.html, and submit the required images.
For the differential expression step we need to define two groups to compare. List two different groups you could compare.
You're required to submit three images, each worth 5 points:
1. PCA (`sc_analysis_pca.svg`)
2. UMAP or t-SNE colored by clusters (`sc_analysis_clusters.svg`)
3. UMAP or t-SNE colored by some kind of differential expression (`sc_analysis_deg.svg`)
**[15 points]**
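To give a feel for the shape of that pipeline, below is a condensed, non-authoritative sketch using standard scanpy functions; the exact preprocessing choices and the Leiden clustering key are assumptions that only loosely follow the linked tutorial, so adjust them as needed:
<code>
import scanpy as sc

adata = sc.read_h5ad("data/homework5.h5ad")

# Standard preprocessing: total-count normalization and log-transform.
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)

# Dimensionality reduction, neighborhood graph, embedding, and clustering.
sc.pp.pca(adata)
sc.pp.neighbors(adata)
sc.tl.umap(adata)
sc.tl.leiden(adata)

# Differential expression between the clusters.
sc.tl.rank_genes_groups(adata, groupby="leiden")

sc.pl.pca(adata)                   # export this figure as sc_analysis_pca.svg
sc.pl.umap(adata, color="leiden")  # export this figure as sc_analysis_clusters.svg
</code>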
Then answer the following questions:
- When plotting the results of t-SNE, you can observe different clusters of cells. What do they represent?
Store your answers in the `tsne_clusters` variable.
**[1 point]**
<code>
tsne_clusters = """
When plotting the results of t-SNE, you can observe different clusters of cells. What do they represent?
"""
</code>
|
{
"filename": "homework-5_4.ipynb",
"repository": "IB-ULFRI/homework-5",
"query": "transformed_from_existing",
"size": 27818,
"sha": ""
}
|
# RL_1.ipynb
Repository: JoseEliel/RL

## LINKS: https://tinyurl.com/UUAIRL
## What Is Reinforcement Learning?
Imagine teaching someone to play a video game without being able to tell them the rules. You can only give them a thumbs up when they do something good and a thumbs down when they do something bad. Over time, they'd figure out what works and what doesn't through trial and error.
That's essentially what reinforcement learning (RL) is - a way for AI to learn by interacting with an environment and receiving feedback.
### Today we will use Pong as a case study
In our case, we're teaching an AI to play Pong by letting it:
- Try different paddle movements
- See what happens in the game
- Get rewards for hitting the ball
- Get penalties for missing the ball
- Gradually improve its strategy through experience
## The Key Components
Let's break down the essential parts of our reinforcement learning system:
1. **Agent**: The AI that controls the paddle
2. **Environment**: The Pong game
3. **State**: What our agent can observe about the game
- Ball x-position
- Ball y-position
- Paddle y-position
- Ball x-velocity
- Ball y-velocity
4. **Actions**: What our agent can do
- Move paddle up
- Stay in place
- Move paddle down
5. **Reward**: The feedback our agent receives
- Positive reward (+1) for hitting the ball
- Negative reward (-1) for missing the ball
- Small "shaping" rewards to guide learning

(image from wikimedia)
## The Learning Loop
Here's how the learning process works:
1. The agent observes the current state of the game
2. Based on this state, it chooses an action (move up, stay, or move down)
3. The game updates (the ball and paddle move)
4. The agent receives a reward
5. The agent observes the new state
6. Repeat until the game ends
7. After the game ends, the agent learns from what happened
This cycle happens over and over - thousands of times - as the agent gradually improves.
## BUT HOW DO WE DO THIS?
There are many ways to do reinforcement learning. It all hinges on the algorithm used for training.
- Do we know how to calculate the rewards?
- Or the expected rewards for all possible actions?
- Is it even possible?
- What is the thing that learns? A genetic algorithm? A Neural Network? ...

# BEHOLD AN ARTIFICIAL NEURON!!
<code>
import numpy as np
from ipywidgets import interact, FloatSlider
import matplotlib.pyplot as plt
def plot_neuron(input_value=1.0, weight=1.0, bias=0.0):
# Compute output using the neuron formula
output = input_value * weight + bias
# Create figure
fig, ax = plt.subplots(figsize=(10, 6))
ax.set_xlim(-1.5, 5.5)
ax.set_ylim(-2, 4.5)
ax.axis('off')
ax.set_aspect('equal', 'box') # Ensure circles are not squished
# Add formula text
formula_text = f"Formula: output = (input × weight) + bias\n" \
f" = ({input_value:.1f} × {weight:.1f}) + {bias:.1f}\n" \
f" = {output:.2f}"
ax.text(2.5, 4.2, formula_text, ha='center', va='top', fontsize=12,
bbox=dict(facecolor='white', alpha=0.9))
# Draw input node
ax.text(-1.2, 1.3, f"Input\n{input_value:0.2f}", fontsize=12, ha="center")
plt.plot([-1.4, -0.7], [1.0, 1.0], color='gray', lw=2, linestyle='--')
# Draw neuron
circle = plt.Circle((2.0, 1.0), 1, color='skyblue', ec='k', zorder=2)
ax.add_patch(circle)
ax.text(2.0, 1.0, "Neuron", fontsize=12, ha="center", va="center")
# Draw bias
ax.annotate("", xy=(2.0, 2), xytext=(2.0, 2.7),
arrowprops=dict(arrowstyle="->", color="red", lw=2))
ax.text(2.0, 2.8, f"Bias: {bias:0.2f}", color="red", ha="center", fontsize=12)
# Draw weight
ax.annotate("", xy=(1.0, 1.0), xytext=(-0.7, 1.0),
arrowprops=dict(arrowstyle="->", color="blue", lw=2))
ax.text(-0.1, 1.1, f"Weight: {weight:0.2f}", color="blue", ha="center", fontsize=12)
# Draw output
ax.annotate("", xy=(3, 1.0), xytext=(5.2, 1.0),
arrowprops=dict(arrowstyle="<-", color="green", lw=2))
ax.text(4.2, 1.1, f"Output: {output:0.2f}", color="green", ha="center", fontsize=12)
ax.set_title("Single Neuron with Linear Activation", fontsize=16)
plt.show()
# Create interactive widget
interact(plot_neuron,
input_value=FloatSlider(min=-5, max=5, step=0.1, value=1.0, description="Input"),
weight=FloatSlider(min=-5, max=5, step=0.1, value=1.0, description="Weight"),
bias=FloatSlider(min=-5, max=5, step=0.1, value=0.0, description="Bias"))
</code>
# A network
Several of those neurons (billions in the case of modern AI systems) are put together in a network, usually in layers that connect to each other, with each neuron multiplying, adding, and sending its output forward to more neurons.
<code>
# Imports
from ipywidgets import FloatSlider, VBox, HBox, interactive_output
from IPython.display import display
def plot_network(x1, x2, w1_00, w1_10, w1_01, w1_11, w2_0, w2_1):
# Compute the forward pass
h1 = x1 * w1_00 + x2 * w1_10
h2 = x1 * w1_01 + x2 * w1_11
output = h1 * w2_0 + h2 * w2_1
# Set up the figure
fig, ax = plt.subplots(figsize=(12, 6))
ax.set_xlim(-1.5, 5)
ax.set_ylim(-1.5, 3)
ax.axis('off')
ax.set_aspect('equal', 'box') # Ensure circles are not squished
# Define positions for nodes in each layer
positions = {
"x1": (0, 1.5),
"x2": (0, 0.5),
"h1": (2, 1.5),
"h2": (2, 0.5),
"output": (4, 1)
}
# Function to draw each node: a circle, with the label and node value inside
def draw_node(pos, value, label, color):
circle = plt.Circle(pos, 0.2, color=color, ec='k', zorder=5)
ax.add_patch(circle)
ax.text(pos[0], pos[1], f"{label}\n{value:.2f}",
ha='center', va='center', fontsize=10, zorder=6)
# Draw nodes for each layer
draw_node(positions["x1"], x1, "x₁", 'lightyellow')
draw_node(positions["x2"], x2, "x₂", 'lightyellow')
draw_node(positions["h1"], h1, "h₁", 'skyblue')
draw_node(positions["h2"], h2, "h₂", 'skyblue')
draw_node(positions["output"], output, "ŷ", 'lightgreen')
# Draw layer labels above the nodes
ax.text(positions["x1"][0], positions["x1"][1] + 0.6, "Input Layer",
ha='center', va='center', fontsize=12, fontweight='bold')
ax.text(positions["h1"][0], positions["h1"][1] + 0.6, "Hidden Layer",
ha='center', va='center', fontsize=12, fontweight='bold')
ax.text(positions["output"][0], positions["output"][1] + 0.6, "Output Layer",
ha='center', va='center', fontsize=12, fontweight='bold')
# Also add a clear summary of input and output values on the sides
ax.text(-1.3, 1, f"Inputs:\n x₁ = {x1:.2f}\n x₂ = {x2:.2f}",
fontsize=11, ha='center', va='center',
bbox=dict(facecolor='white', alpha=0.9, edgecolor='gray'))
ax.text(4.8, 1, f"Output:\n ŷ = {output:.2f}",
fontsize=11, ha='center', va='center',
bbox=dict(facecolor='white', alpha=0.9, edgecolor='gray'))
# Define the connections with their labels. Each connection is a tuple:
# (start_node, end_node, current weight value, weight label)
connections = [
("x1", "h1", w1_00, "w₁₀₀"),
("x2", "h1", w1_10, "w₁₁₀"),
("x1", "h2", w1_01, "w₁₀₁"),
("x2", "h2", w1_11, "w₁₁₁"),
("h1", "output", w2_0, "w₂₀"),
("h2", "output", w2_1, "w₂₁"),
]
# Function to draw an arrow (connection) with the connection label and weight value
def draw_arrow(start, end, weight, wt_label):
start_pos = np.array(positions[start])
end_pos = np.array(positions[end])
vector = end_pos - start_pos
length = np.linalg.norm(vector)
direction = vector / length
# Adjust start and end positions so the arrow doesn't overlap the node circles
start_adjust = start_pos + direction * 0.25
end_adjust = end_pos - direction * 0.25
# Draw arrow between nodes
ax.annotate("",
xy=end_adjust,
xytext=start_adjust,
arrowprops=dict(arrowstyle="->", color="gray", lw=1.5),
zorder=3)
# Place a label for the connection: show the weight variable and value
midpoint = (start_adjust + end_adjust) / 2.0
# Use a slight offset for clarity
offset = np.array([0.0, 0.15])
ax.text(midpoint[0] + offset[0], midpoint[1] + offset[1],
f"{wt_label}\n{weight:.2f}", fontsize=9, color="red",
ha='center', va='center', bbox=dict(facecolor='white', alpha=0.8, edgecolor='none'))
# Draw all connection arrows with labels
for start, end, weight, wt_label in connections:
draw_arrow(start, end, weight, wt_label)
# Place an explanation text block on the upper right, if desired
explanation_text = (
"Feedforward Computation:\n"
"1. Inputs x₁ and x₂ are each multiplied by their connection weights.\n"
"2. Hidden neurons sum these weighted inputs (h₁, h₂).\n"
"3. Hidden outputs are multiplied by output weights and summed to form ŷ."
)
ax.text(4.2, 2.7, explanation_text, fontsize=10,
bbox=dict(facecolor='white', edgecolor='gray', alpha=0.8),
ha='left', va='top')
plt.show()
### Create Interactive Widgets ###
# Input sliders for x1 and x2
slider_x1 = FloatSlider(min=-2, max=2, step=0.1, value=1.0, description="x₁")
slider_x2 = FloatSlider(min=-2, max=2, step=0.1, value=1.0, description="x₂")
# Sliders for weights connecting inputs to the hidden layer
slider_w1_00 = FloatSlider(min=-2, max=2, step=0.1, value=1.0, description="w₁₀₀")
slider_w1_10 = FloatSlider(min=-2, max=2, step=0.1, value=1.0, description="w₁₁₀")
slider_w1_01 = FloatSlider(min=-2, max=2, step=0.1, value=1.0, description="w₁₀₁")
slider_w1_11 = FloatSlider(min=-2, max=2, step=0.1, value=1.0, description="w₁₁₁")
# Sliders for weights connecting the hidden layer to the output
slider_w2_0 = FloatSlider(min=-2, max=2, step=0.1, value=1.0, description="w₂₀")
slider_w2_1 = FloatSlider(min=-2, max=2, step=0.1, value=1.0, description="w₂₁")
# Organize the slider layout
inputs_box = HBox([slider_x1, slider_x2])
weights_input_hidden = HBox([slider_w1_00, slider_w1_10, slider_w1_01, slider_w1_11])
weights_hidden_output = HBox([slider_w2_0, slider_w2_1])
ui = VBox([inputs_box, weights_input_hidden, weights_hidden_output])
# Set up the interactive output
out = interactive_output(plot_network, {
"x1": slider_x1,
"x2": slider_x2,
"w1_00": slider_w1_00,
"w1_10": slider_w1_10,
"w1_01": slider_w1_01,
"w1_11": slider_w1_11,
"w2_0": slider_w2_0,
"w2_1": slider_w2_1,
})
# Display the interactive UI and plot
display(ui, out)
</code>
## What is an Activation Function?
An activation function determines whether a neuron in a neural network should be activated ("fired") or not, based on the input it receives.
## Why are Activation Functions Important?
### The Key Problem: Linear Limitations
Without activation functions, neural networks can only perform linear operations (multiplying and adding). Here's why this is a problem:
- **Linear operations can only create linear solutions**: No matter how many layers you stack, if each layer only does multiplication and addition, the entire network can only learn straight-line relationships between inputs and outputs.
- **Real-world problems aren't linear**: Most interesting problems (image recognition, language understanding, etc.) involve complex, curved relationships that can't be solved with just straight lines.
### How Activation Functions Solve This:
Activation functions introduce "bends" into the system. When we add an activation function:
1. The neuron can now respond differently to different input ranges
2. When combined with other neurons, these "bends" allow the network to approximate any curved shape
3. This enables the network to learn complex patterns that simple linear models cannot capture
**Simple example**: Imagine trying to separate data points in an X-shape. A straight line can never separate these points correctly, but with activation functions creating "bends," the network can learn the right boundary.
## The ReLU Activation Function
ReLU (Rectified Linear Unit) creates this crucial non-linearity in a very simple way:
- For negative inputs → output is 0
- For positive inputs → output is the same as the input
This simple "bend" at zero is enough to allow neural networks to learn incredibly complex patterns when many neurons work together.

## Training vs. Playing: Two Different Modes
It's important to understand the two modes of our agent:
### Training Mode
- Agent chooses actions randomly at first, based on probabilities from the network
- It records everything that happens (states, actions, rewards)
- After each game, it updates its neural network to improve
- This involves exploration (trying new things)
### Playing Mode
- Agent always chooses the action with highest probability
- No more randomness or exploration
- No more learning or updates
- Just using what it has learned
We spend most of our time in training mode, then switch to playing mode when the agent is ready.
There is no "one way" to do reinforcement learning. We don't have time to go through all cases, but we will look at one that is particularly useful for games.
## The REINFORCE Algorithm: Learning from Success and Failure
Now let's understand how our agent actually learns. We're using an algorithm called REINFORCE, which we'll explain step-by-step:
### Step 1: Play a Complete Game
The agent plays a full game of Pong until it misses the ball (game over). During this game, we record:
- Each state it observed
- Each action it took
- Each reward it received
Let's say our agent played a game that lasted 50 moves before missing the ball. We now have 50 (state, action, reward) tuples stored in memory.
### Step 2: Calculate the "Returns"
We need to know which actions were actually good in the long run. This is tricky because sometimes an action might look good immediately but lead to failure later.
To solve this, we calculate the "return" for each step - essentially the total future reward from that point onwards, with future rewards discounted (valued less than immediate rewards).
For each step t, we calculate:
$$
\text{Return}(t) = \text{Reward}(t) + \gamma \cdot \text{Reward}(t+1) + \gamma^2 \cdot \text{Reward}(t+2) + \cdots
$$
Where $\gamma$ is a number between 0 and 1 that determines how much we care about future rewards.
#### Example:
If our rewards were [0, 0, 0, 1, 0, 0, -1] and gamma is 0.9:
- Return at step 6 = -1
- Return at step 5 = 0 + 0.9 * (-1) = -0.9
- Return at step 4 = 0 + 0.9 * (-0.9) = -0.81
- Return at step 3 = 1 + 0.9 * (-0.81) = 0.271
- ...and so on
This gives us a better measure of how good each action really was.
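This backwards calculation is only a few lines of code; the helper below reproduces the numbers from the example above (the agent later in this notebook does the same thing inside `finish_episode`):
<code>
def discounted_returns(rewards, gamma):
    returns = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):     # walk backwards through the episode
        running = rewards[t] + gamma * running  # G_t = r_t + gamma * G_{t+1}
        returns[t] = running
    return returns

print(discounted_returns([0, 0, 0, 1, 0, 0, -1], gamma=0.9))
# last four returns: ~0.271, -0.81, -0.9, -1.0, matching the example above
</code>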
### Step 3: Update the Policy Network
Now comes the crucial part - we need to adjust our neural network to make good actions more likely and bad actions less likely in the future.
*ROUGHLY* For each (state, action, return) tuple:
1. Feed the state into the network to get the current probabilities
2. Increase the probability of the action taken if the return was positive
3. Decrease the probability of the action taken if the return was negative
#### How Weights Actually Change
This is where we need to understand how neural networks learn:
1. Each connection in our neural network has a "weight" - just a number that determines how strong that connection is.
2. These weights determine the final probabilities output by the network.
3. To make an action more likely, we need to adjust the weights that led to that action.
Let's break this down with a simple example:
Imagine our network gave these probabilities for a particular state:
- UP: 30%
- STAY: 50%
- DOWN: 20%
The agent selected STAY (based on these probabilities), and this eventually led to a positive return of 0.8.
We want to adjust our network to make STAY even more likely in this situation next time. The math works out such that:
- Weights that contributed to the STAY probability get increased
- The larger the return (0.8 in this case), the larger the increase
- Weights that didn't contribute to STAY don't change much
The technical term for this process is "gradient ascent on the policy parameters" - but you can think of it as "tweak the weights to make good actions more likely."
### Step 4: Repeat
This process is repeated for several episodes, iteratively updating the policy in the direction of higher rewards.
# Let's write the game engine
<code>
# -------------------------------------------------------------------------
# GAME CONSTANTS (these would typically be in a header file in C++)
# -------------------------------------------------------------------------
WIDTH = 400 # Game screen width in pixels
HEIGHT = 250 # Game screen height in pixels
PADDLE_HEIGHT = 70 # Paddle height in pixels
PADDLE_WIDTH = 15 # Paddle width in pixels
PADDLE_MOVE_SPEED = 5 # How fast the paddle moves when a key is pressed
BALL_RADIUS = 7 # Ball radius in pixels
BALL_SPEED = 5 # Base ball speed in pixels per frame
</code>
<code>
import time
import random
import threading
import numpy as np
from ipycanvas import Canvas
import ipywidgets as widgets
from ipyevents import Event
from IPython.display import display
def pong_step(state, action):
"""
Update ball and right-paddle state.
state: [ball_x, ball_y, paddle_y, ball_dx, ball_dy]
action (for right paddle): 0 = up, 1 = none, 2 = down.
Returns a list of native Python floats.
"""
ball_x, ball_y, paddle_y, ball_dx, ball_dy = state
# Update AI paddle position (right paddle) based on action.
if action == 0:
paddle_y = max(0.0, paddle_y - PADDLE_MOVE_SPEED)
elif action == 2:
paddle_y = min(HEIGHT - PADDLE_HEIGHT, paddle_y + PADDLE_MOVE_SPEED)
# Update ball position.
ball_x += ball_dx
ball_y += ball_dy
# Bounce off top and bottom walls.
if ball_y - BALL_RADIUS < 0:
ball_y = BALL_RADIUS
ball_dy = abs(ball_dy)
elif ball_y + BALL_RADIUS > HEIGHT:
ball_y = HEIGHT - BALL_RADIUS
ball_dy = -abs(ball_dy)
# Handle collision with the right (AI) paddle.
if ball_x + BALL_RADIUS >= WIDTH - PADDLE_WIDTH:
# If ball hits paddle, bounce back.
if paddle_y <= ball_y <= (paddle_y + PADDLE_HEIGHT):
ball_x = WIDTH - PADDLE_WIDTH - BALL_RADIUS
ball_dx = -abs(ball_dx)
else:
# If the paddle missed, reset the ball and the right paddle.
ball_x = WIDTH / 2.0
ball_y = random.uniform(BALL_RADIUS, HEIGHT - BALL_RADIUS)
ball_dx = BALL_SPEED
ball_dy = BALL_SPEED
paddle_y = (HEIGHT - PADDLE_HEIGHT) / 2.0
# (Left wall collision handled in game loop)
return [float(ball_x), float(ball_y), float(paddle_y), float(ball_dx), float(ball_dy)]
class PongGame:
def __init__(self, ai_function):
"""
Initialize the Pong game.
ai_function(ball_x, ball_y, paddle_y, ball_dx, ball_dy)
should return one of: "up", "none", or "down" for the right paddle.
"""
# Left (player) paddle and ball state.
self.left_paddle_y = (HEIGHT - PADDLE_HEIGHT) / 2.0
self.right_paddle_y = (HEIGHT - PADDLE_HEIGHT) / 2.0
self.ball_x = WIDTH / 2.0
self.ball_y = random.uniform(BALL_RADIUS, HEIGHT - BALL_RADIUS)
self.ball_dx = BALL_SPEED
self.ball_dy = BALL_SPEED
self.ai_function = ai_function
# Movement flags for the left paddle.
self.left_up_active = False
self.left_down_active = False
self.running = False
self._create_widgets()
def _create_widgets(self):
# Create the game canvas.
self.canvas = Canvas(width=WIDTH, height=HEIGHT)
display(self.canvas)
# Create control buttons.
self.btn_left_up = widgets.Button(
description="UP▲", layout=widgets.Layout(width='100px'), button_style='info')
self.btn_left_down = widgets.Button(
description="▼DOWN", layout=widgets.Layout(width='100px'), button_style='info')
self.btn_stop = widgets.Button(
description="STOP GAME", layout=widgets.Layout(width='100px', height='40px'),
button_style='danger')
# Set up ipyevents on the left paddle buttons for mousedown/up/leave.
event_up = Event(source=self.btn_left_up, watched_events=['mousedown', 'mouseup', 'mouseleave'])
event_up.on_dom_event(self._handle_left_up)
event_down = Event(source=self.btn_left_down, watched_events=['mousedown', 'mouseup', 'mouseleave'])
event_down.on_dom_event(self._handle_left_down)
# Stop button uses normal on_click.
self.btn_stop.on_click(self._stop_game)
# Display control buttons.
controls = widgets.VBox([widgets.HBox([self.btn_left_up, self.btn_left_down]), self.btn_stop])
display(controls)
def _handle_left_up(self, event):
# When the up button is pressed, set the flag; released/leave clears it.
if event['type'] == 'mousedown':
self.left_up_active = True
elif event['type'] in ['mouseup', 'mouseleave']:
self.left_up_active = False
def _handle_left_down(self, event):
# When the down button is pressed, set the flag; released/leave clears it.
if event['type'] == 'mousedown':
self.left_down_active = True
elif event['type'] in ['mouseup', 'mouseleave']:
self.left_down_active = False
def _draw(self):
# Draw ball first (helps with flicker)
self.canvas.fill_style = 'black'
self.canvas.fill_circle(self.ball_x, self.ball_y, BALL_RADIUS)
# Clear the canvas and redraw all elements in the correct order.
self.canvas.clear()
# Draw background first
self.canvas.fill_style = 'white'
self.canvas.fill_rect(0, 0, WIDTH, HEIGHT)
# Draw paddles
self.canvas.fill_style = 'blue'
self.canvas.fill_rect(0, self.left_paddle_y, PADDLE_WIDTH, PADDLE_HEIGHT)
self.canvas.fill_style = 'red'
self.canvas.fill_rect(WIDTH - PADDLE_WIDTH, self.right_paddle_y, PADDLE_WIDTH, PADDLE_HEIGHT)
# Draw ball last (on top of everything else)
self.canvas.fill_style = 'black'
self.canvas.fill_circle(self.ball_x, self.ball_y, BALL_RADIUS)
def _reset_ball(self):
# Reset the ball to the center with a random vertical position.
self.ball_x = WIDTH / 2.0
self.ball_y = random.uniform(BALL_RADIUS, HEIGHT - BALL_RADIUS)
self.ball_dx = BALL_SPEED
self.ball_dy = BALL_SPEED
self.right_paddle_y = (HEIGHT - PADDLE_HEIGHT) / 2.0
def game_loop(self):
fps_delay = 1.0 / 30.0 # approximately 30 FPS
mapping = {"up": 0, "none": 1, "down": 2}
while self.running:
# Move the left paddle based on button flags.
if self.left_up_active:
self.left_paddle_y = max(0.0, self.left_paddle_y - PADDLE_MOVE_SPEED)
if self.left_down_active:
self.left_paddle_y = min(HEIGHT - PADDLE_HEIGHT, self.left_paddle_y + PADDLE_MOVE_SPEED)
# Build the game state for the ball and right paddle.
state = [self.ball_x, self.ball_y, self.right_paddle_y, self.ball_dx, self.ball_dy]
# Get the AI action for the right paddle.
ai_action = self.ai_function(self.ball_x, self.ball_y, self.right_paddle_y, self.ball_dx, self.ball_dy)
action_int = mapping.get(ai_action, 1)
# Update ball position and the right paddle using pong_step.
new_state = pong_step(state, action_int)
self.ball_x, self.ball_y, self.right_paddle_y, self.ball_dx, self.ball_dy = new_state
# Check collision with the left (player) paddle.
if self.ball_x - BALL_RADIUS <= PADDLE_WIDTH:
if self.left_paddle_y <= self.ball_y <= (self.left_paddle_y + PADDLE_HEIGHT):
# Bounce the ball off the player's paddle.
self.ball_x = PADDLE_WIDTH + BALL_RADIUS
self.ball_dx = abs(self.ball_dx)
else:
# The player missed: reset the ball.
self._reset_ball()
self._draw()
time.sleep(fps_delay)
def start(self):
self.running = True
# Run the game loop in a separate thread to free the UI thread.
self.thread = threading.Thread(target=self.game_loop, daemon=True)
self.thread.start()
def _stop_game(self, _):
self.running = False
self.btn_stop.description = "Stopped"
self.btn_stop.disabled = True
self.left_up_active = False
self.left_down_active = False
def start_game(ai_function):
"""
Initialize and start the Pong game.
Provide an ai_function(ball_x, ball_y, paddle_y, ball_dx, ball_dy)
that returns "up", "none", or "down" for controlling the right paddle.
"""
game = PongGame(ai_function)
game.start()
return game
</code>
<code>
# This is how we use it.
# Some useful things for you to use in your implementation
# paddle_center = paddle_y + PADDLE_HEIGHT/2
# ball_center = ball_y + BALL_RADIUS
# --- Example AI Function ---
def simple_ai(ball_x, ball_y, paddle_y, ball_dx, ball_dy):
"""
A basic AI: move the paddle up or down so that its center follows the ball.
It returns "up" if the paddle should move up, "down" if it should move down,
and "none" if it should stay still.
"""
#TASK 1: YOUR CODE HERE
# Move the paddle up if the ball is above the center of the paddle and the ball dx is positive
if ball_y < paddle_y + PADDLE_HEIGHT/2 and ball_dx > 0:
return "up"
# Move the paddle down if the ball is below the center of the paddle and the ball dx is positive
elif ball_y > paddle_y + PADDLE_HEIGHT/2 and ball_dx > 0:
return "down"
# --- Start the Game ---
# Pass the AI function you want to use.
start_game(simple_ai)
</code>
# Implementing Reinforcement Learning
Now that we've set up our Pong game environment, we're ready to create and train an AI that can learn to play. We'll be using a reinforcement learning approach called REINFORCE (a type of policy gradient method).
## What was Reinforcement Learning again?
Reinforcement learning works by trial and error:
- The agent (our AI) takes actions in the environment
- It receives feedback in the form of rewards
- It learns to take actions that maximize its total reward
Think of it like training a dog: we don't tell it exactly how to catch a frisbee, but we reward it when it does, and over time it figures out the best strategy.
## Our Implementation Plan
Here's what we'll do next:
1. **Build the Agent**: Create a class that:
- Contains a neural network (the "brain" of our AI)
- Can choose actions based on the game state
- Keeps track of its experiences (states, actions, rewards)
- Can learn from these experiences
2. **Train the Agent**: Run many games where:
- The agent observes the state and chooses actions
- We record what happens (rewards received)
- After each game, the agent updates its neural network to improve
3. **Use the Trained Agent**: Once training is complete, we can use our AI to play the game based on what it has learned.
## Key Components
Our implementation will include:
- **Neural Network**: A simple model with one hidden layer that takes the game state as input and outputs probabilities for each action (up, stay, down)
- **Action Selection**: During training, actions will be chosen probabilistically to encourage exploration. After training, the agent will choose the most likely action.
- **REINFORCE Algorithm**: This is how our agent will learn. After each game:
- It calculates the cumulative rewards from each time step
- It adjusts its neural network to make actions that led to good outcomes more likely in the future
- **Reward Shaping**: To help speed up learning, we'll provide small intermediate rewards for keeping the paddle near the ball.
The code we're about to implement will transform our Pong environment into a learning playground for our AI. By the end of training, we should have an agent that can effectively track and hit the ball.
<code>
# %% Pong Reinforcement Learning Tutorial
# This file demonstrates how to create an AI for Pong using reinforcement learning
# Intended for game design students with C++ background who are new to Python and ML
import time
import numpy as np # NumPy handles arrays and math operations (like C++ vectors but more powerful)
import tensorflow as tf # TensorFlow is a machine learning library
from tensorflow import keras # Keras is a high-level neural network API
from keras import layers # Layers are the building blocks of neural networks
# -------------------------------------------------------------------------
# GAME PHYSICS AND REWARD SYSTEM
# -------------------------------------------------------------------------
def rl_step(state, action):
"""
Simulates one step of the Pong game physics and calculates rewards.
This is similar to the Update() or Step() function you might have in a C++ game loop.
Parameters:
- state: [ball_x, ball_y, paddle_y, ball_dx, ball_dy] - Current game state
- action: What the paddle should do (0 = move up, 1 = stay still, 2 = move down)
Returns:
- new_state: Updated game state after this step
- reward: Positive or negative feedback based on the agent's performance
- done: Whether the game is over (ball passed the paddle)
"""
# Unpack the state values for readability - similar to struct access in C++
ball_x, ball_y, paddle_y, ball_dx, ball_dy = state
# -------------------------------------------------------------------------
# 1. UPDATE PADDLE POSITION BASED ON ACTION
# -------------------------------------------------------------------------
if action == 0: # Move paddle up
paddle_y = max(0.0, paddle_y - PADDLE_MOVE_SPEED) # Prevent going above the screen
elif action == 2: # Move paddle down
paddle_y = min(HEIGHT - PADDLE_HEIGHT, paddle_y + PADDLE_MOVE_SPEED) # Prevent going below the screen
# If action == 1, the paddle doesn't move (stays in place)
# -------------------------------------------------------------------------
# 2. UPDATE BALL POSITION
# -------------------------------------------------------------------------
ball_x += ball_dx # Move ball horizontally
ball_y += ball_dy # Move ball vertically
# -------------------------------------------------------------------------
# 3. HANDLE BALL COLLISIONS WITH TOP AND BOTTOM WALLS
# -------------------------------------------------------------------------
if ball_y - BALL_RADIUS < 0: # Ball hits top wall
ball_y = BALL_RADIUS # Reposition to prevent getting stuck in wall
ball_dy = abs(ball_dy) # Flip vertical direction to positive (downward)
if ball_y + BALL_RADIUS > HEIGHT: # Ball hits bottom wall
ball_y = HEIGHT - BALL_RADIUS # Reposition to prevent getting stuck in wall
ball_dy = -abs(ball_dy) # Flip vertical direction to negative (upward)
# -------------------------------------------------------------------------
# 4. CALCULATE REWARD FOR THE AI
# -------------------------------------------------------------------------
# "Shaping" rewards guide the AI toward better behavior before it succeeds
# This is like giving hints rather than just win/lose feedback
# Calculate how well the paddle is positioned relative to the ball
paddle_center = paddle_y + PADDLE_HEIGHT / 2.0
# This gives higher rewards when paddle is closer to ball's height
shaping_factor = 0.8
shaping_reward = (1 - abs(paddle_center - ball_y)/HEIGHT) * shaping_factor
# Small penalty for not moving, to encourage active play
if action == 1: # If the paddle didn't move
shaping_reward -= 0.1 # Apply small penalty
# Start with the shaping reward
reward = shaping_reward
done = False # Game continues by default
# -------------------------------------------------------------------------
# 5. HANDLE BALL COLLISION WITH RIGHT PADDLE (AI's paddle)
# -------------------------------------------------------------------------
if ball_x + BALL_RADIUS >= WIDTH - PADDLE_WIDTH: # Ball reaches the right edge where paddle is
if paddle_y <= ball_y <= (paddle_y + PADDLE_HEIGHT): # Ball hits paddle
# Successful hit!
reward = shaping_reward + 1.0 # Big reward for hitting the ball
# Push ball back a bit so it doesn't get stuck inside paddle
ball_x = WIDTH - PADDLE_WIDTH - BALL_RADIUS
# Reverse horizontal direction
ball_dx = -abs(ball_dx)
else:
# Ball missed the paddle - game over!
reward = shaping_reward - 1.0 # Penalty for missing
done = True # End the game
# -------------------------------------------------------------------------
# 6. HANDLE BALL COLLISION WITH LEFT WALL (where an opponent would be)
# -------------------------------------------------------------------------
if ball_x - BALL_RADIUS <= 0: # Ball hits left wall
ball_x = BALL_RADIUS # Reposition
ball_dx = abs(ball_dx) # Reverse direction to positive (rightward)
# -------------------------------------------------------------------------
# 7. PREPARE AND RETURN THE NEW GAME STATE
# -------------------------------------------------------------------------
new_state = np.array([ball_x, ball_y, paddle_y, ball_dx, ball_dy], dtype=np.float32)
return new_state, reward, done
</code>
# How Neural Network Weights Change in REINFORCE
## Weights as Sensitivity Knobs
Neural network weights act like **tunable dials** that determine how the agent interprets game states. These numbers evolve to prioritize actions that maximize rewards.
---
## Neural Network Architecture
| Component | Description |
|-----------------|-----------------------------------------------------------------------------|
| **Input Layer** | Receives game state (ball/paddle positions, velocities) |
| **Weights** | Numerical values controlling signal strength between neurons |
| **Output Layer**| Produces probability distribution over actions (UP/STAY/DOWN) |
---
## Learning Process: Step-by-Step
1. **Forward Pass**
- Process game steps through the network → get action probabilities
- *Example Output:* UP (20%), STAY (30%), DOWN (50%)
2. **Action Selection**
- Randomly sample from probabilities (e.g., chooses DOWN)
3. **Reward Calculation**
- Compute discounted return for trajectory segment
- *Example Return:* +0.7 (accounts for future rewards)
4. **Backpropagation**
The gradient is a mathematical concept that tells us the direction and magnitude of the steepest increase of a function. In neural networks, it's essentially a collection of partial derivatives that indicate how a small change in each weight would affect the output.
Key Points About Gradients:
- **Derivative Connection:** The gradient is built from partial derivatives – these measure how much the network's output changes when you slightly adjust a specific weight, while keeping all other weights constant.
- **Direction of Improvement:** When maximizing rewards, the gradient points in the direction where weights should change to increase the probability of beneficial actions.
- **Visualization:** Think of the gradient as a compass pointing "uphill" on a landscape where elevation represents better performance. The steeper the hill, the larger the gradient magnitude.

image from:https://ds100.org/course-notes/feature_engineering/feature_engineering.html
When the REINFORCE algorithm multiplies this gradient by the return value, it strengthens connections that led to good outcomes (positive returns) and weakens those that led to poor ones (negative returns), proportional to how much each weight influenced the chosen action.
```python
# Pseudocode for weight update logic
for weight in network:
if weight encouraged chosen_action (DOWN):
weight += learning_rate * return * gradient
else:
weight -= learning_rate * return * gradient
```
# REINFORCE Algorithm: Formula and Code Implementation
## Standard Expression vs. Code Implementation
The standard REINFORCE formula is typically written for a single timestep:
$$\Large \theta_{t+1} = \theta_t + \alpha \nabla_\theta \log \pi_\theta(a_t \mid s_t) G_t$$
However, in practical implementation, we update across an entire episode of multiple timesteps at once.
## Episode-Based Approach
That means that we just average:
$$\Large \theta_{new} = \theta_{old} + \alpha \nabla_\theta \left( \frac{1}{T} \sum\limits_{t=0}^{T-1} \log \pi_\theta(a_t \mid s_t) G_t \right)$$
Or equivalently, written as minimizing a loss function:
$$\Large \theta_{new} = \theta_{old} - \alpha \nabla_\theta \left( -\frac{1}{T} \sum\limits_{t=0}^{T-1} \log \pi_\theta(a_t \mid s_t) G_t \right)$$
Where:
- $T$ is the number of timesteps in the episode
- $\frac{1}{T} \sum_{t=0}^{T-1}$ represents the averaging operation (implemented as `reduce_mean`)
- The negative sign in the second formula corresponds to the negative in `loss = -tf.reduce_mean(weighted_log_pi)`
## Benefits of Episode-Based Updates
The averaging across timesteps helps stabilize training by reducing the variance in policy updates. Rather than making large updates based on individual timesteps, the policy is updated based on the average performance across the entire episode.
This approach better captures what's actually happening in the code: a batch update using the average gradient across all timesteps in the episode, rather than separate updates for each individual timestep.
## Long-Term Evolution
| Training Stage | Weight Behavior | Agent Performance |
|----------------|-------------------------------------|---------------------------------|
| Early | Large random fluctuations | Frequent misses |
| Mid | Pattern-specific boosting | Consistent returns |
| Late | Fine-tuned precision adjustments | Strategic positioning |
---
<code>
# ===========================================================================
# REINFORCEMENT LEARNING AGENT
# ===========================================================================
# This is the "brain" of our AI paddle that learns to play pong
class RLAgent:
def __init__(self, learning_rate=5e-3, gamma=0.76):
"""
Initialize the AI agent.
Parameters:
- learning_rate: How quickly the model adapts to new information (like step size)
- gamma: Discount factor - how much future rewards matter compared to immediate ones
"""
self.gamma = gamma # Store the discount factor for future rewards
# -------------------------------------------------------------------------
# CREATE THE NEURAL NETWORK MODEL
# -------------------------------------------------------------------------
# This is similar to creating a class with methods in C++, but using a
# pre-built system for machine learning
self.model = keras.Sequential([
# Input layer takes 5 values (the game state)
layers.Input(shape=(5,)),
# Hidden layer with 8 neurons and ReLU activation
# ReLU simply means "if value < 0, output 0, else output the value"
layers.Dense(8, activation='relu'),
# Output layer with 3 neurons (one for each possible action)
# Softmax makes the outputs into probabilities that sum to 1
layers.Dense(3, activation='softmax')
])
# Initialize the optimizer which adjusts the neural network
# Think of this as the "learning algorithm"
self.optimizer = tf.keras.optimizers.Adam(learning_rate)
# -------------------------------------------------------------------------
# SAVING BUFFERS
# -------------------------------------------------------------------------
# These store the agent's experiences to learn from
# Like recording gameplay to study later
self.states = [] # Game states we've seen
self.actions = [] # Actions we took
self.rewards = [] # Rewards we received
def _normalize_state(self, state):
"""
Scale the state values to a range between 0 and 1.
This helps the neural network learn more efficiently,
similar to how you'd normalize a 3D model's coordinates.
"""
return np.array([
state[0] / WIDTH, # x position relative to screen width
state[1] / HEIGHT, # y position relative to screen height
state[2] / HEIGHT, # paddle position relative to screen height
state[3] / BALL_SPEED, # x velocity relative to maximum
state[4] / BALL_SPEED, # y velocity relative to maximum
], dtype=np.float32)
def choose_action(self, state):
"""
Decide what action to take based on the current game state.
This is like the AI's "think" function that runs every frame.
Parameters:
- state: Current game state [ball_x, ball_y, paddle_y, ball_dx, ball_dy]
Returns:
- action: 0 (move up), 1 (stay), or 2 (move down)
"""
# Normalize the state values to help the neural network
# Normalization is like scaling values to a common range, for vectors it is making their length 1
norm_state = self._normalize_state(state).reshape(1, -1)
# Ask the neural network what to do
# It returns probabilities for each possible action
probs = self.model(norm_state).numpy().flatten()
# Choose an action based on the probabilities
# This adds randomness for exploration (trying new strategies)
action = np.random.choice(3, p=probs)
# Remember what we saw and what we did for learning later
self.states.append(norm_state)
self.actions.append(action)
return action
def store_reward(self, reward):
"""
Store the reward received after taking an action.
Parameters:
- reward: The feedback value received from the environment
"""
self.rewards.append(reward)
def finish_episode(self):
"""
Perform the REINFORCE update on the policy network.
The update rule is:
θₜ₊₁ = θₜ + α · ∇θ log π₍θ₎(aₜ | sₜ) · Gₜ
This function implements each step explicitly.
"""
# -------------------------------------------------------------------------
# 1. COMPUTE THE DISCOUNTED RETURNS (Gₜ)
# -------------------------------------------------------------------------
# For each timestep t, compute the return G_t = r_t + γ * r_{t+1} + γ² * r_{t+2} + ...
G_t = np.zeros_like(self.rewards, dtype=np.float32) # Gₜ
cumulative_return = 0.0
for t in reversed(range(len(self.rewards))):
cumulative_return = self.rewards[t] + self.gamma * cumulative_return # Gₜ = rₜ + γ · Gₜ₊₁
G_t[t] = cumulative_return
# Optionally normalize returns for more stable learning
baseline = np.mean(G_t)
G_t = G_t - baseline # Normalized Gₜ
# -------------------------------------------------------------------------
# 2. PREPARE DATA: STATES (sₜ), ACTIONS (aₜ), RETURN (Gₜ)
# -------------------------------------------------------------------------
states = np.concatenate(self.states, axis=0) # States: sₜ
actions = np.array(self.actions) # Actions: aₜ
returns = G_t # Returns: Gₜ
# -------------------------------------------------------------------------
# 3. COMPUTE THE POLICY OBJECTIVE AND GRADIENT (∇θ log π₍θ₎(aₜ|sₜ) · Gₜ)
# -------------------------------------------------------------------------
with tf.GradientTape() as tape:
# Forward pass: Compute the action probabilities π₍θ₎(a | s) for all states.
action_probs = self.model(states, training=True) # π₍θ₎(·|s)
# Create a one-hot vector for actions, so we can select the probability of the executed action.
one_hot_actions = tf.one_hot(actions, depth=3) # Assume 3 actions. This is our mask.
# Select the probability for the taken action: π₍θ₎(aₜ|sₜ)
prob_taken = tf.reduce_sum(action_probs * one_hot_actions, axis=1)
# Compute log probability: log π₍θ₎(aₜ|sₜ)
log_pi = tf.math.log(prob_taken + 1e-8)
# Multiply by the return Gₜ: The term inside the gradient is log π₍θ₎(aₜ|sₜ) * Gₜ
weighted_log_pi = log_pi * returns
# Our objective (to be maximized) is the average policy "score":
# Objective = E[log π₍θ₎(aₜ|sₜ) * Gₜ]
# We minimize the negative of this objective:
loss = -tf.reduce_mean(weighted_log_pi)
# -------------------------------------------------------------------------
# 4. COMPUTE GRADIENTS AND UPDATE THE MODEL PARAMETERS
# -------------------------------------------------------------------------
# Compute the gradient: ∇θ [ - (log π₍θ₎(aₜ | sₜ) * Gₜ) ]
gradients = tape.gradient(loss, self.model.trainable_variables)
# The optimizer updates the parameters using the learning rate (α) set during its initialization.
# This implements: θₜ₊₁ = θₜ + α · ∇θ log π₍θ₎(aₜ|sₜ) · Gₜ
self.optimizer.apply_gradients(zip(gradients, self.model.trainable_variables))
# -------------------------------------------------------------------------
# 5. RESET EPISODE MEMORY FOR THE NEXT EPISODE
# -------------------------------------------------------------------------
self.states, self.actions, self.rewards = [], [], []
def get_action(self, state):
"""
Choose the best action without randomness (for actual gameplay).
This is used after training when we want the AI to play its best.
Parameters:
- state: Current game state
Returns:
- Best action (0, 1, or 2)
"""
norm_state = self._normalize_state(state).reshape(1, -1)
probs = self.model(norm_state).numpy().flatten()
return np.argmax(probs) # Choose the action with highest probability
# ===========================================================================
# TRAINING LOOP
# ===========================================================================
def train_agent(num_episodes=1000):
"""
Train the agent by playing many games and learning from them.
Parameters:
- num_episodes: Number of games to play for training
Returns:
- trained_agent: The agent after training
"""
# Create a new agent
agent = RLAgent()
total_rewards = [] # Track rewards for analysis
max_steps_reached = 0 # Track the longest game
# Play multiple games to train
for i in range(num_episodes):
# -------------------------------------------------------------------------
# 1. SET UP A NEW GAME WITH RANDOM STARTING CONDITIONS
# -------------------------------------------------------------------------
# Randomize the ball and paddle positions for varied training
ball_y_random = np.random.uniform(BALL_RADIUS, HEIGHT - BALL_RADIUS)
paddle_y_random = np.random.uniform(0, HEIGHT - PADDLE_HEIGHT)
# Initialize the game state
state = np.array([
WIDTH / 2.0, # Ball starts in the middle horizontally
ball_y_random, # Random vertical position
paddle_y_random, # Random paddle position
BALL_SPEED, # Ball initially moves right
BALL_SPEED # Ball initially moves down
], dtype=np.float32)
# -------------------------------------------------------------------------
# 2. PLAY THE GAME UNTIL COMPLETION OR MAX STEPS
# -------------------------------------------------------------------------
episode_reward = 0.0 # Total reward for this game
done = False # Game not finished yet
step = 0 # Step counter
max_steps = 500 # Maximum steps per game (to prevent infinite games)
# Game loop - similar to your C++ game loop
while not done and step < max_steps:
# AI chooses an action
action = agent.choose_action(state)
# Update the game state based on the action
state, reward, done = rl_step(state, action)
# Store the reward for learning
agent.store_reward(reward)
# Keep track of total reward
episode_reward += reward
# Increment step counter
step += 1
# -------------------------------------------------------------------------
# 3. LEARN FROM THIS GAME
# -------------------------------------------------------------------------
agent.finish_episode()
# Store results for analysis
total_rewards.append(episode_reward)
max_steps_reached = max(max_steps_reached, step)
if (i+1) % 100 == 0:
print(f"Episode {i+1}/{num_episodes}: Steps= {step}, Total Reward= {episode_reward:.2f}, Max Steps reached= {max_steps_reached}")
max_steps_reached = 0
return agent
# Train the agent.
trained_agent = train_agent(num_episodes=500)
# Wrap the trained agent into an AI function for gameplay.
def trained_ai_function(ball_x, ball_y, paddle_y, ball_dx, ball_dy):
state = np.array([ball_x, ball_y, paddle_y, ball_dx, ball_dy], dtype=np.float32)
action_idx = trained_agent.get_action(state)
mapping = {0: "up", 1: "none", 2: "down"}
return mapping[action_idx]
</code>
<code>
# Save the trained agent.
#trained_agent.model.save("trained_pong_agent.h5")
</code>
<code>
# Load an agent I trained for 5000 episodes (training took 20 minutes) into the RLAgent class.
trained_agent = RLAgent()
trained_agent.model = keras.models.load_model("trained_pong_agent.h5")
# Wrap the trained agent into an AI function for gameplay.
def trained_ai_function(ball_x, ball_y, paddle_y, ball_dx, ball_dy):
state = np.array([ball_x, ball_y, paddle_y, ball_dx, ball_dy], dtype=np.float32)
action_idx = trained_agent.get_action(state)
mapping = {0: "up", 1: "none", 2: "down"}
return mapping[action_idx]
</code>
<code>
start_game(trained_ai_function)
</code>
<code>
#@markdown Run to visualize the full trained network
import matplotlib.pyplot as plt
import ipywidgets as widgets
from ipywidgets import interactive, HBox, VBox
from IPython.display import display
# --- Ensure that your trained model is built ---
# (This dummy call forces the model’s graph to be built.)
_dummy = np.zeros((1, 5), dtype=np.float32)
_ = trained_agent.model(_dummy)
# --- Determine the hidden dense layer ---
# Depending on your Keras version the explicit Input layer might not be in model.layers.
# In our RLAgent model, if the [Input, Dense, Dense] remains then:
# model.layers[0] is the InputLayer and model.layers[1] is Dense(8)
# but in Keras 3 the InputLayer is often omitted in model.layers.
#
# Check the number of layers and adjust accordingly:
if len(trained_agent.model.layers) == 2:
# Only the Dense layers are present.
hidden_layer = trained_agent.model.layers[0] # Dense(8)
elif len(trained_agent.model.layers) >= 3:
# If the Input layer is included.
hidden_layer = trained_agent.model.layers[1] # Dense(8)
else:
hidden_layer = trained_agent.model.layers[0] # Fallback
print("Extracting hidden layer:", hidden_layer.name)
def visualize_trained_network(ball_x, ball_y, paddle_y, ball_dx, ball_dy):
# Retrieve network weights.
# Assumed order: [kernel_hidden, bias_hidden, kernel_output, bias_output]
weights = trained_agent.model.get_weights()
w1, b1 = weights[0], weights[1]
final_w, final_b = weights[2], weights[3]
# --- Build a sub-model to get hidden activations ---
# Instead of using trained_agent.model.input (which may not be defined),
# we create a new input tensor and pass it to our extracted hidden layer.
input_tensor = keras.Input(shape=(5,))
hidden_output = hidden_layer(input_tensor)
hidden_model = keras.Model(inputs=input_tensor, outputs=hidden_output)
# Create the figure.
fig, ax = plt.subplots(figsize=(14, 8))
ax.set_xlim(-1, 7)
ax.set_ylim(-1, 5)
ax.axis('off')
ax.set_aspect('equal')
# Define node sizes.
node_radius_input = 0.2
node_radius_hidden = 0.15 # hidden nodes are drawn a bit smaller.
node_radius_output = 0.2
# Get the number of hidden neurons.
num_hidden = hidden_model.output_shape[-1]
# Define fixed positions for nodes.
layer_positions = {
"input": [(0, 4), (0, 3), (0, 2), (0, 1), (0, 0)], # five inputs
"hidden": [(3, i * (4/(num_hidden-1))) for i in range(num_hidden)],
"output": [(6, 2), (6, 1), (6, 0)] # three outputs
}
# Build the normalized state from current slider values.
state = np.array([ball_x, ball_y, paddle_y, ball_dx, ball_dy], dtype=np.float32)
norm_state = trained_agent._normalize_state(state).reshape(1, -1)
# Get full network prediction.
probs = trained_agent.model(norm_state, training=False).numpy().flatten()
# Compute hidden layer activations.
hidden_activations = hidden_model(norm_state, training=False).numpy().flatten()
max_act = hidden_activations.max() if hidden_activations.max() > 0 else 1.0
norm_activations = hidden_activations / max_act # Normalize to [0, 1]
# Draw input nodes.
for pos in layer_positions['input']:
circle = plt.Circle(pos, node_radius_input, color='lightyellow', ec='k', zorder=5)
ax.add_patch(circle)
# Draw hidden nodes using a blue colormap based on activation.
cmap = plt.get_cmap("Blues")
for i, pos in enumerate(layer_positions['hidden']):
activation = norm_activations[i]
face_color = cmap(0.3 + 0.7 * activation) # shift so that even low activations are visible.
circle = plt.Circle(pos, node_radius_hidden, color=face_color, ec='k', zorder=5)
ax.add_patch(circle)
# Optionally, display raw activation value.
ax.text(pos[0], pos[1], f"{hidden_activations[i]:.2f}",
fontsize=7, ha='center', va='center', zorder=6)
# Draw output nodes.
for pos in layer_positions['output']:
circle = plt.Circle(pos, node_radius_output, color='lightgreen', ec='k', zorder=5)
ax.add_patch(circle)
# Normalize connection line alpha by maximum absolute weight.
max_weight = max(np.abs(w1).max(), np.abs(final_w).max())
# Draw connections from input to hidden using w1.
for i, start_pos in enumerate(layer_positions['input']):
for j, end_pos in enumerate(layer_positions['hidden']):
weight = w1[i, j]
color = 'red' if weight < 0 else 'blue'
alpha = np.abs(weight) / max_weight
ax.plot([start_pos[0] + node_radius_input, end_pos[0] - node_radius_hidden],
[start_pos[1], end_pos[1]], color=color, alpha=alpha, lw=1)
# Draw connections from hidden to output using final_w.
for j, start_pos in enumerate(layer_positions['hidden']):
for k, end_pos in enumerate(layer_positions['output']):
weight = final_w[j, k]
color = 'red' if weight < 0 else 'blue'
alpha = np.abs(weight) / max_weight
ax.plot([start_pos[0] + node_radius_hidden, end_pos[0] - node_radius_output],
[start_pos[1], end_pos[1]], color=color, alpha=alpha, lw=1)
# Label the layers.
ax.text(0, 4.5, "Input Layer\n(Ball X, Ball Y,\nPaddle Y,\nBall DX, Ball DY)",
ha='center', va='bottom', fontsize=10)
ax.text(3, 4.5, f"Hidden Layer\n({num_hidden} Neurons)",
ha='center', va='bottom', fontsize=10)
ax.text(6, 4.5, "Output Layer\n(Up, Stay, Down)",
ha='center', va='bottom', fontsize=10)
# Display network prediction probabilities.
pred_text = (f"Network Prediction:\n"
f" Up: {probs[0]*100:.1f}%\n"
f" Stay: {probs[1]*100:.1f}%\n"
f" Down: {probs[2]*100:.1f}%")
ax.text(6, -0.5, pred_text, ha='center', va='top',
bbox=dict(facecolor='white', alpha=0.9), fontsize=12)
plt.title("Network Architecture and Hidden Neuron Activations", fontsize=14)
plt.tight_layout()
plt.show()
# --- Create slider widgets (ensure that WIDTH, HEIGHT, BALL_SPEED, PADDLE_HEIGHT are defined) ---
slider_ball_x = widgets.FloatSlider(min=0, max=WIDTH, value=WIDTH/2, description="Ball X",
layout=widgets.Layout(width='300px'))
slider_ball_y = widgets.FloatSlider(min=0, max=HEIGHT, value=HEIGHT/2, description="Ball Y",
layout=widgets.Layout(width='300px'))
slider_paddle_y = widgets.FloatSlider(min=0, max=HEIGHT-PADDLE_HEIGHT, value=160, description="Paddle Y",
layout=widgets.Layout(width='300px'))
slider_ball_dx = widgets.FloatSlider(min=-BALL_SPEED, max=BALL_SPEED, value=BALL_SPEED,
description="Ball DX", layout=widgets.Layout(width='300px'))
slider_ball_dy = widgets.FloatSlider(min=-BALL_SPEED, max=BALL_SPEED, value=BALL_SPEED,
description="Ball DY", layout=widgets.Layout(width='300px'))
sliders_box = VBox([slider_ball_x, slider_ball_y, slider_paddle_y, slider_ball_dx, slider_ball_dy])
# --- Create the interactive widget ---
interactive_plot = interactive(visualize_trained_network,
ball_x=slider_ball_x,
ball_y=slider_ball_y,
paddle_y=slider_paddle_y,
ball_dx=slider_ball_dx,
ball_dy=slider_ball_dy)
display(HBox([sliders_box, interactive_plot.children[-1]]))
</code>
# Task II: Can you think of a different reward function/mechanism?
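One possible direction is reward shaping. The sketch below is only an illustration (not part of the original notebook): it assumes the same Pong-style environment as above, with `HEIGHT` and `PADDLE_HEIGHT` being the constants used earlier and boolean flags `hit_ball` / `missed_ball` supplied by the game loop, and it adds a small dense term for keeping the paddle aligned with the ball on top of the usual sparse hit/miss rewards.
<code>
# Sketch of a shaped reward for Task II (illustrative only; assumes the
# environment constants HEIGHT and PADDLE_HEIGHT defined earlier, and that
# the game loop can report hit_ball / missed_ball flags).
def shaped_reward(ball_y, paddle_y, hit_ball, missed_ball):
    reward = 0.0
    if hit_ball:
        reward += 1.0   # sparse reward for returning the ball
    if missed_ball:
        reward -= 1.0   # sparse penalty for letting it pass
    # dense shaping term: small bonus for keeping the paddle centre
    # close to the ball's vertical position
    paddle_center = paddle_y + PADDLE_HEIGHT / 2
    reward += 0.1 * (1.0 - abs(ball_y - paddle_center) / HEIGHT)
    return reward
</code>
The shaping term gives the agent a signal to follow long before it ever hits the ball, which usually speeds up early learning, at the cost of slightly biasing the policy towards ball-tracking.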
# Resources:
### Backpropagation, step-by-step | DL3, 3Blue1Brown
https://www.youtube.com/watch?v=Ilg3gGewQ5U
### MIT 6.S191 (2024): Reinforcement Learning, Alexander Amini
https://www.youtube.com/watch?v=8JVRbHAVCws&t=1504s
### RLlib: Industry-Grade, Scalable Reinforcement Learning
https://docs.ray.io/en/latest/rllib/index.html
### Tensorflow Playground, beautiful interactive tool to understand Neural Networks
https://playground.tensorflow.org/
|
{
"filename": "RL_1.ipynb",
"repository": "JoseEliel/RL",
"query": "transformed_from_existing",
"size": 82926,
"sha": ""
}
|
# chapter04_1.ipynb
Repository: leelabcnbc/book-notes
## 4.1 Introduction
### pp. 103
Eq. (4.28) looks weird, as it seems that the Gaussian plays no role in the proof. It actually does: this is because $\log p(x)$ takes a quadratic form (check Eq. (4.24)), and in this theorem we assume that $q$ and $p$ match in terms of second-order moments.
## 4.2 Gaussian Discriminant Analysis
This section gives a very detailed overview of all generative classifiers using the Gaussian assumption. Basically, five things are discussed: 1) pure QDA; 2) pure LDA (here LDA has nothing to do with Fisher's LDA, nor with latent Dirichlet allocation); 3) the ML estimate; 4) regularized LDA; 5) feature selection.
### pp. 109
There are some problems and confusions with the SVD trick for Regularized LDA.
1. When it says using a prior of the form $\mathrm{IW}(\mathrm{diag}(\Sigma))$, we actually have to compute the MLE of $\Sigma$ first, and then use its diagonal values as parameters for the prior. This is very different from typical ridge regression, where we use some fixed diagonal matrix as the prior. Here, the values along the diagonal are different.
2. When evaluating the inverse of the regularized $\Sigma$, the inverse always exists, since Eq. (4.54) is the sum of a positive definite matrix and a positive semidefinite matrix (I assume all diagonal values of the MLE $\Sigma$ are positive); here the SVD trick is just a speed-up. Check pp. 659 (18.3.5) of **The Elements of Statistical Learning (2nd edition)** (abbreviated as ESLII below).
3. At the bottom of pp. 109, the author says we can recover the original mean. It's not because $V^TV=VV^T=I$; in fact only $V^TV=I$ holds, while here we seem to need the latter. Although $VV^T \ne I$, we are still fine because $\mu$ is the mean of all rows of $X$, and by this SVD every row is a linear combination of the columns of $V$. Thus $\mu$ falls into the subspace spanned by $V$, and $V\mu_z = VV^T \mu = V (V^TV)^{-1} V^T \mu= \mu$, since $ V (V^TV)^{-1} V^T$ is a projection (check Eq. 7 on pp. 210 of Strang's Introduction to Linear Algebra, 4th edition) and $\mu$ is already in that subspace.
4. On pp. 110, Eq. (4.60) and Eq. (4.61) need some justification. It's not obvious why taking the diagonal and multiplying by $V$ and $V^T$ on the two sides should commute. Actually, it's wrong. The correct way is to use the Woodbury inversion lemma.
* It should be obvious that (4.60) can't be equal to (4.54), since (4.60) is not full rank, yet (4.54) is.
<code>
import numpy as np
from scipy.linalg import svd, inv
rng_state = np.random.RandomState(seed=0)
</code>
<code>
n = 3
p = 10
X = rng_state.randn(n, p)
U, S, Vh = svd(X, full_matrices=False)
Z = U.dot(np.diag(S))
mean_X = np.mean(X, axis=0, keepdims=True).T
mean_Z = np.mean(Z, axis=0, keepdims=True).T
mean_Z_debug = Vh.dot(mean_X)
assert np.allclose(mean_Z, mean_Z_debug)
X_c = X - mean_X.T
Z_c = Z - mean_Z.T
cov_mle_X = 1/n*X_c.T.dot(X_c)
cov_mle_X_debug = np.cov(X, rowvar=False, ddof=0)
cov_mle_Z = 1/n*Z_c.T.dot(Z_c)
cov_mle_Z_decompose = 1/np.sqrt(n)*Z_c.T
cov_mle_Z_composed = cov_mle_Z_decompose.dot(cov_mle_Z_decompose.T)
cov_mle_Z_debug = np.cov(Z, rowvar=False, ddof=0)
cov_mle_X_by_Z = Vh.T.dot(cov_mle_Z).dot(Vh)
assert np.allclose(cov_mle_X, cov_mle_X_by_Z)
assert np.allclose(cov_mle_X, cov_mle_X_debug)
assert np.allclose(cov_mle_Z, cov_mle_Z_composed)
assert np.allclose(cov_mle_Z, cov_mle_Z_debug)
lam = 0.5 # regularization factor
cov_reg_X_correct = lam*np.diag(np.diag(cov_mle_X)) + (1-lam)*cov_mle_X
cov_reg_Z = lam*np.diag(np.diag(cov_mle_Z)) + (1-lam)*cov_mle_Z
cov_reg_X_trick = Vh.T.dot(cov_reg_Z).dot(Vh)
print('mean abs diff between cov_reg_X_correct and one obtained using trick', abs(cov_reg_X_correct - cov_reg_X_trick).mean())
cov_reg_X_correct_inv = inv(cov_reg_X_correct)
# you can't since Eq. (4.60) matrix is not full rank, yet (4.54) is.
# print('mean abs diff between inv of cov_reg_X_correct and one obtained using trick', abs(cov_reg_X_correct_inv - inv(cov_reg_X_trick)).mean())
# let's do it the correct way.
# using woodbury formula, where A is lam*np.diag(np.diag(cov_mle_X)), D^{-1} is (1-lam)*I, and B = V*cov_mle_Z_decompose,
# C = cov_mle_Z_decompose.T*Vh. It's important to use this D^{-1}, rather than cov_mle_Z, since cov_mle_Z is not invertible.
cov_reg_X_part1 = lam*np.diag(np.diag(cov_mle_X))
Ainv = inv(cov_reg_X_part1)
B = Vh.T.dot(cov_mle_Z_decompose)
C = cov_mle_Z_decompose.T.dot(Vh)
Dinv = (1-lam)*np.eye(n)
new_inv = inv(inv(Dinv) + C.dot(Ainv).dot(B))
assert new_inv.shape == (n,n)
cov_reg_inv_by_woodbury = Ainv - Ainv.dot(B).dot(new_inv).dot(C).dot(Ainv)
assert cov_reg_inv_by_woodbury.shape == cov_reg_X_correct_inv.shape
assert np.allclose(cov_reg_inv_by_woodbury, cov_reg_X_correct_inv, atol=1e-10)
beta_naive = cov_reg_X_correct_inv.dot(mean_X)
beta_trick = Vh.T.dot(inv(cov_reg_Z)).dot(mean_Z)
beta_woodbury = cov_reg_inv_by_woodbury.dot(mean_X)
assert np.allclose(beta_naive, beta_woodbury) and beta_naive.shape == beta_woodbury.shape == beta_trick.shape
print('difference between the one given by Eq. 4.63 and the one using naive approach\n', beta_trick-beta_naive)
print('difference between the one given by woodbury approach and the one using naive approach\n', beta_woodbury-beta_naive)
</code>
In pp. 660 of ESLII, the author says that the SVD trick can be applied to any linear model with quadratic penalties. But the details are omitted. I guess this is more or less like the kernel trick, where we all know the general idea, but the derivation for each particular case needs some work, especially when the parameterization of the model is not good (here, for regularized LDA, we have redundant parameters, such as symmetric terms of the covariance matrix or precision matrix). This page also gives some basic reason why this works.
> Geometrically, we are rotating the features to a coordinate system in which all but the first $N$ coordinates are zero. Such rotations are allowed since the quadratic penalty is invariant under rotations, and linear models are equivariant.
* "quadratic penalty is invariant under rotations" basically means that the square of norm (thus quadratic) of some penalty is not changed under rotation, or more generally, orthogonal transformation.
* "linear models are equivariant" sounds reasonble, although I don't understand what equivariant means exactly.
Essentially, all these tricks are orthogonal transformations, exploiting invariance properties of quadratic regularization. Check [Efficient quadratic regularization for expression arrays](http://dx.doi.org/10.1093/biostatistics/kxh010) for details. It looks like the theorem is easy, and more understandable than the presentation in MLAPP, but I guess formulating regularized LDA in the framework of the theorem needs some work, such as flattening the estimated covariance matrix into a vector, etc.
### pp. 111
4.2.8 talks about the shrunken centroids classifier. If you check the actual code `shrunkenCentroidsFit` in the code for MLAPP, or some equivalent code in scikit-learn, you will see that it's computed using soft thresholding, and no actual optimization criterion is given. pp. 651 (18.2) of ESLII describes the algorithm in more detail, and mentions that it can be formulated as a lasso-like problem (Ex. 18.2). But the procedure is not exactly formulated as a lasso. Actually, we can add some small (but boring) adjustments to the lasso formulation in Ex. 18.2 and its solution will be more like the solution obtained via the actual algorithm, as shown in Eq. 5 of [Improved shrunken centroid classifiers for high-dimensional class-imbalanced data](http://dx.doi.org/10.1186/1471-2105-14-64).
*About Eq. (18.4) of ESLII:* I also checked the original shrunken centroids classifier paper [Diagnosis of multiple cancer types by shrunken centroids of gene expression](http://dx.doi.org/10.1073/pnas.082099299), apart from $s_0$ which I guess is just some parameter to make the algorithm work in that particular context, the reason we set $m_k$ to be $\sqrt{1/n_k - 1/n}$ (the paper is wrong; the package <http://statweb.stanford.edu/~tibs/PAM/Rdist/index.html> and the ESLII book all use minus instead of plus, although I think when $N$ is big, it won't matter; sklearn uses the wrong equation, see <https://github.com/scikit-learn/scikit-learn/issues/7679>) is shown as follows.
1. the paper assumes that we have several classes, and all these classes share the same diagonal covariance matrix. Thus (Eq. 2 of paper) we can estimate $s_i^2$ for the $i$th feature, by pooling all in-class variance. I believe $s_i^2$ is unbiased, and that $N-k$ is for correction. Thus, the estimated standard deviation is $s_i$.
2. The paper wants $d_{ik}$ to be some normalized measure of deviation of $i$th feature from the mean, for class $k$. To normalize it, we should make it zero mean and unit variance.
* $\overline{x}_i$ is the overall mean of the $i$th feature over all classes, and is used to subtract the mean from $\overline{x}_{ik}$ (well, I think $\overline{x}_i$ is the right mean only when all classes are balanced; but let's assume this is true).
* $m_k s_i$ is the estimated standard deviation of the numerator ($s_0$ is just a numerical hack). To see this, we first notice that all sample points $x_{ij}$ (no matter what class $j$ is in) have estimated variance $s_i^2$ (since they come from some distribution with variance estimated as $s_i^2$, regardless of the class). Then we have
$$
\begin{align}
\overline{x}_{ik} - \overline{x}_i &= \frac{\sum_{j \in C_k} x_{ij}}{n_k} - \frac{\sum_{q=1}^n x_{iq}}{n} \\
&= \frac{n \sum_{j \in C_k} x_{ij}}{n n_k} - \frac{n_k \sum_{q=1}^n x_{iq}}{n n_k} \\
&= \frac{1}{n n_k} [ (n - n_k) \sum_{j \in C_k} x_{ij} + n_k \sum_{j \notin C_k} x_{ij} ] \\
&= \frac{n - n_k}{n n_k} \sum_{j \in C_k} x_{ij} + \frac{1}{n}\sum_{j \notin C_k} x_{ij}
\end{align}
$$
* Then, basically, we have $n_k$ variables with coefficient $(n - n_k)/(n n_k)$ and $n-n_k$ variables with coefficient $-1/n$. Since the variables are independent, the variance of this weighted sum is the sum of the squared coefficients times $s_i^2$, so we have
$$
\begin{align}
(\frac{n - n_k}{n n_k})^2 n_k s_i^2 + (\frac{1}{n})^2 (n-n_k) s_i^2 & = (\frac{(n - n_k)^2}{n^2 n_k} + \frac{n-n_k}{n^2}) s_i^2 \\
& = (\frac{(n - n_k)^2}{n^2 n_k} + \frac{n n_k-n_k^2}{n^2 n_k}) s_i^2 \\
& = (\frac{n^2 + n_k^2 - 2n n_k}{n^2 n_k} + \frac{n n_k-n_k^2}{n^2 n_k}) s_i^2 \\
& = (\frac{n^2 - n n_k}{n^2 n_k}) s_i^2 \\
& = (\frac{n - n_k}{n n_k}) s_i^2 \\
& = (\frac{1}{n_k} - \frac{1}{n}) s_i^2
\end{align}
$$
* take the square root and then we get the result.
3. The fact that we can solve a lasso by soft thresholding is consistent with my memory from a convex optimization course, where proximal methods involving the L1 norm are solved by soft thresholding. Basically, if the lasso decomposes over individual variables, then each coordinate can be solved in one step via soft thresholding (a small numerical check follows below).
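As a quick sanity check of this last point (my own addition, not from the book or the MLAPP code): the scalar lasso $\min_w \frac{1}{2}(w-z)^2 + \lambda |w|$ is solved exactly by soft thresholding, $w^* = \mathrm{sign}(z)\max(|z|-\lambda, 0)$.
<code>
import numpy as np
from scipy.optimize import minimize_scalar

def soft_threshold(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

rng_check = np.random.RandomState(1)
lam = 0.7
for z in 2 * rng_check.randn(5):
    # brute-force the scalar lasso objective and compare with soft thresholding
    obj = lambda w: 0.5 * (w - z) ** 2 + lam * abs(w)
    w_numeric = minimize_scalar(obj, bounds=(-10, 10), method='bounded').x
    assert np.isclose(soft_threshold(z, lam), w_numeric, atol=1e-4)
print('soft thresholding matches the numerical minimizer')
</code>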
---
## 4.3 Inference in jointly Gaussian distributions
### pp. 115
I like this formulation of interpolation. From Eq. (4.76), we know the relationship between $\epsilon$ and the $x$ in the prior. Then (4.78) is basically saying that $\epsilon$ has the form $N(0, (1/\lambda) I)$, and then replaces $\epsilon$ by $Lx$. Although this prior itself is problematic, since $L$ doesn't have enough rank, in practice it works as long as you have enough (at least 2) observed data points. Notice that Eq. (4.82) on the next page is wrong, and I think $L_1^{-1}$ should be its pseudoinverse. It's very interesting to compare this interpolation with Fig. 4.15 on pp. 127. This highlights the importance of our noise model.
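Below is a small numerical check of the rank remark (my own addition). Assuming $L$ is the $(D-2)\times D$ second-difference matrix with rows $[-1, 2, -1]$, as in the book's interpolation example, $L$ has rank $D-2$, so the implied precision matrix $L^T L$ is singular and the prior is improper on its own.
<code>
import numpy as np

D = 10
# second-difference matrix L used for the interpolation prior
L = np.zeros((D - 2, D))
for i in range(D - 2):
    L[i, i:i + 3] = [-1, 2, -1]
print(np.linalg.matrix_rank(L))         # D - 2: null space spanned by constant and linear vectors
print(np.linalg.matrix_rank(L.T @ L))   # also D - 2, so the implied precision matrix is singular
</code>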
---
## 4.6 Inferring the parameters of an MVN
### pp. 139
4.6.3.8 and 4.6.3.9 mention some equivalence between the frequentist confidence interval and the corresponding Bayesian approach. BTW, I really don't understand why the author mentions the paired test at all in 4.6.3.8, since he simply works on the differences between paired samples, which is just a one-sample problem as well (a quick numerical illustration follows).
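A quick numerical illustration of that last point (my own addition): scipy's paired t-test on $(x, y)$ gives exactly the same statistic and p-value as a one-sample t-test on the differences $x - y$.
<code>
import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
x = rng.randn(30)
y = x + 0.3 + 0.5 * rng.randn(30)

t_paired, p_paired = stats.ttest_rel(x, y)
t_onesample, p_onesample = stats.ttest_1samp(x - y, 0.0)
# the paired test is just a one-sample test on the differences
assert np.isclose(t_paired, t_onesample) and np.isclose(p_paired, p_onesample)
print(t_paired, p_paired)
</code>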
|
{
"filename": "chapter04_1.ipynb",
"repository": "leelabcnbc/book-notes",
"query": "transformed_from_existing",
"size": 15947,
"sha": ""
}
|
# Hands_on_8_1.ipynb
Repository: osbama/Phys437
<code>
!pip install pennylane
</code>
# Symmetry-invariant quantum machine learning force fields
Symmetries are ubiquitous in physics. From condensed matter to particle
physics, they have helped us make connections and formulate new
theories. In the context of machine learning, inductive bias has proven
to be successful in the presence of symmetries. This framework, known as
geometric deep learning, often enjoys better generalization and
trainability. In this demo, we will learn how to use geometric quantum
machine learning to drive molecular dynamics as introduced in recent
research. We will take as an example a triatomic molecule of $H_2O.$
## Introduction
First, let's introduce the overall playground of this work: **molecular
dynamics (MD)**. MD is an essential computational simulation method to
analyze the dynamics of atoms or molecules in a chemical system. The
simulations can be used to obtain macroscopic thermodynamic properties
of ergodic systems. Within the simulation, Newton\'s equations of motion
are numerically integrated. Therefore, it is crucial to have access to
the forces acting on the constituents of the system or, equivalently,
the potential energy surface (PES), from which we can obtain the atomic
forces. Previous research presented variational quantum learning
models (VQLMs) that were able to learn the potential energy and atomic
forces of a selection of molecules from *ab initio* reference data.
The description of molecules can be greatly simplified by considering
inherent **symmetries**. For example, actions such as translation,
rotation, or the interchange of identical atoms or molecules leave the
system unchanged. To achieve better performance, it is thus desirable to
include this information in our model. To do so, the data input can
simply be made invariant itself, e.g., by making use of so-called
symmetry functions--hence yielding invariant energy predictions.
In this demo, we instead take the high road and design an intrinsically
symmetry-aware model based on equivariant quantum neural networks.
Equivariant machine learning models have demonstrated many advantages
such as being more robust to noisy data and enjoying better
generalization capabilities. Moreover, this has the additional advantage
of relaxing the need for data preprocessing, as the raw Cartesian
coordinates can be given directly as inputs to the learning model.
An overview of the workflow is shown in the figure below. First, the
relevant symmetries are identified and used to build the quantum machine
model. We then train it on the PES of some molecule, e.g. $H_2O,$ and
finally obtain the forces by computing the gradient of the learned PES.

In order to incorporate symmetries into machine learning models, we need
a few concepts from group theory. A formal course on the subject is out
of the scope of the present document, which is why we have the next sections on
equivariant graph
embedding
and geometric quantum machine
learning .
# Introduction to Geometric Quantum Machine Learning
# Introduction
Symmetries are at the heart of physics. Indeed in condensed matter and
particle physics we often define a thing simply by the symmetries it
adheres to. What does symmetry mean for those in machine learning? In
this context the ambition is straightforward --- it is a means to reduce
the parameter space and improve the trained model's ability to
successfully label unseen data, i.e., its ability to generalise.
Suppose we have a learning task and the data we are learning from has an
underlying symmetry. For example, consider a game of Noughts and Crosses
(aka Tic-tac-toe): if we win a game, we would have won it if the board
was rotated or flipped along any of the lines of symmetry. Now if we
want to train an algorithm to spot the outcome of these games, we can
either ignore the existence of this symmetry or we can somehow include
it. The advantage of paying attention to the symmetry is it identifies
multiple configurations of the board as 'the same thing' as far as the
symmetry is concerned. This means we can reduce our parameter space, and
so the amount of data our algorithm must sift through is immediately
reduced. Along the way, the fact that our learning model must encode a
symmetry that actually exists in the system we are trying to represent
naturally encourages our results to be more generalisable. The encoding
of symmetries into our learning models is where the term *equivariance*
will appear. We will see that demanding that certain symmetries are
included in our models means that the mappings that make up our
algorithms must be such that we could transform our input data with
respect to a certain symmetry, then apply our mappings, and this would
be the same as applying the mappings and then transforming the output
data with the same symmetry. This is the technical property that gives
us the name "equivariant learning".
In classical machine learning, this area is often referred to as
geometric deep learning (GDL) due to the traditional association of
symmetry to the world of geometry, and the fact that these
considerations usually focus on deep neural networks (see the geometric deep learning literature for a broad
introduction). We will refer to the quantum computing version of this as
*quantum geometric machine learning* (QGML).
# Representation theory in circuits
The first thing to discuss is how do we work with symmetries in the
first place? The answer lies in the world of group representation
theory.
First, let's define what we mean by a group:
**Definition**: A group is a set $G$ together with a binary operation on
$G$, here denoted $\circ,$ that combines any two elements $a$ and $b$ to
form an element of $G,$ denoted $a \circ b,$ such that the following
three requirements, known as group axioms, are satisfied as follows:
1. **Associativity**: For all $a, b, c$ in $G,$ one has
$(a \circ b) \circ c=a \circ (b \circ c).$
2. **Identity element**: There exists an element $e$ in $G$ such that, for every $a$ in $G,$ one has $e \circ a=a$ and $a \circ e=a.$ Such an element is unique. It is called the identity element of the group.
3. **Inverse element**: For each $a$ in $G,$ there exists an element $b$ in $G$ such that $a \circ b=e$ and $b \circ a=e,$ where $e$ is the identity element. For each $a,$ the element $b$ is unique: it is called the inverse of $a$ and is commonly denoted $a^{-1}.$
With groups defined, we are in a position to articulate what a
representation is: Let $\varphi$ be a map sending $g$ in group $G$ to a
linear map $\varphi(g): V \rightarrow V,$ for some vector space $V,$
which satisfies
$$\varphi\left(g_{1} g_{2}\right)=\varphi\left(g_{1}\right) \circ \varphi\left(g_{2}\right) \quad \text { for all } g_{1}, g_{2} \in G.$$
The idea here is that just as elements in a group act on each other to
reach further elements, i.e., $g\circ h = k,$ a representation sends us
to a mapping acting on a vector space such that
$\varphi(g)\circ \varphi(h) = \varphi(k).$ In this way we are
representing the structure of the group as a linear map. For a
representation, our mapping must send us to the general linear group
$GL(n)$ (the space of invertible $n \times n$ matrices with matrix
multiplication as the group multiplication). Note how this is both a
group, and by virtue of being a collection of invertible matrices, also
a set of linear maps (they're all invertible matrices that can act on
row vectors). Fundamentally, representation theory is based on the
prosaic observation that linear algebra is easy and group theory is
abstract. So what if we can study groups via linear maps?
Now, due to the importance of unitarity in quantum mechanics, we are
particularly interested in the unitary representations: representations
where the linear maps are unitary matrices. If we can identify these
then we will have a way to naturally encode groups in quantum circuits
(which are mostly made up of unitary gates).

How does all this relate to symmetries? Well, a large class of
symmetries can be characterised as a group, where all the elements of
the group leave some space we are considering unchanged. Let's consider
an example: the symmetries of a sphere. Now when we think of this
symmetry we probably think something along the lines of "it's the same
no matter how we rotate it, or flip it left to right, etc". There is
this idea of being invariant under some operation. We also have the idea
of being able to undo these actions: if we rotate one way, we can rotate
it back. If we flip the sphere right-to-left we can flip it
left-to-right to get back to where we started (notice too all these
inverses are unique). Trivially we can also do nothing. What exactly are
we describing here? We have elements that correspond to an action on a
sphere that can be inverted and for which there exists an identity. It
is also trivially the case here that if we consider three operations a,
b, c from the set of rotations and reflections of the sphere, that if we
combine two of them together then
$a\circ (b \circ c) = (a\circ b) \circ c.$ The operations are
associative. These features turn out to literally define a group!
As we've seen, the group in itself is a very abstract creature; this is
why we look to its representations. The group labels what symmetries we
care about, they tell us the mappings that our system is invariant
under, and the unitary representations show us how those symmetries look
on a particular space of unitary matrices. If we want to encode the
structure of the symmetries in a quantum circuit we must restrict our
gates to being unitary representations of the group.
There remains one question: *what is equivariance?* With our newfound
knowledge of group representation theory we are ready to tackle this.
Let $G$ be our group, and $V$ and $W,$ with elements $v$ and $w$
respectively, be vector spaces over some field $F$ with a map $f$
between them. Suppose we have representations
$\varphi: G \rightarrow GL(V)$ and $\psi: G \rightarrow GL(W).$
Furthermore, let's write $\varphi_g$ for the representation of $g$ as a
linear map on $V$ and $\psi_g$ as the same group element represented as
a linear map on $W$ respectively. We call $f$ *equivariant* if
$$f(\varphi_g(v))=\psi_g(f(v)) \quad \text { for all } g\in G.$$
The importance of such a map in machine learning is that if, for
example, our neural network layers are equivariant maps then two inputs
that are related by some intrinsic symmetry (maybe they are reflections)
preserve this information in the outputs.
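Here is a minimal classical sketch of an equivariant map (our own toy example, not from the original text): $f$ sends a $3\times 3$ board to its vector of row sums, the group element acts on the input as a $\pi$ rotation of the board, and on the output it acts by reversing the order of the row sums.
<code>
import numpy as np

def f(board):
    return board.sum(axis=1)       # row sums of the 3x3 board

def g_input(board):
    return np.rot90(board, 2)      # pi rotation of the grid

def g_output(row_sums):
    return row_sums[::-1]          # the rows swap order under the rotation

board = np.arange(9).reshape(3, 3)
# equivariance: transforming then mapping equals mapping then transforming
assert np.allclose(f(g_input(board)), g_output(f(board)))
</code>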
Consider the following figure for example. What we see is a board with a
cross in a certain square on the left and some numerical encoding of
this on the right, where the 1 is where the X is in the number grid. We
present an equivariant mapping between these two spaces with respect to
a group action that is a rotation or a swap (here a $\pi$ rotation). We
can either apply a group action to the original grid and then map to the
number grid, or we could map to the number grid and then apply the group
action. Equivariance demands that the result of either of these
procedures should be the same.

Given the vast amount of input data required to train a neural network
the principle that one can pre-encode known symmetry structures into the
network allows us to learn better and faster. Indeed it is the reason
for the success of convolutional neural networks (CNNs) for image
analysis, where it is known they are equivariant with respect to
translations. They naturally encode the idea that a picture of a dog is
symmetrically related to the same picture slid to the left by n pixels,
and they do this by having neural network layers that are equivariant
maps. With our focus on unitary representations (and so quantum
circuits) we are looking to extend this idea to quantum machine
learning.
## Noughts and Crosses
Let's look at the game of noughts and crosses. Two
players take turns to place a O or an X, depending on which player they
are, in a 3x3 grid. The aim is to get three of your symbols in a row,
column, or diagonal. As this is not always possible depending on the
choices of the players, there could be a draw. Our learning task is to
take a set of completed games labelled with their outcomes and teach the
algorithm to identify these correctly.
This board of nine elements has the symmetry of the square, also known
as the *dihedral group*. This means it is symmetric under
$\frac{\pi}{2}$ rotations and flips about the lines of symmetry of a
square (vertical, horizontal, and both diagonals).

**The question is, how do we encode this in our QML problem?**
First, let us encode this problem classically. We will consider a
nine-element vector $v,$ each element of which identifies a square of
the board. The entries themselves can be $+1,$ $0,$ or $-1,$ representing a
nought, no symbol, or a cross. The label is one-hot encoded in a vector
$y=(y_O,y_- , y_X)$ with $+1$ in the correct label and $-1$ in the
others. For instance (-1,-1,1) would represent an X in the relevant
position.
To create the quantum model let us take nine qubits and let them
represent squares of our board. We\'ll initialise them all as
$|0\rangle,$ which we note leaves the board invariant under the
symmetries of the problem (flip and rotate all you want, it\'s still
going to be zeroes whatever your mapping). We will then look to apply
single qubit $R_x(\theta)$ rotations on individual qubits, encoding each
of the possibilities in the board squares at an angle of
$\frac{2\pi}{3}$ from each other. For our parameterised gates we will
have a single-qubit $R_x(\theta_1)$ and $R_y(\theta_2)$ rotation at each
point. We will then use $CR_y(\theta_3)$ for two-qubit entangling gates.
This implies that, for each encoding, crudely, we'll need 18
single-qubit rotation parameters and $\binom{9}{2}=36$ two-qubit gate
rotations. Let's see how, by using symmetries, we can reduce this.

The indexing of our game board.
The secret will be to encode the symmetries into the gate set so the
observables we are interested in inherently respect the symmetries. How
do we do this? We need to select the collections of gates that commute
with the symmetries. In general, we can use the twirling formula for
this:
Tip:
Let $\mathcal{S}$ be the group that encodes our symmetries and $U$ be a
unitary representation of $\mathcal{S}.$ Then,
$$\mathcal{T}_{U}[X]=\frac{1}{|\mathcal{S}|} \sum_{s \in \mathcal{S}} U(s) X U(s)^{\dagger}$$
defines a projector onto the set of operators commuting with all
elements of the representation, i.e.,
$\left[\mathcal{T}_{U}[X], U(s)\right]=$ 0 for all $X$ and
$s \in \mathcal{S}.$
The twirling process applied to an arbitrary unitary will give us a new
unitary that commutes with the group as we require. We remember that
unitary gates typically have the form $W = \exp(-i\theta H),$ where $H$
is a Hermitian matrix called a *generator*, and $\theta$ may be fixed or
left as a free parameter. A recipe for creating a unitary that commutes
with our symmetries is to *twirl the generator of the gate*, i.e., we
move from the gate $W = \exp(-i\theta H)$ to the gate
$W' = \exp(-i\theta\mathcal{T}_U[H]).$ When each term in the twirling
formula acts on different qubits, then this unitary would further
simplify to
$$W' = \bigotimes_{s\in\mathcal{S}}U(s)\exp(-i\tfrac{\theta}{\vert\mathcal{S}\vert})U(s)^\dagger.$$
For simplicity, we can absorb the normalization factor
$\vert\mathcal{S}\vert$ into the free parameter $\theta.$
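Before applying this to the board, here is a small numerical sketch of the twirling formula (added for illustration; it is not part of the original demo). We take a toy two-qubit system whose only symmetry is swapping the two qubits, twirl the generator $X \otimes I$, and check that the result commutes with every element of the representation.
<code>
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)
group = [np.eye(4), SWAP]          # unitary representation of the swap symmetry

H = np.kron(X, I2)                 # generator acting on the first qubit only

# T_U[H] = (1/|S|) sum_s U(s) H U(s)^dagger
H_twirled = sum(U @ H @ U.conj().T for U in group) / len(group)

# the twirled generator commutes with every element of the representation
for U in group:
    assert np.allclose(H_twirled @ U, U @ H_twirled)
print(H_twirled)                   # (X tensor I + I tensor X) / 2
</code>
The twirled generator is the symmetrised $\tfrac{1}{2}(X\otimes I + I\otimes X)$, which mirrors the pattern used below of applying the same rotation angle to every qubit in an orbit.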
So let's look again at our choice of gates: single-qubit $R_x(\theta)$
and $R_y(\theta)$ rotations, and entangling two-qubit $CR_y(\phi)$
gates. What will we get by twirling these?
In this particular instance we can see the action of the twirling
operation geometrically as the symmetries involved are all permutations.
Let's consider the $R_x$ rotation acting on one qubit. Now if this
qubit is in the centre location on the grid, then we can flip around any
symmetry axis we like, and this operation leaves the qubit invariant, so
we've identified one equivariant gate immediately. If the qubit is on
the corners, then the flipping will send this qubit rotation to each of
the other corners. Similarly, if a qubit is on the central edge then the
rotation gate will be sent round the other edges. So we can see that the
twirling operation is a sum over all the possible outcomes of performing
the symmetry action (the sum over the symmetry group actions). Having
done this we can see that for a single-qubit rotation the invariant maps
are rotations on the central qubit, at all the corners, and at all the
central edges (when their rotation angles are fixed to be the same).
As an example consider the following figure, where we take a $R_x$ gate
in the corner and then apply all the symmetries of a square. The result
of this twirling leads us to have the same gate at all the corners.

For entangling gates the situation is similar. There are three invariant
classes, the centre entangled with all corners, with all edges, and the
edges paired in a ring.
The prediction of a label is obtained via a one-hot-encoding by
measuring the expectation values of three invariant observables:
$$O_{-}=Z_{\text {middle }}=Z_{4}$$
$$O_{\circ}=\frac{1}{4} \sum_{i \in \text { corners }} Z_{i}=\frac{1}{4}\left[Z_{0}+Z_{2}+Z_{6}+Z_{8}\right]$$
$$O_{\times}=\frac{1}{4} \sum_{i \in \text { edges }} Z_{i}=\frac{1}{4}\left[Z_{1}+Z_{3}+Z_{5}+Z_{7}\right]$$
$$\hat{\boldsymbol{y}}=\left(\left\langle O_{\circ}\right\rangle,\left\langle O_{-}\right\rangle,\left\langle O_{\times}\right\rangle\right)$$
This is the quantum encoding of the symmetries into a learning problem.
A prediction for a given data point will be obtained by selecting the
class for which the observed expectation value is the largest.
Now that we have a specific encoding and have decided on our observables
we need to choose a suitable cost function to optimise. We will use an
$l_2$ loss function acting on pairs of games and labels $D={(g,y)},$
where $D$ is our dataset.
Let's now implement this!
First let's generate some games. Here we are creating a small program
that will play Noughts and Crosses against itself in a random fashion.
On completion, it spits out the winner and the winning board, with
noughts as +1, draw as 0, and crosses as -1. There are 26,830 different
possible games but we will only sample a few hundred.
<code>
import torch
import random
# Fix seeds for reproducibility
torch.backends.cudnn.deterministic = True
torch.manual_seed(16)
random.seed(16)
# create an empty board
def create_board():
return torch.tensor([[0, 0, 0], [0, 0, 0], [0, 0, 0]])
# Check for empty places on board
def possibilities(board):
l = []
for i in range(len(board)):
for j in range(3):
if board[i, j] == 0:
l.append((i, j))
return l
# Select a random place for the player
def random_place(board, player):
selection = possibilities(board)
current_loc = random.choice(selection)
board[current_loc] = player
return board
# Check if there is a winner by having 3 in a row
def row_win(board, player):
for x in range(3):
lista = []
win = True
for y in range(3):
lista.append(board[x, y])
if board[x, y] != player:
win = False
if win:
break
return win
# Check if there is a winner by having 3 in a column
def col_win(board, player):
for x in range(3):
win = True
for y in range(3):
if board[y, x] != player:
win = False
if win:
break
return win
# Check if there is a winner by having 3 along a diagonal
def diag_win(board, player):
win1 = True
win2 = True
for x, y in [(0, 0), (1, 1), (2, 2)]:
if board[x, y] != player:
win1 = False
for x, y in [(0, 2), (1, 1), (2, 0)]:
if board[x, y] != player:
win2 = False
return win1 or win2
# Check if the win conditions have been met or if a draw has occurred
def evaluate_game(board):
winner = None
for player in [1, -1]:
if row_win(board, player) or col_win(board, player) or diag_win(board, player):
winner = player
if torch.all(board != 0) and winner == None:
winner = 0
return winner
# Main function to start the game
def play_game():
board, winner, counter = create_board(), None, 1
while winner == None:
for player in [1, -1]:
board = random_place(board, player)
counter += 1
winner = evaluate_game(board)
if winner != None:
break
return [board.flatten(), winner]
def create_dataset(size_for_each_winner):
game_d = {-1: [], 0: [], 1: []}
while min([len(v) for k, v in game_d.items()]) < size_for_each_winner:
board, winner = play_game()
if len(game_d[winner]) < size_for_each_winner:
game_d[winner].append(board)
res = []
for winner, boards in game_d.items():
res += [(board, winner) for board in boards]
return res
NUM_TRAINING = 450
NUM_VALIDATION = 600
# Create datasets but with even numbers of each outcome
with torch.no_grad():
dataset = create_dataset(NUM_TRAINING // 3)
dataset_val = create_dataset(NUM_VALIDATION // 3)
</code>
Now let's create the relevant circuit expectation values that respect
the symmetry classes we defined over the single-site and two-site
measurements.
<code>
import pennylane as qml
import matplotlib.pyplot as plt
# Set up a nine-qubit system
dev = qml.device("default.qubit", wires=9)
ob_center = qml.PauliZ(4)
ob_corner = (qml.PauliZ(0) + qml.PauliZ(2) + qml.PauliZ(6) + qml.PauliZ(8)) * (1 / 4)
ob_edge = (qml.PauliZ(1) + qml.PauliZ(3) + qml.PauliZ(5) + qml.PauliZ(7)) * (1 / 4)
# Now let's encode the data in the following qubit models, first with symmetry
@qml.qnode(dev)
def circuit(x, p):
qml.RX(x[0], wires=0)
qml.RX(x[1], wires=1)
qml.RX(x[2], wires=2)
qml.RX(x[3], wires=3)
qml.RX(x[4], wires=4)
qml.RX(x[5], wires=5)
qml.RX(x[6], wires=6)
qml.RX(x[7], wires=7)
qml.RX(x[8], wires=8)
# Centre single-qubit rotation
qml.RX(p[0], wires=4)
qml.RY(p[1], wires=4)
# Corner single-qubit rotation
qml.RX(p[2], wires=0)
qml.RX(p[2], wires=2)
qml.RX(p[2], wires=6)
qml.RX(p[2], wires=8)
qml.RY(p[3], wires=0)
qml.RY(p[3], wires=2)
qml.RY(p[3], wires=6)
qml.RY(p[3], wires=8)
# Edge single-qubit rotation
qml.RX(p[4], wires=1)
qml.RX(p[4], wires=3)
qml.RX(p[4], wires=5)
qml.RX(p[4], wires=7)
qml.RY(p[5], wires=1)
qml.RY(p[5], wires=3)
qml.RY(p[5], wires=5)
qml.RY(p[5], wires=7)
# Entangling two-qubit gates
# circling the edge of the board
qml.CRY(p[6], wires=[0, 1])
qml.CRY(p[6], wires=[2, 1])
qml.CRY(p[6], wires=[2, 5])
qml.CRY(p[6], wires=[8, 5])
qml.CRY(p[6], wires=[8, 7])
qml.CRY(p[6], wires=[6, 7])
qml.CRY(p[6], wires=[6, 3])
qml.CRY(p[6], wires=[0, 3])
# To the corners from the centre
qml.CRY(p[7], wires=[4, 0])
qml.CRY(p[7], wires=[4, 2])
qml.CRY(p[7], wires=[4, 6])
qml.CRY(p[7], wires=[4, 8])
# To the centre from the edges
qml.CRY(p[8], wires=[1, 4])
qml.CRY(p[8], wires=[3, 4])
qml.CRY(p[8], wires=[5, 4])
qml.CRY(p[8], wires=[7, 4])
return [qml.expval(ob_center), qml.expval(ob_corner), qml.expval(ob_edge)]
fig, ax = qml.draw_mpl(circuit)([0] * 9, 18 * [0])
</code>
Let's also look at the same series of gates but this time they are
applied independently from one another, so we won't be preserving the
symmetries with our gate operations. Practically this also means more
parameters, as previously groups of gates were updated together.
<code>
@qml.qnode(dev)
def circuit_no_sym(x, p):
qml.RX(x[0], wires=0)
qml.RX(x[1], wires=1)
qml.RX(x[2], wires=2)
qml.RX(x[3], wires=3)
qml.RX(x[4], wires=4)
qml.RX(x[5], wires=5)
qml.RX(x[6], wires=6)
qml.RX(x[7], wires=7)
qml.RX(x[8], wires=8)
# Centre single-qubit rotation
qml.RX(p[0], wires=4)
qml.RY(p[1], wires=4)
# Note in this circuit the parameters aren't all the same.
# Previously they were identical to ensure they were applied
# as one combined gate. The fact they can all vary independently
# here means we aren't respecting the symmetry.
# Corner single-qubit rotation
qml.RX(p[2], wires=0)
qml.RX(p[3], wires=2)
qml.RX(p[4], wires=6)
qml.RX(p[5], wires=8)
qml.RY(p[6], wires=0)
qml.RY(p[7], wires=2)
qml.RY(p[8], wires=6)
qml.RY(p[9], wires=8)
# Edge single-qubit rotation
qml.RX(p[10], wires=1)
qml.RX(p[11], wires=3)
qml.RX(p[12], wires=5)
qml.RX(p[13], wires=7)
qml.RY(p[14], wires=1)
qml.RY(p[15], wires=3)
qml.RY(p[16], wires=5)
qml.RY(p[17], wires=7)
# Entangling two-qubit gates
# circling the edge of the board
qml.CRY(p[18], wires=[0, 1])
qml.CRY(p[19], wires=[2, 1])
qml.CRY(p[20], wires=[2, 5])
qml.CRY(p[21], wires=[8, 5])
qml.CRY(p[22], wires=[8, 7])
qml.CRY(p[23], wires=[6, 7])
qml.CRY(p[24], wires=[6, 3])
qml.CRY(p[25], wires=[0, 3])
# To the corners from the centre
qml.CRY(p[26], wires=[4, 0])
qml.CRY(p[27], wires=[4, 2])
qml.CRY(p[28], wires=[4, 6])
qml.CRY(p[29], wires=[4, 8])
# To the centre from the edges
qml.CRY(p[30], wires=[1, 4])
qml.CRY(p[31], wires=[3, 4])
qml.CRY(p[32], wires=[5, 4])
qml.CRY(p[33], wires=[7, 4])
return [qml.expval(ob_center), qml.expval(ob_corner), qml.expval(ob_edge)]
fig, ax = qml.draw_mpl(circuit_no_sym)([0] * 9, [0] * 34)
</code>
Note again how, though these circuits have a similar form to before,
they are parameterised differently. We need to feed the vector
$\boldsymbol{y}$ made up of the expectation value of these three
operators into the loss function and use this to update our parameters.
<code>
import math
def encode_game(game):
board, res = game
x = board * (2 * math.pi) / 3
if res == 1:
y = [-1, -1, 1]
elif res == -1:
y = [1, -1, -1]
else:
y = [-1, 1, -1]
return x, y
</code>
Recall that the loss function we're interested in is
$\mathcal{L}(\mathcal{D})=\frac{1}{|\mathcal{D}|} \sum_{(\boldsymbol{g}, \boldsymbol{y}) \in \mathcal{D}}\|\hat{\boldsymbol{y}}(\boldsymbol{g})-\boldsymbol{y}\|_{2}^{2}.$
We need to define this and then we can begin our optimisation.
<code>
# calculate the mean square error for this classification problem
def cost_function(params, input, target):
output = torch.stack([torch.hstack(circuit(x, params)) for x in input])
vec = output - target
sum_sqr = torch.sum(vec * vec, dim=1)
return torch.mean(sum_sqr)
</code>
Let's now train our symmetry-preserving circuit on the data.
<code>
from torch import optim
import numpy as np
params = 0.01 * torch.randn(9)
params.requires_grad = True
opt = optim.Adam([params], lr=1e-2)
max_epoch = 15
max_step = 30
batch_size = 10
encoded_dataset = list(zip(*[encode_game(game) for game in dataset]))
encoded_dataset_val = list(zip(*[encode_game(game) for game in dataset_val]))
def accuracy(p, x_val, y_val):
with torch.no_grad():
y_val = torch.tensor(y_val)
y_out = torch.stack([torch.hstack(circuit(x, p)) for x in x_val])
acc = torch.sum(torch.argmax(y_out, axis=1) == torch.argmax(y_val, axis=1))
return acc / len(x_val)
print(f"accuracy without training = {accuracy(params, *encoded_dataset_val)}")
x_dataset = torch.stack(encoded_dataset[0])
y_dataset = torch.tensor(encoded_dataset[1], requires_grad=False)
saved_costs_sym = []
saved_accs_sym = []
for epoch in range(max_epoch):
rand_idx = torch.randperm(len(x_dataset))
# Shuffled dataset
x_dataset = x_dataset[rand_idx]
y_dataset = y_dataset[rand_idx]
costs = []
for step in range(max_step):
x_batch = x_dataset[step * batch_size : (step + 1) * batch_size]
y_batch = y_dataset[step * batch_size : (step + 1) * batch_size]
def opt_func():
opt.zero_grad()
loss = cost_function(params, x_batch, y_batch)
costs.append(loss.item())
loss.backward()
return loss
opt.step(opt_func)
cost = np.mean(costs)
saved_costs_sym.append(cost)
if (epoch + 1) % 1 == 0:
# Compute validation accuracy
acc_val = accuracy(params, *encoded_dataset_val)
saved_accs_sym.append(acc_val)
res = [epoch + 1, cost, acc_val]
print("Epoch: {:2d} | Loss: {:3f} | Validation accuracy: {:3f}".format(*res))
</code>
Now we train the non-symmetry preserving circuit.
<code>
params = 0.01 * torch.randn(34)
params.requires_grad = True
opt = optim.Adam([params], lr=1e-2)
# calculate mean square error for this classification problem
def cost_function_no_sym(params, input, target):
output = torch.stack([torch.hstack(circuit_no_sym(x, params)) for x in input])
vec = output - target
sum_sqr = torch.sum(vec * vec, dim=1)
return torch.mean(sum_sqr)
max_epoch = 15
max_step = 30
batch_size = 15
encoded_dataset = list(zip(*[encode_game(game) for game in dataset]))
encoded_dataset_val = list(zip(*[encode_game(game) for game in dataset_val]))
def accuracy_no_sym(p, x_val, y_val):
with torch.no_grad():
y_val = torch.tensor(y_val)
y_out = torch.stack([torch.hstack(circuit_no_sym(x, p)) for x in x_val])
acc = torch.sum(torch.argmax(y_out, axis=1) == torch.argmax(y_val, axis=1))
return acc / len(x_val)
print(f"accuracy without training = {accuracy_no_sym(params, *encoded_dataset_val)}")
x_dataset = torch.stack(encoded_dataset[0])
y_dataset = torch.tensor(encoded_dataset[1], requires_grad=False)
saved_costs = []
saved_accs = []
for epoch in range(max_epoch):
rand_idx = torch.randperm(len(x_dataset))
# Shuffled dataset
x_dataset = x_dataset[rand_idx]
y_dataset = y_dataset[rand_idx]
costs = []
for step in range(max_step):
x_batch = x_dataset[step * batch_size : (step + 1) * batch_size]
y_batch = y_dataset[step * batch_size : (step + 1) * batch_size]
def opt_func():
opt.zero_grad()
loss = cost_function_no_sym(params, x_batch, y_batch)
costs.append(loss.item())
loss.backward()
return loss
opt.step(opt_func)
cost = np.mean(costs)
saved_costs.append(cost)
if (epoch + 1) % 1 == 0:
# Compute validation accuracy
acc_val = accuracy_no_sym(params, *encoded_dataset_val)
saved_accs.append(acc_val)
res = [epoch + 1, cost, acc_val]
print("Epoch: {:2d} | Loss: {:3f} | Validation accuracy: {:3f}".format(*res))
</code>
Finally let's plot the results and see how the two training regimes
differ.
<code>
from matplotlib import pyplot as plt
plt.title("Validation accuracies")
plt.plot(saved_accs_sym, "b", label="Symmetric")
plt.plot(saved_accs, "g", label="Standard")
plt.ylabel("Validation accuracy (%)")
plt.xlabel("Optimization steps")
plt.legend()
plt.show()
</code>
What we can see then is that by paying attention to the symmetries
intrinsic to the learning problem and reflecting this in an equivariant
gate set we have managed to improve our learning accuracies, while also
using fewer parameters. While the symmetry-aware circuit clearly
outperforms the naive one, it is notable, however, that the learning
accuracies in both cases are hardly ideal given this is a solved game.
So paying attention to symmetries definitely helps, but it also isn't a
magic bullet!
The use of symmetries in both quantum and classical machine learning is
a developing field, so we can expect new results to emerge over the
coming years. If you want to get involved, the references given below
are a great place to start.
# An equivariant graph embedding
A notorious problem when data comes in the form of graphs -- think of
molecules or social media networks -- is that the numerical
representation of a graph in a computer is not unique. For example, if
we describe a graph via an [adjacency
matrix](https://en.wikipedia.org/wiki/Adjacency_matrix) whose entries
contain the edge weights as off-diagonals and node weights on the
diagonal, any simultaneous permutation of rows and columns of this
matrix refer to the same graph.

For example, the graph in the image above is represented by each of the
two equivalent adjacency matrices. The top matrix can be transformed
into the bottom matrix by swapping the first row with the third row,
then swapping the first column with the third column, then the new first
row with the second, and finally the new first column with the second.
But the number of such permutations grows factorially with the number of
nodes in the graph, which is even worse than an exponential growth!
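For a sense of scale (a small illustration added here), compare $n!$ with $2^n$ for a few node counts:
<code>
import math

# number of node permutations vs. exponential growth
for n in [5, 10, 15, 20]:
    print(n, math.factorial(n), 2 ** n)
</code>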
If we want computers to learn from graph data, we usually want our
models to \"know\" that all these permuted adjacency matrices refer to
the same object, so we do not waste resources on learning this property.
In mathematical terms, this means that the model should be in- or
equivariant (more about this distinction below) with respect to
permutations. This is the basic motivation of [Geometric Deep
Learning](https://geometricdeeplearning.com/), ideas of which have found
their way into quantum machine learning.
This tutorial shows how to implement an example of a trainable
permutation equivariant graph embedding as proposed in [Skolik et al.
(2022)](https://arxiv.org/pdf/2205.06109.pdf). The embedding maps the
adjacency matrix of an undirected graph with edge and node weights to a
quantum state, such that permutations of an adjacency matrix get mapped
to the same states *if only we also permute the qubit registers in the
same fashion*.
## Permuted adjacency matrices describe the same graph
Let us first verify that permuted adjacency matrices really describe one
and the same graph. We also gain some useful data generation functions
for later.
First we create random adjacency matrices. The entry $a_{ij}$ of this
matrix corresponds to the weight of the edge between nodes $i$ and $j$
in the graph. We assume that graphs have no self-loops; instead, the
diagonal elements of the adjacency matrix are interpreted as node
weights (or \"node attributes\").
Taking the example of a Twitter user retweet network, the nodes would be
users, edge weights indicate how often two users retweet each other and
node attributes could indicate the follower count of a user.
<code>
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
rng = np.random.default_rng(4324234)
def create_data_point(n):
"""
Returns a random undirected adjacency matrix of dimension (n,n).
The diagonal elements are interpreted as node attributes.
"""
mat = rng.random((n, n))
A = (mat + np.transpose(mat))/2
return np.round(A, decimals=2)
A = create_data_point(3)
print(A)
</code>
Let's also write a function to generate permuted versions of this
adjacency matrix.
<code>
def permute(A, permutation):
"""
Returns a copy of A with rows and columns swapped according to permutation.
For example, the permutation [1, 2, 0] swaps 0->1, 1->2, 2->0.
"""
P = np.zeros((len(A), len(A)))
for i,j in enumerate(permutation):
P[i,j] = 1
return P @ A @ np.transpose(P)
A_perm = permute(A, [1, 2, 0])
print(A_perm)
</code>
If we create `networkx` graphs from both adjacency matrices
and plot them, we see that they are identical as claimed.
<code>
fig, (ax1, ax2) = plt.subplots(1, 2)
# interpret diagonal of matrix as node attributes
node_labels = {n: A[n,n] for n in range(len(A))}
np.fill_diagonal(A, np.zeros(len(A)))
G1 = nx.Graph(A)
pos1=nx.spring_layout(G1)
nx.draw(G1, pos1, labels=node_labels, ax=ax1, node_size = 800, node_color = "#ACE3FF")
edge_labels = nx.get_edge_attributes(G1,'weight')
nx.draw_networkx_edge_labels(G1,pos1,edge_labels=edge_labels, ax=ax1)
# interpret diagonal of permuted matrix as node attributes
node_labels = {n: A_perm[n,n] for n in range(len(A_perm))}
np.fill_diagonal(A_perm, np.zeros(len(A)))
G2 = nx.Graph(A_perm)
pos2=nx.spring_layout(G2)
nx.draw(G2, pos2, labels=node_labels, ax=ax2, node_size = 800, node_color = "#ACE3FF")
edge_labels = nx.get_edge_attributes(G2,'weight')
nx.draw_networkx_edge_labels(G2,pos2,edge_labels=edge_labels, ax=ax2)
ax1.set_xlim([1.2*x for x in ax1.get_xlim()])
ax2.set_xlim([1.2*x for x in ax2.get_xlim()])
plt.tight_layout()
plt.show()
</code>
Note:
The issue of non-unique numerical representations of graphs ultimately
stems from the fact that the nodes in a graph do not have an intrinsic
order, and by labelling them in a numerical data structure like a matrix
we therefore impose an arbitrary order.
## Permutation equivariant embeddings
When we design a machine learning model that takes graph data, the first
step is to encode the adjacency matrix into a quantum state using an
embedding or quantum feature
map $\phi:$
$$A \rightarrow |\phi(A)\rangle .$$
We may want the resulting quantum state to be the same for all adjacency
matrices describing the same graph. In mathematical terms, this means
that $\phi$ is an *invariant* embedding with respect to simultaneous row
and column permutations $\pi(A)$ of the adjacency matrix:
$$|\phi(A) \rangle = |\phi(\pi(A))\rangle \;\; \text{ for all } \pi .$$
However, invariance is often too strong a constraint. Think for example
of an encoding that associates each node in the graph with a qubit. We
might want permutations of the adjacency matrix to lead to the same
state *up to an equivalent permutation of the qubits* $P_{\pi},$ where
$$P_{\pi} |q_1,...,q_n \rangle = |q_{\textit{perm}_{\pi}(1)}, ... q_{\textit{perm}_{\pi}(n)} \rangle .$$
The function $\text{perm}_{\pi}$ maps each index to the permuted index
according to $\pi.$
Note:
The operator $P_{\pi}$ is implemented by PennyLane's `qml.Permute`.
This results in an *equivariant* embedding with respect to permutations
of the adjacency matrix:
$$|\phi(A) \rangle = P_{\pi}|\phi(\pi(A))\rangle \;\; \text{ for all } \pi .$$
This is exactly what the following quantum embedding is aiming to do!
The mathematical details behind these concepts use group theory and are
beautiful, but can be a bit daunting. Have a look at [this
paper](https://arxiv.org/abs/2210.08566) if you want to learn more.
## Implementation in PennyLane
Let's get our hands dirty with an example. As mentioned, we will
implement the permutation-equivariant embedding suggested in [Skolik et
al. (2022)](https://arxiv.org/pdf/2205.06109.pdf) which has this
structure:

The image can be found in [Skolik et al.
(2022)](https://arxiv.org/pdf/2205.06109.pdf) and shows one layer of the
circuit. The $\epsilon$ are our edge weights while $\alpha$ describe the
node weights, and the $\beta,$ $\gamma$ are variational parameters.
In PennyLane this looks as follows:
<code>
import pennylane as qml
def perm_equivariant_embedding(A, betas, gammas):
"""
Ansatz to embed a graph with node and edge weights into a quantum state.
The adjacency matrix A contains the edge weights on the off-diagonal,
as well as the node attributes on the diagonal.
The embedding contains trainable weights 'betas' and 'gammas'.
"""
n_nodes = len(A)
n_layers = len(betas) # infer the number of layers from the parameters
# initialise in the plus state
for i in range(n_nodes):
qml.Hadamard(i)
for l in range(n_layers):
for i in range(n_nodes):
for j in range(i):
# factor of 2 due to definition of gate
qml.IsingZZ(2*gammas[l]*A[i,j], wires=[i,j])
for i in range(n_nodes):
qml.RX(A[i,i]*betas[l], wires=i)
</code>
We can use this ansatz in a circuit.
<code>
n_qubits = 5
n_layers = 2
dev = qml.device("lightning.qubit", wires=n_qubits)
@qml.qnode(dev)
def eqc(adjacency_matrix, observable, trainable_betas, trainable_gammas):
"""Circuit that uses the permutation equivariant embedding"""
perm_equivariant_embedding(adjacency_matrix, trainable_betas, trainable_gammas)
return qml.expval(observable)
A = create_data_point(n_qubits)
betas = rng.random(n_layers)
gammas = rng.random(n_layers)
observable = qml.PauliX(0) @ qml.PauliX(1) @ qml.PauliX(3)
qml.draw_mpl(eqc, decimals=2)(A, observable, betas, gammas)
plt.show()
</code>
Validating the equivariance
===========================
Let's now check if the circuit is really equivariant!
This is the expectation value we get using the original adjacency matrix
as an input:
<code>
result_A = eqc(A, observable, betas, gammas)
print("Model output for A:", result_A)
</code>
If we permute the adjacency matrix, this is what we get:
<code>
perm = [2, 3, 0, 1, 4]
A_perm = permute(A, perm)
result_Aperm = eqc(A_perm, observable, betas, gammas)
print("Model output for permutation of A: ", result_Aperm)
</code>
Why are the two values different? Well, we constructed an *equivariant*
ansatz, not an *invariant* one! Remember, an *invariant* ansatz means
that embedding a permutation of the adjacency matrix leads to the same
state as an embedding of the original matrix. An *equivariant* ansatz
embeds the permuted adjacency matrix into a state where the qubits are
permuted as well.
As a result, the final state before measurement is only the same if we
permute the qubits in the same manner that we permute the input
adjacency matrix. We could insert a permutation operator
`qml.Permute(perm)` to achieve this, or we simply permute the wires of
the observables!
<code>
observable_perm = qml.PauliX(perm[0]) @ qml.PauliX(perm[1]) @ qml.PauliX(perm[3])
</code>
Now everything should work out!
<code>
result_Aperm = eqc(A_perm, observable_perm, betas, gammas)
print("Model output for permutation of A, and with permuted observable: ", result_Aperm)
</code>
Et voilà!
## Conclusion
Equivariant graph embeddings can be combined with other equivariant
parts of a quantum machine learning pipeline (like measurements and the
cost function). [Skolik et al.
(2022)](https://arxiv.org/pdf/2205.06109.pdf), for example, use such a
pipeline as part of a reinforcement learning scheme that finds heuristic
solutions for the traveling salesman problem. Their simulations compare
a fully equivariant model to circuits that break permutation
equivariance and show that it performs better, confirming that if we
know about structure in our data, we should try to use this knowledge in
machine learning.
# Quantum models as Fourier series
This demonstration is based on the paper *The effect of data encoding on
the expressive power of variational quantum machine learning models* by
[Schuld, Sweke, and Meyer (2020)](https://arxiv.org/abs/2008.08605).

The paper links common quantum machine learning models designed for
near-term quantum computers to Fourier series (and, more generally, to
Fourier-type sums). With this link, the class of functions a quantum
model can learn (i.e., its "expressivity") can be characterized by the
model's control of the Fourier series' frequencies and coefficients.
Background
==========
The paper considers quantum machine learning models of the form
$$f_{\boldsymbol \theta}(x) = \langle 0| U^{\dagger}(x,\boldsymbol \theta) M U(x, \boldsymbol \theta) | 0 \rangle$$
where $M$ is a measurement observable and $U(x, \boldsymbol \theta)$ is
a variational quantum circuit that encodes a data input $x$ and depends
on a set of parameters $\boldsymbol \theta.$ Here we will restrict
ourselves to one-dimensional data inputs, but the paper motivates that
higher-dimensional features simply generalize to multi-dimensional
Fourier series.
The circuit itself repeats $L$ layers, each consisting of a
data-encoding circuit block $S(x)$ and a trainable circuit block
$W(\boldsymbol \theta)$ that is controlled by the parameters
$\boldsymbol \theta.$ The data encoding block consists of gates of the
form $\mathcal{G}(x) = e^{-ix H},$ where $H$ is a Hamiltonian. A
prominent example of such gates are Pauli rotations.
The paper shows how such a quantum model can be written as a
Fourier-type sum of the form
$$f_{ \boldsymbol \theta}(x) = \sum_{\omega \in \Omega} c_{\omega}( \boldsymbol \theta) \; e^{i \omega x}.$$
As illustrated in the picture below (which is Figure 1 from the paper),
the \"encoding Hamiltonians\" in $S(x)$ determine the set $\Omega$ of
available \"frequencies\", and the remainder of the circuit, including
the trainable parameters, determines the coefficients $c_{\omega}.$

The paper demonstrates many of its findings for circuits in which
$\mathcal{G}(x)$ is a single-qubit Pauli rotation gate. For example, it
shows that $r$ repetitions of a Pauli rotation-encoding gate in
\"sequence\" (on the same qubit, but with multiple layers $r=L$) or in
\"parallel\" (on $r$ different qubits, with $L=1$) creates a quantum
model that can be expressed as a *Fourier series* of the form
$$f_{ \boldsymbol \theta}(x) = \sum_{n \in \Omega} c_{n}(\boldsymbol \theta) e^{i n x},$$
where $\Omega = \{ -r, \dots, -1, 0, 1, \dots, r\}$ is a spectrum of
consecutive integer-valued frequencies up to degree $r.$
As a result, we expect quantum models that encode an input $x$ by $r$
Pauli rotations to only be able to fit Fourier series of at most degree
$r.$
Goal of this demonstration
==========================
The experiments below investigate this "Fourier series"-like nature of
quantum models by showing how to reproduce the simulations underlying
Figures 3, 4 and 5 in Section II of the paper:
- **Figures 3 and 4** are function-fitting experiments, where quantum
models with different encoding strategies have the task to fit
Fourier series up to a certain degree. As in the paper, we will use
examples of qubit-based quantum circuits where a single data feature
is encoded via Pauli rotations.
- **Figure 5** plots the Fourier coefficients of randomly sampled
instances from a family of quantum models which is defined by some
parametrized ansatz.
The code is presented so you can easily modify it in order to play
around with other settings and models. The settings used in the paper
are given in the various subsections.
First of all, let's make some imports and define a standard loss
function for the training.
<code>
import matplotlib.pyplot as plt
import pennylane as qml
from pennylane import numpy as np
np.random.seed(42)
def square_loss(targets, predictions):
    loss = 0
    for t, p in zip(targets, predictions):
        loss += (t - p) ** 2
    loss = loss / len(targets)
    return 0.5 * loss
</code>
Part I: Fitting Fourier series with serial Pauli-rotation encoding
==================================================================
First we will reproduce Figures 3 and 4 from the paper. These show how
quantum models that use Pauli rotations as data-encoding gates can only
fit Fourier series up to a certain degree. The degree corresponds to the
number of times that the Pauli gate gets repeated in the quantum model.
Let us consider circuits where the encoding gate gets repeated
sequentially (as in Figure 2a of the paper). For simplicity we will only
look at single-qubit circuits:

Define a target function
========================
We first define a (classical) target function which will be used as a
\"ground truth\" that the quantum model has to fit. The target function
is constructed as a Fourier series of a specific degree.
We also allow for a rescaling of the data by a hyperparameter `scaling`,
which we will do in the quantum model as well. As shown in the paper, for the
quantum model to learn the classical model in the experiment below, the
scaling of the quantum model and the target function have to match,
which is an important observation for the design of quantum machine
learning models.
<code>
degree = 1 # degree of the target function
scaling = 1 # scaling of the data
coeffs = [0.15 + 0.15j] * degree # coefficients of non-zero frequencies
coeff0 = 0.1 # coefficient of zero frequency
def target_function(x):
    """Generate a truncated Fourier series, where the data gets re-scaled."""
    res = coeff0
    for idx, coeff in enumerate(coeffs):
        exponent = np.complex128(scaling * (idx + 1) * x * 1j)
        conj_coeff = np.conjugate(coeff)
        res += coeff * np.exp(exponent) + conj_coeff * np.exp(-exponent)
    return np.real(res)
</code>
Let's have a look at it.
<code>
x = np.linspace(-6, 6, 70, requires_grad=False)
target_y = np.array([target_function(x_) for x_ in x], requires_grad=False)
plt.plot(x, target_y, c="black")
plt.scatter(x, target_y, facecolor="white", edgecolor="black")
plt.ylim(-1, 1)
plt.show()
</code>
::: {.note}
::: {.title}
Note
:::
To reproduce the figures in the paper, you can use the following
settings in the cells above:
- For the settings
degree = 1
coeffs = [0.15 + 0.15j] * degree
coeff0 = 0.1
this function is the ground truth
$g(x) = \sum_{n=-1}^1 c_{n} e^{-nix}$ from Figure 3 in the paper.
- To get the ground truth $g'(x) = \sum_{n=-2}^2 c_{n} e^{-nix}$ with
$c_0=0.1,$ $c_1 = c_2 = 0.15 - 0.15i$ from Figure 3, you need to
increase the degree to two:
degree = 2
- The ground truth from Figure 4 can be reproduced by changing the
settings to:
degree = 5
coeffs = [0.05 + 0.05j] * degree
coeff0 = 0.0
:::
Define the serial quantum model
===============================
We now define the quantum model itself.
<code>
scaling = 1
dev = qml.device("default.qubit", wires=1)
def S(x):
    """Data-encoding circuit block."""
    qml.RX(scaling * x, wires=0)

def W(theta):
    """Trainable circuit block."""
    qml.Rot(theta[0], theta[1], theta[2], wires=0)

@qml.qnode(dev)
def serial_quantum_model(weights, x):
    for theta in weights[:-1]:
        W(theta)
        S(x)
    # (L+1)'th unitary
    W(weights[-1])
    return qml.expval(qml.PauliZ(wires=0))
</code>
You can run the following cell multiple times, each time sampling
different weights, and therefore different quantum models.
<code>
r = 1 # number of times the encoding gets repeated (here equal to the number of layers)
weights = (
2 * np.pi * np.random.random(size=(r + 1, 3), requires_grad=True)
) # some random initial weights
x = np.linspace(-6, 6, 70, requires_grad=False)
random_quantum_model_y = [serial_quantum_model(weights, x_) for x_ in x]
plt.plot(x, random_quantum_model_y, c="blue")
plt.ylim(-1, 1)
plt.show()
</code>
No matter what weights are picked, the single-qubit model for
`L=1` will always be a sine function of a fixed frequency.
The weights merely influence the amplitude, y-shift, and phase of the
sine.
This observation is formally derived in Section II.A of the paper.
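A quick numerical sanity check of this claim (not part of the original demo; the cutoff `K_check = 4` is an arbitrary choice): sampling the current `r=1` model over one period and taking a discrete Fourier transform should leave only the zero- and first-order coefficients with noticeable weight.
<code>
K_check = 4
t_check = np.linspace(0, 2 * np.pi, 2 * K_check + 1, endpoint=False)
samples_check = np.array([serial_quantum_model(weights, t_) for t_ in t_check])
coeffs_check = np.fft.rfft(samples_check) / t_check.size
# Only |c_0| and |c_1| should be noticeably different from zero
print(np.round(np.abs(coeffs_check), 4))
</code>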
::: {.note}
::: {.title}
Note
:::
You can increase the number of layers. Figure 4 from the paper, for
example, uses the settings `L=1`, `L=3` and `L=5`.
:::
Finally, let's look at the circuit we just created:
<code>
print(qml.draw(serial_quantum_model)(weights, x[-1]))
</code>
Fit the model to the target
===========================
The next step is to optimize the weights in order to fit the ground
truth.
<code>
def cost(weights, x, y):
    predictions = [serial_quantum_model(weights, x_) for x_ in x]
    return square_loss(y, predictions)

max_steps = 50
opt = qml.AdamOptimizer(0.3)
batch_size = 25
cst = [cost(weights, x, target_y)]  # initial cost

for step in range(max_steps):
    # Select batch of data
    batch_index = np.random.randint(0, len(x), (batch_size,))
    x_batch = x[batch_index]
    y_batch = target_y[batch_index]

    # Update the weights by one optimizer step
    weights, _, _ = opt.step(cost, weights, x_batch, y_batch)

    # Save, and possibly print, the current cost
    c = cost(weights, x, target_y)
    cst.append(c)
    if (step + 1) % 10 == 0:
        print("Cost at step {0:3}: {1}".format(step + 1, c))
</code>
To continue training, you may just run the above cell again. Once you
are happy, you can use the trained model to predict function values, and
compare them with the ground truth.
<code>
predictions = [serial_quantum_model(weights, x_) for x_ in x]
plt.plot(x, target_y, c="black")
plt.scatter(x, target_y, facecolor="white", edgecolor="black")
plt.plot(x, predictions, c="blue")
plt.ylim(-1, 1)
plt.show()
</code>
Let's also have a look at the cost during training.
<code>
plt.plot(range(len(cst)), cst)
plt.ylabel("Cost")
plt.xlabel("Step")
plt.ylim(0, 0.23)
plt.show()
</code>
With the initial settings and enough training steps, the quantum model
learns to fit the ground truth perfectly. This is expected, since the
number of Pauli-rotation-encoding gates and the degree of the ground
truth Fourier series are both one.
If the ground truth's degree is larger than the number of layers in the
quantum model, the fit will look much less accurate. And finally, we
also need to have the correct scaling of the data: if one of the models
changes the `scaling` parameter (which effectively scales the
frequencies), fitting does not work even with enough encoding
repetitions.
Note:
You will find that the training takes much longer, and needs a lot more
steps to converge for larger L. Some initial weights may not even
converge to a good solution at all; the training seems to get stuck in a
minimum.
It is an open research question whether for asymptotically large L, the
single qubit model can fit *any* function by constructing arbitrary
Fourier coefficients.
Part II: Fitting Fourier series with parallel Pauli-rotation encoding
=====================================================================
Our next task is to repeat the function-fitting experiment for a circuit
where the Pauli rotation gate gets repeated $r$ times on *different*
qubits, using a single layer $L=1.$
As shown in the paper, we expect similar results to the serial model: a
Fourier series of degree $r$ can only be fitted if there are at least
$r$ repetitions of the encoding gate in the quantum model. However, in
practice this experiment is a bit harder, since the dimension of the
trainable unitaries $W$ grows quickly with the number of qubits.
In the paper, the investigations are made with the assumption that the
purple trainable blocks $W$ are arbitrary unitaries. We could use the
`ArbitraryUnitary`
template, but since this template requires a number of parameters that
grows exponentially with the number of qubits ($4^r-1$ for $r$ qubits, to be precise),
this quickly becomes cumbersome to train.
We therefore follow Figure 4 in the paper and use an ansatz for $W.$

Define the parallel quantum model
=================================
The ansatz is PennyLane's layer structure called
`StronglyEntanglingLayers`, and as the name suggests, it has itself a user-defined
number of layers (which we will call "ansatz layers" to avoid
confusion).
<code>
from pennylane.templates import StronglyEntanglingLayers
</code>
Let's have a quick look at the ansatz itself for 3 qubits by making a
dummy circuit of 2 ansatz layers:
<code>
n_ansatz_layers = 2
n_qubits = 3
dev = qml.device("default.qubit", wires=4)
@qml.qnode(dev)
def ansatz(weights):
    StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.Identity(wires=0))
weights_ansatz = 2 * np.pi * np.random.random(size=(n_ansatz_layers, n_qubits, 3))
print(qml.draw(ansatz, level="device")(weights_ansatz))
</code>
Now we define the actual quantum model.
<code>
scaling = 1
r = 3
dev = qml.device("default.qubit", wires=r)
def S(x):
    """Data-encoding circuit block."""
    for w in range(r):
        qml.RX(scaling * x, wires=w)

def W(theta):
    """Trainable circuit block."""
    StronglyEntanglingLayers(theta, wires=range(r))

@qml.qnode(dev)
def parallel_quantum_model(weights, x):
    W(weights[0])
    S(x)
    W(weights[1])
    return qml.expval(qml.PauliZ(wires=0))
</code>
Again, you can sample random weights and plot the model function:
<code>
trainable_block_layers = 3
weights = 2 * np.pi * np.random.random(size=(2, trainable_block_layers, r, 3), requires_grad=True)
x = np.linspace(-6, 6, 70, requires_grad=False)
random_quantum_model_y = [parallel_quantum_model(weights, x_) for x_ in x]
plt.plot(x, random_quantum_model_y, c="blue")
plt.ylim(-1, 1)
plt.show()
</code>
Training the model
==================
Training the model is done exactly as before, but it may take a lot
longer this time. We set a default of 70 steps, which you should
increase if necessary. Small models of <6 qubits usually converge after
a few hundred steps at most---but this depends on your settings.
<code>
def cost(weights, x, y):
    predictions = [parallel_quantum_model(weights, x_) for x_ in x]
    return square_loss(y, predictions)

max_steps = 70
opt = qml.AdamOptimizer(0.3)
batch_size = 25
cst = [cost(weights, x, target_y)]  # initial cost

for step in range(max_steps):
    # select batch of data
    batch_index = np.random.randint(0, len(x), (batch_size,))
    x_batch = x[batch_index]
    y_batch = target_y[batch_index]

    # update the weights by one optimizer step
    weights, _, _ = opt.step(cost, weights, x_batch, y_batch)

    # save, and possibly print, the current cost
    c = cost(weights, x, target_y)
    cst.append(c)
    if (step + 1) % 10 == 0:
        print("Cost at step {0:3}: {1}".format(step + 1, c))
</code>
<code>
predictions = [parallel_quantum_model(weights, x_) for x_ in x]
plt.plot(x, target_y, c="black")
plt.scatter(x, target_y, facecolor="white", edgecolor="black")
plt.plot(x, predictions, c="blue")
plt.ylim(-1, 1)
plt.show()
</code>
<code>
plt.plot(range(len(cst)), cst)
plt.ylabel("Cost")
plt.xlabel("Step")
plt.show()
</code>
Note:
To reproduce the right column in Figure 4 from the paper, use the
correct ground truth, $r=3$ and `trainable_block_layers=3`, as well as sufficiently
many training steps. The number of steps depends on the initial weights
and other hyperparameters, and in some settings training may not
converge to zero error at all.
Part III: Sampling Fourier coefficients
=======================================
When we use a trainable ansatz above, it is possible that even with
enough repetitions of the data-encoding Pauli rotation, the quantum
model cannot fit the target function, since the expressivity of quantum models
also depends on the Fourier coefficients the model can create.
Figure 5 in the paper shows Fourier coefficients from quantum models sampled from
a model family defined by an ansatz for the trainable circuit block. For
this we need a function that numerically computes the Fourier
coefficients of a periodic function f with period $2 \pi.$
<code>
def fourier_coefficients(f, K):
    """
    Computes the first 2*K+1 Fourier coefficients of a 2*pi periodic function.
    """
    n_coeffs = 2 * K + 1
    t = np.linspace(0, 2 * np.pi, n_coeffs, endpoint=False)
    y = np.fft.rfft(f(t)) / t.size
    return y
</code>
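As a quick sanity check of this helper (added here for illustration, not part of the original demo), the function $f(x)=\cos(x) = \tfrac{1}{2}e^{ix} + \tfrac{1}{2}e^{-ix}$ should yield $c_0 = 0$ and $c_1 = 0.5$:
<code>
# fourier_coefficients returns the non-negative-frequency coefficients c_0, ..., c_K
print(np.round(fourier_coefficients(np.cos, K=2), 4))  # expect approximately [0, 0.5, 0]
</code>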
Define your quantum model
=========================
Now we need to define a quantum model. This could be any model, using a
qubit or continuous-variable circuit, or one of the quantum models from
above. We will use a slight variation of the `parallel_quantum_model()`
from above, this time using the
`BasicEntanglerLayers` ansatz:
<code>
from pennylane.templates import BasicEntanglerLayers
scaling = 1
n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)
def S(x):
    """Data encoding circuit block."""
    for w in range(n_qubits):
        qml.RX(scaling * x, wires=w)

def W(theta):
    """Trainable circuit block."""
    BasicEntanglerLayers(theta, wires=range(n_qubits))

@qml.qnode(dev)
def quantum_model(weights, x):
    W(weights[0])
    S(x)
    W(weights[1])
    return qml.expval(qml.PauliZ(wires=0))
</code>
It will also be handy to define a function that samples different random
weights of the correct size for the model.
<code>
n_ansatz_layers = 1
def random_weights():
    return 2 * np.pi * np.random.random(size=(2, n_ansatz_layers, n_qubits))
</code>
Now we can compute the first few Fourier coefficients for samples from
this model. The samples are created by randomly sampling different
parameters using the `random_weights()` function.
<code>
n_coeffs = 5
n_samples = 100
coeffs = []
for i in range(n_samples):
    weights = random_weights()

    def f(x):
        return np.array([quantum_model(weights, x_) for x_ in x])

    coeffs_sample = fourier_coefficients(f, n_coeffs)
    coeffs.append(coeffs_sample)
coeffs = np.array(coeffs)
coeffs_real = np.real(coeffs)
coeffs_imag = np.imag(coeffs)
</code>
Let's plot the real vs. the imaginary part of the coefficients. As a
sanity check, the $c_0$ coefficient should be real, and therefore have
no contribution on the y-axis.
<code>
n_coeffs = len(coeffs_real[0])
fig, ax = plt.subplots(1, n_coeffs, figsize=(15, 4))
for idx, ax_ in enumerate(ax):
    ax_.set_title(r"$c_{}$".format(idx))
    ax_.scatter(
        coeffs_real[:, idx],
        coeffs_imag[:, idx],
        s=20,
        facecolor="white",
        edgecolor="red",
    )
    ax_.set_aspect("equal")
    ax_.set_ylim(-1, 1)
    ax_.set_xlim(-1, 1)
plt.tight_layout(pad=0.5)
plt.show()
</code>
Playing around with different quantum models, you will find that some
quantum models create different distributions over the coefficients than
others. For example `BasicEntanglerLayers` (with the default Pauli-X
rotation) seems to have a structure that forces the even Fourier
coefficients to zero, while `StronglyEntanglingLayers` will have a
non-zero variance for all supported coefficients.
Note also how the variance of the distribution decreases for growing
orders of the coefficients---an effect linked to the convergence of a
Fourier series.
Note:
To reproduce the results from Figure 5 you have to change the ansatz (no
unitary, `BasicEntanglerLayers` or `StronglyEntanglingLayers`) and set
`n_ansatz_layers` to either $1$ or $5$. The `StronglyEntanglingLayers` ansatz
requires weights of shape `size=(2, n_ansatz_layers, n_qubits, 3)`.
Continuous-variable model
=========================
The paper mentions that a phase rotation in continuous-variable quantum
computing has a spectrum that supports *all* Fourier frequencies. To play
with this model, we finally show you the code for a continuous-variable
circuit. For example, to see its Fourier coefficients run the cell
below, and then re-run the two cells above.
<code>
var = 2
n_ansatz_layers = 1
dev_cv = qml.device("default.gaussian", wires=1)
def S(x):
    qml.Rotation(x, wires=0)

def W(theta):
    """Trainable circuit block."""
    for r_ in range(n_ansatz_layers):
        qml.Displacement(theta[0], theta[1], wires=0)
        qml.Squeezing(theta[2], theta[3], wires=0)

@qml.qnode(dev_cv)
def quantum_model(weights, x):
    W(weights[0])
    S(x)
    W(weights[1])
    return qml.expval(qml.QuadX(wires=0))

def random_weights():
    return np.random.normal(size=(2, 5 * n_ansatz_layers), loc=0, scale=var)
</code>
Note:
To find out what effect so-called "non-Gaussian" gates like the `Kerr`
gate have, you need to install the [strawberryfields
plugin](https://pennylane-sf.readthedocs.io/en/latest/) and change the
device to
``` {.python}
dev_cv = qml.device('strawberryfields.fock', wires=1, cutoff_dim=50)
```
## Equivariant Quantum Machine learning
In the following, we will denote elements of a symmetry group $G$ with
$g \in G.$ $G$ could be for instance the rotation group $SO(3),$ or the
permutation group $S_n.$ Groups are often more easily understood in terms of
their representation $V_g : \mathcal{V} \rightarrow \mathcal{V}$ which
maps group elements to invertible linear operations, i.e. to $GL(n),$ on
some vector space $\mathcal{V}.$ We call a function
$f: \mathcal{V} \rightarrow \mathcal{W}$ *invariant* with respect to the
action of the group, if
$$f(V_g(v)) = f(v), \text{ for all } g \in G.$$
The concept of *equivariance* is a bit weaker, as it only requires the
function to *commute* with the group action, instead of remaining
constant. In mathematical terms, we require that
$$f(V_g(v)) = \mathcal{R}_g(f(v)), \text{ for all } g \in G,$$
with $\mathcal{R}$ being a representation of $G$ on the vector space
$\mathcal{W}.$ These concepts are important in machine learning, as they
tell us how the internal structure of the data, described by the group,
is conserved when passing through the model. In the remainder, we will
refer to $\mathcal{V}$ and $V_g$ as the data space and the
representation on it, respectively, and $\mathcal{W}$ and
$\mathcal{R}_g$ as the qubit space and the symmetry action on it,
respectively.
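As a small classical illustration of these two notions (a toy example, not taken from the paper), consider the permutation group acting on vectors by reordering their entries: the sum of the entries is invariant, while any elementwise function is equivariant, since it commutes with the reordering.
<code>
import numpy as np

rng = np.random.default_rng(0)
v = rng.normal(size=5)
perm = rng.permutation(5)          # a group element g, acting as V_g(v) = v[perm]

# Invariant: f(V_g(v)) = f(v)
print(np.isclose(np.sum(v[perm]), np.sum(v)))

# Equivariant: f(V_g(v)) = R_g(f(v)), with R_g acting by the same permutation
f = np.tanh                        # any elementwise map commutes with permutations
print(np.allclose(f(v[perm]), f(v)[perm]))
</code>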
Now that we have the basics, we will focus on the task at hand: building
an equivariant quantum neural network for chemistry!
We use a [quantum reuploading
model](https://pennylane.ai/qml/demos/tutorial_expressivity_fourier_series/),
which consists of a variational ansatz $M_\Theta(\mathcal{X})$ applied
to some initial state $|\psi_0\rangle.$ Here, $\mathcal{X}$ denotes the
description of a molecular configuration, i.e., the set of Cartesian
coordinates of the atoms. The quantum circuit is given by
$$M_\Theta(\mathcal{X}) = \left[ \prod_{d=D}^1 \Phi(\mathcal{X}) \mathcal{U}_d(\vec{\theta}_d) \right] \Phi(\mathcal{X}),$$
and depends on both data $\mathcal{X}$ and trainable parameters
$\Theta = \{\vec{\theta}_d\}_{d=1}^D.$ It is built by interleaving
parametrized trainable layers $U_d(\vec{\theta}_d)$ with data encoding
layers $\Phi(\mathcal{X}).$ The corresponding quantum function
$f_{\Theta}(\mathcal{X})$ is then given by the expectation value of a
chosen observable $O$
$$f_\Theta(\mathcal{X}) = \langle \psi_0 | M_\Theta(\mathcal{X})^\dagger O M_\Theta(\mathcal{X}) |\psi_0 \rangle.$$
For the cases of a diatomic molecule (e.g. $LiH$) and a triatomic
molecule of two atom types (e.g. $H_2O$), panel (a) of the following
figure displays the descriptions of the chemical systems by the
Cartesian coordinates of their atoms, while the general circuit
formulation of the corresponding symmetry-invariant VQLM for these cases
is shown in panel (b). Note that we will only consider the triatomic
molecule $H_2O$ in the rest of this demo.

An overall invariant model is composed of four ingredients: an invariant
initial state, an equivariant encoding layer, equivariant trainable
layers, and finally an invariant observable. Here, equivariant encoding
means that applying the symmetry transformation first on the atomic
configuration $\mathcal{X}$ and then encoding it into the qubits
produces the same results as first encoding $\mathcal{X}$ and then
letting the symmetry act on the qubits, i.e.,
$$\Phi(V_g[\mathcal{X}]) = \mathcal{R}_g \Phi(\mathcal{X}) \mathcal{R}_g^\dagger,$$
where $V_g$ and $\mathcal{R}_g$ denote the symmetry representation on
the data and qubit level, respectively.
For the trainable layer, equivariance means that the order of applying
the symmetry and the parametrized operations does not matter:
$$\left[\mathcal{U}_d(\vec{\theta}_d), \mathcal{R}_g\right]=0.$$
Furthermore, we need to find an invariant observable
$O = \mathcal{R}_g O \mathcal{R}_g^\dagger$ and an initial state
$|\psi_0\rangle = \mathcal{R}_g |\psi_0\rangle,$ i.e., which can absorb
the symmetry action. Putting all this together results in a
symmetry-invariant VQLM as required.
In this demo, we will consider the example of a triatomic molecule of
two atom types, such as a water molecule. In this case, the system is
invariant under translations, rotations, and the exchange of the two
hydrogen atoms. Translational symmetry is included by taking the central
atom as the origin. Therefore, we only need to encode the coordinates of
the two identical *active* atoms, which we will call $\vec{x}_1$ and
$\vec{x}_2.$
Let's implement the model depicted above!
# Implementation of the VQLM
We start by importing the libraries that we will need.
<code>
import pennylane as qml
import numpy as np
import jax
jax.config.update("jax_platform_name", "cpu")
jax.config.update("jax_enable_x64", True)
from jax import numpy as jnp
import scipy
import matplotlib.pyplot as plt
import sklearn
</code>
Let us construct Pauli matrices, which are used to build the
Hamiltonian.
<code>
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1.0j], [1.0j, 0]])
Z = np.array([[1, 0], [0, -1]])
sigmas = jnp.array(np.array([X, Y, Z])) # Vector of Pauli matrices
sigmas_sigmas = jnp.array(
np.array(
[
np.kron(X, X),
np.kron(Y, Y),
np.kron(Z, Z),
] # Vector of tensor products of Pauli matrices
)
)
</code>
We start by considering **rotational invariance** and building an
initial state invariant under rotation, such as the singlet state
$|S\rangle = \frac{|01\rangle - |10\rangle}{\sqrt{2}}.$ A rotation-invariant state
on $2n$ qubits can be obtained by taking the $n$-fold tensor product of the singlet state.
<code>
def singlet(wires):
    # Encode a 2-qubit rotation-invariant initial state, i.e., the singlet state.
    qml.Hadamard(wires=wires[0])
    qml.PauliZ(wires=wires[0])
    qml.PauliX(wires=wires[1])
    qml.CNOT(wires=wires)
</code>
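A short check (added here for illustration) that this little circuit indeed prepares $\frac{|01\rangle - |10\rangle}{\sqrt{2}}$:
<code>
dev_check = qml.device("default.qubit", wires=2)

@qml.qnode(dev_check)
def singlet_state():
    singlet(wires=[0, 1])
    return qml.state()

# Amplitudes in the order |00>, |01>, |10>, |11>: expect approximately [0, 0.707, -0.707, 0]
print(np.round(singlet_state(), 3))
</code>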
Next, we need a rotationally equivariant data embedding. We choose to
encode a three-dimensional data point $\vec{x}\in \mathbb{R}^3$ via
$$\Phi(\vec{x}) = \exp\left( -i\alpha_\text{enc} [xX + yY + zZ] \right),$$
where we introduce a trainable encoding angle
$\alpha_\text{enc}\in\mathbb{R}.$ This encoding scheme is indeed
equivariant since embedding a rotated data point is the same as
embedding the original data point and then letting the rotation act on
the qubits:
$\Phi(r(\psi,\theta,\phi)\vec{x}) = U(\psi,\theta,\phi) \Phi(\vec{x}) U(\psi,\theta,\phi)^\dagger.$
For this, we have noticed that any rotation on the data level can be
parametrized by three angles $V_g = r(\psi,\theta,\phi),$ which can also
be used to parametrize the corresponding single-qubit rotation
$\mathcal{R}_g = U(\psi,\theta,\phi),$ implemented by the usual
[qml.Rot](https://docs.pennylane.ai/en/stable/code/api/pennylane.Rot.html)
operation. We choose to encode each atom twice in parallel, resulting in
higher expressivity. We can do so by simply using this encoding scheme
twice for each active atom (the two Hydrogens in our case):
$$\Phi(\vec{x}_1, \vec{x}_2) = \Phi^{(1)}(\vec{x}_1) \Phi^{(2)}(\vec{x}_2) \Phi^{(3)}(\vec{x}_1) \Phi^{(4)}(\vec{x}_2).$$
<code>
def equivariant_encoding(alpha, data, wires):
    # data (jax array): cartesian coordinates of atom i
    # alpha (jax array): trainable scaling parameter
    hamiltonian = jnp.einsum("i,ijk", data, sigmas)  # encoding Hamiltonian x X + y Y + z Z
    U = jax.scipy.linalg.expm(-1.0j * alpha * hamiltonian / 2)
    qml.QubitUnitary(U, wires=wires, id="E")
</code>
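The equivariance property $\Phi(r(\psi,\theta,\phi)\vec{x}) = U \Phi(\vec{x}) U^\dagger$ can be checked numerically. The following sketch is added purely for illustration: it uses a rotation about the $z$-axis by an arbitrary angle and the corresponding single-qubit rotation $U = e^{-i\theta Z/2}$, and compares both sides as plain matrices (the names `theta_check`, `alpha_check`, `x_check` are ad-hoc choices).
<code>
from scipy.linalg import expm

theta_check = 0.7                        # arbitrary rotation angle about the z-axis
alpha_check = 1.3                        # arbitrary encoding angle
x_check = np.array([0.2, -0.5, 0.9])     # arbitrary data point

# Rotation on the data level ...
R = np.array([
    [np.cos(theta_check), -np.sin(theta_check), 0],
    [np.sin(theta_check), np.cos(theta_check), 0],
    [0, 0, 1],
])
# ... and the corresponding single-qubit rotation on the qubit level
U_rot = expm(-1.0j * theta_check / 2 * Z)

def phi(v):
    """Encoding unitary exp(-i alpha (v . sigma) / 2) as a plain matrix."""
    return expm(-1.0j * alpha_check * (v[0] * X + v[1] * Y + v[2] * Z) / 2)

print(np.allclose(phi(R @ x_check), U_rot @ phi(x_check) @ U_rot.conj().T))
</code>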
Finally, we require an equivariant trainable map and an invariant
observable. We take the Heisenberg Hamiltonian, which is rotationally
invariant, as an inspiration. We define a single summand of it,
$H^{(i,j)}(J) = -J\left( X^{(i)}X^{(j)} + Y^{(i)}Y^{(j)} + Z^{(i)}Z^{(j)} \right),$
as a rotationally invariant two-qubit operator and choose
$$O = X^{(0)}X^{(1)} + Y^{(0)}Y^{(1)} + Z^{(0)}Z^{(1)}$$
as our observable.
Furthermore, we can obtain an equivariant parametrized operator by
exponentiating this Heisenberg interaction:
$$RH^{(i,j)}(J) = \exp\left( -iH^{(i,j)}(J) \right),$$
where $J\in\mathbb{R}$ is a trainable parameter. By combining this
exponentiated operator for different pairs of qubits, we can design our
equivariant trainable layer
$$\mathcal{U}(\vec{j}) = RH^{(1,2)}(j_1) RH^{(3,4)}(j_2) RH^{(2,3)}(j_3)$$
In the case of a triatomic molecule of two atom types, we need to modify
the previous VQLM to additionally take into account the **invariance
under permutations of the same atom types**.
Interchanging two atoms is represented on the data level by simply
interchanging the corresponding coordinates,
$V_g = \sigma(\vec{x}_1, \vec{x}_2) = (\vec{x}_2, \vec{x}_1).$ On the
Hilbert space this is represented by swapping the corresponding qubits,
$\mathcal{R}_g = U(i,j) = SWAP(i,j).$
The singlet state is not only rotationally invariant but also
permutationally invariant under swapping certain qubit pairs, so we can
keep it. The previous embedding scheme for one data point can be
extended for embedding two atoms and we see that this is indeed not only
rotationally equivariant but also equivariant with respect to
permutations, since encoding two swapped atoms is just the same as
encoding the atoms in the original order and then swapping the qubits:
$\Phi\left( \sigma(\vec{x}_1, \vec{x}_2) \right) = SWAP(i,j) \Phi(\vec{x}_1, \vec{x}_2) SWAP(i,j).$
Again, we choose to encode each atom twice as depicted above.
For the invariant observable $O,$ we note that our Heisenberg
interaction is invariant under the swapping of the two involved qubits,
therefore we can make use of the same observable as before.
For the equivariant parametrized layer we need to be careful when it
comes to the selection of qubit pairs in order to obtain equivariance,
i.e., operations that commute with the swappings. This is fulfilled by
coupling only the qubits which are neighbors with respect to the
1-2-3-4-1 ring topology, leading to the following operation:
$$\mathcal{U}(\vec{j}) = RH^{(1,2)}(j_1) RH^{(3,4)}(j_2) RH^{(2,3)}(j_3) RH^{(1,4)}(j_3)$$
In code, we have:
<code>
def trainable_layer(weight, wires):
    hamiltonian = jnp.einsum("ijk->jk", sigmas_sigmas)  # XX + YY + ZZ
    U = jax.scipy.linalg.expm(-1.0j * weight * hamiltonian)
    qml.QubitUnitary(U, wires=wires, id="U")
# Invariant observable
Heisenberg = [
qml.PauliX(0) @ qml.PauliX(1),
qml.PauliY(0) @ qml.PauliY(1),
qml.PauliZ(0) @ qml.PauliZ(1),
]
Observable = qml.Hamiltonian(np.ones((3)), Heisenberg)
</code>
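Both required symmetries of this two-qubit building block can be verified numerically. The following sketch (illustration only; the specific rotation chosen below is arbitrary) checks that the Heisenberg interaction commutes with the SWAP of its two qubits and with a simultaneous single-qubit rotation applied to both qubits:
<code>
from scipy.linalg import expm

H_heis = np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z)
SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])

# Permutation equivariance: [H, SWAP] = 0
print(np.allclose(SWAP @ H_heis, H_heis @ SWAP))

# Rotational invariance: [H, U (x) U] = 0 for any single-qubit rotation U
U_single = expm(-1.0j * 0.4 * (0.1 * X + 0.7 * Y + 0.3 * Z))  # arbitrary rotation
U_both = np.kron(U_single, U_single)
print(np.allclose(U_both @ H_heis, H_heis @ U_both))
</code>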
It has been observed that a small amount of **symmetry-breaking** (SB)
can improve the convergence of the VQLM. We implement it by adding a
small rotation around the $z$-axis.
<code>
def noise_layer(epsilon, wires):
    for i, w in enumerate(wires):
        qml.RZ(epsilon[i], wires=[w])
</code>
When setting up the model, the hyperparameters such as the number of
repetitions of encoding and trainable layers have to be chosen suitably.
In this demo, we choose six layers ($D=6$) and one repetition of
trainable gates inside each layer ($B=1$) to reduce long runtimes. Note
that this choice differs from the original paper, so the results therein
will not be fully reproduced within this demo. We start by defining the
relevant hyperparameters and the VQLM.
<code>
############ Setup ##############
D = 6 # Depth of the model
B = 1 # Number of repetitions inside a trainable layer
rep = 2 # Number of repeated vertical encoding
active_atoms = 2 # Number of active atoms
# Here we only have two active atoms since we fixed the oxygen (which becomes non-active) at the origin
num_qubits = active_atoms * rep
</code>
<code>
dev = qml.device("default.qubit", wires=num_qubits)
@qml.qnode(dev, interface="jax")
def vqlm(data, params):

    weights = params["params"]["weights"]
    alphas = params["params"]["alphas"]
    epsilon = params["params"]["epsilon"]

    # Initial state
    for i in range(rep):
        singlet(wires=np.arange(active_atoms * i, active_atoms * (1 + i)))

    # Initial encoding
    for i in range(num_qubits):
        equivariant_encoding(
            alphas[i, 0], jnp.asarray(data, dtype=complex)[i % active_atoms, ...], wires=[i]
        )

    # Reuploading model
    for d in range(D):
        qml.Barrier()

        for b in range(B):
            # Even layer
            for i in range(0, num_qubits - 1, 2):
                trainable_layer(weights[i, d + 1, b], wires=[i, (i + 1) % num_qubits])
            # Odd layer
            for i in range(1, num_qubits, 2):
                trainable_layer(weights[i, d + 1, b], wires=[i, (i + 1) % num_qubits])

        # Symmetry-breaking
        if epsilon is not None:
            noise_layer(epsilon[d, :], range(num_qubits))

        # Encoding
        for i in range(num_qubits):
            equivariant_encoding(
                alphas[i, d + 1],
                jnp.asarray(data, dtype=complex)[i % active_atoms, ...],
                wires=[i],
            )

    return qml.expval(Observable)
</code>
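Before training, we can already check the claimed invariances numerically on dummy parameters (a sanity check added for illustration; it mirrors the parameter shapes used for training below and keeps the symmetry-breaking term disabled). Rotating both hydrogen positions by the same rotation, or exchanging the two hydrogens, should leave the model output unchanged:
<code>
# Dummy parameters with the same shapes as used for training below
w_check = np.zeros((num_qubits, D, B))
w_check[0] = 0.5
params_check = {
    "params": {
        "weights": jnp.array(w_check),
        "alphas": jnp.array(np.ones((num_qubits, D + 1))),
        "epsilon": None,  # symmetry breaking disabled, otherwise invariance is (mildly) broken
    }
}

# A random configuration of the two hydrogen atoms (relative to the oxygen)
config_check = jnp.array(np.random.normal(size=(active_atoms, 3)))

# A rotation about the z-axis and the hydrogen exchange
angle = 0.8
R = np.array([[np.cos(angle), -np.sin(angle), 0], [np.sin(angle), np.cos(angle), 0], [0, 0, 1]])
config_rotated = config_check @ jnp.array(R).T
config_swapped = config_check[::-1]

print(vqlm(config_check, params_check))
print(vqlm(config_rotated, params_check))   # should match: rotational invariance
print(vqlm(config_swapped, params_check))   # should match: permutation invariance
</code>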
Simulation for the water molecule
=================================
We start by downloading the
[dataset](https://zenodo.org/records/2634098), which we have prepared
for convenience as a Python ndarray. In the following, we will load,
preprocess and split the data into a training and testing set, following
standard practices.
<code>
# Load the data
energy = np.load("eqnn_force_field_data/Energy.npy")
forces = np.load("eqnn_force_field_data/Forces.npy")
positions = np.load(
"eqnn_force_field_data/Positions.npy"
) # Cartesian coordinates shape = (nbr_sample, nbr_atoms,3)
shape = np.shape(positions)
### Scaling the energy to fit in [-1,1]
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler((-1, 1))
energy = scaler.fit_transform(energy)
forces = forces * scaler.scale_
# Placing the oxygen at the origin
data = np.zeros((shape[0], 2, 3))
data[:, 0, :] = positions[:, 1, :] - positions[:, 0, :]
data[:, 1, :] = positions[:, 2, :] - positions[:, 0, :]
positions = data.copy()
forces = forces[:, 1:, :] # Select only the forces on the hydrogen atoms since the oxygen is fixed
# Splitting in train-test set
indices_train = np.random.choice(np.arange(shape[0]), size=int(0.8 * shape[0]), replace=False)
indices_test = np.setdiff1d(np.arange(shape[0]), indices_train)
E_train, E_test = (energy[indices_train, 0], energy[indices_test, 0])
F_train, F_test = forces[indices_train, ...], forces[indices_test, ...]
data_train, data_test = (
jnp.array(positions[indices_train, ...]),
jnp.array(positions[indices_test, ...]),
)
</code>
We will now define the cost function and how to train the model using
Jax. We will use the mean-squared-error loss function. To speed up the
computation, we use the decorator `@jax.jit` to do just-in-time
compilation for this execution. This means the first execution will
typically take a little longer with the benefit that all following
executions will be significantly faster, see the [Jax docs on
jitting](https://jax.readthedocs.io/en/latest/jax-101/02-jitting.html).
<code>
from jax.example_libraries import optimizers
# We vectorize the model over the data points
vec_vqlm = jax.vmap(vqlm, (0, None), 0)
# Mean-squared-error loss function
@jax.jit
def mse_loss(predictions, targets):
    return jnp.mean(0.5 * (predictions - targets) ** 2)

# Make prediction and compute the loss
@jax.jit
def cost(weights, loss_data):
    data, E_target, F_target = loss_data
    E_pred = vec_vqlm(data, weights)
    l = mse_loss(E_pred, E_target)
    return l

# Perform one training step
@jax.jit
def train_step(step_i, opt_state, loss_data):
    net_params = get_params(opt_state)
    loss, grads = jax.value_and_grad(cost, argnums=0)(net_params, loss_data)
    return loss, opt_update(step_i, grads, opt_state)

# Return prediction and loss at inference times, e.g. for testing
@jax.jit
def inference(loss_data, opt_state):
    data, E_target, F_target = loss_data
    net_params = get_params(opt_state)
    E_pred = vec_vqlm(data, net_params)
    l = mse_loss(E_pred, E_target)
    return E_pred, l
</code>
**Parameter initialization:**
We initialise the model at the identity by setting the initial
parameters to 0, except for the first one, which is chosen uniformly at random. This
ensures that the circuit is shallow at the beginning and has less chance
of suffering from the barren plateau phenomenon. Moreover, we disable
the symmetry-breaking strategy, as it is mainly useful for larger
systems.
<code>
np.random.seed(42)
weights = np.zeros((num_qubits, D, B))
weights[0] = np.random.uniform(0, np.pi, 1)
weights = jnp.array(weights)
# Encoding weights
alphas = jnp.array(np.ones((num_qubits, D + 1)))
# Symmetry-breaking (SB)
np.random.seed(42)
epsilon = jnp.array(np.random.normal(0, 0.001, size=(D, num_qubits)))
epsilon = None # We disable SB for this specific example
epsilon = jax.lax.stop_gradient(epsilon) # comment if we wish to train the SB weights as well.
opt_init, opt_update, get_params = optimizers.adam(1e-2)
net_params = {"params": {"weights": weights, "alphas": alphas, "epsilon": epsilon}}
opt_state = opt_init(net_params)
running_loss = []
</code>
We train our VQLM using stochastic gradient descent.
<code>
num_batches = 5000  # number of optimization steps
batch_size = 256  # number of training data per batch

for ibatch in range(num_batches):
    # select a batch of training points
    batch = np.random.choice(np.arange(np.shape(data_train)[0]), batch_size, replace=False)

    # preparing the data
    loss_data = data_train[batch, ...], E_train[batch, ...], F_train[batch, ...]
    loss_data_test = data_test, E_test, F_test

    # perform one training step, passing the current step index to the optimizer
    loss, opt_state = train_step(ibatch, opt_state, loss_data)

    # computing the test loss and energy predictions
    E_pred, test_loss = inference(loss_data_test, opt_state)
    running_loss.append([float(loss), float(test_loss)])
</code>
Let us inspect the results. The following figure displays the training
(in red) and testing (in blue) loss during the optimization. We observe
that they are on top of each other, meaning that the model is training
and generalising properly to the unseen test set.
<code>
history_loss = np.array(running_loss)
fontsize = 12
plt.figure(figsize=(4, 4))
plt.plot(history_loss[:, 0], "r-", label="training error")
plt.plot(history_loss[:, 1], "b-", label="testing error")
plt.yscale("log")
plt.xlabel("Optimization Steps", fontsize=fontsize)
plt.ylabel("Mean Squared Error", fontsize=fontsize)
plt.legend(fontsize=fontsize)
plt.tight_layout()
plt.show()
</code>
## Energy predictions
We first inspect the quality of the energy predictions. The exact test
energies are shown in black, while the predictions are in red. The plot
shows the exact energies against the predicted ones, so the red
points should lie on the diagonal line. The model is able to make fair predictions,
especially near the equilibrium position. However, a few points in the
higher energy range could be improved, e.g. by using a deeper model as
in the original paper.
<code>
plt.figure(figsize=(4, 4))
plt.title("Energy predictions", fontsize=fontsize)
plt.plot(energy[indices_test], E_pred, "ro", label="Test predictions")
plt.plot(energy[indices_test], energy[indices_test], "k.-", lw=1, label="Exact")
plt.xlabel("Exact energy", fontsize=fontsize)
plt.ylabel("Predicted energy", fontsize=fontsize)
plt.legend(fontsize=fontsize)
plt.tight_layout()
plt.show()
</code>
## Force predictions
As stated at the beginning, we are interested in obtaining the forces to
drive MD simulations. Since we have access to the potential energy
surface, the forces are directly available by taking the gradient
$$F_{i,j} = -\nabla_{\mathcal{X}_{ij}} E(\mathcal{X}, \Theta),$$
where $\mathcal{X}_{ij}$ is the $j$-th coordinate of the $i$-th atom,
and $\Theta$ are the trainable parameters. In our framework, we can
simply do the following. We note that we do not require the mixed terms
of the Jacobian, which is why we select the diagonal part using
`numpy.einsum`.
<code>
opt_params = get_params(opt_state) # Obtain the optimal parameters
gradient_coordinates = jax.jacobian(
vec_vqlm, argnums=0
) # Compute the gradient with respect to the Cartesian coordinates
pred_forces = gradient_coordinates(jnp.array(positions.real), opt_params)
pred_forces = -np.einsum(
"iijk->ijk", np.array(pred_forces)
) # We are only interested in the diagonal part of the Jacobian
fig, axs = plt.subplots(2, 3)
fig.suptitle("Force predictions", fontsize=fontsize)
for k in range(2):
    for l in range(3):
        axs[k, l].plot(forces[indices_test, k, l], forces[indices_test, k, l], "k.-", lw=1)
        axs[k, l].plot(forces[indices_test, k, l], pred_forces[indices_test, k, l], "r.")

axs[0, 0].set_ylabel("Hydrogen 1")
axs[1, 0].set_ylabel("Hydrogen 2")
for i, a in enumerate(["x", "y", "z"]):
    axs[1, i].set_xlabel("{}-axis".format(a))
plt.tight_layout()
plt.show()
</code>
In this series of plots, we can see the predicted forces on the two
Hydrogen atoms in the three $x,$ $y$ and $z$ directions. Again, the
model does a fairly good job. The few points which are not on the
diagonal can be improved using some tricks, such as incorporating the
forces in the loss function.
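A minimal sketch of such a combined loss (an illustration of the idea, not the training objective used in the original paper; the weighting factor `lambda_f` is a hypothetical hyperparameter) could reuse the Jacobian trick from above:
<code>
lambda_f = 0.1  # hypothetical weighting between energy and force errors

grad_E = jax.jacobian(vec_vqlm, argnums=0)  # derivative of the predicted energies w.r.t. the coordinates

@jax.jit
def cost_with_forces(weights, loss_data):
    data, E_target, F_target = loss_data
    E_pred = vec_vqlm(data, weights)
    # Diagonal part of the Jacobian: each sample's energy differentiated w.r.t. its own coordinates
    F_pred = -jnp.einsum("iijk->ijk", grad_E(data, weights))
    return mse_loss(E_pred, E_target) + lambda_f * mse_loss(F_pred, F_target)
</code>
To use it, one would simply call `cost_with_forces` instead of `cost` inside `train_step`, at the price of a noticeably more expensive gradient computation.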
## Conclusions
In this demo, we saw how to implement a symmetry-invariant VQLM to learn
the energy and forces of small chemical systems and trained it for the
specific example of water. The strong points with respect to
symmetry-agnostic techniques are better generalization, more accurate
force predictions, resilience to small data corruption, and reduction in
classical pre- and postprocessing, as supported by the original paper.
Further work could be devoted to studying larger systems by adopting a
more systematic fragmentation as discussed in the original paper. As an
alternative to building symmetry-invariant quantum architectures, the
symmetries could instead be incorporated into the training routine, as
has recently been proposed in the literature. Finally, symmetry-aware models could be used to
design quantum symmetry functions, which in turn could serve as
symmetry-invariant descriptors of the chemical systems within classical
deep learning architectures, which can be easily operated and trained at
scale.
## References
1. Michael M. Bronstein, Joan Bruna, Taco Cohen, Petar Veličković (2021). Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges. [arXiv:2104.13478](https://arxiv.org/abs/2104.13478)
2. Quynh T. Nguyen, Louis Schatzki, Paolo Braccia, Michael Ragone, Patrick J. Coles, Frédéric Sauvage, Martín Larocca and Marco Cerezo (2022). Theory for Equivariant Quantum Neural Networks. [arXiv:2210.08566](https://arxiv.org/abs/2210.08566)
3. Andrea Skolik, Michele Cattelan, Sheir Yarkoni, Thomas Baeck and Vedran Dunjko (2022). Equivariant quantum circuits for learning on weighted graphs. [arXiv:2205.06109](https://arxiv.org/abs/2205.06109)
4. Johannes Jakob Meyer, Marian Mularski, Elies Gil-Fuster, Antonio Anna Mele, Francesco Arzani, Alissa Wilms, Jens Eisert (2023). Exploiting Symmetry in Variational Quantum Machine Learning. PRX Quantum 4, 010328. [arXiv:2205.06217](https://arxiv.org/abs/2205.06217)
5. Isabel Nha Minh Le, Oriel Kiss, Julian Schuhmacher, Ivano Tavernelli, Francesco Tacchino (2023). Symmetry-invariant quantum machine learning force fields. [arXiv:2311.11362](https://arxiv.org/abs/2311.11362)
6. Oriel Kiss, Francesco Tacchino, Sofia Vallecorsa, Ivano Tavernelli (2022). Quantum neural networks force fields generation. Mach. Learn.: Sci. Technol. 3, 035004.
7. David Wierichs, Richard D. P. East, Martín Larocca, M. Cerezo, Nathan Killoran (2023). Symmetric derivatives of parametrized quantum circuits. [arXiv:2312.06752](https://arxiv.org/abs/2312.06752)
|
{
"filename": "Hands_on_8_1.ipynb",
"repository": "osbama/Phys437",
"query": "transformed_from_existing",
"size": 140786,
"sha": ""
}
|
# week_4_group5_v2_1.ipynb
Repository: LaDa26/8dm50group5
# Preliminaries
## Dataset
In this set of exercises we will use the same dataset as from [week 3](week_3.ipynb).
As before, we provide the data already curated in the following two files:
`RNA_expression_curated.csv`: [148 cell lines, 238 genes]
`drug_response_curated.csv`: [148 cell lines, YM155 drug]
The curated data can be read as `pandas` `DataFrame` in the following way:
<code>
import pandas as pd
gene_expression = pd.read_csv("./data/RNA_expression_curated.csv", sep=',', header=0, index_col=0)
drug_response = pd.read_csv("./data/drug_response_curated.csv", sep=',', header=0, index_col=0)
</code>
The goal of the exercises is to train support vector machine (SVM) and random forests classifiers on this dataset and explore and learn about their hyperparameters.
## Tools
The `scikit-learn` library provides the required tools for support vector machines, as well as for random forest algorithms.
<code>
from sklearn import svm
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split
from sklearn.datasets._samples_generator import make_blobs, make_circles
from sklearn.metrics import classification_report
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import precision_score, classification_report
import scipy
import warnings
warnings.filterwarnings('ignore')
</code>
Before proceeding, look up the documentation of the imported functions and read about their basic functionality. Below, we list some important parameters of SVMs and random forests that can be tuned during training.
#### Support Vector Machines (SVM)
`C`: regularization parameter (penalty on the error term).
`kernel`: similarity function ('linear', 'poly', 'sigmoid' or 'rbf')
`gamma`: kernel coefficient for the 'rbf', 'poly' and 'sigmoid' kernels. It can be thought of as the ‘spread’ of the kernel and therefore of the decision region.
`degree`: degree for the 'poly' kernel.
`coef0`: independent term in the 'poly' and 'sigmoid' kernels
#### Random Forests
`n_estimators`: number of trees in our random forest.
`max_depth`: maximum number of levels in each decision tree
`max_features`: maximum number of features to consider per split in an individual tree.
`min_samples_leaf`: minimum number of data points per leaf node
`min_samples_split`: minimum number of data points placed in a node before the node is split
`oob_score`: the out-of-bag (OOB) error is the average error for each observation calculated using predictions from the trees that do not contain that observation in their respective bootstrap sample. Set this parameter to true.
`bootstrap`: method for sampling data points (with or without replacement). Set this parameter to true.
`criterion`: function used to measure the quality of the split (e.g. 'entropy' or 'gini')
# Exercises
## Support vector machines
The `make_blobs` and `make_circles` functions can be used to generate linearly and not linearly separable toy datasets.
<code>
# data generation: linearly separable
X, Y = make_blobs(n_samples=200, centers=2, n_features=2, random_state=1234)
X = pd.DataFrame(X, columns=['x1', 'x2'])
# splitting data into training and test set
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.33, random_state=333)
</code>
The following code illustrates how to train a linear SVM classifier and plot the data points, the separating hyperplane, the support vectors and the margins that pass through them (considering the training data)
<code>
import numpy as np
import matplotlib.pyplot as plt
# build the model
model = svm.SVC(kernel='linear', random_state=33)
model.fit(X_train, Y_train)
# create plot
fig, ax = plt.subplots()
# get colors from qualitative colormap 'Paired'
cmap = plt.cm.get_cmap('Paired')
# plot data points
ax.scatter(X_train.iloc[Y_train == 1, 0], X_train.iloc[Y_train == 1, 1],
c=[cmap(11)], label='1')
ax.scatter(X_train.iloc[Y_train == 0, 0], X_train.iloc[Y_train == 0, 1],
c=[cmap(0)], label='0')
ax.legend(loc='best')
# plot the decision function
# create grid to evaluate model
x1_min, x1_max = X_train.iloc[:, 0].min() - 1, X_train.iloc[:, 0].max() + 1
x2_min, x2_max = X_train.iloc[:, 1].min() - 1, X_train.iloc[:, 1].max() + 1
XX, YY = np.meshgrid(np.arange(x1_min, x1_max, .2),
np.arange(x2_min, x2_max, .2))
xy = np.vstack([XX.ravel(), YY.ravel()]).T
Z = model.decision_function(xy).reshape(XX.shape)
# plot decision boundary and margins
ax.contour(XX, YY, Z, colors='k', levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
# Establish the class for each point in the contour
Z = model.predict(xy).reshape(XX.shape)
# Visualization of the contour
ax.contourf(XX, YY, Z, cmap='bwr', alpha=0.3)
# plot support vectors, which are responsible for building the margins
ax.scatter(model.support_vectors_[:, 0], model.support_vectors_[:, 1], s=100,
linewidth=1, facecolors='none', edgecolors='k', marker='s')
ax.axis([x1_min, x1_max, x2_min, x2_max])
plt.axis('tight')
plt.title('Linear kernel SVM')
plt.show()
</code>
Train a radial basis function (RBF) SVM classifier with `gamma=0.5` and plot the results in the same way.
<code>
# data generation: not linearly separable
X, Y = make_circles(n_samples=200, noise=0.05, random_state=1234)
X = pd.DataFrame(X, columns=['x1', 'x2'])
# splitting data into training and test set
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.33, random_state=333)
</code>
<code>
# build the model
model = svm.SVC(kernel='rbf', gamma=0.5, random_state=33)
model.fit(X_train, Y_train)
# create plot
fig, ax = plt.subplots()
# get colors from qualitative colormap 'Paired'
cmap = plt.cm.get_cmap('Paired')
# plot data points
ax.scatter(X_train.iloc[Y_train == 1, 0], X_train.iloc[Y_train == 1, 1],
c=[cmap(11)], label='1')
ax.scatter(X_train.iloc[Y_train == 0, 0], X_train.iloc[Y_train == 0, 1],
c=[cmap(0)], label='0')
ax.legend(loc='best')
# plot the decision function
# create grid to evaluate model
x1_min, x1_max = X_train.iloc[:, 0].min() - 0.3, X_train.iloc[:, 0].max() + 0.3
x2_min, x2_max = X_train.iloc[:, 1].min() - 0.3, X_train.iloc[:, 1].max() + 0.3
XX, YY = np.meshgrid(np.arange(x1_min, x1_max, .2),
np.arange(x2_min, x2_max, .2))
xy = np.vstack([XX.ravel(), YY.ravel()]).T
Z = model.decision_function(xy).reshape(XX.shape)
# plot decision boundary and margins
ax.contour(XX, YY, Z, colors='k', levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
# Establish the class for each point in the contour
Z = model.predict(xy).reshape(XX.shape)
# Visualization of the contour
ax.contourf(XX, YY, Z, cmap='bwr', alpha=0.3)
# plot support vectors, which are responsible for building the margins
ax.scatter(model.support_vectors_[:, 0], model.support_vectors_[:, 1], s=100,
linewidth=1, facecolors='none', edgecolors='k', marker='s')
ax.axis([x1_min, x1_max, x2_min, x2_max])
plt.axis('tight')
plt.title('Radial kernel SVM')
plt.show()
</code>
<p><font color='#770a0a'>When should a RBF kernel be used over a linear kernel? Motivate your answer.</font></p>
An RBF kernel should be used when the data is not linearly separable, i.e., when the boundary separating the different classes cannot be described by a straight line (or, more generally, a hyperplane) in the original feature space.
<p><font color='#770a0a'>Do we need to normalize the data before using a kernel function? Motivate your answer.
</font></p>
SVM works by maximizing the distance between support vectors and a separating plane. In the case that the features are not scaled, features with large values will dominate features with low values when computing the distance. Therefore, feature-scaling ensures that all features influence the distance metric to the same extent.
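One convenient way to guarantee this in practice (a sketch using the `Pipeline` and `StandardScaler` imported above, not part of the original exercise) is to chain the scaler and the SVM, so that during cross-validation the scaler is re-fitted on each training fold only:
<code>
# Scaling inside a Pipeline avoids leaking test-fold statistics into the scaler
svm_pipeline = Pipeline([
    ("scaler", StandardScaler()),
    ("svc", svm.SVC(kernel="rbf")),
])
# Hyperparameters of pipeline steps are addressed with the "<step>__<param>" syntax
pipeline_grid = {"svc__C": [0.1, 1, 10], "svc__gamma": [0.01, 0.1, 1]}
</code>
This pipeline (together with `pipeline_grid`) can then be passed to `GridSearchCV` in place of the bare `svm.SVC()` used below.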
## Predicting drug response on cell lines from gene expression data with SVMs
Explore the hyper-parameter space of an SVM classifier with cross-validation for the Genomics of Drug Sensitivity in Cancer (GDSC) dataset. The `GridSearchCV` function can be used to specify a grid of parameter values with the `param_grid` parameter.
Calculate the precision of your predictions, and compare your calculations with the results of `classification_report`, which displays many classification metrics.
<code>
# Define X (features) and y (target)
X = gene_expression
y = drug_response
# Based on the z-score being lower or higher than 0 the drug_response is either classified
# as sensitive (label 0) or resistant (label 1) respectively
# The target labels (0) or (1) are computed by calculating these z-scores over the whole dataset
# Because we want to predict those labels that are normalized in that way
drug_response = scipy.stats.zscore(drug_response).ravel()
y_class = (drug_response > 0).astype(int)
# We split in train and test set
X_train, X_test, y_train, y_test = train_test_split(X, y_class, test_size=0.25, random_state=42)
# We scale the variables in the training dataset and perform this same transformation on the test set
# As mentioned earlier, scaling the features is very important when using a kernel function
# We do it separately for training and test set to avoid contamination of the train set with information of the test set
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# Define the model
model = svm.SVC()
# For SVM, five hyperparameters are the most important to be tuned:
# the regularization strength (C), the kernel type (e.g. linear, RBF, sigmoid etc.) and the kernel coefficient for rbf,
# sigmoid and poly (Gamma)
# We can also try different values for the degree of the polynomial kernel and the coefficient (= independent term for the
# poly and sigmoid kernels)
# We define the parameter grids for these three hyperparameters
param_grid = {'C': [0.1,1, 10, 100], 'gamma': [1,0.1,0.01,0.001],'kernel': ['rbf', 'poly', 'sigmoid'], 'degree':[1,2,3,4], 'coef0':np.logspace(-1,1,4)}
# We perform the grid search
gscv = GridSearchCV(model,param_grid,refit=True)
gscv.fit(X_train,y_train)
print(gscv.best_estimator_)
# We predict on the independent test set and calculate model performance
gscv_pred = gscv.predict(X_test)
# Precision = tp / (tp+fp)
tp = sum(gscv_pred * y_test)
fp = np.sum(np.logical_and(gscv_pred == 1, y_test == 0))
precision = tp / (tp+fp)
# Print classification report and calculated precision score
print(classification_report(y_test,gscv_pred))
print('Calculated precision is ' + str(precision))
# Evaluate best parameters
best_kernel = gscv.best_params_['kernel']
best_degree = gscv.best_params_['degree']
best_gamma = gscv.best_params_['gamma']
best_C = gscv.best_params_['C']
best_coef0 = gscv.best_params_['coef0']
print("Optimal kernel: {}".format(best_kernel))
print("Optimal degree: {}".format(best_degree))
print("Optimal gamma: {}".format(best_gamma))
print("Optimal C: {}".format(best_C))
print("Optimal coef0: {}".format(best_coef0))
</code>
<p><font color='#770a0a'>The calculated precision of approximately 0.64 corresponds to the precision reported for label 1 in the classification report. The precision corresponding to label 0 equals 0.70. The number of test samples with true label 1 is lower (16) than the number of test samples with label 0 (21), so class 0 is slightly overrepresented in the test set. For label 1 (second row of the classification report) the true positives are all samples that are predicted as resistant (= 1) and actually are resistant in the test set, while the false positives are all samples that are predicted as resistant when in fact they are sensitive (= 0). For the first row of the classification report (precision for label 0) this is the other way around; our calculation above computes precision with respect to label 1.
The optimal degree equals 1, but this parameter is not relevant for the final model, as a sigmoid kernel (and not the poly kernel) was chosen as the optimal kernel. For C the value 1 is chosen, which corresponds to the default value of C in the SVM. The value of C is inversely proportional to the strength of the regularization, which in this case is the L2 norm. Hence we have a somewhat high regularization in our final model.
Finally, for gamma, the kernel coefficient of the sigmoid kernel, and coef0, the independent term in the sigmoid kernel, the values 0.01 and 0.46 were chosen by the grid search, respectively.
</font></p>
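The per-class precision values discussed above can also be obtained directly with the `precision_score` function imported earlier (a small convenience check, added for illustration):
<code>
# Precision with respect to each label; pos_label selects which class counts as "positive"
print("Precision for label 1:", precision_score(y_test, gscv_pred, pos_label=1))
print("Precision for label 0:", precision_score(y_test, gscv_pred, pos_label=0))
</code>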
## Random forests
Follow the same steps as for SVM. Compare the two algorithms and report which one has better performance.
<code>
model_rf = RandomForestClassifier(bootstrap=True, oob_score=True)
n_estimators = [50, 100, 150, 200, 1000]
depths = np.linspace(3, 19, 5)
features = ['sqrt', 'log2']
criterions = ['gini', 'entropy']
parms = {'n_estimators': n_estimators,
'max_depth': depths,
'max_features': features,
'criterion': criterions
}
# Perform grid search over the random forest hyperparameters
gscv = GridSearchCV(model_rf, parms, refit=True)
gscv.fit(X_train,y_train)
# We find our best model
best_model = gscv.best_estimator_
print(best_model)
# We predict on the test set and calculate model performance
gscv_pred = gscv.predict(X_test)
# Precision = tp / (tp+fp)
tp = sum(gscv_pred * y_test)
fp = np.sum(np.logical_and(gscv_pred == 1, y_test == 0))
precision = tp / (tp+fp)
# Print confusion matrix, classification report and calculated precision score
report = classification_report(y_test, gscv_pred)
print("Classification report:\n",report)
# Get the best hyperparameter values
best_n_est = gscv.best_params_['n_estimators']
best_depth = gscv.best_params_['max_depth']
best_feature = gscv.best_params_['max_features']
best_criterion = gscv.best_params_['criterion']
print("Optimal # estimators: {}".format(best_n_est))
print("Optimal max depths: {}".format(best_depth))
print("Optimal criterion: {}".format(best_criterion))
print("Optimal features: {}".format(best_feature))
</code>
<p><font color='#770a0a'>
First of all, the hyper-parameters selected by the GridSearch are 50 trees in the random forest and the maximum depth is 11. The maximum number of features considered per split in an individual tree is the log2 of the total number of features in the dataset.
The SVM and RF models have a precision for label 1 of 0.64 and 0.75, respectively. For the precision of label 0, the SVM (0.70) performs better than the RF model (0.66). Neither model clearly outperforms the other, and no conclusion can be drawn from the f1-scores either (0.73 and 0.60 for the SVM vs. 0.76 and 0.50 for the random forest). This might be caused by the test dataset being rather small and by the asymmetrical distribution of the labels in the independent test set (label 0 is overrepresented).
</font></p>
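Since the forest above was fitted with `oob_score=True` and `bootstrap=True`, the out-of-bag estimate described in the parameter list can be read off directly from the refitted best model (a small addition for illustration):
<code>
# Out-of-bag accuracy: each sample is scored only by the trees that did not see it during bootstrapping
print("OOB score of the best random forest:", best_model.oob_score_)
</code>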
Random forest classifiers also allow us to perform feature selection. Evaluate the importance of the features by extracting the top 50 most informative features. A bar plot (`plt.bar()`) can be a useful tool to visualize this.
<code>
feature_importances = pd.DataFrame(best_model.feature_importances_, index = X.columns, columns=['importance']).sort_values('importance', ascending=False)
ax = feature_importances.iloc[0:50].plot.bar(figsize=(18,5), rot=60, title='Feature importance of the top 50 genes')
ax.set_xlabel("Genes", fontsize=14)
ax.set_ylabel("Relative importance", fontsize=14)
</code>
<p><font color='#770a0a'>As can be seen in the figure above, the feature ABCB1 is the most important, with a relative importance of 0.03; it is twice as important as the second feature, RAMP1 (relative importance 0.015). We must note, however, that a relative importance of approximately 0.03 for ABCB1 is in itself not spectacularly high. After the first two features, the relative importance changes much less from feature to feature. Since the features are sorted by importance, we can also see that roughly the top 10 features in this plot are about twice as important as the last 10 shown (see the sketch below).</font></p>
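As a quick check of that last observation, one can compare the mean importance of the first and last ten entries of the sorted top-50 table; a minimal sketch reusing the `feature_importances` dataframe built above:
<code>
top10_mean = feature_importances['importance'].iloc[0:10].mean()
last10_mean = feature_importances['importance'].iloc[40:50].mean()
print(f"Mean importance, top 10 features: {top10_mean:.4f}")
print(f"Mean importance, features 41-50:  {last10_mean:.4f}")
print(f"Ratio: {top10_mean / last10_mean:.2f}")
</code>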
## Biomedical applications
Driven by technological advances, there has recently been a dramatic increase in availability of biomedical data. Machine learning approaches are well suited to take advantage of this data and have been widely applied to many areas of biology.
Examples of these applications are genome annotation, biomarker identification, systems biology, genome data analysis, protein function prediction, protein structure prediction, protein localization prediction, identification of protein interactions and drug discovery.
SVM and RF methods are among the most popular machine learning methods applied in bioinformatics or computational biology.
Perform a literature search and find a biomedical study in which SVM or RF is applied to obtain certain insights. <p><font color='#770a0a'>Explain the motivation behind using that specific algorithm in the study.
</font></p>
### Campbell et al. (2020) Pharmacologically informed machine learning approach for identifying pathological states of unconsciousness via resting-state fMRI.
The goal of this paper was to evaluate whether three different machine learning models are able to make a binary distinction between conscious wakefulness and anesthetic-induced unconsciousness. Furthermore, the ability to identify pathologically induced unconsciousness based on this binary distinction was investigated.
One type of machine learning model used in this paper is Extra Trees (ET), a variant of Random Forest that introduces additional randomness in the choice of split points.
This model was used because it is popular and has been successful in multivariate neuroimaging applications. The inherent nature of Random Forests, in which predictions are aggregated by bagging, helps to minimize model variance and overfitting. Furthermore, ET offers good computational efficiency compared to, for example, a deep learning model, is easy to construct and is generally reliable. Finally, decision-tree based models in general have the ability to perform feature importance analysis, which has the added advantage of helping to inform feature selection for future studies (a small sketch of such a model is shown below).
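For reference, scikit-learn provides Extra Trees as `ExtraTreesClassifier`, which can be used much like the random forest above. A minimal sketch on synthetic data (the data and settings here are purely illustrative and unrelated to the study):
<code>
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Illustrative synthetic data (not the fMRI data from the study)
X_demo, y_demo = make_classification(n_samples=200, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X_demo, y_demo, random_state=0)

# Extra Trees: like a random forest, but split thresholds are drawn at random
et = ExtraTreesClassifier(n_estimators=100, random_state=0)
et.fit(Xtr, ytr)
print("Test accuracy:", et.score(Xte, yte))
print("Top feature importances:", sorted(et.feature_importances_, reverse=True)[:5])
</code>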
|
{
"filename": "week_4_group5_v2_1.ipynb",
"repository": "LaDa26/8dm50group5",
"query": "transformed_from_existing",
"size": 156752,
"sha": ""
}
|
# voila_app_voila_app.ipynb
Repository: NIVANorge/watexr
<code>
%matplotlib inline
import datetime as dt
import glob
import os
import warnings
import ipywidgets as widgets
import matplotlib.pyplot as plt
import pandas as pd
from IPython.display import Image, Markdown, clear_output, display
import app_utils as au
warnings.simplefilter("ignore")
</code>
<code>
def display_forecast(b):
with output:
clear_output()
# Get user options
year = years.value
# Make forecast components and PDF
au.make_forecast(year)
# Display results in app
today = dt.datetime.today()
today = today.strftime("%B %d. %Y")
display(Markdown(f"## Forecast issued {today}"))
display(
Markdown(
"Lake water quality forecasts are for the **western basin of Lake Vansjø "
"(Vanemfjorden)**, and aim to predict ecological status according to the Water "
"Framework Directive (WFD). Four variables are predicted: concentrations of "
"**total phosphorus**, **chlorophyll-a** & lake **colour**, and biovolume of **cyanobacteria**. "
"A [guide to interpreting these forecasts](https://github.com/icra/WATExR/blob/master/Norway_Morsa/guidance_docs/GuidanceDoc_InterpretingLakeForecast.pdf) "
"accompanies this bulletin, and includes a short description of the models used "
" to produce the forecasts."
)
)
display(
Markdown("Colour codes used in the forecasts are shown in the table below.")
)
display(Image("./images/wq_confidence_table.png", width=300))
display(Markdown("_______________________"))
display(
Markdown(
f"## Lake chemistry and ecology forecast for May – October, {year}"
)
)
display(Markdown("### Total phosphorus (growing season mean)"))
display(Image("./images/tp_forecast_summary.png", width=800))
display(
Markdown(
"*Click [here](https://github.com/icra/WATExR/blob/master/Norway_Morsa/BayesianNetwork/Hindcast_stats_plots/Timeseries_gof/timeseries_operationalModel_TP.png) "
"for a plot summarising historic skill"
)
)
display(Markdown("### Chlorophyll-a (growing season mean)"))
display(Image("./images/chla_forecast_summary.png", width=800))
display(
Markdown(
"*Click [here](https://github.com/icra/WATExR/blob/master/Norway_Morsa/BayesianNetwork/Hindcast_stats_plots/Timeseries_gof/timeseries_operationalModel_chla.png) "
"for a plot summarising historic skill"
)
)
display(Markdown("### Cyanobacteria (growing season maximum)"))
display(Image("./images/cyano_forecast_summary.png", width=800))
display(
Markdown(
"*Click [here](https://github.com/icra/WATExR/blob/master/Norway_Morsa/BayesianNetwork/Hindcast_stats_plots/Timeseries_gof/timeseries_operationalModel_cyano.png) "
"for a plot summarising historic skill"
)
)
display(Markdown("### Colour (growing season mean)"))
display(Image("./images/colour_forecast_summary.png", width=800))
display(
Markdown(
"*Click [here](https://github.com/icra/WATExR/blob/master/Norway_Morsa/BayesianNetwork/Hindcast_stats_plots/Timeseries_gof/timeseries_operationalModel_colour.png) "
"for a plot summarising historic skill"
)
)
display(
Markdown(
"¹**RMSE:** Root mean square error. An indication of the likely size of error between "
"forecasted and observed values"
)
)
display(
Markdown(
"²**Classification error:** percent of time the model predicted the class "
"incorrectly during the historic assessment period"
)
)
display(
Markdown(
"³**MCC:** Matthews' correlation coefficient. A value of 1 is a perfect fit to "
"historic observations, 0 no better than a random model"
)
)
display(Markdown(f"_______________________"))
display(
Markdown(
"**Disclaimer:** Although water quality models have generally good historic "
"skill, if climatic and/or management conditions change relative to the "
"historic period, forecasts may be inaccurate even when the confidence "
"level is reported as ‘High’. Data used to assess historic skill are from the "
"main body of Vanemfjorden and do not necessarily reflect conditions at the "
"more popular bathing beaches. Historically, toxic algal blooms occurred more "
"frequently at these bathing spots and are therefore likely to be "
"underpredicted by these forecasts."
)
)
</code>
<code>
display(Image("./images/watexr_niva_logo.png", width=800))
</code>
# WATExR: Seasonal forecasts
Forecasts are issued by [NIVA](https://www.niva.no/) as part of the [ERA4CS](http://www.jpi-climate.eu/ERA4CS)-funded [WATExR](https://watexr.eu/) project.
**This is a prototype tool**. Forecasts are currently only available for the historic period, but we are looking to operationalize it in the future.
## Select year of interest
Specify your **year** of interest using the drop-down list below and click the **Start** button.
<code>
style = {"description_width": "initial"}
#cur_year = dt.datetime.today().year
years = widgets.Dropdown(
options=range(2001, 2022),
value=2021,
description="Select year:",
disabled=False,
)
start = widgets.Button(
description="Start", disabled=False, style={"font_weight": "bold"}
)
output = widgets.Output()
display(years, start, output)
start.on_click(display_forecast)
</code>
|
{
"filename": "voila_app_voila_app.ipynb",
"repository": "NIVANorge/watexr",
"query": "transformed_from_existing",
"size": 9122,
"sha": ""
}
|
# IF_1.ipynb
Repository: Mark-Kramer/Case-Studies-Python
# The integrate and fire neuron
In this notebook we will use Python to simulate the integrate and fire (I&F) neuron model. We'll investigate, in particular, how the spiking activity varies as we adjust the input current $I$.
# Background information about the I&F model
Here's a video that describes a slightly more complicated model, the *leaky* integrate and fire model.
<code>
from IPython.lib.display import VimeoVideo
VimeoVideo('140084447')
</code>
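The model simulated in this notebook has no leak term; for comparison, the leaky I&F update from the video can be written in the same discrete form. A minimal sketch with arbitrary illustrative parameter values (not used elsewhere in this notebook):
<code>
from pylab import *

# Leaky I&F: dV/dt = -(V - E_L)/tau + I/C, i.e. the scheme used below plus a decay ("leak") term
I, C, tau, E_L = 1.0, 1.0, 2.0, 0.0      # illustrative values only
Vth, Vreset, dt = 1.0, 0.0, 0.01
V = zeros([1000, 1])
V[0] = 0.2
for k in range(999):
    V[k+1] = V[k] + dt * (-(V[k] - E_L) / tau + I / C)
    if V[k+1] > Vth:                     # same threshold-and-reset rule as later in this notebook
        V[k+1] = Vreset
</code>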
Here are some additional interesting videos and references:
- [Lecture by Prof. Gerstner](http://klewel.com/conferences/epfl-neural-networks/index.php?talkID=1)
## Preliminaries
Before beginning, let's load in the Python packages we'll need:
<code>
from pylab import *
%matplotlib inline
rcParams['figure.figsize']=(12,3) # Change the default figure size
</code>
## Part 1: Numerical solutions - Introduction
How do we compute a numerical solution to the integrate and fire model?
The basic idea is to rearrange the differential equation to get $V(t+1)$ on
the left hand side, and $V(t)$ on the right hand side. Then, if we know
what's happening at time $t$, we can solve for what's happening at time $t+1$.
For example, consider the differential equation:
$$
\dfrac{dV}{dt} = \dfrac{I}{C}
$$
In words, we can think of:
$dV$ as the "change in voltage V",
$dt$ as the "change in time t".
Let's consider the case that we record the voltage $V$ in discrete time steps. So
we observe:
$V[0], V[1], V[2], \ldots$
at times:
$dt, \, 2*dt, \, 3*dt, \ldots$
where $dt$ is the time between our samples of $V$.
We can now write the "change in voltage V" as:
$$
dV = V(t+1) - V(t)
$$
Notice that the change in voltage is the difference in V between two
sequential time samples. Now, let's rewrite $\dfrac{dV}{dt}$ as,
$$
\dfrac{dV}{dt} = \dfrac{ V(t+1) - V(t) }{ dt }
$$
where we've replaced $dV$. Now, let's substitute this expression into the equation at the top of this file:
$$
\dfrac{ V(t+1) - V(t) }{ dt } = \dfrac{I}{C}.
$$
Solving this equation for $V(t+1)$ you'll find that:
$$
V(t+1) = V(t) + dt*(I/C)
$$
Notice that, in this expression, we use our current value of the voltage V(t) and the model (I/C) to determine the next value of the voltage V(t+1).
Now, let's program this equation in Python. First, let's set the values
for the parameters $I$ and $C$.
<code>
C=1.0
I=1.0
</code>
We also need to set the value for $dt$. This defines the time step for our
model. We must choose it small enough so that we don't miss anything
interesting. We'll choose:
<code>
dt=0.01
</code>
Let's assume the units of time are seconds. So, we step forward in time by $0.01$ s.
The right hand side of our equation is nearly defined, but we're still missing one thing, $V(t)$.
<div class="question">
**Q:** What value do we assign to $V(t)$?
**A:** We don't know --- that's why we're running the simulation in the first place!
</div>
So here's an easier question: what *initial* value do we assign to $V(t)$?
To start, we'll create an array of zeros to hold our results for $V$:
<code>
V = zeros([1000,1])
V.shape
</code>
This array `V` consists of 1000 rows and 1 column. We can think of each row entry as corresponding to a discrete step in time. Our goal is to fill-in the values of `V` (i.e., step forward in time), in a way consistent with our model.
Let's choose an initial value for `V` of 0.2, which in our simple model we'll assume represents the rest state.
<code>
V[0]=0.2
</code>
<div class="question">
**Q:** Given the initial state `V[0]=0.2`, calculate `V[1]`. Then calculate `V[2]`.
</div>
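One way to check your hand calculation is to apply the update rule twice in code; a minimal sketch reusing `V`, `dt`, `I` and `C` from the cells above:
<code>
# Apply V(t+1) = V(t) + dt*(I/C) twice, starting from V[0] = 0.2
V[1] = V[0] + dt*(I/C)
V[2] = V[1] + dt*(I/C)
print(V[0], V[1], V[2])   # expect approximately 0.2, 0.21, 0.22
</code>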
After the two calculations above, we've moved forward two time steps into
the future, from $t=0$ s to $t=0.01$ s, and then from $t=0.01$ s to $t=0.02$ s. But what
if we want to know $V$ at $t=10$ s? Then, this iteration-by-hand procedure becomes
much too boring and error-prone. So, what do we do? Let's make the
computer do it ...
## Part 2: Numerical solutions - implementation
Let's computerize this iteration-by-hand procedure to find `V[999]`. To do so, we'll use a [for-loop](https://wiki.python.org/moin/ForLoop). Here's what it looks like:
<code>
for k in range(1,999):
V[k+1] = V[k] + dt*(I/C)
</code>
<div class="question">
**Q:** Does this loop make sense? Describe what's happening here.
</div>
<div class="question">
**Q:** Why does the `range` command end at `999`?
</div>
Execute this for-loop and examine the results in vector `V`. To do so, let's plot `V`:
<code>
figure()
plot(V);
</code>
<div class="question">
**Q:** What happens to the voltage after 1000 steps?
</div>
This plot is informative, but not great. Really, we'd like to plot the
voltage as a function of *time*, not steps or indices. To do so, we
need to define a time axis:
<code>
t = arange(0,len(V))*dt
</code>
<div class="question">
**Q:** What's happening in the command above? Does it make sense? (If not, trying printing or plotting `t`.)
</div>
Now, with *time* defined, let's redo the plot of the voltage with the axes labeled appropriately.
<code>
figure()
plot(t,V)
xlabel('Time [s]');
ylabel('V');
</code>
Finally, let's put it all together . . .
## Part 3: I&F CODE (version 1)
In Parts 1 and 2, we constructed parts of the I&F model in bits-and-pieces.
Let's now collect all of this code, compute a numerical solution to
the I&F model, and plot the results (with appropriate axes).
First, let's clear all the variables:
<code>
%reset
</code>
<code>
from pylab import *
%matplotlib inline
rcParams['figure.figsize']=(12,3)# Change the default figure size
I=1 #Set the parameter I.
C=1 #Set the parameter C.
dt=0.01 #Set the timestep.
V = zeros([1000,1]) #Initialize V.
V[0]=0.2; #Set the initial value of V.
for k in range(1,999): #March forward in time,
V[k+1] = V[k] + dt*(I/C) #... updating V along the way.
t = arange(0,len(V))*dt #Define the time axis.
figure() #Plot the results.
plot(t,V)
xlabel('Time [s]')
ylabel('Voltage [mV]');
</code>
<div class="question">
**Q:** Adjust the parameter `I`. What happens to `V` if `I=0`? Can you set `I` so that `V` > 20 within 10 s?
</div>
## Part 4: Voltage threshold
Notice, our model is missing something important: **the reset**.
Without
the reset, the voltage increases forever (if $I>0$). Now, let's update
our model to include the reset. To do so, we'll need to add two things
to our code.
- First, we'll define the voltage threshold `Vth`, and
reset voltage `Vreset`.
- Second, we'll check to see if `V` exceeds
`Vth` using an [if-statement](https://docs.python.org/3/tutorial/controlflow.html); if it does, then we'll set `V` equal to
`Vreset`.
Here's what we'll add to the code:
<code>
Vth = 1; #Define the voltage threshold.
Vreset = 0; #Define the reset voltage.
for k in range(1,999): #March forward in time,
V[k+1] = V[k] + dt*(I/C) #Update the voltage,
if V[k+1] > Vth: #... and check if the voltage exceeds the threshold.
V[k+1] = Vreset
</code>
## Part 5: I&F CODE (version 2)
Now, let's put it all together to make a complete I&F model (with a threshold and reset), simulate it, and plot the result.
<code>
%reset
</code>
<code>
from pylab import *
%matplotlib inline
rcParams['figure.figsize']=(12,3) # Change the default figure size
I=1 #Set the parameter I.
C=1 #Set the parameter C.
Vth = 1; #Define the voltage threshold.
Vreset = 0; #Define the reset voltage.
dt=0.01 #Set the timestep.
V = zeros([1000,1]) #Initialize V.
V[0]=0.2; #Set the initial condition.
for k in range(1,999): #March forward in time,
V[k+1] = V[k] + dt*(I/C) #Update the voltage,
if V[k+1] > Vth: #... and check if the voltage exceeds the threshold.
V[k+1] = Vreset
t = arange(0,len(V))*dt #Define the time axis.
figure() #Plot the results.
plot(t,V)
xlabel('Time [s]')
ylabel('Voltage [mV]');
</code>
<div class="question">
**Q:** Adjust the parameter `I`. What happens to `V` if `I=10`? If `I=100`?
</div>
<div class="question">
**Q:** Adjust the parameter `C`. What happens to `V` if `C=0.1`? If `C=10`?
</div>
<div class="question">
**Q:** What is "spiking" in this I&F model?
</div>
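To make the answer concrete, one can count the reset events in the simulated voltage trace; a minimal sketch reusing `V`, `dt`, `Vth` and `Vreset` from version 2 above:
<code>
# A "spike" in this model is the instant V exceeds Vth and is reset to Vreset.
# The reset shows up as a sudden large drop in V, which we can count.
spike_steps = [k for k in range(1, len(V)) if V[k-1] - V[k] > (Vth - Vreset) / 2]
print("Number of spikes:", len(spike_steps))
print("Approximate firing rate [spikes/s]:", len(spike_steps) / (len(V) * dt))
</code>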
<a id="donate"></a>
## Donate
If you enjoy Case-Studies-Python, and would like to share your enjoyment with us, sponsor our coffee consumption <a href="https://www.paypal.com/donate/?hosted_button_id=DL8P5ZGS9962U">here</a>.
|
{
"filename": "IF_1.ipynb",
"repository": "Mark-Kramer/Case-Studies-Python",
"query": "transformed_from_existing",
"size": 90526,
"sha": ""
}
|
# visium_1.ipynb
Repository: vitessce/paper-figures
<code>
# Cell type annotation with celltypist
from anndata import read_zarr
import celltypist
from celltypist import models
import scanpy as sc
from os.path import join
import numpy as np
from vitessce.data_utils import (
VAR_CHUNK_SIZE,
)
</code>
<code>
!pwd
</code>
<code>
BASE_DIR = join("..", "..", "hubmap-publication-page", "data")
VIGNETTE_DIR = join("..", "..", "hubmap-publication-page", "vignettes", "vignette_02")
</code>
<code>
PROCESSED_DIR = join("..", "data", "processed")
</code>
<code>
!cp -r {PROCESSED_DIR}/human_lymph_node_10x_visium.h5ad.zarr {BASE_DIR}/human_lymph_node_10x_visium.h5ad.zarr
!cp -r {PROCESSED_DIR}/human_lymph_node_10x_visium.ome.zarr {BASE_DIR}/human_lymph_node_10x_visium.ome.zarr
</code>
<code>
adata = read_zarr(join(BASE_DIR, "human_lymph_node_10x_visium.h5ad.zarr"))
adata
</code>
<code>
# Scale/log-normalize as required by CellTypist
</code>
<code>
adata.X = np.expm1(adata.X)
sc.pp.normalize_total(adata, inplace=True, target_sum=1e4)
sc.pp.log1p(adata)
</code>
<code>
# Sanity check: after normalize_total(target_sum=1e4) + log1p, expm1(X) should sum to ~1e4 per cell
np.expm1(adata.X).sum(axis = 1)
</code>
<code>
#Download a list of models, for example, `Immune_All_Low.pkl` and `Immune_All_High.pkl`.
models.download_models(model = ['Immune_All_Low.pkl', 'Immune_All_High.pkl'])
</code>
<code>
low_predictions = celltypist.annotate(adata, model = 'Immune_All_Low.pkl', majority_voting = True)
adata = low_predictions.to_adata(prefix="low_")
high_predictions = celltypist.annotate(adata, model = 'Immune_All_High.pkl', majority_voting = True)
adata = high_predictions.to_adata(prefix="high_")
</code>
<code>
predicted_adata = adata
</code>
<code>
predicted_adata
</code>
<code>
predicted_adata.write_zarr(join(BASE_DIR, "human_lymph_node_10x_visium_with_cell_types.h5ad.zarr"), chunks=(adata.shape[0], VAR_CHUNK_SIZE))
</code>
<code>
from os.path import join
from vitessce import (
VitessceConfig,
ViewType as vt,
CoordinationType as ct,
FileType as ft,
AnnDataWrapper,
OmeZarrWrapper,
hconcat,
vconcat,
BASE_URL_PLACEHOLDER,
)
import json
</code>
<code>
vc = VitessceConfig(schema_version="1.0.15", name='Visium data', description='', base_dir=BASE_DIR)
</code>
<code>
img_zarr = join("human_lymph_node_10x_visium.ome.zarr")
adata_zarr = join("human_lymph_node_10x_visium_with_cell_types.h5ad.zarr")
</code>
<code>
dataset = vc.add_dataset(name='Human lymph node').add_object(AnnDataWrapper(
adata_path=adata_zarr,
obs_locations_path="obsm/spatial",
obs_segmentations_path="obsm/segmentations",
obs_embedding_paths=["obsm/X_umap", "obsm/X_pca"],
obs_embedding_names=["UMAP", "PCA"],
obs_set_paths=["obs/clusters", ["obs/high_majority_voting", "obs/low_majority_voting"]],
obs_set_names=["Leiden Cluster", "Predicted Cell Type"],
obs_feature_matrix_path="X",
initial_feature_filter_path="var/highly_variable",
# To be explicit that the features represent genes and gene expression, we specify that here.
coordination_values={
"obsType": "spot"
}
)).add_object(OmeZarrWrapper(
# We next run add_object with OmeZarrWrapper to add the OME-Zarr H&E image alongside the AnnData object.
img_path=img_zarr,
))
</code>
<code>
spatial_by_cellset = vc.add_view(vt.SPATIAL, dataset=dataset, x=0, y=0, w=4, h=6)
spatial_by_expression_a = vc.add_view(vt.SPATIAL, dataset=dataset, x=4, y=0, w=4, h=6)
spatial_by_expression_b = vc.add_view(vt.SPATIAL, dataset=dataset, x=8, y=0, w=4, h=6)
lc = vc.add_view(vt.LAYER_CONTROLLER, dataset=dataset, x=0, y=6, w=4, h=6).set_props(disableChannelsIfRgbDetected=True)
cell_sets = vc.add_view(vt.OBS_SETS, dataset=dataset, x=4, y=6, w=4, h=6)
feature_list = vc.add_view(vt.FEATURE_LIST, dataset=dataset, x=8, y=6, w=4, h=6)
all_views = [
spatial_by_cellset,
spatial_by_expression_a,
spatial_by_expression_b,
lc,
cell_sets,
feature_list,
]
segmentation_layer = {
"radius": 65, "stroked": True, "visible": True, "opacity": 1
}
image_layer = [
{
"type": "raster",
"index": 0,
"colormap": None,
"transparentColor": None,
"opacity": 1,
"domainType": "Min/Max",
"channels": [
{
"selection": { "c": 0 },
"color": [
255,
0,
0
],
"visible": True,
"slider": [
0,
255
]
},
{
"selection": { "c": 1 },
"color": [
0,
255,
0
],
"visible": True,
"slider": [
0,
255
]
},
{
"selection": { "c": 2 },
"color": [
0,
0,
255
],
"visible": True,
"slider": [
0,
255
]
}
]
}
]
vc.link_views(all_views, [ct.OBS_TYPE], ["spot"])
vc.link_views([spatial_by_cellset, spatial_by_expression_a, spatial_by_expression_b, lc], [ct.SPATIAL_SEGMENTATION_LAYER, ct.SPATIAL_IMAGE_LAYER, ct.SPATIAL_ZOOM, ct.SPATIAL_TARGET_X, ct.SPATIAL_TARGET_Y], [segmentation_layer, image_layer, -2.598, 1008.88, 1004.69])
vc.link_views([spatial_by_expression_a], [ct.OBS_COLOR_ENCODING, ct.FEATURE_SELECTION], ["geneSelection", ["CR2"]])
vc.link_views([spatial_by_expression_b, feature_list], [ct.OBS_COLOR_ENCODING, ct.FEATURE_SELECTION], ["geneSelection", ["FCER2"]])
vc.link_views([spatial_by_expression_a, spatial_by_expression_b], [ct.FEATURE_VALUE_COLORMAP_RANGE], [[0.5, 0.75]])
vc.link_views([spatial_by_cellset, cell_sets], [ct.OBS_COLOR_ENCODING, ct.OBS_SET_SELECTION], ["cellSetSelection", [["Predicted Cell Type", "B cells", "Germinal center B cells"]]])
vc.layout(hconcat(spatial_by_cellset, spatial_by_expression_a, spatial_by_expression_b) / hconcat(lc, cell_sets, feature_list));
</code>
<code>
vc.web_app()
</code>
<code>
import os  # os itself was not imported above (only os.path.join), so import it here

os.makedirs(VIGNETTE_DIR, exist_ok=True)
</code>
<code>
config_dict = vc.to_dict(base_url=BASE_URL_PLACEHOLDER)
# Use `open` to create a new empty file at ./exported_data/vitessce.json
with open(join(VIGNETTE_DIR, "visium.json"), "w") as f:
json.dump(config_dict, f)
</code>
<code>
vignette_md = """---
name: Use Case 2
figures:
- name: "Visualization"
file: visium.json
---
## Spatial transcriptomics with H&E image from the human lymph node
This dataset is provided by 10x Genomics as a demo of the Visium technology and thus is not intended to answer a particular biological question. Nonetheless, it can be used to validate that the expected lymph node cell types are present. According to the v1 HuBMAP ASCT+B table for lymph node (Börner et al., Nature Cell Biology 2021), CCL19 is expressed by the T Cell Zone Reticular Cell Type in the Interfollicular Cortex and Paracortical Sinus. Using CellPhoneDB (Efremova et al., Nature Protocols 2020), we can query for known receptors of this ligand, which include ACKR4, CCRL2, and CCR7. Using the spatial view in Vitessce, we can observe that CCL19 and CCR7 exhibit coexpression patterns in clusters 2 and 8 (defined by the Leiden unsupervised clustering method).
"""
with open(join(VIGNETTE_DIR, "description.md"), "w") as f:
f.write(vignette_md)
</code>
|
{
"filename": "visium_1.ipynb",
"repository": "vitessce/paper-figures",
"query": "transformed_from_existing",
"size": 25989,
"sha": ""
}
|
# MullerianMesenchymeDifferentiation_SCENICPLUS.ipynb
Repository: ventolab/Human-ReproductiveTract-Development-Atlas
## SCENIC+ Mullerian duct mesenchymal cells
### method benchmarking
<code>
#supress warnings
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
import sys
import os
</code>
<code>
# Get chromosome sizes (for hg38 here)
import pyranges as pr
import requests
import pandas as pd
target_url='http://hgdownload.cse.ucsc.edu/goldenPath/hg38/bigZips/hg38.chrom.sizes'
chromsizes=pd.read_csv(target_url, sep='\t', header=None)
chromsizes.columns=['Chromosome', 'End']
chromsizes['Start']=[0]*chromsizes.shape[0]
chromsizes=chromsizes.loc[:,['Chromosome', 'Start', 'End']]
# Exceptionally in this case, to agree with CellRangerARC annotations
chromsizes['Chromosome'] = [chromsizes['Chromosome'][x].replace('v', '.') for x in range(len(chromsizes['Chromosome']))]
chromsizes['Chromosome'] = [chromsizes['Chromosome'][x].split('_')[1] if len(chromsizes['Chromosome'][x].split('_')) > 1 else chromsizes['Chromosome'][x] for x in range(len(chromsizes['Chromosome']))]
chromsizes=pr.PyRanges(chromsizes)
</code>
<code>
import matplotlib as mpl
mpl.rcParams['pdf.fonttype'] = 42
</code>
<code>
chromsizes
</code>
### Add cell type annotation information
The barcode metadata should be provided as a pd.DataFrame.
* The index of the pandas dataframe should correspond to the BARCODE (e.g. ATGTCTGATAGA-1; additional tags are possible using ___, e.g. ATGTCTGATAGA-1___sample_1) and it must contain a ‘sample_id’ column indicating the sample label of origin. It is also possible to use another separation pattern (e.g. -), but then it will have to be specified in the function.
* Alternative: add a column named ‘barcode’ to the metadata with the corresponding cell barcodes (in this case the name of the cells will not be used to infer the barcode id). This is the option we use in this tutorial as well (see the toy example below).
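For illustration, the expected metadata format could look like the following toy example (the barcodes and labels here are made up, not taken from this analysis):
<code>
import pandas as pd

# Toy example of the expected barcode metadata format (made-up barcodes)
toy = pd.DataFrame({
    'barcode':   ['ATGTCTGATAGA-1', 'CCGTAAGGTTAC-1'],
    'sample_id': ['sample_1', 'sample_2'],
    'celltype':  ['FallopianMese', 'UterusMese'],
})
# Index convention: BARCODE___sample_id, matching the split_pattern='___' used later on
toy.index = toy['barcode'] + '___' + toy['sample_id']
print(toy)
</code>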
<code>
females_late = pd.read_csv("/nfs/team292/vl6/FetalReproductiveTract/ATAC_QC/ArchR/females_late/umap_coords.csv", index_col = 0)
females_late.head()
</code>
<code>
cell_data = females_late.copy()
cell_data.shape
</code>
<code>
cell_data['predictedGroup_Un'].value_counts(dropna = False)
</code>
<code>
cell_data = cell_data[cell_data['predictedGroup_Un'].isin(['Fallopian Mese',
'Uterus Mese',
'Cervix Mese', 'Upper Vagina Mese'])]
</code>
<code>
import numpy as np
</code>
<code>
cell_data.shape
</code>
<code>
cell_data.tail()
</code>
<code>
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(rc={'figure.figsize':(7, 5)}, font_scale=1)
sns.set_style("whitegrid")
ax = sns.boxplot(x = 'predictedGroup_Un', y = 'predictedScore_Un', hue = 'predictedGroup_Un', data = cell_data, width = 0.8, orient = 'v', dodge = True, fliersize = 2)
ax.set_xticklabels(ax.get_xticklabels(),rotation = 90)
ax.set_ylabel('predictedScore_Un')
ax.set_xlabel('predictedGroup_Un')
ax.grid(False)
ax.axhline(y=0.5, color = 'gray', linestyle = '--')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.1, title = 'predictedGroup')
fig = plt.gcf()
plt.show()
plt.clf()
plt.close()
</code>
<code>
mapping_dict = {'Fallopian Mese' : 'FallopianMese',
'Uterus Mese' : 'UterusMese', 'Cervix Mese' : 'CervixMese',
'Upper Vagina Mese' : 'UpperVaginaMese'}
cell_data['HarmonisedClusters'] = cell_data['predictedGroup_Un'].map(mapping_dict)
</code>
<code>
cell_data.shape
</code>
<code>
cell_data[['Sample']].value_counts()
</code>
<code>
color_palette = {
'FallopianMese' : 'orange',
'UterusMese' : 'orangered',
'CervixMese' : 'palevioletred',
'UpperVaginaMese' : 'lightpink'}
</code>
<code>
cell_data[['Sample', 'HarmonisedClusters']].value_counts()
</code>
<code>
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(rc={'figure.figsize':(8, 5)}, font_scale=1)
sns.set_style("whitegrid")
ax = sns.boxplot(x = 'Sample', y = 'predictedScore_Un', hue = 'HarmonisedClusters', data = cell_data, width = 0.8, palette = color_palette, orient = 'v', dodge = True, fliersize = 2)
ax.set_xticklabels(ax.get_xticklabels(),rotation = 90)
ax.set_ylabel('predictedScore')
ax.set_xlabel('sample')
ax.grid(False)
ax.axhline(y=0.5, color = 'gray', linestyle = '--')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.1, title = 'HarmonisedClusters')
fig = plt.gcf()
plt.show()
plt.clf()
plt.close()
fig.savefig('./boxplot_differentiatingmullerianmesenchyme.pdf', bbox_inches = 'tight')
</code>
<code>
cell_data = cell_data[cell_data['predictedScore_Un'] >= 0.5]
</code>
<code>
cell_data[['HarmonisedClusters']].value_counts()
</code>
<code>
cell_data[['donor']].value_counts()
</code>
<code>
cell_data[['stage']].value_counts()
</code>
<code>
import random
from itertools import chain
def downsample(df, labels, n):
myindex = df[labels].value_counts().index
myvalues = df[labels].value_counts().values
clusters = pd.Series(myvalues, index = myindex)
# Find clusters with > n cells
cl2downsample = clusters.index[ clusters.values > n ]
# save all barcode ids from small clusters
holder = []
holder.append( df.index[[ i not in cl2downsample for i in df[labels] ]] )
# randomly sample n cells in the cl2downsample
for cl in cl2downsample:
print(cl)
cl_sample = df[[ i == cl for i in df[labels]]].index
cl_downsample = random.sample(set(cl_sample), n )
holder.append(cl_downsample)
# samples to include
samples = list(chain(*holder))
# Filter adata_count
df = df[[ i in samples for i in df.index ]]
return df
</code>
<code>
cell_data_downsampled = downsample(cell_data, 'HarmonisedClusters', 1500)
</code>
<code>
cell_data_downsampled['donor'].value_counts()
</code>
<code>
cell_data_downsampled['stage'].value_counts()
</code>
<code>
cell_data_downsampled[['HarmonisedClusters']].value_counts()
</code>
### Try first without downsampling
<code>
#cell_data_downsampled = cell_data.copy()
</code>
<code>
cell_data_downsampled.head()
</code>
<code>
import numpy as np
</code>
<code>
np.unique(cell_data_downsampled['Sample'])
</code>
<code>
cell_data_downsampled.shape
</code>
<code>
cell_data_downsampled['HarmonisedClusters'].value_counts()
</code>
<code>
cell_data_downsampled['Sample'].value_counts()
</code>
<code>
cell_data_downsampled = cell_data_downsampled[cell_data_downsampled['Sample'] != 'HD_F_GON12449010']
cell_data_downsampled = cell_data_downsampled[cell_data_downsampled['Sample'] != 'HD_F_GON12877982']
</code>
<code>
cell_data_downsampled['Sample'].value_counts()
</code>
<code>
cell_data_downsampled['HarmonisedClusters'].value_counts(dropna = False)
</code>
<code>
cell_data_downsampled['barcode'] = [x.split('#')[1] for x in cell_data_downsampled.index.tolist()]
</code>
<code>
cell_data_downsampled['index'] = cell_data_downsampled['barcode'] + '___' + cell_data_downsampled['Sample'].astype(str)
</code>
<code>
cell_data_downsampled = cell_data_downsampled.set_index('index')
</code>
<code>
cell_data_downsampled.head()
</code>
### Generate pseudobulk files per cell type
Now we have all the ingredients we need to generate the pseudobulk files. With this function we will generate fragments files per group and the corresponding bigwigs. The mandatory inputs to this function are:
* The annotation dataframe (input_data)
* The variable used to group the cells (here, HarmonisedClusters)
* The chromosome sizes
* The paths to where the bed and bigwig files will be written
* A dictionary indicating the fragments file corresponding to each sample. The sample ids used as keys in this dictionary must match the sample ids in the annotation data frame!
The output will be two dictionaries containing the paths to the bed and bigwig files, respectively, for each group.
<code>
np.unique(cell_data_downsampled['Sample'])
</code>
<code>
## Path to fragments files of samples
fragments_dict = {'HD_F_GON11282675' : '/nfs/team292/vl6/FetalReproductiveTract/ATAC_QC/data/HD_F_GON11282675/fragments.tsv.gz',
'HD_F_GON11389960' : '/nfs/team292/vl6/FetalReproductiveTract/ATAC_QC/data/HD_F_GON11389960/fragments.tsv.gz',
'HD_F_GON11389961' : '/nfs/team292/vl6/FetalReproductiveTract/ATAC_QC/data/HD_F_GON11389961/fragments.tsv.gz',
'HD_F_GON12449011' : '/nfs/team292/vl6/FetalReproductiveTract/ATAC_QC/data/HD_F_GON12449011/fragments.tsv.gz',
'HD_F_GON11282676' : '/nfs/team292/vl6/FetalReproductiveTract/ATAC_QC/data/HD_F_GON11282676/fragments.tsv.gz',
'HD_F_GON12877983' : '/nfs/team292/vl6/FetalReproductiveTract/ATAC_QC/data/HD_F_GON12877983/fragments.tsv.gz',
'HD_F_GON12877984' : '/nfs/team292/vl6/FetalReproductiveTract/ATAC_QC/data/HD_F_GON12877984/fragments.tsv.gz',
'HD_F_GON14609874' : '/nfs/team292/vl6/FetalReproductiveTract/ATAC_QC/data/HD_F_GON14609874/fragments.tsv.gz',
'HD_F_GON14666992' : '/nfs/team292/vl6/FetalReproductiveTract/ATAC_QC/data/HD_F_GON14666992/fragments.tsv.gz',
'HD_F_GON13941947' : '/nfs/team292/vl6/FetalReproductiveTract/ATAC_QC/data/HD_F_GON13941947/fragments.tsv.gz',
'HD_F_GON13941946' : '/nfs/team292/vl6/FetalReproductiveTract/ATAC_QC/data/HD_F_GON13941946/fragments.tsv.gz',
'HCA_F_GON11173192_and_HCA_F_GON11212447' : '/nfs/team292/vl6/FetalReproductiveTract/MULTIOME_QC/data/HCA_F_GON11173192_and_HCA_F_GON11212447/fragments.tsv.gz', # 12 PCW (Hrv103)
'HD_F_GON13077785_and_HD_F_GON13094224' : '/nfs/team292/vl6/FetalReproductiveTract/MULTIOME_QC/data/HD_F_GON13077785_and_HD_F_GON13094224/fragments.tsv.gz',
}
</code>
<code>
outDir = '/lustre/scratch126/cellgen/team292/vl6/pycistopic/mullerian_mese_withvagina_post9pcw/'
tmpDir = '/lustre/scratch126/cellgen/team292/vl6/pycistopic/temp/'
</code>
<code>
from pycisTopic.pseudobulk_peak_calling import *
bw_paths, bed_paths = export_pseudobulk(input_data = cell_data_downsampled,
variable = 'HarmonisedClusters',
sample_id_col = 'Sample',
chromsizes = chromsizes,
bed_path = outDir + 'consensus_peak_calling/pseudobulk_bed_files/',
bigwig_path = outDir + 'consensus_peak_calling/pseudobulk_bw_files/',
path_to_fragments = fragments_dict,
n_cpu = 1,
normalize_bigwig = True,
remove_duplicates = True,
#_temp_dir = tmpDir + 'ray_spill',
split_pattern = '___')
</code>
<code>
# Save
import pickle
with open(outDir + 'consensus_peak_calling/pseudobulk_bed_files/bed_paths.pkl', 'wb') as f:
pickle.dump(bed_paths, f)
import pickle
with open(outDir + 'consensus_peak_calling/pseudobulk_bed_files/bw_paths.pkl', 'wb') as f:
pickle.dump(bw_paths, f)
</code>
### Calling peaks with MACS2
<code>
from pycisTopic.pseudobulk_peak_calling import *
macs_path='/opt/conda/envs/scenicplus/bin/macs2'
macs_outdir = outDir + 'consensus_peak_calling/MACS/'
# os.mkdir(macs_outdir)
</code>
<code>
#sys.stderr = open(os.devnull, "w") # silence stderr
</code>
<code>
#ray.shutdown()
</code>
<code>
# Run peak calling
narrow_peaks_dict = peak_calling(macs_path,
bed_paths,
macs_outdir,
genome_size='hs',
n_cpu=1,
input_format='BEDPE',
shift=73,
ext_size=146,
keep_dup = 'all',
q_value = 0.05,
#_temp_dir = tmpDir + 'ray_spill'
)
sys.stderr = sys.__stderr__ # unsilence stderr
</code>
<code>
# Save
import pickle
with open(outDir + 'consensus_peak_calling/MACS/narrow_peaks_dict.pkl', 'wb') as f:
pickle.dump(narrow_peaks_dict, f)
</code>
### Deriving consensus peaks with iterative overlapping
Finally, it is time to derive the consensus peaks. To do so, we use the TCGA iterative peak filtering approach. First, each summit is extended by peak_half_width in each direction, and then we iteratively filter out less significant peaks that overlap a more significant one. During this procedure peaks are merged and, depending on the number of peaks included in a merged region, different things happen:
* 1 peak: The original peak region will be kept
* 2 peaks: The original peak region with the highest score will be kept
* 3 or more peaks: The original peak region with the most significant score is taken, and all original peak regions in the merged region that overlap with it are removed. The process is repeated with the next most significant peak (if it was not already removed) until all peaks are processed.
This process happens twice: first within the peaks of each pseudobulk, and then, after peak score normalization, across all peaks together (a simplified sketch of the filtering idea follows below).
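To make the filtering idea concrete, here is a simplified sketch of iterative overlap filtering on a toy list of (start, end, score) peaks on a single chromosome; this illustrates the principle only and is not the pycisTopic implementation:
<code>
# Toy illustration of iterative overlap filtering:
# keep the most significant peak, drop everything overlapping it, repeat.
peaks = [(100, 600, 50.0), (450, 950, 80.0), (900, 1400, 30.0), (2000, 2500, 10.0)]

def iterative_filter(peaks):
    remaining = sorted(peaks, key=lambda p: p[2], reverse=True)  # most significant first
    kept = []
    while remaining:
        best = remaining.pop(0)
        kept.append(best)
        # remove every remaining peak that overlaps the one we just kept
        remaining = [p for p in remaining if p[1] <= best[0] or p[0] >= best[1]]
    return sorted(kept)

print(iterative_filter(peaks))
# [(450, 950, 80.0), (2000, 2500, 10.0)]
</code>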
<code>
path_to_blacklist = '/nfs/team292/vl6/scenicplus/pycisTopic/blacklist/hg38-blacklist.v2.bed'
</code>
<code>
from pycisTopic.iterative_peak_calling import *
# Other param
peak_half_width = 250
# Get consensus peaks
sys.stderr = open(os.devnull, "w") # silence stderr
consensus_peaks=get_consensus_peaks(narrow_peaks_dict, peak_half_width, chromsizes=chromsizes, path_to_blacklist=path_to_blacklist)
sys.stderr = sys.__stderr__ # unsilence stderr
</code>
<code>
# Write to bed
consensus_peaks.to_bed(path= outDir + 'consensus_peak_calling/consensus_regions.bed', keep=True, compression='infer', chain=False)
</code>
### Quality control
The next step is to perform QC in the scATAC-seq samples (in this case, only one run). There are several measurements and visualizations performed in this step:
* Barcode rank plot
* Duplication rate
* Insertion size
* TSS enrichment
* Fraction of Reads In Peaks (FRIP)
To calculate the TSS enrichment we need to provide TSS annotations. You can easily download them via pybiomart.
<code>
# Get TSS annotations
import pybiomart as pbm
dataset = pbm.Dataset(name='hsapiens_gene_ensembl', host='http://www.ensembl.org')
annot = dataset.query(attributes=['chromosome_name', 'transcription_start_site', 'strand', 'external_gene_name', 'transcript_biotype'])
annot['Chromosome/scaffold name'] = annot['Chromosome/scaffold name'].to_numpy(dtype = str)
filter = annot['Chromosome/scaffold name'].str.contains('CHR|GL|JH|MT')
annot = annot[~filter]
annot['Chromosome/scaffold name'] = annot['Chromosome/scaffold name'].str.replace(r'(\b\S)', r'chr\1')
annot.columns=['Chromosome', 'Start', 'Strand', 'Gene', 'Transcript_type']
annot = annot[annot.Transcript_type == 'protein_coding']
</code>
<code>
annot.tail()
</code>
<code>
#ray.shutdown()
</code>
<code>
fragments_dict
</code>
<code>
from pycisTopic.qc import *
## Set regions. We will use the consensus peaks we have just called, but we could also use the bulk peaks per sample instead for this step
path_to_regions= {'HD_F_GON11282675' : outDir + 'consensus_peak_calling/consensus_regions.bed',
'HD_F_GON11389960' : outDir + 'consensus_peak_calling/consensus_regions.bed',
'HD_F_GON11389961' : outDir + 'consensus_peak_calling/consensus_regions.bed',
'HD_F_GON12449011' : outDir + 'consensus_peak_calling/consensus_regions.bed',
'HD_F_GON11282676' : outDir + 'consensus_peak_calling/consensus_regions.bed',
'HD_F_GON12877983' : outDir + 'consensus_peak_calling/consensus_regions.bed',
'HD_F_GON12877984' : outDir + 'consensus_peak_calling/consensus_regions.bed',
'HD_F_GON14609874' : outDir + 'consensus_peak_calling/consensus_regions.bed',
'HD_F_GON14666992' : outDir + 'consensus_peak_calling/consensus_regions.bed',
'HD_F_GON13941947' : outDir + 'consensus_peak_calling/consensus_regions.bed',
'HD_F_GON13941946': outDir + 'consensus_peak_calling/consensus_regions.bed',
'HCA_F_GON11173192_and_HCA_F_GON11212447' : outDir + 'consensus_peak_calling/consensus_regions.bed',
'HD_F_GON13077785_and_HD_F_GON13094224' : outDir + 'consensus_peak_calling/consensus_regions.bed',
}
metadata_bc, profile_data_dict = compute_qc_stats(fragments_dict = fragments_dict,
tss_annotation = annot,
stats=['barcode_rank_plot', 'duplicate_rate', 'insert_size_distribution', 'profile_tss', 'frip'],
label_list = None,
path_to_regions = path_to_regions,
n_cpu = 1,
valid_bc = None,
n_frag = 100,
n_bc = None,
tss_flank_window = 1000,
tss_window = 50,
tss_minimum_signal_window = 100,
tss_rolling_window = 10,
remove_duplicates = True,
#_temp_dir = '/nfs/team292/vl6/symtopic/'
)
</code>
<code>
#os.makedirs(outDir+'quality_control')
import pickle
with open(outDir + 'quality_control/metadata_bc.pkl', 'wb') as f:
pickle.dump(metadata_bc, f)
import pickle
with open(outDir + 'quality_control/profile_data_dict.pkl', 'wb') as f:
pickle.dump(profile_data_dict, f)
</code>
### Sample-level statistics
Once the QC metrics have been computed you can visualize the results at the sample-level and the barcode-level. Sample-level statistics can be used to assess the overall quality of the sample, while barcode level statistics can be use to differentiate good quality cells versus the rest. The sample-level graphs include:
* **Barcode rank plot**: The barcode rank plot shows the distribution of non-duplicate reads and which barcodes were inferred to be associated with cells. A steep drop-off (‘knee’) is indicative of good separation between the cell-associated barcodes and the barcodes associated with empty partitions.
* **Insertion size**: ATAC-seq requires a proper pair of Tn5 transposase cutting events at the ends of the DNA. In nucleosome-free open chromatin regions, many molecules of Tn5 can kick in and chop the DNA into small pieces; around nucleosome-occupied regions, Tn5 can only access the linker regions. Therefore, in a good ATAC-seq library, you should expect to see a sharp peak in the <100 bp region (open chromatin), a peak in the ~200 bp region (mono-nucleosome), and further, larger peaks (multi-nucleosomes). A clear nucleosome pattern indicates a good quality experiment.
* **Sample TSS enrichment**: The TSS enrichment calculation is a signal-to-noise calculation. The reads around a reference set of TSSs are collected to form an aggregate distribution of reads centered on the TSSs and extending 1000 bp in either direction (2000 bp in total). This distribution is then normalized by taking the average read depth in the 100 bp at each end flank of the distribution (200 bp of averaged data in total) and calculating, at each position, the fold change over that average read depth. This means that the flanks should start at 1, and if there is a high read signal at transcription start sites (highly open regions of the genome) there should be an increase in signal up to a peak in the middle (see the numerical sketch after this list).
* **FRIP distribution**: Fraction of all mapped reads that fall into the called peak regions, i.e. usable reads in significantly enriched peaks divided by all usable reads. A low FRIP indicates that many reads form part of the background, and so that the data is noisy.
* **Duplication rate**: A fragment is considered “usable” if it uniquely maps to the genome and remains after removing PCR duplicates (defined as two fragments that map to the same genomic position and have the same unique molecular identifier). The duplication rate serves to estimate the amount of usable reads per barcode. High duplication rates may indicate over-sequencing or lack of fragments after transposition and encapsulation. We recommend using duplicate_rate_as_hexbin = True when working with big fragments files.
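To illustrate the fold-change normalization behind the TSS enrichment score, here is a small numerical sketch on a made-up aggregate coverage profile (the numbers are arbitrary and only for illustration):
<code>
import numpy as np

# Made-up aggregate read-depth profile over a 2000 bp window centred on the TSS
positions = np.arange(-1000, 1000)
profile = 5 + 45 * np.exp(-(positions ** 2) / (2 * 150 ** 2))  # background ~5, peak at the TSS

# Normalise by the mean depth of the two 100 bp end flanks, then read off the centre
flank_mean = np.concatenate([profile[:100], profile[-100:]]).mean()
enrichment = profile / flank_mean
print("Flank level (should be ~1):", round(enrichment[:100].mean(), 2))
print("TSS enrichment (value at position 0):", round(enrichment[1000], 2))
</code>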
<code>
# Load sample metrics
import pickle
infile = open(outDir + 'quality_control/profile_data_dict.pkl', 'rb')
profile_data_dict = pickle.load(infile)
infile.close()
</code>
<code>
from pycisTopic.qc import *
plot_sample_metrics(profile_data_dict,
insert_size_distribution_xlim=[0,600],
ncol=2,
plot=True,
save= outDir + 'quality_control/sample_metrics.pdf',
duplicate_rate_as_hexbin = True)
</code>
### Barcode level statistics
Barcode-level statistics can be used to select high quality cells. Typical measurements that can be used are:
* **Total number of (unique) fragments**
* **TSS enrichment**: The TSS enrichment score at position 0 (the TSS) for each barcode. Noisy cells will have a low TSS enrichment.
* **FRIP**: The fraction of reads in peaks for each barcode. Noisy cells have low FRIP values. However, this filter should be used with nuance, as it depends on the quality of the original peaks. For example, if there is a rare population in the sample, its specific peaks may be missed by peak calling algorithms, causing a decrease in their FRIP values.
<code>
# Load barcode metrics
import pickle
infile = open(outDir + 'quality_control/metadata_bc.pkl', 'rb')
metadata_bc = pickle.load(infile)
infile.close()
</code>
<code>
# Return figure to plot together with other metrics, and cells passing filters. Figure will be saved as pdf.
from pycisTopic.qc import *
FRIP_NR_FRAG_fig = {}
FRIP_NR_FRAG_filter = {}
TSS_NR_FRAG_fig = {}
TSS_NR_FRAG_filter = {}
DR_NR_FRAG_fig = {}
for sample in metadata_bc.keys():
FRIP_NR_FRAG_fig[sample], FRIP_NR_FRAG_filter[sample]=plot_barcode_metrics(metadata_bc[sample],
var_x='Log_unique_nr_frag',
var_y='FRIP',
min_x=3,
max_x=None,
min_y=0.4,
max_y=None,
return_cells=True,
return_fig=True,
plot=False,
save= outDir + 'quality_control/barcode_metrics_FRIP-VS-NRFRAG_'+sample+'.pdf')
# Return figure to plot together with other metrics, and cells passing filters
TSS_NR_FRAG_fig[sample], TSS_NR_FRAG_filter[sample]=plot_barcode_metrics(metadata_bc[sample],
var_x='Log_unique_nr_frag',
var_y='TSS_enrichment',
min_x=3,
max_x=None,
min_y=4,
max_y=None,
return_cells=True,
return_fig=True,
plot=False,
save= outDir + 'quality_control/barcode_metrics_TSS-VS-NRFRAG_'+sample+'.pdf')
# Return figure to plot together with other metrics, but not returning cells (no filter applied for the duplication rate per barcode)
DR_NR_FRAG_fig[sample]=plot_barcode_metrics(metadata_bc[sample],
var_x='Log_unique_nr_frag',
var_y='Dupl_rate',
min_x=3,
max_x=None,
min_y=None,
max_y=None,
return_cells=False,
return_fig=True,
plot=False,
plot_as_hexbin = True)
</code>
<code>
# # Plot barcode stats in one figure
# fig=plt.figure(figsize=(40, 100))
# i=1
# for sample in FRIP_NR_FRAG_fig.keys():
# plt.subplot(9, 3, i)
# plt.gca().set_title(sample, fontsize=30)
# i += 1
# img = fig2img(FRIP_NR_FRAG_fig[sample]) #To convert figures to png to plot together, see .utils.py. This converts the figure to png.
# plt.imshow(img)
# plt.axis('off')
# plt.subplot(10, 3, i)
# plt.gca().set_title(sample, fontsize=30)
# i += 1
# img = fig2img(TSS_NR_FRAG_fig[sample])
# plt.imshow(img)
# plt.axis('off')
# plt.subplot(10, 3, i)
# plt.gca().set_title(sample, fontsize=30)
# i += 1
# img = fig2img(DR_NR_FRAG_fig[sample])
# plt.imshow(img)
# plt.axis('off')
# plt.savefig(outDir + 'quality_control/combined_qc.pdf')
</code>
<code>
cell_data_downsampled.head()
</code>
<code>
sel_cells_dict = {}
for sample in np.unique(cell_data_downsampled['Sample']):
sel_cells_dict[sample] = list(set(cell_data_downsampled[cell_data_downsampled['Sample'] == sample]['barcode']))
print(f"{len(sel_cells_dict[sample])} barcodes passed filters for sample {sample}")
</code>
<code>
import pickle
with open(outDir +'/quality_control/bc_passing_filters.pkl', 'wb') as f:
pickle.dump(sel_cells_dict, f)
</code>
### Create cisTopic object
In this step a fragment count matrix will be generated, indicating the number of fragments in each region for each barcode. For multiple samples, you can add additional entries to fragments_dict, and a cisTopic object will be generated per sample. As regions, we will use the consensus peaks derived from the pseudobulks defined by the scRNA-seq-based annotations. This cisTopic object will contain:
* **Path/s to fragment file/s (if generated from fragments files)**
* **Fragment count matrix and binary accessibility matrix**
* **Cell and region metadata**
<code>
# Metrics
import pickle
infile = open(outDir + 'quality_control/metadata_bc.pkl', 'rb')
metadata_bc = pickle.load(infile)
infile.close()
# Valid barcodes
import pickle
infile = open(outDir +'/quality_control/bc_passing_filters.pkl', 'rb')
bc_passing_filters = pickle.load(infile)
infile.close()
</code>
<code>
# Path to regions
path_to_regions = outDir + 'consensus_peak_calling/consensus_regions.bed'
path_to_blacklist = '/nfs/team292/vl6/scenicplus/pycisTopic/blacklist/hg38-blacklist.v2.bed'
</code>
<code>
#Create objects
from pycisTopic.cistopic_class import *
cistopic_obj_list=[create_cistopic_object_from_fragments(path_to_fragments=fragments_dict[key],
path_to_regions=path_to_regions,
path_to_blacklist=path_to_blacklist,
metrics=metadata_bc[key],
valid_bc=bc_passing_filters[key],
n_cpu=1,
project=key) for key in fragments_dict.keys()]
</code>
<code>
cistopic_obj = merge(cistopic_obj_list)
</code>
<code>
print(cistopic_obj)
</code>
<code>
# Save
with open(outDir + 'cisTopicObject.pkl', 'wb') as f:
pickle.dump(cistopic_obj, f)
</code>
<code>
# Load cisTopic object
import pickle
infile = open(outDir + 'cisTopicObject.pkl', 'rb')
cistopic_obj = pickle.load(infile)
infile.close()
</code>
<code>
cistopic_obj.add_cell_data(cell_data_downsampled)
</code>
<code>
print(cistopic_obj)
</code>
<code>
cistopic_obj.cell_data['Sample'].value_counts(dropna = False)
</code>
<code>
cistopic_obj.cell_data.HarmonisedClusters = cistopic_obj.cell_data.HarmonisedClusters.astype(str)
</code>
<code>
high_quality = cistopic_obj.cell_data[cistopic_obj.cell_data.HarmonisedClusters != 'nan'].index.tolist()
cistopic_obj = cistopic_obj.subset(high_quality, copy=True)
</code>
<code>
cistopic_obj.cell_data['HarmonisedClusters'].value_counts(dropna = False)
</code>
<code>
# Save
with open(outDir + 'cisTopicObject.pkl', 'wb') as f:
pickle.dump(cistopic_obj, f)
</code>
#### Since the early sample is male and the older samples are only female, exclude Y chromosome regions
<code>
cistopic_obj.region_data
</code>
<code>
ychrom = cistopic_obj.region_data[cistopic_obj.region_data['Chromosome'] == 'chrY'].index.to_list()
</code>
<code>
len(ychrom)
</code>
<code>
nonychrom = [i for i in cistopic_obj.region_data.index.to_list() if i not in ychrom]
len(nonychrom)
</code>
<code>
cistopic_obj = cistopic_obj.subset(regions = nonychrom, copy = True)
</code>
<code>
cistopic_obj.region_data
</code>
<code>
# Save
with open(outDir + 'cisTopicObject.pkl', 'wb') as f:
pickle.dump(cistopic_obj, f)
</code>
### Run LDA models
The next step is to run the LDA models. There are two types of LDA models (with Collapsed Gibbs Sampling) you can run:
* **Serial LDA**: The parallelization is done between models rather than within each model. Recommended for small to medium sized data sets in which several models with different numbers of topics are being tested. You can run these models with run_cgs_models().
* **Parallel LDA with MALLET**: The parallelization is done within each model. Recommended for large data sets where a few models with different numbers of topics are being tested. If working on a cluster, we recommend submitting a job per model so they can run simultaneously. You can run it with run_cgs_models_mallet().
<code>
# Load cisTopic object
import pickle
infile = open(outDir + 'cisTopicObject.pkl', 'rb')
cistopic_obj = pickle.load(infile)
infile.close()
</code>
<code>
cistopic_obj.cell_data.head()
</code>
<code>
outDir
</code>
<code>
from pycisTopic.cistopic_class import *
# Configure path Mallet
path_to_mallet_binary='/nfs/team292/vl6/scenicplus/Mallet/bin/mallet'
import os
os.environ['MALLET_MEMORY'] = '300G'
# Run models
models=run_cgs_models_mallet(path_to_mallet_binary,
cistopic_obj,
n_topics=[2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30,
32, 34, 36, 38, 40, 42, 44, 46, 48, 50],
n_cpu=24,
n_iter=150,
random_state=555,
alpha=50,
alpha_by_topic=True,
eta=0.1,
eta_by_topic=False,
tmp_path='/lustre/scratch126/cellgen/team292/vl6/pycistopic/temp/', #Use SCRATCH if many models or big data set
save_path='/lustre/scratch126/cellgen/team292/vl6/pycistopic/temp/')
# Save
with open(outDir + 'models/mallet.pkl', 'wb') as f:
pickle.dump(models, f)
</code>
<code>
# Save
#with open(outDir + 'models/mallet.pkl', 'wb') as f:
# pickle.dump(models, f)
</code>
### Model selection
There are several methods that can be used for model selection:
* **Minmo_2011**: Uses the average model coherence as calculated by Mimno et al (2011). In order to reduce the impact of the number of topics, we calculate the average coherence based on the top selected average values. The better the model, the higher the coherence.
* **Log-likelihood**: Uses the log-likelihood in the last iteration as calculated by Griffiths and Steyvers (2004). The better the model, the higher the log-likelihood.
* **Arun_2010**: Uses a density-based metric as in Arun et al (2010) using the topic-region distribution, the cell-topic distribution and the cell coverage. The better the model, the lower the metric.
* **Cao_Juan_2009**: Uses a divergence-based metric as in Cao Juan et al (2009) using the topic-region distribution. The better the model, the lower the metric.
For scATAC-seq data models, the most helpful methods are Minmo (topic coherence) and the log-likelihood in the last iteration.
<code>
outDir
</code>
<code>
# Load cisTopic object
import pickle
infile = open(outDir + 'cisTopicObject.pkl', 'rb')
cistopic_obj = pickle.load(infile)
infile.close()
# Load models
import pickle
infile = open(outDir + 'models/mallet.pkl', 'rb')
models = pickle.load(infile)
infile.close()
</code>
<code>
numTopics = 24
from pycisTopic.lda_models import *
model=evaluate_models(models,
select_model=numTopics,
return_model=True,
metrics=['Arun_2010','Cao_Juan_2009', 'Minmo_2011', 'loglikelihood'],
plot_metrics=False,
save= outDir + 'models/model_selection.pdf')
</code>
<code>
# Add model to cisTopicObject
cistopic_obj.add_LDA_model(model)
</code>
<code>
# Save
with open(outDir + 'cisTopicObject.pkl', 'wb') as f:
pickle.dump(cistopic_obj, f)
</code>
<code>
# Load cisTopic object
import pickle
infile = open(outDir + 'cisTopicObject.pkl', 'rb')
cistopic_obj = pickle.load(infile)
infile.close()
</code>
<code>
print(cistopic_obj)
</code>
<code>
cistopic_obj.fragment_matrix.shape
</code>
<code>
cistopic_obj.cell_data.shape
</code>
<code>
cistopic_obj.region_data.shape
</code>
<code>
from pycisTopic.clust_vis import *
run_umap(cistopic_obj,
target = 'cell', scale=False)
run_tsne(cistopic_obj,
target = 'cell', scale=False)
</code>
<code>
from pycisTopic.clust_vis import *
plot_metadata(cistopic_obj,
reduction_name='UMAP',
variables=['HarmonisedClusters', 'Sample', 'stage', 'predictedScore'], # Labels from RNA and new clusters
target='cell', num_columns=2,
text_size=10,
dot_size=5,
figsize=(10,10),
save= outDir + 'visualization/umap_dimensionality_reduction_label_uncorrected.pdf')
</code>
<code>
from pycisTopic.clust_vis import *
plot_metadata(cistopic_obj,
reduction_name='tSNE',
variables=['HarmonisedClusters', 'Sample', 'stage', 'predictedScore'], # Labels from RNA and new clusters
target='cell', num_columns=2,
text_size=10,
dot_size=5,
figsize=(10,10),
save= outDir + 'visualization/tsne_dimensionality_reduction_label_uncorrected.pdf')
</code>
<code>
cistopic_obj.cell_data.Sample.value_counts()
</code>
<code>
cistopic_obj.cell_data.HarmonisedClusters.value_counts()
</code>
<code>
color_palette
</code>
<code>
from pycisTopic.clust_vis import *
plot_metadata(cistopic_obj,
reduction_name='UMAP',
variables=['Sample', 'HarmonisedClusters'], # Labels from RNA and new clusters
target='cell', num_columns=2,
text_size=10,
dot_size=2,
figsize=(10,5),
color_dictionary = {
'HarmonisedClusters' : {
'FallopianMese': 'darkorange',
'UterusMese': 'orangered',
'CervixMese': 'palevioletred',
'UpperVaginaMese': 'lightpink'}},
save= outDir + 'visualization/umap_dimensionality_reduction_label_uncorrected2.pdf')
</code>
<code>
plot_topic(cistopic_obj,
reduction_name = 'UMAP',
target = 'cell',
num_columns=5,
save= outDir + 'visualization/umap_dimensionality_reduction_topic_uncorrected.pdf')
</code>
<code>
from pycisTopic.clust_vis import *
cell_topic_heatmap(cistopic_obj,
variables = ['HarmonisedClusters'],
scale = False,
legend_loc_x = 1.05,
legend_loc_y = -1.2,
legend_dist_y = -1,
figsize=(5,10),
color_dict = {'HarmonisedClusters' : {
'FallopianMese': 'darkorange',
'UterusMese': 'orangered',
'CervixMese': 'palevioletred',
'UpperVaginaMese': 'lightpink'}},
save = outDir + 'visualization/heatmap_topic_contr.pdf')
</code>
### Harmony
<code>
cistopic_obj.cell_data['donor'].value_counts(dropna = False)
</code>
<code>
# Harmony
harmony(cistopic_obj, 'donor', random_state=555, theta = 0)
# UMAP
run_umap(cistopic_obj, reduction_name='harmony_UMAP',
target = 'cell', harmony=True)
run_tsne(cistopic_obj, reduction_name='harmony_tSNE',
target = 'cell', harmony=True)
</code>
<code>
plot_metadata(cistopic_obj,
reduction_name='harmony_UMAP',
variables=[ 'HarmonisedClusters', 'donor', 'stage', 'predictedScore'], # Labels from RNA and new clusters
target='cell', num_columns=2,
text_size=10,
dot_size=5,
figsize=(10,10),
color_dictionary = {
'HarmonisedClusters' : {'FallopianMese': 'darkorange',
'UterusMese': 'orangered',
'CervixMese': 'palevioletred',
'UpperVaginaMese': 'lightpink'}
},
save= outDir + 'visualization/umap_dimensionality_reduction_label_corrected.pdf')
</code>
<code>
plot_metadata(cistopic_obj,
reduction_name='harmony_tSNE',
variables=[ 'HarmonisedClusters', 'donor', 'stage', 'predictedScore'], # Labels from RNA and new clusters
target='cell', num_columns=2,
text_size=10,
dot_size=5,
figsize=(10,10),
color_dictionary = {
'HarmonisedClusters' : {'FallopianMese': 'darkorange',
'UterusMese': 'orangered',
'CervixMese': 'palevioletred',
'UpperVaginaMese': 'lightpink'}},
save= outDir + 'visualization/tsne_dimensionality_reduction_label_corrected.pdf')
</code>
<code>
plot_topic(cistopic_obj,
reduction_name = 'harmony_tSNE',
target = 'cell',
num_columns=5,
save= outDir + 'visualization/tsne_dimensionality_reduction_topic_corrected.pdf')
</code>
<code>
from pycisTopic.clust_vis import *
find_clusters(cistopic_obj,
target = 'cell',
harmony = True,
k = 12,
res = [0.1, 0.3, 0.7],
prefix = 'pycisTopic_',
scale = True,
split_pattern = '-')
</code>
<code>
plot_metadata(cistopic_obj,
reduction_name = 'harmony_tSNE',
variables=['HarmonisedClusters', 'pycisTopic_leiden_12_0.1', 'pycisTopic_leiden_12_0.3', 'pycisTopic_leiden_12_0.7'], # Labels from RNA and new clusters
target='cell', num_columns=2,
text_size=10,
dot_size=5,
figsize=(10,10),
save= outDir + 'visualization/tsne_dimensionality_reduction_clustering.pdf')
</code>
<code>
color_palette
</code>
<code>
annot_dict_lowres={}
annot_dict_lowres['pycisTopic_leiden_12_0.7'] = {'1':'UpperVaginaMese', '0':'FallopianMese',
'2': 'CervixMese', '3': 'UterusMese',
'4' : 'UterusMese',
'5': 'UpperVaginaMese',
'6' : 'FallopianMese',
'7' : 'UpperVaginaMese',
'8' : 'CervixMese',
}
cistopic_obj.cell_data['mese_mullerian_lowres'] = [annot_dict_lowres['pycisTopic_leiden_12_0.7'][x] for x in cistopic_obj.cell_data['pycisTopic_leiden_12_0.7'].tolist()]
</code>
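Before relying on the mapping above, a quick check can confirm that every Leiden label received an annotation; an unmapped label would raise a `KeyError` in the list comprehension. This is a minimal sketch, not part of the original run, and assumes the same column and dictionary key used above.
<code>
# Sanity check: every observed Leiden cluster should have an entry in the annotation dictionary
observed = set(cistopic_obj.cell_data['pycisTopic_leiden_12_0.7'])
annotated = set(annot_dict_lowres['pycisTopic_leiden_12_0.7'])
missing = observed - annotated
assert not missing, f"Unannotated Leiden clusters: {missing}"
</code>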
<code>
plot_metadata(cistopic_obj,
reduction_name = 'harmony_tSNE',
variables=['mese_mullerian_lowres'], # Labels from RNA and new clusters
target='cell', num_columns=2,
text_size=10,
dot_size=5,
figsize=(10,5),
color_dictionary = {
'mese_mullerian_lowres' : {'FallopianMese': 'orange',
'UterusMese': 'orangered',
'CervixMese': 'palevioletred',
'UpperVaginaMese': 'lightpink'}},
save= outDir + 'visualization/tsne_dimensionality_reduction_lowres.pdf')
</code>
<code>
# Save
with open(outDir + 'cisTopicObject_clean.pkl', 'wb') as f:
pickle.dump(cistopic_obj, f)
</code>
<code>
# Load cisTopic object
import pickle
infile = open(outDir + 'cisTopicObject_clean.pkl', 'rb')
cistopic_obj = pickle.load(infile)
infile.close()
</code>
<code>
from pycisTopic.clust_vis import *
</code>
<code>
outDir
</code>
<code>
plot_metadata(cistopic_obj,
reduction_name = 'harmony_tSNE',
variables=['mese_mullerian_lowres'], # Labels from RNA and new clusters
target='cell',
num_columns=1,
text_size=10,
dot_size=2,
figsize=(5,5),
show_label = False,
show_legend = False,
color_dictionary = {
'mese_mullerian_lowres' : {'FallopianMese': 'orange',
'UterusMese': 'orangered',
'CervixMese': 'palevioletred',
'UpperVaginaMese': 'lightpink'}},
save= outDir + 'visualization/tsne_dimensionality_reduction_lowres.pdf')
</code>
<code>
plot_metadata(cistopic_obj,
reduction_name = 'harmony_tSNE',
variables=['stage'], # Labels from RNA and new clusters
target='cell',
num_columns=1,
text_size=10,
dot_size=2,
figsize=(6,5),
show_label = False,
show_legend = False,
save= outDir + 'visualization/tsne_dimensionality_reduction_lowres_stage.pdf')
</code>
<code>
plot_metadata(cistopic_obj,
reduction_name = 'harmony_tSNE',
variables=['donor'], # Labels from RNA and new clusters
target='cell',
num_columns=1,
text_size=10,
dot_size=2,
figsize=(6,5),
show_label = False,
show_legend = True,
save= outDir + 'visualization/tsne_dimensionality_reduction_lowres_donor.pdf')
</code>
<code>
# os.mkdir(outDir+'topic_binarization')
from pycisTopic.topic_binarization import *
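# Otsu's method picks, for each topic, the threshold that best separates high- from low-contributing regions (it maximises the between-class variance of the score distribution)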
region_bin_topics = binarize_topics(cistopic_obj, method='otsu', ntop=3000, plot=True, num_columns=5, save= outDir + 'topic_binarization/otsu.pdf')
</code>
<code>
binarized_cell_topic = binarize_topics(cistopic_obj, target='cell', method='li', plot=True, num_columns=5, nbins=100)
</code>
<code>
from pycisTopic.topic_qc import *
topic_qc_metrics = compute_topic_metrics(cistopic_obj)
</code>
<code>
fig_dict={}
fig_dict['CoherenceVSAssignments']=plot_topic_qc(topic_qc_metrics, var_x='Coherence', var_y='Log10_Assignments', var_color='Gini_index', plot=False, return_fig=True)
fig_dict['AssignmentsVSCells_in_bin']=plot_topic_qc(topic_qc_metrics, var_x='Log10_Assignments', var_y='Cells_in_binarized_topic', var_color='Gini_index', plot=False, return_fig=True)
fig_dict['CoherenceVSCells_in_bin']=plot_topic_qc(topic_qc_metrics, var_x='Coherence', var_y='Cells_in_binarized_topic', var_color='Gini_index', plot=False, return_fig=True)
fig_dict['CoherenceVSRegions_in_bin']=plot_topic_qc(topic_qc_metrics, var_x='Coherence', var_y='Regions_in_binarized_topic', var_color='Gini_index', plot=False, return_fig=True)
fig_dict['CoherenceVSMarginal_dist']=plot_topic_qc(topic_qc_metrics, var_x='Coherence', var_y='Marginal_topic_dist', var_color='Gini_index', plot=False, return_fig=True)
fig_dict['CoherenceVSGini_index']=plot_topic_qc(topic_qc_metrics, var_x='Coherence', var_y='Gini_index', var_color='Gini_index', plot=False, return_fig=True)
</code>
<code>
# Plot topic stats in one figure
fig=plt.figure(figsize=(40, 43))
i = 1
for fig_ in fig_dict.keys():
plt.subplot(2, 3, i)
    img = fig2img(fig_dict[fig_]) # Convert each figure to a PNG image (see pycisTopic's utils.py) so they can be arranged in a single layout
plt.imshow(img)
plt.axis('off')
i += 1
plt.subplots_adjust(wspace=0, hspace=-0.70)
fig.savefig(outDir + 'topic_binarization/Topic_qc.pdf', bbox_inches='tight')
plt.show()
</code>
<code>
topic_annot = topic_annotation(cistopic_obj, annot_var='mese_mullerian_lowres', binarized_cell_topic=binarized_cell_topic, general_topic_thr = 0.2)
topic_qc_metrics = pd.concat([topic_annot[['mese_mullerian_lowres', 'Ratio_cells_in_topic', 'Ratio_group_in_population']], topic_qc_metrics], axis=1)
topic_qc_metrics.head()
</code>
<code>
# Save
with open(outDir + 'topic_binarization/Topic_qc_metrics_annot.pkl', 'wb') as f:
pickle.dump(topic_qc_metrics, f)
with open(outDir + 'topic_binarization/binarized_cell_topic.pkl', 'wb') as f:
pickle.dump(binarized_cell_topic, f)
with open(outDir + 'topic_binarization/binarized_topic_region.pkl', 'wb') as f:
pickle.dump(region_bin_topics, f)
</code>
### Differentially Accessible Regions
<code>
# Load cisTopic object
import pickle
infile = open(outDir + 'cisTopicObject_clean.pkl', 'rb')
cistopic_obj = pickle.load(infile)
infile.close()
</code>
<code>
from pycisTopic.diff_features import *
imputed_acc_obj = impute_accessibility(cistopic_obj, selected_cells=None, selected_regions=None, scale_factor=10**6)
</code>
<code>
include = set(imputed_acc_obj.feature_names) & set(cistopic_obj.region_data.index.to_list())
len(include)
</code>
<code>
diff = set(imputed_acc_obj.feature_names) - set(cistopic_obj.region_data.index.to_list())
</code>
<code>
diff
</code>
<code>
imputed_acc_obj = imputed_acc_obj.subset(features = list(include), copy = True)
</code>
<code>
str(imputed_acc_obj)
</code>
<code>
normalized_imputed_acc_obj = normalize_scores(imputed_acc_obj, scale_factor=10**4)
</code>
<code>
# os.mkdir(outDir + 'DARs/')
variable_regions = find_highly_variable_features(normalized_imputed_acc_obj,
min_disp = 0.05,
min_mean = 0.0125,
max_mean = 3,
max_disp = np.inf,
n_bins=20,
n_top_features=None,
plot=True,
save= outDir + 'DARs/HVR_plot.pdf')
</code>
<code>
len(variable_regions)
</code>
<code>
markers_dict= find_diff_features(cistopic_obj,
imputed_acc_obj,
variable='mese_mullerian_lowres',
var_features=variable_regions,
contrasts=None,
adjpval_thr=0.05,
log2fc_thr=np.log2(1.5),
n_cpu=10)
</code>
<code>
for key in markers_dict.keys():
    print(key + ': ' + str(len(markers_dict[key])))
</code>
<code>
# Save
with open(outDir + 'DARs/Imputed_accessibility.pkl', 'wb') as f:
pickle.dump(imputed_acc_obj, f)
with open(outDir + 'DARs/DARs.pkl', 'wb') as f:
pickle.dump(markers_dict, f)
with open(outDir + 'DARs/variable_regions.pkl', 'wb') as f:
pickle.dump(variable_regions, f)
</code>
<code>
from pycisTopic.clust_vis import *
plot_imputed_features(cistopic_obj,
reduction_name='harmony_tSNE',
imputed_data=imputed_acc_obj,
features=[markers_dict[x].index.tolist()[0] for x in ['FallopianMese',
'UterusMese',
'CervixMese',
'UpperVaginaMese']],
scale=False,
num_columns=3,
save= outDir + 'DARs/example_best_DARs.pdf')
</code>
### Gene activity scores
<code>
# Load cisTopic object
import pickle
infile = open(outDir + 'cisTopicObject_clean.pkl', 'rb')
cistopic_obj = pickle.load(infile)
infile.close()
# Load imputed accessibility
import pickle
infile = open(outDir + 'DARs/Imputed_accessibility.pkl', 'rb')
imputed_acc_obj = pickle.load(infile)
infile.close()
# Load DARs
import pickle
infile = open(outDir + 'DARs/DARs.pkl', 'rb')
DARs_dict = pickle.load(infile)
infile.close()
</code>
<code>
str(imputed_acc_obj)
</code>
<code>
# Get TSS annotations
import pybiomart as pbm
import pyranges as pr
# For mouse
#dataset = pbm.Dataset(name='mmusculus_gene_ensembl', host='http://www.ensembl.org')
# For human (hg38)
dataset = pbm.Dataset(name='hsapiens_gene_ensembl', host='http://www.ensembl.org')
# For human (hg19)
#dataset = pbm.Dataset(name='hsapiens_gene_ensembl', host='http://grch37.ensembl.org/')
# For fly
#dataset = pbm.Dataset(name='dmelanogaster_gene_ensembl', host='http://www.ensembl.org')
annot = dataset.query(attributes=['chromosome_name', 'start_position', 'end_position', 'strand', 'external_gene_name', 'transcription_start_site', 'transcript_biotype'])
annot['Chromosome/scaffold name'] = 'chr' + annot['Chromosome/scaffold name'].astype(str)
annot.columns=['Chromosome', 'Start', 'End', 'Strand', 'Gene','Transcription_Start_Site', 'Transcript_type']
annot = annot[annot.Transcript_type == 'protein_coding']
annot.loc[annot.Strand == 1, 'Strand'] = '+'
annot.loc[annot.Strand == -1, 'Strand'] = '-'
pr_annotation = pr.PyRanges(annot.dropna(axis = 0))
</code>
<code>
# Get chromosome sizes
import pandas as pd
import requests
target_url='http://hgdownload.cse.ucsc.edu/goldenPath/hg38/bigZips/hg38.chrom.sizes'
chromsizes=pd.read_csv(target_url, sep='\t', header=None)
chromsizes.columns=['Chromosome', 'End']
chromsizes['Start']=[0]*chromsizes.shape[0]
chromsizes=chromsizes.loc[:,['Chromosome', 'Start', 'End']]
chromsizes=pr.PyRanges(chromsizes)
</code>
<code>
from pycisTopic.gene_activity import *
gene_act, weights = get_gene_activity(
    imputed_acc_obj,                      # Region-cell probabilities
    pr_annotation,                        # Gene annotation
    chromsizes,                           # Chromosome sizes
    use_gene_boundaries=True,             # Stop the search space when another gene is encountered (instead of using the full window)
    upstream=[1000, 100000],              # Search space upstream; the minimum (1 kb) is used even if another gene lies right next to it
    downstream=[1000, 100000],            # Search space downstream
    distance_weight=True,                 # Add an exponential distance weight (the weight decreases with distance)
    decay_rate=1,                         # Exponent of the distance decay (higher values decay faster)
    extend_gene_body_upstream=10000,      # Number of bp upstream exempt from the distance weight (weight stays at its maximum)
    extend_gene_body_downstream=500,      # Number of bp downstream exempt from the distance weight
    gene_size_weight=False,               # Whether to add a weight based on gene length
    gene_size_scale_factor='median',      # Divisor used to compute the gene size weight; default is the median gene length in the genome
    remove_promoters=False,               # Whether to remove promoters when computing gene activity scores
    average_scores=True,                  # Whether to divide by the number of regions assigned to a gene when computing its activity score
    scale_factor=1,                       # Value by which to multiply the final gene activity matrix
    extend_tss=[10, 10],                  # Space around the TSS considered as promoter
    gini_weight=True,                     # Whether to add a Gini index weight; the more unique (cell-type specific) a region is, the higher the weight
    return_weights=True,                  # Whether to return the final weights
    project='Gene_activity')              # Project name for the gene activity object
</code>
<code>
markers_dict= find_diff_features(cistopic_obj,
gene_act,
variable='mese_mullerian_lowres',
var_features=None,
contrasts=None,
adjpval_thr=0.05,
log2fc_thr=np.log2(1.5),
n_cpu=1,
#_temp_dir=tmpDir + 'ray_spill'
)
</code>
<code>
# os.mkdir(outDir+'DAGs')
from pycisTopic.clust_vis import *
plot_imputed_features(cistopic_obj,
reduction_name='harmony_tSNE',
imputed_data=gene_act,
features=['LGR5', 'TSPAN8', 'CD36', 'ITGBL1', 'HMGA2', 'KRT8', 'KRT18', 'ATF3', 'KLF2', 'ITGA4', 'SEMA3A', 'NR4A1', 'MAFF', 'CSRNP1',
'HOXA9', 'HOXD9', 'HOXA10', 'HOXD10', 'HOXA11', 'HOXD11', 'HOXA7', 'HOXC8', 'HOXC6', 'HOXC5', 'HOXC4',
'ETV4', 'CRABP1', 'CNTN1', 'TMEM163', 'ZAP70', 'MMP28', 'HOXA13', 'SRD5A2', 'WIF1'],
scale=True,
num_columns=4, cmap = 'jet',
save= outDir + 'DAGs/example_best_DAGs.pdf')
</code>
<code>
for key in markers_dict.keys():
    print(key + ': ' + str(len(markers_dict[key])))
</code>
<code>
# Save
with open(outDir + 'DAGs/Gene_activity.pkl', 'wb') as f:
pickle.dump(gene_act, f)
with open(outDir + 'DAGs/DAGs.pkl', 'wb') as f:
pickle.dump(markers_dict, f)
</code>
### Label transfer
<code>
# # Load cisTopic object
# import pickle
# infile = open(outDir + 'cisTopicObject_clean.pkl', 'rb')
# cistopic_obj = pickle.load(infile)
# infile.close()
</code>
<code>
# cistopic_obj.cell_data
</code>
<code>
# # Prepare RNA
# from loomxpy.loomxpy import SCopeLoom
# from pycisTopic.loom import *
# import itertools
# import anndata
# import scanpy as sc
# rna_anndata = sc.read('/nfs/team292/vl6/FetalReproductiveTract/mullerian_mese_late_downsampled.h5ad')
# rna_anndata
</code>
<code>
# rna_anndata.obs['mese_mullerian_highres'].value_counts()
</code>
<code>
# # Recode RNA
# recode = {'Mesenchymal_FallopianTube_late' : 'MesenchymalFallopianTubeLate', 'Mesenchymal_Uterus_late' : 'MesenchymalUterusLate', 'Mesenchymal_FallopianTube_early' : 'MesenchymalFallopianTubeEarly', 'Mesenchymal_Uterus_early' : 'MesenchymalUterusEarly',
# 'Mesenchymal_MüllerianDuct' : 'MesenchymalMüllerianDuct'}
</code>
<code>
# rna_anndata.obs['mese_mullerian_highres'] = rna_anndata.obs['mese_mullerian_highres'].map(recode)
</code>
<code>
# rna_anndata = anndata.AnnData(X = rna_anndata.raw.X, var = rna_anndata.raw.var, obs = rna_anndata.obs)
</code>
<code>
# rna_anndata.obs['mese_mullerian_highres'].value_counts()
</code>
<code>
# # Prepare ATAC
# import pickle
# infile = open(outDir + 'DAGs/Gene_activity.pkl', 'rb') #Here I am using pycisTopic gene activity matrix, but could be any :)
# gene_act = pickle.load(infile)
# infile.close()
# atac_anndata = anndata.AnnData(X=gene_act.mtx.T, obs=pd.DataFrame(index=gene_act.cell_names), var=pd.DataFrame(index=gene_act.feature_names))
# atac_anndata.obs = cistopic_obj.cell_data
</code>
<code>
# atac_anndata
</code>
<code>
# atac_anndata.obs['mese_mullerian_highres'].value_counts()
</code>
<code>
# from pycisTopic.label_transfer import *
# label_dict = label_transfer(rna_anndata,
# atac_anndata,
# labels_to_transfer = ['mese_mullerian_highres'],
# variable_genes = True,
# methods = ['ingest', 'harmony', 'bbknn', 'scanorama', 'cca'],
# return_label_weights = False,
# #_temp_dir= ''
# )
</code>
<code>
# label_dict_x=[label_dict[key] for key in label_dict.keys()]
# label_pd = pd.concat(label_dict_x, axis=1, sort=False)
# label_pd.index = cistopic_obj.cell_names
# label_pd.columns = ['pycisTopic_' + x for x in label_pd.columns]
# cistopic_obj.add_cell_data(label_pd, split_pattern = '-')
</code>
<code>
# from pycisTopic.clust_vis import *
# plot_metadata(cistopic_obj,
# reduction_name='harmony_tSNE',
# variables= label_pd.columns.to_list(),
# remove_nan=True,
# cmap=cm.viridis,
# seed=555,
# num_columns=3,
# color_dictionary={},
# save= outDir + 'DAGs/label_transfer.pdf')
</code>
### pycisTarget
<code>
outDir
</code>
<code>
# Load region binarized topics
import pickle
infile = open(outDir+'topic_binarization/binarized_topic_region.pkl', 'rb')
binarized_topic_region = pickle.load(infile)
infile.close()
# Load DARs
import pickle
infile = open(outDir+'DARs/DARs.pkl', 'rb')
DARs_dict = pickle.load(infile)
infile.close()
# Format region sets
import re
import pyranges as pr
from pycistarget.utils import *
region_sets = {}
region_sets['Topics'] = {key: pr.PyRanges(region_names_to_coordinates(binarized_topic_region[key].index.tolist())) for key in binarized_topic_region.keys()}
region_sets['DARs'] = {re.sub('[^A-Za-z0-9]+', '_', key): pr.PyRanges(region_names_to_coordinates(DARs_dict[key].index.tolist())) for key in DARs_dict.keys()}
</code>
<code>
outDir
</code>
<code>
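# Sanity check on the path length: ray/plasma_store sockets are UNIX domain sockets, whose paths are limited to roughly 107 characters on Linux, so overly long temp directories can fail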
len('/lustre/scratch126/cellgen/team292/vl6/tmp/session_2023-01-23_22-04-33_639589_39002/sockets/plasma_store')
</code>
<code>
# Run pycistarget
# run_without_promoters = True, will run the methods in all regions + the region sets without promoters
import os
os.chdir('/nfs/team292/vl6/scenicplus/src/')
from scenicplus.wrappers.run_pycistarget import *
run_pycistarget(region_sets,
ctx_db_path = '/nfs/team292/vl6/scenicplus/hg38_screen_v10_clust.regions_vs_motifs.rankings.feather',
species = 'homo_sapiens',
save_path = '/lustre/scratch126/cellgen/team292/vl6/pycistarget/mullerian_mese_withvagina_post9pcw/',
dem_db_path = '/nfs/team292/vl6/scenicplus/hg38_screen_v10_clust.regions_vs_motifs.scores.feather',
run_without_promoters = True,
biomart_host = 'http://www.ensembl.org',
promoter_space = 500,
ctx_auc_threshold = 0.005,
ctx_nes_threshold = 3.0,
ctx_rank_threshold = 0.05,
dem_log2fc_thr = 0.5,
dem_motif_hit_thr = 3.0,
dem_max_bg_regions = 500,
path_to_motif_annotations = '/nfs/team292/vl6/scenicplus/motifs-v10nr_clust-nr.hgnc-m0.001-o0.0.tbl',
annotation_version = 'v10nr_clust',
annotation = ['Direct_annot', 'Orthology_annot'],
n_cpu = 1,
#_temp_dir = '/lustre/scratch126/cellgen/team292/vl6/pycistarget/temp/'
)
</code>
<code>
save_path = '/lustre/scratch126/cellgen/team292/vl6/pycistarget/mullerian_mese_withvagina_post9pcw/'
</code>
<code>
import dill
import os
menr = dill.load(open(os.path.join(save_path, 'menr.pkl'), 'rb'))
</code>
<code>
menr.keys()
</code>
<code>
outDir = '/lustre/scratch126/cellgen/team292/vl6/pycistopic/mullerian_mese_withvagina_post9pcw/'
outDir
</code>
### Infer eGRNs
<code>
import dill
import scanpy as sc
import os
import warnings
warnings.filterwarnings("ignore")
import pandas
import pyranges
# Set stderr to null to avoid strange messages from ray
import sys
adata = sc.read_h5ad('/nfs/team292/vl6/FetalReproductiveTract/mullerian_mese_late_post10pcw.h5ad')
cistopic_obj = dill.load(open(os.path.join(outDir, 'cisTopicObject_clean.pkl'), 'rb'))
</code>
<code>
import dill
import scanpy as sc
import os
import warnings
warnings.filterwarnings("ignore")
import pandas
import pyranges
# Set stderr to null to avoid strange messages from ray
import sys
cistopic_obj = dill.load(open(os.path.join(outDir, 'cisTopicObject_clean.pkl'), 'rb'))
cistopic_obj
</code>
<code>
cell_metadata = cistopic_obj.cell_data
</code>
<code>
cell_metadata.head()
</code>
<code>
cell_metadata.to_csv(outDir + "cell_metadata_for_cicero.csv")
</code>
<code>
peak_metadata = cistopic_obj.region_data
peak_metadata.head()
</code>
<code>
peak_metadata.to_csv(outDir + "region_metadata_for_cicero.csv")
</code>
<code>
cell_metadata.shape
</code>
<code>
lowdim = pd.DataFrame(index=cell_metadata.index, columns=['tsne1', 'tsne2'])
</code>
<code>
lowdim['tsne1'] = lowdim.index.map(cistopic_obj.projections['cell']['harmony_tSNE']['tSNE_1'].to_dict())
lowdim['tsne2'] = lowdim.index.map(cistopic_obj.projections['cell']['harmony_tSNE']['tSNE_2'].to_dict())
</code>
<code>
lowdim.head()
</code>
<code>
lowdim.to_csv(outDir + "tsne_harmony_for_cicero.csv")
</code>
<code>
from scipy.io import mmwrite
</code>
<code>
count_matrix = cistopic_obj.binary_matrix
count_matrix.shape
</code>
<code>
mmwrite(outDir + 'fragment_matrix_for_cicero.mtx', count_matrix)
</code>
<code>
outDir
</code>
<code>
adata.X[20:30, 20:30].toarray()
</code>
<code>
adata.raw.X.shape
</code>
<code>
# Find common genes between adata.raw and adata
common_genes = adata.var_names.intersection(adata.raw.var_names)
# Subset adata.raw to include only the common genes
adata_raw_common = adata.raw[:, common_genes]
</code>
<code>
adata_raw_common.shape
</code>
<code>
adata.layers["raw_count"] = adata_raw_common.X.copy()
</code>
<code>
adata.layers["raw_count"][20:25, 20:25].toarray()
</code>
<code>
adata.obs.head()
</code>
<code>
adata.var['highly_variable'].value_counts()
</code>
<code>
import pickle
# Load the list from the file
with open('/lustre/scratch126/cellgen/team292/vl6/VISIUM/tot_spatially_variable_genes_mullerian_mese.pkl', 'rb') as f:
spatially_variable_genes = pickle.load(f)
print(len(spatially_variable_genes))
</code>
## Take the union of HVGs and spatially variable genes for CellOracle modelling
<code>
# Step 1: Extract the genes that are highly variable
highly_variable_genes = adata.var_names[adata.var['highly_variable'] == True]
# Step 2: Take the union with the spatially variable genes loaded above
# Convert the spatially variable gene list to a set for the union operation
genes_union = set(highly_variable_genes).union(set(spatially_variable_genes))
# Step 3: Convert back to list (optional) and print the result
genes_union_list = list(genes_union)
print(len(genes_union_list))
</code>
<code>
adata.obs['mese_mullerian_lowres'].value_counts()
</code>
<code>
# Recode RNA
recode = {'Fallopian Mese' : 'FallopianMese',
'Uterus Mese' : 'UterusMese',
'Cervix Mese' : 'CervixMese',
'Upper Vagina Mese' : 'UpperVaginaMese'}
adata.obs['mese_mullerian_lowres'] = adata.obs['mese_mullerian_lowres'].map(recode)
</code>
<code>
adata.obs['mese_mullerian_lowres'].value_counts(dropna = False)
</code>
<code>
sc.pl.umap(adata, color = 'mese_mullerian_lowres')
</code>
<code>
adata.obs['mese_mullerian_lowres'].value_counts()
</code>
<code>
# Random downsampling per cell type
import random
import pandas as pd
from itertools import chain
def downsample(adata, labels, n):
myindex = adata.obs[labels].value_counts().index
myvalues = adata.obs[labels].value_counts().values
clusters = pd.Series(myvalues, index = myindex)
# Find clusters with > n cells
cl2downsample = clusters.index[ clusters.values > n ]
# save all barcode ids from small clusters
holder = []
holder.append( adata.obs_names[[ i not in cl2downsample for i in adata.obs[labels] ]] )
# randomly sample n cells in the cl2downsample
for cl in cl2downsample:
print(cl)
cl_sample = adata[[ i == cl for i in adata.obs[labels]]].obs_names
# n = int(round(len(cl_sample)/2, 0))
if cl == 'Mese_ExtraGonad':
cl_downsample = random.sample(set(cl_sample), 9000 )
else:
cl_downsample = random.sample(set(cl_sample), n )
holder.append(cl_downsample)
# samples to include
samples = list(chain(*holder))
# Filter adata_count
adata = adata[[ i in samples for i in adata.obs_names ]]
return adata
</code>
<code>
adata = downsample(adata, 'mese_mullerian_lowres', 2000)
</code>
<code>
sc.pl.umap(adata, color = 'mese_mullerian_lowres')
</code>
<code>
adata.shape
</code>
<code>
adata = adata[:, genes_union_list]
adata.shape
</code>
<code>
adata.write(outDir + 'scrnaseq_for_celloracle.h5ad')
</code>
<code>
to_del = ['GeneID-0', 'GeneName-0', 'n_cells-0', 'GeneID-1', 'GeneName-1', 'n_cells-1', 'GeneID-10', 'GeneName-10', 'n_cells-10', 'GeneID-11', 'GeneName-11', 'n_cells-11', 'GeneID-12', 'GeneName-12', 'n_cells-12', 'GeneID-13', 'GeneName-13', 'n_cells-13', 'GeneID-14', 'GeneName-14', 'n_cells-14', 'GeneID-15', 'GeneName-15', 'n_cells-15', 'GeneID-16', 'GeneName-16', 'n_cells-16', 'GeneID-17', 'GeneName-17', 'n_cells-17', 'GeneID-18', 'GeneName-18', 'n_cells-18', 'GeneID-19', 'GeneName-19', 'n_cells-19', 'GeneID-2', 'GeneName-2', 'n_cells-2', 'GeneID-20', 'GeneName-20', 'n_cells-20', 'GeneID-21', 'GeneName-21', 'n_cells-21', 'GeneID-22', 'GeneName-22', 'n_cells-22', 'GeneID-23', 'GeneName-23', 'n_cells-23', 'GeneID-24', 'GeneName-24', 'n_cells-24', 'GeneID-25', 'GeneName-25', 'n_cells-25', 'GeneID-26', 'GeneName-26', 'n_cells-26', 'GeneID-27', 'GeneName-27', 'n_cells-27', 'GeneID-28', 'GeneName-28', 'n_cells-28', 'GeneID-29', 'GeneName-29', 'n_cells-29', 'GeneID-3', 'GeneName-3', 'n_cells-3', 'GeneID-30', 'GeneName-30', 'n_cells-30', 'GeneID-31', 'GeneName-31', 'n_cells-31', 'GeneID-32', 'GeneName-32', 'n_cells-32', 'GeneID-33', 'GeneName-33', 'n_cells-33', 'GeneID-34', 'GeneName-34', 'n_cells-34', 'GeneID-35', 'GeneName-35', 'n_cells-35', 'GeneID-36', 'GeneName-36', 'n_cells-36', 'GeneID-37', 'GeneName-37', 'n_cells-37', 'GeneID-38', 'GeneName-38', 'n_cells-38', 'GeneID-39', 'GeneName-39', 'n_cells-39', 'GeneID-4', 'GeneName-4', 'n_cells-4', 'GeneID-40', 'GeneName-40', 'n_cells-40', 'GeneID-41', 'GeneName-41', 'n_cells-41', 'GeneID-42', 'GeneName-42', 'n_cells-42', 'GeneID-43', 'GeneName-43', 'n_cells-43', 'GeneID-44', 'GeneName-44', 'n_cells-44', 'GeneID-45', 'GeneName-45', 'n_cells-45', 'GeneID-46', 'GeneName-46', 'n_cells-46', 'GeneID-47', 'GeneName-47', 'n_cells-47', 'GeneID-48', 'GeneName-48', 'n_cells-48', 'GeneID-49', 'GeneName-49', 'n_cells-49', 'GeneID-5', 'GeneName-5', 'n_cells-5', 'GeneID-50', 'GeneName-50', 'n_cells-50', 'GeneID-51', 'GeneName-51', 'n_cells-51', 'GeneID-52', 'GeneName-52', 'n_cells-52', 'GeneID-53', 'GeneName-53', 'n_cells-53', 'GeneID-54', 'GeneName-54', 'n_cells-54', 'GeneID-55', 'GeneName-55', 'n_cells-55', 'GeneID-56', 'GeneName-56', 'n_cells-56', 'GeneID-57', 'GeneName-57', 'n_cells-57', 'GeneID-58', 'GeneName-58', 'n_cells-58', 'GeneID-59', 'GeneName-59', 'n_cells-59', 'GeneID-6', 'GeneName-6', 'n_cells-6', 'GeneID-60', 'GeneName-60', 'n_cells-60', 'GeneID-61', 'GeneName-61', 'n_cells-61', 'GeneID-62', 'GeneName-62', 'n_cells-62', 'GeneID-63', 'GeneName-63', 'n_cells-63', 'GeneID-64', 'GeneName-64', 'n_cells-64', 'GeneID-65', 'GeneName-65', 'n_cells-65', 'GeneID-66', 'GeneName-66', 'n_cells-66', 'GeneID-67', 'GeneName-67', 'n_cells-67', 'GeneID-68', 'GeneName-68', 'n_cells-68', 'GeneID-69', 'GeneName-69', 'n_cells-69', 'GeneID-7', 'GeneName-7', 'n_cells-7', 'GeneID-70', 'GeneName-70', 'n_cells-70', 'GeneID-71', 'GeneName-71', 'n_cells-71', 'GeneID-72', 'GeneName-72', 'n_cells-72', 'GeneID-73', 'GeneName-73', 'n_cells-73', 'GeneID-74', 'GeneName-74', 'n_cells-74', 'GeneID-75', 'GeneName-75', 'n_cells-75', 'GeneID-76', 'GeneName-76', 'n_cells-76', 'GeneID-77', 'GeneName-77', 'n_cells-77', 'GeneID-78', 'GeneName-78', 'n_cells-78', 'GeneID-79', 'GeneName-79', 'n_cells-79', 'GeneID-8', 'GeneName-8', 'n_cells-8', 'gene_ids-80', 'feature_types-80', 'gene_ids-81', 'feature_types-81', 'gene_ids-82', 'feature_types-82', 'gene_ids-83', 'feature_types-83', 'gene_ids-84', 'feature_types-84', 'gene_ids-85', 'feature_types-85', 'gene_ids-86', 'feature_types-86', 
'gene_ids-87', 'feature_types-87', 'gene_ids-88', 'feature_types-88', 'gene_ids-89', 'feature_types-89', 'GeneID-9', 'GeneName-9', 'n_cells-9']
for d in to_del:
del adata.var[d]
</code>
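The explicit column list above works, but the same per-batch metadata columns could also be dropped by pattern. The sketch below is a hypothetical alternative, assuming the `<name>-<batch>` suffix convention seen in the list; it is not part of the original run.
<code>
# Hypothetical alternative: drop per-batch .var columns by prefix pattern instead of enumerating them
import re
pattern = re.compile(r'^(GeneID|GeneName|n_cells|gene_ids|feature_types)-\d+$')
batch_cols = [c for c in adata.var.columns if pattern.match(c)]
adata.var.drop(columns=batch_cols, inplace=True)
</code>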
<code>
adata
</code>
<code>
import anndata
adata = anndata.AnnData(X = adata.raw.X, var = adata.raw.var, obs = adata.obs)
</code>
<code>
str(cistopic_obj)
</code>
<code>
cistopic_obj.cell_data.head()
</code>
<code>
cistopic_obj.cell_data.columns
</code>
<code>
cistopic_obj.region_data.head()
</code>
<code>
import pickle
infile = open(outDir + 'DARs/Imputed_accessibility.pkl', 'rb')
imputed_acc_obj = pickle.load(infile)
infile.close()
</code>
<code>
imputed_acc_obj
</code>
<code>
from scenicplus.scenicplus_class import create_SCENICPLUS_object
import numpy as np
scplus_obj = create_SCENICPLUS_object(
GEX_anndata = adata,
cisTopic_obj = cistopic_obj,
imputed_acc_obj = imputed_acc_obj,
menr = menr,
multi_ome_mode = False,
nr_cells_per_metacells = 20,
key_to_group_by = 'mese_mullerian_lowres')
</code>
<code>
print(scplus_obj)
</code>
<code>
from scenicplus.preprocessing.filtering import *
</code>
<code>
filter_genes(scplus_obj, min_pct = 10)
filter_regions(scplus_obj, min_pct = 10)
</code>
<code>
# Merge cistromes (all)
from scenicplus.cistromes import *
import time
start_time = time.time()
merge_cistromes(scplus_obj)
elapsed = time.time() - start_time
print(elapsed / 60)
</code>
<code>
ensembl_version_dict = {'105': 'http://www.ensembl.org',
'104': 'http://may2021.archive.ensembl.org/',
'103': 'http://feb2021.archive.ensembl.org/',
'102': 'http://nov2020.archive.ensembl.org/',
'101': 'http://aug2020.archive.ensembl.org/',
'100': 'http://apr2020.archive.ensembl.org/',
'99': 'http://jan2020.archive.ensembl.org/',
'98': 'http://sep2019.archive.ensembl.org/',
'97': 'http://jul2019.archive.ensembl.org/',
'96': 'http://apr2019.archive.ensembl.org/',
'95': 'http://jan2019.archive.ensembl.org/',
'94': 'http://oct2018.archive.ensembl.org/',
'93': 'http://jul2018.archive.ensembl.org/',
'92': 'http://apr2018.archive.ensembl.org/',
'91': 'http://dec2017.archive.ensembl.org/',
'90': 'http://aug2017.archive.ensembl.org/',
'89': 'http://may2017.archive.ensembl.org/',
'88': 'http://mar2017.archive.ensembl.org/',
'87': 'http://dec2016.archive.ensembl.org/',
'86': 'http://oct2016.archive.ensembl.org/',
'80': 'http://may2015.archive.ensembl.org/',
'77': 'http://oct2014.archive.ensembl.org/',
'75': 'http://feb2014.archive.ensembl.org/',
'54': 'http://may2009.archive.ensembl.org/'}
import pybiomart as pbm
def test_ensembl_host(scplus_obj, host, species):
dataset = pbm.Dataset(name=species+'_gene_ensembl', host=host)
annot = dataset.query(attributes=['chromosome_name', 'transcription_start_site', 'strand', 'external_gene_name', 'transcript_biotype'])
annot.columns = ['Chromosome', 'Start', 'Strand', 'Gene', 'Transcript_type']
annot['Chromosome'] = annot['Chromosome'].astype('str')
filter = annot['Chromosome'].str.contains('CHR|GL|JH|MT')
annot = annot[~filter]
annot.columns=['Chromosome', 'Start', 'Strand', 'Gene', 'Transcript_type']
gene_names_release = set(annot['Gene'].tolist())
ov=len([x for x in scplus_obj.gene_names if x in gene_names_release])
print('Genes recovered: ' + str(ov) + ' out of ' + str(len(scplus_obj.gene_names)))
return ov
n_overlap = {}
for version in ensembl_version_dict.keys():
print(f'host: {version}')
try:
n_overlap[version] = test_ensembl_host(scplus_obj, ensembl_version_dict[version], 'hsapiens')
except:
print('Host not reachable')
v = sorted(n_overlap.items(), key=lambda item: item[1], reverse=True)[0][0]
print(f"version: {v} has the largest overlap, use {ensembl_version_dict[v]} as biomart host")
</code>
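The loop above prints the release with the best gene-name overlap; instead of hard-coding the host further down, it could in principle be taken straight from the dictionary (a small sketch, equivalent to the manual assignment used later):
<code>
# Optional: set the biomart host programmatically from the best-overlapping Ensembl release
biomart_host = ensembl_version_dict[v]
print(biomart_host)
</code>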
<code>
tf_file = '/nfs/team292/vl6/scenicplus/allTFs_hg38.txt'
# Open the file in read mode
with open(tf_file, 'r') as file:
# Read lines from the file and remove newline characters
tfs = [line.strip() for line in file.readlines()]
</code>
<code>
len(tfs)
</code>
<code>
"ESRRG" in tfs
</code>
<code>
# tfs = [t for t in tfs if not t.startswith("ZNF")]
</code>
<code>
# len(tfs)
</code>
<code>
# # Specify the file path
# file_path = '/nfs/team292/vl6/scenicplus/nonZNF_TFs_hg38.txt'
# # Open the file in write mode
# with open(file_path, 'w') as file:
# # Write each element of the list followed by a newline character
# for element in tfs:
# file.write(element + '\n')
</code>
<code>
biomart_host = "http://sep2019.archive.ensembl.org/"
</code>
<code>
from scenicplus.enhancer_to_gene import get_search_space, calculate_regions_to_genes_relationships, GBM_KWARGS
get_search_space(scplus_obj,
biomart_host = biomart_host,
species = 'hsapiens',
assembly = 'hg38',
upstream = [1000, 150000],
downstream = [1000, 150000])
</code>
<code>
calculate_regions_to_genes_relationships(scplus_obj,
ray_n_cpu = 20,
#_temp_dir = tmpDir,
importance_scoring_method = 'GBM',
importance_scoring_kwargs = GBM_KWARGS)
</code>
<code>
# Save
import pickle
with open(outDir + 'scplus_obj.pkl', 'wb') as f:
pickle.dump(scplus_obj, f)
</code>
<code>
import pickle
infile = open(outDir + 'scplus_obj.pkl', 'rb')
scplus_obj = pickle.load(infile)
infile.close()
</code>
<code>
print(scplus_obj)
</code>
<code>
scplus_obj.uns.keys()
</code>
<code>
def timestamp(dt):
return f"{dt.year}{dt.month}{dt.day}_{dt.hour}{dt.minute}{dt.second}"
</code>
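For reference, the helper above concatenates the datetime fields without zero-padding, so it is suitable for generating unique temp-file names within a session but not for lexicographic sorting. A small usage sketch with a hypothetical date:
<code>
from datetime import datetime
print(timestamp(datetime(2023, 1, 23, 22, 4, 33)))  # -> '2023123_22433'
</code>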
<code>
"""Link transcription factors (TFs) to genes based on co-expression of TF and target genes.
Both linear methods (spearman or pearson correlation) and non-linear methods (random forrest or gradient boosting) are used to link TF to genes.
The correlation methods are used to seperate TFs which are infered to have a positive influence on gene expression (i.e. positive correlation)
and TFs which are infered to have a negative influence on gene expression (i.e. negative correlation).
"""
import logging
import os
import shutil
import sys
import tempfile
import time
from datetime import datetime
import joblib
import numpy as np
import pandas as pd
import scipy.sparse
from arboreto.algo import _prepare_input
from arboreto.core import (EARLY_STOP_WINDOW_LENGTH, RF_KWARGS, SGBM_KWARGS,
infer_partial_network, to_tf_matrix)
from arboreto.utils import load_tf_names
from tqdm import tqdm
from scenicplus.scenicplus_class import SCENICPLUS
from scenicplus.utils import _create_idx_pairs, masked_rho4pairs
COLUMN_NAME_TARGET = "target"
COLUMN_NAME_WEIGHT = "importance"
COLUMN_NAME_REGULATION = "regulation"
COLUMN_NAME_CORRELATION = "rho"
COLUMN_NAME_TF = "TF"
COLUMN_NAME_SCORE_1 = "importance_x_rho"
COLUMN_NAME_SCORE_2 = "importance_x_abs_rho"
RHO_THRESHOLD = 0.03
# Create logger
level = logging.INFO
format = '%(asctime)s %(name)-12s %(levelname)-8s %(message)s'
handlers = [logging.StreamHandler(stream=sys.stdout)]
logging.basicConfig(level=level, format=format, handlers=handlers)
log = logging.getLogger('TF2G')
def _inject_TF_as_its_own_target(
scplus_obj: SCENICPLUS = None,
TF2G_adj: pd.DataFrame = None,
ex_mtx: pd.DataFrame = None,
rho_threshold = RHO_THRESHOLD,
TF2G_key = 'TF2G_adj',
out_key = 'TF2G_adj',
inplace = True,
increase_importance_by = 0.00001) -> pd.DataFrame:
    if scplus_obj is None and TF2G_adj is None:
        raise ValueError('Either provide a SCENIC+ object or a pd.DataFrame with TF to gene adjacencies!')
    if scplus_obj is not None and TF2G_adj is not None:
        raise ValueError('Either provide a SCENIC+ object or a pd.DataFrame with TF to gene adjacencies, not both!')
    log.info(f"Warning: adding TFs as their own target to the adjacencies matrix. Importance values will be max + {increase_importance_by}")
origin_TF2G_adj = scplus_obj.uns[TF2G_key] if scplus_obj is not None else TF2G_adj
ex_mtx = scplus_obj.to_df(layer='EXP') if scplus_obj is not None else ex_mtx
origin_TF2G_adj = origin_TF2G_adj.sort_values('TF')
max_importances = origin_TF2G_adj.groupby('TF').max()['importance']
TFs_in_adj = list(set(origin_TF2G_adj['TF'].to_list()))
TF_to_TF_adj = pd.DataFrame(
data = {"TF": TFs_in_adj,
"target": TFs_in_adj,
"importance": max_importances.loc[TFs_in_adj] + increase_importance_by})
TF_to_TF_adj = _add_correlation(
adjacencies=TF_to_TF_adj,
ex_mtx = ex_mtx,
rho_threshold=rho_threshold)
new_TF2G_adj = pd.concat([origin_TF2G_adj, TF_to_TF_adj]).reset_index(drop = True)
if inplace:
scplus_obj.uns[out_key] = new_TF2G_adj
return None
else:
return new_TF2G_adj
def load_TF2G_adj_from_file(SCENICPLUS_obj: SCENICPLUS,
f_adj: str,
inplace=True,
key='TF2G_adj',
rho_threshold=RHO_THRESHOLD):
"""
Function to load TF2G adjacencies from file
Parameters
----------
SCENICPLUS_obj
An instance of :class:`~scenicplus.scenicplus_class.SCENICPLUS`
f_adj
File path to TF2G adjacencies matrix
inplace
        Boolean specifying whether or not to store the adjacencies matrix in `SCENICPLUS_obj` under the slot .uns[key].
        Default: True
    key
        String specifying where in the .uns slot to store the adjacencies matrix in `SCENICPLUS_obj`
        Default: "TF2G_adj"
rho_threshold
A floating point number specifying from which absolute value to consider a correlation coefficient positive or negative.
Default: 0.03
"""
log.info(f'Reading file: {f_adj}')
df_TF_gene_adj = pd.read_csv(f_adj, sep='\t')
# only keep relevant entries
idx_to_keep = np.logical_and(np.array([tf in SCENICPLUS_obj.gene_names for tf in df_TF_gene_adj['TF']]),
np.array([gene in SCENICPLUS_obj.gene_names for gene in df_TF_gene_adj['target']]))
df_TF_gene_adj_subset = df_TF_gene_adj.loc[idx_to_keep]
if COLUMN_NAME_CORRELATION not in df_TF_gene_adj_subset.columns:
log.info('Adding correlation coefficients to adjacencies.')
df_TF_gene_adj_subset = _add_correlation(
adjacencies=df_TF_gene_adj_subset,
ex_mtx=SCENICPLUS_obj.to_df(layer='EXP'),
rho_threshold=rho_threshold)
df_TF_gene_adj_subset = _inject_TF_as_its_own_target(
TF2G_adj=df_TF_gene_adj_subset,
inplace = False,
ex_mtx = SCENICPLUS_obj.to_df(layer='EXP'))
if COLUMN_NAME_SCORE_1 not in df_TF_gene_adj_subset.columns:
log.info('Adding importance x rho scores to adjacencies.')
df_TF_gene_adj_subset[COLUMN_NAME_SCORE_1] = df_TF_gene_adj_subset[COLUMN_NAME_CORRELATION] * \
df_TF_gene_adj_subset[COLUMN_NAME_WEIGHT]
if COLUMN_NAME_SCORE_2 not in df_TF_gene_adj_subset.columns:
log.info('Adding importance x |rho| scores to adjacencies.')
df_TF_gene_adj_subset[COLUMN_NAME_SCORE_2] = abs(
df_TF_gene_adj_subset[COLUMN_NAME_CORRELATION]) * abs(df_TF_gene_adj_subset[COLUMN_NAME_WEIGHT])
if inplace:
log.info(f'Storing adjacencies in .uns["{key}"].')
SCENICPLUS_obj.uns[key] = df_TF_gene_adj_subset
else:
return df_TF_gene_adj_subset
def _add_correlation(
adjacencies: pd.DataFrame,
ex_mtx: pd.DataFrame,
rho_threshold=RHO_THRESHOLD,
mask_dropouts=False):
"""
Add correlation in expression levels between target and factor.
Parameters
----------
adjacencies: pd.DataFrame
The dataframe with the TF-target links.
ex_mtx: pd.DataFrame
The expression matrix (n_cells x n_genes).
rho_threshold: float
The threshold on the correlation to decide if a target gene is activated
(rho > `rho_threshold`) or repressed (rho < -`rho_threshold`).
mask_dropouts: boolean
Do not use cells in which either the expression of the TF or the target gene is 0 when
calculating the correlation between a TF-target pair.
Returns
-------
The adjacencies dataframe with an extra column.
"""
assert rho_threshold > 0, "rho_threshold should be greater than 0."
# Calculate Pearson correlation to infer repression or activation.
if mask_dropouts:
ex_mtx = ex_mtx.sort_index(axis=1)
col_idx_pairs = _create_idx_pairs(adjacencies, ex_mtx)
rhos = masked_rho4pairs(ex_mtx.values, col_idx_pairs, 0.0)
else:
genes = list(set(adjacencies[COLUMN_NAME_TF]).union(
set(adjacencies[COLUMN_NAME_TARGET])))
ex_mtx = ex_mtx[ex_mtx.columns[ex_mtx.columns.isin(genes)]]
corr_mtx = pd.DataFrame(
index=ex_mtx.columns, columns=ex_mtx.columns, data=np.corrcoef(ex_mtx.values.T))
rhos = np.array([corr_mtx[s2][s1]
for s1, s2 in zip(adjacencies.TF, adjacencies.target)])
regulations = (rhos > rho_threshold).astype(
int) - (rhos < -rho_threshold).astype(int)
return pd.DataFrame(
data={
COLUMN_NAME_TF: adjacencies[COLUMN_NAME_TF].values,
COLUMN_NAME_TARGET: adjacencies[COLUMN_NAME_TARGET].values,
COLUMN_NAME_WEIGHT: adjacencies[COLUMN_NAME_WEIGHT].values,
COLUMN_NAME_REGULATION: regulations,
COLUMN_NAME_CORRELATION: rhos,
}
)
def calculate_TFs_to_genes_relationships(scplus_obj: SCENICPLUS,
tf_file: str,
method: str = 'GBM',
n_cpu: int = 1,
key: str = 'TF2G_adj',
temp_dir = None):
"""
A function to calculate TF to gene relationships using arboreto and correlation
Parameters
----------
scplus_obj
An instance of :class:`~scenicplus.scenicplus_class.SCENICPLUS`
tf_file
        Path to a file specifying which genes are TFs
method
Whether to use Gradient Boosting Machines (GBM) or random forest (RF)
n_cpu
Number of cpus to use
key
String specifying where in the .uns slot to store the adjacencies matrix in :param:`SCENICPLUS_obj`
default: "TF2G_adj"
    temp_dir
        Path to a temporary directory used to memory-map the expression and TF matrices (defaults to /dev/shm if writable, otherwise the system temp dir)
"""
if(method == 'GBM'):
method_params = [
'GBM', # regressor_type
SGBM_KWARGS # regressor_kwargs
]
elif(method == 'RF'):
method_params = [
'RF', # regressor_type
RF_KWARGS # regressor_kwargs
]
gene_names = list(scplus_obj.gene_names)
if len(set(gene_names)) != len(gene_names):
raise ValueError("scplus_obj contains duplicate gene names!")
ex_matrix = scplus_obj.X_EXP
tf_names = load_tf_names(tf_file)
ex_matrix, gene_names, tf_names = _prepare_input(
ex_matrix, gene_names, tf_names)
tf_matrix, tf_matrix_gene_names = to_tf_matrix(
ex_matrix, gene_names, tf_names)
#convert ex_matrix, tf_matrix to np.array if necessary
if isinstance(ex_matrix, np.matrix):
ex_matrix = np.array(ex_matrix)
elif scipy.sparse.issparse(ex_matrix):
ex_matrix = ex_matrix.toarray()
if isinstance(tf_matrix, np.matrix):
tf_matrix = np.array(tf_matrix)
elif scipy.sparse.issparse(tf_matrix):
tf_matrix = tf_matrix.toarray()
log.info('Calculating TF-to-gene importance')
start_time = time.time()
if temp_dir is None:
if os.access('/dev/shm', os.W_OK):
temp_dir = '/dev/shm'
else:
temp_dir = tempfile.gettempdir()
dt = datetime.now()
joblib.dump(
ex_matrix,
os.path.join(temp_dir, f'scenicplus_ex_matrix_{timestamp(dt)}'))
joblib.dump(
tf_matrix,
os.path.join(temp_dir, f'scenicplus_tf_matrix_{timestamp(dt)}'))
ex_matrix_memmap = joblib.load(
os.path.join(temp_dir, f'scenicplus_ex_matrix_{timestamp(dt)}'),
mmap_mode = 'r')
tf_matrix_memmap = joblib.load(
os.path.join(temp_dir, f'scenicplus_tf_matrix_{timestamp(dt)}'),
mmap_mode = 'r')
def pf_inter_partial_network(target_gene_name):
return infer_partial_network(
target_gene_name = target_gene_name,
target_gene_expression = ex_matrix_memmap[
:, gene_names.index(target_gene_name)],
regressor_type = method_params[0],
regressor_kwargs = method_params[1],
tf_matrix = tf_matrix_memmap,
tf_matrix_gene_names = tf_matrix_gene_names,
include_meta = False,
early_stop_window_length = EARLY_STOP_WINDOW_LENGTH,
seed = 666)
def clean_shared_memory():
os.remove(os.path.join(temp_dir, f'scenicplus_ex_matrix_{timestamp(dt)}'))
os.remove(os.path.join(temp_dir, f'scenicplus_tf_matrix_{timestamp(dt)}'))
try:
TF_to_genes = joblib.Parallel(
n_jobs = n_cpu)(
joblib.delayed(pf_inter_partial_network)(gene)
for gene in tqdm(
gene_names,
total=len(gene_names),
desc=f'Running using {n_cpu} cores'))
except Exception as e:
clean_shared_memory()
raise Exception(e)
finally:
clean_shared_memory()
adj = pd.concat(TF_to_genes).sort_values(by='importance', ascending=False)
log.info('Took {} seconds'.format(time.time() - start_time))
start_time = time.time()
log.info('Adding correlation coefficients to adjacencies.')
ex_matrix = scplus_obj.to_df(layer = 'EXP')
adj = _add_correlation(adj, ex_matrix)
adj = _inject_TF_as_its_own_target(
TF2G_adj=adj,
inplace = False,
ex_mtx = scplus_obj.to_df(layer='EXP'))
log.info('Adding importance x rho scores to adjacencies.')
adj[COLUMN_NAME_SCORE_1] = adj[COLUMN_NAME_CORRELATION] * \
adj[COLUMN_NAME_WEIGHT]
adj[COLUMN_NAME_SCORE_2] = abs(
adj[COLUMN_NAME_CORRELATION]) * abs(adj[COLUMN_NAME_WEIGHT])
log.info('Took {} seconds'.format(time.time() - start_time))
scplus_obj.uns[key] = adj
</code>
<code>
#from scenicplus.TF_to_gene import *
tf_file = '/nfs/team292/vl6/scenicplus/allTFs_hg38.txt'
calculate_TFs_to_genes_relationships(scplus_obj,
tf_file = tf_file,
n_cpu = 20,
method = 'GBM',
key= 'TF2G_adj')
</code>
<code>
# Save
import pickle
with open(outDir + 'scplus_obj.pkl', 'wb') as f:
pickle.dump(scplus_obj, f)
</code>
<code>
import pickle
infile = open(outDir + 'scplus_obj.pkl', 'rb')
scplus_obj = pickle.load(infile)
infile.close()
</code>
<code>
outDir
</code>
<code>
scplus_obj.uns
</code>
<code>
from scenicplus.plotting import coverageplot
</code>
<code>
outDir
</code>
### Integrated multiome plot (not yet implemented)
Generate plots showing the chromatin profiles per group, region-to-gene relationships, and TF and gene expression, to test the following hypotheses (see the sketch after the gene list below):
* As the Müllerian epithelium can change identity based on the surrounding mesenchyme, we can check whether the genes associated with Fallopian Tube identity remain accessible despite not being expressed in the Uterus (and vice versa)
* As the Wolffian epithelium can change identity based on the surrounding mesenchyme, we can check whether the genes associated with Epididymis identity remain accessible despite not being expressed in the Uterus (and vice versa)
**Genes of interest**
* **DLX5** (uterus) = chr7:97,020,396-97,024,831
* **ERP27** (fallopian tube) = chr12:14,914,039-14,938,537
* **MSX1** (uterus) = chr4:4,859,665-4,863,936
* **WNT11** (uterus) = chr11:76,186,325-76,206,502
* **EMX1** (wolffian) = chr2:72,917,519-72,934,891
* **MARCH11** (wolffian) =
* **CALB1** (wolffian) = chr8:90,063,299-90,095,475
* **AVPR1A** (wolffian) = chr12:63,142,759-63,151,201
* **LEFTY1** (vas deferens) = chr1:225,886,282-225,889,146
* **CLDN2** (epididymis) = chrX:106,900,164-106,929,580
* **GLYAT** (epididymis) = chr11:58,708,757-58,731,943
* **SPAG11B** (epididymis) = chr8:7,450,603-7,463,542
* **SPINK2** (epididymis) = chr4:56,809,861-56,821,742
* **MGAM** (epididymis) = chr7:141,995,879-142,106,747
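Should this be implemented, the coordinates listed above could be collected into a dictionary of region strings to loop over when generating the coverage plots. A minimal sketch (MARCH11 is left out because its coordinates are not filled in above):
<code>
# Genes of interest and their coordinates, taken from the list above
regions_of_interest = {
    'DLX5': 'chr7:97020396-97024831',
    'ERP27': 'chr12:14914039-14938537',
    'MSX1': 'chr4:4859665-4863936',
    'WNT11': 'chr11:76186325-76206502',
    'EMX1': 'chr2:72917519-72934891',
    'CALB1': 'chr8:90063299-90095475',
    'AVPR1A': 'chr12:63142759-63151201',
    'LEFTY1': 'chr1:225886282-225889146',
    'CLDN2': 'chrX:106900164-106929580',
    'GLYAT': 'chr11:58708757-58731943',
    'SPAG11B': 'chr8:7450603-7463542',
    'SPINK2': 'chr4:56809861-56821742',
    'MGAM': 'chr7:141995879-142106747',
}
</code>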
<code>
scplus_obj.uns.keys()
</code>
<code>
# Load functions
from scenicplus.grn_builder.gsea_approach3 import build_grn
</code>
<code>
build_grn
</code>
<code>
build_grn(scplus_obj,
min_target_genes = 10,
adj_pval_thr = 1,
min_regions_per_gene = 0,
quantiles = (0.85, 0.90, 0.95),
top_n_regionTogenes_per_gene = (5, 10, 15),
top_n_regionTogenes_per_region = (),
binarize_using_basc = True,
rho_dichotomize_tf2g = True,
rho_dichotomize_r2g = True,
rho_dichotomize_eregulon = True,
rho_threshold = 0.05,
keep_extended_motif_annot = True,
merge_eRegulons = True,
order_regions_to_genes_by = 'importance',
order_TFs_to_genes_by = 'importance',
key_added = 'eRegulons_importance',
cistromes_key = 'Unfiltered',
disable_tqdm = False, #If running in notebook, set to True
ray_n_cpu = 20,
#_temp_dir = '/lustre/scratch117/cellgen/team292/vl6/'
)
</code>
<code>
import dill
with open(outDir + 'scplus_obj2.pkl', 'wb') as f:
dill.dump(scplus_obj, f)
</code>
<code>
3+4
</code>
<code>
import dill
infile = open(outDir + 'scplus_obj2.pkl', 'rb')
scplus_obj = dill.load(infile)
infile.close()
</code>
<code>
print(scplus_obj)
</code>
<code>
scplus_obj.uns.keys()
</code>
<code>
from scenicplus.utils import format_egrns
format_egrns(scplus_obj, eregulons_key = 'eRegulons_importance', TF2G_key = 'TF2G_adj', key_added = 'eRegulon_metadata')
</code>
<code>
scplus_obj.uns['eRegulon_metadata'][40:50]
</code>
<code>
len(scplus_obj.uns['eRegulons_importance'])
</code>
<code>
# Format eRegulons
from scenicplus.eregulon_enrichment import *
get_eRegulons_as_signatures(scplus_obj, eRegulon_metadata_key='eRegulon_metadata', key_added='eRegulon_signatures')
</code>
<code>
## Score chromatin layer
# Region based ranking
from scenicplus.cistromes import *
import time
start_time = time.time()
region_ranking = make_rankings(scplus_obj, target='region')
# Score region regulons
score_eRegulons(scplus_obj,
ranking = region_ranking,
eRegulon_signatures_key = 'eRegulon_signatures',
key_added = 'eRegulon_AUC',
enrichment_type= 'region',
auc_threshold = 0.05,
normalize = False,
n_cpu = 10)
elapsed = time.time() - start_time
print(elapsed / 60)
</code>
<code>
## Score transcriptome layer
# Gene based ranking
from scenicplus.cistromes import *
import time
start_time = time.time()
gene_ranking = make_rankings(scplus_obj, target='gene')
# Score gene regulons
score_eRegulons(scplus_obj,
gene_ranking,
eRegulon_signatures_key = 'eRegulon_signatures',
key_added = 'eRegulon_AUC',
enrichment_type = 'gene',
auc_threshold = 0.05,
normalize= False,
n_cpu = 10)
elapsed = time.time() - start_time
print(elapsed / 60)
</code>
<code>
# Generate pseudobulks
import time
start_time = time.time()
generate_pseudobulks(scplus_obj,
variable = 'mese_mullerian_lowres',
auc_key = 'eRegulon_AUC',
signature_key = 'Gene_based',
nr_cells = 5,
nr_pseudobulks = 100,
seed=555)
generate_pseudobulks(scplus_obj,
variable = 'mese_mullerian_lowres',
auc_key = 'eRegulon_AUC',
signature_key = 'Region_based',
nr_cells = 5,
nr_pseudobulks = 100,
seed=555)
elapsed = time.time() - start_time
print(elapsed / 60)
</code>
<code>
# Correlation between TF and eRegulons
import time
start_time = time.time()
TF_cistrome_correlation(scplus_obj,
variable = 'mese_mullerian_lowres',
auc_key = 'eRegulon_AUC',
signature_key = 'Gene_based',
out_key = 'mese_mullerian_lowres_eGRN_gene_based')
TF_cistrome_correlation(scplus_obj,
variable = 'mese_mullerian_lowres',
auc_key = 'eRegulon_AUC',
signature_key = 'Region_based',
out_key = 'mese_mullerian_lowres_eGRN_region_based')
elapsed = time.time() - start_time
print(elapsed / 60)
</code>
<code>
scplus_obj
</code>
<code>
color_dict = {'FallopianMese': 'orange',
'UterusMese': 'orangered',
'CervixMese': 'palevioletred',
'UpperVaginaMese': 'lightpink'}
</code>
<code>
# Region based
%matplotlib inline
import seaborn as sns
sns.set_style("white")
categories = sorted(set(scplus_obj.metadata_cell['mese_mullerian_lowres']))
print(categories)
print(color_dict)
prune_plot(scplus_obj,
'HOXA10_+_+',
pseudobulk_variable = 'mese_mullerian_lowres',
show_dot_plot = True,
show_line_plot = False,
color_dict = color_dict,
use_pseudobulk = True,
auc_key = 'eRegulon_AUC',
signature_key = 'Region_based',
seed=555)
</code>
<code>
# Gene based
%matplotlib inline
sns.set_style("white")
prune_plot(scplus_obj,
'HOXA10_+_+',
pseudobulk_variable = 'mese_mullerian_lowres',
show_dot_plot = True,
show_line_plot = False,
color_dict = color_dict,
use_pseudobulk = True,
auc_key = 'eRegulon_AUC',
signature_key = 'Gene_based',
seed=555)
</code>
<code>
# Region based
%matplotlib inline
import seaborn as sns
sns.set_style("white")
categories = sorted(set(scplus_obj.metadata_cell['mese_mullerian_lowres']))
print(categories)
print(color_dict)
prune_plot(scplus_obj,
'HOXC8_+_+',
pseudobulk_variable = 'mese_mullerian_lowres',
show_dot_plot = True,
show_line_plot = False,
color_dict = color_dict,
use_pseudobulk = True,
auc_key = 'eRegulon_AUC',
signature_key = 'Region_based',
seed=555)
</code>
<code>
# Gene based
%matplotlib inline
import seaborn as sns
sns.set_style("white")
categories = sorted(set(scplus_obj.metadata_cell['mese_mullerian_lowres']))
print(categories)
print(color_dict)
prune_plot(scplus_obj,
'HOXC8_+_+',
pseudobulk_variable = 'mese_mullerian_lowres',
show_dot_plot = True,
show_line_plot = False,
color_dict = color_dict,
use_pseudobulk = True,
auc_key = 'eRegulon_AUC',
signature_key = 'Gene_based',
seed=555)
</code>
<code>
# Region based
%matplotlib inline
import seaborn as sns
sns.set_style("white")
categories = sorted(set(scplus_obj.metadata_cell['mese_mullerian_lowres']))
print(categories)
print(color_dict)
prune_plot(scplus_obj,
'HOXC6_+_+',
pseudobulk_variable = 'mese_mullerian_lowres',
show_dot_plot = True,
show_line_plot = False,
color_dict = color_dict,
use_pseudobulk = True,
auc_key = 'eRegulon_AUC',
signature_key = 'Region_based',
seed=555)
</code>
<code>
# Gene based
%matplotlib inline
import seaborn as sns
sns.set_style("white")
categories = sorted(set(scplus_obj.metadata_cell['mese_mullerian_lowres']))
print(categories)
print(color_dict)
prune_plot(scplus_obj,
'HOXC6_+_+',
pseudobulk_variable = 'mese_mullerian_lowres',
show_dot_plot = True,
show_line_plot = False,
color_dict = color_dict,
use_pseudobulk = True,
auc_key = 'eRegulon_AUC',
signature_key = 'Gene_based',
seed=555)
</code>
<code>
# Region based
%matplotlib inline
import seaborn as sns
sns.set_style("white")
categories = sorted(set(scplus_obj.metadata_cell['mese_mullerian_lowres']))
print(categories)
print(color_dict)
prune_plot(scplus_obj,
'HOXA13_+_+',
pseudobulk_variable = 'mese_mullerian_lowres',
show_dot_plot = True,
show_line_plot = False,
color_dict = color_dict,
use_pseudobulk = True,
auc_key = 'eRegulon_AUC',
signature_key = 'Region_based',
seed=555)
</code>
<code>
# Gene based
%matplotlib inline
import seaborn as sns
sns.set_style("white")
categories = sorted(set(scplus_obj.metadata_cell['mese_mullerian_lowres']))
print(categories)
print(color_dict)
prune_plot(scplus_obj,
'HOXA13_+_+',
pseudobulk_variable = 'mese_mullerian_lowres',
show_dot_plot = True,
show_line_plot = False,
color_dict = color_dict,
use_pseudobulk = True,
auc_key = 'eRegulon_AUC',
signature_key = 'Gene_based',
seed=555)
</code>
### Identification of high quality eRegulons
<code>
# Correlation between region based regulons and gene based regulons
import pandas
df1 = scplus_obj.uns['eRegulon_AUC']['Gene_based'].copy()
df2 = scplus_obj.uns['eRegulon_AUC']['Region_based'].copy()
df1.columns = [x.split('_(')[0] for x in df1.columns]
df2.columns = [x.split('_(')[0] for x in df2.columns]
correlations = df1.corrwith(df2, axis = 0)
correlations = correlations[abs(correlations) > 0.6]
# Keep only R2G +
keep = [x for x in correlations.index if '+_+' in x] + [x for x in correlations.index if '-_+' in x]
# Keep extended if not direct
extended = [x for x in keep if 'extended' in x]
direct = [x for x in keep if not 'extended' in x]
keep_extended = [x for x in extended if not x.replace('extended_', '') in direct]
keep = direct + keep_extended
# Keep regulons with more than 10 genes
keep_gene = [x for x in scplus_obj.uns['eRegulon_AUC']['Gene_based'].columns if x.split('_(')[0] in keep]
keep_gene = [x for x in keep_gene if (int(x.split('_(')[1].replace('g)', '')) > 10)]
keep_all = [x.split('_(')[0] for x in keep_gene]
keep_region = [x for x in scplus_obj.uns['eRegulon_AUC']['Region_based'].columns if x.split('_(')[0] in keep]
scplus_obj.uns['selected_eRegulons'] = {}
scplus_obj.uns['selected_eRegulons']['Gene_based'] = keep_gene
scplus_obj.uns['selected_eRegulons']['Region_based'] = keep_region
</code>
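To make the selection reusable outside this session, the kept eRegulon names can also be written to disk as plain text (a small sketch with hypothetical file names, not part of the original analysis):
<code>
# Persist the selected eRegulon names for reuse in downstream scripts
with open(outDir + 'selected_eRegulons_gene_based.txt', 'w') as f:
    f.write('\n'.join(scplus_obj.uns['selected_eRegulons']['Gene_based']))
with open(outDir + 'selected_eRegulons_region_based.txt', 'w') as f:
    f.write('\n'.join(scplus_obj.uns['selected_eRegulons']['Region_based']))
</code>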
<code>
print(len(keep_gene))
print(len(keep_region))
</code>
<code>
%matplotlib inline
</code>
<code>
from scenicplus.plotting.correlation_plot import *
correlation_heatmap(scplus_obj,
auc_key = 'eRegulon_AUC',
signature_keys = ['Gene_based'],
selected_regulons = scplus_obj.uns['selected_eRegulons']['Gene_based'],
fcluster_threshold = 0.1,
fontsize = 8,
save = outDir + 'correlation_heatmap.pdf')
</code>
<code>
#from scenicplus.plotting.correlation_plot import *
jaccard_heatmap(scplus_obj,
gene_or_region_based = 'Gene_based',
signature_key = 'eRegulon_signatures',
selected_regulons = scplus_obj.uns['selected_eRegulons']['Gene_based'],
fcluster_threshold = 0.1,
fontsize = 8,
method='intersect',
save = outDir + 'jaccard_heatmap.pdf')
</code>
<code>
binarize_AUC(scplus_obj,
auc_key='eRegulon_AUC',
out_key='eRegulon_AUC_thresholds',
signature_keys=['Gene_based', 'Region_based'],
n_cpu=20)
</code>
<code>
import dill
with open(outDir + 'scplus_obj2.pkl', 'wb') as f:
dill.dump(scplus_obj, f)
</code>
<code>
import dill
infile = open(outDir + 'scplus_obj2.pkl', 'rb')
scplus_obj = dill.load(infile)
infile.close()
</code>
<code>
from scenicplus.dimensionality_reduction import *
run_eRegulons_umap(scplus_obj,
scale=True, signature_keys=['Gene_based', 'Region_based'], selected_regulons=scplus_obj.uns['selected_eRegulons']['Gene_based'])
run_eRegulons_tsne(scplus_obj,
scale=True, signature_keys=['Gene_based', 'Region_based'], selected_regulons=scplus_obj.uns['selected_eRegulons']['Gene_based'])
</code>
<code>
run_eRegulons_umap(scplus_obj,
scale=True, signature_keys=['Gene_based'],
reduction_name='eRegulons_UMAP_gb', selected_regulons=scplus_obj.uns['selected_eRegulons']['Gene_based'])
run_eRegulons_tsne(scplus_obj,
scale=True, signature_keys=['Gene_based'],
reduction_name='eRegulons_tSNE_gb', selected_regulons=scplus_obj.uns['selected_eRegulons']['Gene_based'])
run_eRegulons_umap(scplus_obj,
scale=True, signature_keys=['Region_based'],
reduction_name='eRegulons_UMAP_rb', selected_regulons=scplus_obj.uns['selected_eRegulons']['Region_based'])
run_eRegulons_tsne(scplus_obj,
scale=True, signature_keys=['Region_based'],
reduction_name='eRegulons_tSNE_rb', selected_regulons=scplus_obj.uns['selected_eRegulons']['Region_based'])
</code>
<code>
from scenicplus.dimensionality_reduction import *
</code>
<code>
from scenicplus.dimensionality_reduction import *
plot_metadata(scplus_obj,
reduction_name='eRegulons_UMAP_rb',
variables=['mese_mullerian_lowres'],
num_columns=1,
text_size=10,
dot_size=5,
figsize = (5,5),
# color_dictionary = {'mese_mullerian_lowres' : color_dict},
save = outDir + 'umap_regulons.pdf')
</code>
<code>
from scenicplus.dimensionality_reduction import *
plot_metadata(scplus_obj,
reduction_name='eRegulons_tSNE_rb',
variables=['mese_mullerian_lowres'],
num_columns=1,
text_size=10,
dot_size=5,
figsize = (5,5),
# color_dictionary = {'mese_mullerian_lowres' : color_dict},
save = outDir + 'tsne_regulons.pdf')
</code>
<code>
find_clusters(scplus_obj,
signature_keys=['Gene_based', 'Region_based'],
k = 10,
res = [0.6, 1.2, 1.5],
prefix = 'SCENIC+_',
scale = True)
</code>
<code>
plot_metadata(scplus_obj,
reduction_name='eRegulons_tSNE_rb',
variables=['mese_mullerian_lowres', 'SCENIC+_leiden_10_0.6'],
num_columns=2,
text_size=10,
dot_size=5)
</code>
<code>
from scenicplus.RSS import *
regulon_specificity_scores(scplus_obj,
'mese_mullerian_lowres',
signature_keys=['Gene_based'],
selected_regulons=scplus_obj.uns['selected_eRegulons']['Gene_based'],
out_key_suffix='_gene_based',
scale=False)
</code>
<code>
scplus_obj.uns['RSS']
</code>
<code>
plot_rss(scplus_obj, 'mese_mullerian_lowres_gene_based', num_columns=2, top_n=10, figsize = (12, 12), fontsize = 12,
#selected_groups = ['MeseMullerianFallopianTube', 'MeseMullerianUterus'],
save = outDir + 'rss_importances.pdf')
</code>
<code>
mat = scplus_obj.uns['RSS']['mese_mullerian_lowres_gene_based']
# Reorder the indices
new_indices = ['FallopianMese',
'UterusMese',
'CervixMese', 'UpperVaginaMese'] # Specify the desired order of indices
mat_reordered = mat.reindex(new_indices)
</code>
<code>
mat_reordered
</code>
<code>
scplus_obj.uns['selected_eRegulons']
</code>
<code>
# Select only activators
regs = scplus_obj.uns['selected_eRegulons']['Gene_based']
repressors = [r for r in regs if '-' in r]
activators = [r for r in regs if r not in repressors]
</code>
<code>
activators
</code>
<code>
len(activators)
</code>
<code>
# Order activators per cell type by RSS (top 20 per cell type)
mat_reordered_activators = mat_reordered[activators]
print(mat_reordered_activators.shape)
activators_per_celltype = {'FallopianMese' : [],
'UterusMese' : [],
'CervixMese' : [], 'UpperVaginaMese' : []}
# Iterate through each row in the DataFrame
for index, row in mat_reordered_activators.iterrows():
print(index)
    # Sort the row values and take the top 20 columns
top_columns = list(row.nlargest(20).index)
print(top_columns)
activators_per_celltype[index].extend(top_columns)
</code>
<code>
top_activators = list(np.unique(list(activators_per_celltype.values())))
</code>
<code>
len(top_activators)
</code>
<code>
mat_reordered_top_activators = mat_reordered_activators[top_activators]
mat_reordered_top_activators
</code>
<code>
import matplotlib as mpl
mpl.rcParams['pdf.fonttype'] = 42
</code>
<code>
hox_tfs = [i for i in activators if i.startswith('HOX')]
</code>
<code>
hox_tfs
</code>
<code>
hox_tfs = [ 'HOXC4_+_+_(33g)','HOXA5_+_+_(67g)',
'HOXC6_+_+_(61g)', 'HOXA7_+_+_(58g)', 'HOXC8_+_+_(90g)',
'HOXA9_+_+_(570g)', 'HOXD9_+_+_(186g)', 'HOXA10_+_+_(399g)', 'HOXD10_+_+_(259g)','HOXA11_+_+_(274g)',
'HOXA13_+_+_(443g)','HOXD13_+_+_(321g)',]
</code>
<code>
from scenicplus.plotting.dotplot import *
heatmap_dotplot(
scplus_obj = scplus_obj,
size_matrix = mat_reordered,
color_matrix = scplus_obj.to_df('EXP'),
scale_size_matrix = True,
scale_color_matrix = True,
group_variable = 'mese_mullerian_lowres',
subset_eRegulons = hox_tfs,
figsize = (10, 1.8),
orientation = 'horizontal',
split_repressor_activator=True,
index_order = ['FallopianMese',
'UterusMese',
'CervixMese', 'UpperVaginaMese'],
save = outDir + 'mese_importances_heatmap_hox.pdf')
</code>
<code>
from scenicplus.plotting.dotplot import *
heatmap_dotplot(
scplus_obj = scplus_obj,
size_matrix = mat_reordered,
color_matrix = scplus_obj.to_df('EXP'),
scale_size_matrix = True,
scale_color_matrix = True,
group_variable = 'mese_mullerian_lowres',
subset_eRegulons = top_activators,
figsize = (5.5, 24),
orientation = 'vertical',
split_repressor_activator=True,
index_order = ['FallopianMese',
'UterusMese',
'CervixMese', 'UpperVaginaMese'],
save = outDir + 'mese_importances_heatmap.pdf')
</code>
<code>
outDir
</code>
<code>
# Order activators per cell type by RSS (top 20 per cell type, HOX regulons removed below)
mat_reordered_activators = mat_reordered[activators]
print(mat_reordered_activators.shape)
activators_per_celltype = {'FallopianMese' : [],
'UterusMese' : [],
'CervixMese' : [], 'UpperVaginaMese' : []}
# Iterate through each row in the DataFrame
for index, row in mat_reordered_activators.iterrows():
print(index)
    # Sort the row values, take the top 20 columns, then drop HOX regulons
top_columns = list(row.nlargest(20).index)
hox = [i for i in top_columns if i.startswith('HOX')]
top_columns = [i for i in top_columns if i not in hox]
print(top_columns)
activators_per_celltype[index].extend(top_columns)
</code>
<code>
top_activators = list(np.unique(list(activators_per_celltype.values())))
</code>
<code>
top_activators = [item for sublist in top_activators for item in sublist]
</code>
<code>
top_activators = [i for i in top_activators if i not in hox_tfs]
</code>
<code>
top_activators = [i for i in top_activators if not i.startswith('ZNF')]
</code>
<code>
len(top_activators)
</code>
<code>
spatially_variable_tfs = ['PROX1', 'GATA6', 'NFATC2', 'LEF1', 'FOXL2', 'MEIS2',
'EMX2', 'FOXO1', 'ESR1', 'RORB', 'HMGA2', 'MSX1',
'AR', 'TWIST1', 'ESRRG', 'RUNX1', 'PRRX2', 'TWIST2', 'LBX2',
'PBX3', 'AHR', 'EVX1', 'EVX2', 'IRF6', 'NR0B1', 'ISL1', 'HMBOX1', 'ASCL2',
'TBX18']
</code>
<code>
top_activators_tfs = [i.split('_')[0] for i in top_activators]
</code>
<code>
len(top_activators_tfs)
</code>
<code>
top_activators_tfs_variable = [i for i in top_activators_tfs if i in spatially_variable_tfs]
</code>
<code>
top_activators_tfs_variable
</code>
<code>
top_activators = [i for i in top_activators if not i.startswith('ZNF')]
</code>
<code>
len(top_activators)
</code>
<code>
# spatially_variable = ['GATA6_+_+_(150g)', 'PROX1_+_+_(49g)','NFATC2_+_+_(308g)',
# 'FOXL2_+_+_(193g)', 'EMX2_+_+_(268g)', 'FOXO1_+_+_(299g)',
# 'HMGA2_+_+_(281g)',
# 'PBX3_+_+_(86g)', 'PRRX2_+_+_(126g)', 'EVX1_+_+_(116g)', 'EVX2_+_+_(74g)', 'LBX2_+_+_(39g)',
# 'AR_+_+_(68g)', 'AHR_+_+_(141g)', 'ISL1_+_+_(55g)', 'TCF21_+_+_(144g)',
# 'ASCL2_+_+_(68g)', 'TWIST2_extended_+_+_(149g)', 'IRF6_extended_+_+_(55g)'
# ]
</code>
<code>
# mat_reordered_spatially_variable = mat_reordered[spatially_variable]
# mat_reordered_spatially_variable.shape
</code>
<code>
mat_reordered_top_activators = mat_reordered_activators[top_activators]
mat_reordered_top_activators
</code>
<code>
from scenicplus.plotting.dotplot import *
heatmap_dotplot(
scplus_obj = scplus_obj,
size_matrix = mat_reordered,
color_matrix = scplus_obj.to_df('EXP'),
scale_size_matrix = True,
scale_color_matrix = True,
group_variable = 'mese_mullerian_lowres',
subset_eRegulons = top_activators,
figsize = (25, 4),
orientation = 'horizontal',
split_repressor_activator=True,
sort_by = 'color_val',
index_order = ['FallopianMese',
'UterusMese', 'CervixMese', 'UpperVaginaMese'],
save = outDir + 'mese_importances_heatmap_top25.pdf')
</code>
<code>
outDir
</code>
<code>
import dill
with open(outDir + 'scplus_obj2.pkl', 'wb') as f:
dill.dump(scplus_obj, f)
</code>
<code>
import dill
infile = open(outDir + 'scplus_obj2.pkl', 'rb')
scplus_obj = dill.load(infile)
infile.close()
</code>
## Visualisations in scanpy-compatible format for figures
<code>
cistopic_obj = dill.load(open(os.path.join(outDir, 'cisTopicObject_clean.pkl'), 'rb'))
</code>
<code>
import scanpy
</code>
<code>
annots = cistopic_obj.cell_data.copy()
</code>
<code>
annots['tsne1'] = annots.index.map(cistopic_obj.projections['cell']['harmony_tSNE']['tSNE_1'].to_dict())
annots['tsne2'] = annots.index.map(cistopic_obj.projections['cell']['harmony_tSNE']['tSNE_2'].to_dict())
</code>
<code>
annots.shape
</code>
<code>
annots.to_csv(outDir + 'mull_mese_embedding.csv')
</code>
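If the figures are to be made directly in scanpy rather than from the exported CSV, a minimal sketch along the following lines could rebuild an AnnData from that table. This is not part of the original notebook; it assumes the column names written above (`tsne1`, `tsne2`) and the annotation column used throughout this notebook (`mese_mullerian_lowres`).
<code>
import numpy as np
import pandas as pd
import scanpy as sc

# Hypothetical sketch: read the embedding table written above back in
annots = pd.read_csv(outDir + 'mull_mese_embedding.csv', index_col=0)

# Build a minimal AnnData; only obs and obsm are needed for plotting
adata = sc.AnnData(
    X=np.zeros((annots.shape[0], 1)),   # placeholder expression matrix
    obs=annots,
)
adata.obsm['X_tsne'] = annots[['tsne1', 'tsne2']].to_numpy()

# Plot the harmony tSNE coloured by the annotation used in this notebook
sc.pl.tsne(adata, color='mese_mullerian_lowres')
</code>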
### Network analysis
<code>
df = scplus_obj.uns['eRegulon_metadata']
</code>
## Fallopian tube mesenchyme
<code>
spatially_variable_interactors = ['LGR5', 'NTRK2', 'CD36', 'CD55', 'ALDH1A2', 'DLK1', 'NRG1', 'WNT4',
'BMP4', 'BMP7']
</code>
<code>
tfs = ['HOXA5', 'HOXC5', 'HOXA7', 'HOXC6']
</code>
<code>
import numpy as np
</code>
<code>
targets = np.unique(df[df['TF'].isin(tfs)]['Gene'].tolist())
</code>
<code>
len(targets)
</code>
<code>
final = [i for i in spatially_variable_interactors if i in targets]
print(final)
</code>
<code>
len(final)
</code>
<code>
from scenicplus.networks import *
import networkx as nx
subset_genes = final
nx_tables = create_nx_tables(scplus_obj,
eRegulon_metadata_key = 'eRegulon_metadata',
subset_eRegulons = tfs,
subset_regions = None,
subset_genes = subset_genes,
add_differential_gene_expression = True,
add_differential_region_accessibility = True,
differential_variable = ['mese_mullerian_lowres'])
</code>
<code>
tfs
</code>
<code>
from scenicplus.networks import *
G_kk, pos_kk, edge_tables_kk, node_tables_kk = create_nx_graph(nx_tables,
use_edge_tables = ['TF2R','R2G'],
color_edge_by = {'TF2R': {'variable' : 'TF', 'category_color' : {
'HOXA7' : 'orchid',
'HOXA5' : 'orchid', 'HOXC6' : 'orchid',
'HOXC5' : 'orchid',
}},
'R2G': {'variable' : 'R2G_rho', 'continuous_color' : 'viridis', 'v_min': -1, 'v_max': 1}},
transparency_edge_by = {'R2G': {'variable' : 'R2G_importance', 'min_alpha': 0.6, 'v_min': 0}},
width_edge_by = {'R2G': {'variable' : 'R2G_importance', 'max_size' : 1.5, 'min_size' : 1}},
color_node_by = {'TF': {'variable': 'TF', 'category_color' : {
'HOXA7' : 'orchid',
'HOXA5' : 'orchid', 'HOXC6' : 'orchid',
'HOXC5' : 'orchid',
}},
'Gene': {'variable': 'mese_mullerian_lowres_Log2FC_FallopianMese', 'continuous_color' : 'Blues'},
'Region': {'variable': 'mese_mullerian_lowres_Log2FC_FallopianMese', 'continuous_color' : 'Blues'}},
transparency_node_by = {'Region': {'variable' : 'mese_mullerian_lowres_Log2FC_FallopianMese', 'min_alpha': 0.2},
'Gene': {'variable' : 'mese_mullerian_lowres_Log2FC_FallopianMese', 'min_alpha': 0.2}},
size_node_by = {'TF': {'variable': 'fixed_size', 'fixed_size': 60},
'Gene': {'variable': 'fixed_size', 'fixed_size': 50},
'Region': {'variable': 'fixed_size', 'fixed_size': 30}},
shape_node_by = {'TF': {'variable': 'fixed_shape', 'fixed_shape': 'ellipse'},
'Gene': {'variable': 'fixed_shape', 'fixed_shape': 'ellipse'},
'Region': {'variable': 'fixed_shape', 'fixed_shape': 'diamond'}},
label_size_by = {'TF': {'variable': 'fixed_label_size', 'fixed_label_size': 15.0},
'Gene': {'variable': 'fixed_label_size', 'fixed_label_size': 10.0},
'Region': {'variable': 'fixed_label_size', 'fixed_label_size': 0.0}},
label_color_by = {'TF': {'variable': 'fixed_label_color', 'fixed_label_color': 'black'},
'Gene': {'variable': 'fixed_label_color', 'fixed_label_color': 'black'},
'Region': {'variable': 'fixed_label_color', 'fixed_label_color': 'darkgray'}},
layout = 'kamada_kawai_layout',
scale_position_by = 500)
</code>
<code>
edge_tables_kk
</code>
<code>
import matplotlib as mpl
mpl.rcParams['pdf.fonttype'] = 42
</code>
<code>
nx.draw_networkx_nodes(G_kk, pos_kk, node_color=nx.get_node_attributes(G_kk,'color').values(),
node_size=list(nx.get_node_attributes(G_kk,'size').values()),
node_shape = 'D')
nx.draw_networkx_edges(G_kk, pos_kk, edge_color = nx.get_edge_attributes(G_kk,'color').values(),
width = list(nx.get_edge_attributes(G_kk,'width').values()))
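# Map each node label to its font size and colour from the graph attributes,
# skipping labels with size 0 (regions are drawn without labels)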
fontsize_d = {y:x['size'] for x,y in zip(list(nx.get_node_attributes(G_kk,'font').values()),list(nx.get_node_attributes(G_kk,'label').values())) if x['size'] != 0.0}
fontcolor_d = {y:x['color'] for x,y in zip(list(nx.get_node_attributes(G_kk,'font').values()),list(nx.get_node_attributes(G_kk,'label').values())) if x['size'] != 0.0}
for node, (x, y) in pos_kk.items():
if node in fontsize_d.keys():
plt.text(x, y, node, fontsize=fontsize_d[node], color=fontcolor_d[node], ha='center', va='center')
ax = plt.gca()
ax.margins(0.11)
plt.tight_layout()
plt.axis("off")
plt.savefig('/home/jovyan/network_scenicplus_mull_mese_fallopiantube.pdf', bbox_inches='tight', dpi=1000)
plt.show()
</code>
## Uterus mesenchyme
<code>
spatially_variable_interactors = ['WNT4', 'WNT5A', 'CDH3', 'FLTR2', 'GRIA4',
'TGM2', 'ALDH1A1', 'LRRTM1', 'NRP1', 'RORB']
</code>
<code>
tfs = ['HOXA10', 'HOXA9', 'HOXA11']
</code>
<code>
targets = np.unique(df[df['TF'].isin(tfs)]['Gene'].tolist())
</code>
<code>
final = [i for i in spatially_variable_interactors if i in targets]
print(final)
</code>
<code>
from scenicplus.networks import *
import networkx as nx
subset_genes = final
nx_tables = create_nx_tables(scplus_obj,
eRegulon_metadata_key = 'eRegulon_metadata',
subset_eRegulons = tfs,
subset_regions = None,
subset_genes = subset_genes,
add_differential_gene_expression = True,
add_differential_region_accessibility = True,
differential_variable = ['mese_mullerian_lowres'])
</code>
<code>
from scenicplus.networks import *
G_kk, pos_kk, edge_tables_kk, node_tables_kk = create_nx_graph(nx_tables,
use_edge_tables = ['TF2R','R2G'],
color_edge_by = {'TF2R': {'variable' : 'TF', 'category_color' : {
'HOXA10' : 'orange',
'HOXA11' : 'orange', 'HOXA9': 'orange',
}},
'R2G': {'variable' : 'R2G_rho', 'continuous_color' : 'viridis', 'v_min': -1, 'v_max': 1}},
transparency_edge_by = {'R2G': {'variable' : 'R2G_importance', 'min_alpha': 0.4, 'v_min': 0}},
width_edge_by = {'R2G': {'variable' : 'R2G_importance', 'max_size' : 1.5, 'min_size' : 1}},
color_node_by = {'TF': {'variable': 'TF', 'category_color' : {
'HOXA10' : 'orange',
'HOXA11' : 'orange', 'HOXA9': 'orange',
}},
'Gene': {'variable': 'mese_mullerian_lowres_Log2FC_UterusMese', 'continuous_color' : 'Blues'},
'Region': {'variable': 'mese_mullerian_lowres_Log2FC_UterusMese', 'continuous_color' : 'Blues'}},
transparency_node_by = {'Region': {'variable' : 'mese_mullerian_lowres_Log2FC_UterusMese', 'min_alpha': 0.2},
'Gene': {'variable' : 'mese_mullerian_lowres_Log2FC_UterusMese', 'min_alpha': 0.2}},
size_node_by = {'TF': {'variable': 'fixed_size', 'fixed_size': 60},
'Gene': {'variable': 'fixed_size', 'fixed_size': 50},
'Region': {'variable': 'fixed_size', 'fixed_size': 30}},
shape_node_by = {'TF': {'variable': 'fixed_shape', 'fixed_shape': 'ellipse'},
'Gene': {'variable': 'fixed_shape', 'fixed_shape': 'ellipse'},
'Region': {'variable': 'fixed_shape', 'fixed_shape': 'diamond'}},
label_size_by = {'TF': {'variable': 'fixed_label_size', 'fixed_label_size': 20.0},
'Gene': {'variable': 'fixed_label_size', 'fixed_label_size': 15.0},
'Region': {'variable': 'fixed_label_size', 'fixed_label_size': 0.0}},
label_color_by = {'TF': {'variable': 'fixed_label_color', 'fixed_label_color': 'black'},
'Gene': {'variable': 'fixed_label_color', 'fixed_label_color': 'black'},
'Region': {'variable': 'fixed_label_color', 'fixed_label_color': 'darkgray'}},
layout = 'kamada_kawai_layout',
scale_position_by = 500)
</code>
<code>
edge_tables_kk
</code>
<code>
nx.draw_networkx_nodes(G_kk, pos_kk, node_color=nx.get_node_attributes(G_kk,'color').values(),
node_size=list(nx.get_node_attributes(G_kk,'size').values()),
node_shape = 'D')
nx.draw_networkx_edges(G_kk, pos_kk, edge_color = nx.get_edge_attributes(G_kk,'color').values(),
width = list(nx.get_edge_attributes(G_kk,'width').values()))
fontsize_d = {y:x['size'] for x,y in zip(list(nx.get_node_attributes(G_kk,'font').values()),list(nx.get_node_attributes(G_kk,'label').values())) if x['size'] != 0.0}
fontcolor_d = {y:x['color'] for x,y in zip(list(nx.get_node_attributes(G_kk,'font').values()),list(nx.get_node_attributes(G_kk,'label').values())) if x['size'] != 0.0}
for node, (x, y) in pos_kk.items():
if node in fontsize_d.keys():
plt.text(x, y, node, fontsize=fontsize_d[node], color=fontcolor_d[node], ha='center', va='center')
ax = plt.gca()
ax.margins(0.11)
plt.tight_layout()
plt.axis("off")
plt.savefig('/home/jovyan/network_scenicplus_mull_mese_uterus.pdf', bbox_inches='tight', dpi=1000)
plt.show()
</code>
## Upper vagina mesenchyme
<code>
spatially_variable_interactors = ['GDF7', 'GDF10', 'COL26A1', 'TNC', 'WIF1', 'SFRP5', 'IGF1', 'BMP4', 'BMP7']
</code>
<code>
tfs = ['HOXA13', 'HOXD13']
</code>
<code>
targets = np.unique(df[df['TF'].isin(tfs)]['Gene'].tolist())
</code>
<code>
final = [i for i in spatially_variable_interactors if i in targets]
print(final)
</code>
<code>
from scenicplus.networks import *
import networkx as nx
subset_genes = final
nx_tables = create_nx_tables(scplus_obj,
eRegulon_metadata_key = 'eRegulon_metadata',
subset_eRegulons = tfs,
subset_regions = None,
subset_genes = subset_genes,
add_differential_gene_expression = True,
add_differential_region_accessibility = True,
differential_variable = ['mese_mullerian_lowres'])
</code>
<code>
from scenicplus.networks import *
G_kk, pos_kk, edge_tables_kk, node_tables_kk = create_nx_graph(nx_tables,
use_edge_tables = ['TF2R','R2G'],
color_edge_by = {'TF2R': {'variable' : 'TF', 'category_color' : {
'HOXA13' : 'yellowgreen', 'HOXD13' : 'yellowgreen',
}},
'R2G': {'variable' : 'R2G_rho', 'continuous_color' : 'viridis', 'v_min': -1, 'v_max': 1}},
transparency_edge_by = {'R2G': {'variable' : 'R2G_importance', 'min_alpha': 0.4, 'v_min': 0}},
width_edge_by = {'R2G': {'variable' : 'R2G_importance', 'max_size' : 1.5, 'min_size' : 1}},
color_node_by = {'TF': {'variable': 'TF', 'category_color' : {
'HOXA13' : 'yellowgreen', 'HOXD13' : 'yellowgreen',
}},
'Gene': {'variable': 'mese_mullerian_lowres_Log2FC_UpperVaginaMese', 'continuous_color' : 'Blues'},
'Region': {'variable': 'mese_mullerian_lowres_Log2FC_UpperVaginaMese', 'continuous_color' : 'Blues'}},
transparency_node_by = {'Region': {'variable' : 'mese_mullerian_lowres_Log2FC_UpperVaginaMese', 'min_alpha': 0.2},
'Gene': {'variable' : 'mese_mullerian_lowres_Log2FC_UpperVaginaMese', 'min_alpha': 0.2}},
size_node_by = {'TF': {'variable': 'fixed_size', 'fixed_size': 60},
'Gene': {'variable': 'fixed_size', 'fixed_size': 50},
'Region': {'variable': 'fixed_size', 'fixed_size': 30}},
shape_node_by = {'TF': {'variable': 'fixed_shape', 'fixed_shape': 'ellipse'},
'Gene': {'variable': 'fixed_shape', 'fixed_shape': 'ellipse'},
'Region': {'variable': 'fixed_shape', 'fixed_shape': 'diamond'}},
label_size_by = {'TF': {'variable': 'fixed_label_size', 'fixed_label_size': 20.0},
'Gene': {'variable': 'fixed_label_size', 'fixed_label_size': 15.0},
'Region': {'variable': 'fixed_label_size', 'fixed_label_size': 0.0}},
label_color_by = {'TF': {'variable': 'fixed_label_color', 'fixed_label_color': 'black'},
'Gene': {'variable': 'fixed_label_color', 'fixed_label_color': 'black'},
'Region': {'variable': 'fixed_label_color', 'fixed_label_color': 'darkgray'}},
layout = 'kamada_kawai_layout',
scale_position_by = 500)
</code>
<code>
edge_tables_kk
</code>
<code>
nx.draw_networkx_nodes(G_kk, pos_kk, node_color=nx.get_node_attributes(G_kk,'color').values(),
node_size=list(nx.get_node_attributes(G_kk,'size').values()),
node_shape = 'D')
nx.draw_networkx_edges(G_kk, pos_kk, edge_color = nx.get_edge_attributes(G_kk,'color').values(),
width = list(nx.get_edge_attributes(G_kk,'width').values()))
fontsize_d = {y:x['size'] for x,y in zip(list(nx.get_node_attributes(G_kk,'font').values()),list(nx.get_node_attributes(G_kk,'label').values())) if x['size'] != 0.0}
fontcolor_d = {y:x['color'] for x,y in zip(list(nx.get_node_attributes(G_kk,'font').values()),list(nx.get_node_attributes(G_kk,'label').values())) if x['size'] != 0.0}
for node, (x, y) in pos_kk.items():
if node in fontsize_d.keys():
plt.text(x, y, node, fontsize=fontsize_d[node], color=fontcolor_d[node], ha='center', va='center')
ax = plt.gca()
ax.margins(0.11)
plt.tight_layout()
plt.axis("off")
plt.savefig('/home/jovyan/network_scenicplus_mull_mese_uppervagina.pdf', bbox_inches='tight', dpi=1000)
plt.show()
</code>
|
{
"filename": "MullerianMesenchymeDifferentiation_SCENICPLUS.ipynb",
"repository": "ventolab/Human-ReproductiveTract-Development-Atlas",
"query": "transformed_from_existing",
"size": 219070,
"sha": ""
}
|
# trrust-single-branch.ipynb
Repository: joepatmckenna/scRutiNy
# Single branch from human TRRUST network
This is an example of using [scRutiNy](http://lbm.niddk.nih.gov/mckennajp/scRutiNy) to generate single-cell RNA-seq data from a biologically realistic genetic regulatory network (TRRUST: http://www.grnpedia.org/trrust/), then inferring pseudotime and the network.
<code>
# load packages
import RNAscrutiny as scrutiny
import numpy as np
import matplotlib.pyplot as plt
import time
from IPython.display import HTML
# set random seed for reproducibility
np.random.seed(seed=0)
</code>
<code>
# get GRN based on TRRUST human network
# Note: requires internet connection
w, genes = scrutiny.get_grn('TRRUST', organism='human')
</code>
<code>
# set development tree parameters
# three cell types
cell_types = ['stem', 'type_a', 'type_b']
# parent type of 'stem' cell type is None (root node)
# parent type of 'type_a' is 'stem'
# parent type of 'type_b' is 'type_a'
parent_types = [None, 'stem', 'type_a']
# create development tree
# a single branch with 3 cell types in this case
dev_tree = scrutiny.development_tree(w, cell_types, parent_types)
</code>
<code>
# simulate scRNAseq data
start = time.time()
# number of cells of each cell type
n_cells = [100, 100, 100]
data, history, cells, t = scrutiny.simulate_scRNAseq_data(dev_tree, n_cells)
print('simulation time:', time.time() - start, 'sec.')
</code>
<code>
# animate the development of the data
anim = scrutiny.animate_scRNAseq_data(history, t=t, color_by='time')
HTML(anim.to_html5_video())
</code>
<code>
# plot final data, colored by cell type
scrutiny.plot_scRNAseq_data(data, cells=cells, color_by='cell_type')
</code>
<code>
# infer pseudotime by fitting a spanning tree to the data
t_fit, tr = scrutiny.pseudotime(data, cells, k=100)
# plot the spanning tree
scrutiny.plot_pseudotime_tree(tr, data, t=t_fit)
</code>
<code>
# plot pseudotime vs time
fig = plt.figure()
ax = plt.gca()
cell_types = np.unique(cells)
c = np.array(
[np.where(cell_types == cell)[0][0] for cell in cells], dtype=float)
c /= c.max()
ax.scatter(t, t_fit, c=c, cmap=plt.get_cmap('viridis'))
ax.set_xlabel('time (normalized)')
ax.set_ylabel('pseudotime (normalized)')
fig
</code>
<code>
# infer the gene regulatory network and print network structure score
# network structure score is the fraction of zero/nonzero agreement
start = time.time()
w_fit = scrutiny.fit_grn(data, t)
score = scrutiny.grn_structure_score(w, w_fit)
print('inference time:', (time.time() - start) / 60.0, 'min., network structure score:', score)
</code>
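For intuition, the zero/nonzero agreement described in the comment above can be approximated with plain numpy as follows. This is only a rough sketch under an assumed tolerance, not scRutiNy's actual implementation of `grn_structure_score`.
<code>
import numpy as np

def structure_agreement(w_true, w_fit, tol=1e-8):
    """Fraction of matrix entries where both networks agree on whether an
    interaction is present (nonzero) or absent (zero)."""
    present_true = np.abs(w_true) > tol
    present_fit = np.abs(w_fit) > tol
    return np.mean(present_true == present_fit)
</code>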
|
{
"filename": "trrust-single-branch.ipynb",
"repository": "joepatmckenna/scRutiNy",
"query": "transformed_from_existing",
"size": 184798,
"sha": ""
}
|
# eval.ipynb
Repository: agatha-duzan/feature-intervention-for-unlearning
<code>
!pip install "lm-eval"
!pip install "lm-eval[api]"
</code>
<code>
import os
key_path = 'goodfire_api_key.txt'
with open(key_path, 'r') as file:
GOODFIRE_API_KEY = file.read().strip()
os.environ['OPENAI_API_KEY'] = GOODFIRE_API_KEY
api_url="https://api.goodfire.ai/api/inference/v1/chat/completions"
</code>
<code>
import subprocess
subprocess.run([
'lm_eval',
'--model', 'openai-chat-completions',
'--model_args', f'model=meta-llama/Meta-Llama-3-8B-Instruct,tokenized_requests=False,base_url={api_url},num_concurrent=25',
'--tasks', 'mmlu_flan_n_shot_generative_college_computer_science,mmlu_flan_n_shot_generative_computer_security',
'--log_samples',
'--apply_chat_template', 'True',
'--num_fewshot', '0',
'--output_path', 'out_example'
])
</code>
<code>
import subprocess
subprocess.run([
'lm_eval',
'--model', 'openai-chat-completions',
'--model_args', f'model=meta-llama/Meta-Llama-3-8B-Instruct,tokenized_requests=False,base_url={api_url},num_concurrent=10',
'--tasks', 'mmlu_flan_n_shot_generative_college_biology,mmlu_flan_n_shot_generative_virology',
'--log_samples',
'--apply_chat_template', 'True',
'--num_fewshot', '0',
'--output_path', 'out_example'
])
</code>
|
{
"filename": "eval.ipynb",
"repository": "agatha-duzan/feature-intervention-for-unlearning",
"query": "transformed_from_existing",
"size": 80928,
"sha": ""
}
|
# corona.ipynb
Repository: 0xpranjal/COVID-Genome-Computational-Analysis
# Corona Genome Analysis
#### Let's start by retrieving the complete genome of the coronavirus. The record comes from the Wuhan isolate. Source: https://www.ncbi.nlm.nih.gov/nuccore/NC_045512
>Orthocoronavirinae, in the family Coronaviridae, order Nidovirales, and realm Riboviria. They are enveloped viruses with a positive-sense single-stranded RNA genome and a nucleocapsid of helical symmetry. This is wrapped in an icosahedral protein shell. The genome size of coronaviruses ranges from approximately 26 to 32 kilobases, one of the largest among RNA viruses. They have characteristic club-shaped spikes that project from their surface, which in electron micrographs create an image reminiscent of the solar corona, from which their name derives.
<img src="./images_copy/wiki.png" width="480">
> **Basic Information:** Coronavirus is a single-stranded RNA virus (DNA is double-stranded). RNA polymers are made up of nucleotides, each of which has three parts: 1) a five-carbon ribose sugar, 2) a phosphate group and 3) one of four nitrogenous bases: adenine (a), guanine (g), cytosine (c), or uracil (u) / thymine (t).
> Thymine is found in DNA and uracil in RNA, but for the following analysis you can treat (u) and (t) as analogous.
<img src="./images_copy/parts-of-nucleotide.jpg" width="480">
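As a tiny illustration of that analogy (a hypothetical snippet, not part of the original analysis), a DNA-alphabet string can be rewritten in the RNA alphabet by swapping t for u:
<code>
dna_fragment = "attaaaggtt"                    # first bases of the genome below
rna_fragment = dna_fragment.replace("t", "u")  # u replaces t in RNA
print(rna_fragment)                            # auuaaagguu
</code>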
<code>
corona = """
1 attaaaggtt tataccttcc caggtaacaa accaaccaac tttcgatctc ttgtagatct
61 gttctctaaa cgaactttaa aatctgtgtg gctgtcactc ggctgcatgc ttagtgcact
121 cacgcagtat aattaataac taattactgt cgttgacagg acacgagtaa ctcgtctatc
181 ttctgcaggc tgcttacggt ttcgtccgtg ttgcagccga tcatcagcac atctaggttt
241 cgtccgggtg tgaccgaaag gtaagatgga gagccttgtc cctggtttca acgagaaaac
301 acacgtccaa ctcagtttgc ctgttttaca ggttcgcgac gtgctcgtac gtggctttgg
361 agactccgtg gaggaggtct tatcagaggc acgtcaacat cttaaagatg gcacttgtgg
421 cttagtagaa gttgaaaaag gcgttttgcc tcaacttgaa cagccctatg tgttcatcaa
481 acgttcggat gctcgaactg cacctcatgg tcatgttatg gttgagctgg tagcagaact
541 cgaaggcatt cagtacggtc gtagtggtga gacacttggt gtccttgtcc ctcatgtggg
601 cgaaatacca gtggcttacc gcaaggttct tcttcgtaag aacggtaata aaggagctgg
661 tggccatagt tacggcgccg atctaaagtc atttgactta ggcgacgagc ttggcactga
721 tccttatgaa gattttcaag aaaactggaa cactaaacat agcagtggtg ttacccgtga
781 actcatgcgt gagcttaacg gaggggcata cactcgctat gtcgataaca acttctgtgg
841 ccctgatggc taccctcttg agtgcattaa agaccttcta gcacgtgctg gtaaagcttc
901 atgcactttg tccgaacaac tggactttat tgacactaag aggggtgtat actgctgccg
961 tgaacatgag catgaaattg cttggtacac ggaacgttct gaaaagagct atgaattgca
1021 gacacctttt gaaattaaat tggcaaagaa atttgacacc ttcaatgggg aatgtccaaa
1081 ttttgtattt cccttaaatt ccataatcaa gactattcaa ccaagggttg aaaagaaaaa
1141 gcttgatggc tttatgggta gaattcgatc tgtctatcca gttgcgtcac caaatgaatg
1201 caaccaaatg tgcctttcaa ctctcatgaa gtgtgatcat tgtggtgaaa cttcatggca
1261 gacgggcgat tttgttaaag ccacttgcga attttgtggc actgagaatt tgactaaaga
1321 aggtgccact acttgtggtt acttacccca aaatgctgtt gttaaaattt attgtccagc
1381 atgtcacaat tcagaagtag gacctgagca tagtcttgcc gaataccata atgaatctgg
1441 cttgaaaacc attcttcgta agggtggtcg cactattgcc tttggaggct gtgtgttctc
1501 ttatgttggt tgccataaca agtgtgccta ttgggttcca cgtgctagcg ctaacatagg
1561 ttgtaaccat acaggtgttg ttggagaagg ttccgaaggt cttaatgaca accttcttga
1621 aatactccaa aaagagaaag tcaacatcaa tattgttggt gactttaaac ttaatgaaga
1681 gatcgccatt attttggcat ctttttctgc ttccacaagt gcttttgtgg aaactgtgaa
1741 aggtttggat tataaagcat tcaaacaaat tgttgaatcc tgtggtaatt ttaaagttac
1801 aaaaggaaaa gctaaaaaag gtgcctggaa tattggtgaa cagaaatcaa tactgagtcc
1861 tctttatgca tttgcatcag aggctgctcg tgttgtacga tcaattttct cccgcactct
1921 tgaaactgct caaaattctg tgcgtgtttt acagaaggcc gctataacaa tactagatgg
1981 aatttcacag tattcactga gactcattga tgctatgatg ttcacatctg atttggctac
2041 taacaatcta gttgtaatgg cctacattac aggtggtgtt gttcagttga cttcgcagtg
2101 gctaactaac atctttggca ctgtttatga aaaactcaaa cccgtccttg attggcttga
2161 agagaagttt aaggaaggtg tagagtttct tagagacggt tgggaaattg ttaaatttat
2221 ctcaacctgt gcttgtgaaa ttgtcggtgg acaaattgtc acctgtgcaa aggaaattaa
2281 ggagagtgtt cagacattct ttaagcttgt aaataaattt ttggctttgt gtgctgactc
2341 tatcattatt ggtggagcta aacttaaagc cttgaattta ggtgaaacat ttgtcacgca
2401 ctcaaaggga ttgtacagaa agtgtgttaa atccagagaa gaaactggcc tactcatgcc
2461 tctaaaagcc ccaaaagaaa ttatcttctt agagggagaa acacttccca cagaagtgtt
2521 aacagaggaa gttgtcttga aaactggtga tttacaacca ttagaacaac ctactagtga
2581 agctgttgaa gctccattgg ttggtacacc agtttgtatt aacgggctta tgttgctcga
2641 aatcaaagac acagaaaagt actgtgccct tgcacctaat atgatggtaa caaacaatac
2701 cttcacactc aaaggcggtg caccaacaaa ggttactttt ggtgatgaca ctgtgataga
2761 agtgcaaggt tacaagagtg tgaatatcac ttttgaactt gatgaaagga ttgataaagt
2821 acttaatgag aagtgctctg cctatacagt tgaactcggt acagaagtaa atgagttcgc
2881 ctgtgttgtg gcagatgctg tcataaaaac tttgcaacca gtatctgaat tacttacacc
2941 actgggcatt gatttagatg agtggagtat ggctacatac tacttatttg atgagtctgg
3001 tgagtttaaa ttggcttcac atatgtattg ttctttctac cctccagatg aggatgaaga
3061 agaaggtgat tgtgaagaag aagagtttga gccatcaact caatatgagt atggtactga
3121 agatgattac caaggtaaac ctttggaatt tggtgccact tctgctgctc ttcaacctga
3181 agaagagcaa gaagaagatt ggttagatga tgatagtcaa caaactgttg gtcaacaaga
3241 cggcagtgag gacaatcaga caactactat tcaaacaatt gttgaggttc aacctcaatt
3301 agagatggaa cttacaccag ttgttcagac tattgaagtg aatagtttta gtggttattt
3361 aaaacttact gacaatgtat acattaaaaa tgcagacatt gtggaagaag ctaaaaaggt
3421 aaaaccaaca gtggttgtta atgcagccaa tgtttacctt aaacatggag gaggtgttgc
3481 aggagcctta aataaggcta ctaacaatgc catgcaagtt gaatctgatg attacatagc
3541 tactaatgga ccacttaaag tgggtggtag ttgtgtttta agcggacaca atcttgctaa
3601 acactgtctt catgttgtcg gcccaaatgt taacaaaggt gaagacattc aacttcttaa
3661 gagtgcttat gaaaatttta atcagcacga agttctactt gcaccattat tatcagctgg
3721 tatttttggt gctgacccta tacattcttt aagagtttgt gtagatactg ttcgcacaaa
3781 tgtctactta gctgtctttg ataaaaatct ctatgacaaa cttgtttcaa gctttttgga
3841 aatgaagagt gaaaagcaag ttgaacaaaa gatcgctgag attcctaaag aggaagttaa
3901 gccatttata actgaaagta aaccttcagt tgaacagaga aaacaagatg ataagaaaat
3961 caaagcttgt gttgaagaag ttacaacaac tctggaagaa actaagttcc tcacagaaaa
4021 cttgttactt tatattgaca ttaatggcaa tcttcatcca gattctgcca ctcttgttag
4081 tgacattgac atcactttct taaagaaaga tgctccatat atagtgggtg atgttgttca
4141 agagggtgtt ttaactgctg tggttatacc tactaaaaag gctggtggca ctactgaaat
4201 gctagcgaaa gctttgagaa aagtgccaac agacaattat ataaccactt acccgggtca
4261 gggtttaaat ggttacactg tagaggaggc aaagacagtg cttaaaaagt gtaaaagtgc
4321 cttttacatt ctaccatcta ttatctctaa tgagaagcaa gaaattcttg gaactgtttc
4381 ttggaatttg cgagaaatgc ttgcacatgc agaagaaaca cgcaaattaa tgcctgtctg
4441 tgtggaaact aaagccatag tttcaactat acagcgtaaa tataagggta ttaaaataca
4501 agagggtgtg gttgattatg gtgctagatt ttacttttac accagtaaaa caactgtagc
4561 gtcacttatc aacacactta acgatctaaa tgaaactctt gttacaatgc cacttggcta
4621 tgtaacacat ggcttaaatt tggaagaagc tgctcggtat atgagatctc tcaaagtgcc
4681 agctacagtt tctgtttctt cacctgatgc tgttacagcg tataatggtt atcttacttc
4741 ttcttctaaa acacctgaag aacattttat tgaaaccatc tcacttgctg gttcctataa
4801 agattggtcc tattctggac aatctacaca actaggtata gaatttctta agagaggtga
4861 taaaagtgta tattacacta gtaatcctac cacattccac ctagatggtg aagttatcac
4921 ctttgacaat cttaagacac ttctttcttt gagagaagtg aggactatta aggtgtttac
4981 aacagtagac aacattaacc tccacacgca agttgtggac atgtcaatga catatggaca
5041 acagtttggt ccaacttatt tggatggagc tgatgttact aaaataaaac ctcataattc
5101 acatgaaggt aaaacatttt atgttttacc taatgatgac actctacgtg ttgaggcttt
5161 tgagtactac cacacaactg atcctagttt tctgggtagg tacatgtcag cattaaatca
5221 cactaaaaag tggaaatacc cacaagttaa tggtttaact tctattaaat gggcagataa
5281 caactgttat cttgccactg cattgttaac actccaacaa atagagttga agtttaatcc
5341 acctgctcta caagatgctt attacagagc aagggctggt gaagctgcta acttttgtgc
5401 acttatctta gcctactgta ataagacagt aggtgagtta ggtgatgtta gagaaacaat
5461 gagttacttg tttcaacatg ccaatttaga ttcttgcaaa agagtcttga acgtggtgtg
5521 taaaacttgt ggacaacagc agacaaccct taagggtgta gaagctgtta tgtacatggg
5581 cacactttct tatgaacaat ttaagaaagg tgttcagata ccttgtacgt gtggtaaaca
5641 agctacaaaa tatctagtac aacaggagtc accttttgtt atgatgtcag caccacctgc
5701 tcagtatgaa cttaagcatg gtacatttac ttgtgctagt gagtacactg gtaattacca
5761 gtgtggtcac tataaacata taacttctaa agaaactttg tattgcatag acggtgcttt
5821 acttacaaag tcctcagaat acaaaggtcc tattacggat gttttctaca aagaaaacag
5881 ttacacaaca accataaaac cagttactta taaattggat ggtgttgttt gtacagaaat
5941 tgaccctaag ttggacaatt attataagaa agacaattct tatttcacag agcaaccaat
6001 tgatcttgta ccaaaccaac catatccaaa cgcaagcttc gataatttta agtttgtatg
6061 tgataatatc aaatttgctg atgatttaaa ccagttaact ggttataaga aacctgcttc
6121 aagagagctt aaagttacat ttttccctga cttaaatggt gatgtggtgg ctattgatta
6181 taaacactac acaccctctt ttaagaaagg agctaaattg ttacataaac ctattgtttg
6241 gcatgttaac aatgcaacta ataaagccac gtataaacca aatacctggt gtatacgttg
6301 tctttggagc acaaaaccag ttgaaacatc aaattcgttt gatgtactga agtcagagga
6361 cgcgcaggga atggataatc ttgcctgcga agatctaaaa ccagtctctg aagaagtagt
6421 ggaaaatcct accatacaga aagacgttct tgagtgtaat gtgaaaacta ccgaagttgt
6481 aggagacatt atacttaaac cagcaaataa tagtttaaaa attacagaag aggttggcca
6541 cacagatcta atggctgctt atgtagacaa ttctagtctt actattaaga aacctaatga
6601 attatctaga gtattaggtt tgaaaaccct tgctactcat ggtttagctg ctgttaatag
6661 tgtcccttgg gatactatag ctaattatgc taagcctttt cttaacaaag ttgttagtac
6721 aactactaac atagttacac ggtgtttaaa ccgtgtttgt actaattata tgccttattt
6781 ctttacttta ttgctacaat tgtgtacttt tactagaagt acaaattcta gaattaaagc
6841 atctatgccg actactatag caaagaatac tgttaagagt gtcggtaaat tttgtctaga
6901 ggcttcattt aattatttga agtcacctaa tttttctaaa ctgataaata ttataatttg
6961 gtttttacta ttaagtgttt gcctaggttc tttaatctac tcaaccgctg ctttaggtgt
7021 tttaatgtct aatttaggca tgccttctta ctgtactggt tacagagaag gctatttgaa
7081 ctctactaat gtcactattg caacctactg tactggttct ataccttgta gtgtttgtct
7141 tagtggttta gattctttag acacctatcc ttctttagaa actatacaaa ttaccatttc
7201 atcttttaaa tgggatttaa ctgcttttgg cttagttgca gagtggtttt tggcatatat
7261 tcttttcact aggtttttct atgtacttgg attggctgca atcatgcaat tgtttttcag
7321 ctattttgca gtacatttta ttagtaattc ttggcttatg tggttaataa ttaatcttgt
7381 acaaatggcc ccgatttcag ctatggttag aatgtacatc ttctttgcat cattttatta
7441 tgtatggaaa agttatgtgc atgttgtaga cggttgtaat tcatcaactt gtatgatgtg
7501 ttacaaacgt aatagagcaa caagagtcga atgtacaact attgttaatg gtgttagaag
7561 gtccttttat gtctatgcta atggaggtaa aggcttttgc aaactacaca attggaattg
7621 tgttaattgt gatacattct gtgctggtag tacatttatt agtgatgaag ttgcgagaga
7681 cttgtcacta cagtttaaaa gaccaataaa tcctactgac cagtcttctt acatcgttga
7741 tagtgttaca gtgaagaatg gttccatcca tctttacttt gataaagctg gtcaaaagac
7801 ttatgaaaga cattctctct ctcattttgt taacttagac aacctgagag ctaataacac
7861 taaaggttca ttgcctatta atgttatagt ttttgatggt aaatcaaaat gtgaagaatc
7921 atctgcaaaa tcagcgtctg tttactacag tcagcttatg tgtcaaccta tactgttact
7981 agatcaggca ttagtgtctg atgttggtga tagtgcggaa gttgcagtta aaatgtttga
8041 tgcttacgtt aatacgtttt catcaacttt taacgtacca atggaaaaac tcaaaacact
8101 agttgcaact gcagaagctg aacttgcaaa gaatgtgtcc ttagacaatg tcttatctac
8161 ttttatttca gcagctcggc aagggtttgt tgattcagat gtagaaacta aagatgttgt
8221 tgaatgtctt aaattgtcac atcaatctga catagaagtt actggcgata gttgtaataa
8281 ctatatgctc acctataaca aagttgaaaa catgacaccc cgtgaccttg gtgcttgtat
8341 tgactgtagt gcgcgtcata ttaatgcgca ggtagcaaaa agtcacaaca ttgctttgat
8401 atggaacgtt aaagatttca tgtcattgtc tgaacaacta cgaaaacaaa tacgtagtgc
8461 tgctaaaaag aataacttac cttttaagtt gacatgtgca actactagac aagttgttaa
8521 tgttgtaaca acaaagatag cacttaaggg tggtaaaatt gttaataatt ggttgaagca
8581 gttaattaaa gttacacttg tgttcctttt tgttgctgct attttctatt taataacacc
8641 tgttcatgtc atgtctaaac atactgactt ttcaagtgaa atcataggat acaaggctat
8701 tgatggtggt gtcactcgtg acatagcatc tacagatact tgttttgcta acaaacatgc
8761 tgattttgac acatggttta gccagcgtgg tggtagttat actaatgaca aagcttgccc
8821 attgattgct gcagtcataa caagagaagt gggttttgtc gtgcctggtt tgcctggcac
8881 gatattacgc acaactaatg gtgacttttt gcatttctta cctagagttt ttagtgcagt
8941 tggtaacatc tgttacacac catcaaaact tatagagtac actgactttg caacatcagc
9001 ttgtgttttg gctgctgaat gtacaatttt taaagatgct tctggtaagc cagtaccata
9061 ttgttatgat accaatgtac tagaaggttc tgttgcttat gaaagtttac gccctgacac
9121 acgttatgtg ctcatggatg gctctattat tcaatttcct aacacctacc ttgaaggttc
9181 tgttagagtg gtaacaactt ttgattctga gtactgtagg cacggcactt gtgaaagatc
9241 agaagctggt gtttgtgtat ctactagtgg tagatgggta cttaacaatg attattacag
9301 atctttacca ggagttttct gtggtgtaga tgctgtaaat ttacttacta atatgtttac
9361 accactaatt caacctattg gtgctttgga catatcagca tctatagtag ctggtggtat
9421 tgtagctatc gtagtaacat gccttgccta ctattttatg aggtttagaa gagcttttgg
9481 tgaatacagt catgtagttg cctttaatac tttactattc cttatgtcat tcactgtact
9541 ctgtttaaca ccagtttact cattcttacc tggtgtttat tctgttattt acttgtactt
9601 gacattttat cttactaatg atgtttcttt tttagcacat attcagtgga tggttatgtt
9661 cacaccttta gtacctttct ggataacaat tgcttatatc atttgtattt ccacaaagca
9721 tttctattgg ttctttagta attacctaaa gagacgtgta gtctttaatg gtgtttcctt
9781 tagtactttt gaagaagctg cgctgtgcac ctttttgtta aataaagaaa tgtatctaaa
9841 gttgcgtagt gatgtgctat tacctcttac gcaatataat agatacttag ctctttataa
9901 taagtacaag tattttagtg gagcaatgga tacaactagc tacagagaag ctgcttgttg
9961 tcatctcgca aaggctctca atgacttcag taactcaggt tctgatgttc tttaccaacc
10021 accacaaacc tctatcacct cagctgtttt gcagagtggt tttagaaaaa tggcattccc
10081 atctggtaaa gttgagggtt gtatggtaca agtaacttgt ggtacaacta cacttaacgg
10141 tctttggctt gatgacgtag tttactgtcc aagacatgtg atctgcacct ctgaagacat
10201 gcttaaccct aattatgaag atttactcat tcgtaagtct aatcataatt tcttggtaca
10261 ggctggtaat gttcaactca gggttattgg acattctatg caaaattgtg tacttaagct
10321 taaggttgat acagccaatc ctaagacacc taagtataag tttgttcgca ttcaaccagg
10381 acagactttt tcagtgttag cttgttacaa tggttcacca tctggtgttt accaatgtgc
10441 tatgaggccc aatttcacta ttaagggttc attccttaat ggttcatgtg gtagtgttgg
10501 ttttaacata gattatgact gtgtctcttt ttgttacatg caccatatgg aattaccaac
10561 tggagttcat gctggcacag acttagaagg taacttttat ggaccttttg ttgacaggca
10621 aacagcacaa gcagctggta cggacacaac tattacagtt aatgttttag cttggttgta
10681 cgctgctgtt ataaatggag acaggtggtt tctcaatcga tttaccacaa ctcttaatga
10741 ctttaacctt gtggctatga agtacaatta tgaacctcta acacaagacc atgttgacat
10801 actaggacct ctttctgctc aaactggaat tgccgtttta gatatgtgtg cttcattaaa
10861 agaattactg caaaatggta tgaatggacg taccatattg ggtagtgctt tattagaaga
10921 tgaatttaca ccttttgatg ttgttagaca atgctcaggt gttactttcc aaagtgcagt
10981 gaaaagaaca atcaagggta cacaccactg gttgttactc acaattttga cttcactttt
11041 agttttagtc cagagtactc aatggtcttt gttctttttt ttgtatgaaa atgccttttt
11101 accttttgct atgggtatta ttgctatgtc tgcttttgca atgatgtttg tcaaacataa
11161 gcatgcattt ctctgtttgt ttttgttacc ttctcttgcc actgtagctt attttaatat
11221 ggtctatatg cctgctagtt gggtgatgcg tattatgaca tggttggata tggttgatac
11281 tagtttgtct ggttttaagc taaaagactg tgttatgtat gcatcagctg tagtgttact
11341 aatccttatg acagcaagaa ctgtgtatga tgatggtgct aggagagtgt ggacacttat
11401 gaatgtcttg acactcgttt ataaagttta ttatggtaat gctttagatc aagccatttc
11461 catgtgggct cttataatct ctgttacttc taactactca ggtgtagtta caactgtcat
11521 gtttttggcc agaggtattg tttttatgtg tgttgagtat tgccctattt tcttcataac
11581 tggtaataca cttcagtgta taatgctagt ttattgtttc ttaggctatt tttgtacttg
11641 ttactttggc ctcttttgtt tactcaaccg ctactttaga ctgactcttg gtgtttatga
11701 ttacttagtt tctacacagg agtttagata tatgaattca cagggactac tcccacccaa
11761 gaatagcata gatgccttca aactcaacat taaattgttg ggtgttggtg gcaaaccttg
11821 tatcaaagta gccactgtac agtctaaaat gtcagatgta aagtgcacat cagtagtctt
11881 actctcagtt ttgcaacaac tcagagtaga atcatcatct aaattgtggg ctcaatgtgt
11941 ccagttacac aatgacattc tcttagctaa agatactact gaagcctttg aaaaaatggt
12001 ttcactactt tctgttttgc tttccatgca gggtgctgta gacataaaca agctttgtga
12061 agaaatgctg gacaacaggg caaccttaca agctatagcc tcagagttta gttcccttcc
12121 atcatatgca gcttttgcta ctgctcaaga agcttatgag caggctgttg ctaatggtga
12181 ttctgaagtt gttcttaaaa agttgaagaa gtctttgaat gtggctaaat ctgaatttga
12241 ccgtgatgca gccatgcaac gtaagttgga aaagatggct gatcaagcta tgacccaaat
12301 gtataaacag gctagatctg aggacaagag ggcaaaagtt actagtgcta tgcagacaat
12361 gcttttcact atgcttagaa agttggataa tgatgcactc aacaacatta tcaacaatgc
12421 aagagatggt tgtgttccct tgaacataat acctcttaca acagcagcca aactaatggt
12481 tgtcatacca gactataaca catataaaaa tacgtgtgat ggtacaacat ttacttatgc
12541 atcagcattg tgggaaatcc aacaggttgt agatgcagat agtaaaattg ttcaacttag
12601 tgaaattagt atggacaatt cacctaattt agcatggcct cttattgtaa cagctttaag
12661 ggccaattct gctgtcaaat tacagaataa tgagcttagt cctgttgcac tacgacagat
12721 gtcttgtgct gccggtacta cacaaactgc ttgcactgat gacaatgcgt tagcttacta
12781 caacacaaca aagggaggta ggtttgtact tgcactgtta tccgatttac aggatttgaa
12841 atgggctaga ttccctaaga gtgatggaac tggtactatc tatacagaac tggaaccacc
12901 ttgtaggttt gttacagaca cacctaaagg tcctaaagtg aagtatttat actttattaa
12961 aggattaaac aacctaaata gaggtatggt acttggtagt ttagctgcca cagtacgtct
13021 acaagctggt aatgcaacag aagtgcctgc caattcaact gtattatctt tctgtgcttt
13081 tgctgtagat gctgctaaag cttacaaaga ttatctagct agtgggggac aaccaatcac
13141 taattgtgtt aagatgttgt gtacacacac tggtactggt caggcaataa cagttacacc
13201 ggaagccaat atggatcaag aatcctttgg tggtgcatcg tgttgtctgt actgccgttg
13261 ccacatagat catccaaatc ctaaaggatt ttgtgactta aaaggtaagt atgtacaaat
13321 acctacaact tgtgctaatg accctgtggg ttttacactt aaaaacacag tctgtaccgt
13381 ctgcggtatg tggaaaggtt atggctgtag ttgtgatcaa ctccgcgaac ccatgcttca
13441 gtcagctgat gcacaatcgt ttttaaacgg gtttgcggtg taagtgcagc ccgtcttaca
13501 ccgtgcggca caggcactag tactgatgtc gtatacaggg cttttgacat ctacaatgat
13561 aaagtagctg gttttgctaa attcctaaaa actaattgtt gtcgcttcca agaaaaggac
13621 gaagatgaca atttaattga ttcttacttt gtagttaaga gacacacttt ctctaactac
13681 caacatgaag aaacaattta taatttactt aaggattgtc cagctgttgc taaacatgac
13741 ttctttaagt ttagaataga cggtgacatg gtaccacata tatcacgtca acgtcttact
13801 aaatacacaa tggcagacct cgtctatgct ttaaggcatt ttgatgaagg taattgtgac
13861 acattaaaag aaatacttgt cacatacaat tgttgtgatg atgattattt caataaaaag
13921 gactggtatg attttgtaga aaacccagat atattacgcg tatacgccaa cttaggtgaa
13981 cgtgtacgcc aagctttgtt aaaaacagta caattctgtg atgccatgcg aaatgctggt
14041 attgttggtg tactgacatt agataatcaa gatctcaatg gtaactggta tgatttcggt
14101 gatttcatac aaaccacgcc aggtagtgga gttcctgttg tagattctta ttattcattg
14161 ttaatgccta tattaacctt gaccagggct ttaactgcag agtcacatgt tgacactgac
14221 ttaacaaagc cttacattaa gtgggatttg ttaaaatatg acttcacgga agagaggtta
14281 aaactctttg accgttattt taaatattgg gatcagacat accacccaaa ttgtgttaac
14341 tgtttggatg acagatgcat tctgcattgt gcaaacttta atgttttatt ctctacagtg
14401 ttcccaccta caagttttgg accactagtg agaaaaatat ttgttgatgg tgttccattt
14461 gtagtttcaa ctggatacca cttcagagag ctaggtgttg tacataatca ggatgtaaac
14521 ttacatagct ctagacttag ttttaaggaa ttacttgtgt atgctgctga ccctgctatg
14581 cacgctgctt ctggtaatct attactagat aaacgcacta cgtgcttttc agtagctgca
14641 cttactaaca atgttgcttt tcaaactgtc aaacccggta attttaacaa agacttctat
14701 gactttgctg tgtctaaggg tttctttaag gaaggaagtt ctgttgaatt aaaacacttc
14761 ttctttgctc aggatggtaa tgctgctatc agcgattatg actactatcg ttataatcta
14821 ccaacaatgt gtgatatcag acaactacta tttgtagttg aagttgttga taagtacttt
14881 gattgttacg atggtggctg tattaatgct aaccaagtca tcgtcaacaa cctagacaaa
14941 tcagctggtt ttccatttaa taaatggggt aaggctagac tttattatga ttcaatgagt
15001 tatgaggatc aagatgcact tttcgcatat acaaaacgta atgtcatccc tactataact
15061 caaatgaatc ttaagtatgc cattagtgca aagaatagag ctcgcaccgt agctggtgtc
15121 tctatctgta gtactatgac caatagacag tttcatcaaa aattattgaa atcaatagcc
15181 gccactagag gagctactgt agtaattgga acaagcaaat tctatggtgg ttggcacaac
15241 atgttaaaaa ctgtttatag tgatgtagaa aaccctcacc ttatgggttg ggattatcct
15301 aaatgtgata gagccatgcc taacatgctt agaattatgg cctcacttgt tcttgctcgc
15361 aaacatacaa cgtgttgtag cttgtcacac cgtttctata gattagctaa tgagtgtgct
15421 caagtattga gtgaaatggt catgtgtggc ggttcactat atgttaaacc aggtggaacc
15481 tcatcaggag atgccacaac tgcttatgct aatagtgttt ttaacatttg tcaagctgtc
15541 acggccaatg ttaatgcact tttatctact gatggtaaca aaattgccga taagtatgtc
15601 cgcaatttac aacacagact ttatgagtgt ctctatagaa atagagatgt tgacacagac
15661 tttgtgaatg agttttacgc atatttgcgt aaacatttct caatgatgat actctctgac
15721 gatgctgttg tgtgtttcaa tagcacttat gcatctcaag gtctagtggc tagcataaag
15781 aactttaagt cagttcttta ttatcaaaac aatgttttta tgtctgaagc aaaatgttgg
15841 actgagactg accttactaa aggacctcat gaattttgct ctcaacatac aatgctagtt
15901 aaacagggtg atgattatgt gtaccttcct tacccagatc catcaagaat cctaggggcc
15961 ggctgttttg tagatgatat cgtaaaaaca gatggtacac ttatgattga acggttcgtg
16021 tctttagcta tagatgctta cccacttact aaacatccta atcaggagta tgctgatgtc
16081 tttcatttgt acttacaata cataagaaag ctacatgatg agttaacagg acacatgtta
16141 gacatgtatt ctgttatgct tactaatgat aacacttcaa ggtattggga acctgagttt
16201 tatgaggcta tgtacacacc gcatacagtc ttacaggctg ttggggcttg tgttctttgc
16261 aattcacaga cttcattaag atgtggtgct tgcatacgta gaccattctt atgttgtaaa
16321 tgctgttacg accatgtcat atcaacatca cataaattag tcttgtctgt taatccgtat
16381 gtttgcaatg ctccaggttg tgatgtcaca gatgtgactc aactttactt aggaggtatg
16441 agctattatt gtaaatcaca taaaccaccc attagttttc cattgtgtgc taatggacaa
16501 gtttttggtt tatataaaaa tacatgtgtt ggtagcgata atgttactga ctttaatgca
16561 attgcaacat gtgactggac aaatgctggt gattacattt tagctaacac ctgtactgaa
16621 agactcaagc tttttgcagc agaaacgctc aaagctactg aggagacatt taaactgtct
16681 tatggtattg ctactgtacg tgaagtgctg tctgacagag aattacatct ttcatgggaa
16741 gttggtaaac ctagaccacc acttaaccga aattatgtct ttactggtta tcgtgtaact
16801 aaaaacagta aagtacaaat aggagagtac acctttgaaa aaggtgacta tggtgatgct
16861 gttgtttacc gaggtacaac aacttacaaa ttaaatgttg gtgattattt tgtgctgaca
16921 tcacatacag taatgccatt aagtgcacct acactagtgc cacaagagca ctatgttaga
16981 attactggct tatacccaac actcaatatc tcagatgagt tttctagcaa tgttgcaaat
17041 tatcaaaagg ttggtatgca aaagtattct acactccagg gaccacctgg tactggtaag
17101 agtcattttg ctattggcct agctctctac tacccttctg ctcgcatagt gtatacagct
17161 tgctctcatg ccgctgttga tgcactatgt gagaaggcat taaaatattt gcctatagat
17221 aaatgtagta gaattatacc tgcacgtgct cgtgtagagt gttttgataa attcaaagtg
17281 aattcaacat tagaacagta tgtcttttgt actgtaaatg cattgcctga gacgacagca
17341 gatatagttg tctttgatga aatttcaatg gccacaaatt atgatttgag tgttgtcaat
17401 gccagattac gtgctaagca ctatgtgtac attggcgacc ctgctcaatt acctgcacca
17461 cgcacattgc taactaaggg cacactagaa ccagaatatt tcaattcagt gtgtagactt
17521 atgaaaacta taggtccaga catgttcctc ggaacttgtc ggcgttgtcc tgctgaaatt
17581 gttgacactg tgagtgcttt ggtttatgat aataagctta aagcacataa agacaaatca
17641 gctcaatgct ttaaaatgtt ttataagggt gttatcacgc atgatgtttc atctgcaatt
17701 aacaggccac aaataggcgt ggtaagagaa ttccttacac gtaaccctgc ttggagaaaa
17761 gctgtcttta tttcacctta taattcacag aatgctgtag cctcaaagat tttgggacta
17821 ccaactcaaa ctgttgattc atcacagggc tcagaatatg actatgtcat attcactcaa
17881 accactgaaa cagctcactc ttgtaatgta aacagattta atgttgctat taccagagca
17941 aaagtaggca tactttgcat aatgtctgat agagaccttt atgacaagtt gcaatttaca
18001 agtcttgaaa ttccacgtag gaatgtggca actttacaag ctgaaaatgt aacaggactc
18061 tttaaagatt gtagtaaggt aatcactggg ttacatccta cacaggcacc tacacacctc
18121 agtgttgaca ctaaattcaa aactgaaggt ttatgtgttg acatacctgg catacctaag
18181 gacatgacct atagaagact catctctatg atgggtttta aaatgaatta tcaagttaat
18241 ggttacccta acatgtttat cacccgcgaa gaagctataa gacatgtacg tgcatggatt
18301 ggcttcgatg tcgaggggtg tcatgctact agagaagctg ttggtaccaa tttaccttta
18361 cagctaggtt tttctacagg tgttaaccta gttgctgtac ctacaggtta tgttgataca
18421 cctaataata cagatttttc cagagttagt gctaaaccac cgcctggaga tcaatttaaa
18481 cacctcatac cacttatgta caaaggactt ccttggaatg tagtgcgtat aaagattgta
18541 caaatgttaa gtgacacact taaaaatctc tctgacagag tcgtatttgt cttatgggca
18601 catggctttg agttgacatc tatgaagtat tttgtgaaaa taggacctga gcgcacctgt
18661 tgtctatgtg atagacgtgc cacatgcttt tccactgctt cagacactta tgcctgttgg
18721 catcattcta ttggatttga ttacgtctat aatccgttta tgattgatgt tcaacaatgg
18781 ggttttacag gtaacctaca aagcaaccat gatctgtatt gtcaagtcca tggtaatgca
18841 catgtagcta gttgtgatgc aatcatgact aggtgtctag ctgtccacga gtgctttgtt
18901 aagcgtgttg actggactat tgaatatcct ataattggtg atgaactgaa gattaatgcg
18961 gcttgtagaa aggttcaaca catggttgtt aaagctgcat tattagcaga caaattccca
19021 gttcttcacg acattggtaa ccctaaagct attaagtgtg tacctcaagc tgatgtagaa
19081 tggaagttct atgatgcaca gccttgtagt gacaaagctt ataaaataga agaattattc
19141 tattcttatg ccacacattc tgacaaattc acagatggtg tatgcctatt ttggaattgc
19201 aatgtcgata gatatcctgc taattccatt gtttgtagat ttgacactag agtgctatct
19261 aaccttaact tgcctggttg tgatggtggc agtttgtatg taaataaaca tgcattccac
19321 acaccagctt ttgataaaag tgcttttgtt aatttaaaac aattaccatt tttctattac
19381 tctgacagtc catgtgagtc tcatggaaaa caagtagtgt cagatataga ttatgtacca
19441 ctaaagtctg ctacgtgtat aacacgttgc aatttaggtg gtgctgtctg tagacatcat
19501 gctaatgagt acagattgta tctcgatgct tataacatga tgatctcagc tggctttagc
19561 ttgtgggttt acaaacaatt tgatacttat aacctctgga acacttttac aagacttcag
19621 agtttagaaa atgtggcttt taatgttgta aataagggac actttgatgg acaacagggt
19681 gaagtaccag tttctatcat taataacact gtttacacaa aagttgatgg tgttgatgta
19741 gaattgtttg aaaataaaac aacattacct gttaatgtag catttgagct ttgggctaag
19801 cgcaacatta aaccagtacc agaggtgaaa atactcaata atttgggtgt ggacattgct
19861 gctaatactg tgatctggga ctacaaaaga gatgctccag cacatatatc tactattggt
19921 gtttgttcta tgactgacat agccaagaaa ccaactgaaa cgatttgtgc accactcact
19981 gtcttttttg atggtagagt tgatggtcaa gtagacttat ttagaaatgc ccgtaatggt
20041 gttcttatta cagaaggtag tgttaaaggt ttacaaccat ctgtaggtcc caaacaagct
20101 agtcttaatg gagtcacatt aattggagaa gccgtaaaaa cacagttcaa ttattataag
20161 aaagttgatg gtgttgtcca acaattacct gaaacttact ttactcagag tagaaattta
20221 caagaattta aacccaggag tcaaatggaa attgatttct tagaattagc tatggatgaa
20281 ttcattgaac ggtataaatt agaaggctat gccttcgaac atatcgttta tggagatttt
20341 agtcatagtc agttaggtgg tttacatcta ctgattggac tagctaaacg ttttaaggaa
20401 tcaccttttg aattagaaga ttttattcct atggacagta cagttaaaaa ctatttcata
20461 acagatgcgc aaacaggttc atctaagtgt gtgtgttctg ttattgattt attacttgat
20521 gattttgttg aaataataaa atcccaagat ttatctgtag tttctaaggt tgtcaaagtg
20581 actattgact atacagaaat ttcatttatg ctttggtgta aagatggcca tgtagaaaca
20641 ttttacccaa aattacaatc tagtcaagcg tggcaaccgg gtgttgctat gcctaatctt
20701 tacaaaatgc aaagaatgct attagaaaag tgtgaccttc aaaattatgg tgatagtgca
20761 acattaccta aaggcataat gatgaatgtc gcaaaatata ctcaactgtg tcaatattta
20821 aacacattaa cattagctgt accctataat atgagagtta tacattttgg tgctggttct
20881 gataaaggag ttgcaccagg tacagctgtt ttaagacagt ggttgcctac gggtacgctg
20941 cttgtcgatt cagatcttaa tgactttgtc tctgatgcag attcaacttt gattggtgat
21001 tgtgcaactg tacatacagc taataaatgg gatctcatta ttagtgatat gtacgaccct
21061 aagactaaaa atgttacaaa agaaaatgac tctaaagagg gttttttcac ttacatttgt
21121 gggtttatac aacaaaagct agctcttgga ggttccgtgg ctataaagat aacagaacat
21181 tcttggaatg ctgatcttta taagctcatg ggacacttcg catggtggac agcctttgtt
21241 actaatgtga atgcgtcatc atctgaagca tttttaattg gatgtaatta tcttggcaaa
21301 ccacgcgaac aaatagatgg ttatgtcatg catgcaaatt acatattttg gaggaataca
21361 aatccaattc agttgtcttc ctattcttta tttgacatga gtaaatttcc ccttaaatta
21421 aggggtactg ctgttatgtc tttaaaagaa ggtcaaatca atgatatgat tttatctctt
21481 cttagtaaag gtagacttat aattagagaa aacaacagag ttgttatttc tagtgatgtt
21541 cttgttaaca actaaacgaa caatgtttgt ttttcttgtt ttattgccac tagtctctag
21601 tcagtgtgtt aatcttacaa ccagaactca attaccccct gcatacacta attctttcac
21661 acgtggtgtt tattaccctg acaaagtttt cagatcctca gttttacatt caactcagga
21721 cttgttctta cctttctttt ccaatgttac ttggttccat gctatacatg tctctgggac
21781 caatggtact aagaggtttg ataaccctgt cctaccattt aatgatggtg tttattttgc
21841 ttccactgag aagtctaaca taataagagg ctggattttt ggtactactt tagattcgaa
21901 gacccagtcc ctacttattg ttaataacgc tactaatgtt gttattaaag tctgtgaatt
21961 tcaattttgt aatgatccat ttttgggtgt ttattaccac aaaaacaaca aaagttggat
22021 ggaaagtgag ttcagagttt attctagtgc gaataattgc acttttgaat atgtctctca
22081 gccttttctt atggaccttg aaggaaaaca gggtaatttc aaaaatctta gggaatttgt
22141 gtttaagaat attgatggtt attttaaaat atattctaag cacacgccta ttaatttagt
22201 gcgtgatctc cctcagggtt tttcggcttt agaaccattg gtagatttgc caataggtat
22261 taacatcact aggtttcaaa ctttacttgc tttacataga agttatttga ctcctggtga
22321 ttcttcttca ggttggacag ctggtgctgc agcttattat gtgggttatc ttcaacctag
22381 gacttttcta ttaaaatata atgaaaatgg aaccattaca gatgctgtag actgtgcact
22441 tgaccctctc tcagaaacaa agtgtacgtt gaaatccttc actgtagaaa aaggaatcta
22501 tcaaacttct aactttagag tccaaccaac agaatctatt gttagatttc ctaatattac
22561 aaacttgtgc ccttttggtg aagtttttaa cgccaccaga tttgcatctg tttatgcttg
22621 gaacaggaag agaatcagca actgtgttgc tgattattct gtcctatata attccgcatc
22681 attttccact tttaagtgtt atggagtgtc tcctactaaa ttaaatgatc tctgctttac
22741 taatgtctat gcagattcat ttgtaattag aggtgatgaa gtcagacaaa tcgctccagg
22801 gcaaactgga aagattgctg attataatta taaattacca gatgatttta caggctgcgt
22861 tatagcttgg aattctaaca atcttgattc taaggttggt ggtaattata attacctgta
22921 tagattgttt aggaagtcta atctcaaacc ttttgagaga gatatttcaa ctgaaatcta
22981 tcaggccggt agcacacctt gtaatggtgt tgaaggtttt aattgttact ttcctttaca
23041 atcatatggt ttccaaccca ctaatggtgt tggttaccaa ccatacagag tagtagtact
23101 ttcttttgaa cttctacatg caccagcaac tgtttgtgga cctaaaaagt ctactaattt
23161 ggttaaaaac aaatgtgtca atttcaactt caatggttta acaggcacag gtgttcttac
23221 tgagtctaac aaaaagtttc tgcctttcca acaatttggc agagacattg ctgacactac
23281 tgatgctgtc cgtgatccac agacacttga gattcttgac attacaccat gttcttttgg
23341 tggtgtcagt gttataacac caggaacaaa tacttctaac caggttgctg ttctttatca
23401 ggatgttaac tgcacagaag tccctgttgc tattcatgca gatcaactta ctcctacttg
23461 gcgtgtttat tctacaggtt ctaatgtttt tcaaacacgt gcaggctgtt taataggggc
23521 tgaacatgtc aacaactcat atgagtgtga catacccatt ggtgcaggta tatgcgctag
23581 ttatcagact cagactaatt ctcctcggcg ggcacgtagt gtagctagtc aatccatcat
23641 tgcctacact atgtcacttg gtgcagaaaa ttcagttgct tactctaata actctattgc
23701 catacccaca aattttacta ttagtgttac cacagaaatt ctaccagtgt ctatgaccaa
23761 gacatcagta gattgtacaa tgtacatttg tggtgattca actgaatgca gcaatctttt
23821 gttgcaatat ggcagttttt gtacacaatt aaaccgtgct ttaactggaa tagctgttga
23881 acaagacaaa aacacccaag aagtttttgc acaagtcaaa caaatttaca aaacaccacc
23941 aattaaagat tttggtggtt ttaatttttc acaaatatta ccagatccat caaaaccaag
24001 caagaggtca tttattgaag atctactttt caacaaagtg acacttgcag atgctggctt
24061 catcaaacaa tatggtgatt gccttggtga tattgctgct agagacctca tttgtgcaca
24121 aaagtttaac ggccttactg ttttgccacc tttgctcaca gatgaaatga ttgctcaata
24181 cacttctgca ctgttagcgg gtacaatcac ttctggttgg acctttggtg caggtgctgc
24241 attacaaata ccatttgcta tgcaaatggc ttataggttt aatggtattg gagttacaca
24301 gaatgttctc tatgagaacc aaaaattgat tgccaaccaa tttaatagtg ctattggcaa
24361 aattcaagac tcactttctt ccacagcaag tgcacttgga aaacttcaag atgtggtcaa
24421 ccaaaatgca caagctttaa acacgcttgt taaacaactt agctccaatt ttggtgcaat
24481 ttcaagtgtt ttaaatgata tcctttcacg tcttgacaaa gttgaggctg aagtgcaaat
24541 tgataggttg atcacaggca gacttcaaag tttgcagaca tatgtgactc aacaattaat
24601 tagagctgca gaaatcagag cttctgctaa tcttgctgct actaaaatgt cagagtgtgt
24661 acttggacaa tcaaaaagag ttgatttttg tggaaagggc tatcatctta tgtccttccc
24721 tcagtcagca cctcatggtg tagtcttctt gcatgtgact tatgtccctg cacaagaaaa
24781 gaacttcaca actgctcctg ccatttgtca tgatggaaaa gcacactttc ctcgtgaagg
24841 tgtctttgtt tcaaatggca cacactggtt tgtaacacaa aggaattttt atgaaccaca
24901 aatcattact acagacaaca catttgtgtc tggtaactgt gatgttgtaa taggaattgt
24961 caacaacaca gtttatgatc ctttgcaacc tgaattagac tcattcaagg aggagttaga
25021 taaatatttt aagaatcata catcaccaga tgttgattta ggtgacatct ctggcattaa
25081 tgcttcagtt gtaaacattc aaaaagaaat tgaccgcctc aatgaggttg ccaagaattt
25141 aaatgaatct ctcatcgatc tccaagaact tggaaagtat gagcagtata taaaatggcc
25201 atggtacatt tggctaggtt ttatagctgg cttgattgcc atagtaatgg tgacaattat
25261 gctttgctgt atgaccagtt gctgtagttg tctcaagggc tgttgttctt gtggatcctg
25321 ctgcaaattt gatgaagacg actctgagcc agtgctcaaa ggagtcaaat tacattacac
25381 ataaacgaac ttatggattt gtttatgaga atcttcacaa ttggaactgt aactttgaag
25441 caaggtgaaa tcaaggatgc tactccttca gattttgttc gcgctactgc aacgataccg
25501 atacaagcct cactcccttt cggatggctt attgttggcg ttgcacttct tgctgttttt
25561 cagagcgctt ccaaaatcat aaccctcaaa aagagatggc aactagcact ctccaagggt
25621 gttcactttg tttgcaactt gctgttgttg tttgtaacag tttactcaca ccttttgctc
25681 gttgctgctg gccttgaagc cccttttctc tatctttatg ctttagtcta cttcttgcag
25741 agtataaact ttgtaagaat aataatgagg ctttggcttt gctggaaatg ccgttccaaa
25801 aacccattac tttatgatgc caactatttt ctttgctggc atactaattg ttacgactat
25861 tgtatacctt acaatagtgt aacttcttca attgtcatta cttcaggtga tggcacaaca
25921 agtcctattt ctgaacatga ctaccagatt ggtggttata ctgaaaaatg ggaatctgga
25981 gtaaaagact gtgttgtatt acacagttac ttcacttcag actattacca gctgtactca
26041 actcaattga gtacagacac tggtgttgaa catgttacct tcttcatcta caataaaatt
26101 gttgatgagc ctgaagaaca tgtccaaatt cacacaatcg acggttcatc cggagttgtt
26161 aatccagtaa tggaaccaat ttatgatgaa ccgacgacga ctactagcgt gcctttgtaa
26221 gcacaagctg atgagtacga acttatgtac tcattcgttt cggaagagac aggtacgtta
26281 atagttaata gcgtacttct ttttcttgct ttcgtggtat tcttgctagt tacactagcc
26341 atccttactg cgcttcgatt gtgtgcgtac tgctgcaata ttgttaacgt gagtcttgta
26401 aaaccttctt tttacgttta ctctcgtgtt aaaaatctga attcttctag agttcctgat
26461 cttctggtct aaacgaacta aatattatat tagtttttct gtttggaact ttaattttag
26521 ccatggcaga ttccaacggt actattaccg ttgaagagct taaaaagctc cttgaacaat
26581 ggaacctagt aataggtttc ctattcctta catggatttg tcttctacaa tttgcctatg
26641 ccaacaggaa taggtttttg tatataatta agttaatttt cctctggctg ttatggccag
26701 taactttagc ttgttttgtg cttgctgctg tttacagaat aaattggatc accggtggaa
26761 ttgctatcgc aatggcttgt cttgtaggct tgatgtggct cagctacttc attgcttctt
26821 tcagactgtt tgcgcgtacg cgttccatgt ggtcattcaa tccagaaact aacattcttc
26881 tcaacgtgcc actccatggc actattctga ccagaccgct tctagaaagt gaactcgtaa
26941 tcggagctgt gatccttcgt ggacatcttc gtattgctgg acaccatcta ggacgctgtg
27001 acatcaagga cctgcctaaa gaaatcactg ttgctacatc acgaacgctt tcttattaca
27061 aattgggagc ttcgcagcgt gtagcaggtg actcaggttt tgctgcatac agtcgctaca
27121 ggattggcaa ctataaatta aacacagacc attccagtag cagtgacaat attgctttgc
27181 ttgtacagta agtgacaaca gatgtttcat ctcgttgact ttcaggttac tatagcagag
27241 atattactaa ttattatgag gacttttaaa gtttccattt ggaatcttga ttacatcata
27301 aacctcataa ttaaaaattt atctaagtca ctaactgaga ataaatattc tcaattagat
27361 gaagagcaac caatggagat tgattaaacg aacatgaaaa ttattctttt cttggcactg
27421 ataacactcg ctacttgtga gctttatcac taccaagagt gtgttagagg tacaacagta
27481 cttttaaaag aaccttgctc ttctggaaca tacgagggca attcaccatt tcatcctcta
27541 gctgataaca aatttgcact gacttgcttt agcactcaat ttgcttttgc ttgtcctgac
27601 ggcgtaaaac acgtctatca gttacgtgcc agatcagttt cacctaaact gttcatcaga
27661 caagaggaag ttcaagaact ttactctcca atttttctta ttgttgcggc aatagtgttt
27721 ataacacttt gcttcacact caaaagaaag acagaatgat tgaactttca ttaattgact
27781 tctatttgtg ctttttagcc tttctgctat tccttgtttt aattatgctt attatctttt
27841 ggttctcact tgaactgcaa gatcataatg aaacttgtca cgcctaaacg aacatgaaat
27901 ttcttgtttt cttaggaatc atcacaactg tagctgcatt tcaccaagaa tgtagtttac
27961 agtcatgtac tcaacatcaa ccatatgtag ttgatgaccc gtgtcctatt cacttctatt
28021 ctaaatggta tattagagta ggagctagaa aatcagcacc tttaattgaa ttgtgcgtgg
28081 atgaggctgg ttctaaatca cccattcagt acatcgatat cggtaattat acagtttcct
28141 gtttaccttt tacaattaat tgccaggaac ctaaattggg tagtcttgta gtgcgttgtt
28201 cgttctatga agacttttta gagtatcatg acgttcgtgt tgttttagat ttcatctaaa
28261 cgaacaaact aaaatgtctg ataatggacc ccaaaatcag cgaaatgcac cccgcattac
28321 gtttggtgga ccctcagatt caactggcag taaccagaat ggagaacgca gtggggcgcg
28381 atcaaaacaa cgtcggcccc aaggtttacc caataatact gcgtcttggt tcaccgctct
28441 cactcaacat ggcaaggaag accttaaatt ccctcgagga caaggcgttc caattaacac
28501 caatagcagt ccagatgacc aaattggcta ctaccgaaga gctaccagac gaattcgtgg
28561 tggtgacggt aaaatgaaag atctcagtcc aagatggtat ttctactacc taggaactgg
28621 gccagaagct ggacttccct atggtgctaa caaagacggc atcatatggg ttgcaactga
28681 gggagccttg aatacaccaa aagatcacat tggcacccgc aatcctgcta acaatgctgc
28741 aatcgtgcta caacttcctc aaggaacaac attgccaaaa ggcttctacg cagaagggag
28801 cagaggcggc agtcaagcct cttctcgttc ctcatcacgt agtcgcaaca gttcaagaaa
28861 ttcaactcca ggcagcagta ggggaacttc tcctgctaga atggctggca atggcggtga
28921 tgctgctctt gctttgctgc tgcttgacag attgaaccag cttgagagca aaatgtctgg
28981 taaaggccaa caacaacaag gccaaactgt cactaagaaa tctgctgctg aggcttctaa
29041 gaagcctcgg caaaaacgta ctgccactaa agcatacaat gtaacacaag ctttcggcag
29101 acgtggtcca gaacaaaccc aaggaaattt tggggaccag gaactaatca gacaaggaac
29161 tgattacaaa cattggccgc aaattgcaca atttgccccc agcgcttcag cgttcttcgg
29221 aatgtcgcgc attggcatgg aagtcacacc ttcgggaacg tggttgacct acacaggtgc
29281 catcaaattg gatgacaaag atccaaattt caaagatcaa gtcattttgc tgaataagca
29341 tattgacgca tacaaaacat tcccaccaac agagcctaaa aaggacaaaa agaagaaggc
29401 tgatgaaact caagccttac cgcagagaca gaagaaacag caaactgtga ctcttcttcc
29461 tgctgcagat ttggatgatt tctccaaaca attgcaacaa tccatgagca gtgctgactc
29521 aactcaggcc taaactcatg cagaccacac aaggcagatg ggctatataa acgttttcgc
29581 ttttccgttt acgatatata gtctactctt gtgcagaatg aattctcgta actacatagc
29641 acaagtagat gtagttaact ttaatctcac atagcaatct ttaatcagtg tgtaacatta
29701 gggaggactt gaaagagcca ccacattttc accgaggcca cgcggagtac gatcgagtgt
29761 acagtgaaca atgctaggga gagctgccta tatggaagag ccctaatgtg taaaattaat
29821 tttagtagtg ctatccccat gtgattttaa tagcttctta ggagaatgac aaaaaaaaaa
29881 aaaaaaaaaa aaaaaaaaaa aaa"""
</code>
> **This genome string can be replaced by saving it to disk as a plain text file and reading it, as shown in the next cell.**
<code>
with open('/Users/pranjal27bhardwaj/Desktop/Corona main/covid_genome.txt', 'r') as file:
corona = file.read()
</code>
<code>
corona
</code>
#### Remove all the numbers and whitespace from the genome so that only the string of A, T, G, C remains, using the replace function:
<code>
for a in " \n0123456789":
corona = corona.replace(a, "")
</code>
<code>
corona
</code>
#### Number of bases, i.e. the nucleotides that make up the RNA genome
<code>
len(corona)
</code>
# Kolmogorov complexity
#### Estimating the information content of the coronavirus genome by compressing it. Compression gives an upper bound on the Kolmogorov complexity (only upper bounds are practical, since the true complexity cannot be lower-bounded).
#### Compressing using zlib
<code>
import zlib
len(zlib.compress(corona.encode("utf-8")))
## in Python 3, the string must be encoded to bytes (UTF-8) before compression
</code>
#### The result above means the RNA of the coronavirus can encode at most 8858 bytes of information. This is only an upper bound: the coronavirus cannot contain more than 8858 bytes of information. Here we used zlib; let's see whether a stronger compressor such as lzma can compress it further.
#### Compressing furthermore using lzma
<code>
import lzma
lzc = lzma.compress(corona.encode("utf-8"))
len(lzc)
</code>
#### So the RNA of the coronavirus can encode at most 8308 bytes of information. This is again only an upper bound, but lzma gives a tighter one than zlib.
# How do we extract information from this genome?
The genome contains the information about the proteins it can make. These proteins determine the characteristics of the cell in which they are produced, so we need to extract information about them. To do that, we must know how proteins are formed from the genetic material, i.e. DNA/RNA.
> **Learning before applying:** DNA and RNA encode proteins. In DNA, A-T (A-U in RNA) and G-C form base pairs: the chemical structure of the bases is such that A and T are held together by 2 hydrogen bonds and G and C by 3 hydrogen bonds, while A-C and G-T cannot form such stable bonds.
<img src="./images_copy/AT-GC.jpg" width="480">
<img src="./images_copy/codon2.png">
> What happens during protein formation is:
<img src="./images_copy/transcript-translate-cell.jpg">
<img src="./images_copy/Codons_patterns.png">
> An enzyme called RNA polymerase breaks these hydrogen bonds over a small region, takes one strand of the DNA and builds its corresponding paired RNA. This happens inside the nucleus of the cell. The RNA produced is called 'mRNA' or 'messenger RNA' because it leaves the nucleus and acts as a message to the ribosome, which generates proteins accordingly. This process of producing mRNA is called **Transcription.** The ribosome then reads the mRNA in sets of 3 bases; such a set is called a codon. Codons determine the amino acids: depending on the codon read by the ribosome, tRNA (transfer RNA) brings the appropriate amino acid. These amino acids are linked by peptide bonds into a chain called a *polypeptide chain*. At the other end of the ribosome the tRNA is released and can fetch another amino acid.
> *Note:* Amino acids are organic compounds that contain amine (-NH2) and carboxyl (-COOH) functional groups. There are 20 standard amino acids and 2 non-standard ones. Of the 20 standard amino acids, nine (His, Ile, Leu, Lys, Met, Phe, Thr, Trp and Val) are called essential amino acids because the human body cannot synthesize them from other compounds at the level needed for normal growth, so they must be obtained from food. Here is the table of codons and their corresponding amino acids. 'Met' is usually the starting amino acid, i.e. 'AUG' marks the start of the coding sequence; hence 'AUG' is called the *start codon*. 'UAA', 'UGA' and 'UAG' are *stop codons*, as they mark the end of a polypeptide chain, so a new chain starts from a later codon.
<img src="./images_copy/genetic-code-table.jpg" width="600">
> This process of generating chains of amino acids is called **Translation.** A very long chain of amino acids is called a *protein*. In summary, we can picture the process as:
<img src="./images_copy/transcription-translation.png" width="600">
Since the coronavirus only has RNA, transcription does not occur and only translation happens. So what we need to write is *a translation function*, which takes the corona genome as input and returns all the polypeptide chains that could be formed from that genome. For that, we first need a dictionary of codons. The following codon string is copied from the 'Genetic code' Wikipedia page (https://en.wikipedia.org/wiki/DNA_codon_table).
<img src="./images_copy/codons.png">
<code>
# Asn or Asp / B AAU, AAC; GAU, GAC
# Gln or Glu / Z CAA, CAG; GAA, GAG
# START AUG
## Separating these from the table because the duplicates were creating problems
codons = """
Ala / A GCU, GCC, GCA, GCG
Ile / I AUU, AUC, AUA
Arg / R CGU, CGC, CGA, CGG; AGA, AGG, AGR;
Leu / L CUU, CUC, CUA, CUG; UUA, UUG, UUR;
Asn / N AAU, AAC
Lys / K AAA, AAG
Asp / D GAU, GAC
Met / M AUG
Phe / F UUU, UUC
Cys / C UGU, UGC
Pro / P CCU, CCC, CCA, CCG
Gln / Q CAA, CAG
Ser / S UCU, UCC, UCA, UCG; AGU, AGC;
Glu / E GAA, GAG
Thr / T ACU, ACC, ACA, ACG
Trp / W UGG
Gly / G GGU, GGC, GGA, GGG
Tyr / Y UAU, UAC
His / H CAU, CAC
Val / V GUU, GUC, GUA, GUG
STOP UAA, UGA, UAG""".strip()
for t in codons.split('\n'):
print(t.split('\t'))
</code>
> **To make this more readable, we'll turn it into a decoder dictionary for the genome. We also convert every "U" in the codon table to "T", because the genome string above is written with DNA letters (t instead of u).**
<code>
##decoder dictionary
dec = {}
for t in codons.split('\n'):
k, v = t.split('\t')
if '/' in k:
k = k.split('/')[-1].strip()
k = k.replace("STOP", "*")
v = v.replace(",", "").replace(";", "").lower().replace("u", "t").split(" ")
for vv in v:
if vv in dec:
print("duplicate", vv)
dec[vv] = k
dec
</code>
## We check for duplicates because AUG appears in multiple places: it is listed both under "Met" and as the START codon, which was causing problems during translation.
<code>
len(set(dec.values()))
</code>
> This means our decoder has 21 distinct values: the 20 standard amino acids shown in the following chart, plus 'STOP' ('*') as the 21st entry. This suggests the decoder works correctly.
<img src="./images_copy/aminoproof.png">
> Decoding the genome can be done in three possible ways, depending on where we start reading; these are called 'reading frames'.
<img src="./images_copy/reading-frames.png" width="480">
In molecular biology, a reading frame is a way of dividing the sequence of nucleotides in a nucleic acid (DNA or RNA) molecule into a set of consecutive, non-overlapping triplets.
<code>
def translation(x, isProtein = False):
aa = []
for i in range(0, len(x)-2, 3):
aa.append(dec[x[i:i+3]])
aa = ''.join(aa)
if isProtein:
if aa[0] != "M" or aa[-1] != "*":
print("BAD PROTEIN!")
return None
aa = aa[:-1]
return aa
aa = translation(corona[0:]) + translation(corona[1:]) + translation(corona[2:])
## Refer to the reading-frame figure above: we concatenate the translations of all three reading frames
aa
</code>
# Polypeptides
>In molecular biology, a reading frame is a way of dividing the sequence of nucleotides in a nucleic acid (DNA or RNA) molecule into a set of consecutive, non-overlapping triplets. Where these triplets equate to amino acids or stop signals during translation, they are called codons.
>A polypeptide is a longer, continuous, and unbranched peptide chain of up to fifty amino acids. Hence, peptides fall under the broad chemical classes of biological oligomers and polymers, alongside nucleic acids, oligosaccharides, polysaccharides, and others.
>When a polypeptide contains more than fifty amino acids it is known as a protein. Proteins consist of one or more polypeptides arranged in a biologically functional way, often bound to ligands such as coenzymes and cofactors, or to another protein or other macromolecule such as DNA or RNA, or to complex macromolecular assemblies.
<code>
polypeptides = aa.split("*")
polypeptides
</code>
<code>
len(polypeptides)
</code>
<code>
long_polypep_chains = list(filter(lambda x: len(x) > 100, aa.split("*")))
long_polypep_chains
</code>
<code>
len(long_polypep_chains)
</code>
This is the genome organisation of Sars-Cov-2. _(Genome organisation is the linear order of genetic material (DNA/RNA) and its division into segments performing some specific function.)_
> Note: ORF stands for 'Open Reading Frame': a stretch of a reading frame that encodes a protein, starting with M (start codon) and ending with * (stop codon).
Source: https://en.wikipedia.org/wiki/Severe_acute_respiratory_syndrome_coronavirus_2#Phylogenetics_and_taxonomy
<img src="./images_copy/SARS-CoV-2-genome.png" width="900">
Let's see if we can extract all the segments as mentioned here. We will refer to the following source again. Source: https://www.ncbi.nlm.nih.gov/nuccore/NC_045512
Also, if you will see the following genome organisation of Sars-Cov (old coronavirus), you will notice - the structure is very similar to Sars-CoV-2. _(Ignore the detailing given in the structure.)_
<img src="./images_copy/SARS-CoV-1-genome.png" width="800">
<code>
with open('/Users/pranjal27bhardwaj/Desktop/corona/sars_cov2_data _c/genome/sars_cov2_genome.txt', 'r') as file:
corona = file.read()
</code>
<code>
for s in "\n01234567789 ":
corona = corona.replace(s, "")
</code>
<code>
# https://www.ncbi.nlm.nih.gov/protein/1802476803 -
# Orf1a polyprotein, found in Sars-Cov-2 (new Covid 19)
orf1a_poly_v2 = translation(corona[265:13483], True)
orf1a_poly_v2
</code>
<code>
# https://www.uniprot.org/uniprot/A7J8L3
# Orf1a polyprotein, found in Sars-Cov
with open('sars_cov2_data/proteins_copy/orf1a.txt', 'r') as file:
orf1a_poly_v1 = file.read().replace('\n', '')
orf1a_poly_v1
</code>
<code>
len(orf1a_poly_v1), len(orf1a_poly_v2)
</code>
> Usually ORF1b is not studied alone but together with ORF1a, so we need to look at 'ORF1ab'. Still, just to verify that the length of ORF1b is 2595 aa, here we compute the length of ORF1b in SARS-CoV-2.
<code>
# For orf1b_v1, refer - https://www.uniprot.org/uniprot/A0A0A0QGJ0
orf1b_poly_v2 = translation(corona[13467:21555])
# Length calculated from the first 'M'. The last symbol is '*', so subtract an extra 1 for that.
len(orf1b_poly_v2) - orf1b_poly_v2.find('M') - 1
</code>
<code>
# https://www.ncbi.nlm.nih.gov/protein/1796318597 -
# Orf1ab polyprotein - found in Sars-cov-2
orf1ab_poly_v2 = translation(corona[265:13468]) + translation(corona[13467:21555])
</code>
<code>
# https://www.uniprot.org/uniprot/A7J8L2
# Orf1ab polyprotein - found in Sars-cov
with open('sars_cov2_data/proteins_copy/orf1ab.txt', 'r') as file:
    orf1ab_poly_v1 = file.read().replace('\n', '')
</code>
<code>
len(orf1ab_poly_v2), len(orf1ab_poly_v1)
</code>
> So by now, we have extracted Orf1a and Orf1b RNA segments.
<code>
# https://www.ncbi.nlm.nih.gov/protein/1796318598
# Spike glycoprotein - found in Sars-cov-2
spike_pro_v2 = translation(corona[21562:25384], True)
# https://www.ncbi.nlm.nih.gov/Structure/pdb/6VXX CLOSED FORM of glycoprotein (structure of glycoprotein before delivering the payload)
# https://www.ncbi.nlm.nih.gov/Structure/pdb/6VYB OPEN FORM of glycoprotein (structure of glycoprotein after delivering the payload)
spike_pro_v2
</code>
<code>
cn3 = open('/Users/pranjal27bhardwaj/Desktop/Corona main/mmdb_6VXX.cn3', 'rb').read()
</code>
> The spike glycoprotein has catalytic properties and is responsible for attacking host cells, allowing the virus to multiply. The infection begins when the viral spike (S) glycoprotein attaches to its complementary host cell receptor, which is usually ACE2.
<code>
# https://www.uniprot.org/uniprot/P59594
# Spike glycoprotein - found in Sars-cov
with open('sars_cov2_data/proteins_copy/spike.txt', 'r') as file:
spike_v1 = file.read().replace('\n', '')
</code>
<code>
len(spike_pro_v2), len(spike_v1)
</code>
<code>
import nglview
view = nglview.show_pdbid("6VXX") # load "3pqr" from RCSB PDB and display viewer widget
view
</code>
<code>
import nglview
view = nglview.show_pdbid("6VYB") # load "3pqr" from RCSB PDB and display viewer widget
view
</code>
<code>
# https://www.ncbi.nlm.nih.gov/gene/43740569
# orf3a protein found in Sars-cov-2.
orf3a_pro_v2 = translation(corona[25392:26220], True)
</code>
<code>
# https://www.uniprot.org/uniprot/J9TEM7
with open('sars_cov2_data/proteins_copy/orf3a.txt', 'r') as file:
orf3a_pro_v1 = file.read().replace('\n', '')
</code>
<code>
len(orf3a_pro_v2), len(orf3a_pro_v1)
</code>
By now we have observed that there is very little difference between the corresponding protein lengths of SARS-CoV and SARS-CoV-2.
**So, can we say that there isn't much difference between the proteins of the two viruses?** Well, **not really.**
The reason is that length is not an accurate measure of how dissimilar two proteins are, which raises a different question.
### Q. 3 How different are the proteins of this novel coronavirus compared to the older one?
The answer is - **The Edit Distance.** In computational linguistics and computer science, edit distance is a way of quantifying how dissimilar two strings (e.g., words) are to one another by counting the minimum number of operations required to transform one string into the other. In bioinformatics, it can be used to quantify the similarity of DNA sequences, which can be viewed as strings of the letters A, C, G and T.
Source: https://en.wikipedia.org/wiki/Edit_distance
Let's calculate the edit distance of the genomes of the two versions of coronaviruses.
Source of complete genome of old coronavirus: https://www.ncbi.nlm.nih.gov/nuccore/30271926
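Before computing it on the full genomes with the `editdistance` library below, here is a minimal dynamic-programming (Levenshtein) sketch, for illustration only: a pure-Python O(n·m) implementation is far too slow for genomes of roughly 30,000 bases, which is why the compiled library is used instead.
<code>
# Minimal Levenshtein edit-distance sketch (illustrative only):
# insertions, deletions and substitutions each cost 1.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))            # distances from the empty prefix of a
    for i, ca in enumerate(a, 1):
        curr = [i]                            # distance from a[:i] to the empty prefix of b
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,      # deletion
                            curr[j - 1] + 1,  # insertion
                            prev[j - 1] + cost))  # substitution (or match)
        prev = curr
    return prev[-1]

levenshtein("gattaca", "gcatgcu")  # 4
</code>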
<code>
with open('sars_cov_data/genome/sars_cov_genome.txt', 'r') as file:
sars_cov = file.read()
print(sars_cov)
</code>
<code>
for s in "\n01234567789 ":
sars_cov = sars_cov.replace(s, "")
sars_cov
</code>
<code>
import lzma
lzc_v1 = lzma.compress(sars_cov.encode("utf-8"))
len(lzc_v1)
</code>
<code>
len(lzc_v1) - len(lzc)
</code>
<code>
len(corona) - len(sars_cov)
</code>
<code>
import editdistance
editdistance.eval(sars_cov, corona)
</code>
From this we can see that the novel coronavirus differs from the old coronavirus much more than expected. Now that we know that the difference between two DNA/RNA sequences is measured by the edit distance, we can simply finish extracting the remaining proteins.
> Cross verifying the length of Envelope protein as 75 aa
<code>
# https://www.ncbi.nlm.nih.gov/gene/43740570 - Envelope protein in Cov-2
envelope_pro_v2 = translation(corona[26244:26472], True)
</code>
<code>
len(envelope_pro_v2)
</code>
> Cross verifying the length of the Membrane protein, which is supposed to be 222 aa
<code>
# https://www.ncbi.nlm.nih.gov/gene/43740571 - Membrane Glycoprotein in Cov-2
membrane_pro_v2 = translation(corona[26522:27191], True)
</code>
<code>
len(membrane_pro_v2)
</code>
> Cross verifying the length of ORF6 protein which is supposed to be 61 aa
<code>
# https://www.ncbi.nlm.nih.gov/gene/43740572 - Orf6 in Cov-2
orf6_pro_v2 = translation(corona[27201:27387], True)
</code>
<code>
len(orf6_pro_v2)
</code>
> Cross verifying the length of ORF7a protein which is supposed to be 121aa
<code>
# https://www.ncbi.nlm.nih.gov/gene/43740573 - orf7a in Cov-2
orf7a_pro = translation(corona[27393:27759], True)
</code>
<code>
len(orf7a_pro)
</code>
> Cross verifying the length of ORF7b protein which is supposed to be 43aa
<code>
# https://www.ncbi.nlm.nih.gov/gene/43740574 - orf7b in Cov-2
orf7b_pro = translation(corona[27755:27887], True)
</code>
<code>
len(orf7b_pro)
</code>
> Cross verifying the length of ORF8 protein which is supposed to be 121aa
<code>
# https://www.ncbi.nlm.nih.gov/gene/43740577 - orf8 in Cov-2
orf8_pro = translation(corona[27893:28259], True)
</code>
<code>
len(orf8_pro)
</code>
> Cross verifying the length of ORF10 protein which is supposed to be 38aa
<code>
# https://www.ncbi.nlm.nih.gov/gene/43740576 - orf10 in Cov-2
orf10_pro = translation(corona[29557:29674], True)
</code>
<code>
len(orf10_pro)
</code>
|
{
"filename": "corona.ipynb",
"repository": "0xpranjal/COVID-Genome-Computational-Analysis",
"query": "transformed_from_existing",
"size": 320296,
"sha": ""
}
|
# design_NLLB_model_1.ipynb
Repository: Dimildizio/system
<a href="https://colab.research.google.com/github/Dimildizio/system_design/blob/main/NLLB_model.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Install huggingface lib
<code>
%%capture
!pip install transformers rouge-score sacrebleu sentencepiece
</code>
## Imports
<code>
import nltk
import pandas as pd
import sentencepiece as sp_module
import sacrebleu
import urllib.request
import io
from google.colab import files
from nltk.translate.bleu_score import corpus_bleu, sentence_bleu, SmoothingFunction
from nltk.translate.meteor_score import meteor_score
from rouge_score import rouge_scorer
from sacrebleu.metrics import BLEU, CHRF
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, MarianTokenizer, MarianMTModel
from typing import List
</code>
<code>
nltk.download('wordnet')
</code>
### Download data samples
<code>
%%capture
!wget https://raw.githubusercontent.com/Dimildizio/system_design/main/data/gtrans.txt
!wget https://raw.githubusercontent.com/Dimildizio/system_design/main/data/orig.txt
!wget https://raw.githubusercontent.com/Dimildizio/system_design/main/data/reference.txt
!wget https://raw.githubusercontent.com/Dimildizio/system_design/main/data/translation.txt
</code>
### Download sentencepiece vocab
<code>
%%capture
!wget https://raw.githubusercontent.com/google/sentencepiece/master/data/botchan.txt
</code>
## Specify huggingface access token to download model
<code>
access_token ='' #Put your huggingface token here
</code>
## Download tokenization models for rus and english corpus
<code>
eng_tokenizer = AutoTokenizer.from_pretrained(
"facebook/nllb-200-distilled-600M", token=access_token)
rus_tokenizer = AutoTokenizer.from_pretrained(
"facebook/nllb-200-distilled-600M", src_lang="rus_Cyrl", token=access_token)
</code>
# Trying the out-of-the-box model
## Create model
<code>
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M", token=access_token)
</code>
## Create example data
<code>
doc = 'Шустрая бурая лисица прыгает через ленивого пса!'
reference = 'The quick brown fox jumps over the lazy dog!'
g_trans = 'The nimble brown fox jumps over the lazy dog!'
</code>
## Tokenize
<code>
rus_tok = rus_tokenizer(doc, return_tensors='pt')
</code>
### Sentence piece
<code>
with open("botchan.txt", "rb") as f:
text_data = f.read()
</code>
<code>
model_sp = io.BytesIO()
sp_vocab_size = 1000 #should consider enlarging
sp_module.SentencePieceTrainer.train(sentence_iterator=io.BytesIO(text_data),
model_writer=model_sp,
vocab_size=sp_vocab_size)
sp_tokenizer = sp_module.SentencePieceProcessor(model_proto=model_sp.getvalue())
</code>
<code>
#with open('out.model', 'wb') as f:
# f.write(model_sp.getvalue())
#sp_processor = sp_module.SentencePieceProcessor()
#sp_processor.load('out.model')
</code>
<code>
sp_tokens = sp_tokenizer.encode_as_pieces(doc)
</code>
<code>
sp_tokens
</code>
The SentencePiece tokenizer exposes different functions from AutoTokenizer.
We need to think about how to integrate it into the MachineTranslation class, or write a new class, so that the SentencePiece tokenizer and the NLLB model can work together.
### Translate
<code>
translated_tokens = model.generate(
**rus_tok, forced_bos_token_id=rus_tokenizer.lang_code_to_id["eng_Latn"], max_length=30)
</code>
<code>
translated = rus_tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0] #for multiple entries
</code>
# Metrics
#### **BLEU** - BiLingual Evaluation Understudy
>cares more about word overlap.
>Precision is more important.
> Uses n-grams for evaluation.
>Normalizes scores for text length.
>Typical for machine translation.
>Rewards model for producing matching with reference words.
> Penalizes longer sentences.
#### **ROUGE** - Recall-Oriented Understudy for Gisting Evaluation
>Focused on capturing context (Gisting Evaluation).
>Recall is more important. (Recall-Oriented)
> Uses longest common subseq for evaluation.
> Doesn't normalize scores for text length.
>Typical for text summarization.
>Rewards model if in general generated text represents the contexts of reference.
> Longer texts have advantage for recall.
#### **METEOR** - Metrics for Evaluation of Translation with Explicit ORdering
> Takes word order into account.
> Uses stemming and other techniques for synonyms and paraphrasing
> F1 score is more important.
> Uses unigrams (1 word) along with synonyms with preloaded WordNet synonym dictionary.
> More robust in variations
> Doesn't penalize longer texts
> Typical for machine translation and text summarization.
> Again: more flexible due to use of synonyms (more complex than word overlap)
#### **TER** - Translation Edit Rate
> Represents the **number of edits** needed to get from the hypothesis to the reference sentence. Lower is better.
> Basically quantifies the dissimilarity of the reference and translation
> Possible changes include: deletions, substitution, insertion, shifting
> Used in machine translation and texts summarization.
> Not included in commonly-used libs like nltk
> More complex
> No percentage score, which makes the result harder to interpret; 0 edits is the best.
<code>
Rouge = rouge_scorer.RougeScorer(['rouge1', 'rouge2', 'rougeL'], use_stemmer=True)
</code>
<code>
smoothing_zero_ngrams = SmoothingFunction()
</code>
<code>
def round_perc(num: float) -> float:
return round(num*100, 2)
def get_sent_bleu(sentence: str, reference: str=reference) -> float:
'''n=3 gram'''
score = sentence_bleu([reference.split()], sentence.split(),
weights = (0.25, 0.5, 0.25), smoothing_function=smoothing_zero_ngrams.method1) #weights define the 'window size'
return round_perc(score)
def get_bleu(sentence: str, reference: str=reference) -> float:
score = corpus_bleu([[reference.split()]], [sentence.split()],
smoothing_function=smoothing_zero_ngrams.method1)
return round_perc(score)
def get_sacrebleu(sentence, reference):
#bleu = BLEU()
result = sacrebleu.corpus_bleu([sentence], [[reference]])
#print(bleu.get_signature())
return result
def get_chrf(sentence, reference):
chrf = CHRF()
return chrf.corpus_score([sentence], [[reference]])
def get_meteor(sentence: str, reference: str=reference) -> float:
score = meteor_score([reference.split()], sentence.split())
return round_perc(score)
def get_rouge(sentence: str, reference: str=reference):
'''rouge-1 unigrams, individual words
rouge-2 bigrams, word pairs
rouge-L longest sequence'''
scores = Rouge.score(reference, sentence)
idict = {key:{} for key in scores}
for key in scores:
idict[key]['precision'] = str(round_perc(scores[key].precision))+'%'
idict[key]['recall'] = str(round_perc(scores[key].recall))+'%'
idict[key]['f1'] = str(round_perc(scores[key].fmeasure))+'%'
rouge_dict = {key: idict[key] for key in idict}
return rouge_dict
</code>
<code>
def eval_rouge(translations, references):
rouge_dict = {'rouge1':{'precision':0, 'recall':0, 'f_measure':0},
'rouge2':{'precision':0, 'recall':0, 'f_measure':0},
'rougeL':{'precision':0, 'recall':0, 'f_measure':0}
}
for num in range(len(translations)):
rouges = Rouge.score(references[num], translations[num])
for key in rouges.keys():
rouge_dict[key]['precision'] += rouges[key].precision
rouge_dict[key]['recall'] += rouges[key].recall
rouge_dict[key]['f_measure'] += rouges[key].fmeasure
for r in rouge_dict.keys():
for metric in rouge_dict[r]:
print(f'{r}: {metric}: {round(rouge_dict[r][metric]/(num+1), 2)}')
eval_rouge([translated], [reference])
</code>
<code>
def ter(hypothesis, reference=reference):
    # Note: a simplified, character-level edit distance (insertions, deletions and
    # substitutions only, with no shift operation), used here as a stand-in for word-level TER
n = len(reference)
m = len(hypothesis)
# Init matrix for dynamic programming
dp = [[0] * (m + 1) for _ in range(n + 1)]
# Init first row and column
for i in range(n + 1):
dp[i][0] = i
for j in range(m + 1):
dp[0][j] = j
# Fill in the DP matrix
for i in range(1, n + 1):
for j in range(1, m + 1):
cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
dp[i][j] = min(dp[i - 1][j] + 1, dp[i][j - 1] + 1, dp[i - 1][j - 1] + cost)
return dp[n][m]
</code>
### Test metrics
<code>
def test_metrics(reference, list_of_trans, transname=('google translate', 'machine translation')):
#print(f'Reference: {reference}\n\n')
for name, func in zip(['sent_bleu', 'sacrebleu', 'sacre_chrf','corpus_bleu', 'meteor', 'rouge'], #, 'ter'],
[get_sent_bleu, get_sacrebleu, get_chrf, get_bleu, get_meteor, get_rouge]): #, ter]):
print('Metric:', name)
symbol = '%'# if name != 'ter' else ' edits'
for num in range(len(list_of_trans)):
score = func(list_of_trans[num], reference)
print(f"Translation: {transname[num]}: {score}{symbol}")
print()
</code>
## Upload files to compare
<code>
def onefile(files):
to_compare = []
for filename in files:
with open (filename+'.txt') as f:
new = f.readlines()
to_compare.append(''.join(new).replace('\n', '').replace('\t', ''))
return to_compare
</code>
<code>
filenames = ['orig', 'gtrans', 'translation', 'reference']
orig, gtrans, trans, ref = onefile(filenames)
</code>
<code>
test_metrics(ref, [gtrans, trans])
</code>
## Flow
<code>
class MachineTranslation:
def __init__(self, model, tokenizer, target_lang='eng_Latn', sent_len=300):
self.model=model
self.tokenizer = tokenizer
self.to_lang = target_lang
self.sent_len = sent_len
self.metrics_dict = {'TER':ter,
'BLEU corpus':get_bleu,
'BLEU sentence': get_sent_bleu,
'sacre BLEU':get_sacrebleu,
'sacre_CHRF++': get_chrf,
'METEOR': get_meteor,
'ROUGE': get_rouge,
}
def tokenize(self, sent: str):
'''Tokenize input sentence'''
return self.tokenizer(sent, return_tensors='pt')
def translate(self, inputs):
'''
Generate translation
'''
return self.model.generate(
**inputs, forced_bos_token_id=self.tokenizer.lang_code_to_id[self.to_lang],
max_length=self.sent_len)
def get_decoded(self, toks) -> list:
'''
Convert vect tokens into sentences
'''
return self.tokenizer.batch_decode(toks, skip_special_tokens=True)
def generate_metrics(self, translation: str, reference: str) -> None:
'''
Use BLEU metrics and compare translated sent to the best translation
'''
#print(f'Reference: {reference}\nTranslation: {translation}\n')
for name, func in self.metrics_dict.items():
score = func(translation.lower(), reference.lower())
self.print_metrics(translation, name, score)
def print_metrics(self, translation, metrics_name, score):
if metrics_name == 'TER':
perc_sign = ' edits'
elif metrics_name in ['sacre BLEU', 'ROUGE']:
perc_sign = ''
else:
perc_sign = '%'
print(f"{metrics_name} score: {score}{perc_sign}")
def process_sentence(self, sent: str):
'''
main process for translation
'''
tokens = self.tokenize(sent)
translated_tokens = self.translate(tokens)
result = self.get_decoded(translated_tokens)
return result
def infer(self, sent: str, reference: str) -> None:
''' TO BE CHANGED
Compare first sentence of the doc to the reference
'''
translation = self.process_sentence(sent)
print('Translated:', translation[0])
self.generate_metrics(translation[0].lower(), reference.lower())
</code>
<code>
MT = MachineTranslation(model, rus_tokenizer)
</code>
<code>
MT.infer(doc, reference)
</code>
<code>
MT.infer('Ложка дёгтя в бочке меда', 'А fly in the ointment')
</code>
#### Sample from dataset
<code>
a_few_sentences_rus = [
'НЕЙТРОННАЯ РЕФЛЕКТОМЕТРИЯ В РОССИИ: ТЕКУЩЕЕ СОСТОЯНИЕ И ПЕРСПЕКТИВЫ',
'В обзоре дано описание текущего состояния дел и перспектив развития в области нейтронной рефлектометрии на действующих и будущих нейтронных источниках Российской Федерации.',
'В результате ввода в эксплуатацию новых инструментов на реакторах ИР-8 и ПИК число нейтронных рефлектометров в РФ должно удвоиться.',
'В результате должен появиться набор инструментов, нацеленных на решение широкого круга задач в области физики, химии, биологии слоистых систем в интересах научного сообщества, а также для подготовки специалистов для дальнейшего развития и совершенствования данной методики.'
]
a_few_sentences_eng = [
'Neutron Reflectometry in Russia: Current State and Prospects',
'The review is devoted to the current state of affairs and prospects for development in the field of neutron reflectometry on the existing and future neutron sources in the Russian Federation.',
'Due to the commissioning of new instruments at the IR-8 and PIK reactors, the number of neutron reflectometers in the Russian Federation should double.',
'As a result, there must arise a set of instruments aimed at solving various problems in the fields of physics, chemistry, and biology of layered systems in the interests of the scientific community and to train experts for further development and improvement of this technique.'
]
a_few_sentences_gtrans = ['NEUTRON REFLECTOMETRY IN RUSSIA: CURRENT STATUS AND PROSPECTS',
'The review describes the current state of affairs and prospects for development in the field of neutron reflectometry at existing and future neutron sources in the Russian Federation.',
'As a result of the commissioning of new instruments at the IR-8 and PIK reactors, the number of neutron reflectometers in Russia should double.',
'The result should be a set of tools aimed at solving a wide range of problems in the field of physics, chemistry, biology of layered systems in the interests of the scientific community, as well as training specialists for further development and improvement of this technique.']
pairs = list(zip(a_few_sentences_rus, a_few_sentences_eng))
</code>
#### Metrics for each sentence
<code>
MT = MachineTranslation(model, rus_tokenizer)
for pair in pairs:
print('Source:', pair[0])
print('Reference:', pair[1])
rus, eng = [sent.lower() for sent in pair]
MT.infer(rus, eng)
print('\n\n')
</code>
### Try using SentencePiece as tokenizer
<code>
#some OOP code here for sp tokenizer and nllb model
</code>
### Try using a paragraph as a single entry
<code>
class TextMachineTranslation(MachineTranslation):
def process_text(self, text: List[str]) -> str:
'''
main process for multi-sentence translation
'''
tokens = [self.tokenize(sent) for sent in text]
translated_tokens = [self.translate(token) for token in tokens]
translations = [self.get_decoded(toks)[0] for toks in translated_tokens]
result = ' '.join(translations)
return result
def infer(self, text: List[str], reference: List[str]) -> None:
''' TO BE CHANGED
Compare the whole text to the reference
'''
translation = self.process_text(text)
reference = ' '.join(reference)
self.generate_metrics(translation.lower(), reference.lower())
</code>
<code>
TMT = TextMachineTranslation(model, rus_tokenizer)
TMT.infer(a_few_sentences_rus, a_few_sentences_eng)
</code>
### Try it on a whole article
No ETL so far, no DB, just plain pd.read
<code>
filename = 'Krist_sample_data.xlsx'
sheet_name = 'Krist2202003Borisov'
df = pd.read_excel(filename, sheet_name=sheet_name).dropna(subset=['ru'])
</code>
<code>
orig_text = df['ru'].tolist()
ref_text = df['en'].tolist()
</code>
<code>
len(ref_text)
</code>
<code>
txt_model = TextMachineTranslation(model, rus_tokenizer)
#txt_model.infer(orig_text, ref_text)
</code>
<code>
%%time
text_translated = txt_model.process_text(orig_text)
</code>
<code>
filepath = 'translation.txt'
with open(filepath, 'w') as f:
f.write(text_translated)
files.download(filepath)
</code>
## try another model
<code>
%%capture
m_tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-ru-en")
m_model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-ru-en")
list_of_tokens = ['</s>', '<pad>'] #tokens to skip other than UNK
</code>
<code>
def marian_translate(model, tokenizer, sentence, maxlen=300):
    # Tokenize (the Marian tokenizer uses SentencePiece under the hood)
input_ids_ru = tokenizer.encode(sentence, return_tensors="pt")
# Translate
translated_ids_en = model.generate(input_ids_ru, max_length=maxlen, num_beams=4, early_stopping=True)
    # Decode translated tokens to text; skip_special_tokens removes </s> and <pad>, but [UNK]s should still be checked for
result = tokenizer.decode(translated_ids_en[0], skip_special_tokens=True)
#for tok in list_of_tokens:
# if tok in result:
# result = result.replace(tok, '')
return result
</code>
Test metrics
<code>
for num in range(len(a_few_sentences_rus)):
m_translated = marian_translate(m_model, m_tokenizer, a_few_sentences_rus[num])
print('SENTENCE:', m_translated)
print('REFERENCE:', a_few_sentences_eng[num])
test_metrics(a_few_sentences_eng[num].lower(), [m_translated.lower()], ['marian'])
</code>
As a result, the out-of-the-box Marian model performs slightly better than the NLLB model.
## Issues
1. Choose proper metrics
2. Evaluate metrics not for each sentence but for the whole text, i.e. a cumulative (corpus-level) metric or an average (see the sketch below).
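A minimal sketch for issue 2, using sacrebleu on a small list of hypothetical sentence pairs (the sentences here are placeholders, not data from the article above): a single corpus-level BLEU score is computed over all pairs at once instead of averaging per-sentence scores.
<code>
import sacrebleu

# Hypothetical hypotheses and references, aligned by index.
hypotheses = ['the quick brown fox jumps over the lazy dog',
              'a fly in the ointment']
references = [['the quick brown fox jumps over the lazy dog !',
               'a fly in the ointment']]   # one reference stream

corpus_score = sacrebleu.corpus_bleu(hypotheses, references)
print(corpus_score.score)
</code>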
|
{
"filename": "design_NLLB_model_1.ipynb",
"repository": "Dimildizio/system",
"query": "transformed_from_existing",
"size": 149499,
"sha": ""
}
|
# URTmetaanalysis_logistic_regression.ipynb
Repository: Gibbons-Lab/2023
# Case vs. Control Analysis
In this notebook we'll use logistic regression to examine differences in taxonomic composition between cases and controls, conducted on a per-study basis to account for covariates. Here, we hope to uncover URT microbiome-based associations that may be causative of, or protective against, respiratory diseases.
____
<code>
import pandas as pd
import os
import numpy as np
import statsmodels.formula.api as smf
import seaborn as sns
import scipy
import statsmodels as sm
import matplotlib.pyplot as plt
from utils import *
from matplotlib.font_manager import FontProperties
%matplotlib inline
</code>
## Collect Reads
First, load the merged table constructed earlier
<code>
merged_table = pd.read_csv('../data/merged_table.csv')
</code>
## Specify Color Encoding
Import the disease-specific color dictionary we've been using
<code>
color_dict = {'Asthma':'#a6cee3',
'COVID-19':'#1f78b4',
'Influenza':'#b2df8a',
'Pneumonia':'#33a02c',
'RSV':'#fb9a99',
'RTI':'#e31a1c',
'Resp. Allergies':'#fdbf6f',
'Rhinosinusitis':'#ff7f00',
'COPD':'#cab2d6',
'Tonsillitis':'#6a3d9a'}
</code>
## Per-Study Case vs. Control Logistic Regression
Using logistic regression, find associations between taxon abundance and case/control status. This is done on a per-study basis to remove biases from covariates.
<code>
# initialize data frame
study_specific = pd.DataFrame()
# iterate through studies
for study in merged_table['study'].unique():
res_temp = merged_table[merged_table['study']==study]
# iterate through taxa in study
for x in res_temp['full_taxonomy'].unique():
try:
df = res_temp[res_temp['full_taxonomy']==x]
df = df.copy()
# binarize case/control status
df['condition_bin'] = (df['condition'] == 'control').astype(int)
# skip if not enough samples
if df['condition_bin'].nunique()==1:
continue
# logistic regression
model = smf.logit('condition_bin ~ clr', data = df)
sol = model.fit(disp=0)
# calculate log fold change
log2 = np.log2(df[df['condition']=='case']['relative'].mean()/
df[df['condition']=='control']['relative'].mean())
# add result to dataframe
study_specific = pd.concat([study_specific, pd.DataFrame({
'taxon':[x],
'pvalue':[sol.pvalues['clr']],
'log2_foldchange':[log2],
'study':[study]})])
# account for exceptions
except sm.tools.sm_exceptions.PerfectSeparationError:
# print("Skipping group", x,"in", study, "due to perfect predictor error") ## uncomment for output
continue
except np.linalg.LinAlgError:
# print("Skipping group", x,"in", study, "due to singular matrix") ## uncomment for output
continue
</code>
## Format Results
Format the resulting dataframe
<code>
# fdr correction of pvalues
study_specific['q'] = sm.stats.multitest.fdrcorrection(study_specific['pvalue'])[1]
# shorten taxon id to just genus name
study_specific['genus'] = study_specific['taxon'].str.split('|').str[-1]
# determine enrichment direction (1 = enriched in cases, -1 = enriched in controls)
study_specific['enrichment'] = study_specific['log2_foldchange']>0
study_specific['enrichment'] = study_specific['enrichment'].map({True:1,False:-1})
# filter to significant results
study_specific.loc[study_specific['q']>0.05,'enrichment']=0
# create dataframe with pvalues and with enrichments
p_frame = study_specific.pivot(index = 'genus',columns = 'study',values = 'q')
hits = study_specific.pivot(index = 'genus',columns = 'study',values = 'enrichment')
# fill in zeroes for easy plotting
hits.fillna(0.0, inplace = True)
# remove rows with no significant enrichments
hits = hits.loc[(hits != 0).any(axis=1)]
hits
</code>
## Calculate Prevalence and Abundance
Here we calculate the prevalence and abundance of each taxon in the analysis.
<code>
# total number of samples
n = merged_table.sample_id.nunique()
# calculate prevalence and abundance for each genus
prevalence = merged_table[merged_table.reads > 0]['full_taxonomy'].value_counts() / n
abundance = merged_table[merged_table.reads > 0].groupby('full_taxonomy')['relative'].mean()
# shorten genus name for each
prevalence.index = prevalence.index.str.split('|').str[-1]
abundance.index = abundance.index.str.split('|').str[-1]
# map to dataframe
hits['prevalence'] = hits.index.map(prevalence)
hits['abundance'] = hits.index.map(abundance)
</code>
## Calculate Enrichment Heuristic
Here we'll calculate the between-study enrichment heuristic, defined as N(studies enriched in the same direction) - N(studies enriched in the opposite direction). For example, a genus enriched in cases in 5 studies and in controls in 1 study scores 5 - 1 = 4. If the absolute value of this score is at least 3, we include the taxon in the results.
<code>
# remove low abundance taxa
hits = hits[hits['abundance']>0.005]
# calculate heuristic
hits['overall'] = hits[hits.columns[0:-2]].sum(axis = 1)
# sort by abundance
hits.sort_values(by = 'abundance', inplace = True)
# assign signature for heuristic
hits.loc[hits['overall']>=3, 'signature'] = 1
hits.loc[hits['overall']<= -3, 'signature'] = -1
hits['signature'].fillna(0.0,inplace = True)
# drop calculation column
hits.drop(columns = 'overall',inplace = True)
# transpose for plotting
hits = hits.T
# format for plotting
hits['authors'] = hits.index.str.split(',').str[0]
hits['disease'] = hits.index.map(merged_table.set_index('study')['disease'].to_dict())
hits['fill'] = hits['disease'].map(color_dict)
hits.sort_values(by = 'disease', inplace = True)
</code>
## Plot associations
Now, using a heatmap, we plot results from the logistic regression. Overall hits are included as an additional subplot, as are prevalence and abundance.
<code>
fig, (ax1, ax2, ax3, ax4) = plt.subplots(nrows=4,
figsize=(18, 10),
gridspec_kw={'height_ratios': [30,1.5,3,3]})
sns.heatmap(hits.iloc[0:-3].drop(columns = ['fill','disease', 'authors']),
cmap=sns.diverging_palette(220,20,center='light',as_cmap=True),
cbar = False,
ax = ax1)
for i, color in enumerate(hits.iloc[0:-3]['fill']):
ax1.add_patch(plt.Rectangle(xy=(-0.02, i), width=0.02, height=1, color=color, lw=0,
transform=ax1.get_yaxis_transform(), clip_on=False))
sns.heatmap(hits[hits.index=='signature'].drop(columns = ['fill','disease', 'authors']),
cmap=sns.diverging_palette(220,20,center='light',as_cmap=True),center=0.00,
cbar = False,
ax = ax2)
sns.barplot(x=hits.iloc[0:-3].T.iloc[0:-3].index,
y=hits.T.iloc[0:-3]['abundance'],
ax=ax3,
color='gray')
sns.barplot(x=hits.iloc[0:-3].T.iloc[0:-3].index,
y=hits.T.iloc[0:-3]['prevalence'],
ax=ax4,
color='gray')
font_props = FontProperties().copy()
font_props.set_size(14)
ax1.set_xticks([])
ax1.tick_params(axis='y', which='major', pad=25, length=0)
ax1.set(xlabel=None)
ax1.set_yticklabels(ax1.get_ymajorticklabels(), fontproperties=font_props)
ax2.set_xticks([])
ax2.set_yticks([])
ax2.set(xlabel=None)
ax2.set(ylabel=None)
font_props.set_style("italic")
ax3.set_ylabel('',fontsize = 14)
ax3.set(xlabel=None)
ax3.set_xticklabels([])
font_props.set_style("italic")
ax4.set_ylabel('', fontsize =14, rotation = 0)
ax4.set(xlabel=None)
plt.xticks(rotation =80)
ax4.set_xticklabels(ax4.get_xmajorticklabels(), fontproperties=font_props)
plt.savefig('../visualizations/logistic_regression.svg', dpi=300, bbox_inches="tight", format = 'svg')
plt.show()
</code>
## Calculate Effect Sizes
Calculate effect sizes and p-values for each association
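The `effectsize` helper used below is imported from the project's `utils` module and is not shown in this notebook. As a point of reference, here is a minimal Cohen's d sketch with a pooled standard deviation, under the assumption that this is what `effectsize` computes.
<code>
import numpy as np

def cohens_d(x, y):
    """Cohen's d with pooled standard deviation (minimal sketch)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * x.std(ddof=1) ** 2 +
                         (ny - 1) * y.std(ddof=1) ** 2) / (nx + ny - 2))
    return (x.mean() - y.mean()) / pooled_sd
</code>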
<code>
# isolate columns with abundances
taxa = hits.columns[0:-3].unique()
# isolate rows with studies
studies = hits.iloc[0:-3].index.unique()
# initialize dataframe
effects = pd.DataFrame(index = taxa, columns = studies)
# iterate through taxa and studies
for taxon in taxa:
for study in studies:
# calculate effect size
cohens_d = effectsize(
merged_table[(merged_table['genus']==taxon)&
(merged_table['study']==study)&
(merged_table['condition'] =='control')]['clr'],
merged_table[(merged_table['genus']==taxon)&
(merged_table['study']==study)&
(merged_table['condition'] =='case')]['clr'])
effects.at[taxon, study] = cohens_d
# create dataframes
effects = effects[effects.index.isin(hits.columns)].T
p_frame = p_frame[p_frame.index.isin(hits.columns)].T
</code>
|
{
"filename": "URTmetaanalysis_logistic_regression.ipynb",
"repository": "Gibbons-Lab/2023",
"query": "transformed_from_existing",
"size": 15286,
"sha": ""
}
|
# schema.ipynb
Repository: EATRIS/motbx
# Schema for MOTBX resources
This notebook defines a data schema for MOTBX resources. The schema is first validated against the metaschema JSON schema draft 2020-12. It is then used to validate MOTBX resources. While MOTBX resources are stored as YAML files and the schema is stored in JSON, both are imported to Python as dictionaries using the *yaml* and *json* libraries, respectively. The library *jsonschema* is used to validate resources.
<code>
import yaml
import json
import jsonschema
from pathlib import Path
import pprint
pp = pprint.PrettyPrinter(indent=2, width=80, compact=True)
CWD = Path.cwd()
if CWD.name != "notebooks":
print("Make sure to run this notebook from the 'notebooks' directory.")
MOTBX_DIR = CWD.parent
SCHEMA_JSON = MOTBX_DIR.joinpath("schema/motbxschema.json")
TEST_RESOURCE_YAML = MOTBX_DIR.joinpath("tests/resources_pass/test1.yaml")
</code>
<code>
schema = {
# "$id": a URI
"$schema": "https://json-schema.org/draft/2020-12/schema",
"title": "MOTBX resource",
"description": "Schema for resources of the EATRIS Multi-omics Toolbox (MOTBX)",
"type": "object",
"properties": {
# "resource": {
# "type": "object",
"resourceID": {"type": "string"},
# "properties": {
"resourceCategory": {
"type": "string",
"enum": [
# allowed values for field "resourceCategory"
"Genomics",
"Epigenomics",
"Transcriptomics",
"Proteomics",
"Metabolomics",
#"Internal Quality Control",
#"External Quality Assessment",
"Quality Control and Assessment",
#"Omics data management and analysis"
"Data Management and Stewardship",
"Data Analysis"
]
},
"resourceSubcategory": {
"type": "string",
# allowed values are defined below under "anyOf" based on value of
# "resourceCategory"
},
"resourceTitle": {
"type": "string",
"minLength": 15,
"maxLength": 160
},
"resourceDescription": {
"type": "string",
"minLength": 50,
"maxLength": 2500
},
"resourceUrl": {
"type": "string",
"format": "uri",
"pattern": "^https://|.pdf$" #"^https?://"
},
"resourceTags": {
"type": "array",
"items": {
"type": "string"},
"minItems": 1
},
"resourceKeywords": {
"type": "array",
"items": {
"type": "string"}
},
},
"anyOf": [
{"properties": {
"resourceCategory": {"enum": [
"Genomics",
"Epigenomics",
"Transcriptomics",
"Proteomics",
"Metabolomics"]},
"resourceSubcategory": {"enum": [
"Guidelines and best practices",
"Laboratory protocols and methods",
"Translational research use case"]}
}},
{"properties": {
"resourceCategory": {"enum": [
"Quality Control and Assessment",]},
"resourceSubcategory": {"enum": [
"Guidelines and best practices",
"Reference materials for quality control",
"Proficiency testing and external quality assessment",
"Quality certification"]}
}},
{"properties": {
"resourceCategory": {"enum": [
"Data Management and Stewardship"]},
"resourceSubcategory": {"enum": [
"Guidelines and best practices",
"Data and metadata standards",
"Databases and catalogues",
"Translational research use cases"]}
}},
{"properties": {
"resourceCategory": {"enum": [
"Data Analysis"]},
"resourceSubcategory": {"enum": [
"Guidelines and best practices",
"Software applications and workflows",
"Computing platforms",
"Translational research use cases"]}
}}
],
"required": [
"resourceID",
"resourceCategory",
"resourceSubcategory",
"resourceTitle",
"resourceDescription",
"resourceUrl",
"resourceTags"], # "resourceKeywords" are optional
#"additionalProperties": False,
#"examples":
}
# example resource
resource = {
"resourceID": "1",
# "resource": {
"resourceCategory": "Quality Control and Assessment",
"resourceSubcategory": "Guidelines and best practices",
"resourceTitle": "ISO Guide 80:2014: Guidance for in-house preparation of quality control materials",
"resourceDescription": "ISO Guide 80:2014 guidance for the in-house preparation of quality control materials (QCMs). ISO Guide 80 outlines the characteristics and preparation processes of reference materials for quality control. It applies to stable materials used locally and those transported without significant property changes. Laboratory staff preparing in-house quality control materials should follow ISO Guides 34 and 35 for transportation-based supply chains. The preparation of quality control materials requires assessments for homogeneity, stability, and limited characterization. It aims to demonstrate statistical control in a measurement system but does not provide usage guidance. The guide offers general information on preparation and includes case studies for different sectors. Users should have material knowledge and be aware of matrix effects and contamination risks.",
"resourceUrl": "https://www.iso.org/standard/44313.html",
"resourceTags": ["ISO standard", "guidelines", "quality control material", "in-house", "genomics"],
# },
# "resourceMetadata": {"last_modified": str(datetime.date(2023, 8, 4))}
}
# validate schema against metaschema
jsonschema.Draft202012Validator.check_schema(schema)
# validate example resource against schema
jsonschema.validate(resource, schema, format_checker = jsonschema.FormatChecker())
</code>
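As a quick illustration (not part of the original notebook), a resource that violates the schema, here with a `resourceTitle` shorter than the 15-character minimum, raises a `ValidationError` that can be caught and inspected.
<code>
# Illustrative only: validate a deliberately invalid resource
bad_resource = dict(resource, resourceTitle="Too short")  # violates minLength 15
try:
    jsonschema.validate(bad_resource, schema,
                        format_checker=jsonschema.FormatChecker())
except jsonschema.exceptions.ValidationError as err:
    print("Validation failed:", err.message)
</code>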
<code>
with open(TEST_RESOURCE_YAML, "w") as fp:
yaml.dump(resource, fp)
</code>
<code>
# print schema formatted as YAML
print(yaml.dump(schema))
</code>
<code>
# print schema formatted as JSON
print(json.dumps(schema, indent = 2))
</code>
<code>
# save schema
with open(SCHEMA_JSON, "w") as fp:
json.dump(schema, fp, indent = 2)
</code>
|
{
"filename": "schema.ipynb",
"repository": "EATRIS/motbx",
"query": "transformed_from_existing",
"size": 20330,
"sha": ""
}
|
# lab4_1.ipynb
Repository: CSCI-360-Spring2024/Lab4
# Lab 4
- Name:
- USC Id:
### 1. Gene expression cancer RNA-Seq
Package Imports
<code>
import pandas as pd
import numpy as np
from sklearn.preprocessing import OrdinalEncoder
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score
from sklearn.naive_bayes import BernoulliNB
from sklearn.naive_bayes import CategoricalNB
# ===== Optional : import other libraries here ===== #
# ===== End of Optional : import other libraries here ===== #
</code>
### Problem 1 (a) Load the files data.csv and labels.csv
- They contain the Data Set from: https://archive.ics.uci.edu/dataset/401/gene+expression+cancer+rna+seq
- data.csv contains the genetic features for each tumor and labels.csv contains the label of each tumor. After loading the datasets, combine the data sets into a single dataframe.
<code>
data = None
# ===== Read the data.csv file ===== #
# ===== End of reading data.csv file ===== #
</code>
<code>
data.head()
</code>
<code>
labels = None
# ===== Read the labels.csv file ===== #
# ===== End of reading labels.csv file ===== #
</code>
<code>
labels
</code>
<code>
print('Class label counts:')
labels['Class'].value_counts()
</code>
<code>
# Test 1 (a)
assert(data.shape == (801, 20531))
assert(labels.shape == (801,1))
print('Test 1(a) Passed.')
</code>
### Problem 1 (b): Exploratory data analysis
- Select the first 640 instances as the training set and the rest of the data as the test set.
- Encode the classes as follows: BRCA = 0, KIRC = 1, COAD = 2, LUAD = 3, and PRAD = 4. You can use Ordinal Encoder (see the note after this list).
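Note (illustrative toy example, not the lab solution): by default `OrdinalEncoder` assigns codes in alphabetical order, which would give COAD = 1 and KIRC = 2; to obtain the mapping required above, pass the class order explicitly via the `categories` parameter.
<code>
import numpy as np
from sklearn.preprocessing import OrdinalEncoder

toy_labels = np.array([['BRCA'], ['KIRC'], ['COAD'], ['LUAD'], ['PRAD']])
enc = OrdinalEncoder(categories=[['BRCA', 'KIRC', 'COAD', 'LUAD', 'PRAD']])
print(enc.fit_transform(toy_labels).ravel())   # [0. 1. 2. 3. 4.]
</code>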
<code>
training_data, training_class = None, None
test_data, test_class = None, None
# ===== split data and labels into test and train ===== #
# ===== End of split data and labels into test and train ===== #
# ===== Use Ordinal Encoder to encode the classes ===== #
# ===== End of Use Ordinal Encoder to encode the classes ===== #
</code>
<code>
test_data
</code>
<code>
# Test 1 (b)
assert(training_data.shape[0] == 640 and training_data.shape[1] == 20531)
assert(test_data.shape[0] == 161 and test_data.shape[1] == 20531)
assert(training_class.shape[0] == 640)
assert(test_class.shape[0] == 161)
print(f"Test 1(b) Passed")
</code>
<code>
training_class
</code>
<code>
test_class
</code>
### Problem 1 (c) Classification using Gaussian Naive Bayes
- Use sklearn’s Gaussian Naive Bayes method to build a classifier based on training data. Report the training misclassification error rate (the percentage of training data that are misclassified)
- Use sklearn’s Gaussian Naive Bayes method to classify test data, using the model you developed in 1(c). Report the test misclassification error rate (the percentage of test data that are misclassified).
<code>
# ===== Train Gaussian NB model ===== #
# ===== End of Train Gaussian NB model ===== #
training_misclassification_rate = None
# ===== Use the model to predict labels for the training data ===== #
# ===== End of Use the model to predict labels for the training data ===== #
print(f">>> Gaussian NB")
print(f"Training Misclassification Rate: {training_misclassification_rate}")
</code>
<code>
test_misclassification_rate = None
# ===== Use the model to predict labels for the test data ===== #
# ===== End of Use the model to predict labels for the test data ===== #
print(f">>> Gaussian NB")
print(f"Test Misclassification Rate: {test_misclassification_rate}")
</code>
### Problem 1 (d): Classification using Bernoulli Naive Bayes
- Calculate the median of each of the gene features in the training data. Binarize the features: any feature value greater than or equal to the median of that feature is converted to 1, and any value below the median is converted to 0 (a toy sketch follows this list). (5 pts)
- Use sklearn’s Bernoulli Naïve Bayes method with Laplace smoothing to build a classifier based on the binarized training data. Report the training misclassification error rate (the percentage of training data that are misclassified). (20 pts)
- Use sklearn’s Bernoulli Naïve Bayes method to classify the test data, using the model you developed in 1(d)ii. Report the test misclassification error rate (the percentage of test data that are misclassified). (20 pts)
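A toy sketch of the binarization rule above (hypothetical column names and values, not the lab data): compute per-feature medians on the training frame, then reuse those training medians to binarize both frames.
<code>
import pandas as pd

toy_train = pd.DataFrame({'gene_a': [1.0, 2.0, 3.0, 4.0], 'gene_b': [0.0, 5.0, 5.0, 9.0]})
toy_test = pd.DataFrame({'gene_a': [2.5, 0.5], 'gene_b': [5.0, 1.0]})

medians = toy_train.median()                          # per-feature medians from the training data
toy_train_bin = (toy_train >= medians).astype(int)    # >= median -> 1, < median -> 0
toy_test_bin = (toy_test >= medians).astype(int)      # reuse the training medians
print(toy_train_bin)
print(toy_test_bin)
</code>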
<code>
binarized_data = None
binarized_training_data = None
binarized_test_data = None
# ===== Calculate the medians for each feature in the trainind data and binarize the values for each column ===== #
# ===== End of Calculate the medians for each feature and binarize the values for each column ===== #
</code>
<code>
binarized_training_data.head()
</code>
<code>
binarized_test_data
</code>
<code>
# Test
assert (binarized_training_data.shape == (640,20531))
assert (binarized_test_data.shape == (161,20531))
assert (len(binarized_training_data['gene_1'].value_counts()) == 2)
print('Test Passed')
</code>
<code>
# ===== Train Bernoulli NB model ===== #
# ===== End of Train Bernoulli NB model ===== #
training_misclassification_rate = None
# ===== Use the model to predict labels for the training data ===== #
# ===== End of Use the model to predict labels for the training data ===== #
print(f">>> Bernoulli NB")
print(f"Training Misclassification Rate: {training_misclassification_rate}")
</code>
<code>
test_misclassification_rate = None
# ===== Use the model to predict labels for the test data ===== #
# ===== End of Use the model to predict labels for the test data ===== #
print(f">>> Bernoulli NB")
print(f"Test Misclassification Rate: {test_misclassification_rate}")
</code>
### 2. Extra Credit: Categorical Naive Bayes
Create 10 equally spaced bins between the maximum and minimum of each feature in the training set and convert the training features to categorical values using those bins. Convert the test data into categorical features using the same bins calculated from the training data.
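A toy sketch of the binning step (hypothetical values, not the lab data): build 11 equally spaced edges from the training range with `np.linspace`, then discretize both training and test values with `pd.cut` using those same edges.
<code>
import numpy as np
import pandas as pd

train_col = pd.Series([0.0, 1.0, 2.5, 7.5, 10.0])
test_col = pd.Series([0.2, 9.9, 5.0])

edges = np.linspace(train_col.min(), train_col.max(), 11)   # 10 bins -> 11 edges
train_binned = pd.cut(train_col, bins=edges, labels=False, include_lowest=True)
test_binned = pd.cut(test_col, bins=edges, labels=False, include_lowest=True)
print(train_binned.tolist(), test_binned.tolist())
</code>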
<code>
binned_training_data = pd.DataFrame()
binned_test_data = pd.DataFrame()
# Hint: Use np.linpsace to create bin boundaries and pd.cut to segregate data into the bins
# ===== For each feature create 10 bins using the training data and convert test and train data into categorical values ===== #
# ===== End of For each feature create 10 bins using the training data and convert test and train data into categorical values ===== #
</code>
<code>
# ===== Train Categorical NB model ===== #
# ===== End of Train Categorical NB model ===== #
</code>
<code>
training_misclassification_rate = None
# ===== Use the model to predict labels for the training data ===== #
# ===== End of Use the model to predict labels for the training data ===== #
print(f">>> Categorical NB")
print(f"Training Misclassification Rate: {training_misclassification_rate}")
</code>
<code>
test_misclassification_rate = None
# ===== Use the model to predict labels for the test data ===== #
# ===== End of Use the model to predict labels for the test data ===== #
print(f">>> Categorical NB")
print(f"Test Misclassification Rate: {test_misclassification_rate}")
</code>
|
{
"filename": "lab4_1.ipynb",
"repository": "CSCI-360-Spring2024/Lab4",
"query": "transformed_from_existing",
"size": 14937,
"sha": ""
}
|
# L02_1.ipynb
Repository: let-unimi/handouts
# Data structures and algorithms
## Trees
The most common representation that will be used in this course for $n$-ary trees is the *lol* (list of lists).
<code>
# [root]
# [root subtrees…]
tree = [1, [11, [111]], [12, [121], [122]], [13]]
</code>
Access the root and the children with [iterable unpacking](https://docs.python.org/3/reference/expressions.html?highlight=iterable+unpacking#expression-lists)…
<code>
root, *children = tree
children
</code>
<code>
# use liblet to obtain a graphical representation
from liblet import Tree, side_by_side
t = Tree.from_lol(tree)
t
</code>
<code>
# the same unpacking works
root, *children = t
root
</code>
<code>
side_by_side(child for child in children)
</code>
### Traversals
* preorder,
* postorder,
* level order.
<code>
def preorder(tree, visitor):
root, *children = tree
visitor(root)
for child in children: preorder(child, visitor)
t
</code>
<code>
preorder(tree, print)
</code>
<code>
def postorder(tree, visitor):
root, *children = tree
for child in children: postorder(child, visitor)
visitor(root)
t
</code>
<code>
postorder(tree, print)
</code>
<code>
from liblet import Queue
def levelorder(tree, visitor):
Q = Queue()
Q.enqueue(tree)
while Q:
tree = Q.dequeue()
root, *children = tree
visitor(root)
for child in children: Q.enqueue(child)
t
</code>
<code>
levelorder(tree, print)
</code>
### Trees with attributes
So far the trees had integers as node values; let's build a tree whose values are `dict`s (keeping the numeric value under the key `val`).
<code>
def add_attr(tree):
root, *children = tree
return [{'val': root}] + [add_attr(child) for child in children]
</code>
<code>
tree = [1, [11, [111]], [1200, [121], [122]], [13]]
add_attr(tree)
</code>
<code>
Tree.from_lol(add_attr(tree))
</code>
#### Inherited attributes and preorder
As we will see later on, inherited attributes are attributes that the nodes of the subtrees inherit from their parent; for example the *depth*.
To compute them one can use a modified preorder traversal in which the *visitor* is passed not only the node but also the value of the inherited attribute.
<code>
def preorder_with_value(tree, visitor, value = None):
root, *children = tree
visitor(root, value)
for child in children: preorder_with_value(child, visitor, root['depth'])
</code>
<code>
# visitor that adds the depth attribute (equal to 1 + the inherited value; the None case applies to the root)
def add_depth(root, value):
root['depth'] = value + 1 if value is not None else 0
</code>
<code>
attr_tree = add_attr(tree)
# the root will receive None because it is the default value of value
preorder_with_value(attr_tree, add_depth)
Tree.from_lol(attr_tree)
</code>
#### Synthesized attributes and postorder
Synthesized attributes are attributes that the root node of a tree derives from the values of the attributes in its subtrees; for example, the *maximum* value.
To compute them one can use a modified postorder traversal in which the *visitor* is passed not only the node but also the values returned by the visits of the subtrees.
<code>
def postorder_with_return(tree, visitor):
root, *children = tree
values = [postorder_with_return(child, visitor) for child in children] # will be the empty list if there are no children
return visitor(root, values)
</code>
<code>
# visitor that adds the max attribute (equal to the maximum of the node value and the values synthesized by the children)
def add_max(root, values):
root['max'] = max([root['val']] + values)
return root['max']
</code>
<code>
attr_tree = add_attr(tree)
postorder_with_return(attr_tree, add_max)
Tree.from_lol(attr_tree)
</code>
## Graphs
Two representations are common for graphs: by *edges* (represented by a `tuple` of `tuple`s) and through the *adjacency* relation (represented by a `dict` of `set`s).
<code>
arcs = (
(1, 2),
(1, 4),
(2, 3),
(3, 2),
(3, 4),
(3, 5)
)
</code>
<code>
from liblet import Graph
g = Graph(arcs)
g
</code>
<code>
# from the edges to the adjacency map
# for every node n (whether s or t), adjacency[n] = set()
adjacency = dict()
for s, t in arcs:
adjacency[s] = set()
adjacency[t] = set()
# add the outlinks
for s, t in arcs: adjacency[s] |= {t}
adjacency
</code>
<code>
# and vice versa
for s, ts in adjacency.items():
for t in ts: print(s, t)
</code>
### Traversals
* depth-first,
* breadth-first.
<code>
def depthfirst(adjacency, start, visit):
def walk(src):
visit(src)
seen.add(src)
for dst in adjacency[src]:
if dst not in seen:
walk(dst)
seen = set()
walk(start)
g
</code>
<code>
depthfirst(adjacency, 1, print)
</code>
<code>
from liblet import Queue
def breadthfirst(adjacency, start, visit):
Q = Queue()
seen = set()
Q.enqueue(start)
while Q:
src = Q.dequeue()
visit(src)
seen.add(src)
for dst in adjacency[src]:
if dst not in seen:
Q.enqueue(dst)
</code>
<code>
g
</code>
<code>
breadthfirst(adjacency, 1, print)
</code>
## Backtracking
[Backtracking](https://en.wikipedia.org/wiki/Backtracking) is a scheme for recursive algorithms for problems whose solution can be built incrementally starting from a "candidate" solution. The general scheme is
```python
def backtrack(candidate):
if reject(candidate): return
if accept(candidate): output(candidate)
s = first(candidate)
while s:
backtrack(s)
s = next(candidate)
```
The functions `reject` and `accept` have the obvious meaning of indicating, respectively, whether a candidate solution is incorrect (and cannot be further amended), or whether it constitutes a (complete) solution. The functions `first` and `next` build, respectively, the first and the subsequent candidates starting from the current candidate.
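As a toy instantiation of the schema (not part of the original handout), the sketch below enumerates all binary strings of length 3: the length check plays the role of `reject`/`accept`, and the loop over `'01'` plays the role of `first`/`next`.
<code>
def binary_strings(candidate, n=3):
    if len(candidate) > n: return        # reject: candidate cannot be extended into a solution
    if len(candidate) == n:              # accept: a complete solution
        print(candidate)
        return
    for bit in '01':                     # first/next: generate the next candidates
        binary_strings(candidate + bit, n)

binary_strings('')
</code>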
### Segmenting a word
<code>
from urllib.request import urlopen
# WORDS are the words with at least 2 characters (3 because the newline is also counted)
with urlopen('https://raw.githubusercontent.com/napolux/paroleitaliane/master/paroleitaliane/60000_parole_italiane.txt') as url:
WORDS = {word.decode().strip().upper() for word in url if len(word) >= 3}
print(len(WORDS))
</code>
<code>
def segmenta(segmenti, resto):
if segmenti and not segmenti[-1] in WORDS: return
if not resto:
print(segmenti)
return
for i in range(1, 1 + len(resto)):
segmenta(segmenti + [resto[:i]], resto[i:])
</code>
<code>
segmenta([], 'ILCORRIEREDELLASERAEDIZIONENOTTURNA')
</code>
|
{
"filename": "L02_1.ipynb",
"repository": "let-unimi/handouts",
"query": "transformed_from_existing",
"size": 98140,
"sha": ""
}
|
# EnrichrConsensus.ipynb
Repository: MaayanLab/appyter-catalog
<code>
#%%appyter init
from appyter import magic
magic.init(lambda _=globals: _())
</code>
<code>
%%appyter hide_code
{% do SectionField(
name='PRIMARY',
title='Enrichr Consensus Terms',
subtitle='This appyter returns consensus Enrichr terms using a set of gene sets',
img='enrichr.png'
) %}
</code>
<code>
%%appyter code_exec
{% set title = StringField(
name='title',
label='Notebook name',
default='Enrichr Consensus Terms',
section="PRIMARY",
) %}
title = {{ title }}
</code>
<code>
import time
import requests
import pandas as pd
import json
import seaborn as sns
import matplotlib.pyplot as plt
from IPython.display import display, IFrame, Markdown
import math
import scipy.stats as st
import fastcluster
</code>
<code>
display(Markdown("# %s"%(title)), display_id="title")
</code>
<code>
clustergrammer_url = 'https://amp.pharm.mssm.edu/clustergrammer/matrix_upload/'
ENRICHR_URL = 'https://maayanlab.cloud/Enrichr'
# libraries = ["ChEA_2016", "GO_Biological_Process_2018" ,"GWAS_Catalog_2019" , "KEGG_2019_Human"]
table = 1
figure = 1
</code>
<code>
def clustergrammer(df, name, clustergrammer_url, display_id, fignum=1, label="Clustergrammer"):
clustergram_df = df.rename(columns={i:"Signature: %s"%i for i in df.columns}, index={i:"Enriched Term: %s"%i for i in df.index})
clustergram_df.to_csv(name, sep="\t")
response = ''
for i in range(5):
try:
res = requests.post(clustergrammer_url, files={'file': open(name, 'rb')})
if not res.ok:
response = res.text
time.sleep(1)
else:
clustergrammer_url = res.text.replace("http:","https:")
break
except Exception as e:
response = e
time.sleep(2)
else:
if type(response) == Exception:
raise response
else:
raise Exception(response)
display(IFrame(clustergrammer_url, width="1000", height="1000"), display_id="clustergram_%s"%display_id)
display(Markdown("**Figure %d** %s [Go to url](%s)"%(fignum, label, clustergrammer_url)), display_id="clustergram_label_%s"%display_id )
</code>
<code>
cmap = sns.cubehelix_palette(50, hue=0.05, rot=0, light=1, dark=0)
def heatmap(df, filename, display_id, width=15, height=15):
# fig = plt.figure(figsize=(width,height))
cg = sns.clustermap(df, cmap=cmap, figsize=(width, height), cbar_pos=(0.02, 0.65, 0.05, 0.18),)
cg.ax_row_dendrogram.set_visible(False)
cg.ax_col_dendrogram.set_visible(False)
display(cg, display_id="heatmap_%s"%display_id)
plt.show()
cg.savefig(filename)
</code>
<code>
def get_dataframe(enrichment, lib, table, display_id):
term_df = pd.DataFrame(index=enrichment.keys())
for k,v in enrichment.items():
sigs = v["libraries"][lib]
for sig in sigs:
term = sig[1]
if term not in term_df.columns:
term_df[term] = 0.0
p = sig[2]
term_df.at[k, term] = -math.log(p)
term_df = term_df.transpose()
term_df.to_csv("%s_enrichment.tsv"%lib, sep="\t")
display(term_df.head(10), display_id="dataframe_%s"%display_id)
display(Markdown("**Table %d** The table below shows the result of the enrichment analysis of %d gene sets \
with the %s library in Enrichr. Each score is computed by getting the negative logarithm of the p-value \
($-\ln{pval}$). [Download complete table](%s_enrichment.tsv)"%(table, num_sigs, lib.replace("_"," "), lib)),
display_id="dataframe_caption_%s"%display_id
)
table+=1
return term_df, table
def get_consensus(df, lib, top_results, table, display_id):
consensus = df.sum(1).sort_values(ascending=False)[0:top_results].to_frame(name="scores")
# Save to tsv
consensus.to_csv("%s_consensus.tsv"%lib, sep="\t")
display(consensus.head(10), display_id="consensus_%s"%display_id)
display(Markdown("**Table %d** %s Consensus terms. \
Consensus scores are computed by taking the sum of scores in Table %d. \
[Download top %d terms](%s_consensus.tsv)"%(table, lib.replace("_"," "), (table-1), top_results, lib)),
display_id="consensus_caption_%s"%display_id
)
table+=1
return consensus, table
def stackedBarPlot(df, filename, display_id, width = 15, height = 15):
df['mean'] = df.mean(axis=1)
df = df.sort_values(by = 'mean')[0:top_results]\
.drop(['mean'], axis = 1)
if df.shape[0]==0:
return False
plot = df.plot.barh(stacked = True, figsize = (width,height), fontsize = 20)
plt.legend(bbox_to_anchor=(1.7, 0), loc='lower right', prop={'size': 16})
plt.xlabel('-log(p)',labelpad = 20, fontsize = 'xx-large')
display(plot, display_id="stacked_%s"%display_id)
plt.savefig(filename, format = 'svg', bbox_inches='tight')
plt.show()
return True
</code>
<code>
# Enrichr Functions
def addList(genes, description):
payload = {
'list': (None, '\n'.join(genes)),
'description': (None, description)
}
res = requests.post(ENRICHR_URL + "/addList", files=payload)
time.sleep(1)
if not res.ok:
raise Exception('Error analyzing gene list')
data = res.json()
return data["userListId"]
def enrich(userListId, library, alpha):
res = requests.get(
ENRICHR_URL +"/enrich", params={"userListId": userListId, "backgroundType": library}
)
time.sleep(1)
if not res.ok:
raise Exception('Error fetching enrichment results')
data = res.json()
return [i for i in data[library] if i[2] < alpha]
</code>
## Get Input
<code>
%%appyter code_exec
{% set input_gene_set = FileField(
name='input_gene_set',
label='Gene Set',
default='input.gmt',
section="PRIMARY",
examples={
'input.gmt': 'https://appyters.maayanlab.cloud/storage/EnrichrConsensus/sample_input/10input.gmt'
}
) %}
input_gene_set = {{ input_gene_set }}
</code>
<code>
%%appyter code_exec
transcription_libraries = {{ MultiChoiceField(name='transcription_libraries',
description='Select the Enrichr libraries you would like in your figure.',
label='Transcription',
default=[],
section = 'PRIMARY',
choices=[
'ARCHS4_TFs_Coexp',
'ChEA_2016',
'ENCODE_and_ChEA_Consensus_TFs_from_ChIP-X',
'ENCODE_Histone_Modifications_2015',
'ENCODE_TF_ChIP-seq_2015',
'Epigenomics_Roadmap_HM_ChIP-seq',
'Enrichr_Submissions_TF-Gene_Coocurrence',
'Genome_Browser_PWMs',
'lncHUB_lncRNA_Co-Expression',
'miRTarBase_2017',
'TargetScan_microRNA_2017',
'TF-LOF_Expression_from_GEO',
'TF_Perturbations_Followed_by_Expression',
'Transcription_Factor_PPIs',
'TRANSFAC_and_JASPAR_PWMs',
'TRRUST_Transcription_Factors_2019']) }}
pathways_libraries = {{ MultiChoiceField(name='pathways_libraries',
description='Select the Enrichr libraries you would like in your figure.',
label='Pathways',
default=[],
section = 'PRIMARY',
choices=[
'ARCHS4_Kinases_Coexp',
'BioCarta_2016',
'BioPlanet_2019',
'BioPlex_2017',
'CORUM',
'Elsevier_Pathway_Collection',
'HMS_LINCS_KinomeScan',
'HumanCyc_2016',
'huMAP',
'KEA_2015',
'KEGG_2019_Human',
'KEGG_2019_Mouse',
'Kinase_Perturbations_from_GEO_down',
'Kinase_Perturbations_from_GEO_up',
'L1000_Kinase_and_GPCR_Perturbations_down',
'L1000_Kinase_and_GPCR_Perturbations_up',
'NCI-Nature_2016',
'NURSA_Human_Endogenous_Complexome',
'Panther_2016',
'Phosphatase_Substrates_from_DEPOD',
'PPI_Hub_Proteins',
'Reactome_2016',
'SILAC_Phosphoproteomics',
'SubCell_BarCode',
'Virus-Host_PPI_P-HIPSTer_2020',
'WikiPathways_2019_Human',
'WikiPathways_2019_Mouse']) }}
ontologies_libraries = {{ MultiChoiceField(name='ontologies_libraries',
description='Select the Enrichr libraries you would like in your figure.',
label='Ontologies',
default=[],
section = 'PRIMARY',
choices=[
'GO_Biological_Process_2018',
'GO_Cellular_Component_2018',
'GO_Molecular_Function_2018',
'Human_Phenotype_Ontology',
'Jensen_COMPARTMENTS',
'Jensen_DISEASES',
'Jensen_TISSUES',
'MGI_Mammalian_Phenotype_Level_4_2019']) }}
diseases_drugs_libraries = {{ MultiChoiceField(name='diseases_drugs_libraries',
description='Select the Enrichr libraries you would like in your figure.',
label='Diseases/Drugs',
default=[],
section = 'PRIMARY',
choices=[
'Achilles_fitness_decrease',
'Achilles_fitness_increase',
'ARCHS4_IDG_Coexp',
'ClinVar_2019',
'dbGaP',
'DepMap_WG_CRISPR_Screens_Broad_CellLines_2019',
'DepMap_WG_CRISPR_Screens_Sanger_CellLines_2019',
'DisGeNET',
'DrugMatrix',
'DSigDB',
'GeneSigDB',
'GWAS_Catalog_2019',
'LINCS_L1000_Chem_Pert_down',
'LINCS_L1000_Chem_Pert_up',
'LINCS_L1000_Ligand_Perturbations_down',
'LINCS_L1000_Ligand_Perturbations_up',
'MSigDB_Computational',
'MSigDB_Oncogenic_Signatures',
'Old_CMAP_down',
'Old_CMAP_up',
'OMIM_Disease',
'OMIM_Expanded',
'PheWeb_2019',
'Rare_Diseases_AutoRIF_ARCHS4_Predictions',
'Rare_Diseases_AutoRIF_Gene_Lists',
'Rare_Diseases_GeneRIF_ARCHS4_Predictions',
'Rare_Diseases_GeneRIF_Gene_Lists',
'UK_Biobank_GWAS_v1',
'Virus_Perturbations_from_GEO_down',
'Virus_Perturbations_from_GEO_up',
'VirusMINT'])
}}
</code>
<code>
libraries = transcription_libraries + pathways_libraries + ontologies_libraries + diseases_drugs_libraries
</code>
<code>
enrichment = {}
with open(input_gene_set) as o:
for line in o:
unpacked = line.strip().split("\t")
if len(unpacked) == 1:
raise ValueError("Line '%s' is either empty or not formatted properly. Please consult README for more information"%line)
sigid = unpacked[0]
geneset = [i for i in unpacked[1:] if len(i) > 0]
enrichment[sigid] = {
"genes": [i.split(",")[0] for i in geneset]
}
</code>
<code>
num_sigs = len(enrichment)
input_sigs = pd.DataFrame.from_dict(enrichment, orient="index")
display(input_sigs.head(10))
display(Markdown("**Table %d** Input Signatures"%(table)), display_id="input_sigs")
table+=1
</code>
## User defined parameters
<code>
%%appyter code_exec
alpha = {{FloatField(name='alpha', label='p-value cutoff', default=0.05, section='PRIMARY')}}
top_results = {{IntField(name='min_count', label='Top results', description="Number of top results to keep", default=25, section='PRIMARY')}}
width = {{FloatField(name='width', label='image width', default=15, section='PRIMARY')}}
height = {{FloatField(name='height', label='image height', default=15, section='PRIMARY')}}
</code>
## Enrichment
<code>
failed_userlist = []
failed_enrich = {}
for description, values in enrichment.items():
print("Querying %s"%(description), end="\r", flush=True)
genes = values["genes"]
for tries in range(5):
try:
userListId = addList(genes, description)
enrichment[description]["userListId"] = userListId
break
except Exception as e:
print(e)
time.sleep(0.5)
else:
failed_userlist.append(description)
continue
time.sleep(0.1)
enrichment[description]["libraries"] = {}
for library in libraries:
for tries in range(5):
try:
userlistId = enrichment[description]["userListId"]
results = enrich(userListId, library, alpha)
enrichment[description]["libraries"][library] = results
break
except Exception as e:
print(e)
time.sleep(0.5)
else:
if description not in failed_enrich:
failed_enrich[description] = []
failed_enrich[description].append(library)
continue
time.sleep(0.1)
if len(failed_userlist):
print("Failed to add %d list"%len(failed_userlist))
if len(failed_enrich):
print("Failed enrichment for %d gene sets"%len(failed_enrich))
</code>
<code>
for lib in libraries:
display(Markdown("## %s"%lib.replace("_"," ")), display_id="title_%s"%lib)
term_df,table = get_dataframe(enrichment, lib, table, display_id=lib)
consensus, table = get_consensus(term_df, lib, top_results, table, display_id=lib)
# Visualize
consensus_df = term_df.loc[consensus.index]
if (consensus_df.shape[1] > 0):
clustergram_filename = "%s_consensus_clust.tsv"%lib
clustergram_caption = "Clustergrammer for the top %d consensus terms for %s "%(top_results, lib.replace("_"," "))
clustergrammer(consensus_df,
clustergram_filename,
clustergrammer_url,
lib,
figure,
clustergram_caption,
)
figure+=1
results_count = len(consensus.index) if len(consensus.index) < top_results else top_results
heatmap(consensus_df, "%s_consensus.svg"%lib, lib, width, height)
display(Markdown("**Figure %d** Heatmap for the top %d consensus terms for %s. [Download figure](%s_consensus.svg)"%(figure, results_count, lib.replace("_"," "), lib)),
display_id="heatmap_caption_%s"%lib)
figure+=1
# if num_sigs <=15:
status = stackedBarPlot(consensus_df, "%s_consensus_barplot.svg"%lib, display_id=lib)
if status:
display(Markdown("**Figure %d** Stacked bar plot for the top %d consensus terms for **%s**. [Download figure](%s_consensus_barplot.svg)"%(figure, top_results, lib.replace("_"," "), lib)),
display_id="stacked_bar_caption_%s"%lib)
figure +=1
else:
print("No terms found")
</code>
## References
[1] Chen EY, Tan CM, Kou Y, Duan Q, Wang Z, Meirelles GV, Clark NR, Ma'ayan A. Enrichr: interactive and collaborative HTML5 gene list enrichment analysis tool. BMC Bioinformatics. 2013;128(14).
[2] Kuleshov MV, Jones MR, Rouillard AD, Fernandez NF, Duan Q, Wang Z, Koplev S, Jenkins SL, Jagodnik KM, Lachmann A, McDermott MG, Monteiro CD, Gundersen GW, Ma'ayan A. Enrichr: a comprehensive gene set enrichment analysis web server 2016 update. Nucleic Acids Research. 2016; gkw377.
[3] Fernandez, N. F. et al. Clustergrammer, a web-based heatmap visualization and analysis tool for high-dimensional biological data. Sci. Data 4:170151 doi: 10.1038/sdata.2017.151 (2017).
|
{
"filename": "EnrichrConsensus.ipynb",
"repository": "MaayanLab/appyter-catalog",
"query": "transformed_from_existing",
"size": 23367,
"sha": ""
}
|
# 03_learner_1.ipynb
Repository: matjesg/deepflash2
<code>
#default_exp learner
from nbdev.showdoc import show_doc
</code>
# Ensemble Training and Prediction
> Implements the meta classes for training and inference with deep model ensembles for deepflash2.
<code>
#hide
from fastcore.test import *
</code>
<code>
#export
import torch
import time
import zarr
import pandas as pd
import numpy as np
import cv2
import tifffile
from pathlib import Path
from typing import List, Union, Tuple
from skimage.color import label2rgb
from sklearn.model_selection import KFold
from fastprogress import progress_bar
from fastcore.basics import GetAttr
from fastcore.foundation import L
from fastai import optimizer
from fastai.learner import Learner
from fastai.callback.all import *
from fastai.callback.tracker import SaveModelCallback
from fastai.callback.progress import CSVLogger
from fastai.data.core import DataLoaders
from fastai.data.transforms import get_image_files, get_files
from deepflash2.config import Config
from deepflash2.data import BaseDataset, TileDataset, RandomTileDataset
from deepflash2.models import create_smp_model, save_smp_model, load_smp_model, run_cellpose
from deepflash2.inference import InferenceEnsemble
from deepflash2.losses import get_loss
from deepflash2.utils import compose_albumentations as _compose_albumentations
from deepflash2.utils import dice_score, binary_dice_score, plot_results, get_label_fn, save_mask, save_unc, export_roi_set, get_instance_segmentation_metrics
from fastai.metrics import Dice, DiceMulti
import matplotlib.pyplot as plt
import warnings
#https://discuss.pytorch.org/t/slow-forward-on-traced-graph-on-cuda-2nd-iteration/118445/7
try: torch._C._jit_set_fusion_strategy([('STATIC', 0)])
except: torch._C._jit_set_bailout_depth(0)
</code>
<code>
#export
_optim_dict = {
'ranger' : optimizer.ranger,
'Adam' : optimizer.Adam,
'RAdam' : optimizer.RAdam,
'QHAdam' :optimizer.QHAdam,
'Larc' : optimizer.Larc,
'Lamb' : optimizer.Lamb,
'SGD' : optimizer.SGD,
'RMSProp' : optimizer.RMSProp,
}
</code>
## Base class
<code>
#export
class EnsembleBase(GetAttr):
_default = 'config'
def __init__(self, image_dir:str=None, mask_dir:str=None, files:List[Path]=None, label_fn:callable=None,
config:Config=None, path:Path=None, zarr_store:str=None):
self.config = config or Config()
self.path = Path(path) if path is not None else Path('.')
self.label_fn = None
self.files = L()
store = str(zarr_store) if zarr_store else zarr.storage.TempStore()
root = zarr.group(store=store, overwrite=False)
self.store = root.chunk_store.path
self.g_pred, self.g_smx, self.g_std = root.require_groups('preds', 'smxs', 'stds')
if any(v is not None for v in (image_dir, files)):
self.files = L(files) or self.get_images(image_dir)
if any(v is not None for v in (mask_dir, label_fn)):
assert hasattr(self, 'files'), 'image_dir or files must be provided'
self.label_fn = label_fn or self.get_label_fn(mask_dir)
self.check_label_fn()
def get_images(self, img_dir:str='images', img_path:Path=None) -> List[Path]:
'Returns list of image paths'
path = img_path or self.path/img_dir
files = get_image_files(path, recurse=False)
print(f'Found {len(files)} images in "{path}".')
if len(files)==0: warnings.warn('Please check your provided images and image folder')
return files
def get_label_fn(self, msk_dir:str='masks', msk_path:Path=None):
'Returns label function to get paths of masks'
path = msk_path or self.path/msk_dir
return get_label_fn(self.files[0], path)
def check_label_fn(self):
'Checks label function'
mask_check = [self.label_fn(x).exists() for x in self.files]
chk_str = f'Found {sum(mask_check)} corresponding masks.'
print(chk_str)
if len(self.files)!=sum(mask_check):
warnings.warn(f'Please check your images and masks (and folders).')
def predict(self, arr:Union[np.ndarray, torch.Tensor]) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
'Get prediction for arr using inference_ensemble'
inp = torch.tensor(arr).float().to(self.device)
with torch.inference_mode():
preds = self.inference_ensemble(inp)
preds = [x.cpu().numpy() for x in preds]
return tuple(preds)
def save_preds_zarr(self, f_name, pred, smx, std):
self.g_pred[f_name] = pred
self.g_smx[f_name] = smx
self.g_std[f_name] = std
def _create_ds(self, **kwargs):
self.ds = BaseDataset(self.files, label_fn=self.label_fn, instance_labels=self.instance_labels,
num_classes=self.num_classes, **kwargs)
</code>
<code>
# Tests tbd.
tst = EnsembleBase()
</code>
## Ensemble Learner
<code>
#export
class EnsembleLearner(EnsembleBase):
"Meta class to training model ensembles with `n` models"
def __init__(self, *args, ensemble_path=None, preproc_dir=None, metrics=None, cbs=None,
ds_kwargs={}, dl_kwargs={}, model_kwargs={}, stats=None, **kwargs):
super().__init__(*args, **kwargs)
assert hasattr(self, 'label_fn'), 'mask_dir or label_fn must be provided.'
self.stats = stats
self.dl_kwargs = dl_kwargs
self.model_kwargs = model_kwargs
self.add_ds_kwargs = ds_kwargs
default_metrics = [Dice()] if self.num_classes==2 else [DiceMulti()]
self.metrics = metrics or default_metrics
self.loss_fn = self.get_loss()
self.cbs = cbs or [SaveModelCallback(monitor='dice' if self.num_classes==2 else 'dice_multi')] #ShowGraphCallback
self.ensemble_dir = ensemble_path or self.path/self.ens_dir
if ensemble_path is not None:
ensemble_path.mkdir(exist_ok=True, parents=True)
self.load_models(path=ensemble_path)
else: self.models = {}
self.n_splits=min(len(self.files), self.max_splits)
self._set_splits()
self._create_ds(stats=self.stats, preproc_dir=preproc_dir, verbose=1, **self.add_ds_kwargs)
self.stats = self.ds.stats
self.in_channels = self.ds.get_data(max_n=1)[0].shape[-1]
self.df_val, self.df_ens, self.df_model, self.ood = None,None,None,None
self.recorder = {}
def _set_splits(self):
if self.n_splits>1:
kf = KFold(self.n_splits, shuffle=True, random_state=self.random_state)
self.splits = {key:(self.files[idx[0]], self.files[idx[1]]) for key, idx in zip(range(1,self.n_splits+1), kf.split(self.files))}
else:
self.splits = {1: (self.files[0], self.files[0])}
def _compose_albumentations(self, **kwargs):
return _compose_albumentations(**kwargs)
@property
def pred_ds_kwargs(self):
# Setting default shapes and padding
ds_kwargs = self.add_ds_kwargs.copy()
ds_kwargs['use_preprocessed_labels']= True
ds_kwargs['preproc_dir']=self.ds.preproc_dir
ds_kwargs['instance_labels']= self.instance_labels
ds_kwargs['tile_shape']= (self.tile_shape,)*2
ds_kwargs['num_classes']= self.num_classes
ds_kwargs['max_tile_shift']= self.max_tile_shift
ds_kwargs['scale']= self.scale
ds_kwargs['border_padding_factor']= self.border_padding_factor
return ds_kwargs
@property
def train_ds_kwargs(self):
# Setting default shapes and padding
ds_kwargs = self.add_ds_kwargs.copy()
# Settings from config
ds_kwargs['use_preprocessed_labels']= True
ds_kwargs['preproc_dir']=self.ds.preproc_dir
ds_kwargs['instance_labels']= self.instance_labels
ds_kwargs['stats']= self.stats
ds_kwargs['tile_shape']= (self.tile_shape,)*2
ds_kwargs['num_classes']= self.num_classes
ds_kwargs['scale']= self.scale
ds_kwargs['flip'] = self.flip
ds_kwargs['max_tile_shift']= 1.
ds_kwargs['border_padding_factor']= 0.
ds_kwargs['scale']= self.scale
ds_kwargs['albumentations_tfms'] = self._compose_albumentations(**self.albumentation_kwargs)
ds_kwargs['sample_mult'] = self.sample_mult if self.sample_mult>0 else None
return ds_kwargs
@property
def model_name(self):
encoder_name = self.encoder_name.replace('_', '-')
return f'{self.arch}_{encoder_name}_{self.num_classes}classes'
def get_loss(self):
kwargs = {'mode':self.mode,
'classes':[x for x in range(1, self.num_classes)],
'smooth_factor': self.loss_smooth_factor,
'alpha':self.loss_alpha,
'beta':self.loss_beta,
'gamma':self.loss_gamma}
return get_loss(self.loss, **kwargs)
def _get_dls(self, files, files_val=None):
ds = []
ds.append(RandomTileDataset(files, label_fn=self.label_fn, **self.train_ds_kwargs, verbose=0))
if files_val:
ds.append(TileDataset(files_val, label_fn=self.label_fn, **self.train_ds_kwargs, verbose=0))
else:
ds.append(ds[0])
dls = DataLoaders.from_dsets(*ds, bs=self.batch_size, pin_memory=True, **self.dl_kwargs).to(self.device)
return dls
def _create_model(self):
model = create_smp_model(arch=self.arch,
encoder_name=self.encoder_name,
encoder_weights=self.encoder_weights,
in_channels=self.in_channels,
classes=self.num_classes,
**self.model_kwargs).to(self.device)
return model
def fit(self, i, n_epochs=None, base_lr=None, **kwargs):
'Fit model number `i`'
n_epochs = n_epochs or self.n_epochs
base_lr = base_lr or self.base_lr
name = self.ensemble_dir/'single_models'/f'{self.model_name}-fold{i}.pth'
model = self._create_model()
files_train, files_val = self.splits[i]
dls = self._get_dls(files_train, files_val)
log_name = f'{name.name}_{time.strftime("%Y%m%d-%H%M%S")}.csv'
log_dir = self.ensemble_dir/'logs'
log_dir.mkdir(exist_ok=True, parents=True)
cbs = self.cbs + [CSVLogger(fname=log_dir/log_name)]
self.learn = Learner(dls, model,
metrics=self.metrics,
wd=self.weight_decay,
loss_func=self.loss_fn,
opt_func=_optim_dict[self.optim],
cbs=cbs)
self.learn.model_dir = self.ensemble_dir.parent/'.tmp'
if self.mixed_precision_training: self.learn.to_fp16()
print(f'Starting training for {name.name}')
self.learn.fine_tune(n_epochs, base_lr=base_lr)
print(f'Saving model at {name}')
name.parent.mkdir(exist_ok=True, parents=True)
save_smp_model(self.learn.model, self.arch, name, stats=self.stats)
self.models[i]=name
self.recorder[i]=self.learn.recorder
del model
if torch.cuda.is_available(): torch.cuda.empty_cache()
def get_inference_ensemble(self, model_path=None):
model_paths = [model_path] if model_path is not None else self.models.values()
models = [load_smp_model(p)[0] for p in model_paths]
with warnings.catch_warnings():
warnings.simplefilter("ignore")
ensemble = InferenceEnsemble(models,
num_classes=self.num_classes,
in_channels=self.in_channels,
channel_means=self.stats['channel_means'].tolist(),
channel_stds=self.stats['channel_stds'].tolist(),
tile_shape=(self.tile_shape,)*2,
**self.inference_kwargs).to(self.device)
return torch.jit.script(ensemble)
def save_inference_ensemble(self):
ensemble = self.get_inference_ensemble()
ensemble_name = self.ensemble_dir/f'ensemble_{self.model_name}.pt'
print(f'Saving model at {ensemble_name}')
ensemble.save(ensemble_name)
def fit_ensemble(self, n_epochs=None, skip=False, save_inference_ensemble=True, **kwargs):
'Fit `i` models and `skip` existing'
for i in range(1, self.n_models+1):
if skip and (i in self.models): continue
self.fit(i, n_epochs, **kwargs)
if save_inference_ensemble: self.save_inference_ensemble()
def set_n(self, n):
"Change to `n` models per ensemble"
for i in range(n, len(self.models)):
self.models.pop(i+1, None)
self.n_models = n
def get_valid_results(self, model_no=None, zarr_store=None, export_dir=None, filetype='.png', **kwargs):
"Validate models on validation data and save results"
res_list = []
model_dict = self.models if not model_no else {k:v for k,v in self.models.items() if k==model_no}
metric_name = 'dice_score' if self.num_classes==2 else 'average_dice_score'
if export_dir:
export_dir = Path(export_dir)
pred_path = export_dir/'masks'
pred_path.mkdir(parents=True, exist_ok=True)
unc_path = export_dir/'uncertainties'
unc_path.mkdir(parents=True, exist_ok=True)
for i, model_path in model_dict.items():
print(f'Validating model {i}.')
self.inference_ensemble = self.get_inference_ensemble(model_path=model_path)
_, files_val = self.splits[i]
for j, f in progress_bar(enumerate(files_val), total=len(files_val)):
pred, smx, std = self.predict(self.ds.data[f.name][:])
self.save_preds_zarr(f.name, pred, smx, std)
msk = self.ds.labels[f.name][:] #.get_data(f, mask=True)[0])
m_dice = dice_score(msk, pred, num_classes=self.num_classes)
df_tmp = pd.Series({'file' : f.name,
'model' : model_path,
'model_no' : i,
metric_name: m_dice,
'uncertainty_score': np.mean(std[pred>0]),
'image_path': f,
'mask_path': self.label_fn(f),
'pred_path': f'{self.store}/{self.g_pred.path}/{f.name}',
'softmax_path': f'{self.store}/{self.g_smx.path}/{f.name}',
'uncertainty_path': f'{self.store}/{self.g_std.path}/{f.name}'})
res_list.append(df_tmp)
if export_dir:
save_mask(pred, pred_path/f'{df_tmp.file}_model{df_tmp.model_no}_mask', filetype)
save_unc(std, unc_path/f'{df_tmp.file}_model{df_tmp.model_no}_uncertainty', filetype)
del self.inference_ensemble
if torch.cuda.is_available(): torch.cuda.empty_cache()
self.df_val = pd.DataFrame(res_list)
if export_dir:
self.df_val.to_csv(export_dir/f'val_results.csv', index=False)
self.df_val.to_excel(export_dir/f'val_results.xlsx')
return self.df_val
def show_valid_results(self, model_no=None, files=None, metric_name='auto', **kwargs):
"Plot results of all or `file` validation images",
if self.df_val is None: self.get_valid_results(**kwargs)
df = self.df_val
if files is not None: df = df.set_index('file', drop=False).loc[files]
if model_no is not None: df = df[df.model_no==model_no]
if metric_name=='auto': metric_name = 'dice_score' if self.num_classes==2 else 'average_dice_score'
for _, r in df.iterrows():
img = self.ds.data[r.file][:]
msk = self.ds.labels[r.file][:]
pred = self.g_pred[r.file][:]
std = self.g_std[r.file][:]
_d_model = f'Model {r.model_no}'
plot_results(img, msk, pred, std, df=r, num_classes=self.num_classes, metric_name=metric_name, model=_d_model)
def load_models(self, path=None):
"Get models saved at `path`"
path = path or self.ensemble_dir/'single_models'
models = sorted(get_files(path, extensions='.pth', recurse=False))
self.models = {}
for i, m in enumerate(models,1):
if i==0: self.num_classes = int(m.name.split('_')[2][0])
else: assert self.num_classes==int(m.name.split('_')[2][0]), 'Check models. Models are trained on different number of classes.'
self.models[i] = m
if len(self.models)>0:
self.set_n(len(self.models))
print(f'Found {len(self.models)} models in folder {path}:')
print([m.name for m in self.models.values()])
# Reset stats
print(f'Loading stats from {self.models[1].name}')
_, self.stats = load_smp_model(self.models[1])
def lr_find(self, files=None, **kwargs):
"Wrapper function for learning rate finder"
files = files or self.files
dls = self._get_dls(files)
model = self._create_model()
learn = Learner(dls, model, metrics=self.metrics, wd=self.weight_decay, loss_func=self.loss_fn, opt_func=_optim_dict[self.optim])
if self.mixed_precision_training: learn.to_fp16()
sug_lrs = learn.lr_find(**kwargs)
return sug_lrs, learn.recorder
</code>
<code>
show_doc(EnsembleLearner)
</code>
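As a usage sketch (not part of the original notebook; the folder names and the exported module path `deepflash2.learner` are assumptions), training an ensemble on a folder of images with matching masks might look like:
<code>
from deepflash2.learner import EnsembleLearner  # assumed export path (#default_exp learner)
from deepflash2.config import Config

# Hypothetical project layout: ./images and ./masks with matching file names
learn = EnsembleLearner(image_dir='images', mask_dir='masks', config=Config(), path='.')
learn.fit_ensemble()                # trains the configured number of models and saves a scripted InferenceEnsemble
df_val = learn.get_valid_results()  # per-model scores on the held-out validation splits
learn.show_valid_results()
</code>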
## Ensemble Prediction Class
<code>
#export
class EnsemblePredictor(EnsembleBase):
def __init__(self, *args, ensemble_path:Path=None, **kwargs):
if ensemble_path is not None:
self.load_inference_ensemble(ensemble_path)
super().__init__(*args, **kwargs)
if hasattr(self, 'inference_ensemble'):
self.config.num_classes = self.inference_ensemble.num_classes
if hasattr(self, 'files'):
self._create_ds(stats={}, use_zarr_data = False, verbose=1)
self.ensemble_dir = self.path/self.ens_dir
#if ensemble_path is not None:
# self.load_inference_ensemble(ensemble_path)
def load_inference_ensemble(self, ensemble_path:Path=None):
"Load inference_ensemble from `self.ensemle_dir` or from `path`"
path = ensemble_path or self.ensemble_dir
if path.is_dir():
path_list = get_files(path, extensions='.pt', recurse=False)
if len(path_list)==0:
warnings.warn(f'No inference ensemble available at {path}. Did you train your ensemble correctly?')
return
path = path_list[0]
self.inference_ensemble_name = path.name
if hasattr(self, 'device'): self.inference_ensemble = torch.jit.load(path).to(self.device)
else: self.inference_ensemble = torch.jit.load(path)
print(f'Successfully loaded InferenceEnsemble from {path}')
def get_ensemble_results(self, file_list=None, export_dir=None, filetype='.png', **kwargs):
'Predict files in file_list using InferenceEnsemble'
if file_list is not None:
self.files = file_list
self._create_ds(stats={}, use_zarr_data = False, verbose=1)
if export_dir:
export_dir = Path(export_dir)
pred_path = export_dir/'masks'
pred_path.mkdir(parents=True, exist_ok=True)
unc_path = export_dir/'uncertainties'
unc_path.mkdir(parents=True, exist_ok=True)
res_list = []
for f in progress_bar(self.files):
img = self.ds.read_img(f)
pred, smx, std = self.predict(img)
self.save_preds_zarr(f.name, pred, smx, std)
df_tmp = pd.Series({'file' : f.name,
'ensemble' : self.inference_ensemble_name,
'uncertainty_score': np.mean(std[pred>0]),
'image_path': f,
'pred_path': f'{self.store}/{self.g_pred.path}/{f.name}',
'softmax_path': f'{self.store}/{self.g_smx.path}/{f.name}',
'uncertainty_path': f'{self.store}/{self.g_std.path}/{f.name}'})
res_list.append(df_tmp)
if export_dir:
save_mask(pred, pred_path/f'{df_tmp.file}_mask', filetype)
save_unc(std, unc_path/f'{df_tmp.file}_unc', filetype)
self.df_ens = pd.DataFrame(res_list)
return self.g_pred, self.g_smx, self.g_std
def score_ensemble_results(self, mask_dir=None, label_fn=None):
"Compare ensemble results to given segmentation masks."
if any(v is not None for v in (mask_dir, label_fn)):
self.label_fn = label_fn or self.get_label_fn(mask_dir)
self._create_ds(stats={}, use_zarr_data = False, verbose=1)
print('Calculating metrics')
for i, r in progress_bar(self.df_ens.iterrows(), total=len(self.df_ens)):
msk = self.ds.labels[r.file][:]
pred = self.g_pred[r.file][:]
if self.num_classes==2:
self.df_ens.loc[i, f'dice_score'] = binary_dice_score(msk, pred)
else:
for cl in range(self.num_classes):
msk_bin = msk==cl
pred_bin = pred==cl
if np.any([msk_bin, pred_bin]):
self.df_ens.loc[i, f'dice_score_class{cl}'] = binary_dice_score(msk_bin, pred_bin)
if self.num_classes>2:
self.df_ens['average_dice_score'] = self.df_ens[[col for col in self.df_ens if col.startswith('dice_score_class')]].mean(axis=1)
return self.df_ens
def show_ensemble_results(self, files=None, unc=True, unc_metric=None, metric_name='auto'):
"Show result of ensemble or `model_no`"
assert self.df_ens is not None, "Please run `get_ensemble_results` first."
df = self.df_ens
if files is not None: df = df.reset_index().set_index('file', drop=False).loc[files]
if metric_name=='auto': metric_name = 'dice_score' if self.num_classes==2 else 'average_dice_score'
for _, r in df.iterrows():
imgs = []
imgs.append(self.ds.read_img(r.image_path))
if metric_name in r.index:
imgs.append(self.ds.labels[r.file][:])
hastarget=True
else:
hastarget=False
imgs.append(self.g_pred[r.file])
if unc: imgs.append(self.g_std[r.file])
plot_results(*imgs, df=r, hastarget=hastarget, num_classes=self.num_classes, metric_name=metric_name, unc_metric=unc_metric)
def get_cellpose_results(self, export_dir=None, check_missing=True):
'Get instance segmentation results using the cellpose integration'
assert self.df_ens is not None, "Please run `get_ensemble_results` first."
cl = self.cellpose_export_class
assert cl<self.num_classes, f'{cl} not available from {self.num_classes} classes'
smxs, preds = [], []
for _, r in self.df_ens.iterrows():
smxs.append(self.g_smx[r.file][:])
preds.append(self.g_pred[r.file][:])
probs = [x[cl] for x in smxs]
masks = [x==cl for x in preds]
cp_masks = run_cellpose(probs, masks,
model_type=self.cellpose_model,
diameter=self.cellpose_diameter,
min_size=self.min_pixel_export,
flow_threshold=self.cellpose_flow_threshold,
gpu=torch.cuda.is_available())
# Check for missing pixels in cellpose masks
if check_missing:
for i, _ in self.df_ens.iterrows():
cp_mask_bin = (cp_masks[i]>0).astype('uint8')
n_diff = np.sum(masks[i]!=cp_mask_bin, dtype='uint8')
self.df_ens.at[i,f'cellpose_removed_pixels_class{cl}'] = n_diff
if export_dir:
export_dir = Path(export_dir)/'instance_labels'
export_dir.mkdir(parents=True, exist_ok=True)
for idx, r in self.df_ens.iterrows():
tifffile.imwrite(export_dir/f'{r.file}_class{cl}.tif', cp_masks[idx], compress=6)
self.cellpose_masks = cp_masks
return cp_masks
def score_cellpose_results(self, mask_dir=None, label_fn=None):
"Compare cellpose nstance segmentation results to given masks."
assert self.cellpose_masks is not None, 'Run get_cellpose_results() first'
if any(v is not None for v in (mask_dir, label_fn)):
self.label_fn = label_fn or self.get_label_fn(mask_dir)
self._create_ds(stats={}, use_zarr_data = False, verbose=1)
cl = self.cellpose_export_class
for i, r in self.df_ens.iterrows():
msk = self.ds.labels[r.file][:]==cl
_, msk = cv2.connectedComponents(msk.astype('uint8'), connectivity=4)
pred = self.cellpose_masks[i]
ap, tp, fp, fn = get_instance_segmentation_metrics(msk, pred, is_binary=False, min_pixel=self.min_pixel_export)
self.df_ens.loc[i, f'mAP_class{cl}'] = ap.mean()
self.df_ens.loc[i, f'mAP_iou50_class{cl}'] = ap[0]
return self.df_ens
def show_cellpose_results(self, files=None, unc_metric=None, metric_name='auto'):
'Show instance segmentation results from cellpose predictions.'
assert self.df_ens is not None, "Please run `get_ensemble_results` first."
df = self.df_ens.reset_index()
if files is not None: df = df.set_index('file', drop=False).loc[files]
if metric_name=='auto': metric_name=f'mAP_class{self.cellpose_export_class}'
for _, r in df.iterrows():
imgs = [self.ds.read_img(r.image_path)]
if metric_name in r.index:
mask = self.ds.labels[r.file][:]
mask = (mask==self.cellpose_export_class).astype('uint8')
_, comps = cv2.connectedComponents(mask, connectivity=4)
imgs.append(label2rgb(comps, bg_label=0))
hastarget=True
else:
hastarget=False
imgs.append(label2rgb(self.cellpose_masks[r['index']], bg_label=0))
imgs.append(self.g_std[r.file])
plot_results(*imgs, df=r, hastarget=hastarget, num_classes=self.num_classes, instance_labels=True, metric_name=metric_name, unc_metric=unc_metric)
def export_imagej_rois(self, output_folder='ROI_sets', **kwargs):
'Export ImageJ ROI Sets to `output_folder`'
assert self.df_ens is not None, "Please run prediction first."
output_folder = Path(output_folder)
output_folder.mkdir(exist_ok=True, parents=True)
for idx, r in progress_bar(self.df_ens.iterrows(), total=len(self.df_ens)):
pred = self.g_pred[r.file][:]
uncertainty = self.g_std[r.file][:]
export_roi_set(pred, uncertainty, name=r.file, path=output_folder, ascending=False, **kwargs)
def export_cellpose_rois(self, output_folder='cellpose_ROI_sets', **kwargs):
'Export cellpose predictions to ImageJ ROI Sets in `output_folder`'
output_folder = Path(output_folder)
output_folder.mkdir(exist_ok=True, parents=True)
for idx, r in progress_bar(self.df_ens.iterrows(), total=len(self.df_ens)):
pred = self.cellpose_masks[idx]
uncertainty = self.g_std[r.file][:]
export_roi_set(pred, uncertainty, instance_labels=True, name=r.file, path=output_folder, ascending=False, **kwargs)
</code>
<code>
show_doc(EnsemblePredictor)
</code>
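Continuing the hypothetical sketch above, prediction on new data would load the scripted ensemble saved during training and run it over a folder of images ('unseen_images' and 'results' are placeholder paths):
<code>
# Hedged usage sketch; paths and variable names are placeholders, not from the notebook.
pred = EnsemblePredictor(image_dir='unseen_images', ensemble_path=learn.ensemble_dir, config=Config(), path='.')
pred.get_ensemble_results(export_dir='results')   # writes masks and uncertainty maps
pred.show_ensemble_results()
</code>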
<code>
# Tests tbd.
t = EnsemblePredictor()
</code>
## Export -
<code>
#hide
from nbdev.export import *
notebook2script()
</code>
|
{
"filename": "03_learner_1.ipynb",
"repository": "matjesg/deepflash2",
"query": "transformed_from_existing",
"size": 38796,
"sha": ""
}
|
# Fall2018Import.ipynb
Repository: mglerner/IntroductoryPhysics
<code>
import pandas as pd, makesyllabus as ms, imp
from IPython.display import HTML
</code>
<code>
df = pd.read_csv('PHYS125-0201910(10302)-Non Newtonian Physicist-responses.csv')
</code>
<code>
for row in df.sort_values('Surname').iterrows():
r = row[1]
print(r['First name'], r['Surname'])
print(r['Response 1'])
</code>
<code>
imp.reload(ms)
</code>
<code>
for i in df['Response 1']:
if i.strip() != '-':
print(ms.Scientist(i))
</code>
|
{
"filename": "Fall2018Import.ipynb",
"repository": "mglerner/IntroductoryPhysics",
"query": "transformed_from_existing",
"size": 79977,
"sha": ""
}
|
# Preprocess_sample22.ipynb
Repository: jiang-junyao/DRCTdb
<code>
import scanpy as sc
import numpy as np
import pandas as pd
import scipy.io as sio
import scipy.sparse as sparse
import sys
import os
</code>
<code>
def convert(filename,anndata):
if not os.path.lexists(filename):
os.makedirs(filename)
#Create dir
h5ad_file = anndata
h5ad_file.obs.to_csv(f'./{filename}/{filename}_metadata.txt.gz', compression='gzip',sep='\t', index=True)
#write metadata
sio.mmwrite(f'./{filename}/{filename}.mtx',sparse.csr_matrix(h5ad_file.X.T))
#write sparce matrix
h5ad_file.var.to_csv(f'./{filename}/{filename}_features.txt.gz',compression='gzip',sep='\t')
#write features
#gzip files
print('Converted finish')
</code>
<code>
islet_scRNA = sc.read_h5ad('../../data/scRNA-seq/Sample22/GSE202497_final_cluster.h5ad')
</code>
<code>
islet_scRNA.obs.to_csv(f'./scRNA-seq/sample22_metadata.txt.gz', compression='gzip',sep='\t', index=True)
</code>
<code>
islet_scATAC = sc.read_h5ad('../../data/scATAC-seq/Sample22/GSE202498_final_cluster.h5ad')
</code>
<code>
islet_scATAC.obs.to_csv(f'sample22_metadata.txt.gz', compression='gzip',sep='\t', index=True)
</code>
<code>
convert('sample22_islet', islet_scRNA)
</code>
|
{
"filename": "Preprocess_sample22.ipynb",
"repository": "jiang-junyao/DRCTdb",
"query": "transformed_from_existing",
"size": 3043,
"sha": ""
}
|
# marrow_analysis_bone_marrow_atlas.ipynb
Repository: Sarah145/bone
<a href="https://colab.research.google.com/github/Sarah145/bone_marrow_analysis/blob/master/bone_marrow_atlas.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Bone Marrow Atlas
This notebook contains instructions to explore a single-cell atlas of the bone marrow in health and leukemia.
The atlas is described in this paper: Cell-cell interactome of the hematopoietic niche and its changes in acute myeloid leukemia. *Ennis S et. al., iScience, 2023. DOI: [10.1016/j.isci.2023.106943](https://doi.org/10.1016/j.isci.2023.106943)*.
If you want to explore the dataset, check the expression of different genes, look at cell type distributions etc., then follow the instructions below to generate an interactive browser of the dataset using cellxgene.
If you're interested in using this dataset as a reference and querying your own bone marrow single-cell RNA-seq data, you can download the scVI model parameters from [here](https://zenodo.org/record/5931689) and follow the instructions from the [scVI tutorial](https://scarches.readthedocs.io/en/latest/scvi_surgery_pipeline.html#Perform-surgery-on-reference-model-and-train-on-query-dataset).
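If you go the reference-mapping route, a minimal sketch of the scvi-tools "surgery" workflow is shown below; the model directory name, the query file name, and the training settings are placeholders for illustration, so follow the linked tutorial for the authoritative steps.
<code>
import scanpy as sc
import scvi

# Hypothetical paths: the unpacked reference model directory and your own query data.
REF_MODEL_DIR = "bone_marrow_scvi_model"      # directory containing the downloaded scVI model
query = sc.read_h5ad("my_marrow_query.h5ad")  # your bone marrow scRNA-seq query dataset

# Align the query AnnData to the genes/setup of the reference model,
# then load the reference weights and fine-tune on the query ("surgery").
scvi.model.SCVI.prepare_query_anndata(query, REF_MODEL_DIR)
model = scvi.model.SCVI.load_query_data(query, REF_MODEL_DIR)
model.train(max_epochs=100, plan_kwargs={"weight_decay": 0.0})

# Joint latent space for the query cells, usable for UMAP or label transfer.
query.obsm["X_scVI"] = model.get_latent_representation()
</code>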
-------
## Running a cellxgene server
To start a cellxgene session, run the following code chunks (press the ▶ button at the top-left of each cell or hit Ctrl+Enter) to install packages, download data and start a server. Only after running ALL cells in order, click the link in the output of the second-last cell; it will take you to your cellxgene session, where you can play with the data.
<code>
# install python modules
!pip install --quiet cellxgene
</code>
<code>
# download data (this may take a while)
!wget https://zenodo.org/record/5931689/files/bone_marrow.h5ad
</code>
<code>
from google.colab.output import eval_js
print("Click on the following link AFTER running the cellxgene command in the next chunk.")
print(eval_js("google.colab.kernel.proxyPort(5005)"))
</code>
<code>
# launch cellxgene server
!cellxgene launch bone_marrow.h5ad
</code>
|
{
"filename": "marrow_analysis_bone_marrow_atlas.ipynb",
"repository": "Sarah145/bone",
"query": "transformed_from_existing",
"size": 3827,
"sha": ""
}
|
# nn.ipynb
Repository: lucaprotelli/nn-MNIST
### A simple neural network for MNIST from scratch
This project shows how to build and train a simple neural network (1 hidden layer) to classify MNIST digits, implementing everything from scratch in NumPy.
##### 1. Importing the libraries and loading the dataset
I import the basic libraries and load the MNIST dataset for training.
Pandas is very convenient for reading .csv files, but it is not suited for matrix computations. To train the network I will use NumPy, which is much faster for vectorized calculations. Matplotlib will be used to "draw" the digits and visually check whether the network is working.
<code>
import numpy as np # Numerical computing
import pandas as pd # Data handling
from matplotlib import pyplot as plt # Image visualization
# Load the MNIST dataset (train.csv: images + labels)
data = pd.read_csv('train.csv')
</code>
##### 2. Data preprocessing
I split the data into training/dev sets, normalize the pixels and separate images and labels.
**Goal**: turn the MNIST images (28×28 px, values 0–255) into 784×1 column vectors and normalize them to [0, 1].
<code>
# convert to a NumPy array and get its dimensions
data = np.array(data)
m, n = data.shape # m = number of examples, n = 785 (784 pixels + 1 label)
np.random.shuffle(data) # shuffle the data to avoid bias
# Development (dev) set: take the first 1000 examples and transpose them
data_dev = data[0:1000].T
Y_dev = data_dev[0] # labels (0–9)
X_dev = data_dev[1:n] # image pixels (shape 784×1000)
X_dev = X_dev / 255. # normalize the pixels to [0,1]
# Training set: the rest of the data
data_train = data[1000:m].T
Y_train = data_train[0]
X_train = data_train[1:n]
X_train = X_train / 255. # pixel normalization
# Here X_train becomes a 784×m matrix, with m examples.
_, m_train = X_train.shape # m_train: number of training examples
</code>
- **Shuffle**: if the data were sorted by label (all the "0"s, then all the "1"s …), training would learn poorly.
- **Dev set**: a small subset held out to tune hyperparameters and check for overfitting.
- **Normalization**: the original pixels range from 0 to 255; rescaling them to [0,1] helps the optimizer converge faster and avoids overly large gradients.
<code>
Y_train
</code>
##### 3. Defining the neural network and helper functions
The neural network will have a simple two-layer architecture. The input layer $a^{[0]}$ will have 784 units corresponding to the 784 pixels of each 28x28 input image. A hidden layer $a^{[1]}$ will have 10 units with ReLU activation, and finally the output layer $a^{[2]}$ will have 10 units corresponding to the ten digit classes with softmax activation.
**Forward propagation**
$$Z^{[1]} = W^{[1]} X + b^{[1]}$$
$$A^{[1]} = g_{\text{ReLU}}(Z^{[1]})$$
$$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$$
$$A^{[2]} = g_{\text{softmax}}(Z^{[2]})$$
**Backward propagation**
$$dZ^{[2]} = A^{[2]} - Y$$
$$dW^{[2]} = \frac{1}{m} dZ^{[2]} A^{[1]T}$$
$$dB^{[2]} = \frac{1}{m} \Sigma {dZ^{[2]}}$$
$$dZ^{[1]} = W^{[2]T} dZ^{[2]} .* g^{[1]\prime} (z^{[1]})$$
$$dW^{[1]} = \frac{1}{m} dZ^{[1]} A^{[0]T}$$
$$dB^{[1]} = \frac{1}{m} \Sigma {dZ^{[1]}}$$
**Parameter updates**
$$W^{[2]} := W^{[2]} - \alpha dW^{[2]}$$
$$b^{[2]} := b^{[2]} - \alpha db^{[2]}$$
$$W^{[1]} := W^{[1]} - \alpha dW^{[1]}$$
$$b^{[1]} := b^{[1]} - \alpha db^{[1]}$$
**Vars and shapes**
Forward prop
- $A^{[0]} = X$: 784 x m
- $Z^{[1]} \sim A^{[1]}$: 10 x m
- $W^{[1]}$: 10 x 784 (as $W^{[1]} A^{[0]} \sim Z^{[1]}$)
- $B^{[1]}$: 10 x 1
- $Z^{[2]} \sim A^{[2]}$: 10 x m
- $W^{[2]}$: 10 x 10 (as $W^{[2]} A^{[1]} \sim Z^{[2]}$)
- $B^{[2]}$: 10 x 1
Backprop
- $dZ^{[2]}$: 10 x m ($\sim A^{[2]}$)
- $dW^{[2]}$: 10 x 10
- $dB^{[2]}$: 10 x 1
- $dZ^{[1]}$: 10 x m ($\sim A^{[1]}$)
- $dW^{[1]}$: 10 x 784
- $dB^{[1]}$: 10 x 1
<code>
# Initialize the weights and biases for the two layers
def init_params():
W1 = np.random.rand(10, 784) - 0.5 # Hidden layer weights
b1 = np.random.rand(10, 1) - 0.5 # Hidden layer bias
W2 = np.random.rand(10, 10) - 0.5 # Output layer weights
b2 = np.random.rand(10, 1) - 0.5 # Output layer bias
return W1, b1, W2, b2
# Why random? If all weights started at zero, all neurons in the same layer would receive the same gradient and learn exactly the same things (symmetry problem).
# Choosing a -0.5 offset brings the initial values into a neighborhood of zero, which helps keep the activations balanced.
# ReLU activation function: "flattens" negative values to zero, makes networks easier to train than sigmoid/tanh and produces sparse activations (many zeros)
def ReLU(Z):
return np.maximum(Z, 0)
# Softmax activation function (output layer): turns the raw values of the output layer into a probability over 10 classes that sums to 1.
def softmax(Z):
A = np.exp(Z) / sum(np.exp(Z))
return A
# Forward propagation: computes the outputs and activations of each layer
def forward_prop(W1, b1, W2, b2, X):
Z1 = W1.dot(X) + b1 # weighted sum + bias
A1 = ReLU(Z1) # hidden activation
Z2 = W2.dot(A1) + b2 # weighted sum + bias
A2 = softmax(Z2) # probabilities over the 10 classes
return Z1, A1, Z2, A2
# Derivative of ReLU: needed in backpropagation, it is 1 where Z>0, 0 elsewhere.
def ReLU_deriv(Z):
return Z > 0
# Turns the labels into one-hot vectors: converts the integer label (e.g. 7) into a vector of size 10 with a "1" in position 7 and zeros elsewhere, as required by the cross-entropy formula.
def one_hot(Y):
one_hot_Y = np.zeros((Y.size, Y.max() + 1))
one_hot_Y[np.arange(Y.size), Y] = 1
one_hot_Y = one_hot_Y.T
return one_hot_Y
# Why: cross-entropy requires label vectors in "one-hot" format.
# Backward propagation: applies the chain rule to go back from the output gradients to the input weights. Thanks to softmax+cross-entropy, the derivative of the loss with respect to Z2 is simply A2 − Y_one_hot.
def backward_prop(Z1, A1, Z2, A2, W1, W2, X, Y):
one_hot_Y = one_hot(Y)
# gradient of the output layer combining softmax + cross-entropy
dZ2 = A2 - one_hot_Y
dW2 = 1 / m * dZ2.dot(A1.T)
db2 = 1 / m * np.sum(dZ2)
# gradient of the hidden layer
dZ1 = W2.T.dot(dZ2) * ReLU_deriv(Z1)
dW1 = 1 / m * dZ1.dot(X.T)
db1 = 1 / m * np.sum(dZ1)
return dW1, db1, dW2, db2
# Update the parameters using the gradient and the learning rate. Subtracts a fraction (alpha) of the gradient from each parameter.
def update_params(W1, b1, W2, b2, dW1, db1, dW2, db2, alpha):
W1 = W1 - alpha * dW1
b1 = b1 - alpha * db1
W2 = W2 - alpha * dW2
b2 = b2 - alpha * db2
return W1, b1, W2, b2
</code>
##### 4. Evaluation and training functions
I define the functions to compute predictions and accuracy and to run the training loop (gradient descent).
<code>
# Returns the class with the highest probability for each example
def get_predictions(A2):
return np.argmax(A2, 0)
# Computes the accuracy of the predictions against the true labels
def get_accuracy(predictions, Y):
print(predictions, Y)
return np.sum(predictions == Y) / Y.size
# Training loop (gradient descent): iterates the forward→backward→update process, periodically printing the training-set accuracy to monitor progress.
def gradient_descent(X, Y, alpha, iterations):
W1, b1, W2, b2 = init_params()
for i in range(iterations):
Z1, A1, Z2, A2 = forward_prop(W1, b1, W2, b2, X)
dW1, db1, dW2, db2 = backward_prop(Z1, A1, Z2, A2, W1, W2, X, Y)
W1, b1, W2, b2 = update_params(W1, b1, W2, b2, dW1, db1, dW2, db2, alpha)
if i % 10 == 0:
print("Iteration: ", i)
predictions = get_predictions(A2)
print(get_accuracy(predictions, Y))
return W1, b1, W2, b2
</code>
<code>
W1, b1, W2, b2 = gradient_descent(X_train, Y_train, 0.10, 500)
</code>
- Learning rate = 0.10: balances learning speed vs stability.
- 500 iterations: enough to make the model converge on MNIST without excessive overfitting.
~85% accuracy on the training set.
##### 5. Testing and visualizing the predictions
Functions to test the network on individual images and visualize the result.
<code>
# Computes the predictions for a data set
def make_predictions(X, W1, b1, W2, b2):
_, _, _, A2 = forward_prop(W1, b1, W2, b2, X)
predictions = get_predictions(A2)
return predictions
# helps verify case by case: prints prediction vs label and shows the image.
def test_prediction(index, W1, b1, W2, b2):
current_image = X_train[:, index, None]
prediction = make_predictions(X_train[:, index, None], W1, b1, W2, b2)
label = Y_train[index]
print("Prediction: ", prediction)
print("Label: ", label)
current_image = current_image.reshape((28, 28)) * 255
plt.gray()
plt.imshow(current_image, interpolation='nearest')
plt.show()
</code>
Let's look at a few examples:
<code>
test_prediction(0, W1, b1, W2, b2)
test_prediction(1, W1, b1, W2, b2)
test_prediction(2, W1, b1, W2, b2)
test_prediction(3, W1, b1, W2, b2)
</code>
##### 6. Evaluation on the development set (dev set)
I compute the network's accuracy on the development set to assess generalization.
<code>
dev_predictions = make_predictions(X_dev, W1, b1, W2, b2)
get_accuracy(dev_predictions, Y_dev)
</code>
84% accuracy: I can say the network generalized quite well from the training data.
##### 7. Prediction on the test set and saving the results
I apply the network to the test data and save the predictions to a CSV file for submission.
<code>
# Load and normalize the test set
test_data = pd.read_csv('test.csv')
X_test = np.array(test_data)
X_test = X_test / 255. # Normalize pixels
# Predictions on the test set
_, _, _, A2_test = forward_prop(W1, b1, W2, b2, X_test.T)
# Pick the most likely classes
test_predictions = get_predictions(A2_test)
# Create the DataFrame for the submission
submission = pd.DataFrame({
'ImageId': range(1, len(test_predictions) + 1),
'Label': test_predictions
})
# Save the submission CSV
submission.to_csv('Submission.csv', index=False)
print(f'Predictions saved to Submission.csv')
print(f'Number of test images: {len(test_predictions)}')
print('\nFirst few predictions:')
print(submission.head())
</code>
#### Network diagram
- **Input:** 784 neurons (pixels)
- **Hidden layer:** 10 neurons, ReLU activation
- **Output layer:** 10 neurons, softmax activation (classes 0-9)
This simple architecture makes it possible to classify the MNIST digits with good accuracy, implementing everything from scratch without advanced deep learning libraries.
|
{
"filename": "nn.ipynb",
"repository": "lucaprotelli/nn-MNIST",
"query": "transformed_from_existing",
"size": 69369,
"sha": ""
}
|
# pyreft.ipynb
Repository: 3ricchen/CS224N-Project
<code>
import argparse
import random
import torch
import numpy as np
import torch.nn.functional as F
from torch import nn
from torch.utils.data import DataLoader
from tqdm import tqdm
from my_datasets import (
ParaphraseDetectionDataset,
ParaphraseDetectionTestDataset,
load_paraphrase_data
)
from evaluation_reft import model_eval_paraphrase, model_test_paraphrase
from models.gpt2 import GPT2Model
from optimizer import AdamW
import transformers
import pyreft
</code>
<code>
!export CUDA_VISIBLE_DEVICES=2
</code>
<code>
device = 'cuda' if torch.cuda.is_available() else 'cpu'
gpt2 = transformers.AutoModelForCausalLM.from_pretrained('gpt2-large').to(device)
gpt2_tokenizer = transformers.AutoTokenizer.from_pretrained('gpt2-large', device = 'cuda')
gpt2_tokenizer.pad_token = gpt2_tokenizer.unk_token
EOS_TOKEN=gpt2_tokenizer.eos_token
</code>
<code>
## VANILLA REFT CONFIG
# Interventions at layers 19, 24, 29, 35 (middle-to-late blocks), matching the list below.
# Each intervention is rank 2-4.
# For sonnet, try first 3/last 3 interventions.
# For classification, try last 3.
reft_config = pyreft.ReftConfig(representations=[{
"layer": l, "component": "block_output",
"low_rank_dimension": 4,
"intervention": pyreft.LoreftIntervention(embed_dim=gpt2.config.hidden_size,
low_rank_dimension=4)} for l in [19, 24, 29, 35]])
reft_model = pyreft.get_reft_model(gpt2, reft_config)
reft_model.set_device("cuda")
reft_model = reft_model.float()
reft_model.print_trainable_parameters()
</code>
<code>
from types import SimpleNamespace
args = SimpleNamespace(
para_train="data/quora-train.csv",
para_dev="data/quora-dev.csv",
para_test="data/quora-test-student.csv",
para_dev_out="predictions/para-dev-output.csv",
para_test_out="predictions/para-test-output.csv",
seed=11711,
epochs=10,
use_gpu=True,  # set to False to force CPU
batch_size=32,
lr=1e-5,
model_size="gpt2"
)
</code>
<code>
para_train_data = load_paraphrase_data(args.para_train)
para_dev_data = load_paraphrase_data(args.para_dev)
para_train_data = ParaphraseDetectionDataset(para_train_data, args, tokenizer = gpt2_tokenizer)
para_dev_data = ParaphraseDetectionDataset(para_dev_data, args, tokenizer = gpt2_tokenizer)
para_train_dataloader = DataLoader(para_train_data, shuffle=True, batch_size=args.batch_size,
collate_fn=para_train_data.collate_fn)
para_dev_dataloader = DataLoader(para_dev_data, shuffle=False, batch_size=args.batch_size,
collate_fn=para_dev_data.collate_fn)
</code>
<code>
inputs = [f'<|user|>:Tell me if these questions are asking the same thing.\nQuestion 1: {p[0]}\nQuestion 2: {p[1]}\nAre these questions asking the same thing?</s>\n<|assistant|>:' for p in para_train_data]
outputs = [('yes' if p[2] == 1 else 'no') for p in para_train_data]
print('DATA LOADED')
positions = 'l3'
data_module = pyreft.make_multiple_position_supervised_data_module(
gpt2_tokenizer, gpt2, inputs, outputs,
positions = positions, num_interventions = len(reft_config.representations))
</code>
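Before training, it can help to peek at what the data module actually contains. A minimal sketch, assuming the module exposes a `train_dataset` entry (it is unpacked into the Hugging Face trainer below, which expects that key); the exact fields of each example are pyreft-version specific:
<code>
# Inspect the first prepared training example (sketch; field names may vary across pyreft versions)
example = data_module["train_dataset"][0]
print(example.keys())
print(gpt2_tokenizer.decode(example["input_ids"]))
</code>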
<code>
training_args = transformers.TrainingArguments(
    num_train_epochs=1, output_dir="./tmp", per_device_train_batch_size=10,
    learning_rate=5e-5, logging_steps=100,
    lr_scheduler_type=transformers.SchedulerType.LINEAR,
    report_to=[],  # disable logging
    warmup_steps=500,
    weight_decay=0.001  # L2 regularization (alternative: 0.01)
)
trainer = pyreft.ReftTrainerForCausalLM(
model=reft_model, tokenizer=gpt2_tokenizer, args=training_args, **data_module)
_ = trainer.train()
</code>
<code>
print(inputs[0:5])
</code>
<code>
print(EOS_TOKEN)
</code>
<code>
import os
import shutil
save_dir = "./reft_gpt_large_PARAPHRASE_BIGGER_AND_BETTER"
if os.path.exists(save_dir):
shutil.rmtree(save_dir) # Remove the existing directory
reft_model.set_device("cpu") # Move model to CPU before saving
reft_model.save(save_directory=save_dir)
# reft_model.set_device("cpu") # send back to cpu before saving.
# reft_model.save(
# save_directory="./reft_gpt_large_PARAPHRASE",
# overwrite=True,
# )
</code>
<code>
import torch, transformers, pyreft
device = "cuda"
model_name_or_path = "gpt2-large"
model = transformers.AutoModelForCausalLM.from_pretrained(
model_name_or_path, torch_dtype=torch.bfloat16, device_map=device)
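# Note: this loads the earlier "./reft_gpt_large_PARAPHRASE_BIGGER" checkpoint, not the "_BIGGER_AND_BETTER" directory saved above.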
reft_model = pyreft.ReftModel.load(
"./reft_gpt_large_PARAPHRASE_BIGGER", model
)
reft_model.set_device(device)
device = 'cuda' if torch.cuda.is_available() else 'cpu'
gpt2 = transformers.AutoModelForCausalLM.from_pretrained('gpt2-large').to(device)
gpt2_tokenizer = transformers.AutoTokenizer.from_pretrained('gpt2-large', device = 'cuda')
gpt2_tokenizer.pad_token = gpt2_tokenizer.unk_token
EOS_TOKEN=gpt2_tokenizer.eos_token
</code>
<code>
prompt = ["<|user|>:Tell me if these questions are asking the same thing.\nQuestion 1: Are you gay?\nQuestion 2: What is the capital of France?\nAre these questions asking the same thing?</s>\n<|assistant|>:"]
prompt = gpt2_tokenizer(prompt, return_tensors="pt").to(device)
full_prompt = gpt2_tokenizer.decode(prompt["input_ids"][0], skip_special_tokens=True)
print(full_prompt)
base_unit_location = prompt["input_ids"].shape[-1] - 1 # last position
_, reft_response = reft_model.generate(
prompt, unit_locations={"sources->base": (None, [[[base_unit_location]]])},
intervene_on_prompt=True, max_new_tokens=512, do_sample=True,
eos_token_id=gpt2_tokenizer.eos_token_id, early_stopping=True
)
print(gpt2_tokenizer.decode(reft_response[0], skip_special_tokens=True))
</code>
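Given the prompt template above, the generated continuation should end in a "yes" or "no" answer. A minimal sketch of turning the raw generation into a binary label (an assumption about the output format; the proper evaluation uses `model_eval_paraphrase_intervenable` below):
<code>
# Crude yes/no extraction from the decoded generation (sketch)
decoded = gpt2_tokenizer.decode(reft_response[0], skip_special_tokens=True)
answer = decoded.split("<|assistant|>:")[-1].strip().lower()
pred_label = 1 if answer.startswith("yes") else 0
print(repr(answer[:20]), "->", pred_label)
</code>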
<code>
from evaluation_reft import model_eval_paraphrase_intervenable
model_eval_paraphrase_intervenable(para_dev_dataloader, reft_model, 'cuda', gpt2_tokenizer)
</code>
|
{
"filename": "pyreft.ipynb",
"repository": "3ricchen/CS224N-Project",
"query": "transformed_from_existing",
"size": 74538,
"sha": ""
}
|
# a_1.ipynb
Repository: Abhishekyes/Sensor-Fault-Detection
<code>
pwd
</code>
<code>
import pandas as pd
</code>
<code>
pip install PyYAML
</code>
<code>
import yaml
</code>
<code>
pip install dill
</code>
<code>
import dill
</code>
<code>
from sensor.utils.main_utils import write_yaml_file
</code>
<code>
path = r"C:\VE\Scratch Pad\AI_PRACTICE\Projects\Sensor-Fault-detection\Sensor-Fault-Detection\data\sensor_data.csv"
</code>
<code>
df = pd.read_csv(path)
df.dtypes
</code>
<code>
print(df.columns)
print(type(df))
</code>
<code>
# Pass the actual dtypes (as plain strings), not the literal string "df.dtypes"
write_yaml_file(file_path="sen.yaml", content=df.dtypes.apply(lambda x: x.name).to_dict())
</code>
<code>
import pandas as pd
import yaml
# Convert column names and data types to a dictionary
dtypes_dict = df.dtypes.apply(lambda x: x.name).to_dict()
# Convert the dictionary to YAML
yaml_content = yaml.dump(dtypes_dict, default_style='|')
# Write the YAML content to a file
file_path = "data_types.yaml"
with open(file_path, "w") as file:
file.write(yaml_content)
print(f"YAML file '{file_path}' has been created.")
</code>
<code>
import pandas as pd
import yaml
# Extract column names and their data types
column_names = df.columns.tolist()
column_types = df.dtypes.apply(lambda x: x.name).tolist()
# Create a dictionary with the structure you want
yaml_dict = {
'columns': ['class: category'] + [f'{col}: {col_type}' for col, col_type in zip(column_names, column_types)],
'numerical_columns': column_names,
'drop_columns': ['br_000', 'bq_000', 'bp_000', 'ab_000', 'cr_000', 'bo_000', 'bn_000']
}
# Convert the dictionary to YAML
yaml_content = yaml.dump(yaml_dict)
# Write the YAML content to a file
file_path = "data_structure.yaml"
with open(file_path, "w") as file:
file.write(yaml_content)
print(f"YAML file '{file_path}' has been created.")
</code>
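To double-check the written file, it can be loaded back and inspected. A minimal sketch, reading the `data_structure.yaml` produced by the cell above:
<code>
# Load the generated YAML back and inspect its top-level structure
import yaml

with open("data_structure.yaml") as f:
    loaded = yaml.safe_load(f)

print(sorted(loaded.keys()))  # expected: ['columns', 'drop_columns', 'numerical_columns']
print(len(loaded["numerical_columns"]), "numerical columns listed")
</code>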
|
{
"filename": "a_1.ipynb",
"repository": "Abhishekyes/Sensor-Fault-Detection",
"query": "transformed_from_existing",
"size": 12195,
"sha": ""
}
|
# ref_rem_1.ipynb
Repository: skand001/MSc-Medical-Text-Summarisation-for-IRD-Publications
<code>
import re
def remove_references(text):
"""
Remove the references section from the text. This function looks for the word 'References'
followed by '1.' and removes everything from that point onward.
"""
# Pattern to match 'References' followed by '1.'
reference_pattern = r'\bReferences\s+1\.\s'
# Search for the pattern in the text
match = re.search(reference_pattern, text, re.IGNORECASE)
if match:
# If 'References 1.' is found, remove everything from that point onward
return text[:match.start()].strip()
# If no references section is found, return the original text
return text
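# A tiny self-check of remove_references() on a synthetic string (added sketch, not part of the original notebook):
_demo = "Some body text. References 1. First cited work."
assert remove_references(_demo) == "Some body text."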
# Input text
input_text = """ Title: /gid00030/gid00035/gid00032/gid00030/gid00038/gid00001/gid00033/gid00042/gid00045 /gid00001
/gid00030/gid00035/gid00032/gid00030/gid00038/gid00001/gid00033/gid00042/gid00045 /gid00001
/gid00048/gid00043/gid00031/gid00028/gid00047/gid00032/gid00046Citation: Yan, J.; Günter, A.; Das, S.;
Mühlfriedel, R.; Michalakis, S.; Jiao,
K.; Seeliger, M.W.; Paquet-Durand, F.
Inherited Retinal Degeneration:
PARP-Dependent Activation of
Calpain Requires CNG Channel
Activity. Biomolecules 2022, 12, 455.
https://doi.org/10.3390/
biom12030455
Academic Editor: Massimo Dal
Monte
Received: 4 February 2022
Accepted: 10 March 2022
Published: 15 March 2022
Publisher’s Note: MDPI stays neutral
with regard to jurisdictional claims in
published maps and institutional affil-
iations.
Copyright: © 2022 by the authors.
Licensee MDPI, Basel, Switzerland.
This article is an open access article
distributed under the terms and
conditions of the Creative Commons
Attribution (CC BY) license (https://
creativecommons.org/licenses/by/
4.0/).
biomolecules
Article
Inherited Retinal Degeneration: PARP-Dependent Activation of
Calpain Requires CNG Channel Activity
Jie Yan1,2
, Alexander Günter3
, Soumyaparna Das1
, Regine Mühlfriedel3, Stylianos Michalakis4
,
Kangwei Jiao5
, Mathias W. Seeliger3,* and François Paquet-Durand1,*
1Cell Death Mechanism Group, Institute for Ophthalmic Research, University of Tübingen,
72076 Tübingen, Germany; jieyan19910809@hotmail.com (J.Y.); soumyaparnadas@gmail.com (S.D.)
2Graduate Training Centre of Neuroscience, University of Tübingen, 72076 Tübingen, Germany
3Division of Ocular Neurodegeneration, Institute for Ophthalmic Research, University of Tübingen,
72076 Tübingen, Germany; alexander.guenter@uni-tuebingen.de (A.G.);
regine.muehlfriedel@med.uni-tuebingen.de (R.M.)
4Department of Ophthalmology, University Hospital, LMU Munich, 80539 München, Germany;
michalakis@lmu.de
5Key Laboratory of Yunnan Province, Affiliated Hospital of Yunnan University, Kunming 650051, China;
kangwei.jiao@ynu.edu.cn
*Correspondence: mathias.seeliger@uni-tuebingen.de (M.W.S.);
francois.paquet-durand@klinikum.uni-tuebingen.de (F.P .-D.)
Abstract: Inherited retinal degenerations (IRDs) are a group of blinding diseases, typically involving
a progressive loss of photoreceptors. The IRD pathology is often based on an accumulation of cGMP
in photoreceptors and associated with the excessive activation of calpain and poly (ADP-ribose)
polymerase (PARP). Inhibitors of calpain or PARP have shown promise in preventing photoreceptor
cell death, yet the relationship between these enzymes remains unclear. To explore this further,
organotypic retinal explant cultures derived from wild-type and IRD-mutant mice were treated
with inhibitors specific for calpain, PARP , and voltage-gated Ca2+channels (VGCCs). The outcomes
were assessed using in situ activity assays for calpain and PARP and immunostaining for activated
calpain-2, poly (ADP-ribose), and cGMP , as well as the TUNEL assay for cell death detection. The IRD
models included the Pde6b-mutant rd1mouse and rd1*Cngb1 / double-mutant mice, which lack the
beta subunit of the rod cyclic nucleotide-gated (CNG) channel and are partially protected from rd1
degeneration. We confirmed that an inhibition of either calpain or PARP reduces photoreceptor cell
death in rd1retina. However, while the activity of calpain was decreased by the inhibition of PARP ,
calpain inhibition did not alter the PARP activity. A combination treatment with calpain and PARP
inhibitors did not synergistically reduce cell death. In the slow degeneration of rd1*Cngb1 / double
mutant, VGCC inhibition delayed photoreceptor cell death, while PARP inhibition did not. Our
results indicate that PARP acts upstream of calpain and that both are part of the same degenerative
pathway in Pde6b-dependent photoreceptor degeneration. While PARP activation may be associated
with CNG channel activity, calpain activation is linked to VGCC opening. Overall, our data highlights
PARP as a target for therapeutic interventions in IRD-type diseases.
Keywords: retinitis pigmentosa; calcium; cGMP; nonapoptotic cell death; PKG; HDAC; photoreceptor
degeneration
1. Introduction
Inherited retinal degenerations (IRDs) are a genetically diverse group of diseases that
typically result in progressive photoreceptor cell death, severe visual handicap, and blind-
ness [ 1]. The most common disease within the IRD group is retinitis pigmentosa (RP) [ 2],
in which patients initially experience night blindness and gradual constriction of the visual
field due to primary loss of rod photoreceptors. This is followed by secondary degeneration
Biomolecules 2022, 12, 455. https://doi.org/10.3390/biom12030455 https://www.mdpi.com/journal/biomolecules
Biomolecules 2022, 12, 455 2 of 21
of cone photoreceptors, eventually resulting in complete blindness [ 3]. Approximately one
in four thousand people are affected by RP [ 2]. IRD-type blinding diseases are generally
considered to be untreatable [ 4]. The second messenger cyclic-guanosine-monophosphate
(cGMP) has been found to play a central role in the pathobiology of many genetically
distinct types of IRD [ 5], and excessive cGMP-signaling may be directly or indirectly asso-
ciated with the activity of poly (ADP-ribose) polymerase (PARP), cyclic nucleotide-gated
(CNG) channels, and calpain-type proteases [5–7].
One of the best studied animal models for IRD is the rd1mouse (retinal degenera-
tion 1), a naturally occurring mouse model first described by Keeler in the early 1920s [ 8].
Inrd1mice, the gene encoding for the beta subunit of the rod photoreceptor-specific
phosphodiesterase-6 (PDE6) is mutated [ 9], causing PDE6 dysfunction, accumulation of
cGMP in rod photoreceptors, and primary rod cell death, followed by secondary cone
photoreceptor cell loss [ 10,11]. The degeneration of rods in the rd1mouse is associated
with a prominent activation of both PARP and calpain [ 12,13]. In humans, 4 to 5% of IRD
patients carry mutations in PDE6 genes, making it seem likely that they will also suffer from
high cGMP levels and the corresponding up-regulation of downstream cGMP-signaling
targets [14].
PARP is a DNA repair enzyme [ 15] and catalyzes ADP-ribose transfer to target pro-
teins [ 16]. It can sequentially add ADP-ribose units from nicotinamide adenine dinucleotide
(NAD+) to form polymeric ADP-ribose (PAR) chains [ 17]. The enzymatic activity of PARP
has been related to a variety of different cellular functions, including DNA repair and
transcription, regulation of gene expression, metabolism, and aging [ 18]. However, PARP
may also be the primary driver for a specific form of cell death, termed PARthanatos [ 19].
The current view is that in IRD nonapoptotic cGMP-dependent cell death is characterized
by PARP over-activation and the accumulation of PAR [ 14], indicating a possible crosstalk
between cGMP signaling and PARthanatos [6].
Calpain is a Ca2+-dependent thiol protease, which has been implicated in fundamental
cellular processes, including cell proliferation, apoptosis, and differentiation [ 20]. The
Ca2+influx required for calpain activation may occur via CNG channels located in the
plasma membrane of the photoreceptor outer segment. These channels are gated by cGMP ,
which is often elevated in retinal degeneration [ 7,21,22]. The activation of CNG channels
leads to a depolarization of the photoreceptor plasma membrane, which activates synaptic
voltage-gated Ca2+channels (VGCCs), increasing the synaptic and cytosolic Ca2+levels [ 23].
The influx of Ca2+via CNG channels and/or VGCC is thought to cause the activation of
calpain-type proteases, which may promote cell destruction [24–27].
In IRD, PARP and calpain have been suggested to be part of two independent cell
death subroutines, both of which are triggered by elevated levels of cGMP [ 5]. On the other
hand, PARP is known to be cleaved by calpain, indicating that calpain could potentially
control PARP activity [ 28,29]. Conversely, in a model for NMDA toxicity in rat primary
cortical neurons, PARP was found to regulate the calpain activity via mitochondrial Ca2+
homeostasis [ 30]. We hypothesized that PARP and calpain crosstalk could also occur
during cGMP-dependent cell death in IRD. To investigate this possibility, we used specific
inhibitors for PARP , calpain, and VGCC to block the corresponding downstream pathways
and assessed the contribution of the rod CNG channel via knockout of its beta subunit.
Through these interventions, we show that (1) calpain and PARP are part of the same
degenerative pathway triggered by high levels of cGMP in photoreceptors and that (2)
PARP controls calpain activity, likely indirectly via CNG channel activity and excessive
demands on energy metabolism.
2. Materials and Methods
2.1. Animals
For retinal explant cultures C3H/HeA Pde6brd1/rd1animals (rd1), their congenic
wild-type C3H/HeA Pde6b+/+counterparts (wt) [ 31], and B6.129SvJ;C3H/HeA-CNGB1tm
double-mutant mice (rd1*Cngb1 / ) were used [ 22].The rd1*Cngb1 / double mutants were
Biomolecules 2022, 12, 455 3 of 21
generated by an intercross of rd1and Cngb1 / . Animals were used regardless of gender.
The stock has been maintained by repeated backcrossing over 10 generations to make a
congenic inbred strain, homozygous for both gene mutations. Animals were housed under
standard white cyclic lighting and had free access to food and water. Animal protocols
compliant with §4 of the German law of animal protection were reviewed and approved
by the Tübingen University committee on animal protection (Einrichtung für Tierschutz,
Tierärztlicher Dienst und Labortierkunde, Registration No. AK02/19M).
2.2. Retinal Explant Culture
To assess the effects of Olaparib, calpastatin, and D-cis-diltiazem on calpain activity , acti-
vated calpain-2, PARP activity , PAR, and photoreceptor degeneration, rd1and rd1*Cngb1 /
retinas were explanted at postnatal day 5 (P5). The explants were cultured on a polycar-
bonate membrane (Corning-Costar Transwell permeable support, 24-mm insert, #CLS3412)
with complete medium (Gibco R16 medium with supplements) [ 32]. The R16 medium was
changed every two days with treatment at either P7 and P9 for rd1or at P7, P9, P11, P13,
and P15 for rd1*Cngb1 / explants. Except for the wtsituation, the two retinas obtained
from a single animal were split across different experimental groups so as to maximize the
number of independent observations acquired per animal. The cultures were treated with
20-M calpastatin, 1- M Olaparib, 100- M D-cis-diltiazem, and 20- M calpastatin combined
with 1- M Olaparib, respectively. In these treatments, Olaparib was dissolved in DMSO at
a final medium concentration of 0.1% DMSO. Cultures were ended on P11 ( rd1) and P17
(rd1*Cngb1 / ) by either fixation with 4% paraformaldehyde (PFA) or without fixation and
direct freezing on liquid N 2. The explants were embedded in Tissue-Tek (Sakura Finetek
Europe B.V ., Alphen aan den Rijn, The Netherlands) and sectioned (12 m) in a cryostat
(Thermo Fisher Scientific, CryoStar NX50 OVP , Runcorn, UK).
2.3. TUNEL Staining
The TUNEL (terminal deoxynucleotidyl transferase dUTP nick end labeling) assay
kit (Roche Diagnostics, Mannheim, Germany) labeled dying cells. Histological sections
from retinal explants were dried and stored at 20C. The sections were rehydrated
with phosphate-buffered saline (PBS; 0.1 M) and incubated with proteinase K (1.5 g/L)
diluted in 50-mM TRIS-buffered saline (TBS; 1- L enzyme in 7-mL TBS) for 5 min. This
was followed by 3 times 5-min TBS washing and incubation with blocking solution (10%
normal goat serum, 1% bovine serum albumin, and 1% fish gelatin in phosphate-buffered
saline with 0.03% Tween-20). TUNEL staining solution was prepared using 10 parts of
blocking solution, 9 parts of TUNEL labeling solution, and 1 part of TUNEL enzyme. After
blocking, the sections were incubated with TUNEL staining solution overnight at 4C.
Finally, the sections were washed 2 times with PB, mounted using Vectashield with DAPI
(Vector Laboratories Inc., Burlingame, CA, USA), and imaged under a Zeiss (ApoTome 2)
microscope for further analysis.
2.4. Calpain-Activity Assay
This assay allows resolving the overall calpain activity in situ on unfixed tissue sections.
Retinal tissue sections were incubated and rehydrated for 15 min in a calpain reaction buffer
(CRB) (5.96-g HEPES, 4.85-g KCl, 0.47-g MgCl 2, and 0.22-g CaCl 2in 100-mL ddH 2O; pH 7.2)
with 2-mM dithiothreitol (DTT). The tissue sections were incubated for 2.5 h at 37C in CRB
with tBOC-Leu-Met-CMAC (25- M; Thermo Fisher Scientific, A6520). Then, the section
was washed with PBS and incubated with ToPro (1:1000 in PBS, Thermo Fisher Scientific,
OR, USA) for 15 min. Afterwards, the tissue sections were washed twice in PBS (5 min)
and mounted using Vectashield without DAPI (Vector Laboratories Inc., Burlingame, CA,
USA) for immediate visualization under the ZEISS ApoTome 2.
Biomolecules 2022, 12, 455 4 of 21
2.5. P ARP Activity Assay
This assay allows resolving the overall PARP activity in situ on unfixed tissue sec-
tions [ 33]. Retinal tissue sections were incubated and rehydrated for 10 min in PBS. The
reaction mixture (10-mM MgCl 2, 1-mM dithiothreitol, and 50- M 6-Fluo-10-NAD+(Biolog,
Cat. Nr.: N 023) in 100-mM Tris buffer with 0.2% Triton X100, pH 8.0) was applied to the
sections for 3 h at 37C. After three 5-min washes in PBS, the sections were mounted in
Vectashield with DAPI (Vector Laboratories Inc., Burlingame, CA, USA) for immediate
visualization under the ZEISS ApoTome 2.
2.6. P AR DAB Staining
For the detection of PAR DAB staining, we used fixed sections. 3,30-diaminobenzidine
(DAB) staining commenced with the quenching of endogenous peroxidase activity using
40% MeOH and 10% H 2O2in PBS with 0.3% Triton X-100 (PBST) for 20 min. The sections
were further incubated with 10% normal goat serum (NGS) in PBST for 30 min, followed by
anti-PAR antibody (1:200; Enzo Life Sciences, Farmingdale, NY, USA) incubation overnight
at 4C. Incubation with the biotinylated secondary antibody (1:150, Vector in 5% NGS
in PBST) for 1 h was followed by the Vector ABC Kit (Vector Laboratories, solution A
and solution B in PBS, 1:150 each) for 1 h. DAB staining solution (0.05-mg/mL NH 4Cl,
200-mg/mL glucose, 0.8-mg/mL nickel ammonium sulphate, 1-mg/mL DAB, and 0.1 vol.
% glucose oxidase in phosphate buffer) was applied evenly, incubated for precisely 3 min,
and immediately rinsed with phosphate buffer to stop the reaction. The sections were
mounted in Aquatex (Merck, Darmstadt, Germany).
2.7. Calpain-2/cGMP Immunohistochemistry
Sections were rehydrated with PBS for 15 min. The sections were then incubated
with blocking solution (10% NGS, 1% BSA 911, and 0.3% PBST) for 1 h. The primary
antibodies anti-calpain-2 (ab39165; 1:200; Abcam, Cambridge, UK) and cGMP (1:250; kindly
provided by Harry Steinbusch, Maastricht University, Maastricht, The Netherlands) were
diluted in blocking solution and incubated overnight at 4C. Rinsing with PBS for 3 times
10-min each was followed by incubation with the secondary antibody (Molecular Probes,
AlexaFluor568 (A11036), diluted 1:300 in PBS) for 1 h. The sections were further rinsed with
PBS for 3 times 10-min each and mounted with Vectashield with DAPI (Vector Laboratories
Inc., Burlingame, CA, USA).
2.8. Microscopy and Image Analysis in Retinal Cultures
The images of organotypic explant cultures were captured using a Zeiss Imager Z.2
fluorescence microscope, equipped with ApoTome 2, an Axiocam 506 mono camera, and
HXP-120V fluorescent lamp (Carl Zeiss Microscopy, Oberkochen, Germany). The excitation
(Exc.)/emission ( Em.) characteristics of the filter sets used for the different fluorophores
were as follows (in nm): DAPI ( Exc. = 369 nm, Em = 465 nm), AF488 ( Exc. = 490 nm,
Em = 525 nm), AF568 ( Exc. = 578 nm, Em = 602 nm), and ToPro ( Exc. = 642 nm,
Em = 661 nm). The Zen 2.3 blue edition software (Zeiss) captured images (tiled and
z-stack, 20magnification). Sections of 12- m thickness were analyzed using
12–15 Apotome Z-planes. For the quantification of positive cells in the retinal ONL, we
proceeded as follows: The number of cells in six different rectangular ONL areas was
counted manually based on the number of DAPI-stained nuclei and used to calculate an
average ONL cell size. This average ONL cell size was then used to rapidly calculate the
total number of cells in a given ONL area. The percentage of positive cells was calculated
by dividing the absolute number of positive cells by the total number of ONL cells.
2.9. Statistical Analysis
Quantitative data was compared by the Student’s t-test or Mann–Whitney Utest.
Multiple comparisons were made using the Kruskal–Wallis one-way analysis of variance
test. All calculations were performed with GraphPad Prism 8 (GraphPad Software, La Jolla,
Biomolecules 2022, 12, 455 5 of 21
CA, USA); p< 0.05 was considered significant. The figures were prepared using Photoshop
CS5 (Adobe, San Jose, CA, USA). The diagram was created with BioRender.com.
3. Results
3.1. Calpastatin, D-cis-diltiazem, and Olaparib Reduce Calpain Activity in Photoreceptors
To investigate whether and how PARP and calpain could interact with each other, we
treated organotypic retinal explants with well-validated and highly selective inhibitors
for calpain (i.e., calpastatin) [ 34] and PARP (i.e., Olaparib) [ 35]. In addition, we used
D-cis-diltiazem to block L-type voltage-gated Ca2+channels (VGCCs) [ 36]. As readouts, we
used in situ activity assays for calpain and PARP , immunolabeling for activated calpain-2
and poly (ADP-ribose) (PAR), as well as the TUNEL assay, to detect cell death. Retinal
explant cultures were derived from wtand rd1animals, explanted at postnatal day 5 (P5),
and treated from P7 to P11.
In the wtretina, calpain activity and calpain-2 activation were generally relatively low
when compared with rd1, in which both markers labeled large numbers of positive cells
in the ONL (Figure 1a and Appendix AScheme A1a). In rd1, treatment with calpastatin
and D-cis-diltiazem, as expected, reduced both the calpain activity and calpain-2 activation
(p< 0.0001, Figure 1d and Appendix AScheme A1d). The solvent DMSO used for Olaparib
dissolution did not affect the calpain activity (Appendix AScheme A2a–c). Surprisingly,
when the retinal explants were treated with Olaparib, the calpain activity and calpain-
2 activation in the ONL was also significantly decreased (p = 0.0012, Figure 1e,g and
Appendix AScheme A1e,k). However, the combined treatment with calpastatin and
Olaparib did not reduce cell death any further compared to the calpastatin or Olaparib
single treatments (Figure 1f).
Biomolecules 2022, 12, x FOR PEER REVIEW 5 of 22
of cells in a given ONL area. The percentage of positive cells was calculated by dividing
the absolute number of positive cells by the total number of ONL cells.
2.9. Statistical Analysis
Quantitative data was compared by the Student’s t-test or Mann –Whitney U test.
Multiple comparisons were made using the Kruskal –Wallis one -way analysis of variance
test. All calculations were performed with GraphPad Prism 8 (GraphPad Software, La
Jolla , CA , USA ); p < 0.05 was considered significant. The f igures were prepared using Pho-
toshop CS5 (Adobe, San Jose, CA, USA). The diagram was created with BioRender.com.
3. Results
3.1. Calpastatin, D-cis-diltiazem , and Olaparib Reduce Calpain Activity in Photoreceptors
To investigate whether and how PARP and calpain could interact with each other,
we treated organotypic retinal explants with well -validated and highly selective inhibitors
for calpain ( i.e., calpastatin) [34] and PARP (i.e., O laparib) [35]. In addition, we used D -
cis-diltiazem to block L -type voltage -gated Ca2+ channels (VGCCs) [36]. As readouts, we
used in situ activity assays for calpain and PARP, immunolabeling for activated calpain -
2 and poly (ADP -ribose) (PAR), as well as the TUNEL assay , to detect cell death. Retinal
explant cultures were derived from wt and rd1 animals, explanted at postnatal day 5 (P5),
and treated from P7 to P11.
In the wt retina, calpain activity and calpain -2 activation were generally relatively
low when compa red with rd1, in which both markers labeled large numbers of positive
cells in the ONL (Figure 1a and Appendix A Scheme A1a). In rd1, treatment with cal-
pastatin and D -cis-diltiazem, as expected, reduced both the calpain activity and calpain -2
activation ( p < 0.0001, Figure 1d and Appendix A Scheme A1d). The solvent DMSO used
for Olaparib dissolution did not affect the calpain activity (Appendix A Scheme A2a–c).
Surprisingly, when the retinal explants were treated with Olaparib, the calpain activity
and calpain -2 activation in the ONL was also significantly decreased ( p = 0.0012, Figure
1e,g and Appendix A Scheme A1e,k). However, the combined treatment with calpastatin
and Olaparib did not reduce cell death any further compared to the calpastatin or
Olapa rib single treatments (Figure 1 f).
Figure 1. Effects of calpastatin, D -cis-diltiazem, Olaparib, and combination treatments on calpain
activity. The calpain activity assay (blue) was performed on unfixed wt (a) and rd1 retinal cross -
sections. ToPro (red) was used as nuclear counterstaining. Untreated rd1 retina (untr.; b) was com-
pared to retina treated with either calpastatin ( c), D-cis-diltiazem ( d), Olaparib (e), or calpastatin
and Olaparib combined (f ). The scatter plots show the percentages of outer nuclear layer (ONL) cells
positive for calpain activity ( g) in wt and rd1 retina. Statistical significance was assessed using one -
way ANOVA and Tukey’s multiple comparison post hoc testing performed between the control
(rd1 untreated) and 20-
µ
M calpastatin (CAST20), 100-
µM D -
cis-diltiazem (D100), 1-
µ
M Olaparib
Figure 1. Effects of calpastatin, D-cis-diltiazem, Olaparib, and combination treatments on calpain
activity. The calpain activity assay (blue) was performed on unfixed wt(a) and rd1retinal cross-
sections. ToPro (red) was used as nuclear counterstaining. Untreated rd1retina (untr.; b) was
compared to retina treated with either calpastatin ( c), D-cis-diltiazem ( d), Olaparib ( e), or calpastatin
and Olaparib combined ( f). The scatter plots show the percentages of outer nuclear layer (ONL)
cells positive for calpain activity ( g) inwtand rd1retina. Statistical significance was assessed using
one-way ANOVA and Tukey’s multiple comparison post hoc testing performed between the control
(rd1 untreated) and 20- M calpastatin (CAST20), 100- M D-cis-diltiazem (D100), 1- M Olaparib
(OLA1), and 20- M calpastatin combined with 1- M Olaparib (CAST20+OLA1). All treatments
reduced the calpain activity in rd1ONL; however, there was no added synergistic benefit from the
CAST20+OLA1 combination. Untr. wt: 6 explants from 3 different mice; untr. rd1: 16/16; CAST20 rd1:
8/8; D100 rd1: 6/6; OLA1 rd1: 9/9; CAST20+OLA1 rd1: 8/8; error bars represent SD; *** = p0.001
and **** = p0.0001. INL = inner nuclear layer, GCL = ganglion cell layer. Scale bar = 50 m.
Biomolecules 2022, 12, 455 6 of 21
3.2. P ARP Activity in rd1 Photoreceptors Is Reduced by Olaparib and D-cis-diltiazem but Not
by Calpastatin
The reduction of calpain activity after Olaparib treatment suggested that PARP activity
controlled calpain activation. To further investigate the effects of PARP activity on calpain,
we assessed PARP activity on unfixed retinal tissue sections [ 33] and stained PFA-fixed
retinal tissues for poly (ADP-ribose) (PAR) to evaluate PARP activity also indirectly [ 13].
Calpastatin, D-cis-diltiazem, Olaparib, and calpastatin combined with Olaparib were used
to treat organotypic retinal explant cultures derived from rd1 mice. Untreated explant
cultures from wtand rd1animals were used as the control.
In untreated wtretina the numbers of photoreceptors in the ONL displaying PARP
activity were very low when compared to untreated rd1retina (Figure 2a,b). Calpastatin
did not reduce the numbers of ONL cells positive for PARP activity or PAR (Figure 2c,
quantification in Figure 2g, and Appendix AScheme A1h,l) when compared to the rd1
untreated control. Interestingly, D-cis-diltiazem significantly reduced the PARP activity
(p< 0.001, Figure 2d,g) and PAR generation (p < 0.01, Appendix AScheme A1i,l) in
rd1organotypic retinal explant cultures. As expected, Olaparib significantly decreased
the PARP activity (p < 0.0001, Figure 2e,g) and PAR-positive cells (p < 0.05, Appendix A
Scheme A1j,l), while its solvent DMSO had no effect (Appendix AScheme A2d–f). However,
the combination treatment with calpastatin and Olaparib did not reduce the PARP activity
further (Figure 2f,g) compared with the Olaparib single treatment.
Together, our data up to this point (Sections 3.1and 3.2) indicated that calpain activity
was regulated by PARP and that PARP activity in turn might be controlled by VGCC.
Biomolecules 2022, 12, x FOR PEER REVIEW 6 of 22
(OLA1), and 20 - µM c alpastatin combined with 1 - µ M Olaparib (CAST20+OLA1). All treatments re-
duced the calpain activity in rd1 ONL ; however, there was no added synergistic benefit from the
CAST20+OLA1 combination. Untr. wt: 6 explants from 3 different mice; untr. rd1: 16/16; CAST20 rd1 :
8/8; D100 rd1 : 6/6; OLA1 rd1: 9/9; CAST20+OLA1 rd1 : 8/8; error bars represent SD; *** = p ≤ 0.001 and
**** = p ≤ 0.0001. INL = inner nuclear layer, GCL = ganglion cell layer. Scale bar = 50 µm.
3.2. PARP Activity in rd1 Photoreceptors Is Reduced by Olaparib and D -cis-diltiazem but Not
by Calpastatin
The reduction of calpain activity after Olaparib treatment sugges ted that PARP ac-
tivity controlled calpain activation. To further investigate the effects of PARP activity on
calpain, we assessed PARP activity on unfixed retinal tissue sections [33] and stained PFA -
fixed retinal tissues for poly (ADP -ribose) (PAR) to evaluate PARP activity also indirectly
[13]. Calpastatin, D -cis-diltiazem, Olaparib, and calpastatin combined with O laparib were
used to treat organotypic retinal explant cultures derived from rd1 mice . Untreated explant
cultures from wt and rd1 animals were used as the control.
In untreated wt retina the numbers of photoreceptors in the ONL displaying PARP
activity were very low when compared to untreated rd1 retina (Figure 2a,b). Calpastatin
did not reduce the numbers of ONL cells positive for PARP activity or PAR (Figure 2c ,
quantification in Figure 2g , and Appendix A Scheme A1h,l) when compared to the rd1
untreated control. Interestingly, D -cis-diltiazem significantly reduced the PARP activity
(p < 0.001, Figure 2d,g ) and PAR generation ( p < 0.01 , Appendix A Scheme A1i,l) in rd1
organotypic retinal explant cultures. As expected, O laparib significantly decreased the
PARP activity ( p < 0.0001, Figure 2 e,g) and PAR -positive cells (p < 0.05, Appendix A
Scheme A1j,l), while its solvent DMSO had no effect (Appendix A Scheme A2d –f). How-
ever, the combination treatment with calpastatin and Olaparib did not reduce the PARP
activity further (Figure 2 f,g) compared with the Olaparib single treatment.
Together, our data up to this point ( Sections 3.1 and 3.2) indi cated that calpain activ-
ity was regulated by PARP and that PARP activity in turn might be controlled by VGCC.
Figure 2. Effects of calpastatin, D -cis-diltiazem, Olaparib, and combination treatments on PARP ac-
tivity. PARP activity assay (green) in wt and rd1 retina. DAPI (grey) was used as nuclear counter-
staining. Untreated wt (untr.; a) was compared to untreated rd1 retina ( b) and retinae treated with
either calpastatin ( c), D-cis-diltiazem (d ), Olaparib (e), or calpastatin and O laparib combined (f ). The
scatter plots show the percentages of the outer nuclear layer (ONL) cells positive for PARP activity
(g) in wt and rd1 retina. Statistical significance was assessed using o ne-way ANOVA and Tukey’s
multiple comparison post hoc testing performed between the control ( rd1 untreated) and 20 -
µ
M
calpastatin (CAST20), 100-
µ
M D -cis-diltiazem (D100), 1 -
µ
M Olaparib (OLA1), and 20 -
µ
M cal-
pastatin combined with 1 -
µ
M Olaparib (CAST20+OLA1). Calpastatin did not reduce the PARP ac-
tivity, while D -cis-diltiazem and O laparib did. Untr. wt: 6 explants from 3 different mice; untr. rd1:
18/18; CAST20 rd1: 4/4; D100 rd1 : 10/10; OLA1 rd1: 9/9; CAST20+OLA1 rd1 : 8/8; error bars represent
SD; ns = p > 0.05 , *** = p ≤ 0.001, and **** = p ≤ 0.0001. INL = inner nuclear layer, GCL = ganglion cell
layer. Scale bar = 50 µm.
Figure 2. Effects of calpastatin, D-cis-diltiazem, Olaparib, and combination treatments on PARP
activity. PARP activity assay (green) in wtand rd1retina. DAPI (grey) was used as nuclear counter-
staining. Untreated wt(untr.; a) was compared to untreated rd1retina ( b) and retinae treated with
either calpastatin ( c), D-cis-diltiazem ( d), Olaparib ( e), or calpastatin and Olaparib combined ( f). The
scatter plots show the percentages of the outer nuclear layer (ONL) cells positive for PARP activity
(g) inwtand rd1retina. Statistical significance was assessed using one-way ANOVA and Tukey’s
multiple comparison post hoc testing performed between the control (rd1 untreated) and 20- M
calpastatin (CAST20), 100- M D-cis-diltiazem (D100), 1- M Olaparib (OLA1), and 20- M calpastatin
combined with 1- M Olaparib (CAST20+OLA1). Calpastatin did not reduce the PARP activity, while
D-cis-diltiazem and Olaparib did. Untr. wt: 6 explants from 3 different mice; untr. rd1: 18/18;
CAST20 rd1: 4/4; D100 rd1: 10/10; OLA1 rd1: 9/9; CAST20+OLA1 rd1: 8/8; error bars represent SD;
ns = p> 0.05, *** = p0.001, and **** = p0.0001. INL = inner nuclear layer, GCL = ganglion cell
layer. Scale bar = 50 m.
Biomolecules 2022, 12, 455 7 of 21
3.3. rd1 Photoreceptor Degeneration Is Delayed by Calpastatin and Olaparib but Not
by D-cis-diltiazem
To further investigate the effects of calpain, VGCC, and PARP inhibition on the pro-
gression of rd1photoreceptor cell death, we used the TUNEL assay to quantify the numbers
of dying cells in the ONL. Since DMSO was used as a solvent for Olaparib, we also tested
for the effects of DMSO alone on cell death and found that DMSO had no significant effect
(Appendix AScheme A2g–i).
Inwtretinal explants, a relatively low number of ONL cells (i.e., photoreceptors)
were positive for the TUNEL assay compared with their rd1counterparts (Figure 3a,b).
Calpastatin treatment (Figure 3c) led to a significant reduction of cell death in rd1retina
(calpastatin: p= 0.0057, quantification in Figure 3g) when compared to the untreated rd1
control. In contrast, D-cis-diltiazem treatment did not reduce the number of TUNEL-
positive dying cells in rd1ONL (Figure 3d). Yet, the Olaparib treatment did result in a
significant reduction of ONL cell death (Olaparib: p< 0.0001, Figure 3e,g). The combination
treatment with calpastatin and Olaparib reduced cell death in the ONL (calpastatin and
Olaparib: p< 0.0001, Figure 3f,g), albeit without showing an additional effect compared
with either of the two compounds applied alone. These results provided a strong indication
that PARP and calpain were part of the same photoreceptor degenerative pathway.
As an additional control, to rule out possible off-target effects of notably D-cis-
diltiazem or Olaparib on cGMP synthesis in photoreceptors, we performed an immunos-
taining for cGMP on rd1retinal explant cultures. The cGMP staining and its quantification
did not indicate any treatment-related alterations in cGMP accumulation in ONL cells
(Appendix AScheme A3).
Biomolecules 2022, 12, x FOR PEER REVIEW 7 of 22
3.3. rd1 Photoreceptor Degeneration Is Delayed by Calpastatin and Olaparib but Not
by D -cis-diltiazem
To further investigate the effects of calpain, VGCC, and PARP inhibition on the pro-
gression of rd1 photoreceptor cell death, we used the TUNEL assay to quantify the num-
bers of dying cells in the ONL. Since DMSO was used as a solvent for O laparib , we also
tested for the effects of DMSO alone on cell death and found that DMSO had no significant
effect ( Appendix A Scheme A2g –i).
In wt retinal explants, a relatively low number of ONL cells ( i.e., photoreceptors)
were positive for the TUNEL assay compared with their rd1 counterparts (Figure 3a,b).
Calpastatin treatment (Figure 3c) led to a significant reduction of cell death in rd1 retina
(calpastatin: p = 0.0057, quantification in Figure 3g) when compared to the untreated rd1
control. In contrast, D -cis-diltiazem treatment did not reduce the number of TUNEL- pos-
itive dying cells in rd1 ONL ( Figure 3d). Yet, the Olaparib treatment did result in a signif-
icant reduction of ONL cell death ( Olaparib: p < 0.0001, Figure 3e,g). The combination
treatment with calpastatin and O laparib reduced cell death in the ONL (calpastatin and
Olaparib: p < 0.0001, Figure 3f,g), al beit without showing an additional effect compared
with either of the two compounds applied alone. These results provided a strong indica-
tion that PARP and calpain were part of the same photoreceptor degenerative pathway.
As an additional control, to rule out possible off -target effects of notably D -cis-dilti-
azem or Olaparib on cGMP synthesis in photoreceptors, we performed an immunostain-
ing for cGMP on rd1 retinal explant cultures. The cGMP staining and its quantification did
not indicate any treatment -related alterations in cGMP accumulation in ONL cells ( Ap-
pendix A Scheme A3).
Figure 3. Effects of calpastatin, D-cis -diltiazem, Olaparib, and combination treatments on rd1 retinal
cell viability. The TUNEL assay labeled dying cells (magenta) in wt and rd1 retinal explant cultures.
DAPI (grey) was used as a nuclear counterstain. Untreated wt (a) and rd1 control retina (untr.; b)
were compared to retina treated with either 20 -µM calpastatin (CAST20, c), 100 -µM D -cis-diltiazem
(D100, d), 1-µM Olaparib (OLA1, e), or 20 -µM calpastatin combined with 1 -µM Olaparib
(CAST20+OLA1, f). Note the large numbers of dying cells in the rd1 outer nuclear layer (ONL). The
scatter plot ( g) shows the percentage of TUNEL -positive cells in the ONL. Statistical significance
was assessed using one -way ANOVA and Tukey’s multiple comparison post hoc testing performed
between the control ( rd1 untreated) and 20 -
µ
M calpastatin (CAST20), 100-
µ
M D -cis-diltiazem
(D100), 1-
µM Ol
aparib (OLA1), and 20 -
µ
M calpastatin combined with 1 -
µM Ol
aparib
(CAST20+OLA1). Except for D-cis -diltiazem, all treatments decreased rd1 retinal degeneration. The
combination treatment CAST20+OLA1 did not improve the therapeutic effect seen with the respec-
tive single treatments. Untr. wt: 7 explants from 4 different mice; untr. rd1 : 27/27; CAST20 rd1 : 8/8;
D100 rd1: 16/16; OLA1 rd1 : 17/17; CAST20+OLA1 rd1 : 8/8; error bars represent SD; ns = p > 0.05 , ** =
p ≤ 0.01 , and **** = p ≤ 0.0001. ONL = outer nuclear layer, INL = inner nuclear layer, GCL = ganglion
cell layer. Scale bar = 50 µm.
Figure 3. Effects of calpastatin, D-cis-diltiazem, Olaparib, and combination treatments on rd1retinal
cell viability. The TUNEL assay labeled dying cells (magenta) in wtand rd1retinal explant cultures.
DAPI (grey) was used as a nuclear counterstain. Untreated wt(a) and rd1control retina (untr.; b) were
compared to retina treated with either 20- M calpastatin (CAST20, c), 100- M D-cis-diltiazem (D100, d),
1-M Olaparib (OLA1, e), or 20- M calpastatin combined with 1- M Olaparib (CAST20+OLA1, f).
Note the large numbers of dying cells in the rd1outer nuclear layer (ONL). The scatter plot ( g) shows
the percentage of TUNEL-positive cells in the ONL. Statistical significance was assessed using one-
way ANOVA and Tukey’s multiple comparison post hoc testing performed between the control ( rd1
untreated) and 20- M calpastatin (CAST20), 100- M D-cis-diltiazem (D100), 1- M Olaparib (OLA1),
and 20- M calpastatin combined with 1- M Olaparib (CAST20+OLA1). Except for D-cis-diltiazem,
all treatments decreased rd1retinal degeneration. The combination treatment CAST20+OLA1 did not
improve the therapeutic effect seen with the respective single treatments. Untr. wt: 7 explants from
4 different mice; untr. rd1: 27/27; CAST20 rd1: 8/8; D100 rd1: 16/16; OLA1 rd1: 17/17; CAST20+OLA1
rd1: 8/8; error bars represent SD; ns = p> 0.05, ** = p0.01, and **** = p0.0001. ONL = outer nuclear
layer, INL = inner nuclear layer, GCL = ganglion cell layer. Scale bar = 50 m.
Biomolecules 2022, 12, 455 8 of 21
3.4. Calpain-2 Activation Is Controlled by Both CNG Channel and VGCC Activity
To further investigate the role of the CNG channel for the activation of calpain and
PARP , we used rd1*Cngb1 / mice, i.e., rd1mice, in which the rod photoreceptor CNG
channel was not functional. These animals were generated by crossbreeding the Pde6b
mutant rd1mice with mice that lack the gene encoding for the beta subunit of the rod
CNG channel [ 22]. Without the Cngb1 subunit rod, CNG channels are not properly formed,
and rods essentially lose their ability to generate cGMP-gated currents and responses
to light [ 37]. Previously, we showed that the photoreceptors of rd1*Cngb1 / mice are
partially protected from cell death, with the peak of cell death shifting from P13 in rd1
retinas to approximately P18 in rd1*Cngb1 / retinas [ 22]. To account for this slower retinal
degeneration phenotype in the double-mutant retina, we performed D-cis-diltiazem and
Olaparib treatments on organotypic retinal tissue cultures derived from rd1*Cngb1 /
animals at P7 and cultured until P17. To assess how the inhibitors affected calpain, we used
a calpain activity assay and immunostaining for activated calpain-2.
The double-mutant rd1*Cngb1 / mice had a significantly lower number of calpain-positive
cells in the ONL than rd1mice (Figure 4a–d, cf. Figure 1). However, even in the absence of
functional rod CNG channels, there were still more calpain-positive cells in the ONL of the
rd1*Cngb1 / retina than in the wtretina (Figure 4a–d). While treatment with D-cis-diltiazem did
not decrease the overall calpain activity in double-mutant retina (Figure 4e), it did significantly
reduce calpain-2 activation ( p= 0.0026, Figure 4f) when compared to the rd1*Cngb1 / untreated
control. As opposed to the rd1single mutant situation, in the double-mutant retina, Olaparib
failed to reduce the calpain activity and calpain-2 activation. Overall, this data suggested
that CNG channel function was required for the PARP-mediated activation of calpain, while
calpain-2 activation was dependent on VGCC activity .
3.5. D-cis-diltiazem Reduced P ARP Activity in the rd1*Cngb1 / Retina
To dissect the contribution of the CNG channel and the VGCC to the activation of PARP
in degenerating rd1photoreceptors, we quantified the PARP activity on unfixed retinal tissue
sections and assessed the PAR accumulation on fixed tissue sections from organotypic retinal
explant cultures derived from rd1*Cngb1 / mice. These cultures were then exposed to either
D-cis-diltiazem or Olaparib, with untreated wtexplants serving as additional controls.
Inwtretina, both the numbers of ONL cells displaying PARP activity and PAR ac-
cumulation were much lower (Figure 5a,b, cf. Figure 2and Appendix AScheme A1)
when compared to rd1*Cngb1 / double-mutant retina (Figure 5c,d; quantification in Fig-
ure5i,j). Nevertheless, rd1*Cngb1 / double-mutant retina displayed fewer PARP activity
and PAR-positive cells in the ONL when compared with rd1single mutants (cf. Figure 5
with Figure 2), indicating that CNG channels might be related to the regulation of PARP
activity. When treated with D-cis-diltiazem, the percentages of photoreceptors showing
PARP activity and PAR accumulation were significantly decreased in rd1*Cngb1 / retina
(Figure 5e,f; PARP activity assay: p= 0.0011; PAR staining: p< 0.0001). When treated
with Olaparib, rd1*Cngb1 / retina showed a similar marked reduction of PARP activity
and PAR accumulation (Figure 5g,h; p< 0.0001). This data again hinted at a relationship
between VGCC opening and PARP activity.
3.6. Effect of D-cis-diltiazem and Olaparib on rd1*Cngb1 / Photoreceptor Degeneration
To evaluate the effect of VGCC and PARP inhibition on photoreceptor degeneration,
we performed TUNEL staining to label cell death on organotypic retinal explant cultures
derived from rd1*Cngb1 / mice and treated these with either D-cis-diltiazem or Olaparib.
Remarkably, D-cis-diltiazem significantly reduced the percentage of TUNEL-positive
cells in the ONL of rd1*Cngb1 / retina when compared to the untreated control (p = 0.0134,
Figure 6b,c, quantification in e). In contrast, Olaparib did not show a similar effect on cell
death in the rd1*Cngb1 / retina (Figure 6b,d). These results indicated that, in the absence
of rod CNG channel function, the cell death of photoreceptors depended on VGCC, but not
on PARP , activity.
Biomolecules 2022, 12, 455 9 of 21
Biomolecules 2022, 12, x FOR PEER REVIEW 9 of 22
Figure 4 . Effects of D -cis-diltiazem and Olaparib on calpain activity in rd1*Cngb1−/− retina. The cal-
pain activity assay (blue) and an immunostaining for activated calpain -2 (yellow) were performed
on wt (a,b) and rd1* Cngb1−/− retina. DAP I (grey) was used as nuclear counterstaining. Untreated
rd1*Cngb1−/− retina (untr.; c ,d) was compared to retina treated with D -cis-diltiazem (e ,f) or Olaparib
(g,h). The scatter plots show the percentages of ONL -positive cells for calpain activity ( i) and acti-
vated calpain -2 (j) in the wt and treated rd1* Cngb1−/− retina compared with the rd1*Cngb1−/− control
(untr.). Statistical significance was assessed using one -way ANOVA and Tukey’s multiple compar-
ison post hoc testing performed between the control ( rd1*Cngb1−/− untreated ), 100- µ M D-cis -dilti-
azem (D100), and 1-
µ
M Olaparib (OLA1). In rd1*Cngb1−/−, only D-cis -diltiazem reduced the cells
positive for activated calpain -2. In the calpain activity assay, untr. wt: 5 explants from 3 different
mice; untr. rd1 *Cngb1−/−: 11/11; D100 rd1 *Cngb1−/−: 6/6; OLA1 rd1 *Cngb1−/−: 6/6; in calpain -2 im-
munostaining, untr. wt : 6/3; untr. rd1 *Cngb1−/−: 17/17; D100 rd1*Cngb1−/−: 10/10; OLA1 rd1 *Cngb1−/−:
10/10; error bars represent SD; ns = p > 0.05 and **= p ≤ 0.01. ToPro (red) and ONL = outer nuclear
layer, INL = inner nuclear layer, G CL = ganglion cell layer. Scale bar = 50 µm.
3.5. D-cis-diltiazem Reduced PARP Activity in the rd1*Cngb1−/− Retina
To dissect the contribution of the CNG channel and the VGCC to the activation of
PARP in degenerating rd1 photoreceptors, we quantified the PARP activity on unfixed
retinal tissue sections and assessed the PAR accumulation on fixed tissue sections from
Figure 4. Effects of D-cis-diltiazem and Olaparib on calpain activity in rd1*Cngb1 / retina. The
calpain activity assay (blue) and an immunostaining for activated calpain-2 (yellow) were performed
onwt(a,b) and rd1*Cngb1 / retina. DAPI (grey) was used as nuclear counterstaining. Untreated
rd1*Cngb1 / retina (untr.; c,d) was compared to retina treated with D-cis-diltiazem ( e,f) or Olaparib
(g,h). The scatter plots show the percentages of ONL-positive cells for calpain activity ( i) and
activated calpain-2 ( j) in the wtand treated rd1*Cngb1 / retina compared with the rd1*Cngb1 /
control (untr.). Statistical significance was assessed using one-way ANOVA and Tukey’s multiple
comparison post hoc testing performed between the control (rd1*Cngb1 / untreated), 100- M
D-cis-diltiazem (D100), and 1- M Olaparib (OLA1). In rd1*Cngb1 / , only D-cis-diltiazem reduced
the cells positive for activated calpain-2. In the calpain activity assay, untr. wt: 5 explants from
3 different mice; untr. rd1*Cngb1 / : 11/11; D100 rd1*Cngb1 / : 6/6; OLA1 rd1*Cngb1 / : 6/6; in
calpain-2 immunostaining, untr. wt: 6/3; untr. rd1*Cngb1 / : 17/17; D100 rd1*Cngb1 / : 10/10;
OLA1 rd1*Cngb1 / : 10/10; error bars represent SD; ns = p> 0.05 and ** = p0.01. ToPro (red) and
ONL = outer nuclear layer, INL = inner nuclear layer, GCL = ganglion cell layer. Scale bar = 50 m.
Biomolecules 2022, 12, 455 10 of 21
Biomolecules 2022, 12, x FOR PEER REVIEW 10 of 22
organotypic retinal explant cultures derived from rd1* Cngb1−/− mice. These cultures were
then exposed to either D -cis-diltiazem or O laparib, with untreated wt explants serving as
additional controls.
In wt retina, both the numbers of ONL cells displaying PARP activity and PAR accu-
mulation were much lower (Figure 5a ,b, cf. Figure 2 and Appendix A Scheme A1) when
compared to rd1*Cngb1−/− double -mutant retina (Figure 5c,d; quantification in Figure 5i,j).
Nevertheless, rd1*Cngb1−/− double -mutant retina displayed fewer PARP activity and PAR -
positive cells in the ONL when compared with rd1 single mutants ( cf. Figure 5 with Figure
2), indicating that CNG channels might be related to the regulation of PARP activity.
When treated with D -cis-diltiazem, the percentages of photoreceptors showing PARP ac-
tivity and PAR accumulation were significantly decreased in rd1* Cngb1−/− retina (Figure
5e,f; PARP activity assay: p = 0.0011; PAR staining: p < 0.0001). When treated with
Olaparib, rd1*Cngb1−/− retina showed a similar marked reduction of PARP activity and
PAR accumulation (Figure 5g,h; p < 0.0001). This data again hinted at a relationship be-
tween VGCC opening and PARP activity.
Figure 5. Effects of D -cis-diltiazem and Olaparib on PARP activity and PAR accumulation in
rd1*Cngb1−/− double -mutant retina. The PARP activity assay (green) and immunostaining for PAR
Figure 5. Effects of D-cis-diltiazem and Olaparib on PARP activity and PAR accumulation in
rd1*Cngb1 / double-mutant retina. The PARP activity assay (green) and immunostaining for
PAR (black) were performed on wt(a,b) and rd1*Cngb1 / retina ( c–h). DAPI (grey) was used as nu-
clear counterstaining. Untreated rd1*Cngb1 / retina (untr.; c,d) was compared to retina treated with
D-cis-diltiazem (e,f) or Olaparib (g,h). The scatter plots show the percentages of outer nuclear layer
(ONL) cells positive for PARP activity ( i) and PAR ( j) inwtand treated rd1*Cngb1 / retina compared
to the rd1*Cngb1 / control (untr.). Statistical significance was assessed using one-way ANOVA
and Tukey’s multiple comparison post hoc testing performed between the control (rd1*Cngb1 /
untreated) and 100- M D-cis-diltiazem (D100) or 1- M Olaparib (OLA1). D-cis-diltiazem strongly
decreased the PARP activity and PAR. In the PARP activity assay, untr. wt: 4 explants from 2 different
mice; untr. rd1*Cngb1 / : 9/9; D100 rd1*Cngb1 / : 4/4; OLA1 rd1*Cngb1 / : 6/6. In PAR DAB
staining, untr. wt: 6/3; untr. rd1*Cngb1 / : 17/17; D100 rd1*Cngb1 / : 10/10; OLA1 rd1*Cngb1 / :
10/10; error bars represent SD; ** = p0.01 and **** = p0.0001. INL = inner nuclear layer,
GCL = ganglion cell layer. Scale bar = 50 m.
Biomolecules 2022, 12, 455 11 of 21
Biomolecules 2022, 12, x FOR PEER REVIEW 11 of 22
(black) were performed on wt (a,b) and rd1* Cngb1−/− retina ( c–h). DAPI (grey) was used as nuclear
counterstaining. Untreated rd1* Cngb1−/− retina (untr.; c ,d) was compared to retina treated with D -
cis-diltiazem (e ,f) or O laparib (g ,h). The scatter plots show the percentages of outer nuclear layer
(ONL) cells positive for PARP activity ( i) and PAR (j) in wt and treated rd1* Cngb1−/− retina compared
to the rd1*Cngb1−/− control (untr.). Statistical significance was assessed using one -way ANOVA and
Tukey’s multiple comparison post hoc testing performed between the control ( rd1*Cngb1−/− un-
treated) and 100-
µ
M D-cis -diltiazem (D100) or 1-
µ
M O laparib (OLA1). D-cis -diltiazem strongly de-
creased the PARP activity and PAR. In the PARP activity assay, untr. wt: 4 explants from 2 different
mice; untr. rd1 *Cngb1−/−: 9/9; D100 rd1*Cngb1−/−: 4/4; OLA1 rd1*Cngb1−/−: 6/6. In PAR DAB staining,
untr. wt: 6/3; untr. rd1 *Cngb1−/−: 17/17; D100 rd1 *Cngb1−/−: 10/10; OLA1 rd1 *Cngb1−/−: 10/10; error bars
represent SD; ** = p ≤ 0.01 and **** = p ≤ 0.0001. INL = inner nuclear layer, GCL = ganglion cell layer.
Scale bar = 50 µm.
3.6. Effect of D -cis-diltiazem and Olaparib on rd1* Cngb1−/− Photoreceptor Degeneration
To evaluate the effect of V GCC and PARP inhibition on photoreceptor degeneration,
we performed TUNEL staining to label cell death on organotypic retinal explant cultures
derived from rd1*Cngb1−/− mice and treated these with either D -cis-diltiazem or Olaparib.
Remarkably, D -cis-diltiazem significantly reduced the percentage of TUNEL- posi-
tive cells in the ONL of rd1* Cngb1−/− retina when compared to the untreated control ( p =
0.0134, Figure 6b,c, quantification in e). In contrast, O laparib did not show a similar effect
on cell death in the rd1* Cngb1−/− retina (Figure 6b,d). These results indicated that , in the
absence of rod CNG channel function, the cell death of photoreceptors depended on
VGCC , but not on PARP , activity.
Figure 6. Effects of D-cis-diltiazem and Olaparib on rd1*Cngb1−/− retinal cell viability. The TUNEL
assay labeled dying cells (magenta) in wild-type (wt) and rd1*Cngb1−/− retinal explant cultures. DAPI
(grey) was used as a nuclear counterstain. (a) In wt retina, only a small fraction of cells in the outer
nuclear layer (ONL) were TUNEL-positive. (b) Untreated (untr.) rd1*Cngb1−/− double-mutant retina
was compared to retina treated with either 100-µM D-cis-diltiazem (D100, (c)) or 1-µM Olaparib
(OLA1, (d)). (e) The scatter plot shows the percentage of TUNEL-positive cells. Statistical significance
was assessed using one-way ANOVA and Tukey’s multiple comparison post hoc testing performed
between the control (rd1*Cngb1−/− untreated) and 20-µM calpastatin (CAST20), 100-µM D-cis-diltiazem
(D100), 1-µM Olaparib (OLA1), and 20-µM calpastatin combined with 1-µM Olaparib
(CAST20+OLA1). Only D-cis-diltiazem alleviated the rd1*Cngb1−/− retinal degeneration. Untr. wt:
5 explants from 3 different mice; untr. rd1*Cngb1−/−: 26/26; D100 rd1*Cngb1−/−: 16/16; OLA1
rd1*Cngb1−/−: 16/16; error bars represent SD; ns = p > 0.05 and * = p ≤ 0.05. INL = inner nuclear
layer, GCL = ganglion cell layer. Scale bar = 50 µm.
4. Discussion
In IRDs, excessive activation of PARP and calpain is closely related to the execution
of cGMP-induced cell death [ 5], yet it has been unclear whether these two enzymes act
independently or in concert within the same cell death pathway. Our present study
confirms that PARP and calpain take part in the same pathway and that PARP activity
occurs upstream of calpain activity. We also show that the two major Ca2+ sources in
photoreceptors, the CNG channel and VGCC, contribute to PARP activation.
4.1. Calpain Activation Occurs Downstream of PARP
Calpains belong to a family of Ca2+-dependent thiol proteases of which fifteen members
have been identified to date [ 20]. The best-characterized calpains in the central nervous
system are two distinct heterodimeric subtypes: calpain-1 and calpain-2 [ 38], also known as
µ-calpain and m-calpain, since they are activated by 3–50-µM Ca2+ (i.e., micromolar Ca2+)
and 0.4–0.8-mM Ca2+ (i.e., millimolar Ca2+), respectively [ 39]. Calpain-1 and calpain-2 are
thought to play opposing roles in neurodegeneration, with the activation of calpain-1 coun-
teracting degeneration of neurons and calpain-2 promoting it [ 25,40,41]. Typically, these
two calpains are not active at the same time and may even inactivate each other [ 24,42].
Our immunostaining for activated calpain-2 suggests that, in degenerating photoreceptors,
most of the signal detected by the in situ calpain activity assay stems from calpain-2 activity.
In non-photoreceptor cells, activated calpains are known to degrade substrate
proteins, such as α-spectrin, Bcl-2 family members, RIP kinase, apoptosis-inducing factor
(AIF), and PARP-1 [ 28,29,38,43,44]. Calpastatin, as the endogenous inhibitor of calpains,
inhibits predominantly calpain-1 and calpain-2 but also reduces the activity of calpain-8 and
calpain-9 [ 45]. Accordingly, the application of calpastatin in our experiments decreased
both the calpain activity and calpain-2 immunosignal and delayed photoreceptor degenera-
tion in rd1 retinal explants, as demonstrated by the TUNEL assay. This data is consistent
with previous research [ 26] and highlights the importance of calpain-dependent proteolysis
for photoreceptor degeneration.
Based on previous reports that found calpain to be able to cleave PARP, our initial
hypothesis was that calpain might act upstream of PARP during photoreceptor degen-
eration [ 28,29]. However, treatment of retinal explants with calpastatin had no effect on
PARP activity or the accumulation of PAR. Yet, the other way around, the PARP inhibitor
Olaparib strongly decreased calpain activity. Together, this provided a strong indication
that activation of PARP occurred upstream of calpain.
4.2. VGCC and CNG Channel Contribute to Calpain Activation
When compared with the calpastatin treatment, D-cis-diltiazem had a very similar
effect on calpain activity in rd1 retina, suggesting that the L-type calcium channel found in
the cell bodies and at the synaptic terminals of rods was involved in providing the Ca2+
required for calpain activation. Even in the absence of Cngb1 expression, photoreceptor
calpain activity can be observed, further supporting the role of VGCCs in calpain acti-
vation [ 46]. D-cis-diltiazem administration also significantly reduced rd1 PARP activity.
However, D-cis-diltiazem did not significantly reduce the TUNEL signal in rd1 retinal
explants, in line with previous studies that found that pharmacological inhibition or genetic
inactivation of VGCCs had only a short-term effect, if at all, on delaying rd1 photoreceptor
degeneration [ 27,47,48]. Since D-cis-diltiazem may also produce oxidative stress in depolar-
ized rods, this may have offset the beneficial effects of the reduction of PARP and calpain
activity [36].
The major Ca2+ source in rod photoreceptor outer segments is the rod CNG channel, a
heterotetrameric cGMP-gated cation channel assembled from three Cnga1 and one Cngb1
subunits [ 49]. Loss of the Cngb1 subunit leads to degradation of the Cnga1 subunit and,
essentially, loss of the rod CNG channel, thus eliminating the cGMP-gated dark current and
preventing the rods from responding to light by hyperpolarization [ 49]. In the rd1 model,
the aberrantly high levels of cGMP lead to over-activation of the CNG channels and, thus,
to increased Na+- and Ca2+-influx and persistent depolarization [ 7]. Paradoxically, in the
Cngb1 knockout model, where rods lose the ability of light-dependent hyperpolarization,
the resting membrane potential remains essentially unchanged, and the cells are constantly
depolarized [ 46]. In rd1 rod photoreceptors, the strong Na+- and Ca2+-influx that the CNG
channel mediates in its open state needs to be counterbalanced, a function that is fulfilled
by the ATP-driven Na+/K+ exchanger (NKX). Importantly, NKX activity alone represents
at least 50% of the total ATP expenditure of a photoreceptor, thereby linking CNG channel
activity to energy metabolism [50,51].
Genetic deletion of the Cngb1 subunit of the rod CNG channel affords robust rd1
photoreceptor neuroprotection [ 22], even though genetic deletion of Cngb1 or the L-type
channel Cacna1f [52] alone led to photoreceptor degeneration, albeit with slow rates of
progression. Moreover, the combined inhibition of VGCC and the CNG channel could
interrupt Ca2+ homeostasis and cause severe photoreceptor degeneration [ 53]. On the other
hand, when we used D-cis-diltiazem on rd1*Cngb1−/− mice, photoreceptor cell death was
reduced, suggesting that D-cis-diltiazem could potentially exert therapeutic effects in the
absence of CNG channel function. Although D-cis-diltiazem has been reported to partially
inhibit CNG channels in rod outer segments [ 54,55], our data indirectly rule out that it had
an effect on rod degeneration in rd1 retinal explants.
4.3. PARP Regulates Calpain via a Pathway That Depends on CNG Channel Function
An abnormal activation of PARP in degenerating photoreceptors is well-documented [ 5,6,56,57],
and accordingly, the PARP inhibitor Olaparib decreased photoreceptor cell death, consistent
with previous studies [ 13,58–61]. PARP inhibitors are primarily used for cancer therapy due
to their ability to prevent DNA repair, and several PARP inhibitors are being tested clinically
or have already been approved for clinical use [ 62]. Notably, Olaparib became the first PARP
inhibitor to be approved by the FDA to treat metastatic breast cancer in January 2018 [ 63].
Currently, Olaparib is the most specific PARP inhibitor available, showing no binding to
any of 392 unique human kinases [ 64]. Among the known off-targets is inosine monophosphate
dehydrogenase 2 (IMPDH2) [ 65], which shares 84% sequence homology with IMPDH1,
an isozyme that is strongly expressed in the retina [ 66]. Since IMPDH1 catalyzes the
rate-limiting step of GTP production, the substrate employed by retinal guanylate cyclase
(retGC) for cGMP synthesis [ 14], an off-target inhibition of IMPDH1 by Olaparib could
potentially reduce cGMP production [ 67]. However, our cGMP immunostaining revealed
that the number of cGMP-positive cells in the rd1 retina was not decreased in the Olaparib-
treated group when compared to the control, suggesting that interactions of Olaparib with
IMPDH1 were not responsible for the effects observed.
PARP activity consumes large amounts of NAD+ with important ramifications for cell
metabolism. Since the biosynthesis of NAD+ occurs directly from nicotinamide mononu-
cleotide (NMN) and ATP via nicotinamide mononucleotide adenylyl transferases (NM-
NATs), excessive consumption of NAD+ can lead to ATP depletion [ 68]. Additionally,
NAD+ is a key substrate in the tricarboxylic acid cycle, and its deficiency could result in
mitochondrial dysfunction [ 69], which can further aggravate ATP depletion [ 70]. Moreover,
PARP joins ADP-ribose units from NAD+ to form PAR [ 17], high levels of which may
also induce mitochondrial dysfunction [ 71]. The dysfunction induced by NAD+ depletion
and/or PAR accumulation may depolarize the mitochondrial membrane potential and open
the mitochondrial permeability transition pore [ 72], which could then release mitochondrial
Ca2+ to the cytoplasm and activate Ca2+-dependent proteases [ 73–75]. Therefore, it appears
likely that PARP inhibition, through reduced NAD+ consumption and PAR generation, will
alleviate mitochondrial stress. This, in turn, should increase the availability of ATP that
may be used to, for instance, drive the plasma membrane Ca2+-ATPase (PMCA), which, in
turn, will reduce the intracellular Ca2+ levels and, thus, calpain activity.
In rd1*Cngb1−/− mice, the lack of CNGB1 strongly reduces the Na+- and Ca2+-
influx [ 7,76]. In rd1*Cngb1−/− retina, Olaparib treatment did not reduce the calpain activity,
calpain-2 activation, or photoreceptor cell death. This points to a mechanism where CNG
channel function is essential for PARP-dependent activation of calpain and the execution
of photoreceptor cell death. On the other hand, in rd1*Cngb1−/− retinal explants, D-cis-
diltiazem treatment reduced PARP activity. This suggests that, in the absence of CNG
channel function, PARP activity is dependent, at least in part, on VGCC-mediated Ca2+-
influx. Indeed, PARP activity has been associated with elevated intracellular Ca2+ lev-
els [77–79], and it is thus conceivable that both CNG channels and/or VGCCs contribute
to PARP activation. The exact nature of this detrimental interaction between CNG chan-
nels, VGCCs, and PARP remains unclear but could potentially be related to alterations
of Na+- and Ca2+-homeostasis and the increased demands on cellular metabolism that
this represents. Moreover, because of the rapid degeneration phenotype in rd1 mice, the
results obtained here relate to retina that is still not fully developed and immature. Future
studies may reveal to what extent the results obtained in early postnatal retina can be
extended to mature adult retina and, if so, how this may apply to IRD patients. At any
rate, in genetically very heterogeneous IRDs, these findings, together with the fact that
cGMP-induced cell death may show an overlap with PARthanatos [ 6], highlight PARP as a
promising target for the development of mutation-independent therapies.
4.4. Connecting VGCCs and CNG Channels with the Activity of PARP and Calpain
In the following we will attempt to summarize the main results of this study and
provide an overview of how the various experimental conditions may have affected the
metabolism in the photoreceptor outer segment, inner segment, cell body, and nucleus
and how, in turn, this may promote photoreceptor survival. As discussed above, high
cGMP observed in degenerating rd1 rod photoreceptors activates the CNG channel in the
outer segment, leading to Ca2+-influx and depolarization [ 7], which then can activate
VGCCs in the cell body, causing more Ca2+-influx (Figure 7). In the cell body, Ca2+-
extrusion depends largely on the ATP-driven plasma membrane Ca2+-ATPase (PMCA) [ 51].
A lack of ATP in degenerating rods may cause a decrease of PMCA activity and further
potentiate the accumulation of intracellular Ca2+. In either case, high Ca2+ levels may
activate calpain-type proteases [ 73], notably calpain-2, which is considered to promote
neuronal degeneration [ 38]. Independent of CNG channels, high cGMP levels can also
activate cGMP-dependent protein kinase (PKG), which may be associated with histone
deacetylase (HDAC) activity in the nucleus, leading to chromatin condensation and DNA
damage [ 6]. This, in turn, may trigger the over-activation of PARP [ 6] and cause the
execution of a PARthanatos-related form of cell death [19,80] (Figure 7).
Treatment with calpain inhibitors, such as calpastatin, may decrease proteolytic dam-
age and thereby prolong photoreceptor survival. Further upstream, blocking VGCCs with
D-cis-diltiazem will reduce the Ca2+-levels in the photoreceptor cell body and prevent cal-
pain activation. Moreover, since VGCCs appear to be involved in PARP activation, reduced
PARP activity and PAR levels likely alleviate mitochondrial stress, allowing the cell to
maintain ATP production. This becomes evident through the use of the PARP inhibitor
Olaparib, which decreases PAR generation. This, in turn, may preserve mitochondrial
function and intracellular ATP levels, enabling PMCA to extrude Ca2+ and keep the calpain
activity low (Figure 7).
Figure 7. Differential effects of experimental conditions on cGMP-dependent cell death in rd1
photoreceptors. The mutation-induced cGMP accumulation activates cyclic nucleotide-gated (CNG)
channels in the outer segment, leading to Na+- and Ca2+-influx and photoreceptor depolarization.
This leads to opening of voltage-gated Ca2+-channels (VGCCs) in the cell body, causing further
Ca2+-influx. In the cell body, high Ca2+ levels may activate calpain if not controlled by ATP-dependent
plasma membrane Ca2+-ATPase (PMCA). In addition, cGMP-dependent activation of protein kinase
G (PKG) has been associated with histone-deacetylase (HDAC) activity, causing chromatin
condensation and DNA breaks, which may trigger PARP activation. Excessive consumption of NAD+ by
PARP and the production of PAR may cause mitochondrial dysfunction, leading to ATP shortage.
Calpastatin treatment blocks calpain activation, decreasing proteolytic damage to the cell, even in the
presence of CNG channel/VGCC-mediated Ca2+-influx. D-cis-diltiazem inhibits VGCCs in the cell
body, reducing intracellular Ca2+-levels and calpain activity. Moreover, VGCCs could be involved in
PARP activation, even though D-cis-diltiazem fails to delay rd1 rod degeneration. Olaparib blocks
PARP activity, decreasing NAD+ consumption and PAR generation. This may preserve mitochondrial
function and intracellular ATP levels, allowing PMCA to extrude Ca2+ and keeping calpain
activity low.
5. Conclusions
In IRDs, the activities of calpain and PARP are both closely associated with photorecep-
tor degeneration, and PARP is a known target for calpain-dependent cleavage. However,
here, we demonstrate that, in cGMP-induced photoreceptor degeneration, PARP regulates
calpain activity, likely in an indirect fashion, via NAD+/ATP depletion or PAR-induced
mitochondrial dysfunction. In addition, PARP activity is likely to be controlled by the
activities of the VGCC and CNG channels. Overall, these results suggest PARP as a partic-
ularly attractive target for future therapeutic interventions in IRDs. The availability of a
number of clinically tested PARP inhibitors [ 81,82] further enhances the perspectives for
clinical translation.
Author Contributions: Conceptualization, F.P .-D. and K.J.; formal analysis, J.Y., A.G. and S.D.;
investigation, J.Y.; resources, F.P .-D., R.M., S.M. and M.W.S.; data curation, J.Y.; writing—original
draft preparation, J.Y.; writing—review and editing, J.Y., K.J., R.M., S.M., A.G., M.W.S. and F.P .-D.;
visualization, J.Y.; supervision, F.P .-D.; project administration, F.P .-D.; and funding acquisition, F.P .-D.,
K.J. and M.W.S. All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by the German Ministry for Education and Research (BMBF;
TargetRD: 16GW0267K and 16GW02678), the German Research Council (DFG: MU 4138/2-1), Yunnan
Applied Basic Research Projects (No. 2019FB093), and the Charlotte and Tistou Kerstan Foundation.
The APC was partially covered by the Open Access Publishing Fund of the University of Tübingen.
Institutional Review Board Statement: The study was conducted according to the ARVO statement
for the use of animals in ophthalmic and vision research and complied with the regulations of the
German law on animal protection. The experimental protocols were reviewed and approved by
the Tübingen University committee on animal protection (Einrichtung für Tierschutz, Tierärztlicher
Dienst und Labortierkunde) and registered under AK02/19M.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Acknowledgments: The authors thank Norman Rieger for the excellent technical assistance and
Soumaya Belhadj for advice on the calpain activity assay. Figure 7 was created with BioRender
(https://biorender.com accessed on 4 February 2022).
Conflicts of Interest: The authors declare no conflict of interest.
Appendix A
Scheme A1. Effects of calpastatin, D-cis-diltiazem, and Olaparib on calpain-2 activation and PAR.
Calpain-2 immunostaining (yellow; a–e) and PAR DAB staining (black, f–j) were performed on wt
and rd1 retina. DAPI (grey) was used as nuclear counterstaining. Untreated rd1 retina (untr.; b,g) was
compared to retina treated with calpastatin (c,h), D-cis-diltiazem (d,i), and Olaparib (e,j), respectively.
The scatter plots show the percentages of ONL-positive cells for calpain-2 (k) and PAR (l) in wt
and treated rd1 retina, compared with rd1 control (untr.). Statistical significance was assessed using
one-way ANOVA, and Tukey’s multiple comparison post hoc testing was performed between the
control and 20-µM calpastatin (CAST20), 100-µM D-cis-diltiazem (D100), and 1-µM Olaparib (OLA1).
In rd1 retina, all treatments decreased the numbers of cells positive for calpain-2, while the number of
PAR-positive cells was not reduced by CAST20. In calpain-2 immunostaining: Untr. wt: 4 explants
from 2 different mice; untr. rd1: 15/15; CAST20 rd1: 10/10; D100 rd1: 10/10; OLA1 rd1: 10/10. In
PAR DAB staining: untr. wt: 4/2; untr. rd1: 16/16; CAST20 rd1: 6/6; D100 rd1: 10/10; OLA1 rd1:
10/10; error bars represent SD; ns = p > 0.05, ** = p ≤ 0.01, *** = p ≤ 0.001, and **** = p ≤ 0.0001.
ONL = outer nuclear layer, INL = inner nuclear layer, GCL = ganglion cell layer. Scale bar = 50 µm.
Scheme A2. Effect of DMSO on calpain activity, PARP activity, and TUNEL staining in rd1 retinal
explants. Calpain activity (blue), PARP activity (green), and TUNEL (magenta) were used in rd1
retinal explant cultures. ToPro (red) and DAPI (grey) were used as nuclear counterstains. rd1 control
retina (untr.; a,d,g) was compared to retina treated with 0.1% DMSO (DMSO; b,e,h). There was
no difference between positive cells detected in the rd1 outer nuclear layer (ONL), with or without
DMSO. The scatter plots show the percentage of cells positive for calpain activity (c), PARP activity (f),
and TUNEL (i). In the calpain activity assay, untr. rd1: 16 explants derived from 16 different animals;
DMSO rd1: 6/6. In the PARP activity assay, untr. rd1: 18/18; DMSO rd1: 6/6. In the TUNEL assay,
untr. rd1: 27/27; DMSO rd1: 6/6; error bars represent SD; ns = p > 0.05, INL = inner nuclear layer,
GCL = ganglion cell layer. Scale bar = 50 µm.
Scheme A3. Photoreceptor accumulation of cGMP in different experimental conditions. cGMP
immunostaining (red) was used in wt and rd1 retinal explant cultures. DAPI (grey) was used as
a nuclear counterstain. In wt retina (untr.; a), only a few cells in the outer nuclear layer (ONL)
were labeled cGMP-positive. rd1 control retina (untr.; b) was compared to retina treated with either
100-µM D-cis-diltiazem (D100, c) or 1-µM Olaparib (OLA1, d). Statistical significance was assessed
using one-way ANOVA, and Tukey’s multiple comparison post hoc testing was performed between
control, 100-µM D-cis-diltiazem (D100), and 1-µM Olaparib (OLA1). Note the large numbers of
cGMP-positive cells in the rd1 ONL. The scatter plot (e) shows the percentage of cGMP-positive
cells in the ONL. Neither D-cis-diltiazem nor Olaparib reduced the number of cGMP-positive cells.
Untr. wt: 4 explants from 2 different mice; untr. rd1: 13/13; D100 rd1: 10/10; OLA1 rd1: 10/10;
error bars represent SD; ns = p > 0.05. INL = inner nuclear layer, GCL = ganglion cell layer.
Scale bar = 50 µm.
References
1. Duncan, J.L.; Pierce, E.A.; Laster, A.M.; Daiger, S.P .; Birch, D.G.; Ash, J.D.; Iannaccone, A.; Flannery, J.G.; Sahel, J.A.; Zack, D.J.;
et al. Inherited retinal degenerations: Current landscape and knowledge gaps. Transl. Vis. Sci. Technol. 2018, 7, 6.
[CrossRef]
2. Bertelsen, M.; Jensen, H.; Bregnhøj, J.F.; Rosenberg, T. Prevalence of generalized retinal dystrophy in Denmark. Ophthalmic
Epidemiol. 2014, 21, 217–223. [CrossRef]
3. Hartong, D.T.; Berson, E.L.; Dryja, T.P . Retinitis pigmentosa. Lancet 2006, 368, 1795–1809. [CrossRef]
4. Sahel, J.A.; Marazova, K.; Audo, I. Clinical characteristics and current therapies for inherited retinal degenerations. Cold Spring
Harb. Perspect. Med. 2015, 5, a017111. [CrossRef]
5. Arango-Gonzalez, B.; Trifunović, D.; Sahaboglu, A.; Kranz, K.; Michalakis, S.; Farinelli, P.; Koch, S.; Koch, F.; Cottet, S.; Janssen-
Bienhold, U.; et al. Identification of a common non-apoptotic cell death mechanism in hereditary retinal degeneration. PLoS ONE
2014, 9, e112142. [CrossRef]
6. Yan, J.; Chen, Y.; Zhu, Y.; Paquet-Durand, F. Programmed Non-Apoptotic Cell Death in Hereditary Retinal Degeneration:
Crosstalk between cGMP-Dependent Pathways and PARthanatos? Int. J. Mol. Sci. 2021, 22, 10567. [CrossRef]
7. Das, S.; Chen, Y.; Yan, J.; Christensen, G.; Belhadj, S.; Tolone, A.; Paquet-Durand, F. The role of cGMP-signalling and calcium-
signalling in photoreceptor cell death: Perspectives for therapy development. Pflug. Arch. Eur. J. Physiol. 2021 ,473, 1411–1421.
[CrossRef]
8. Keeler, C.E. The inheritance of a retinal abnormality in white mice. Proc. Natl. Acad. Sci. USA 1924, 10, 329. [CrossRef]
9. Bowes, C.; Li, T.; Danciger, M.; Baxter, L.C.; Applebury, M.L.; Farber, D.B. Retinal degeneration in the rd mouse is caused by a
defect in the subunit of rod cGMP-phosphodiesterase. Nature 1990, 347, 677–680. [CrossRef]
10. Farber, D.B.; Lolley, R.N. Cyclic guanosine monophosphate: Elevation in degenerating photoreceptor cells of the C3H mouse
retina. Science 1974, 186, 449–451. [CrossRef]
11. Paquet-Durand, F.; Hauck, S.M.; Van Veen, T.; Ueffing, M.; Ekström, P . PKG activity causes photoreceptor cell death in two
retinitis pigmentosa models. J. Neurochem. 2009, 108, 796–810. [CrossRef]
12. Paquet-Durand, F.; Azadi, S.; Hauck, S.M.; Ueffing, M.; van Veen, T.; Ekström, P . Calpain is activated in degenerating photorecep-
tors in the rd1 mouse. J. Neurochem. 2006, 96, 802–814. [CrossRef] [PubMed]
13. Paquet-Durand, F.; Silva, J.; Talukdar, T.; Johnson, L.E.; Azadi, S.; van Veen, T.; Ueffing, M.; Hauck, S.M.; Ekström, P . Excessive
activation of poly (ADP-ribose) polymerase contributes to inherited photoreceptor degeneration in the retinal degeneration 1
mouse. J. Neurosci. 2007, 27, 10311–10319. [CrossRef] [PubMed]
14. Power, M.; Das, S.; Schütze, K.; Marigo, V .; Ekström, P .; Paquet-Durand, F. Cellular mechanisms of hereditary photoreceptor
degeneration–Focus on cGMP . Prog. Retin. Eye Res. 2020, 74, 100772. [CrossRef]
15. Ko, H.L.; Ren, E.C. Functional aspects of PARP1 in DNA repair and transcription. Biomolecules 2012 ,2, 524–548. [CrossRef]
[PubMed]
16. Morales, J.; Li, L.; Fattah, F.J.; Dong, Y.; Bey, E.A.; Patel, M.; Gao, J.; Boothman, D.A. Review of poly (ADP-ribose) polymerase
(PARP) mechanisms of action and rationale for targeting in cancer and other diseases. Crit. Rev. Eukaryot. Gene Expr. 2014 ,24.
[CrossRef]
17. Bai, P . Biology of poly (ADP-ribose) polymerases: The factotums of cell maintenance. Mol. Cell 2015, 58, 947–958. [CrossRef]
18. Curtin, N.J.; Szabo, C. Poly (ADP-ribose) polymerase inhibition: Past, present and future. Nat. Rev. Drug Discov. 2020 ,19, 711–736.
[CrossRef]
19. David, K.K.; Andrabi, S.A.; Dawson, T.M.; Dawson, V .L. Parthanatos, a messenger of death. Front. Biosci. Landmark 2009 ,14, 1116.
[CrossRef]
20. Perrin, B.; Huttenlocher, A. Calpain. Int. J. Biochem. Cell Biol. 2002, 34, 722–725. [CrossRef]
21. Michalakis, S.; Becirovic, E.; Biel, M. Retinal cyclic nucleotide-gated channels: From pathophysiology to therapy. Int. J. Mol. Sci.
2018, 19, 749. [CrossRef] [PubMed]
22. Paquet-Durand, F.; Beck, S.; Michalakis, S.; Goldmann, T.; Huber, G.; Mühlfriedel, R.; Trifunović, D.; Fischer, M.D.; Fahl, E.;
Duetsch, G.; et al. A key role for cyclic nucleotide gated (CNG) channels in cGMP-related retinitis pigmentosa. Hum. Mol. Genet.
2011, 20, 941–947. [CrossRef]
23. Catterall, W.A. Voltage-gated calcium channels. Cold Spring Harb. Perspect. Biol. 2011, 3, a003947. [CrossRef]
24. Shinkai-Ouchi, F.; Shindo, M.; Doi, N.; Hata, S.; Ono, Y. Calpain-2 participates in the process of calpain-1 inactivation. Biosci. Rep.
2020, 40. [CrossRef]
25. Baudry, M.; Bi, X. Calpain-1 and calpain-2: The yin and yang of synaptic plasticity and neurodegeneration. Trends Neurosci. 2016 ,
39, 235–245. [CrossRef] [PubMed]
26. Paquet-Durand, F.; Sanges, D.; McCall, J.; Silva, J.; Van Veen, T.; Marigo, V .; Ekström, P . Photoreceptor rescue and toxicity induced
by different calpain inhibitors. J. Neurochem. 2010, 115, 930–940. [CrossRef]
27. Schön, C.; Paquet-Durand, F.; Michalakis, S. Cav1. 4 L-type calcium channels contribute to calpain activation in degenerating
photoreceptors of rd1 mice. PLoS ONE 2016, 11, e0156974. [CrossRef] [PubMed]
28. Chaitanya, G.V .; Alexander, J.S.; Babu, P .P . PARP-1 cleavage fragments: Signatures of cell-death proteases in neurodegeneration.
Cell Commun. Signal. 2010, 8, 31. [CrossRef] [PubMed]
29. Saccà, E.; Pizzutti, N.; Corazzin, M.; Lippe, G.; Piasentier, E. Assessment of calpain and caspase systems activities during ageing
of two bovine muscles by degradation patterns of αII spectrin and PARP-1. Anim. Sci. J. 2016, 87, 462–466. [CrossRef] [PubMed]
30. Vosler, P .S.; Sun, D.; Wang, S.; Gao, Y.; Kintner, D.B.; Signore, A.P .; Cao, G.; Chen, J. Calcium dysregulation induces apoptosis-
inducing factor release: Cross-talk between PARP-1-and calpain-signaling pathways. Exp. Neurol. 2009 ,218, 213–220. [CrossRef]
[PubMed]
31. Sanyal, S.; Bal, A.K. Comparative light and electron microscopic study of retinal histogenesis in normal and rd mutant mice. Z.
Für Anat. Entwickl. 1973, 142, 219–238. [CrossRef] [PubMed]
32. Belhadj, S.; Tolone, A.; Christensen, G.; Das, S.; Chen, Y.; Paquet-Durand, F. Long-Term, Serum-Free Cultivation of Organotypic
Mouse Retina Explants with Intact Retinal Pigment Epithelium. JoVE 2020, 25, e61868. [CrossRef] [PubMed]
33. Belhadj, S.; Rentsch, A.; Schwede, F.; Paquet-Durand, F. Fluorescent detection of PARP activity in unfixed tissue. PLoS ONE 2021 ,
16, e0245369. [CrossRef] [PubMed]
34. Ono, Y.; Saido, T.C.; Sorimachi, H. Calpain research for drug discovery: Challenges and potential. Nat. Rev. Drug Discov. 2016 ,15,
854–876. [CrossRef] [PubMed]
35. Lord, C.J.; Ashworth, A. PARP inhibitors: Synthetic lethality in the clinic. Science 2017, 355, 1152–1158. [CrossRef]
36. Berkowitz, B.A.; Podolsky, R.H.; Farrell, B.; Lee, H.; Trepanier, C.; Berri, A.M.; Dernay, K.; Graffice, E.; Shafie-Khorassani, F.; Kern,
T.S.; et al. D-cis-diltiazem can produce oxidative stress in healthy depolarized rods in vivo .Investig. Ophthalmol. Vis. Sci. 2018 ,59,
2999–3010. [CrossRef]
37. Hüttl, S.; Michalakis, S.; Seeliger, M.; Luo, D.G.; Acar, N.; Geiger, H.; Hudl, K.; Mader, R.; Haverkamp, S.; Moser, M.; et al.
Impaired channel targeting and retinal degeneration in mice lacking the cyclic nucleotide-gated channel subunit CNGB1. J.
Neurosci. 2005, 25, 130–138. [CrossRef] [PubMed]
38. Cheng, S.Y.; Wang, S.C.; Lei, M.; Wang, Z.; Xiong, K. Regulatory role of calpain in neuronal death. Neural Regen. Res. 2018 ,13, 556.
[CrossRef]
39. Curcio, M.; Salazar, I.L.; Mele, M.; Canzoniero, L.M.; Duarte, C.B. Calpains and neuronal damage in the ischemic brain: The swiss
knife in synaptic injury. Prog. Neurobiol. 2016, 143, 1–35. [CrossRef]
40. Baudry, M. Calpain-1 and calpain-2 in the brain: Dr. Jekill and Mr Hyde? Curr. Neuropharmacol. 2019, 17, 823–829. [CrossRef]
41. Wang, Y.; Liu, Y.; Bi, X.; Baudry, M. Calpain-1 and Calpain-2 in the Brain: New Evidence for a Critical Role of Calpain-2 in
Neuronal Death. Cells 2020, 9, 2698. [CrossRef] [PubMed]
42. Power, M.J.; Rogerson, L.E.; Schubert, T.; Berens, P .; Euler, T.; Paquet-Durand, F. Systematic spatiotemporal mapping reveals
divergent cell death pathways in three mouse models of hereditary retinal degeneration. J. Comp. Neurol. 2020 ,528, 1113–1139.
[CrossRef]
43. Singh, R.; Brewer, M.K.; Mashburn, C.B.; Lou, D.; Bondada, V .; Graham, B.; Geddes, J.W. Calpain 5 is highly expressed in the
central nervous system (CNS), carries dual nuclear localization signals, and is associated with nuclear promyelocytic leukemia
protein bodies. J. Biol. Chem. 2014, 289, 19383–19394. [CrossRef]
44. Suzuki, S.; Murotomi, K.; Nakajima, Y.; Kawai, K.; Ohta, K.I.; Warita, K.; Miki, T.; Takeuchi, Y. Development of an artificial
calcium-dependent transcription factor to detect sustained intracellular calcium elevation. ACS Synth. Biol. 2014 ,3, 717–722.
[CrossRef]
45. Luo, Y.; Sellitti, D.F.; Suzuki, K. The Calpain Proteolytic System. Encycl. Cell Biol. 2016, 1, 670–680. [CrossRef]
46. Schön, C.; Asteriti, S.; Koch, S.; Sothilingam, V .; Garrido, M.G.; Tanimoto, N.; Herms, J.; Seeliger, M.W.; Cangiano, L.; Biel, M.; et al.
Loss of HCN1 enhances disease progression in mouse models of CNG channel-linked retinitis pigmentosa and achromatopsia.
Hum. Mol. Genet. 2016, 25, 1165–1175. [CrossRef] [PubMed]
47. Pawlyk, B.S.; Li, T.; Scimeca, M.S.; Sandberg, M.A.; Berson, E.L. Absence of photoreceptor rescue with D-cis-diltiazem in the rd
mouse. Investig. Ophthalmol. Vis. Sci. 2002, 43, 1912–1915.
48. Pearce-Kelling, S.E.; Aleman, T.S.; Nickle, A.; Laties, A.M.; Aguirre, G.D.; Jacobson, S.G.; Acland, G.M. Calcium channel blocker
D-cis-diltiazem does not slow retinal degeneration in the PDE6B mutant rcd1 canine model of retinitis pigmentosa. Mol. Vis.
2001, 7, 42. [PubMed]
49. Biel, M.; Michalakis, S. Function and dysfunction of CNG channels: Insights from channelopathies and mouse models. Mol.
Neurobiol. 2007, 35, 266–277. [CrossRef]
50. Okawa, H.; Sampath, A.P .; Laughlin, S.B.; Fain, G.L. ATP consumption by mammalian rod photoreceptors in darkness and in
light. Curr. Biol. 2008, 18, 1917–1921. [CrossRef] [PubMed]
51. Ames, A., III. Energy requirements of CNS cells as related to their function and to their vulnerability to ischemia: A commentary
based on studies on retina. Can. J. Physiol. Pharm. 1992, 70, S158–S164. [CrossRef]
52. Michalakis, S.; Shaltiel, L.; Sothilingam, V .; Koch, S.; Schludi, V .; Krause, S.; Zeitz, C.; Audo, I.; Lancelot, M.E.; Hamel, C. Mosaic
synaptopathy and functional defects in Cav1. 4 heterozygous mice and human carriers of CSNB2. Hum. Mol. Genet. 2014 ,23,
1538–1550. [CrossRef]
53. Das, S.; Popp, V .; Power, M.; Groeneveld, K.; Yan, J.; Melle, C.; Rogerson, L.; Achury, M.; Schwede, F.; Strasser, T.; et al. Redefining
the role of Ca2+-permeable channels in hereditary photoreceptor degeneration using the D-and L-cis enantiomers of diltiazem.
Cell Death Dis. 2022, 13, 47. [CrossRef] [PubMed]
54. Stern, J.H.; Kaupp, U.B.; MacLeish, P .R. Control of the light-regulated current in rod photoreceptors by cyclic GMP , calcium, and
l-cis-diltiazem. Proc. Natl. Acad. Sci. USA 1986, 83, 1163–1167. [CrossRef] [PubMed]
55. Koch, K.W.; Kaupp, U.B. Cyclic GMP directly regulates a cation conductance in membranes of bovine rods by a cooperative
mechanism. J. Biol. Chem. 1985, 260, 6788–6800. [CrossRef]
56. Greenwald, S.H.; Brown, E.E.; Scandura, M.J.; Hennessey, E.; Farmer, R.; Du, J.; Wang, Y.; Pierce, E.A. Mutant Nmnat1 leads to a
retina-specific decrease of NAD+ accompanied by increased poly (ADP-ribose) in a mouse model of NMNAT1-associated retinal
degeneration. Hum. Mol. Genet. 2021, 30, 644–657. [CrossRef]
57. Olivares-Gonzalez, L.; Martinez-Fernandez de la Camara, C.; Hervas, D.; Marín, M.P .; Lahoz, A.; Millán, J.M.; Rodrigo, R.
cGMP-phosphodiesterase inhibition prevents hypoxia-induced cell death activation in porcine retinal explants. PLoS ONE 2016 ,
11, e0166717. [CrossRef]
58. Olivares-Gonz ález, L.; Velasco, S.; Millán, J.M.; Rodrigo, R. Intravitreal administration of adalimumab delays retinal degeneration
in rd10 mice. FASEB J. 2020, 34, 13839–13861. [CrossRef]
59. Sahaboglu, A.; Tanimoto, N.; Kaur, J.; Sancho-Pelluz, J.; Huber, G.; Fahl, E.; Arango-Gonzalez, B.; Zrenner, E.; Ekström, P .;
Löwenheim, H. PARP1 gene knock-out increases resistance to retinal degeneration without affecting retinal function. PLoS ONE
2010, 5, e15495. [CrossRef]
60. Sahaboglu, A.; Barth, M.; Secer, E.; Del Amo, E.M.; Urtti, A.; Arsenijevic, Y.; Zrenner, E.; Paquet-Durand, F. Olaparib significantly
delays photoreceptor loss in a model for hereditary retinal degeneration. Sci. Rep. 2016, 6, 39537. [CrossRef]
61. Sahaboglu, A.; Miranda, M.; Canjuga, D.; Avci-Adali, M.; Savytska, N.; Secer, E.; Feria-Pliego, J.A.; Kayık, G.; Durdagi, S. Drug
repurposing studies of PARP inhibitors as a new therapy for inherited retinal degeneration. Cell. Mol. Life Sci. 2020 ,77, 2199–2216.
[CrossRef]
62. Li, H.; Liu, Z.Y.; Wu, N.; Chen, Y.C.; Cheng, Q.; Wang, J. PARP inhibitor resistance: The underlying mechanisms and clinical
implications. Mol. Cancer 2020, 19, 107. [CrossRef] [PubMed]
63. Bixel, K.; Hays, J.L. Olaparib in the management of ovarian cancer. Pharm. Pers. Med. 2015, 8, 127. [CrossRef]
64. Antolin, A.A.; Ameratunga, M.; Banerji, U.; Clarke, P .A.; Workman, P .; Al-Lazikani, B. The kinase polypharmacology landscape
of clinical PARP inhibitors. Sci. Rep. 2020, 10, 2585. [CrossRef]
65. Knezevic, C.E.; Wright, G.; Rix, L.L.R.; Kim, W.; Kuenzi, B.M.; Luo, Y.; Watters, J.M.; Koomen, J.M.; Haura, E.B.; Monteiro, A.N.;
et al. Proteome-wide Profiling of Clinical PARP Inhibitors Reveals Compound-Specific Secondary Targets. Cell Chem. Biol. 2016 ,
23, 1490–1503. [CrossRef] [PubMed]
66. Plana-Bonamaisó, A.; López-Begines, S.; Fernández-Justel, D.; Junza, A.; Soler-Tapia, A.; Andilla, J.; Loza-Alvarez, P .; Rosa, J.L.;
Miralles, E.; Casals, I. Post-translational regulation of retinal IMPDH1 in vivo to adjust GTP synthesis to illumination conditions.
eLife 2020, 9, e56418. [CrossRef]
67. Yang, P .; Lockard, R.; Titus, H.; Hiblar, J.; Weller, K.; Wafai, D.; Weleber, R.G.; Duvoisin, R.M.; Morgans, C.W.; Pennesi, M.E.
Suppression of cGMP-Dependent Photoreceptor Cytotoxicity With Mycophenolate Is Neuroprotective in Murine Models of
Retinitis Pigmentosa. Investig. Ophthalmol. Vis. Sci. 2020, 61, 25. [CrossRef] [PubMed]
68. Cohen, M.S. Interplay between compartmentalized NAD+ synthesis and consumption: A focus on the PARP family.
Genes Dev. 2020, 34, 254–262. [CrossRef] [PubMed]
69. Xie, N.; Zhang, L.; Gao, W.; Huang, C.; Huber, P .E.; Zhou, X.; Li, C.; Shen, G.; Zou, B. NAD(+) metabolism: Pathophysiologic
mechanisms and therapeutic potential. Signal Transduct. Target. Ther. 2020, 5, 227. [CrossRef] [PubMed]
70. Bertram, R.; Pedersen, M.G.; Luciani, D.S.; Sherman, A. A simplified model for mitochondrial ATP production. J. Theor. Biol. 2006 ,
243, 575–586. [CrossRef]
71. Baek, S.H.; Bae, O.N.; Kim, E.K.; Yu, S.W. Induction of mitochondrial dysfunction by poly (ADP-ribose) polymer: Implication
for neuronal cell death. Mol. Cells 2013, 36, 258–266. [CrossRef] [PubMed]
72. Rottenberg, H.; Hoek, J.B. The Mitochondrial Permeability Transition: Nexus of Aging, Disease and Longevity. Cells 2021 ,10, 79.
[CrossRef] [PubMed]
73. Goll, D.E.; Thompson, V .F.; Li, H.; Wei, W.; Cong, J. The calpain system. Physiol. Rev. 2003, 83, 731–801. [CrossRef] [PubMed]
74. Bernardi, P .; Petronilli, V . The permeability transition pore as a mitochondrial calcium release channel: A critical appraisal. J.
Bioenerg. Biomembr. 1996, 28, 131–138. [CrossRef]
75. Gunter, T.; Buntinas, L.; Sparagna, G.; Eliseev, R.; Gunter, K. Mitochondrial calcium transport: Mechanisms and functions. Cell
Calcium 2000, 28, 285–296. [CrossRef] [PubMed]
76. Waldner, D.; Bech-Hansen, N.; Stell, W.K. Channeling vision: CaV1. 4—A critical link in retinal signal transmission. BioMed Res.
Int.2018, 2018. [CrossRef] [PubMed]
77. Geistrikh, I.; Visochek, L.; Klein, R.; Miller, L.; Mittelman, L.; Shainberg, A.; Cohen-Armon, M. Ca2+-induced PARP-1 activation
and ANF expression are coupled events in cardiomyocytes. Biochem. J. 2011, 438, 337–347. [CrossRef] [PubMed]
78. Zhang, F.; Xie, R.; Munoz, F.M.; Lau, S.S.; Monks, T.J. PARP-1 hyperactivation and reciprocal elevations in intracellular Ca2+
during ROS-induced nonapoptotic cell death. Toxicol. Sci. 2014, 140, 118–134. [CrossRef]
79. Munoz, F.M.; Zhang, F.; Islas-Robles, A.; Lau, S.S.; Monks, T.J. From the cover: ROS-Induced store-operated Ca2+entry coupled
to PARP-1 hyperactivation is independent of PARG activity in necrotic cell death. Toxicol. Sci. 2017, 158, 444–453. [CrossRef]
80. Wang, X.; Ge, P . Parthanatos in the pathogenesis of nervous system diseases. Neuroscience 2020, 449, 241–250. [CrossRef]
81. Jiang, X.; Li, W.; Li, X.; Bai, H.; Zhang, Z. Current status and future prospects of PARP inhibitor clinical trials in ovarian cancer.
Cancer Manag. Res. 2019, 11, 4371. [CrossRef] [PubMed]
82. Kamel, D.; Gray, C.; Walia, J.S.; Kumar, V . PARP inhibitor drugs in the treatment of breast, ovarian, prostate and pancreatic
cancers: An update of clinical trials. Curr. Drug Targets 2018, 19, 21–37. [CrossRef] [PubMed]
"""
# Remove references and print the result
cleaned_text = remove_references(input_text)
print("Text after removing references:\n")
print(cleaned_text)
</code>
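The `remove_references` helper called above is defined earlier in the notebook and is not shown in this excerpt. As a rough illustration of the general idea only, a minimal regex-based sketch could look like the following; the function name, regular expressions, and behaviour here are assumptions for illustration and not the notebook's actual implementation.

```python
import re

def remove_references_sketch(text: str) -> str:
    """Illustrative sketch only: drop the trailing 'References' section and
    strip inline bracketed citation markers such as [ 5] or [ 13,58-61]."""
    # Keep everything before the first standalone 'References' heading.
    body = re.split(r"\nReferences\s*\n", text, maxsplit=1)[0]
    # Remove inline citations like "[ 5]", "[50,51]" or "[ 73-75]".
    body = re.sub(r"\[\s*\d+(?:\s*[-,–]\s*\d+)*\s*\]", "", body)
    return body
```

Applied to `input_text`, such a sketch would return the article body with the reference list and bracketed citation markers removed; the notebook's own `remove_references` may differ in detail.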
|
{
"filename": "ref_rem_1.ipynb",
"repository": "skand001/MSc-Medical-Text-Summarisation-for-IRD-Publications",
"query": "transformed_from_existing",
"size": 116159,
"sha": ""
}
|
# ia4genet.run.ipynb
Repository: grimbough/biocworkflows
- [Background](#background)
- [The gwascat package for the EMBL-EBI (formerly NHGRI) GWAS
catalog](#the-gwascat-package-for-the-embl-ebi-formerly-nhgri-gwas-catalog)
- [Basic operations, fields, and interactive
tabulation](#basic-operations-fields-and-interactive-tabulation)
- [GRASP](#grasp)
- [Genomic contexts and interpretations of
variants](#genomic-contexts-and-interpretations-of-variants)
- [Presence in exons](#presence-in-exons)
- [SIFT scores](#sift-scores)
- [ChromHmm segmentation](#chromhmm-segmentation)
- [Regions of chromatin
modification](#regions-of-chromatin-modification)
- [Conclusions](#conclusions)
- [Appendix: Bioconductor infrastructure supporting genetic data
analysis](#appendix-bioconductor-infrastructure-supporting-genetic-data-analysis)
- [Reference builds of the human genome
sequence](#reference-builds-of-the-human-genome-sequence)
- [From dbSNP to GRanges](#from-dbsnp-to-granges)
<code>
## This code chunk was hidden in the original document, but was executed in the background
knitr::opts_chunk$set(results="hide", message=FALSE, warning=FALSE, fig.show="hide", echo=TRUE)
</code>
<code>
## This code chunk was hidden in the original document, but was executed in the background
suppressPackageStartupMessages({
library(BiocStyle)
library(AnnotationHub)
ah = AnnotationHub()
library(gwascat)
library(GenomicFiles)
library(rtracklayer)
library(DT)
library(SIFT.Hsapiens.dbSNP132)
library(grasp2db)
library(BSgenome)
library("SNPlocs.Hsapiens.dbSNP144.GRCh37")
#library(BSgenome.Hsapiens.NCBI.GRCh38)
library(BSgenome.Hsapiens.UCSC.hg19)
})
</code>
Background
==========
The table of contents of Vogel and Motulsky's [*Human Genetics: Problems and Approaches*](https://books.google.com/books?id=xuztCAAAQBAJ&lpg=PA6&dq=human%20genetics&pg=PR32#v=onepage&q=human%20genetics&f=false) is a worthy survey of concepts addressed in research on human genetics and genetic medicine. The frontiers of knowledge in the field are shifting, and expectations are high.
In this workflow, I aim to show how researchers can use R to interrogate important resources of use in human genetic epidemiology and medical genomics. I show how to program with two genome-wide association study (GWAS) catalogs, the [EMBL-EBI GWAS catalog](https://www.ebi.ac.uk/gwas/) and the [NHLBI GRASP v2.0](http://iapps.nhlbi.nih.gov/GRASP/Overview.aspx). Aspects of findings reported in these studies are then integrated with new functional and structural annotation resources to aid in variant interpretation. An appendix provides brief treatment of "reference genome builds" for *Homo sapiens*, packages for querying contents of the [NCBI dbSNP](http://www.ncbi.nlm.nih.gov/SNP/), and tools for obtaining and programming with gene models.
The gwascat package for the EMBL-EBI (formerly NHGRI) GWAS catalog
==================================================================
Basic operations, fields, and interactive tabulation
----------------------------------------------------
The NHGRI version of the GWAS catalog is presented using hg19 (GRCh37) coordinates.
<code>
library(gwascat)
data(gwrngs19)
length(gwrngs19)
gwrngs19
</code>
While there are 17254 records, the number of unique loci is
<code>
length(unique(gwrngs19$SNPs))
</code>
A full view of the metadata about each study result is available with the commands
``` r
library(DT)
datatable(as.data.frame(mcols(gwrngs19)), options=list(autoWidth=TRUE,
style="height:30px"), pageLength=5)
```
The following command generates a table restricting attention to records related to asthma.
<code>
suppressWarnings({
aind = grep("sthma", gwrngs19$Disease.Trait)
easth = gwrngs19[aind]
datatable(as.data.frame(mcols(easth)), options=list(autoWidth=TRUE,
style="height:30px", pageLength=5))
})
</code>
<!--
## Navigating traits using the EMBL-EBI Experimental Factor Ontology
Field `MAPPED_TRAIT_URI` includes a comma-delimited string with
URIs referring to an ontology for traits and other factors relevant
to biological experiments and observations. The underlying
ontology is available in the form of an annotated algebraic graph.
``` r
data(efo.obo.g)
efo.obo.g
```
There are over 16000 terms in the ontology. Terms and term-related
metadata are manipulated using methods of the *[graph](http://bioconductor.org/packages/graph)*
package.
``` r
nodes(efo.obo.g)[1:4] # imported directly from OBO
names(nodeData(efo.obo.g)[[1]])
sapply(nodeData(efo.obo.g)[1:4], "[[", "name")
```
Let's obtain the EFO annotation for SNP `rs347412`.
``` r
ind = which(ebicat38$SNPS == "rs347412")
urs = ebicat38$MAPPED_TRAIT_URI[ind]
urs
```
These entries must be converted to match the EFO OBO node
nomenclature. We then find the EFO names of the factors annotated
to this SNP.
``` r
nn = uri2node(urs)
nd = nodeData(efo.obo.g, nn)
sapply(nd, "[[", "name")
```
The current representation of the ontology is a directed graph
with links pointing from a term to its semantic parent. We
convert to an undirected graph to explore semantic neighborhoods of terms.
The `adj` method will return the nodes adjacent to a specified node.
Here we obtain the terms accessible from `respiratory system disease`
with a single step.
``` r
rsdn = adj(ugraph(efo.obo.g), "EFO:0000684") # respiratory system disease
unlist(sapply(nodeData(efo.obo.g, rsdn[[1]]), "[[", "name"))
```
The *[RBGL](http://bioconductor.org/packages/RBGL)* package can be used to deploy diverse graph algorithms
against this ontology.
Once a node name of interest has been found, `node2uri` can be used
with code to find
GWAS hits deemed relevant by the curators. We'll work with hg19
coordinates.
``` r
data(ebicat37)
library(GenomeInfoDb)
seqlevelsStyle(ebicat37) = "UCSC"
genome(ebicat37) = "hg19"
e270 = ebicat37[ grep(node2uri("EFO:0000270"), ebicat37$MAPPED_TRAIT_URI) ]
length(e270)
table(e270$DISEASE.TRAIT)[1:5]
```
-->
GRASP
=====
GRASP is a much denser catalog requiring a different approach to archiving and query resolution. Initial execution of `GRASP2()` will trigger a download of a 5GB SQLite database that can then be used with *[dplyr](http://cran.fhcrc.org/web/packages/dplyr/index.html)* programming. This download will not occur again unless the database has been centrally updated. This document does not evaluate the following chunk, but the output is precomputed and left static.
``` r
library(grasp2db)
v = tbl(GRASP2(), 'variant')
v %>% filter(Phenotype == "Asthma")
```
<pre><code>## Source: sqlite 3.8.6 [AnnotationHub()[["AH21414"]]]
## From: variant [33,351 x 33]
## Filter: Phenotype == "Asthma"
##
## NHLBIkey PMID HUPfield SNPid_dbSNP134 chr_hg19 pos_hg19
## 1 2086050316 20860503 1/1/2014 18 7 11597475
## 2 20860503866 20860503 1/1/2014 535 9 138396251
## 3 208605031097 20860503 1/1/2014 686 5 174868700
## 4 208605031186 20860503 1/1/2014 699 1 230845794
## 5 208605031603 20860503 1/1/2014 1117 3 22085809
## 6 208605031980 20860503 1/1/2014 1320 22 22599537
## 7 208605032429 20860503 1/1/2014 1535 11 61597972
## 8 208605032734 20860503 1/1/2014 1695 11 67352689
## 9 208605032835 20860503 1/1/2014 1760 8 442079
## 10 208605033085 20860503 1/1/2014 1899 15 41689232
## .. ... ... ... ... ... ...
## Variables not shown: SNPidInPaper (chr), LocationWithinPaper (chr), Pvalue
## (dbl), NegativeLog10PBin (int), Phenotype (chr), PlatformSNPsPassingQC
## (chr), GWASancestryDescription (chr), InGene (chr), InLincRNA (chr),
## InMiRNA (chr), InMiRNABS (chr), dbSNPfxn (chr), dbSNPMAF (chr),
## dbSNPallelesHetSe (chr), dbSNPvalidation (int), dbSNPClinStatus (chr),
## ORegAnno (chr), ConservPredTFBS (chr), HumanEnhancer (chr), RNAedit
## (chr), PolyPhen2 (chr), SIFT (chr), LS_SNP (chr), UniProt (chr),
## EqtlMethMetabStudy (int), DiscoverySampleDescription (chr),
## ReplicationSampleDescription (chr)</code></pre>
Genomic contexts and interpretations of variants
================================================
Presence in exons
-----------------
We can map our GWAS hits to exons using the TxDb infrastructure.
<code>
library(TxDb.Hsapiens.UCSC.hg19.knownGene)
allex = exons(TxDb.Hsapiens.UCSC.hg19.knownGene)
subsetByOverlaps( easth, allex )
</code>
SIFT scores
-----------
We query the SIFT resource using dbSNP identifiers.
<code>
rsids = easth$SNPs
library(SIFT.Hsapiens.dbSNP132)
subst = c("RSID", "METHOD", "PREDICTION", "SCORE")
sif = AnnotationDbi::select(SIFT.Hsapiens.dbSNP132, keys=rsids, cols=subst)
datatable(na.omit(sif))
</code>
ChromHmm segmentation
---------------------
We'll use the fetal lung sample from the Roadmap Epigenomics project as provided by *[AnnotationHub](http://bioconductor.org/packages/AnnotationHub)*. We use prior knowledge that the tag "E088" refers to the fetal lung tissue study.
<code>
library(AnnotationHub)
ah = AnnotationHub()
lq = AnnotationHub::query(ah, c("E088", "state"))
lq
cstates = subsetByOverlaps( ah[["AH46941"]], easth )
sort(table(cstates$name), decreasing=TRUE)
</code>
In this way we can label variants according to their tissue-specific epigenetic contexts.
Regions of chromatin modification
---------------------------------
We'll check for coincidence of our GWAS hits with peaks identified with H3K4me1 marks in fetal lung fibroblasts, using component AH43875 of the *[AnnotationHub](http://bioconductor.org/packages/AnnotationHub)*.
<code>
library(AnnotationHub)
ah = AnnotationHub()
h3kf = ah[["AH43875"]]
subsetByOverlaps(easth, h3kf)
</code>
Conclusions
===========
The use of *[GenomicRanges](http://bioconductor.org/packages/GenomicRanges)* infrastructure for representing sets of DNA variants leads to fairly simple merge and intersection operations based on genomic coordinates. These operations are useful for sorting variants into categories based on structural or functional modeling. Richly annotated ranges can be used to manage and program with GWAS catalogs, leading to efficient coupling of genomic assay results with findings of genetic epidemiology.
Appendix: Bioconductor infrastructure supporting genetic data analysis
======================================================================
Reference builds of the human genome sequence
---------------------------------------------
<!--
The most recent build of the human genomic sequence
is labeled GRCh38. Using Bioconductor, we can be very concrete about what this
is.
-->
The second-to-last build of the human genomic sequence is labeled hg19. Using Bioconductor, we can be very concrete about what this is.
<code>
library(BSgenome.Hsapiens.UCSC.hg19)
class(Hsapiens)
Hsapiens
class(Hsapiens$"chr17")
Hsapiens$"chr17"
</code>
From dbSNP to GRanges
---------------------
A number of packages represent snapshots of NCBI dbSNP.
<code>
library(BSgenome)
available.SNPs()
</code>
Functions available for a recent build are:
<code>
library("SNPlocs.Hsapiens.dbSNP144.GRCh37")
ls(pos="package:SNPlocs.Hsapiens.dbSNP144.GRCh37")
</code>
We can retrieve data on a chromosome. Note the peculiar nomenclature for chromosomes used with dbSNP. The `seqlevelsStyle` methods of *[GenomeInfoDb](http://bioconductor.org/packages/GenomeInfoDb)* can be used to manage these nomenclatures systematically.
<code>
snpsBySeqname(SNPlocs.Hsapiens.dbSNP144.GRCh37, "ch20")
</code>
|
{
"filename": "ia4genet.run.ipynb",
"repository": "grimbough/biocworkflows",
"query": "transformed_from_existing",
"size": 64054,
"sha": ""
}
|
# Droplet_DPT_4.ipynb
Repository: ManchesterBioinference/GrandPrix
# Applying GrandPrix on droplet based single-cell RNA-seq of mouse embryonic stem cells
_Sumon Ahmed_, 2017, 2018
This notebook shows how GrandPrix with an informative prior over the latent space can be used to infer one-dimensional pseudotime from single-cell RNA-seq data generated using droplet barcoding. Models with both informative and non-informative priors are examined and compared with the diffusion pseudotime (DPT) framework.
<!--
Our model supports mixed precision computation. This notebook is an example of running the model with lower precision floating point.
-->
<code>
import pandas as pd
import numpy as np
from GrandPrix import GrandPrix
</code>
## Helper function
__MapTo01__ rescales each column of its input to the range [0, 1]
<code>
def MapTo01(y):
return (y.copy() - y.min(0)) / (y.max(0) - y.min(0))
</code>
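For example, a minimal illustration with made-up capture-time labels (not part of the original data):
<code>
import numpy as np
t = np.array([[0.], [2.], [4.], [7.]])  # hypothetical capture-time labels
print(MapTo01(t).ravel())  # -> [0.         0.28571429 0.57142857 1.        ]
</code>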
## Data description
<a href="https://www.ncbi.nlm.nih.gov/pubmed/26000487" target="_blank">Klein et al. (2015)</a> developed a method termed inDrop (indexing droplet) based on droplet microfluidics and assayed the gene expression profiles and differentiation heterogeneity of mouse stem cells after leukemia inhibitory factor (LIF) withdrawal.
<a href="https://www.ncbi.nlm.nih.gov/m/pubmed/27571553/" target="_blank">Haghverdi et al. (2016)</a> applied cell-cycle normalization to these data and used them to infer diffusion pseudotime (DPT).
The __dropSeq.csv__ file contains cell-cycle-normalized expression profiles of __2717__ cells and __2047__ genes.
The __dropsecMeta.csv__ file contains additional metadata, such as the capture time and diffusion pseudotime of each cell.
<code>
Y = pd.read_csv('../data/dpt/dropSeq.csv', index_col=[0]).T
mData = pd.read_csv('../data/dpt/dropsecMeta.csv', index_col=[0])
</code>
<code>
N, D = Y.shape
print('Cells: %s, Genes: %s'%(N, D))
</code>
<code>
mData.head()
</code>
## Actual capture time and diffusion pseudotime
<code>
dpt = mData['dpt'].values
cpt = mData['capture.orig'].values
</code>
## Model with Informative prior
Capture time points are used as the informative prior over pseudotime. The following arguments are passed to initialize the model.
<!--
- __data__: _array-like, shape N x D_. Observed data, where N is the number of time points and D is the number of genes.
- __latent_prior_mean__: _array-like, shape N_ x 1, _optional (default:_ __0__). > Mean of the prior distribution over pseudotime.
- __latent_prior_var__: _array-like, shape N_ x 1, _optional (default:_ __1.__). Variance of the prior distribution over pseudotime.
- __latent_mean__: _array-like, shape N_ x 1, _optional (default:_ __1.__). Initial mean values of the approximate posterior distribution over pseudotime.
- __latent_var__: _array-like, shape N_ x 1, _optional (default:_ __1.__). Initial variance of the approximate posterior distribution over pseudotime.
- __kernel:__ _optional (default: RBF kernel with lengthscale and variance set to 1.0)_. Covariance function to define the mapping from the latent space to the data space in Gaussian process prior.
-->
- __data__: _array-like, shape N x D_. Observed data, where N is the number of time points and D is the number of genes.
- __n_inducing_points__: _int_. Number of inducing points.
- __latent_prior_mean__: _array-like, shape N_ x 1. Mean of the prior distribution over pseudotime.
- __latent_prior_var__: _array-like, shape N_ x 1. Variance of the prior distribution over pseudotime.
- __latent_mean__: _array-like, shape N_ x 1. Initial mean values of the approximate posterior distribution over pseudotime.
<code>
M = 60 #number of inducing points
sigma_t = 1.
prior_mean = MapTo01(mData['capture.orig'].values[:, None])
np.random.seed(10)
X_mean = np.zeros((N, 1)) # initialize latent_mean
for i in range(0, N):
    X_mean[i, 0] = prior_mean[i, 0] + 1.2 * np.random.randn()  # jitter the prior mean
</code>
<code>
pt_wp, var_wp = GrandPrix.fit_model(data=Y.values, n_inducing_points=M, latent_prior_mean=prior_mean,
latent_prior_var=np.square(sigma_t), latent_mean=X_mean)
</code>
## Model without using Informative prior
<code>
np.random.seed(10)
pt_np, var_np = GrandPrix.fit_model(data=Y.values, n_inducing_points=M)
</code>
## Spearman correlation
<code>
from scipy.stats import spearmanr
from beautifultable import BeautifulTable
table = BeautifulTable()
table.column_headers = ["", "No Prior", "With Prior"]
table.append_row(["Capture time", "%f"%(spearmanr(pt_np, cpt)[0]), "%f"%(spearmanr(pt_wp, cpt)[0])])
table.append_row(["Diffusion Pseudotime", "%s"%(spearmanr(pt_np, dpt)[0]), "%f"%(spearmanr(pt_wp, dpt)[0])])
table.left_padding_widths['No Prior'] = 10
table.right_padding_widths['With Prior'] = 10
print("Spearman Correlation between the estimated pseudotime with known values: \n")
print(table)
</code>
## Pearson correlation
<code>
from scipy.stats import pearsonr
table = BeautifulTable()
table.column_headers = ["", "No Prior", "With Prior"]
table.append_row(["Diffusion Pseudotime", "%s"%(pearsonr(pt_np.reshape(-1), dpt.reshape(-1))[0]),
"%f"%(pearsonr(pt_wp.reshape(-1), dpt.reshape(-1))[0])])
table.left_padding_widths['No Prior'] = 10
table.right_padding_widths['With Prior'] = 10
print("Linear Correlation between the estimated pseudotime with known values: \n")
print(table)
</code>
# Visualize the results
The informative prior on capture time helps the model infer pseudotimes whose density is similar to that of DPT.
<code>
%matplotlib inline
from matplotlib import pyplot as plt
from utils import correlation_dpt
fig, ax = plt.subplots(1, 2, figsize=(14, 6), sharex=True, sharey=True)
correlation_dpt(MapTo01(-pt_np)*max(cpt), cpt, mData['capture.orig'].values, ax[0], 'No Prior')
correlation_dpt(MapTo01(pt_wp)*max(cpt), cpt, mData['capture.orig'].values, ax[1], 'With Prior')
</code>
<code>
from matplotlib import pyplot as plt
fig, ax = plt.subplots(1, 2, figsize=(14, 6), sharex=True, sharey=True)
correlation_dpt(MapTo01(-pt_np)*max(dpt), dpt, mData['capture.orig'].values, ax[0], 'No Prior', diagLine=True)
correlation_dpt(MapTo01(pt_wp)*max(dpt), dpt, mData['capture.orig'].values, ax[1], 'With Prior', diagLine=True)
</code>
|
{
"filename": "Droplet_DPT_4.ipynb",
"repository": "ManchesterBioinference/GrandPrix",
"query": "transformed_from_existing",
"size": 136068,
"sha": ""
}
|
# transcriptomics_10_drug_visium_2.ipynb
Repository: imsb-uke/ANCA-GN
<code>
import sys
sys.path.append("../src")
from utils import *
</code>
<code>
adata = sc.read(os.path.join(datadir, "anca_samples_annotated_v2.h5ad"))
</code>
<code>
adata.obs["cluster_annot"].replace({"Inflamed interstitial": "Inflamed",
"Inflamed glomerular": "Inflamed"}, inplace=True)
</code>
<code>
import pickle
pickle_path = os.path.join(datadir, "filtered_added.pkl")
with open(pickle_path, "rb") as handle:
newdict = pickle.load(handle)
</code>
<code>
targets = {}
# remove immunostimulants
for key in newdict['L']:
if key not in newdict["L03"]:
targets[key] = newdict['L'][key]
</code>
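A quick sanity check of the resulting dictionary; this sketch assumes, as `d2c.score` with `nested=False` expects, that `targets` maps each drug name to a list of target-gene symbols:
<code>
# peek at one drug -> target-gene mapping (illustrative inspection only)
example_drug = next(iter(targets))
print(example_drug, len(targets[example_drug]), targets[example_drug][:5])
</code>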
<code>
len(list(targets.keys()))
</code>
<code>
adata.X.max()
</code>
<code>
import drug2cell as d2c
d2c.score(adata, targets=targets, nested=False, use_raw=False)
</code>
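`d2c.score` stores the per-cell drug scores as a separate cells-by-drugs AnnData under `adata.uns['drug2cell']`, which is why standard scanpy tools such as `rank_genes_groups` can be applied to it in the next cell. A minimal inspection sketch:
<code>
# the drug-score matrix lives in adata.uns['drug2cell'] (cells x drugs)
drug_scores = adata.uns['drug2cell']
print(drug_scores.shape)
</code>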
<code>
sc.tl.rank_genes_groups(adata.uns['drug2cell'], method="wilcoxon", groupby="cluster_annot", pts=True, use_raw=False, key_added="wilcoxon_drug")
</code>
<code>
log2fc = 0.25
pval = 0.05
</code>
<code>
dfs = {}
for clst in adata.obs["cluster_annot"].unique(): #_merge
dfs[clst] = sc.get.rank_genes_groups_df(adata.uns["drug2cell"], group=clst, pval_cutoff=pval,
log2fc_min=log2fc,
key="wilcoxon_drug").set_index("names")
spec = pd.read_csv(os.path.join(datadir, "drug_scores_single_cell_EMRM.csv"), index_col=0)[["scores"]]
spec.columns=["spec"]
common = list(set(dfs[clst].index.tolist())&set(spec.index.tolist()))
dfs[clst], spec = dfs[clst].loc[common], spec.loc[common]
dfs[clst]["spec"] = spec.loc[dfs[clst].index]["spec"].tolist()
</code>
<code>
tmp = dfs["Inflamed"].copy()
tmp = tmp.sort_values(by="spec", ascending=False)
tmp.rename(columns={"scores": "ST score (inflamed glomerular and interstsitial)",
"spec": "SC score (CD4 and CD8 T EM/RM)"}, inplace=True)
# tmp.to_csv(os.path.join(datadir, "drugs.csv"))
</code>
<code>
pct_group = 0.75
pct_ref = 0.75
for clst in ["Inflamed"]:
dfs[clst] = dfs[clst][dfs[clst]["pct_nz_group"]>=pct_group]
dfs[clst] = dfs[clst][dfs[clst]["pct_nz_reference"]<=pct_ref]
</code>
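These two cut-offs keep only drugs whose score is non-zero in at least 75% of the "Inflamed" spots and in at most 75% of the reference spots. A quick check of how many candidates remain (a minimal sketch):
<code>
# number of candidate drugs remaining after the percentage filters
print(dfs["Inflamed"].shape[0])
</code>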
<code>
dfs["Inflamed"] = dfs["Inflamed"].sort_values(by="spec", ascending=False)
</code>
<code>
df_int = dfs["Inflamed"].copy()
df_int = df_int[["scores", "spec"]]
df_int.columns = ["ST", "SC"]
</code>
<code>
df = df_int.copy()
</code>
<code>
import matplotlib as mpl
</code>
<code>
colorby="ST"
group="Inflamed\nglomerular and interstitial"
x="SC"
df.loc["dummy", "ST"] = 2
df.loc["dummy_1", "ST"] = 10
</code>
<code>
def colors_from_values(values, palette_name):
# normalize the values to range [0, 1]
normalized = (values - min(values)) / (max(values) - min(values))
# convert to indices
indices = np.round(normalized * (len(values) - 1)).astype(np.int32)
# use the indices to get the colors
palette = sns.color_palette(palette_name, len(values))
return np.array(palette).take(indices, axis=0)
</code>
<code>
df["names"] = df.index.tolist()
</code>
<code>
# df.loc[df["ST"]>7, "ST"] = 7
</code>
<code>
sc.set_figure_params(dpi=100)
sns.set(style="ticks", font_scale=1.2)
plt.rcParams["font.family"] = ["Inter"]
</code>
<code>
df
</code>
<code>
df
</code>
<code>
n = 20
cmap = "plasma"
plt.figure(figsize=(10,5))
cmp = mpl.colors.LinearSegmentedColormap.from_list('colorbar', sns.color_palette(cmap), N=n)
plot = plt.scatter(df.iloc[0:n][colorby], df.iloc[0:n][colorby], c=df.iloc[0:n][colorby], cmap=cmp)
plt.clf()
cbar = plt.colorbar(plot, fraction=1, aspect=20)
cbar.outline.set_linewidth(0.)
# cbar.set_label('T EM/RM score - single cell', rotation=270, labelpad=20)
cbar.set_label('Inflamed score - Spatial transcriptomics', rotation=270, labelpad=20)
plt.axis('off')
# plt.savefig(os.path.join(figdir, f"score_scale_{group}.pdf"), bbox_inches="tight")
pal = sns.color_palette("Blues_d", len(adata))
g=sns.catplot(data=df.iloc[0:-2], x=x, y="names",
kind="bar", edgecolor="black", palette=colors_from_values(df.iloc[0:n][colorby], cmap), #edgecolors="black",
height=3, aspect=4/2, sharex=False) # col="group",
for ax in g.axes[0]:
# ax.set_xlabel("- Log"r"$_{10}$"+" adj. "+"P"+"-value")
# ax.set_xlabel("Log fold change")
# ax.set_xlabel("drug score")
ax.set_xlabel("T EM/RM score - Single cell")
ax.set_ylabel("")
ax.set_title(group, fontsize=15)
# ax.set_xlim(0,80)
# if ax.get_title()=="Abnormal glom.":
# ax.set_xlim(0,30)
# else:
# ax.set_xlim(0,130)
plt.legend(frameon=False, bbox_to_anchor=(1,1))
# plt.savefig(os.path.join(figdir, f"drug_scores_{group}.pdf"), bbox_inches="tight")
plt.show()
</code>
|
{
"filename": "transcriptomics_10_drug_visium_2.ipynb",
"repository": "imsb-uke/ANCA-GN",
"query": "transformed_from_existing",
"size": 161124,
"sha": ""
}
|
# model_main2_1.ipynb
Repository: Rajcc/RAG
<code>
import langchain
</code>
<code>
from langchain_community.document_loaders import PyPDFLoader
</code>
<code>
from langchain.text_splitter import RecursiveCharacterTextSplitter
</code>
<code>
from langchain.vectorstores import Chroma
from langchain.embeddings import HuggingFaceEmbeddings
</code>
<code>
import os
</code>
<code>
def read_doc(doc):
read=PyPDFLoader(doc)
file_loader=read.load()
return file_loader
</code>
<code>
read_file=read_doc("Bioinformatics.pdf")
read_file
</code>
<code>
def split(doc, chunk_size=800, chunk_overlap=50):
    # use the passed-in parameters instead of hard-coded values
    splitter = RecursiveCharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=chunk_overlap)
    return splitter.split_documents(doc)
</code>
<code>
chunks=split(read_file)
chunks
</code>
<code>
model_name = "sentence-transformers/all-mpnet-base-v2"
embeddings = HuggingFaceEmbeddings(model_name=model_name)
# no need to embed manually; Chroma calls the HuggingFace embedding model on the chunks automatically
# embeddings = hf.embed_documents("CarbonFootprint.pdf")
# print(embeddings)
</code>
<code>
# pass the chunks and the embedding model to Chroma; it embeds each chunk and stores the vectors in the persist directory
vectorstore = Chroma.from_documents(documents=chunks,embedding= embeddings, persist_directory="./chroma_store")
vectorstore.persist()
print("Number of vectors in store:", vectorstore._collection.count())
</code>
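With the vectors persisted, the store can be queried directly. A minimal retrieval sketch; the query string is hypothetical and only illustrates the API:
<code>
# retrieve the chunks most similar to a question (illustrative query)
query = "What is sequence alignment?"
results = vectorstore.similarity_search(query, k=3)
for doc in results:
    print(doc.page_content[:200], "\n---")
</code>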
|
{
"filename": "model_main2_1.ipynb",
"repository": "Rajcc/RAG",
"query": "transformed_from_existing",
"size": 70647,
"sha": ""
}
|