AlphaFold 2 Beta Strand Database
Dataset Summary and Creation
The AlphaFold 2 (AF2) Beta Strand Database is a database of high-confidence beta strand pairs predicted by AlphaFold 2, a revolutionary protein structure prediction system. All 214 million protein structures in the AlphaFold Protein Structure Database (AlphaFold DB) were analyzed, and well-aligned pairs of amino acid sequences in beta-strand conformation were collected according to specific criteria. The database consists of 1,532,767,459 beta strand pairs, reduced to 488,153,783 unique pairs after combining repeated instances. Approximately 74.9% of the pairs are antiparallel and 25.1% are parallel.
High-confidence pairs were selected using the following criteria: sequence lengths of at least three residues, a predicted Local Distance Difference Test (pLDDT) score of at least 70 for all residues, and a Predicted Aligned Error (PAE) of at most 10 for facing residues. These criteria help ensure accurate backbone predictions and minimize the inclusion of inaccurately paired residues. The database can be used for various applications, including, but not limited to, generative model training, predictive algorithm development, and in-depth beta strand studies.
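The exact extraction pipeline is described in the accompanying paper; the snippet below is only a minimal sketch of how such per-residue filters could be expressed, assuming hypothetical inputs plddt_a/plddt_b (per-residue pLDDT scores of the two strands) and pae_facing (PAE values of the facing residue pairs).

```python
import numpy as np

def passes_filters(strand_a, strand_b, plddt_a, plddt_b, pae_facing,
                   min_len=3, min_plddt=70.0, max_pae=10.0):
    """Illustrative filter only; the actual pipeline may differ in detail.

    Keeps a strand pair if both strands have at least `min_len` residues,
    every residue has pLDDT >= `min_plddt`, and every facing residue pair
    has PAE <= `max_pae`.
    """
    if len(strand_a) < min_len or len(strand_b) < min_len:
        return False
    if np.min(plddt_a) < min_plddt or np.min(plddt_b) < min_plddt:
        return False
    if np.max(pae_facing) > max_pae:
        return False
    return True
```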
Data Instances and Loading
Each entry of the AF2 Beta Strand Database comprises three key components: (1) Target sequence: a beta strand sequence as labeled by AlphaFold 2, recorded in the N-to-C terminus direction; (2) Complementary peptide sequence: the sequence that pairs with the target sequence in a face-to-face orientation (for parallel pairs: N-to-C direction; for antiparallel pairs: C-to-N direction); (3) Count: the number of times the pair was observed. The pair type (antiparallel or parallel) is indicated by the data file path.
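If you have not yet downloaded a data file, the huggingface_hub client is one convenient option. The snippet below is only a sketch: the repo_id shown is a placeholder to be replaced with this dataset's actual repository name, and the filename may need adjusting to match the repository's folder layout.

```python
from huggingface_hub import hf_hub_download

# Placeholder repo_id: replace with the actual Hugging Face dataset repository.
local_path = hf_hub_download(
    repo_id="<namespace>/AF2_Beta_Strand_Database",
    filename="antiparallel/length_10/data_0.npy",
    repo_type="dataset",
)
print(local_path)
```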
As an example of database usage, suppose you have downloaded AF2_Beta_Strand_Database/antiparallel/length_10/data_0.npy and placed it in your current directory. Here's how to load the data and display the first 30 entries:
```python
import pandas as pd
import numpy as np

# Load the nested dictionary stored in the .npy file
data = np.load('data_0.npy', allow_pickle=True).tolist()

# Convert the nested dictionary to flat rows
flat_data = []
for target, inner_dict in data.items():
    for comp, count in inner_dict.items():
        flat_data.append([target, comp, count])

# Convert the list of lists to a DataFrame
df = pd.DataFrame(flat_data, columns=['Target', 'Complementary peptide', 'Count'])
print(df[:30])
```
Target | Complementary peptide | Count |
---|---|---|
ASLVMAVVIN | NHCVVLGWLR | 358 |
ASLVMAVVIN | HHCVVLGWLK | 124 |
ASLVMAVVIN | HHCVLLGWLK | 1 |
ASLVMAVVIN | NHCVVLGWLM | 1 |
ASLVMAVVIN | NHCVVLGWLK | 17 |
ASLVMAVVIN | HHCVVLGWLR | 32 |
ASLVMAVVIN | HHCVVLGRLK | 1 |
ASLVMAVVIN | HHSVILGWLR | 2 |
ASLVMAVVIN | DHCVVLGWLR | 1 |
ASLVMAVVIN | NHCVVLGWLG | 1 |
ASLVMAVVIN | HHCVVSGWLK | 1 |
ASLVMAVVIN | HHCVLLGWLR | 1 |
ASLVMAVVIN | HHCVVMGWLR | 1 |
ASLVMAVVIN | NHCVLLGWLR | 2 |
ASLVMAVVIN | NHCVVLDWLR | 1 |
ASLVMAVVIN | HHCGVLGWLK | 1 |
RLWGLVVCHN | NIVVAMVLSA | 358 |
RLWGLVVCHN | NVVVAMVLSA | 624 |
RLWGLVVCHN | NVIVAMVLSA | 57 |
RLWGLVVCHN | KVVVALVLSA | 25 |
RLWGLVVCHN | KVVVAMVLSA | 4 |
RLWGLVVCHN | NVVVSMVLSA | 63 |
RLWGLVVCHN | NIVVSMVMSA | 1 |
RLWGLVVCHN | NIVVAMVLSS | 4 |
RLWGLVVCHN | NVVIAMVLSA | 4 |
RLWGLVVCHN | NIVVAMVFSA | 3 |
RLWGLVVCHN | NIIVAMVLSA | 2 |
RLWGLVVCHN | NMVVAMVLSA | 1 |
RLWGLVVCHN | NVVASMVLSA | 1 |
RLWGLVVCHN | NVVVGMVLSA | 3 |
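Building on the DataFrame above, a common follow-up is to look up all complementary peptides recorded for a given target sequence, ranked by how often the pair was observed; the target used below is taken from the listing above.

```python
# Complementary peptides observed for one target, most frequent first
target = 'ASLVMAVVIN'
matches = df[df['Target'] == target].sort_values('Count', ascending=False)
print(matches.head(10))
```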
Additional Information
Dataset Curator
Haowen Zhao, haowen.zhao19@alumni.imperial.ac.uk
More information about this project
For detailed insight into TransformerBeta, we suggest referring to our research paper Computational design of target-specific linear peptide binders with TransformerBeta.
To use TransformerBeta effortlessly without installation, visit: TransformerBeta on Google Colab
To install TransformerBeta locally for your projects, visit: TransformerBeta on Github.
For accessing the model weights and corresponding training, validation, and test sets mentioned in the paper, refer to: TransformerBeta on Huggingface.
License
The AlphaFold 2 Beta Strand Database is made available under the terms of the MIT License.
Citing this Database
If you use this database in your research, we kindly request that you cite our paper. We also recommend citing the foundational AlphaFold papers, as our database was constructed based on their work.
Below is the BibTeX entry for our paper:
```bibtex
@article{zhao2024computational,
  title={Computational design of target-specific linear peptide binders with TransformerBeta},
  author={Zhao, Haowen and Aprile, Francesco A and Bravi, Barbara},
  journal={arXiv preprint arXiv:2410.16302},
  year={2024}
}
```
Below are the BibTeX entries for the AlphaFold papers:
```bibtex
@article{jumper2021highly,
  title={Highly accurate protein structure prediction with AlphaFold},
  author={Jumper, John and Evans, Richard and Pritzel, Alexander and Green, Tim and Figurnov, Michael and Ronneberger, Olaf and Tunyasuvunakool, Kathryn and Bates, Russ and {\v{Z}}{\'\i}dek, Augustin and Potapenko, Anna and others},
  journal={Nature},
  volume={596},
  number={7873},
  pages={583--589},
  year={2021},
  publisher={Nature Publishing Group UK London}
}

@article{varadi2022alphafold,
  title={AlphaFold Protein Structure Database: massively expanding the structural coverage of protein-sequence space with high-accuracy models},
  author={Varadi, Mihaly and Anyango, Stephen and Deshpande, Mandar and Nair, Sreenath and Natassia, Cindy and Yordanova, Galabina and Yuan, David and Stroe, Oana and Wood, Gemma and Laydon, Agata and others},
  journal={Nucleic acids research},
  volume={50},
  number={D1},
  pages={D439--D444},
  year={2022},
  publisher={Oxford University Press}
}
```