NaturalGradient committed
Update README.md
README.md
CHANGED
@@ -6,7 +6,7 @@ license: cc-by-4.0
Welcome to the model weights for the paper ["Protein Sequence Modelling with Bayesian Flow Networks"](https://www.biorxiv.org/content/10.1101/2024.09.24.614734v1). Using the [code on our GitHub page](https://github.com/instadeepai/protein-sequence-bfn), you can sample from our trained models: ProtBFN, for general proteins, and AbBFN, for antibody VH chains.
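For example, the weights can be fetched with the `huggingface_hub` client before running the repository's sampling scripts. A minimal sketch, assuming the `repo_id` shown below; substitute the identifier of this model page if it differs:

```python
# Minimal sketch: fetch the model weights with huggingface_hub.
# NOTE: the repo_id is an assumption -- replace it with this page's identifier.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="InstaDeepAI/protein-sequence-bfn")
print(f"Weights available under: {local_dir}")
```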
Bayesian Flow Networks are a new approach to generative modelling, and can be viewed as an extension of diffusion models to the parameter space of probability distributions. They define a continuous-time process that maps between a naive prior distribution and a pseudo-deterministic posterior distribution for each variable independently. By training our neural network to 'denoise' the current posterior, taking into account the mutual information between variables, we implicitly maximise a variational lower bound on the log-likelihood. We can then use the trained network to generate samples from the learned distribution.
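To make the Bayesian update concrete, here is a schematic sketch of the per-variable update for discrete data, following the original BFN paper (Graves et al., 2023); it is an illustration, not the repository's implementation. A noisy 'sender' sample of the true class multiplicatively sharpens the categorical parameters:

```python
import numpy as np

def bayesian_update(theta, k_true, alpha, rng):
    """One Bayesian update of the categorical parameters `theta` (shape [K])
    of a single discrete variable, per the discrete-data update in
    Graves et al. (2023). A schematic sketch, not the repository's code."""
    K = theta.shape[0]
    e_k = np.zeros(K)
    e_k[k_true] = 1.0
    # Noisy 'sender' sample of the true class with accuracy alpha:
    # y ~ N(alpha * (K * e_k - 1), alpha * K * I).
    y = rng.normal(loc=alpha * (K * e_k - 1.0), scale=np.sqrt(alpha * K))
    # Multiplicative Bayesian update in parameter space, then renormalise.
    theta_new = theta * np.exp(y)
    return theta_new / theta_new.sum()

# Example: one update on a uniform prior over 20 amino acids.
rng = np.random.default_rng(0)
theta = bayesian_update(np.full(20, 1 / 20), k_true=3, alpha=0.5, rng=rng)
```

Starting from the uniform prior and applying this update repeatedly with an increasing accuracy schedule drives `theta` towards the near-deterministic posterior described above.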
One of the benefits of defining such a process in probability parameter space is that it can be applied to *any* family of distributions with continuous-valued parameters. This means that BFNs can be applied directly to discrete data, allowing diffusion-like generative modelling of sequences without restrictive left-to-right inductive biases and without relying on discrete-time stochastic processes. The main focus of our work is to investigate the application of BFNs to *protein sequences*, represented as sequences of amino acids. The ProtBFN methodology is broadly summarised below.
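For intuition about how generation works in this discrete setting, the sketch below wraps the Bayesian update shown earlier into a full sampling loop, following the discrete-data sampling procedure of Graves et al. (2023). The `model(theta, t)` callable is a hypothetical stand-in for a trained network that returns per-position output probabilities; this is an illustrative sketch, not the ProtBFN sampling code:

```python
import numpy as np

def sample_sequence(model, seq_len, K, n_steps, beta_1, rng):
    """Schematic discrete-BFN sampling loop (Graves et al., 2023).
    `model` is a hypothetical stand-in for the trained network and must
    return output probabilities of shape [seq_len, K]."""
    theta = np.full((seq_len, K), 1.0 / K)  # naive uniform prior per position
    for i in range(1, n_steps + 1):
        t_prev, t = (i - 1) / n_steps, i / n_steps
        probs = model(theta, t_prev)  # network 'denoises' the current posterior
        k = np.array([rng.choice(K, p=p) for p in probs])
        # Accuracy added at this step under the schedule beta(t) = beta_1 * t^2.
        alpha = beta_1 * (t ** 2 - t_prev ** 2)
        e_k = np.eye(K)[k]
        # Noisy sender sample, then multiplicative Bayesian update of theta.
        y = rng.normal(alpha * (K * e_k - 1.0), np.sqrt(alpha * K))
        theta = theta * np.exp(y)
        theta /= theta.sum(axis=-1, keepdims=True)
    # Draw the final sequence from the output distribution at t = 1.
    final_probs = model(theta, 1.0)
    return np.array([rng.choice(K, p=p) for p in final_probs])
```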