import streamlit as st

st.title("PhD-Level Machine Learning Course")

st.header("Course Overview")
st.write(""" |
|
This PhD-level course in Machine Learning is designed to cover both foundational and cutting-edge topics, |
|
providing a comprehensive understanding of machine learning theory, advanced optimization techniques, probabilistic models, and more. |
|
The course involves hands-on projects and research problems to solidify your knowledge in this rapidly evolving field. |
|
""") |
|
|
|
|
|
st.header("Course Objectives") |
|
st.markdown(""" |
|
- **Understanding Deep Learning Theory**: Dive into the foundational theories behind deep learning models. |
|
- **Exploration of Probabilistic Graphical Models**: Study the application of PGM in ML. |
|
- **Advanced Optimization Techniques**: Explore methods like stochastic gradient descent, variational inference, and more. |
|
- **Cutting-edge Research Areas**: Learn about the latest research in generative models, reinforcement learning, and unsupervised learning. |
|
- **Hands-on Research and Projects**: Implement state-of-the-art models in practical settings. |
|
""") |
|
|
|
|
|
st.header("Course Syllabus") |
|
|
|
|
|
st.subheader("Module 1: Foundations and Theoretical Aspects") |
|
st.write(""" |
|
- **Linear Models**: Linear Regression, Ridge and Lasso Regression, Regularization Techniques. |
|
- **Convex Optimization**: Convex sets, Gradient Descent, Duality. |
|
- **Kernel Methods**: Support Vector Machines, Kernel tricks, Gaussian and Polynomial Kernels. |
|
- **Readings**: "Elements of Statistical Learning" by Hastie, Tibshirani, Friedman. |
|
""") |
|
st.write("### Problems for Module 1:") |
|
st.write(""" |
|
1. Implement simple linear regression from scratch using only NumPy. Compare your results with scikit-learn. |
|
2. Solve for the weights of a ridge regression model by manually deriving the closed-form solution. |
|
3. Implement Lasso regression using coordinate descent. Evaluate its performance on a synthetic dataset. |
|
4. Prove that the cost function in linear regression is convex. |
|
5. Solve the constrained optimization problem: minimize f(x) = x^2 subject to x ≥ 1 using gradient descent. |
|
""") |

st.subheader("Module 2: Probabilistic Models in Machine Learning")

st.write("""
- **Bayesian Networks**: Structure learning, conditional independence, Markov properties.
- **Hidden Markov Models (HMMs)**: The forward-backward algorithm, the Viterbi algorithm, Baum-Welch for parameter estimation.
- **Gaussian Mixture Models (GMMs)**: Expectation-maximization (EM), clustering.
- **Readings**: "Pattern Recognition and Machine Learning" by Bishop.
""")
st.write("### Problems for Module 2:") |
|
st.write(""" |
|
1. Build a Bayesian network for a medical diagnosis problem. Perform exact inference on this network. |
|
2. Derive the update rules for Bayesian networks and implement them to calculate posterior probabilities. |
|
3. Implement the forward-backward algorithm for HMM and apply it to a sequence prediction problem. |
|
4. Use Gaussian Mixture Models for image segmentation. |
|
5. Implement variable elimination for a small Bayesian network and compare it with junction tree inference. |
|
""") |

st.subheader("Module 3: Advanced Deep Learning")

st.write("""
- **Generative Adversarial Networks (GANs)**: The GAN formulation, loss functions, generator-discriminator dynamics.
- **Variational Autoencoders (VAEs)**: Latent-variable models, KL divergence, the reparameterization trick.
- **Deep Reinforcement Learning**: Policy gradients, Q-learning, actor-critic methods.
- **Readings**: "Deep Learning" by Goodfellow, Bengio, and Courville.
""")
st.write("### Problems for Module 3:") |
|
st.write(""" |
|
1. Implement a basic GAN from scratch using PyTorch and train it on the MNIST dataset. |
|
2. Build a CycleGAN to perform image translation (e.g., transforming horses into zebras) using a public dataset. |
|
3. Implement a Variational Autoencoder (VAE) for dimensionality reduction on the MNIST dataset. |
|
4. Train a Proximal Policy Optimization (PPO) agent for continuous control in the BipedalWalker environment. |
|
5. Derive the loss functions for both the generator and discriminator in GANs and explain their interaction during training. |
|
""") |

st.subheader("Module 4: Reinforcement Learning")

st.write("""
- **Value-Based Methods**: Markov decision processes (MDPs), Bellman equations, Q-learning, SARSA.
- **Policy-Based Methods**: Policy gradient methods, the REINFORCE algorithm, actor-critic models.
- **Readings**: "Reinforcement Learning: An Introduction" by Sutton and Barto.
""")
st.write("### Problems for Module 4:") |
|
st.write(""" |
|
1. Implement Q-learning from scratch to solve a grid-world environment. |
|
2. Apply SARSA to a stochastic grid-world environment and compare its performance to Q-learning. |
|
3. Implement the REINFORCE algorithm for a simple policy gradient task and analyze the variance in the gradient estimates. |
|
4. Train an actor-critic agent using A2C in a continuous state space environment. |
|
5. Train an agent using PPO in the Humanoid-v2 environment from OpenAI Gym. |
|
""") |

st.subheader("Module 5: Transfer Learning and Meta-Learning")

st.write("""
- **Transfer Learning Concepts**: Fine-tuning, feature extraction, adapting pre-trained models.
- **Meta-Learning (Few-Shot Learning)**: Learning to learn, MAML, prototypical networks.
- **Readings**: Recent advances in transfer learning (BERT, GPT, etc.).
""")
st.write("### Problems for Module 5:") |
|
st.write(""" |
|
1. Fine-tune BERT for a text classification task on a custom dataset using Hugging Face. |
|
2. Implement feature extraction from a pre-trained model (e.g., VGG16) and apply it to a new dataset for object detection. |
|
3. Train a Prototypical Network for few-shot learning and test it on a small dataset of handwritten characters. |
|
4. Explore how few-shot learning techniques can be applied to reinforcement learning tasks. |
|
5. Train a transfer learning model on medical images (e.g., X-rays) using DenseNet and analyze the results. |
|
""") |

st.subheader("Module 6: Unsupervised and Self-Supervised Learning")

st.write("""
- **Clustering and Dimensionality Reduction**: K-means, PCA, t-SNE, UMAP.
- **Self-Supervised Learning**: SimCLR, BYOL, denoising autoencoders.
""")
st.write("### Problems for Module 6:") |
|
st.write(""" |
|
1. Implement K-means clustering and apply it to a dataset of customer behavior data to segment users. |
|
2. Use t-SNE to visualize high-dimensional data (e.g., word embeddings) and explore the resulting clusters. |
|
3. Train a self-supervised model on an image dataset and visualize learned features. |
|
4. Implement contrastive learning (SimCLR) to learn visual representations without labels. |
|
5. Apply PCA for dimensionality reduction on a real-world dataset (e.g., image data). |
|
""") |

st.subheader("Module 7: Advanced Optimization Techniques")

st.write("""
- **Stochastic Optimization**: SGD, Adam, RMSProp, learning rate schedules.
- **Variational Inference**: ELBO maximization, stochastic variational inference, Monte Carlo methods.
""")
st.write("### Problems for Module 7:") |
|
st.write(""" |
|
1. Implement stochastic gradient descent (SGD) with learning rate schedules and compare different optimization techniques (Adam, RMSProp) on the MNIST dataset. |
|
2. Derive the update rules for the Adam optimizer and implement it from scratch. |
|
3. Train a deep learning model with cyclical learning rates and analyze the training dynamics. |
|
4. Implement variational inference to fit a Bayesian neural network on a small dataset. |
|
5. Explore the impact of weight decay and momentum in training deep networks and visualize their effect on the loss surface. |
|
""") |

st.subheader("Module 8: Special Topics in Machine Learning")

st.write("""
- **Interpretability**: SHAP, LIME, counterfactual explanations, fairness in ML.
- **Adversarial Learning**: FGSM, PGD, robust models.
- **Causal Inference**: Structural equation modeling (SEM), causal graphs, counterfactuals.
- **Readings**: "The Book of Why" by Judea Pearl.
""")
st.write("### Problems for Module 8:") |
|
st.write(""" |
|
1. Implement an adversarial attack on a deep learning model and build defenses. |
|
2. Use SHAP and LIME to interpret the results of a complex machine learning model. |
|
3. Develop a causal inference model for decision-making in healthcare or economics. |
|
4. Implement counterfactual explanations to improve model interpretability. |
|
5. Explore fairness in machine learning using real-world datasets. |
|
""") |

st.header("Assessments")

st.write("""
- **Midterm Project**: Design and implement a machine learning model that incorporates advanced theoretical methods and optimization techniques.
- **Final Research Paper**: Write and present a research paper on a selected topic in advanced machine learning, either proposing a novel method or experimenting with cutting-edge models.
""")
st.header("Additional Resources") |
|
st.write(""" |
|
- **Books**: |
|
- "Elements of Statistical Learning" by Hastie, Tibshirani, Friedman |
|
- "Pattern Recognition and Machine Learning" by Christopher M. Bishop |
|
- "Deep Learning" by Ian Goodfellow, Yoshua Bengio, Aaron Courville |
|
- "Reinforcement Learning: An Introduction" by Richard S. Sutton and Andrew G. Barto |
|
- "The Book of Why" by Judea Pearl (for causal inference) |
|
- **Libraries & Tools**: |
|
- **PyTorch**: A deep learning framework used for various implementations (GANs, VAEs, etc.). |
|
- **TensorFlow**: Another widely-used deep learning library. |
|
- **Hugging Face**: Pre-trained models for transfer learning and NLP tasks. |
|
- **OpenAI Gym**: A toolkit for developing and comparing reinforcement learning algorithms. |
|
- **Pyro**: A probabilistic programming library built on PyTorch for Bayesian networks and probabilistic models. |
|
- **Research Journals & Conferences**: |
|
- NeurIPS (Neural Information Processing Systems) |
|
- ICML (International Conference on Machine Learning) |
|
- ICLR (International Conference on Learning Representations) |
|
""") |

st.write("Good luck with your studies and projects in advanced machine learning! Stay curious and keep exploring.")