{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#  Representational bottlenecks in deep learning\n",
    "large layer ke bt, chote layer lagade, aur phr wapis bare layer lagade to, chote layer ke kam representaion (learning) ziada nahe ho pate to bottleneck problem hojata hai."
   ]
  },
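  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal NumPy sketch of the idea (the 64 -> 2 -> 64 sizes are made up for illustration): no matter how the weights are trained, everything that passes through the 2-unit layer has rank at most 2, so most of the 64-dimensional structure of the input cannot be recovered afterwards."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Hypothetical sizes for illustration: 64 features squeezed through 2 units.\n",
    "rng = np.random.default_rng(0)\n",
    "X = rng.normal(size=(100, 64))   # 100 samples, 64 features\n",
    "\n",
    "W1 = rng.normal(size=(64, 2))    # wide -> narrow (the bottleneck)\n",
    "W2 = rng.normal(size=(2, 64))    # narrow -> wide again\n",
    "\n",
    "H = np.maximum(X @ W1, 0)        # narrow hidden layer (ReLU)\n",
    "Y = H @ W2                       # back to 64 dimensions\n",
    "\n",
    "print(np.linalg.matrix_rank(X))  # 64: the input spans all dimensions\n",
    "print(np.linalg.matrix_rank(Y))  # at most 2: the bottleneck caps what survives"
   ]
  },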
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#  Vanishing gradients in deep learning\n",
    "weights(slope) ke value bht ziada ye bht kam hojai, to gradiant vanishing problem hojate hai.\n",
    "\n",
    "Learning rate and weight regulization L1, L2 se isay theik kara tha humne."
   ]
  },
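  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal NumPy sketch of the effect (the depth of 30 and the scalar sigmoid chain are chosen for illustration): the sigmoid derivative is at most 0.25, so the gradient reaching the earliest layer shrinks roughly geometrically with depth."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def sigmoid(z):\n",
    "    return 1.0 / (1.0 + np.exp(-z))\n",
    "\n",
    "depth = 30                        # hypothetical depth for illustration\n",
    "x = 0.5                           # arbitrary scalar input\n",
    "\n",
    "# Forward pass through `depth` scalar sigmoid layers (weights fixed at 1).\n",
    "activations = [x]\n",
    "for _ in range(depth):\n",
    "    activations.append(sigmoid(activations[-1]))\n",
    "\n",
    "# Backward pass: the gradient is the product of the local derivatives.\n",
    "grad = 1.0\n",
    "for a in activations[1:]:\n",
    "    grad *= a * (1.0 - a)         # sigmoid'(z) = s(z) * (1 - s(z))\n",
    "\n",
    "print(grad)                       # vanishingly small -> early layers barely learn"
   ]
  },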
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
