{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## How to implement my own algorithm in FedLab\n",
    "\n",
    "We provide reproductions of federated learning algorithms in fedlab.contrib.algorithm, which demonstrate the flexibility and reusability of FedLab primitives."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Customize Client\n",
    "\n",
    "We encourage users to read the source code of our framework before customizing their own algorithms in FedLab.\n",
    "The source code of the abstract client trainer class is available in the FedLab [repo](https://github.com/SMILELab-FL/FedLab/blob/master/fedlab/core/client/trainer.py).\n",
    "\n",
    "To implement a FedLab trainer, the user needs to create a class that is derived from fedlab.core.client.trainer.ClientTrainer and implement the following properties or functions:\n",
    "\n",
    "- uplink_package(property): the information that your clients would upload to the FL server.\n",
    "- setup_dataset(function): the initialization of local dataset.\n",
    "- setup_optim(function): the initialization of local optimization algorithm.\n",
    "- train(function): perform the standard PyTorch model training process.\n",
    "- local_process(function): organize your dataset, optimization, and model training process.\n",
    "\n",
    "We provide an example implementation of SGDClientTrainer and SGDSerialClientTrainer (fedlab.contrib.algorithm.basic_client) below:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import sys\n",
    "sys.path.append(\"../\")\n",
    "\n",
    "from copy import deepcopy\n",
    "import torch\n",
    "from fedlab.core.client.trainer import ClientTrainer, SerialClientTrainer\n",
    "from fedlab.utils import Logger, SerializationTool\n",
    "\n",
    "class SGDClientTrainer(ClientTrainer):\n",
    "    \"\"\"Client backend handler. This class provides data-processing methods to the upper layer.\n",
    "\n",
    "    Args:\n",
    "        model (torch.nn.Module): PyTorch model.\n",
    "        cuda (bool, optional): use GPUs or not. Default: ``False``.\n",
    "        device (str, optional): Assign model/data to the given GPUs. E.g., 'cuda:0' or 'cuda:0,1'. Defaults to None.\n",
    "        logger (Logger, optional): Object of :class:`Logger`.\n",
    "    \"\"\"\n",
    "    def __init__(self,\n",
    "                 model:torch.nn.Module,\n",
    "                 cuda:bool=False,\n",
    "                 device:str=None,\n",
    "                 logger:Logger=None):\n",
    "        super(SGDClientTrainer, self).__init__(model, cuda, device)\n",
    "\n",
    "        self._LOGGER = Logger() if logger is None else logger\n",
    "\n",
    "    @property\n",
    "    def uplink_package(self):\n",
    "        \"\"\"Return a tensor list for uploading to server.\n",
    "\n",
    "            This attribute will be called by client manager.\n",
    "            Customize it for new algorithms.\n",
    "        \"\"\"\n",
    "        return [self.model_parameters]\n",
    "\n",
    "    def setup_dataset(self, dataset):\n",
    "        self.dataset = dataset\n",
    "\n",
    "    def setup_optim(self, epochs, batch_size, lr):\n",
    "        \"\"\"Set up local optimization configuration.\n",
    "\n",
    "        Args:\n",
    "            epochs (int): Local epochs.\n",
    "            batch_size (int): Local batch size. \n",
    "            lr (float): Learning rate.\n",
    "        \"\"\"\n",
    "        self.epochs = epochs\n",
    "        self.batch_size = batch_size\n",
    "        self.optimizer = torch.optim.SGD(self._model.parameters(), lr)\n",
    "        self.criterion = torch.nn.CrossEntropyLoss()\n",
    "\n",
    "    def local_process(self, payload, id):\n",
    "        model_parameters = payload[0]\n",
    "        train_loader = self.dataset.get_dataloader(id, self.batch_size)\n",
    "        self.train(model_parameters, train_loader)\n",
    "\n",
    "    def train(self, model_parameters, train_loader) -> None:\n",
    "        \"\"\"Client trains its local model on local dataset.\n",
    "\n",
    "        Args:\n",
    "            model_parameters (torch.Tensor): Serialized model parameters.\n",
    "            train_loader (torch.utils.data.DataLoader): DataLoader for the local dataset.\n",
    "        \"\"\"\n",
    "        SerializationTool.deserialize_model(\n",
    "            self._model, model_parameters)  # load parameters\n",
    "        self._LOGGER.info(\"Local train procedure is running\")\n",
    "        for ep in range(self.epochs):\n",
    "            self._model.train()\n",
    "            for data, target in train_loader:\n",
    "                if self.cuda:\n",
    "                    data, target = data.cuda(self.device), target.cuda(self.device)\n",
    "\n",
    "                outputs = self._model(data)\n",
    "                loss = self.criterion(outputs, target)\n",
    "\n",
    "                self.optimizer.zero_grad()\n",
    "                loss.backward()\n",
    "                self.optimizer.step()\n",
    "        self._LOGGER.info(\"Local train procedure is finished\")\n",
    "\n",
    "\n",
    "class SGDSerialClientTrainer(SerialClientTrainer):\n",
    "    \"\"\"\n",
    "    Train multiple clients in a single process.\n",
    "\n",
    "    Customize :meth:`_get_dataloader` or :meth:`_train_alone` for specific algorithm design in clients.\n",
    "\n",
    "    Args:\n",
    "        model (torch.nn.Module): Model used in this federation.\n",
    "        num (int): Number of clients in current trainer.\n",
    "        cuda (bool): Use GPUs or not. Default: ``False``.\n",
    "        device (str, optional): Assign model/data to the given GPUs. E.g., 'cuda:0' or 'cuda:0,1'. Defaults to None.\n",
    "        logger (Logger, optional): Object of :class:`Logger`.\n",
    "        personal (bool, optional): If True is passed, SerialModelMaintainer will generate a copy of the local parameter list for each client and maintain them respectively. These parameters are indexed by [0, num-1]. Defaults to False.\n",
    "    \"\"\"\n",
    "    def __init__(self, model, num, cuda=False, device=None, logger=None, personal=False) -> None:\n",
    "        super().__init__(model, num, cuda, device, personal)\n",
    "        self._LOGGER = Logger() if logger is None else logger\n",
    "        self.cache = []\n",
    "\n",
    "    def setup_dataset(self, dataset):\n",
    "        self.dataset = dataset\n",
    "\n",
    "    def setup_optim(self, epochs, batch_size, lr):\n",
    "        \"\"\"Set up local optimization configuration.\n",
    "\n",
    "        Args:\n",
    "            epochs (int): Local epochs.\n",
    "            batch_size (int): Local batch size.\n",
    "            lr (float): Learning rate.\n",
    "        \"\"\"\n",
    "        self.epochs = epochs\n",
    "        self.batch_size = batch_size\n",
    "        self.optimizer = torch.optim.SGD(self._model.parameters(), lr)\n",
    "        self.criterion = torch.nn.CrossEntropyLoss()\n",
    "\n",
    "    @property\n",
    "    def uplink_package(self):\n",
    "        package = deepcopy(self.cache)\n",
    "        self.cache = []\n",
    "        return package\n",
    "\n",
    "    def local_process(self, payload, id_list):\n",
    "        model_parameters = payload[0]\n",
    "        for id in id_list:\n",
    "            data_loader = self.dataset.get_dataloader(id, self.batch_size)\n",
    "            pack = self.train(model_parameters, data_loader)\n",
    "            self.cache.append(pack)\n",
    "\n",
    "    def train(self, model_parameters, train_loader):\n",
    "        \"\"\"Single round of local training for one client.\n",
    "\n",
    "        Note:\n",
    "            Overwrite this method to customize the PyTorch training pipeline.\n",
    "\n",
    "        Args:\n",
    "            model_parameters (torch.Tensor): serialized model parameters.\n",
    "            train_loader (torch.utils.data.DataLoader): :class:`torch.utils.data.DataLoader` for this client.\n",
    "        \"\"\"\n",
    "        self.set_model(model_parameters)\n",
    "        self._model.train()\n",
    "\n",
    "        for _ in range(self.epochs):\n",
    "            for data, target in train_loader:\n",
    "                if self.cuda:\n",
    "                    data = data.cuda(self.device)\n",
    "                    target = target.cuda(self.device)\n",
    "\n",
    "                output = self.model(data)\n",
    "                loss = self.criterion(output, target)\n",
    "\n",
    "                self.optimizer.zero_grad()\n",
    "                loss.backward()\n",
    "                self.optimizer.step()\n",
    "\n",
    "        return [self.model_parameters]"
   ]
  },
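  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The contract between `local_process` and `uplink_package` can be sketched with a toy, dependency-free example. Plain Python lists stand in for serialized tensors, and the per-client \"training\" step is a simple perturbation; the class and names below are illustrative only, not FedLab API:\n",
    "\n",
    "```python\n",
    "class ToySerialTrainer:\n",
    "    \"\"\"Toy stand-in for a serial client trainer (illustrative only).\"\"\"\n",
    "    def __init__(self, num_clients):\n",
    "        self.num_clients = num_clients\n",
    "        self.cache = []  # filled by local_process, drained by uplink_package\n",
    "\n",
    "    @property\n",
    "    def uplink_package(self):\n",
    "        # Hand the accumulated per-client results to the manager layer,\n",
    "        # then reset the cache for the next round.\n",
    "        package = list(self.cache)\n",
    "        self.cache = []\n",
    "        return package\n",
    "\n",
    "    def local_process(self, payload, id_list):\n",
    "        global_params = payload[0]\n",
    "        for cid in id_list:\n",
    "            # \"Training\" here just shifts the global parameters per client.\n",
    "            local_params = [p + 0.1 * (cid + 1) for p in global_params]\n",
    "            self.cache.append([local_params])\n",
    "\n",
    "trainer = ToySerialTrainer(num_clients=3)\n",
    "trainer.local_process(payload=[[1.0, 2.0]], id_list=[0, 1])\n",
    "package = trainer.uplink_package\n",
    "print(len(package))   # one entry per trained client id\n",
    "print(trainer.cache)  # cache is drained after uplink\n",
    "```"
   ]
  },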
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Customize Server\n",
    "\n",
    "We encourage users to read the source code of our framework before customizing their own algorithms in FedLab.\n",
    "The source code of the abstract server handler class is available in the FedLab [repo](https://github.com/SMILELab-FL/FedLab/blob/master/fedlab/core/server/handler.py).\n",
    "\n",
    "To implement a FedLab handler, the user needs to create a class that is derived from fedlab.core.server.handler.ServerHandler and implement the following properties or functions:\n",
    "\n",
    "- downlink_package(property): the information that the server sends down to clients.\n",
    "- if_stop(property): a bool value to determine the time to stop.\n",
    "- load(function): register the information uploaded by clients.  \n",
    "- global_update(function): the global update algorithm.\n",
    "\n",
    "We provide an example implementation of SyncServerHandler (fedlab.contrib.algorithm.basic_server) below:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import random\n",
    "from copy import deepcopy\n",
    "\n",
    "from typing import List\n",
    "from fedlab.utils import Logger, Aggregators, SerializationTool\n",
    "from fedlab.core.server.handler import ServerHandler\n",
    "\n",
    "class SyncServerHandler(ServerHandler):\n",
    "    \"\"\"Synchronous Parameter Server Handler.\n",
    "\n",
    "    This class is responsible for backend computing in the synchronous parameter server.\n",
    "\n",
    "    A synchronous parameter server waits for every sampled client to finish its local training\n",
    "    process before starting the next FL round.\n",
    "\n",
    "    Details in paper: http://proceedings.mlr.press/v54/mcmahan17a.html\n",
    "\n",
    "    Args:\n",
    "        model (torch.nn.Module): Model used in this federation.\n",
    "        global_round (int): stop condition. Shut down FL system when global round is reached.\n",
    "        sample_ratio (float): The result of ``sample_ratio * client_num`` is the number of clients for every FL round.\n",
    "        cuda (bool): Use GPUs or not. Default: ``False``.\n",
    "        device (str, optional): Assign model/data to the given GPUs. E.g., 'cuda:0' or 'cuda:0,1'. Defaults to None. If device is None and cuda is True, FedLab will set the GPU with the largest memory as default.\n",
    "        logger (Logger, optional): object of :class:`Logger`.\n",
    "    \"\"\"\n",
    "    def __init__(self,\n",
    "                 model: torch.nn.Module,\n",
    "                 global_round: int,\n",
    "                 sample_ratio: float,\n",
    "                 cuda: bool = False,\n",
    "                 device:str=None,\n",
    "                 logger: Logger = None):\n",
    "        super(SyncServerHandler, self).__init__(model, cuda, device)\n",
    "\n",
    "        self._LOGGER = Logger() if logger is None else logger\n",
    "        assert sample_ratio >= 0.0 and sample_ratio <= 1.0\n",
    "\n",
    "        # basic setting\n",
    "        self.client_num = 0\n",
    "        self.sample_ratio = sample_ratio\n",
    "\n",
    "        # client buffer\n",
    "        self.client_buffer_cache = []\n",
    "\n",
    "        # stop condition\n",
    "        self.global_round = global_round\n",
    "        self.round = 0\n",
    "\n",
    "    @property\n",
    "    def downlink_package(self) -> List[torch.Tensor]:\n",
    "        \"\"\"Property for manager layer. Server manager will call this property when activates clients.\"\"\"\n",
    "        return [self.model_parameters]\n",
    "\n",
    "    @property\n",
    "    def if_stop(self):\n",
    "        \"\"\":class:`NetworkManager` keeps monitoring this attribute, and it will stop all related processes and threads when ``True`` returned.\"\"\"\n",
    "        return self.round >= self.global_round\n",
    "\n",
    "    @property\n",
    "    def client_num_per_round(self):\n",
    "        return max(1, int(self.sample_ratio * self.client_num))\n",
    "\n",
    "    def sample_clients(self):\n",
    "        \"\"\"Return a list of client rank indices selected randomly. The client ID is from ``0`` to\n",
    "        ``self.client_num -1``.\"\"\"\n",
    "        selection = random.sample(range(self.client_num),\n",
    "                                  self.client_num_per_round)\n",
    "        return sorted(selection)\n",
    "\n",
    "    def global_update(self, buffer):\n",
    "        parameters_list = [ele[0] for ele in buffer]\n",
    "        serialized_parameters = Aggregators.fedavg_aggregate(parameters_list)\n",
    "        SerializationTool.deserialize_model(self._model, serialized_parameters)\n",
    "\n",
    "    def load(self, payload: List[torch.Tensor]) -> bool:\n",
    "        \"\"\"Update global model with collected parameters from clients.\n",
    "\n",
    "        Note:\n",
    "            Server handler will call this method when its ``client_buffer_cache`` is full. User can\n",
    "            overwrite the strategy of aggregation to apply on :attr:`model_parameters_list`, and\n",
    "            use :meth:`SerializationTool.deserialize_model` to load serialized parameters after\n",
    "            aggregation into :attr:`self._model`.\n",
    "\n",
    "        Args:\n",
    "            payload (list[torch.Tensor]): A list of tensors passed by manager layer.\n",
    "        \"\"\"\n",
    "        assert len(payload) > 0\n",
    "        self.client_buffer_cache.append(deepcopy(payload))\n",
    "\n",
    "        assert len(self.client_buffer_cache) <= self.client_num_per_round\n",
    "\n",
    "        if len(self.client_buffer_cache) == self.client_num_per_round:\n",
    "            self.global_update(self.client_buffer_cache)\n",
    "            self.round += 1\n",
    "\n",
    "            # reset cache\n",
    "            self.client_buffer_cache = []\n",
    "\n",
    "            return True  # return True to end this round.\n",
    "        else:\n",
    "            return False"
   ]
  },
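  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The load / global_update / if_stop control flow above can be traced end-to-end with a toy, dependency-free sketch. Plain Python lists stand in for serialized tensors, and FedAvg with equal weights reduces to an element-wise mean; none of the names below are FedLab API:\n",
    "\n",
    "```python\n",
    "class ToyHandler:\n",
    "    \"\"\"Toy stand-in for a synchronous server handler (illustrative only).\"\"\"\n",
    "    def __init__(self, init_params, global_round, clients_per_round):\n",
    "        self.params = init_params\n",
    "        self.global_round = global_round\n",
    "        self.clients_per_round = clients_per_round\n",
    "        self.buffer = []\n",
    "        self.round = 0\n",
    "\n",
    "    @property\n",
    "    def downlink_package(self):\n",
    "        return [self.params]\n",
    "\n",
    "    @property\n",
    "    def if_stop(self):\n",
    "        return self.round >= self.global_round\n",
    "\n",
    "    def global_update(self, buffer):\n",
    "        # Equal-weight FedAvg: element-wise mean of client parameters.\n",
    "        n = len(buffer)\n",
    "        self.params = [sum(vals) / n for vals in zip(*(b[0] for b in buffer))]\n",
    "\n",
    "    def load(self, payload):\n",
    "        self.buffer.append(payload)\n",
    "        if len(self.buffer) == self.clients_per_round:\n",
    "            self.global_update(self.buffer)\n",
    "            self.buffer = []\n",
    "            self.round += 1\n",
    "            return True  # this round is finished\n",
    "        return False\n",
    "\n",
    "handler = ToyHandler(init_params=[0.0, 0.0], global_round=2, clients_per_round=2)\n",
    "while not handler.if_stop:\n",
    "    params = handler.downlink_package[0]\n",
    "    # Each \"client\" shifts the global parameters by a constant.\n",
    "    for cid in range(handler.clients_per_round):\n",
    "        handler.load([[p + cid + 1 for p in params]])\n",
    "print(handler.round)   # number of completed rounds\n",
    "print(handler.params)  # aggregated parameters after the last round\n",
    "```"
   ]
  },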
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Customize Communication Agreements"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We designed reasonable APIs and comprehensive abstract classes in fedlab.core, including P2P communication APIs and abstract client and server APIs. Furthermore, we provide implementations of common FL algorithms in fedlab.contrib for users to learn from.\n",
    "\n",
    "More useful information is available on our documentation website:\n",
    "\n",
    "- [Communication APIs](https://fedlab.readthedocs.io/en/master/tutorials/distributed_communication.html)\n",
    "- [Network manager design](https://fedlab.readthedocs.io/en/master/tutorials/communication_strategy.html)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3.10.0 ('fedlab')",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.0"
  },
  "orig_nbformat": 4,
  "vscode": {
   "interpreter": {
    "hash": "019ae50596e3d4df627f3288be8543f4b17347150bdb9d2aa2e7c637014aee00"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
