{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "_lNeCgAVkdhM"
      },
      "source": [
        "##### Copyright 2019 The TensorFlow Authors."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "cellView": "form",
        "colab": {},
        "colab_type": "code",
        "id": "uDcWxmG9kh1Q"
      },
      "outputs": [],
      "source": [
        "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "# https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "32xflLc4NTx-"
      },
      "source": [
        "# Custom Federated Algorithms, Part 1: Introduction to the Federated Core"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "SyXVak0dknQw"
      },
      "source": [
        "\u003ctable class=\"tfo-notebook-buttons\" align=\"left\"\u003e\n",
        "  \u003ctd\u003e\n",
        "    \u003ca target=\"_blank\" href=\"https://www.tensorflow.org/federated/tutorials/custom_federated_algorithms_1\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" /\u003eView on TensorFlow.org\u003c/a\u003e\n",
        "  \u003c/td\u003e\n",
        "  \u003ctd\u003e\n",
        "    \u003ca target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/federated/blob/v0.8.0/docs/tutorials/custom_federated_algorithms_1.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" /\u003eRun in Google Colab\u003c/a\u003e\n",
        "  \u003c/td\u003e\n",
        "  \u003ctd\u003e\n",
        "    \u003ca target=\"_blank\" href=\"https://github.com/tensorflow/federated/blob/v0.8.0/docs/tutorials/custom_federated_algorithms_1.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" /\u003eView source on GitHub\u003c/a\u003e\n",
        "  \u003c/td\u003e\n",
        "\u003c/table\u003e"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "_igJ2sfaNWS8"
      },
      "source": [
        "This tutorial is the first part of a two-part series that demonstrates how to\n",
        "implement custom types of federated algorithms in TensorFlow Federated (TFF)\n",
        "using the [Federated Core (FC)](../federated_core.md) - a set of lower-level\n",
        "interfaces that serve as a foundation upon which we have implemented the\n",
        "[Federated Learning (FL)](../federated_learning.md) layer.\n",
        "\n",
        "This first part is more conceptual; we introduce some of the key concepts and\n",
        "programming abstractions used in TFF, and we demonstrate their use on a very\n",
        "simple example with a distributed array of temperature sensors. In\n",
        "[the second part of this series](custom_federated_alrgorithms_2.ipynb), we use\n",
        "the mechanisms we introduce here to implement a simple version of federated\n",
        "training and evaluation algorithms. As a follow-up, we encourage you to study\n",
        "[the implementation](https://github.com/tensorflow/federated/blob/master/tensorflow_federated/python/learning/federated_averaging.py)\n",
        "of federated averaging in `tff.learning`.\n",
        "\n",
        "By the end of this series, you should be able to recognize that the applications\n",
        "of Federated Core are not necessarily limited to learning. The programming\n",
        "abstractions we offer are quite generic, and could be used, e.g., to implement\n",
        "analytics and other custom types of computations over distributed data.\n",
        "\n",
        "Although this tutorial is designed to be self-contained, we encourage you to\n",
        "first read tutorials on\n",
        "[image classification](federated_learning_for_image_classification.ipynb) and\n",
        "[text generation](federated_learning_for_text_generation.ipynb) for a\n",
        "higher-level and more gentle introduction to the TensorFlow Federated framework\n",
        "and the [Federated Learning](../federated_learning.md) APIs (`tff.learning`), as\n",
        "it will help you put the concepts we describe here in context."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "09FT9ertw8KP"
      },
      "source": [
        "## Intended Uses\n",
        "\n",
        "In a nutshell, Federated Core (FC) is a development environment that makes it\n",
        "possible to compactly express program logic that combines TensorFlow code with\n",
        "distributed communication operators, such as those that are used in\n",
        "[Federated Averaging](https://arxiv.org/abs/1602.05629) - computing\n",
        "distributed sums, averages, and other types of distributed aggregations over a\n",
        "set of client devices in the system, broadcasting models and parameters to those\n",
        "devices, etc.\n",
        "\n",
        "You may be aware of\n",
        "[`tf.contrib.distribute`](https://www.tensorflow.org/api_docs/python/tf/contrib/distribute),\n",
        "and a natural question to ask at this point may be: in what ways does this\n",
        "framework differ? Both frameworks attempt at making TensorFlow computations\n",
        "distributed, after all.\n",
        "\n",
        "One way to think about it is that, whereas the stated goal of\n",
        "`tf.contrib.distribute` is *to allow users to use existing models and training\n",
        "code with minimal changes to enable distributed training*, and much focus is on\n",
        "how to take advantage of distributed infrastructure to make existing training\n",
        "code more efficient, the goal of TFF's Federated Core is to give researchers and\n",
        "practitioners explicit control over the specific patterns of distributed\n",
        "communication they will use in their systems. The focus in FC is on providing a\n",
        "flexible and extensible language for expressing distributed data flow\n",
        "algorithms, rather than a concrete set of implemented distributed training\n",
        "capabilities.\n",
        "\n",
        "One of the primary target audiences for TFF's FC API is researchers and\n",
        "practitioners who might want to experiment with new federated learning\n",
        "algorithms and evaluate the consequences of subtle design choices that affect\n",
        "the manner in which the flow of data in the distributed system is orchestrated,\n",
        "yet without getting bogged down by system implementation details. The level of\n",
        "abstraction that FC API is aiming for roughly corresponds to pseudocode one\n",
        "could use to describe the mechanics of a federated learning algorithm in a\n",
        "research publication - what data exists in the system and how it is transformed,\n",
        "but without dropping to the level of individual point-to-point network message\n",
        "exchanges.\n",
        "\n",
        "TFF as a whole is targeting scenarios in which data is distributed, and must\n",
        "remain such, e.g., for privacy reasons, and where collecting all data at a\n",
        "centralized location may not be a viable option. This has implication on the\n",
        "implementation of machine learning algorithms that require an increased degree\n",
        "of explicit control, as compared to scenarios in which all data can be\n",
        "accumulated in a centralized location at a data center."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "cuJuLEh2TfZG"
      },
      "source": [
        "## Before we start\n",
        "\n",
        "Before we dive into the code, please try to run the following \"Hello World\"\n",
        "example to make sure your environment is correctly setup. If it doesn't work,\n",
        "please refer to the [Installation](../install.md) guide for instructions."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "Ary-OZz5jMJI"
      },
      "outputs": [],
      "source": [
        "#@test {\"skip\": true}\n",
        "\n",
        "# NOTE: If you are running a Jupyter notebook, and installing a locally built\n",
        "# pip package, you may need to edit the following to point to the '.whl' file\n",
        "# on your local filesystem.\n",
        "\n",
        "!pip install --quiet --upgrade tensorflow_federated\n",
        "!pip install --quiet --upgrade tf-nightly"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "-skNC6aovM46"
      },
      "outputs": [],
      "source": [
        "from __future__ import absolute_import, division, print_function\n",
        "\n",
        "import collections\n",
        "\n",
        "import numpy as np\n",
        "from six.moves import range\n",
        "import tensorflow as tf\n",
        "import tensorflow_federated as tff\n",
        "\n",
        "tf.enable_resource_variables()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "okHp5z7ekFoc"
      },
      "outputs": [],
      "source": [
        "@tff.federated_computation\n",
        "def hello_world():\n",
        "  return 'Hello, World!'\n",
        "\n",
        "\n",
        "hello_world()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "9xX97PJwaBLf"
      },
      "source": [
        "## Federated data\n",
        "\n",
        "One of the distinguishing features of TFF is that it allows you to compactly\n",
        "express TensorFlow-based computations on *federated data*. We will be using the\n",
        "term *federated data* in this tutorial to refer to a collection of data items\n",
        "hosted across a group of devices in a distributed system. For example,\n",
        "applications running on mobile devices may collect data and store it locally,\n",
        "without uploading to a centralized location. Or, an array of distributed sensors\n",
        "may collect and store temperature readings at their locations.\n",
        "\n",
        "Federated data like those in the above examples are treated in TFF as\n",
        "[first-class citizens](https://en.wikipedia.org/wiki/First-class_citizen), i.e.,\n",
        "they may appear as parameters and results of functions, and they have types. To\n",
        "reinforce this notion, we will refer to federated data sets as *federated\n",
        "values*, or as *values of federated types*.\n",
        "\n",
        "The important point to understand is that we are modeling the entire collection\n",
        "of data items across all devices (e.g., the entire collection temperature\n",
        "readings from all sensors in a distributed array) as a single federated value.\n",
        "\n",
        "For example, here's how one would define in TFF the type of a *federated float*\n",
        "hosted by a group of client devices. A collection of temperature readings that\n",
        "materialize across an array of distributed sensors could be modeled as a value\n",
        "of this federated type."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "COe0tLPPtTbe"
      },
      "outputs": [],
      "source": [
        "federated_float_on_clients = tff.FederatedType(tf.float32, tff.CLIENTS)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "iCAMsF_T8p63"
      },
      "source": [
        "More generally, a federated type in TFF is defined by specifying the type `T` of\n",
        "its *member constituents* - the items of data that reside on individual devices,\n",
        "and the group `G` of devices on which federated values of this type are hosted\n",
        "(plus a third, optional bit of information we'll mention shortly). We refer to\n",
        "the group `G` of devices hosting a federated value as the value's *placement*.\n",
        "Thus, `tff.CLIENTS` is an example of a placement."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "zFVZQwUZ_nbt"
      },
      "outputs": [],
      "source": [
        "str(federated_float_on_clients.member)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "eTK00mVb_qi7"
      },
      "outputs": [],
      "source": [
        "str(federated_float_on_clients.placement)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "Q6dp3OHVW_2Q"
      },
      "source": [
        "A federated type with member constituents `T` and placement `G` can be\n",
        "represented compactly as `{T}@G`, as shown below."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "eR-9cP219brl"
      },
      "outputs": [],
      "source": [
        "str(federated_float_on_clients)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "9kn1logOGtBI"
      },
      "source": [
        "The curly braces `{}` in this concise notation serve as a reminder that the\n",
        "member constituents (items of data on different devices) may differ, as you\n",
        "would expect e.g., of temperature sensor readings, so the clients as a group are\n",
        "jointly hosting a [multi-set](https://en.wikipedia.org/wiki/Multiset) of\n",
        "`T`-typed items that together constitute the federated value.\n",
        "\n",
        "It is important to note that the member constituents of a federated value are\n",
        "generally opaque to the programmer, i.e., a federated value should not be\n",
        "thought of as a simple `dict` keyed by an identifier of a device in the system -\n",
        "these values are intended to be collectively transformed only by *federated\n",
        "operators* that abstractly represent various kinds of distributed communication\n",
        "protocols (such as aggregation). If this sounds too abstract, don't worry - we\n",
        "will return to this shortly, and we will illustrate it with concrete examples.\n",
        "\n",
        "Federated types in TFF come in two flavors: those where the member constituents\n",
        "of a federated value may differ (as just seen above), and those where they are\n",
        "known to be all equal. This is controlled by the third, optional `all_equal`\n",
        "parameter in the `tff.FederatedType` constructor (defaulting to `False`)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "wenF_FnGivCZ"
      },
      "outputs": [],
      "source": [
        "federated_float_on_clients.all_equal"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "6wxL6UAkittF"
      },
      "source": [
        "A federated type with a placement `G` in which all of the `T`-typed member\n",
        "constituents are known to be equal can be compactly represented as `T@G` (as\n",
        "opposed to `{T}@G`, that is, with the curly braces dropped to reflect the fact\n",
        "that the multi-set of member constituents consists of a single item)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "ei1pmBEuLWf-"
      },
      "outputs": [],
      "source": [
        "str(tff.FederatedType(tf.float32, tff.CLIENTS, all_equal=True))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "pZ2JlbX6H0h5"
      },
      "source": [
        "One example of a federated value of such type that might arise in practical\n",
        "scenarios is a hyperparameter (such as a learning rate, a clipping norm, etc.)\n",
        "that has been broadcasted by a server to a group of devices that participate in\n",
        "federated training.\n",
        "\n",
        "Another example is a set of parameters for a machine learning model pre-trained\n",
        "at the server, that were then broadcasted to a group of client devices, where\n",
        "they can be personalized for each user.\n",
        "\n",
        "For example, suppose we have a pair of `float32` parameters `a` and `b` for a\n",
        "simple one-dimensional linear regression model. We can construct the\n",
        "(non-federated) type of such models for use in TFF as follows. The angle braces\n",
        "`\u003c\u003e` in the printed type string are a compact TFF notation for named or unnamed\n",
        "tuples."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "noN9mFSN10e6"
      },
      "outputs": [],
      "source": [
        "simple_regression_model_type = (\n",
        "    tff.NamedTupleType([('a', tf.float32), ('b', tf.float32)]))\n",
        "\n",
        "str(simple_regression_model_type)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "ytngzr6r10yn"
      },
      "source": [
        "Note that we are only specifying `dtype`s above. Non-scalar types are also\n",
        "supported. In the above code, `tf.float32` is a shortcut notation for the more\n",
        "general `tff.TensorType(dtype=tf.float32, shape=[])`.\n",
        "\n",
        "When this model is broadcasted to clients, the type of the resulting federated\n",
        "value can be represented as shown below."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "jZxvM1m9OJZc"
      },
      "outputs": [],
      "source": [
        "str(tff.FederatedType(\n",
        "    simple_regression_model_type, tff.CLIENTS, all_equal=True))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "WfnRcX7rNspF"
      },
      "source": [
        "Per symmetry with *federated float* above, we will refer to such a type as a\n",
        "*federated tuple*. More generally, we'll often use the term *federated XYZ* to\n",
        "refer to a federated value in which member constituents are *XYZ*-like. Thus, we\n",
        "will talk about things like *federated tuples*, *federated sequences*,\n",
        "*federated models*, and so on.\n",
        "\n",
        "Now, coming back to `float32@CLIENTS` - while it appears replicated across\n",
        "multiple devices, it is actually a single `float32`, since all member are the\n",
        "same. In general, you may think of any *all-equal* federated type, i.e., one of\n",
        "the form `T@G`, as isomorphic to a non-federated type `T`, since in both cases,\n",
        "there's actually only a single (albeit potentially replicated) item of type `T`.\n",
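        "\n",
        "The isomorphism can be sketched in plain Python (purely illustrative, not the TFF API): an all-equal federated value is just one item replicated across the group, so collapsing it back to that single item loses no information.\n",
        "\n",
        "```python\n",
        "# Illustrative sketch only - NOT the TFF API. An all-equal federated value\n",
        "# (of type T@G) is a single T-typed item replicated across the group.\n",
        "def broadcast(value, num_members):\n",
        "  # One direction of the isomorphism, T -> T@G: replicate a single item.\n",
        "  return [value] * num_members\n",
        "\n",
        "def collapse(all_equal_value):\n",
        "  # The other direction, T@G -> T: all replicas are the same single item.\n",
        "  assert all(v == all_equal_value[0] for v in all_equal_value)\n",
        "  return all_equal_value[0]\n",
        "\n",
        "collapse(broadcast(0.01, 3))  # round-trips back to the original item\n",
        "```\n",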
        "\n",
        "Given the isomorphism between `T` and `T@G`, you may wonder what purpose, if\n",
        "any, the latter types might serve. Read on."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "pUXF8WEQLV26"
      },
      "source": [
        "## Placements\n",
        "\n",
        "### Design Overview\n",
        "\n",
        "In the preceding section, we've introduced the concept of *placements* - groups\n",
        "of system participants that might be jointly hosting a federated value, and\n",
        "we've demonstrated the use of `tff.CLIENTS` as an example specification of a\n",
        "placement.\n",
        "\n",
        "To explain why the notion of a *placement* is so fundamental that we needed to\n",
        "incorporate it into the TFF type system, recall what we mentioned at the\n",
        "beginning of this tutorial about some of the intended uses of TFF.\n",
        "\n",
        "Although in this tutorial, you will only see TFF code being executed locally in\n",
        "a simulated environment, our goal is for TFF to enable writing code that you\n",
        "could deploy for execution on groups of physical devices in a distributed\n",
        "system, potentially including mobile or embedded devices running Android. Each\n",
        "of of those devices would receive a separate set of instructions to execute\n",
        "locally, depending on the role it plays in the system (an end-user device, a\n",
        "centralized coordinator, an intermediate layer in a multi-tier architecture,\n",
        "etc.). It is important to be able to reason about which subsets of devices\n",
        "execute what code, and where different portions of the data might physically\n",
        "materialize.\n",
        "\n",
        "This is especially important when dealing with, e.g., application data on mobile\n",
        "devices. Since the data is private and can be sensitive, we need the ability to\n",
        "statically verify that this data will never leave the device (and prove facts\n",
        "about how the data is being processed). The placement specifications are one of\n",
        "the mechanisms designed to support this.\n",
        "\n",
        "TFF has been designed as a data-centric programming environment, and as such,\n",
        "unlike some of the existing frameworks that focus on *operations* and where\n",
        "those operations might *run*, TFF focuses on *data*, where that data\n",
        "*materializes*, and how it's being *transformed*. Consequently, placement is\n",
        "modeled as a property of data in TFF, rather than as a property of operations on\n",
        "data. Indeed, as you're about to see in the next section, some of the TFF\n",
        "operations span across locations, and run \"in the network\", so to speak, rather\n",
        "than being executed by a single machine or a group of machines.\n",
        "\n",
        "Representing the type of a certain value as `T@G` or `{T}@G` (as opposed to just\n",
        "`T`) makes data placement decisions explicit, and together with a static\n",
        "analysis of programs written in TFF, it can serve as a foundation for providing\n",
        "formal privacy guarantees for sensitive on-device data.\n",
        "\n",
        "An important thing to note at this point, however, is that while we encourage\n",
        "TFF users to be explicit about *groups* of participating devices that host the\n",
        "data (the placements), the programmer will never deal with the raw data or\n",
        "identities of the *individual* participants.\n",
        "\n",
        "(NOTE: While it goes far outside the scope of this tutorial, we should mention\n",
        "that there is one notable exception to the above, a `tff.federated_collect`\n",
        "operator that is intended as a low-level primitive, only for specialized\n",
        "situations. Its explicit use in situations where it can be avoided is not\n",
        "recommended, as it may limit the possible future applications. For example, if\n",
        "during the course of static analysis, we determine that a computation uses such\n",
        "low-level mechanisms, we may disallow its access to certain types of data.)\n",
        "\n",
        "Within the body of TFF code, by design, there's no way to enumerate the devices\n",
        "that constitute the group represented by `tff.CLIENTS`, or to probe for the\n",
        "existence of a specific device in the group. There's no concept of a device or\n",
        "client identity anywhere in the Federated Core API, the underlying set of\n",
        "architectural abstractions, or the core runtime infrastructure we provide to\n",
        "support simulations. All the computation logic you write will be expressed as\n",
        "operations on the entire client group.\n",
        "\n",
        "Recall here what we mentioned earlier about values of federated types being\n",
        "unlike Python `dict`, in that one cannot simply enumerate their member\n",
        "constituents. Think of values that your TFF program logic manipulates as being\n",
        "associated with placements (groups), rather than with individual participants.\n",
        "\n",
        "Placements *are* designed to be a first-class citizen in TFF as well, and can\n",
        "appear as parameters and results of a `placement` type (to be represented by\n",
        "`tff.PlacementType` in the API). In the future, we plan to provide a variety of\n",
        "operators to transform or combine placements, but this is outside the scope of\n",
        "this tutorial. For now, it suffices to think of `placement` as an opaque\n",
        "primitive built-in type in TFF, similar to how `int` and `bool` are opaque\n",
        "built-in types in Python, with `tff.CLIENTS` being a constant literal of this\n",
        "type, not unlike `1` being a constant literal of type `int`.\n",
        "\n",
        "### Specifying Placements\n",
        "\n",
        "TFF provides two basic placement literals, `tff.CLIENTS` and `tff.SERVER`, to\n",
        "make it easy to express the rich variety of practical scenarios that are\n",
        "naturally modeled as client-server architectures, with multiple *client* devices\n",
        "(mobile phones, embedded devices, distributed databases, sensors, etc.)\n",
        "orchestrated by a single centralized *server* coordinator. TFF is designed to\n",
        "also support custom placements, multiple client groups, multi-tiered and other,\n",
        "more general distributed architectures, but discussing them is outside the scope\n",
        "of this tutorial.\n",
        "\n",
        "TFF doesn't prescribe what either `tff.CLIENTS` or `tff.SERVER` actually\n",
        "represents.\n",
        "\n",
        "In particular, `tff.SERVER` may be a single physical device (a member of a\n",
        "singleton group), but it might just as well be a group of replicas in a\n",
        "fault-tolerant cluster running state machine replication - we do not make any\n",
        "special architectural assumptions. Rather, we use the `all_equal` bit mentioned\n",
        "in the preceding section to express the fact that we're generally dealing with\n",
        "only a single item of data at the server.\n",
        "\n",
        "Likewise, `tff.CLIENTS` in some applications might represent all clients in the\n",
        "system - what in the context of federated learning we sometimes refer to as the\n",
        "*population*, but e.g., in\n",
        "[production implementations of Federated Averaging](https://arxiv.org/abs/1602.05629),\n",
        "it may represent a *cohort* - a subset of the clients selected for paticipation\n",
        "in a particular round of training. The abstractly defined placements are given\n",
        "concrete meaning when a computation in which they appear is deployed for\n",
        "execution (or simply invoked like a Python function in a simulated environment,\n",
        "as is demonstrated in this tutorial). In our local simulations, the group of\n",
        "clients is determined by the federated data supplied as input."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "9Lmpr7vpA-3A"
      },
      "source": [
        "## Federated computations\n",
        "\n",
        "### Declaring federated computations\n",
        "\n",
        "TFF is designed as a strongly-typed functional programming environment that\n",
        "supports modular development.\n",
        "\n",
        "The basic unit of composition in TFF is a *federated computation* - a section of\n",
        "logic that may accept federated values as input and return federated values as\n",
        "output. Here's how you can define a computation that calculates the average of\n",
        "the temperatures reported by the sensor array from our previous example."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "g38EkHwGGEUo"
      },
      "outputs": [],
      "source": [
        "@tff.federated_computation(tff.FederatedType(tf.float32, tff.CLIENTS))\n",
        "def get_average_temperature(sensor_readings):\n",
        "  return tff.federated_mean(sensor_readings)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "yjRTFxGxY-AL"
      },
      "source": [
        "Looking at the above code, at this point you might be asking - aren't there\n",
        "already decorator constructs to define composable units such as\n",
        "[`tf.function`](https://www.tensorflow.org/api_docs/python/tf/function)\n",
        "in TensorFlow, and if so, why introduce yet another one, and how is it\n",
        "different?\n",
        "\n",
        "The short answer is that the code generated by the `tff.federated_computation`\n",
        "wrapper is *neither* TensorFlow, *nor is it* Python - it's a specification of a\n",
        "distributed system in an internal platform-independent *glue* language. At this\n",
        "point, this will undoubtedly sound cryptic, but please bear this intuitive\n",
        "interpretation of a federated computation as an abstract specification of a\n",
        "distributed system in mind. We'll explain it in a minute.\n",
        "\n",
        "First, let's play with the definition a bit. TFF computations are generally\n",
        "modeled as functions - with or without parameters, but with well-defined type\n",
        "signatures. You can print the type signature of a computation by querying its\n",
        "`type_signature` property, as shown below."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "o7FmRyQACtZU"
      },
      "outputs": [],
      "source": [
        "str(get_average_temperature.type_signature)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "UCJGl2SFAs7S"
      },
      "source": [
        "The type signature tells us that the computation accepts a collection of\n",
        "different sensor readings on client devices, and returns a single average on the\n",
        "server.\n",
        "\n",
        "Before we go any further, let's reflect on this for a minute - the input and\n",
        "output of this computation are *in different places* (on `CLIENTS` vs. at the\n",
        "`SERVER`). Recall what we said in the preceding section on placements about how\n",
        "*TFF operations may span across locations, and run in the network*, and what we\n",
        "just said about federated computations as representing abstract specifications\n",
        "of distributed systems. We have just a defined one such computation - a simple\n",
        "distributed system in which data is consumed at client devices, and the\n",
        "aggregate results emerge at the server.\n",
        "\n",
        "In many practical scenarios, the computations that represent top-level tasks\n",
        "will tend to accept their inputs and report their outputs at the server - this\n",
        "reflects the idea that computations might be triggered by *queries* that\n",
        "originate and terminate on the server.\n",
        "\n",
        "However, FC API does not impose this assumption, and many of the building blocks\n",
        "we use internally (including numerous `tff.federated_...` operators you may find\n",
        "in the API) have inputs and outputs with distinct placements, so in general, you\n",
        "should not think about a federated computation as something that *runs on the\n",
        "server* or is *executed by a server*. The server is just one type of participant\n",
        "in a federated computation. In thinking about the mechanics of such\n",
        "computations, it's best to always default to the global network-wide\n",
        "perspective, rather than the perspective of a single centralized coordinator.\n",
        "\n",
        "In general, functional type signatures are compactly represented as `(T -\u003e U)`\n",
        "for types `T` and `U` of inputs and outputs, respectively. The type of the\n",
        "formal parameter (such `sensor_readings` in this case) is specified as the\n",
        "argument to the decorator. You don't need to specify the type of the result -\n",
        "it's determined automatically.\n",
        "\n",
        "Although TFF does offer limited forms of polymorphism, programmers are strongly\n",
        "encouraged to be explicit about the types of data they work with, as that makes\n",
        "understanding, debugging, and formally verifying properties of your code easier.\n",
        "In some cases, explicitly specifying types is a requirement (e.g., polymorphic\n",
        "computations are currently not directly executable).\n",
        "\n",
        "### Executing federated computations\n",
        "\n",
        "In order to support development and debugging, TFF allows you to directly invoke\n",
        "computations defined this way as Python functions, as shown below. Where the\n",
        "computation expects a value of a federated type with the `all_equal` bit set to\n",
        "`False`, you can feed it as a plain `list` in Python, and for federated types\n",
        "with the `all_equal` bit set to `True`, you can just directly feed the (single)\n",
        "member constituent. This is also how the results are reported back to you."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "HMDW-7U1aREW"
      },
      "outputs": [],
      "source": [
        "get_average_temperature([68.5, 70.3, 69.8])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "XsTKl4OIBUSH"
      },
      "source": [
        "When running computations like this in simulation mode, you act as an external\n",
        "observer with a system-wide view, who has the ability to supply inputs and\n",
        "consume outputs at any locations in the network, as indeed is the case here -\n",
        "you supplied client values at input, and consumed the server result.\n",
        "\n",
        "Now, let's return to a note we made earlier about the\n",
        "`tff.federated_computation` decorator emitting code in a *glue* language.\n",
        "Although the logic of TFF computations can be expressed as ordinary functions in\n",
        "Python (you just need to decorate them with `tff.federated_computation` as we've\n",
        "done above), and you can directly invoke them with Python arguments just\n",
        "like any other Python functions in this notebook, behind the scenes, as we noted\n",
        "earlier, TFF computations are actually *not* Python.\n",
        "\n",
        "What we mean by this is that when the Python interpreter encounters a function\n",
        "decorated with `tff.federated_computation`, it traces the statements in this\n",
        "function's body once (at definition time), and then constructs a\n",
        "[serialized representation](https://github.com/tensorflow/federated/blob/master/tensorflow_federated/proto/v0/computation.proto)\n",
        "of the computation's logic for future use - whether for execution, or to be\n",
        "incorporated as a sub-component into another computation.\n",
        "\n",
        "You can verify this by adding a print statement, as follows:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "6gvzd1vwp8sG"
      },
      "outputs": [],
      "source": [
        "@tff.federated_computation(tff.FederatedType(tf.float32, tff.CLIENTS))\n",
        "def get_average_temperature(sensor_readings):\n",
        "\n",
        "  print ('Getting traced, the argument is \"{}\".'.format(\n",
        "      type(sensor_readings).__name__))\n",
        "\n",
        "  return tff.federated_mean(sensor_readings)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "hMJdv8Fip7Rv"
      },
      "source": [
        "You can think of Python code that defines a federated computation similarly to\n",
        "how you would think of Python code that builds a TensorFlow graph in a non-eager\n",
        "context (if you're not familiar with the non-eager uses of TensorFlow, think of\n",
        "your Python code defining a graph of operations to be executed later, but not\n",
        "actually running them on the fly). The non-eager graph-building code in\n",
        "TensorFlow is Python, but the TensorFlow graph constructed by this code is\n",
        "platform-independent and serializable.\n",
        "\n",
        "Likewise, TFF computations are defined in Python, but the Python statements in\n",
        "their bodies, such as `tff.federated_mean` in the example weve just shown,\n",
        "are compiled into a portable and platform-independent serializable\n",
        "representation under the hood.\n",
        "\n",
        "As a developer, you don't need to concern yourself with the details of this\n",
        "representation, as you will never need to directly work with it, but you should\n",
        "be aware of its existence, the fact that TFF computations are fundamentally\n",
        "non-eager, and cannot capture arbitrary Python state. Python code contained in a\n",
        "TFF computation's body is executed at definition time, when the body of the\n",
        "Python function decorated with `tff.federated_computation` is traced before\n",
        "getting serialized. It's not retraced again at invocation time (except when the\n",
        "function is polymorphic; please refer to the documentation pages for details).\n",
        "\n",
        "You may wonder why we've chosen to introduce a dedicated internal non-Python\n",
        "representation. One reason is that ultimately, TFF computations are intended to\n",
        "be deployable to real physical environments, and hosted on mobile or embedded\n",
        "devices, where Python may not be available.\n",
        "\n",
        "Another reason is that TFF computations express the global behavior of\n",
        "distributed systems, as opposed to Python programs which express the local\n",
        "behavior of individual participants. You can see that in the simple example\n",
        "above, with the special operator `tff.federated_mean` that accepts data on\n",
        "client devices, but deposits the results on the server.\n",
        "\n",
        "The operator `tff.federated_mean` cannot be easily modeled as an ordinary\n",
        "operator in Python, since it doesn't execute locally - as noted earlier, it\n",
        "represents a distributed system that coordinates the behavior of multiple system\n",
        "participants. We will refer to such operators as *federated operators*, to\n",
        "distinguish them from ordinary (local) operators in Python.\n",
        "\n",
        "The TFF type system, and the fundamental set of operations supported in the TFF's\n",
        "language, thus deviates significantly from those in Python, necessitating the\n",
        "use of a dedicated representation.\n",
        "\n",
        "### Composing federated computations\n",
        "\n",
        "As noted above, federated computations and their constituents are best\n",
        "understood as models of distributed systems, and you can think of composing\n",
        "federated computations as composing more complex distributed systems from\n",
        "simpler ones. You can think of the `tff.federated_mean` operator as a kind of\n",
        "built-in template federated computation with a type signature `({T}@CLIENTS -\u003e\n",
        "T@SERVER)` (indeed, just like computations you write, this operator also has a\n",
        "complex structure - under the hood we break it down into simpler operators).\n",
        "\n",
        "The same is true of composing federated computations. The computation\n",
        "`get_average_temperature` may be invoked in a body of another Python function\n",
        "decorated with `tff.federated_computation` - doing so will cause it to be\n",
        "embedded in the body of the parent, much in the same way `tff.federated_mean`\n",
        "was embedded in its own body earlier.\n",
        "\n",
        "An important restriction to be aware of is that bodies of Python functions\n",
        "decorated with `tff.federated_computation` must consist *only* of federated\n",
        "operators, i.e., they cannot directly contain TensorFlow operations. For\n",
        "example, you cannot directly use `tf.nest` interfaces to add a pair of\n",
        "federated values. TensorFlow code must be confined to blocks of code decorated\n",
        "with a `tff.tf_computation` discussed in the following section. Only when\n",
        "wrapped in this manner can the wrapped TensorFlow code be invoked in the body of\n",
        "a `tff.federated_computation`.\n",
        "\n",
        "The reasons for this separation are technical (it's hard to trick operators such\n",
        "as `tf.add` to work with non-tensors) as well as architectural. The language of\n",
        "federated computations (i.e., the logic constructed from serialized bodies of\n",
        "Python functions decorated with `tff.federated_computation`) is designed to\n",
        "serve as a platform-independent *glue* language. This glue language is currently\n",
        "used to build distributed systems from embedded sections of TensorFlow code\n",
        "(confined to `tff.tf_computation` blocks). In the fullness of time, we\n",
        "anticipate the need to embed sections of other, non-TensorFlow logic, such as\n",
        "relational database queries that might represent input pipelines, all connected\n",
        "together using the same glue language (the `tff.federated_computation` blocks)."
      ]
    },
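    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "ComposeSketchMd"
      },
      "source": [
        "To make this composition concrete, here's a minimal sketch (the wrapper name\n",
        "`wrapped_average` is ours, invented for illustration) that embeds\n",
        "`get_average_temperature` in the body of another federated computation."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "ComposeSketchCode"
      },
      "outputs": [],
      "source": [
        "# A minimal sketch: embedding one federated computation in another.\n",
        "# The name `wrapped_average` is hypothetical, chosen for this example.\n",
        "@tff.federated_computation(tff.FederatedType(tf.float32, tff.CLIENTS))\n",
        "def wrapped_average(sensor_readings):\n",
        "  # `get_average_temperature` is embedded here, much like `tff.federated_mean`\n",
        "  # was embedded in its own body earlier.\n",
        "  return get_average_temperature(sensor_readings)\n",
        "\n",
        "str(wrapped_average.type_signature)"
      ]
    },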
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "RR4EOrl4errh"
      },
      "source": [
        "## TensorFlow logic\n",
        "\n",
        "### Declaring TensorFlow computations\n",
        "\n",
        "TFF is designed for use with TensorFlow. As such, the bulk of the code you will\n",
        "write in TFF is likely to be ordinary (i.e., locally-executing) TensorFlow code.\n",
        "In order to use such code with TFF, as noted above, it just needs to be\n",
        "decorated with `tff.tf_computation`.\n",
        "\n",
        "For example, here's how we could implement a function that takes a number and\n",
        "adds `0.5` to it."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "dpdAqMcygnmr"
      },
      "outputs": [],
      "source": [
        "@tff.tf_computation(tf.float32)\n",
        "def add_half(x):\n",
        "  return tf.add(x, 0.5)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "cXGeOvyTdyix"
      },
      "source": [
        "Once again, looking at this, you may be wondering why we should define another\n",
        "decorator `tff.tf_computation` instead of simply using an existing mechanism\n",
        "such as `tf.function`. Unlike in the preceding section, here we are\n",
        "dealing with an ordinary block of TensorFlow code.\n",
        "\n",
        "There are a few reasons for this, the full treatment of which goes beyond the\n",
        "scope of this tutorial, but it's worth naming the main two:\n",
        "\n",
        "*   In order to embed reusable building blocks implemented using TensorFlow code\n",
        "    in the bodies of federated computations, they need to satisfy certain\n",
        "    properties - such as getting traced and serialized at definition time,\n",
        "    having type signatures, etc. This generally requires some form of a\n",
        "    decorator.\n",
        "\n",
        "*   In addition, TFF needs the ability for computations to be able to accept\n",
        "    data streams (represented as `tf.data.Dataset`s), such as streams of\n",
        "    training example batches in machine learning applications, as either inputs\n",
        "    or outputs. This capability currently does not exist in TensorFlow; the\n",
        "    `tff.tf_computation` decorator offers partial (and for now still\n",
        "    experimental) support for it.\n",
        "\n",
        "In general, we recommend using TensorFlow's native mechanisms for composition,\n",
        "such as `tf.function`, wherever possible, as the exact manner in\n",
        "which TFF's decorator interacts with eager functions can be expected to evolve.\n",
        "\n",
        "Now, coming back to the example code snippet above, the computation `add_half`\n",
        "we just defined can be treated by TFF just like any other TFF computation. In\n",
        "particular, it has a TFF type signature."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "93UdxrpgkHgj"
      },
      "outputs": [],
      "source": [
        "str(add_half.type_signature)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "xpiERRtQlBKq"
      },
      "source": [
        "Note this type signature does not have placements. TensorFlow computations\n",
        "cannot consume or return federated types.\n",
        "\n",
        "You can now also use `add_half` as a building block in other computations . For\n",
        "example, here's how you can use the `tff.federated_map` operator to apply\n",
        "`add_half` pointwise to all member constituents of a federated float on client\n",
        "devices."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "z08K5UKBlSJP"
      },
      "outputs": [],
      "source": [
        "@tff.federated_computation(tff.FederatedType(tf.float32, tff.CLIENTS))\n",
        "def add_half_on_clients(x):\n",
        "  return tff.federated_map(add_half, x)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "P4wjJgLnlkDW"
      },
      "outputs": [],
      "source": [
        "str(add_half_on_clients.type_signature)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "nfaC3DSAgQWk"
      },
      "source": [
        "### Executing TensorFlow computations\n",
        "\n",
        "Execution of computations defined with `tff.tf_computation` follows the same\n",
        "rules as those we described for `tff.federated_computation`. They can be invoked\n",
        "as ordinary callables in Python, as follows."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "gPsr1oEsl59G"
      },
      "outputs": [],
      "source": [
        "add_half_on_clients([1.0, 3.0, 2.0])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "yuUOSG-9kK8J"
      },
      "source": [
        "Once again, it is worth noting that invoking the computation\n",
        "`add_half_on_clients` in this manner simulates a distirbuted process. Data is\n",
        "consumed on clients, and returned on clients. Indeed, this computation has each\n",
        "client perform a local action. There is no `tff.SERVER` explicitly mentioned in\n",
        "this system (even if in practice, orchestrating such processing might involve\n",
        "one). Think of a computation defined this way as conceptually analogous to the\n",
        "`Map` stage in `MapReduce`.\n",
        "\n",
        "Also, keep in mind that what we said in the preceding section about TFF\n",
        "computations getting serialized at the definition time remains true for\n",
        "`tff.tf_computation` code as well - the Python body of `add_half_on_clients`\n",
        "gets traced once at definition time. On subsequent invocations, TFF uses its\n",
        "serialized representation.\n",
        "\n",
        "The only difference between Python methods decorated with\n",
        "`tff.federated_computation` and those decorated with `tff.tf_computation` is\n",
        "that the latter are serialized as TensorFlow graphs (whereas the former are not\n",
        "allowed to contain TensorFlow code directly embedded in them).\n",
        "\n",
        "Under the hood, each method decorated with `tff.tf_computation` temporarily\n",
        "disables eager execution in order to allow the computation's structure to be\n",
        "captured. While eager execution is locally disabled, you are welcome to use\n",
        "eager TensorFlow, AutoGraph, TensorFlow 2.0 constructs, etc., so long as you\n",
        "write the logic of your computation in a manner such that it can get correctly\n",
        "serialized.\n",
        "\n",
        "For example, the following code will fail:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "gxVu5aeGlPGc"
      },
      "outputs": [],
      "source": [
        "try:\n",
        "\n",
        "  # Eager mode\n",
        "  constant_10 = tf.constant(10.)\n",
        "\n",
        "  @tff.tf_computation(tf.float32)\n",
        "  def add_ten(x):\n",
        "    return x + constant_10\n",
        "\n",
        "except Exception as err:\n",
        "  print (err)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "5KnAdsfylPeA"
      },
      "source": [
        "The above fails because `constant_10` has already been constructed outside of\n",
        "the graph that `tff.tf_computation` constructs internally in the body of\n",
        "`add_ten` during the serialization process.\n",
        "\n",
        "On the other hand, invoking python functions that modify the current graph when\n",
        "called inside a `tff.tf_computation` is fine:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "Y-anTlfWlk2l"
      },
      "outputs": [],
      "source": [
        "def get_constant_10():\n",
        "  return tf.constant(10.)\n",
        "\n",
        "@tff.tf_computation(tf.float32)\n",
        "def add_ten(x):\n",
        "  return x + get_constant_10()\n",
        "\n",
        "add_ten(5.0)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "1Gl2ijcIllOp"
      },
      "source": [
        "Note that the serialization mechanisms in TensorFlow are evolving, and we expect\n",
        "the details of how TFF serializes computations to evolve as well.\n",
        "\n",
        "### Working with `tf.data.Dataset`s\n",
        "\n",
        "As noted earlier, a unique feature of `tff.tf_computation`s is that they allows\n",
        "you to work with `tf.data.Dataset`s defined abstractly as formal parameters by\n",
        "your code. Parameters to be represented in TensorFlow as data sets need to be\n",
        "declared using the `tff.SequenceType` constructor.\n",
        "\n",
        "For example, the type specification `tff.SequenceType(tf.float32)` defines an\n",
        "abstract sequence of float elements in TFF. Sequences can contain either\n",
        "tensors, or complex nested structures (we'll see examples of those later). The\n",
        "concise representation of a sequence of `T`-typed items is `T*`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "oufOPP5DrUud"
      },
      "outputs": [],
      "source": [
        "float32_sequence = tff.SequenceType(tf.float32)\n",
        "\n",
        "str(float32_sequence)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "pnNQsm2prSPB"
      },
      "source": [
        "Suppose that in our temperature sensor example, each sensor holds not just one\n",
        "temperature reading, but multiple. Here's how you can define a TFF computation\n",
        "in TensorFlow that calculates the average of temperatures in a single local data\n",
        "set using the `tf.data.Dataset.reduce` operator."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "cw0nen-D0Ks8"
      },
      "outputs": [],
      "source": [
        "@tff.tf_computation(tff.SequenceType(tf.float32))\n",
        "def get_local_temperature_average(local_temperatures):\n",
        "  sum_and_count = (\n",
        "      local_temperatures.reduce((0.0, 0), lambda x, y: (x[0] + y, x[1] + 1)))\n",
        "  return sum_and_count[0] / tf.cast(sum_and_count[1], tf.float32)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "wT0V9sJlyqKE"
      },
      "outputs": [],
      "source": [
        "str(get_local_temperature_average.type_signature)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "olZkwEVl2ORH"
      },
      "source": [
        "In the body of a method decorated with `tff.tf_computation`, formal parameters\n",
        "of a TFF sequence type are represented simply as objects that behave like\n",
        "`tf.data.Dataset`, i.e., support the same properties and methods (they are\n",
        "currently not implemented as subclasses of that type - this may change as the\n",
        "support for data sets in TensorFlow evolves).\n",
        "\n",
        "You can easily verify this as follows."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "_W2tBQxz2wmV"
      },
      "outputs": [],
      "source": [
        "@tff.tf_computation(tff.SequenceType(tf.int32))\n",
        "def foo(x):\n",
        "  return x.reduce(np.int32(0), lambda x, y: x + y)\n",
        "\n",
        "foo([1, 2, 3])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "k1N5mbpF2tEI"
      },
      "source": [
        "Keep in mind that unlike ordinary `tf.data.Dataset`s, these dataset-like objects\n",
        "are placeholders. They don't contain any elements, since they represent abstract\n",
        "sequence-typed parameters, to be bound to concrete data when used in a concrete\n",
        "context. Support for abstractly-defined placeholder data sets is still somewhat\n",
        "limited at this point, and in the early days of TFF, you may encounter certain\n",
        "restrictions, but we won't need to worry about them in this tutorial (please\n",
        "refer to the documentation pages for details).\n",
        "\n",
        "When locally executing a computation that accepts a sequence in a simulation\n",
        "mode, such as in this tutorial, you can feed the sequence as Python list, as\n",
        "below (as well as in other ways, e.g., as a `tf.data.Dataset` in eager mode, but\n",
        "for now, we'll keep it simple)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "JyNIc79DyuKK"
      },
      "outputs": [],
      "source": [
        "get_local_temperature_average([68.5, 70.3, 69.8])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "Zmsi59JSr-PA"
      },
      "source": [
        "Like all other TFF types, sequences like those defined above can use the\n",
        "`tff.NamedTupleType` constructor to define nested structures. For example,\n",
        "here's how one could declare a computation that accepts a sequence of pairs `A`,\n",
        "`B`, and returns the sum of their products. We include the tracing statements in\n",
        "the body of the computation so that you can see how the TFF type signature\n",
        "translates into the dataset's `output_types` and `output_shapes`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "ySQfOfm5sPjl"
      },
      "outputs": [],
      "source": [
        "@tff.tf_computation(tff.SequenceType(collections.OrderedDict([('A', tf.int32), ('B', tf.int32)])))\n",
        "def foo(ds):\n",
        "  print ('output_types = {}, shapes = {}'.format(\n",
        "      tf.compat.v1.data.get_output_types(ds),\n",
        "      tf.compat.v1.data.get_output_shapes(ds)))\n",
        "  return ds.reduce(np.int32(0), lambda total, x: total + x['A'] * x['B'])"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "krw5R3ilsvU9"
      },
      "outputs": [],
      "source": [
        "str(foo.type_signature)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "fYd7CPlYsyY9"
      },
      "outputs": [],
      "source": [
        "foo([{'A': 2, 'B': 3}, {'A': 4, 'B': 5}])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "Whd5_olh4hxH"
      },
      "source": [
        "The support for using `tf.data.Datasets` as formal parameters is still somewhat\n",
        "limited and evolving, although functional in simple scenarios such as those used\n",
        "in this tutorial.\n",
        "\n",
        "## Putting it all together\n",
        "\n",
        "Now, let's try again to use our TensorFlow computation in a federated setting.\n",
        "Suppose we have a group of sensors that each have a local sequence of\n",
        "temperature readings. We can compute the global temperature average by averaging\n",
        "the sensors' local averages as follows."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "hZIE1kl340at"
      },
      "outputs": [],
      "source": [
        "@tff.federated_computation(\n",
        "    tff.FederatedType(tff.SequenceType(tf.float32), tff.CLIENTS))\n",
        "def get_global_temperature_average(sensor_readings):\n",
        "  return tff.federated_mean(\n",
        "      tff.federated_map(get_local_temperature_average, sensor_readings))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "RfC3LePY5pUX"
      },
      "source": [
        "Note that this isn't a simple average across all local temperature readings from\n",
        "all clients, as that would require weighing contributions from different clients\n",
        "by the number of readings they locally maintain. We leave it as an exercise for\n",
        "the reader to update the above code; the `tff.federated_mean` operator\n",
        "accepts the weight as an optional second argument (expected to be a federated\n",
        "float).\n",
        "\n",
        "Also note that the input to `get_global_temperature_average` now becomes a\n",
        "*federated int sequence*. Federated sequences is how we will typically represent\n",
        "on-device data in federated learning, with sequence elements typically\n",
        "representing data batches (you will see examples of this shortly)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "SL8-jcqo5krW"
      },
      "outputs": [],
      "source": [
        "str(get_global_temperature_average.type_signature)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "RNeQOXA36F4P"
      },
      "source": [
        "Here's how we can locally execute the computation on a sample of data in Python.\n",
        "Notice that the way we supply the input is now as a `list` of `list`s. The outer\n",
        "list iterates over the devices in the group represented by `tff.CLIENTS`, and\n",
        "the inner ones iterate over elements in each device's local sequence."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "vMzuaF5p6fDJ"
      },
      "outputs": [],
      "source": [
        "get_global_temperature_average([[68.0, 70.0], [71.0], [68.0, 72.0, 70.0]])"
      ]
    },
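    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "WeightedSketchMd"
      },
      "source": [
        "As one possible sketch of the weighting exercise above (the helper names are\n",
        "ours), each client's weight can be its local count of readings, computed with a\n",
        "`tff.tf_computation` and passed as the optional second argument to\n",
        "`tff.federated_mean`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 0,
      "metadata": {
        "colab": {},
        "colab_type": "code",
        "id": "WeightedSketchCode"
      },
      "outputs": [],
      "source": [
        "# A hedged sketch of the weighted version; other solutions are possible.\n",
        "@tff.tf_computation(tff.SequenceType(tf.float32))\n",
        "def get_local_count(local_temperatures):\n",
        "  # Count the readings in the local sequence to serve as a float32 weight.\n",
        "  return local_temperatures.reduce(0.0, lambda n, _: n + 1.0)\n",
        "\n",
        "@tff.federated_computation(\n",
        "    tff.FederatedType(tff.SequenceType(tf.float32), tff.CLIENTS))\n",
        "def get_weighted_global_temperature_average(sensor_readings):\n",
        "  return tff.federated_mean(\n",
        "      tff.federated_map(get_local_temperature_average, sensor_readings),\n",
        "      tff.federated_map(get_local_count, sensor_readings))\n",
        "\n",
        "get_weighted_global_temperature_average([[68.0, 70.0], [71.0], [68.0, 72.0, 70.0]])"
      ]
    },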
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "TBjWB-yftWVc"
      },
      "source": [
        "This concludes the first part of the tutorial... we encourage you to continue on\n",
        "to the [second part](custom_federated_algorithms_2.ipynb)."
      ]
    }
  ],
  "metadata": {
    "colab": {
      "collapsed_sections": [],
      "last_runtime": {
        "build_target": "",
        "kind": "local"
      },
      "name": "Custom Federated Algorithms, Part 1: Introduction to the Federated Core",
      "provenance": [],
      "toc_visible": true,
      "version": "0.3.2"
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
