{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Model export and deployment tutorial\n",
    "\n",
    "Tribuo works best as a library which provides training and deployment inside the JVM where the application is running; however, sometimes you need to deploy models elsewhere, either in another programming environment like Python, or in a cloud service. To support these use cases, many of Tribuo's models can be exported as [ONNX](https://onnx.ai) models, a cross-platform model exchange format. ONNX is widely supported across industry, for edge devices, hardware accelerators, and cloud services. Tribuo also supports loading in ONNX models and scoring them as native Tribuo models; for more information on that, see the external models tutorial.\n",
    "\n",
    "This tutorial will show how to export models in ONNX format, how to recover the provenance information from Tribuo-exported ONNX models, and how to deploy an ONNX model in [OCI Data Science](https://www.oracle.com/data-science/cloud-infrastructure-data-science.html), though of course other cloud providers support ONNX models too. We'll show how to export a factorization machine, build and export an ensemble containing that factorization machine along with some other models, discuss how to interact with the provenance of an exported model, and conclude by deploying that model to OCI.\n",
    "\n",
    "## Setup\n",
    "\n",
    "This tutorial requires ONNX Runtime to score the exported models, so by default it will only run on x86\\_64 platforms. ONNX Runtime can be compiled on ARM64 platforms, but that binary is not in the Maven Central jar Tribuo depends on, so it will need to be compiled from source to run this tutorial on ARM.\n",
    "\n",
    "We're going to use MNIST as the example dataset for this tutorial, so you'll need to download it if you haven't already.\n",
    "\n",
    "First the training set:\n",
    "\n",
    "`wget http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz`\n",
    "\n",
    "`wget http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz`\n",
    "\n",
    "Then the test set:\n",
    "\n",
    "`wget http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz`\n",
    "\n",
    "`wget http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz`\n",
    "\n",
    "As usual we'll load in some jars for classification problems, along with Tribuo's ONNX Runtime and OCI interfaces."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "%jars ./tribuo-classification-experiments-4.3.0-jar-with-dependencies.jar\n",
    "%jars ./tribuo-oci-4.3.0-jar-with-dependencies.jar\n",
    "%jars ./tribuo-onnx-4.3.0-jar-with-dependencies.jar\n",
    "%jars ./tribuo-json-4.3.0-jar-with-dependencies.jar"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "import java.nio.file.Files;\n",
    "import java.nio.file.Paths;\n",
    "\n",
    "import org.tribuo.*;\n",
    "import org.tribuo.classification.*;\n",
    "import org.tribuo.classification.ensemble.*;\n",
    "import org.tribuo.classification.evaluation.*;\n",
    "import org.tribuo.classification.sgd.fm.FMClassificationTrainer;\n",
    "import org.tribuo.classification.sgd.linear.*;\n",
    "import org.tribuo.classification.sgd.objectives.LogMulticlass;\n",
    "import org.tribuo.ensemble.*;\n",
    "import org.tribuo.data.csv.CSVLoader;\n",
    "import org.tribuo.datasource.IDXDataSource;\n",
    "import org.tribuo.evaluation.TrainTestSplitter;\n",
    "import org.tribuo.interop.onnx.*;\n",
    "import org.tribuo.math.optimisers.*;\n",
    "import org.tribuo.interop.oci.*;\n",
    "import org.tribuo.util.onnx.*;\n",
    "import org.tribuo.util.Util;\n",
    "import com.oracle.bmc.ConfigFileReader;\n",
    "import com.oracle.bmc.auth.ConfigFileAuthenticationDetailsProvider;\n",
    "import com.oracle.bmc.datascience.DataScienceClient;\n",
    "import com.oracle.labs.mlrg.olcut.provenance.ProvenanceUtil;\n",
    "import com.oracle.labs.mlrg.olcut.util.Pair;\n",
    "\n",
    "import ai.onnxruntime.*;"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Then we'll load in MNIST."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "MNIST train size = 60000, number of features = 717, number of classes = 10\n",
      "MNIST test size = 10000, number of features = 668, number of classes = 10\n"
     ]
    }
   ],
   "source": [
    "var labelFactory = new LabelFactory();\n",
    "var labelEvaluator = new LabelEvaluator();\n",
    "var mnistTrainSource = new IDXDataSource<>(Paths.get(\"train-images-idx3-ubyte.gz\"),Paths.get(\"train-labels-idx1-ubyte.gz\"),labelFactory);\n",
    "var mnistTestSource = new IDXDataSource<>(Paths.get(\"t10k-images-idx3-ubyte.gz\"),Paths.get(\"t10k-labels-idx1-ubyte.gz\"),labelFactory);\n",
    "var mnistTrain = new MutableDataset<>(mnistTrainSource);\n",
    "var mnistTest = new MutableDataset<>(mnistTestSource);\n",
    "System.out.println(String.format(\"MNIST train size = %d, number of features = %d, number of classes = %d\",mnistTrain.size(),mnistTrain.getFeatureMap().size(),mnistTrain.getOutputInfo().size()));\n",
    "System.out.println(String.format(\"MNIST test size = %d, number of features = %d, number of classes = %d\",mnistTest.size(),mnistTest.getFeatureMap().size(),mnistTest.getOutputInfo().size()));"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Exporting a single classification model\n",
    "\n",
    "We're going to train a multi-class [Factorization Machine](https://ieeexplore.ieee.org/document/5694074), a non-linear model which approximates all the non-linear feature interactions with a small per-feature embedding vector. It's similar to a logistic regression with an additional feature-feature interaction term, one per output label. In Tribuo, factorization machines are trained using stochastic gradient descent, with the same standard SGD algorithms Tribuo uses for other models. We're going to use AdaGrad as it's usually a good baseline."
   ]
  },
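  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Concretely, the score a factorization machine assigns to each output label for an input $x$ takes the standard form (one set of parameters per label):\n",
    "\n",
    "$$s(x) = b + \\sum_i w_i x_i + \\sum_{i < j} \\langle v_i, v_j \\rangle x_i x_j$$\n",
    "\n",
    "where each feature $i$ has a linear weight $w_i$ and a small embedding vector $v_i$ (of the factor size, 6 below), so the inner products $\\langle v_i, v_j \\rangle$ approximate the pairwise feature interactions without storing a full interaction matrix."
   ]
  },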
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "var fmLabelTrainer = new FMClassificationTrainer(new LogMulticlass(),  // Loss function\n",
    "                                                 new AdaGrad(0.1,0.1), // Gradient optimiser\n",
    "                                                 5,                    // Number of training epochs\n",
    "                                                 30000,                // Logging interval\n",
    "                                                 Trainer.DEFAULT_SEED, // RNG seed\n",
    "                                                 6,                    // Factor size\n",
    "                                                 0.1                   // Factor initialisation variance\n",
    "                                                 );"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "After defining the model we train it as usual. Factorization machines take a little longer to train than logistic regression does, but not excessively so."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Training factorization machine took (00:00:15:126)\n"
     ]
    }
   ],
   "source": [
    "var fmStartTime = System.currentTimeMillis();\n",
    "var fmMNIST = fmLabelTrainer.train(mnistTrain);\n",
    "var fmEndTime = System.currentTimeMillis();\n",
    "System.out.println(\"Training factorization machine took \" + Util.formatDuration(fmStartTime,fmEndTime));"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "And then evaluate it using Tribuo's built-in evaluation system."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Scoring factorization machine took (00:00:00:379)\n",
      "Class                           n          tp          fn          fp      recall        prec          f1\n",
      "0                             980         959          21          31       0.979       0.969       0.974\n",
      "1                           1,135       1,120          15          22       0.987       0.981       0.984\n",
      "2                           1,032         976          56          57       0.946       0.945       0.945\n",
      "3                           1,010         952          58          39       0.943       0.961       0.952\n",
      "4                             982         952          30          49       0.969       0.951       0.960\n",
      "5                             892         857          35          63       0.961       0.932       0.946\n",
      "6                             958         920          38          30       0.960       0.968       0.964\n",
      "7                           1,028         969          59          36       0.943       0.964       0.953\n",
      "8                             974         916          58          57       0.940       0.941       0.941\n",
      "9                           1,009         951          58          44       0.943       0.956       0.949\n",
      "Total                      10,000       9,572         428         428\n",
      "Accuracy                                                                    0.957\n",
      "Micro Average                                                               0.957       0.957       0.957\n",
      "Macro Average                                                               0.957       0.957       0.957\n",
      "Balanced Error Rate                                                         0.043\n",
      "               0       1       2       3       4       5       6       7       8       9\n",
      "0            959       0       0       0       1       2       7       4       4       3\n",
      "1              0   1,120       4       1       3       0       3       0       4       0\n",
      "2              6       5     976       7       7       2       5       8      14       2\n",
      "3              0       2      15     952       0      19       1       3      14       4\n",
      "4              3       3       7       1     952       0       4       1       1      10\n",
      "5              3       1       0       6       1     857       5       5      13       1\n",
      "6              8       2       7       2       7      11     920       1       0       0\n",
      "7              2       5      13       5       4       4       0     969       4      22\n",
      "8              2       1       9       9      11      15       4       5     916       2\n",
      "9              7       3       2       8      15      10       1       9       3     951\n",
      "\n"
     ]
    }
   ],
   "source": [
    "fmStartTime = System.currentTimeMillis();\n",
    "var mnistFMEval = labelEvaluator.evaluate(fmMNIST,mnistTest);\n",
    "fmEndTime = System.currentTimeMillis();\n",
    "System.out.println(\"Scoring factorization machine took \" + Util.formatDuration(fmStartTime,fmEndTime));\n",
    "System.out.println(mnistFMEval.toString());\n",
    "System.out.println(mnistFMEval.getConfusionMatrix().toString());"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We get about 95% accuracy on MNIST, which is pretty good for a fairly simple model. Now let's export it to ONNX, then we'll load it back in via Tribuo's ONNX Runtime interface and compare the performance. We'll use this model in the reproducibility tutorial, so we'll save it to disk in the tutorials folder.\n",
    "\n",
    "Tribuo `Model`s which support ONNX export implement the `ONNXExportable` interface which defines methods for constructing an ONNX protobuf and saving it to disk."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "var fmMNISTPath = Paths.get(\".\",\"fm-mnist.onnx\");\n",
    "fmMNIST.saveONNXModel(\"org.tribuo.tutorials.onnxexport.fm\", // namespace for the model\n",
    "                      0,                                    // model version number\n",
    "                      fmMNISTPath                           // path to save the model\n",
    "                      );"
   ]
  },
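  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you want the ONNX protobuf in memory rather than written straight to disk, `ONNXExportable` also defines `exportONNXModel`, which returns the `ModelProto` directly. A small sketch (the `fm-mnist-copy.onnx` filename is just for illustration):\n",
    "\n",
    "```java\n",
    "// Build the ONNX protobuf in memory\n",
    "var modelProto = fmMNIST.exportONNXModel(\"org.tribuo.tutorials.onnxexport.fm\", 0L);\n",
    "// Then inspect it, or write the bytes wherever they are needed\n",
    "Files.write(Paths.get(\".\",\"fm-mnist-copy.onnx\"), modelProto.toByteArray());\n",
    "```"
   ]
  },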
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To load an ONNX model we need to define the mapping between Tribuo's feature names and the indices that the ONNX model understands. Fortunately for models exported from Tribuo we already have that information, as it is stored in the feature and output maps. We'll extract it into the general form that `ONNXExternalModel` expects."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "Map<String, Integer> mnistFeatureMap = new HashMap<>();\n",
    "for (VariableInfo f : fmMNIST.getFeatureIDMap()){\n",
    "    VariableIDInfo id = (VariableIDInfo) f;\n",
    "    mnistFeatureMap.put(id.getName(),id.getID());\n",
    "}\n",
    "Map<Label, Integer> mnistOutputMap = new HashMap<>();\n",
    "for (Pair<Integer,Label> l : fmMNIST.getOutputIDInfo()) {\n",
    "    mnistOutputMap.put(l.getB(), l.getA());\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we'll define a test function that compares two sets of predictions, as ONNX Runtime uses single precision for its computations while Tribuo uses double precision, so the prediction scores are never bitwise equal."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "public boolean checkPredictions(List<Prediction<Label>> nativePredictions, List<Prediction<Label>> onnxPredictions, double delta) {\n",
    "    for (int i = 0; i < nativePredictions.size(); i++) {\n",
    "        Prediction<Label> tribuo = nativePredictions.get(i);\n",
    "        Prediction<Label> external = onnxPredictions.get(i);\n",
    "        // Check the predicted label\n",
    "        if (!tribuo.getOutput().getLabel().equals(external.getOutput().getLabel())) {\n",
    "            System.out.println(\"At index \" + i + \" predictions are not equal - \"\n",
    "                    + tribuo.getOutput().getLabel() + \" and \"\n",
    "                    + external.getOutput().getLabel());\n",
    "            return false;\n",
    "        }\n",
    "        // Check the maximum score\n",
    "        if (Math.abs(tribuo.getOutput().getScore() - external.getOutput().getScore()) > delta) {\n",
    "            System.out.println(\"At index \" + i + \" predictions are not equal - \"\n",
    "                    + tribuo.getOutput() + \" and \"\n",
    "                    + external.getOutput());\n",
    "            return false;\n",
    "        }\n",
    "        // Check the score distribution\n",
    "        for (Map.Entry<String, Label> l : tribuo.getOutputScores().entrySet()) {\n",
    "            Label other = external.getOutputScores().get(l.getKey());\n",
    "            if (other == null) {\n",
    "                System.out.println(\"At index \" + i + \" failed to find label \" + l.getKey() + \" in ORT prediction.\");\n",
    "                return false;\n",
    "            } else {\n",
    "                if (Math.abs(l.getValue().getScore() - other.getScore()) > delta) {\n",
    "                    System.out.println(\"At index \" + i + \" predictions are not equal - \"\n",
    "                            + tribuo.getOutputScores() + \" and \"\n",
    "                            + external.getOutputScores());\n",
    "                    return false;\n",
    "                }\n",
    "            }\n",
    "        }\n",
    "    }\n",
    "    return true;\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Then we'll construct the `ONNXExternalModel` loading our freshly created ONNX model using the feature and output mappings we built earlier. First we create a `SessionOptions` which controls the model inference. By default it uses a single thread on one CPU, but by setting values in the options object before building the external model we can make it run on multiple threads, use GPUs or other accelerator hardware supported by ONNX Runtime."
   ]
  },
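  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For example, options like these (the `accelOpts` name is just illustrative) enable multi-threaded scoring, and a CUDA-enabled build of ONNX Runtime (not the default Maven Central jar) could additionally target a GPU:\n",
    "\n",
    "```java\n",
    "var accelOpts = new OrtSession.SessionOptions();\n",
    "accelOpts.setIntraOpNumThreads(4); // use 4 threads inside each operator\n",
    "// accelOpts.addCUDA(0);           // only on a CUDA-enabled ONNX Runtime build\n",
    "```"
   ]
  },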
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "var ortEnv = OrtEnvironment.getEnvironment();\n",
    "var sessionOpts = new OrtSession.SessionOptions();\n",
    "var denseTransformer = new DenseTransformer();\n",
    "var labelTransformer = new LabelTransformer();\n",
    "ONNXExternalModel<Label> onnxFM = ONNXExternalModel.createOnnxModel(labelFactory, mnistFeatureMap, mnistOutputMap,\n",
    "                    denseTransformer, labelTransformer, sessionOpts, fmMNISTPath, \"input\");"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "An `ONNXExternalModel` is a Tribuo model so we can use the same evaluation infrastructure."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Scoring ONNX factorization machine took (00:00:00:801)\n",
      "Class                           n          tp          fn          fp      recall        prec          f1\n",
      "0                             980         959          21          31       0.979       0.969       0.974\n",
      "1                           1,135       1,120          15          22       0.987       0.981       0.984\n",
      "2                           1,032         976          56          57       0.946       0.945       0.945\n",
      "3                           1,010         952          58          39       0.943       0.961       0.952\n",
      "4                             982         952          30          49       0.969       0.951       0.960\n",
      "5                             892         857          35          63       0.961       0.932       0.946\n",
      "6                             958         920          38          30       0.960       0.968       0.964\n",
      "7                           1,028         969          59          36       0.943       0.964       0.953\n",
      "8                             974         916          58          57       0.940       0.941       0.941\n",
      "9                           1,009         951          58          44       0.943       0.956       0.949\n",
      "Total                      10,000       9,572         428         428\n",
      "Accuracy                                                                    0.957\n",
      "Micro Average                                                               0.957       0.957       0.957\n",
      "Macro Average                                                               0.957       0.957       0.957\n",
      "Balanced Error Rate                                                         0.043\n",
      "               0       1       2       3       4       5       6       7       8       9\n",
      "0            959       0       0       0       1       2       7       4       4       3\n",
      "1              0   1,120       4       1       3       0       3       0       4       0\n",
      "2              6       5     976       7       7       2       5       8      14       2\n",
      "3              0       2      15     952       0      19       1       3      14       4\n",
      "4              3       3       7       1     952       0       4       1       1      10\n",
      "5              3       1       0       6       1     857       5       5      13       1\n",
      "6              8       2       7       2       7      11     920       1       0       0\n",
      "7              2       5      13       5       4       4       0     969       4      22\n",
      "8              2       1       9       9      11      15       4       5     916       2\n",
      "9              7       3       2       8      15      10       1       9       3     951\n",
      "\n"
     ]
    }
   ],
   "source": [
    "var onnxStartTime = System.currentTimeMillis();\n",
    "var mnistONNXEval = labelEvaluator.evaluate(onnxFM,mnistTest);\n",
    "var onnxEndTime = System.currentTimeMillis();\n",
    "System.out.println(\"Scoring ONNX factorization machine took \" + Util.formatDuration(onnxStartTime,onnxEndTime));\n",
    "System.out.println(mnistONNXEval.toString());\n",
    "System.out.println(mnistONNXEval.getConfusionMatrix().toString());"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The two models evaluate the same, but they could be producing slightly different probability values, so let's check it using our more precise comparison function. `checkPredictions` logs any divergence it finds and returns false if the predictions differ. We're going to use a delta of 1e-5, considering differences below that threshold to be irrelevant."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Predictions are equal - true\n"
     ]
    }
   ],
   "source": [
    "System.out.println(\"Predictions are equal - \" + \n",
    "                    checkPredictions(mnistFMEval.getPredictions(), mnistONNXEval.getPredictions(), 1e-5));"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "An important part of a Tribuo model is its provenance. We don't want to lose that information when exporting models to ONNX format, so we encode the provenance in the ONNX protobuf. It uses the marshalled provenance format from OLCUT, and the protos are available in OLCUT, so they can be parsed in other systems. As a result, when loading in a Tribuo-exported ONNX model the `ONNXExternalModel` class has two provenance objects: one for the `ONNXExternalModel` itself, and one for the original `Model` object.\n",
    "\n",
    "Let's examine both of these provenances. First the one for the `ONNXExternalModel`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "ONNXExternalModel provenance:\n",
      "ONNXExternalModel(\n",
      "\tclass-name = org.tribuo.interop.onnx.ONNXExternalModel\n",
      "\tdataset = Dataset(\n",
      "\t\t\tclass-name = org.tribuo.Dataset\n",
      "\t\t\tdatasource = DataSource(\n",
      "\t\t\t\t\tdescription = unknown-external-data\n",
      "\t\t\t\t\toutputFactory = LabelFactory(\n",
      "\t\t\t\t\t\t\tclass-name = org.tribuo.classification.LabelFactory\n",
      "\t\t\t\t\t\t)\n",
      "\t\t\t\t\tdatasource-creation-time = 2022-10-07T11:46:10.955196-04:00\n",
      "\t\t\t\t)\n",
      "\t\t\ttransformations = List[]\n",
      "\t\t\tis-sequence = false\n",
      "\t\t\tis-dense = false\n",
      "\t\t\tnum-examples = -1\n",
      "\t\t\tnum-features = 717\n",
      "\t\t\tnum-outputs = 10\n",
      "\t\t\ttribuo-version = 4.3.0\n",
      "\t\t)\n",
      "\ttrainer = Trainer(\n",
      "\t\t\tclass-name = org.tribuo.Trainer\n",
      "\t\t\tfileModifiedTime = 2022-10-07T11:46:10.476-04:00\n",
      "\t\t\tmodelHash = 9DD2FABC436FB75BAD6A3E061BE51022A79F140FC491C6CA8B8033253F43CD5F\n",
      "\t\t\tlocation = file:/local/ExternalRepositories/tribuo/tutorials/./fm-mnist.onnx\n",
      "\t\t)\n",
      "\ttrained-at = 2022-10-07T11:46:10.952607-04:00\n",
      "\tinstance-values = Map{\n",
      "\t\tmodel-domain=org.tribuo.tutorials.onnxexport.fm\n",
      "\t\tmodel-graphname=FMClassificationModel\n",
      "\t\tmodel-description=factorization-machine-model - Model(class-name=org.tribuo.classification.sgd.fm.FMClassificationModel,dataset=Dataset(class-name=org.tribuo.MutableDataset,datasource=DataSource(class-name=org.tribuo.datasource.IDXDataSource,outputPath=/local/ExternalRepositories/tribuo/tutorials/train-labels-idx1-ubyte.gz,outputFactory=OutputFactory(class-name=org.tribuo.classification.LabelFactory),featuresPath=/local/ExternalRepositories/tribuo/tutorials/train-images-idx3-ubyte.gz,features-file-modified-time=2000-07-21T14:20:24-04:00,output-resource-hash=SHA-256[3552534A0A558BBED6AED32B30C495CCA23D567EC52CAC8BE1A0730E8010255C],datasource-creation-time=2022-10-07T11:45:53.253680-04:00,output-file-modified-time=2000-07-21T14:20:27-04:00,idx-feature-type=UBYTE,features-resource-hash=SHA-256[440FCABF73CC546FA21475E81EA370265605F56BE210A4024D2CA8F203523609],host-short-name=DataSource),transformations=[],is-sequence=false,is-dense=false,num-examples=60000,num-features=717,num-outputs=10,tribuo-version=4.3.0),trainer=Trainer(class-name=org.tribuo.classification.sgd.fm.FMClassificationTrainer,seed=12345,variance=0.1,minibatchSize=1,factorizedDimSize=6,shuffle=true,epochs=5,optimiser=StochasticGradientOptimiser(class-name=org.tribuo.math.optimisers.AdaGrad,epsilon=0.1,initialLearningRate=0.1,initialValue=0.0,host-short-name=StochasticGradientOptimiser),loggingInterval=30000,objective=LabelObjective(class-name=org.tribuo.classification.sgd.objectives.LogMulticlass,host-short-name=LabelObjective),tribuo-version=4.3.0,train-invocation-count=0,is-sequence=false,host-short-name=Trainer),trained-at=2022-10-07T11:46:09.759423-04:00,instance-values={},tribuo-version=4.3.0,java-version=12,os-name=Linux,os-arch=amd64)\n",
      "\t\tmodel-producer=Tribuo\n",
      "\t\tmodel-version=0\n",
      "\t\tinput-name=input\n",
      "\t}\n",
      "\ttribuo-version = 4.3.0\n",
      "\tjava-version = 12\n",
      "\tos-name = Linux\n",
      "\tos-arch = amd64\n",
      ")\n"
     ]
    }
   ],
   "source": [
    "System.out.println(\"ONNXExternalModel provenance:\\n\" + ProvenanceUtil.formattedProvenanceString(onnxFM.getProvenance()));"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This has the location the ONNX file was loaded from, a hash of the file, and timestamps for both the ONNX file and the model object wrapping it.\n",
    "\n",
    "Now let's look at the original Model provenance:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "ONNX file provenance:\n",
      "FMClassificationModel(\n",
      "\tclass-name = org.tribuo.classification.sgd.fm.FMClassificationModel\n",
      "\tdataset = MutableDataset(\n",
      "\t\t\tclass-name = org.tribuo.MutableDataset\n",
      "\t\t\tdatasource = IDXDataSource(\n",
      "\t\t\t\t\tclass-name = org.tribuo.datasource.IDXDataSource\n",
      "\t\t\t\t\toutputFactory = LabelFactory(\n",
      "\t\t\t\t\t\t\tclass-name = org.tribuo.classification.LabelFactory\n",
      "\t\t\t\t\t\t)\n",
      "\t\t\t\t\toutputPath = /local/ExternalRepositories/tribuo/tutorials/train-labels-idx1-ubyte.gz\n",
      "\t\t\t\t\tfeaturesPath = /local/ExternalRepositories/tribuo/tutorials/train-images-idx3-ubyte.gz\n",
      "\t\t\t\t\tfeatures-file-modified-time = 2000-07-21T14:20:24-04:00\n",
      "\t\t\t\t\toutput-resource-hash = 3552534A0A558BBED6AED32B30C495CCA23D567EC52CAC8BE1A0730E8010255C\n",
      "\t\t\t\t\tdatasource-creation-time = 2022-10-07T11:45:53.253680-04:00\n",
      "\t\t\t\t\toutput-file-modified-time = 2000-07-21T14:20:27-04:00\n",
      "\t\t\t\t\tidx-feature-type = UBYTE\n",
      "\t\t\t\t\tfeatures-resource-hash = 440FCABF73CC546FA21475E81EA370265605F56BE210A4024D2CA8F203523609\n",
      "\t\t\t\t\thost-short-name = DataSource\n",
      "\t\t\t\t)\n",
      "\t\t\ttransformations = List[]\n",
      "\t\t\tis-sequence = false\n",
      "\t\t\tis-dense = false\n",
      "\t\t\tnum-examples = 60000\n",
      "\t\t\tnum-features = 717\n",
      "\t\t\tnum-outputs = 10\n",
      "\t\t\ttribuo-version = 4.3.0\n",
      "\t\t)\n",
      "\ttrainer = FMClassificationTrainer(\n",
      "\t\t\tclass-name = org.tribuo.classification.sgd.fm.FMClassificationTrainer\n",
      "\t\t\tseed = 12345\n",
      "\t\t\tvariance = 0.1\n",
      "\t\t\tminibatchSize = 1\n",
      "\t\t\tfactorizedDimSize = 6\n",
      "\t\t\tshuffle = true\n",
      "\t\t\tepochs = 5\n",
      "\t\t\toptimiser = AdaGrad(\n",
      "\t\t\t\t\tclass-name = org.tribuo.math.optimisers.AdaGrad\n",
      "\t\t\t\t\tepsilon = 0.1\n",
      "\t\t\t\t\tinitialLearningRate = 0.1\n",
      "\t\t\t\t\tinitialValue = 0.0\n",
      "\t\t\t\t\thost-short-name = StochasticGradientOptimiser\n",
      "\t\t\t\t)\n",
      "\t\t\tloggingInterval = 30000\n",
      "\t\t\tobjective = LogMulticlass(\n",
      "\t\t\t\t\tclass-name = org.tribuo.classification.sgd.objectives.LogMulticlass\n",
      "\t\t\t\t\thost-short-name = LabelObjective\n",
      "\t\t\t\t)\n",
      "\t\t\ttribuo-version = 4.3.0\n",
      "\t\t\ttrain-invocation-count = 0\n",
      "\t\t\tis-sequence = false\n",
      "\t\t\thost-short-name = Trainer\n",
      "\t\t)\n",
      "\ttrained-at = 2022-10-07T11:46:09.759423-04:00\n",
      "\tinstance-values = Map{}\n",
      "\ttribuo-version = 4.3.0\n",
      "\tjava-version = 12\n",
      "\tos-name = Linux\n",
      "\tos-arch = amd64\n",
      ")\n"
     ]
    }
   ],
   "source": [
    "System.out.println(\"ONNX file provenance:\\n\" + ProvenanceUtil.formattedProvenanceString(onnxFM.getTribuoProvenance().get()));"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can also check that the provenance extracted from the ONNX file is the same as the provenance in the original model object."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Provenances are equal\n"
     ]
    }
   ],
   "source": [
    "var equality = fmMNIST.getProvenance().equals(onnxFM.getTribuoProvenance().get()) ? \"equal\" : \"not equal\";\n",
    "System.out.println(\"Provenances are \" + equality);"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Exporting an ensemble\n",
    "\n",
    "Tribuo allows the creation of arbitrary ensembles, and ensembles are usually powerful models which are worth deploying. So we're going to make a three-element voting ensemble out of our factorization machine along with two other models, and export that to ONNX as well. The other models are a logistic regression and a smaller factorization machine, but we could use any classification model supported by Tribuo, including another ensemble. As this is a small ensemble of similar models, our goal is to demonstrate the functionality rather than to improve performance on MNIST much."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [],
   "source": [
    "var lrTrainer = new LogisticRegressionTrainer();\n",
    "var smallFMTrainer = new FMClassificationTrainer(new LogMulticlass(),  // Loss function\n",
    "                                                 new AdaGrad(0.1,0.1), // Gradient optimiser\n",
    "                                                 2,                    // Number of training epochs\n",
    "                                                 30000,                // Logging interval\n",
    "                                                 42L,                  // RNG seed\n",
    "                                                 3,                    // Factor size\n",
    "                                                 0.1                   // Factor initialisation variance\n",
    "                                                 );\n",
    "var lrModel = lrTrainer.train(mnistTrain);\n",
    "var smallFMModel = smallFMTrainer.train(mnistTrain);"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Tribuo's `WeightedEnsembleModel` class allows the creation of arbitrary ensembles with or without voting weights. We're going to create an unweighted ensemble of our three models using the standard `VotingCombiner`, which takes a majority vote among the three members' predictions, with ties broken in favour of the first label."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [],
   "source": [
    "var ensemble = WeightedEnsembleModel.createEnsembleFromExistingModels(\"ensemble\", // Model name\n",
    "                                           List.of(fmMNIST,lrModel,smallFMModel), // Ensemble members\n",
    "                                           new VotingCombiner());                 // Combination operator"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Scoring ensemble took (00:00:00:611)\n",
      "Class                           n          tp          fn          fp      recall        prec          f1\n",
      "0                             980         965          15          43       0.985       0.957       0.971\n",
      "1                           1,135       1,119          16          34       0.986       0.971       0.978\n",
      "2                           1,032         979          53          86       0.949       0.919       0.934\n",
      "3                           1,010         926          84          38       0.917       0.961       0.938\n",
      "4                             982         937          45          49       0.954       0.950       0.952\n",
      "5                             892         837          55          49       0.938       0.945       0.942\n",
      "6                             958         922          36          32       0.962       0.966       0.964\n",
      "7                           1,028         978          50          52       0.951       0.950       0.950\n",
      "8                             974         918          56          98       0.943       0.904       0.923\n",
      "9                           1,009         917          92          21       0.909       0.978       0.942\n",
      "Total                      10,000       9,498         502         502\n",
      "Accuracy                                                                    0.950\n",
      "Micro Average                                                               0.950       0.950       0.950\n",
      "Macro Average                                                               0.949       0.950       0.949\n",
      "Balanced Error Rate                                                         0.051\n",
      "               0       1       2       3       4       5       6       7       8       9\n",
      "0            965       0       0       1       0       2       7       3       2       0\n",
      "1              0   1,119       5       0       0       0       5       1       5       0\n",
      "2              7       5     979       4       5       1       3       7      20       1\n",
      "3              3       3      29     926       1      14       0       8      25       1\n",
      "4              3       2      11       1     937       0       3       1      11      13\n",
      "5              8       1       2       9       3     837      10       5      17       0\n",
      "6              8       2       5       3       2      14     922       0       2       0\n",
      "7              2       9      21       3       6       1       0     978       2       6\n",
      "8              5       4      10       7      10       9       2       9     918       0\n",
      "9              7       8       3      10      22       8       2      18      14     917\n",
      "\n"
     ]
    }
   ],
   "source": [
    "var ensembleStartTime = System.currentTimeMillis();\n",
    "var ensembleEval = labelEvaluator.evaluate(ensemble,mnistTest);\n",
    "var ensembleEndTime = System.currentTimeMillis();\n",
    "System.out.println(\"Scoring ensemble took \" + Util.formatDuration(ensembleStartTime,ensembleEndTime));\n",
    "System.out.println(ensembleEval.toString());\n",
    "System.out.println(ensembleEval.getConfusionMatrix().toString());"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As before, we use the `saveONNXModel` method on the `ONNXExportable` interface to write out the model. Note that if any of the ensemble members isn't `ONNXExportable` then this call will throw a runtime exception."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [],
   "source": [
    "var ensemblePath = Paths.get(\".\",\"ensemble-mnist.onnx\");\n",
    "ensemble.saveONNXModel(\"org.tribuo.tutorials.onnxexport.ensemble\", // namespace for the model\n",
    "                      0,                                           // model version number\n",
    "                      ensemblePath                                 // path to save the model\n",
    "                      );"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can load this model into an `ONNXExternalModel` as well:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Scoring ONNX ensemble took (00:00:00:938)\n",
      "Predictions are equal - true\n"
     ]
    }
   ],
   "source": [
    "var onnxEnsemble = ONNXExternalModel.createOnnxModel(labelFactory, mnistFeatureMap, mnistOutputMap,\n",
    "                    denseTransformer, labelTransformer, sessionOpts, ensemblePath, \"input\");\n",
    "onnxStartTime = System.currentTimeMillis();\n",
    "var mnistONNXEnsembleEval = labelEvaluator.evaluate(onnxEnsemble,mnistTest);\n",
    "onnxEndTime = System.currentTimeMillis();\n",
    "System.out.println(\"Scoring ONNX ensemble took \" + Util.formatDuration(onnxStartTime,onnxEndTime));\n",
    "System.out.println(\"Predictions are equal - \" + \n",
    "                    checkPredictions(ensembleEval.getPredictions(), mnistONNXEnsembleEval.getPredictions(), 1e-5));"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Deploying the model\n",
    "\n",
    "This portion of the tutorial describes how to deploy the ONNX model on OCI Data Science, using its model deployment service. ONNX models can also be deployed via [Oracle Machine Learning Services](https://blogs.oracle.com/machinelearning/post/introducing-oracle-machine-learning-services), in other cloud providers' machine learning services, or in a functions-as-a-service offering using something like ONNX Runtime.\n",
    "\n",
    "Tribuo's OCI Data Science support comes in two parts, a set of static methods for deploying models in the cloud, and the `OCIModel` class which wraps a model endpoint and allows using it as a normal Tribuo model. Underneath the covers we're going to use an OCI DS conda environment which contains ONNX Runtime in Python, and use that to make predictions from our model trained in Java.\n",
    "\n",
    "To run this part of the tutorial you'll need to have configured your access to OCI Data Science (if you've not done this before then there is a tutorial on how to do that [here](https://github.com/oracle/oci-data-science-ai-samples/blob/master/labs/MLSummit21/lab-0-tenancy-setup.md)), set up authentication to allow [CLI access to OCI](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm), and have the compartment & project ids for the OCI Data Science project you want to deploy into."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [],
   "source": [
    "// Set these variables appropriately for your OCI account\n",
    "var compartmentID = \"your-oci-compartment-id\";\n",
    "var projectID = \"your-oci-ds-project-id\";"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we'll instantiate the DS client, and build the config object which captures all the information about the model we're uploading. The models are run inside a [conda environment](https://docs.oracle.com/en-us/iaas/data-science/using/conda_understand_environments.htm), and you need to select one which contains ONNX Runtime 1.6.0 or newer (as Tribuo emits ONNX models using Opset 13, which is supported in ONNX Runtime 1.6+). This can either be a custom one you've created, or one provided by OCI Data Science."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [],
   "source": [
    "// Instantiate the client\n",
    "var provider = new ConfigFileAuthenticationDetailsProvider(ConfigFileReader.parseDefault());\n",
    "var dsClient = new DataScienceClient(provider);\n",
    "\n",
    "// Instantiate an ObjectMapper for parsing the REST calls\n",
    "var objMapper = OCIUtil.createObjectMapper();\n",
    "\n",
    "// Select the conda environment\n",
    "var condaName = \"dataexpl_p37_cpu_v3\"; // Also referred to as the \"slug\" in the OCI DS docs\n",
    "var condaPath = \"oci://service-conda-packs@id19sfcrra6z/service_pack/cpu/Data Exploration and Manipulation for CPU Python 3.7/3.0/dataexpl_p37_cpu_v3\";\n",
    "\n",
    "// Instantiate the model configuration\n",
    "var dsConfig = new OCIUtil.OCIDSConfig(compartmentID,projectID);\n",
    "var modelConfig = new OCIUtil.OCIModelArtifactConfig(dsConfig,          // Data Science config\n",
    "                                             \"tribuo-tutorial-model\",   // Model name\n",
    "                                             \"A factorization machine\", // Model description\n",
    "                                             \"org.tribuo.tutorial.test\",// ONNX model domain\n",
    "                                             0,                         // ONNX model version\n",
    "                                             condaName,                 // Conda environment name\n",
    "                                             condaPath);                // Conda environment path on object storage"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can now upload the model into OCI Data Science. The `createModel` method has an overload that accepts an ONNX file on disk, or you can pass in any model which implements `ONNXExportable`. Tribuo takes care of setting the model metadata according to the information it can extract from the `Model` object, and it automatically generates the Python script and YAML file which control the model's environment in the deployment. Note that models are distinct from model deployments, so a single model artifact can be deployed multiple times with different endpoints, VM shapes and scaling parameters."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [],
   "source": [
    "var modelID = OCIUtil.createModel(fmMNIST,dsClient,objMapper,modelConfig);"
   ]
  },
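  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you already have an ONNX file on disk (like the `ensemble-mnist.onnx` file we saved earlier) you can upload that file directly rather than passing a live `Model` object. The sketch below shows that path; the argument order here is an assumption, so check the `OCIUtil` javadoc for the exact signature in your version of Tribuo:\n",
    "\n",
    "```java\n",
    "// Hypothetical call to the file-based createModel overload mentioned above;\n",
    "// the exact parameter order may differ in your Tribuo version.\n",
    "var fileModelID = OCIUtil.createModel(ensemblePath, // Path to the ONNX file on disk\n",
    "                                      dsClient,     // Data Science client\n",
    "                                      objMapper,    // JSON object mapper\n",
    "                                      modelConfig); // Model artifact configuration\n",
    "```"
   ]
  },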
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `modelID` is the reference for the model artifact stored in Oracle Cloud, and we'll need this to create a deployment wrapping the model.\n",
    "\n",
    "The deployment configuration is specified by the `OCIModelDeploymentConfig` wrapper class, which contains the model ID, the model deployment name, the VM shape, the maximum number of VM instances to create, and the bandwidth available for that model. At the time of writing OCI DS supports the `VM.Standard2` shapes."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [],
   "source": [
    "var deployConfig = new OCIUtil.OCIModelDeploymentConfig(dsConfig, // Data Science config\n",
    "                                 modelID,                          // Model artifact to deploy\n",
    "                                 \"tribuo-tutorial-deployment\",     // Deployment name\n",
    "                                 \"VM.Standard2.1\",                 // VM shape\n",
    "                                 10,                               // Bandwidth in Mbps\n",
    "                                 1);                               // Number of VM instances\n",
    "\n",
    "var deployURL = OCIUtil.deploy(deployConfig,dsClient,objMapper);\n",
    "System.out.println(deployURL);"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Model deployments take a few minutes, so you'll need to wait a while if you've been following along with the tutorial. The deployment progress can be checked on the OCI console for the data science project you are using.\n",
    "\n",
    "Once the deployment has finished, we can wrap it in an `OCIModel` and then check it's the same as the factorization machine we deployed. An `OCIModel` is a subclass of `ExternalModel` in the same way that externally trained ONNX models are, so we need to supply the mapping between Tribuo's feature domain & the feature indices expected by the model, the output domain mapping, and an `OCIOutputConverter` instance which can convert the prediction matrix into Tribuo's `Prediction` objects. As we've deployed a factorization machine for MNIST, we'll use `OCILabelConverter`, and the mappings are the same as the ones we used for the ONNX model earlier."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [],
   "source": [
    "var ociLabelConverter = new OCILabelConverter(true);\n",
    "var ociModel = OCIModel.createOCIModel(labelFactory,mnistFeatureMap, mnistOutputMap, \n",
    "                                       Paths.get(\"~/.oci/config\"), // OCI authentication config\n",
    "                                       deployURL,                  // Model endpoint URL\n",
    "                                       ociLabelConverter);         // Output converter"
   ]
  },
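  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before running a full evaluation, it can be useful to sanity check the endpoint with a single prediction. As `OCIModel` is a standard Tribuo `Model`, this is just a call to `predict`; a minimal sketch (note each call makes a network request to the deployed endpoint):\n",
    "\n",
    "```java\n",
    "// Score one test example through the deployed endpoint\n",
    "var example = mnistTest.getData().get(0);\n",
    "var prediction = ociModel.predict(example);\n",
    "// Print the predicted label along with its score\n",
    "System.out.println(prediction.getOutput());\n",
    "```"
   ]
  },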
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As `OCIModel` is a Tribuo model we can evaluate it using our standard tools.\n",
    "\n",
    "Note that when running this notebook from scratch the OCI Model Deployment can take up to 15 minutes to fully instantiate, and the next cell will not execute correctly until that deployment has finished. You can monitor the status of the deployment in the OCI console."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Scoring OCI model took (00:00:53:606)\n",
      "Class                           n          tp          fn          fp      recall        prec          f1\n",
      "0                             980         959          21          31       0.979       0.969       0.974\n",
      "1                           1,135       1,120          15          22       0.987       0.981       0.984\n",
      "2                           1,032         976          56          57       0.946       0.945       0.945\n",
      "3                           1,010         952          58          39       0.943       0.961       0.952\n",
      "4                             982         952          30          49       0.969       0.951       0.960\n",
      "5                             892         857          35          63       0.961       0.932       0.946\n",
      "6                             958         920          38          30       0.960       0.968       0.964\n",
      "7                           1,028         969          59          36       0.943       0.964       0.953\n",
      "8                             974         916          58          57       0.940       0.941       0.941\n",
      "9                           1,009         951          58          44       0.943       0.956       0.949\n",
      "Total                      10,000       9,572         428         428\n",
      "Accuracy                                                                    0.957\n",
      "Micro Average                                                               0.957       0.957       0.957\n",
      "Macro Average                                                               0.957       0.957       0.957\n",
      "Balanced Error Rate                                                         0.043\n",
      "               0       1       2       3       4       5       6       7       8       9\n",
      "0            959       0       0       0       1       2       7       4       4       3\n",
      "1              0   1,120       4       1       3       0       3       0       4       0\n",
      "2              6       5     976       7       7       2       5       8      14       2\n",
      "3              0       2      15     952       0      19       1       3      14       4\n",
      "4              3       3       7       1     952       0       4       1       1      10\n",
      "5              3       1       0       6       1     857       5       5      13       1\n",
      "6              8       2       7       2       7      11     920       1       0       0\n",
      "7              2       5      13       5       4       4       0     969       4      22\n",
      "8              2       1       9       9      11      15       4       5     916       2\n",
      "9              7       3       2       8      15      10       1       9       3     951\n",
      "\n",
      "Predictions are equal - true\n"
     ]
    }
   ],
   "source": [
    "var ociStartTime = System.currentTimeMillis();\n",
    "var ociEval = labelEvaluator.evaluate(ociModel,mnistTest);\n",
    "var ociEndTime = System.currentTimeMillis();\n",
    "System.out.println(\"Scoring OCI model took \" + Util.formatDuration(ociStartTime,ociEndTime));\n",
    "System.out.println(ociEval.toString());\n",
    "System.out.println(ociEval.getConfusionMatrix().toString());\n",
    "\n",
    "System.out.println(\"Predictions are equal - \" + \n",
    "                    checkPredictions(ociEval.getPredictions(), mnistFMEval.getPredictions(), 1e-5));"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can see that the model performs identically to the Tribuo version, though it takes a little longer as each call to predict incurs some network latency.\n",
    "\n",
    "## Conclusion\n",
    "\n",
    "We've looked at exporting models out of Tribuo in ONNX format, where they can be used from different languages and runtimes, and deployed in cloud environments like OCI Data Science. Over time we plan to expand Tribuo's support for ONNX export to cover more models. Tribuo's ONNX support is a separate module from the rest of Tribuo and could be used to build ONNX models in other packages on the JVM. If you're interested in expanding the support for ONNX in Java, you can open a [GitHub issue](https://github.com/oracle/tribuo/issues) for Tribuo, or you can talk to the ONNX community in their [Slack workspace](https://onnx.ai/slack.html)."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Java",
   "language": "java",
   "name": "java"
  },
  "language_info": {
   "codemirror_mode": "java",
   "file_extension": ".jshell",
   "mimetype": "text/x-java-source",
   "name": "Java",
   "pygments_lexer": "java",
   "version": "12+33"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
