{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# SDK Reference"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#| hide\n",
    "from nbdev.showdoc import show_doc\n",
    "from nixtla import NixtlaClient"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/markdown": [
       "---\n",
       "\n",
       "## NixtlaClient\n",
       "\n",
       ">      NixtlaClient (api_key:Optional[str]=None, base_url:Optional[str]=None,\n",
       ">                    max_retries:int=6, retry_interval:int=10,\n",
       ">                    max_wait_time:int=360)\n",
       "\n",
       "Constructs all the necessary attributes for the NixtlaClient object.\n",
       "\n",
       "|    | **Type** | **Default** | **Details** |\n",
       "| -- | -------- | ----------- | ----------- |\n",
       "| api_key | Optional | None | The authorization api_key interacts with the Nixtla API.<br>If not provided, it will be inferred by the NIXTLA_API_KEY environment variable. |\n",
       "| base_url | Optional | None | Custom base_url. Pass only if provided. |\n",
       "| max_retries | int | 6 | The maximum number of attempts to make when calling the API before giving up.<br>It defines how many times the client will retry the API call if it fails.<br>Default value is 6, indicating the client will attempt the API call up to 6 times in total |\n",
       "| retry_interval | int | 10 | The interval in seconds between consecutive retry attempts.<br>This is the waiting period before the client tries to call the API again after a failed attempt.<br>Default value is 10 seconds, meaning the client waits for 10 seconds between retries. |\n",
       "| max_wait_time | int | 360 | The maximum total time in seconds that the client will spend on all retry attempts before giving up.<br>This sets an upper limit on the cumulative waiting time for all retry attempts.<br>If this time is exceeded, the client will stop retrying and raise an exception.<br>Default value is 360 seconds, meaning the client will cease retrying if the total time<br>spent on retries exceeds 360 seconds.<br>The client throws a ReadTimeout error after 60 seconds of inactivity. If you want to<br>catch these errors, use max_wait_time >> 60. |"
      ],
      "text/plain": [
       "---\n",
       "\n",
       "## NixtlaClient\n",
       "\n",
       ">      NixtlaClient (api_key:Optional[str]=None, base_url:Optional[str]=None,\n",
       ">                    max_retries:int=6, retry_interval:int=10,\n",
       ">                    max_wait_time:int=360)\n",
       "\n",
       "Constructs all the necessary attributes for the NixtlaClient object.\n",
       "\n",
       "|    | **Type** | **Default** | **Details** |\n",
       "| -- | -------- | ----------- | ----------- |\n",
       "| api_key | Optional | None | The authorization api_key interacts with the Nixtla API.<br>If not provided, it will be inferred by the NIXTLA_API_KEY environment variable. |\n",
       "| base_url | Optional | None | Custom base_url. Pass only if provided. |\n",
       "| max_retries | int | 6 | The maximum number of attempts to make when calling the API before giving up.<br>It defines how many times the client will retry the API call if it fails.<br>Default value is 6, indicating the client will attempt the API call up to 6 times in total |\n",
       "| retry_interval | int | 10 | The interval in seconds between consecutive retry attempts.<br>This is the waiting period before the client tries to call the API again after a failed attempt.<br>Default value is 10 seconds, meaning the client waits for 10 seconds between retries. |\n",
       "| max_wait_time | int | 360 | The maximum total time in seconds that the client will spend on all retry attempts before giving up.<br>This sets an upper limit on the cumulative waiting time for all retry attempts.<br>If this time is exceeded, the client will stop retrying and raise an exception.<br>Default value is 360 seconds, meaning the client will cease retrying if the total time<br>spent on retries exceeds 360 seconds.<br>The client throws a ReadTimeout error after 60 seconds of inactivity. If you want to<br>catch these errors, use max_wait_time >> 60. |"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "#| echo: false\n",
    "show_doc(NixtlaClient.__init__, title_level=2, name='NixtlaClient')"
   ]
  },
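  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal construction sketch (the key below is a placeholder; in practice the client can also read it from the `NIXTLA_API_KEY` environment variable):\n",
    "\n",
    "```python\n",
    "from nixtla import NixtlaClient\n",
    "\n",
    "# If api_key is omitted, it is read from the NIXTLA_API_KEY environment variable\n",
    "client = NixtlaClient(api_key='my_api_key')\n",
    "```"
   ]
  },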
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/markdown": [
       "---\n",
       "\n",
       "## NixtlaClient.validate_api_key\n",
       "\n",
       ">      NixtlaClient.validate_api_key (log:bool=True)\n",
       "\n",
       "Returns True if your api_key is valid."
      ],
      "text/plain": [
       "---\n",
       "\n",
       "## NixtlaClient.validate_api_key\n",
       "\n",
       ">      NixtlaClient.validate_api_key (log:bool=True)\n",
       "\n",
       "Returns True if your api_key is valid."
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "#| echo: false\n",
    "show_doc(NixtlaClient.validate_api_key, title_level=2, name='NixtlaClient.validate_api_key')"
   ]
  },
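  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sketch of validating credentials before making requests (this performs a network call, so it assumes a configured key):\n",
    "\n",
    "```python\n",
    "client = NixtlaClient()  # key read from NIXTLA_API_KEY\n",
    "if not client.validate_api_key():\n",
    "    raise RuntimeError('Invalid or missing Nixtla API key')\n",
    "```"
   ]
  },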
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/markdown": [
       "---\n",
       "\n",
       "## NixtlaClient.plot\n",
       "\n",
       ">      NixtlaClient.plot (df:pandas.core.frame.DataFrame,\n",
       ">                         forecasts_df:Optional[pandas.core.frame.DataFrame]=Non\n",
       ">                         e, id_col:str='unique_id', time_col:str='ds',\n",
       ">                         target_col:str='y', unique_ids:Union[List[str],NoneTyp\n",
       ">                         e,numpy.ndarray]=None, plot_random:bool=True,\n",
       ">                         models:Optional[List[str]]=None,\n",
       ">                         level:Optional[List[float]]=None,\n",
       ">                         max_insample_length:Optional[int]=None,\n",
       ">                         plot_anomalies:bool=False, engine:str='matplotlib',\n",
       ">                         resampler_kwargs:Optional[Dict]=None)\n",
       "\n",
       "Plot forecasts and insample values.\n",
       "\n",
       "|    | **Type** | **Default** | **Details** |\n",
       "| -- | -------- | ----------- | ----------- |\n",
       "| df | DataFrame |  | The DataFrame on which the function will operate. Expected to contain at least the following columns:<br>- time_col:<br>    Column name in `df` that contains the time indices of the time series. This is typically a datetime<br>    column with regular intervals, e.g., hourly, daily, monthly data points.<br>- target_col:<br>    Column name in `df` that contains the target variable of the time series, i.e., the variable we<br>    wish to predict or analyze.<br>Additionally, you can pass multiple time series (stacked in the dataframe) considering an additional column:<br>- id_col:<br>    Column name in `df` that identifies unique time series. Each unique value in this column<br>    corresponds to a unique time series. |\n",
       "| forecasts_df | Optional | None | DataFrame with columns [`unique_id`, `ds`] and models. |\n",
       "| id_col | str | unique_id | Column that identifies each serie. |\n",
       "| time_col | str | ds | Column that identifies each timestep, its values can be timestamps or integers. |\n",
       "| target_col | str | y | Column that contains the target. |\n",
       "| unique_ids | Union | None | Time Series to plot.<br>If None, time series are selected randomly. |\n",
       "| plot_random | bool | True | Select time series to plot randomly. |\n",
       "| models | Optional | None | List of models to plot. |\n",
       "| level | Optional | None | List of prediction intervals to plot if paseed. |\n",
       "| max_insample_length | Optional | None | Max number of train/insample observations to be plotted. |\n",
       "| plot_anomalies | bool | False | Plot anomalies for each prediction interval. |\n",
       "| engine | str | matplotlib | Library used to plot. 'plotly', 'plotly-resampler' or 'matplotlib'. |\n",
       "| resampler_kwargs | Optional | None | Kwargs to be passed to plotly-resampler constructor.<br>For further custumization (\"show_dash\") call the method,<br>store the plotting object and add the extra arguments to<br>its `show_dash` method. |"
      ],
      "text/plain": [
       "---\n",
       "\n",
       "## NixtlaClient.plot\n",
       "\n",
       ">      NixtlaClient.plot (df:pandas.core.frame.DataFrame,\n",
       ">                         forecasts_df:Optional[pandas.core.frame.DataFrame]=Non\n",
       ">                         e, id_col:str='unique_id', time_col:str='ds',\n",
       ">                         target_col:str='y', unique_ids:Union[List[str],NoneTyp\n",
       ">                         e,numpy.ndarray]=None, plot_random:bool=True,\n",
       ">                         models:Optional[List[str]]=None,\n",
       ">                         level:Optional[List[float]]=None,\n",
       ">                         max_insample_length:Optional[int]=None,\n",
       ">                         plot_anomalies:bool=False, engine:str='matplotlib',\n",
       ">                         resampler_kwargs:Optional[Dict]=None)\n",
       "\n",
       "Plot forecasts and insample values.\n",
       "\n",
       "|    | **Type** | **Default** | **Details** |\n",
       "| -- | -------- | ----------- | ----------- |\n",
       "| df | DataFrame |  | The DataFrame on which the function will operate. Expected to contain at least the following columns:<br>- time_col:<br>    Column name in `df` that contains the time indices of the time series. This is typically a datetime<br>    column with regular intervals, e.g., hourly, daily, monthly data points.<br>- target_col:<br>    Column name in `df` that contains the target variable of the time series, i.e., the variable we<br>    wish to predict or analyze.<br>Additionally, you can pass multiple time series (stacked in the dataframe) considering an additional column:<br>- id_col:<br>    Column name in `df` that identifies unique time series. Each unique value in this column<br>    corresponds to a unique time series. |\n",
       "| forecasts_df | Optional | None | DataFrame with columns [`unique_id`, `ds`] and models. |\n",
       "| id_col | str | unique_id | Column that identifies each serie. |\n",
       "| time_col | str | ds | Column that identifies each timestep, its values can be timestamps or integers. |\n",
       "| target_col | str | y | Column that contains the target. |\n",
       "| unique_ids | Union | None | Time Series to plot.<br>If None, time series are selected randomly. |\n",
       "| plot_random | bool | True | Select time series to plot randomly. |\n",
       "| models | Optional | None | List of models to plot. |\n",
       "| level | Optional | None | List of prediction intervals to plot if paseed. |\n",
       "| max_insample_length | Optional | None | Max number of train/insample observations to be plotted. |\n",
       "| plot_anomalies | bool | False | Plot anomalies for each prediction interval. |\n",
       "| engine | str | matplotlib | Library used to plot. 'plotly', 'plotly-resampler' or 'matplotlib'. |\n",
       "| resampler_kwargs | Optional | None | Kwargs to be passed to plotly-resampler constructor.<br>For further custumization (\"show_dash\") call the method,<br>store the plotting object and add the extra arguments to<br>its `show_dash` method. |"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "#| echo: false\n",
    "show_doc(NixtlaClient.plot, name='NixtlaClient.plot', title_level=2)"
   ]
  },
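  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A hypothetical plotting sketch: `df` and `fcst_df` are placeholder names for a history DataFrame and a forecasts DataFrame in the `[unique_id, ds]` layout described above:\n",
    "\n",
    "```python\n",
    "client.plot(\n",
    "    df,\n",
    "    forecasts_df=fcst_df,\n",
    "    unique_ids=['series_1'],  # plot a specific series instead of a random one\n",
    "    level=[80],               # shade the 80% prediction interval\n",
    "    max_insample_length=90,   # show only the last 90 in-sample points\n",
    ")\n",
    "```"
   ]
  },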
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/markdown": [
       "---\n",
       "\n",
       "## NixtlaClient.forecast\n",
       "\n",
       ">      NixtlaClient.forecast (df:pandas.core.frame.DataFrame, h:int,\n",
       ">                             freq:Optional[str]=None, id_col:str='unique_id',\n",
       ">                             time_col:str='ds', target_col:str='y',\n",
       ">                             X_df:Optional[pandas.core.frame.DataFrame]=None,\n",
       ">                             level:Optional[List[Union[int,float]]]=None,\n",
       ">                             quantiles:Optional[List[float]]=None,\n",
       ">                             finetune_steps:int=0, finetune_loss:str='default',\n",
       ">                             clean_ex_first:bool=True,\n",
       ">                             validate_api_key:bool=False,\n",
       ">                             add_history:bool=False,\n",
       ">                             date_features:Union[bool,List[str]]=False, date_fe\n",
       ">                             atures_to_one_hot:Union[bool,List[str]]=True,\n",
       ">                             model:str='timegpt-1',\n",
       ">                             num_partitions:Optional[int]=None)\n",
       "\n",
       "Forecast your time series using TimeGPT.\n",
       "\n",
       "|    | **Type** | **Default** | **Details** |\n",
       "| -- | -------- | ----------- | ----------- |\n",
       "| df | DataFrame |  | The DataFrame on which the function will operate. Expected to contain at least the following columns:<br>- time_col:<br>    Column name in `df` that contains the time indices of the time series. This is typically a datetime<br>    column with regular intervals, e.g., hourly, daily, monthly data points.<br>- target_col:<br>    Column name in `df` that contains the target variable of the time series, i.e., the variable we<br>    wish to predict or analyze.<br>Additionally, you can pass multiple time series (stacked in the dataframe) considering an additional column:<br>- id_col:<br>    Column name in `df` that identifies unique time series. Each unique value in this column<br>    corresponds to a unique time series. |\n",
       "| h | int |  | Forecast horizon. |\n",
       "| freq | Optional | None | Frequency of the data. By default, the freq will be inferred automatically.<br>See [pandas' available frequencies](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases). |\n",
       "| id_col | str | unique_id | Column that identifies each serie. |\n",
       "| time_col | str | ds | Column that identifies each timestep, its values can be timestamps or integers. |\n",
       "| target_col | str | y | Column that contains the target. |\n",
       "| X_df | Optional | None | DataFrame with [`unique_id`, `ds`] columns and `df`'s future exogenous. |\n",
       "| level | Optional | None | Confidence levels between 0 and 100 for prediction intervals. |\n",
       "| quantiles | Optional | None | Quantiles to forecast, list between (0, 1).<br>`level` and `quantiles` should not be used simultaneously.<br>The output dataframe will have the quantile columns<br>formatted as TimeGPT-q-(100 * q) for each q.<br>100 * q represents percentiles but we choose this notation<br>to avoid having dots in column names. |\n",
       "| finetune_steps | int | 0 | Number of steps used to finetune learning TimeGPT in the<br>new data. |\n",
       "| finetune_loss | str | default | Loss function to use for finetuning. Options are: `default`, `mae`, `mse`, `rmse`, `mape`, and `smape`. |\n",
       "| clean_ex_first | bool | True | Clean exogenous signal before making forecasts<br>using TimeGPT. |\n",
       "| validate_api_key | bool | False | If True, validates api_key before<br>sending requests. |\n",
       "| add_history | bool | False | Return fitted values of the model. |\n",
       "| date_features | Union | False | Features computed from the dates.<br>Can be pandas date attributes or functions that will take the dates as input.<br>If True automatically adds most used date features for the<br>frequency of `df`. |\n",
       "| date_features_to_one_hot | Union | True | Apply one-hot encoding to these date features.<br>If `date_features=True`, then all date features are<br>one-hot encoded by default. |\n",
       "| model | str | timegpt-1 | Model to use as a string. Options are: `timegpt-1`, and `timegpt-1-long-horizon`.<br>We recommend using `timegpt-1-long-horizon` for forecasting<br>if you want to predict more than one seasonal<br>period given the frequency of your data. |\n",
       "| num_partitions | Optional | None | Number of partitions to use.<br>If None, the number of partitions will be equal<br>to the available parallel resources in distributed environments. |\n",
       "| **Returns** | **pandas.DataFrame** |  | **DataFrame with TimeGPT forecasts for point predictions and probabilistic<br>predictions (if level is not None).** |"
      ],
      "text/plain": [
       "---\n",
       "\n",
       "## NixtlaClient.forecast\n",
       "\n",
       ">      NixtlaClient.forecast (df:pandas.core.frame.DataFrame, h:int,\n",
       ">                             freq:Optional[str]=None, id_col:str='unique_id',\n",
       ">                             time_col:str='ds', target_col:str='y',\n",
       ">                             X_df:Optional[pandas.core.frame.DataFrame]=None,\n",
       ">                             level:Optional[List[Union[int,float]]]=None,\n",
       ">                             quantiles:Optional[List[float]]=None,\n",
       ">                             finetune_steps:int=0, finetune_loss:str='default',\n",
       ">                             clean_ex_first:bool=True,\n",
       ">                             validate_api_key:bool=False,\n",
       ">                             add_history:bool=False,\n",
       ">                             date_features:Union[bool,List[str]]=False, date_fe\n",
       ">                             atures_to_one_hot:Union[bool,List[str]]=True,\n",
       ">                             model:str='timegpt-1',\n",
       ">                             num_partitions:Optional[int]=None)\n",
       "\n",
       "Forecast your time series using TimeGPT.\n",
       "\n",
       "|    | **Type** | **Default** | **Details** |\n",
       "| -- | -------- | ----------- | ----------- |\n",
       "| df | DataFrame |  | The DataFrame on which the function will operate. Expected to contain at least the following columns:<br>- time_col:<br>    Column name in `df` that contains the time indices of the time series. This is typically a datetime<br>    column with regular intervals, e.g., hourly, daily, monthly data points.<br>- target_col:<br>    Column name in `df` that contains the target variable of the time series, i.e., the variable we<br>    wish to predict or analyze.<br>Additionally, you can pass multiple time series (stacked in the dataframe) considering an additional column:<br>- id_col:<br>    Column name in `df` that identifies unique time series. Each unique value in this column<br>    corresponds to a unique time series. |\n",
       "| h | int |  | Forecast horizon. |\n",
       "| freq | Optional | None | Frequency of the data. By default, the freq will be inferred automatically.<br>See [pandas' available frequencies](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases). |\n",
       "| id_col | str | unique_id | Column that identifies each serie. |\n",
       "| time_col | str | ds | Column that identifies each timestep, its values can be timestamps or integers. |\n",
       "| target_col | str | y | Column that contains the target. |\n",
       "| X_df | Optional | None | DataFrame with [`unique_id`, `ds`] columns and `df`'s future exogenous. |\n",
       "| level | Optional | None | Confidence levels between 0 and 100 for prediction intervals. |\n",
       "| quantiles | Optional | None | Quantiles to forecast, list between (0, 1).<br>`level` and `quantiles` should not be used simultaneously.<br>The output dataframe will have the quantile columns<br>formatted as TimeGPT-q-(100 * q) for each q.<br>100 * q represents percentiles but we choose this notation<br>to avoid having dots in column names. |\n",
       "| finetune_steps | int | 0 | Number of steps used to finetune learning TimeGPT in the<br>new data. |\n",
       "| finetune_loss | str | default | Loss function to use for finetuning. Options are: `default`, `mae`, `mse`, `rmse`, `mape`, and `smape`. |\n",
       "| clean_ex_first | bool | True | Clean exogenous signal before making forecasts<br>using TimeGPT. |\n",
       "| validate_api_key | bool | False | If True, validates api_key before<br>sending requests. |\n",
       "| add_history | bool | False | Return fitted values of the model. |\n",
       "| date_features | Union | False | Features computed from the dates.<br>Can be pandas date attributes or functions that will take the dates as input.<br>If True automatically adds most used date features for the<br>frequency of `df`. |\n",
       "| date_features_to_one_hot | Union | True | Apply one-hot encoding to these date features.<br>If `date_features=True`, then all date features are<br>one-hot encoded by default. |\n",
       "| model | str | timegpt-1 | Model to use as a string. Options are: `timegpt-1`, and `timegpt-1-long-horizon`.<br>We recommend using `timegpt-1-long-horizon` for forecasting<br>if you want to predict more than one seasonal<br>period given the frequency of your data. |\n",
       "| num_partitions | Optional | None | Number of partitions to use.<br>If None, the number of partitions will be equal<br>to the available parallel resources in distributed environments. |\n",
       "| **Returns** | **pandas.DataFrame** |  | **DataFrame with TimeGPT forecasts for point predictions and probabilistic<br>predictions (if level is not None).** |"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "#| echo: false\n",
    "show_doc(NixtlaClient.forecast, title_level=2)"
   ]
  },
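  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal forecasting sketch on a toy daily series (the data and the `client` variable are illustrative; the call performs a network request):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "df = pd.DataFrame({\n",
    "    'unique_id': 'series_1',\n",
    "    'ds': pd.date_range('2024-01-01', periods=60, freq='D'),\n",
    "    'y': range(60),\n",
    "})\n",
    "# Point forecasts plus 80% and 95% prediction intervals for the next 7 days\n",
    "fcst_df = client.forecast(df=df, h=7, level=[80, 95])\n",
    "```"
   ]
  },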
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/markdown": [
       "---\n",
       "\n",
       "## NixtlaClient.cross_validation\n",
       "\n",
       ">      NixtlaClient.cross_validation (df:pandas.core.frame.DataFrame, h:int,\n",
       ">                                     freq:Optional[str]=None,\n",
       ">                                     id_col:str='unique_id', time_col:str='ds',\n",
       ">                                     target_col:str='y', level:Optional[List[Un\n",
       ">                                     ion[int,float]]]=None,\n",
       ">                                     quantiles:Optional[List[float]]=None,\n",
       ">                                     validate_api_key:bool=False,\n",
       ">                                     n_windows:int=1,\n",
       ">                                     step_size:Optional[int]=None,\n",
       ">                                     finetune_steps:int=0,\n",
       ">                                     finetune_loss:str='default',\n",
       ">                                     clean_ex_first:bool=True,\n",
       ">                                     date_features:Union[bool,List[str]]=False,\n",
       ">                                     date_features_to_one_hot:Union[bool,List[s\n",
       ">                                     tr]]=True, model:str='timegpt-1',\n",
       ">                                     num_partitions:Optional[int]=None)\n",
       "\n",
       "Perform cross validation in your time series using TimeGPT.\n",
       "\n",
       "|    | **Type** | **Default** | **Details** |\n",
       "| -- | -------- | ----------- | ----------- |\n",
       "| df | DataFrame |  | The DataFrame on which the function will operate. Expected to contain at least the following columns:<br>- time_col:<br>    Column name in `df` that contains the time indices of the time series. This is typically a datetime<br>    column with regular intervals, e.g., hourly, daily, monthly data points.<br>- target_col:<br>    Column name in `df` that contains the target variable of the time series, i.e., the variable we<br>    wish to predict or analyze.<br>Additionally, you can pass multiple time series (stacked in the dataframe) considering an additional column:<br>- id_col:<br>    Column name in `df` that identifies unique time series. Each unique value in this column<br>    corresponds to a unique time series. |\n",
       "| h | int |  | Forecast horizon. |\n",
       "| freq | Optional | None | Frequency of the data. By default, the freq will be inferred automatically.<br>See [pandas' available frequencies](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases). |\n",
       "| id_col | str | unique_id | Column that identifies each serie. |\n",
       "| time_col | str | ds | Column that identifies each timestep, its values can be timestamps or integers. |\n",
       "| target_col | str | y | Column that contains the target. |\n",
       "| level | Optional | None | Confidence level between 0 and 100 for prediction intervals. |\n",
       "| quantiles | Optional | None | Quantiles to forecast, list between (0, 1).<br>`level` and `quantiles` should not be used simultaneously.<br>The output dataframe will have the quantile columns<br>formatted as TimeGPT-q-(100 * q) for each q.<br>100 * q represents percentiles but we choose this notation<br>to avoid having dots in column names.. |\n",
       "| validate_api_key | bool | False | If True, validates api_key before<br>sending requests. |\n",
       "| n_windows | int | 1 | Number of windows to evaluate. |\n",
       "| step_size | Optional | None | Step size between each cross validation window. If None it will be equal to `h`. |\n",
       "| finetune_steps | int | 0 | Number of steps used to finetune TimeGPT in the<br>new data. |\n",
       "| finetune_loss | str | default | Loss function to use for finetuning. Options are: `default`, `mae`, `mse`, `rmse`, `mape`, and `smape`. |\n",
       "| clean_ex_first | bool | True | Clean exogenous signal before making forecasts<br>using TimeGPT. |\n",
       "| date_features | Union | False | Features computed from the dates.<br>Can be pandas date attributes or functions that will take the dates as input.<br>If True automatically adds most used date features for the<br>frequency of `df`. |\n",
       "| date_features_to_one_hot | Union | True | Apply one-hot encoding to these date features.<br>If `date_features=True`, then all date features are<br>one-hot encoded by default. |\n",
       "| model | str | timegpt-1 | Model to use as a string. Options are: `timegpt-1`, and `timegpt-1-long-horizon`.<br>We recommend using `timegpt-1-long-horizon` for forecasting<br>if you want to predict more than one seasonal<br>period given the frequency of your data. |\n",
       "| num_partitions | Optional | None | Number of partitions to use.<br>If None, the number of partitions will be equal<br>to the available parallel resources in distributed environments. |\n",
       "| **Returns** | **pandas.DataFrame** |  | **DataFrame with cross validation forecasts.** |"
      ],
      "text/plain": [
       "---\n",
       "\n",
       "## NixtlaClient.cross_validation\n",
       "\n",
       ">      NixtlaClient.cross_validation (df:pandas.core.frame.DataFrame, h:int,\n",
       ">                                     freq:Optional[str]=None,\n",
       ">                                     id_col:str='unique_id', time_col:str='ds',\n",
       ">                                     target_col:str='y', level:Optional[List[Un\n",
       ">                                     ion[int,float]]]=None,\n",
       ">                                     quantiles:Optional[List[float]]=None,\n",
       ">                                     validate_api_key:bool=False,\n",
       ">                                     n_windows:int=1,\n",
       ">                                     step_size:Optional[int]=None,\n",
       ">                                     finetune_steps:int=0,\n",
       ">                                     finetune_loss:str='default',\n",
       ">                                     clean_ex_first:bool=True,\n",
       ">                                     date_features:Union[bool,List[str]]=False,\n",
       ">                                     date_features_to_one_hot:Union[bool,List[s\n",
       ">                                     tr]]=True, model:str='timegpt-1',\n",
       ">                                     num_partitions:Optional[int]=None)\n",
       "\n",
       "Perform cross validation in your time series using TimeGPT.\n",
       "\n",
       "|    | **Type** | **Default** | **Details** |\n",
       "| -- | -------- | ----------- | ----------- |\n",
       "| df | DataFrame |  | The DataFrame on which the function will operate. Expected to contain at least the following columns:<br>- time_col:<br>    Column name in `df` that contains the time indices of the time series. This is typically a datetime<br>    column with regular intervals, e.g., hourly, daily, monthly data points.<br>- target_col:<br>    Column name in `df` that contains the target variable of the time series, i.e., the variable we<br>    wish to predict or analyze.<br>Additionally, you can pass multiple time series (stacked in the dataframe) considering an additional column:<br>- id_col:<br>    Column name in `df` that identifies unique time series. Each unique value in this column<br>    corresponds to a unique time series. |\n",
       "| h | int |  | Forecast horizon. |\n",
       "| freq | Optional | None | Frequency of the data. By default, the freq will be inferred automatically.<br>See [pandas' available frequencies](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases). |\n",
       "| id_col | str | unique_id | Column that identifies each serie. |\n",
       "| time_col | str | ds | Column that identifies each timestep, its values can be timestamps or integers. |\n",
       "| target_col | str | y | Column that contains the target. |\n",
       "| level | Optional | None | Confidence level between 0 and 100 for prediction intervals. |\n",
       "| quantiles | Optional | None | Quantiles to forecast, list between (0, 1).<br>`level` and `quantiles` should not be used simultaneously.<br>The output dataframe will have the quantile columns<br>formatted as TimeGPT-q-(100 * q) for each q.<br>100 * q represents percentiles but we choose this notation<br>to avoid having dots in column names.. |\n",
       "| validate_api_key | bool | False | If True, validates api_key before<br>sending requests. |\n",
       "| n_windows | int | 1 | Number of windows to evaluate. |\n",
       "| step_size | Optional | None | Step size between each cross validation window. If None it will be equal to `h`. |\n",
       "| finetune_steps | int | 0 | Number of steps used to finetune TimeGPT in the<br>new data. |\n",
       "| finetune_loss | str | default | Loss function to use for finetuning. Options are: `default`, `mae`, `mse`, `rmse`, `mape`, and `smape`. |\n",
       "| clean_ex_first | bool | True | Clean exogenous signal before making forecasts<br>using TimeGPT. |\n",
       "| date_features | Union | False | Features computed from the dates.<br>Can be pandas date attributes or functions that will take the dates as input.<br>If True automatically adds most used date features for the<br>frequency of `df`. |\n",
       "| date_features_to_one_hot | Union | True | Apply one-hot encoding to these date features.<br>If `date_features=True`, then all date features are<br>one-hot encoded by default. |\n",
       "| model | str | timegpt-1 | Model to use as a string. Options are: `timegpt-1`, and `timegpt-1-long-horizon`.<br>We recommend using `timegpt-1-long-horizon` for forecasting<br>if you want to predict more than one seasonal<br>period given the frequency of your data. |\n",
       "| num_partitions | Optional | None | Number of partitions to use.<br>If None, the number of partitions will be equal<br>to the available parallel resources in distributed environments. |\n",
       "| **Returns** | **pandas.DataFrame** |  | **DataFrame with cross validation forecasts.** |"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "#| echo: false\n",
    "show_doc(NixtlaClient.cross_validation, title_level=2)"
   ]
  },
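  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal, illustrative sketch of calling `cross_validation` (not part of the generated reference). It assumes a valid `NIXTLA_API_KEY` and network access; the series ids below are hypothetical."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "import pandas as pd\n",
    "\n",
    "# Two stacked daily series in long format, using the default column\n",
    "# names: id_col='unique_id', time_col='ds', target_col='y'.\n",
    "cv_input = pd.concat(\n",
    "    [\n",
    "        pd.DataFrame(\n",
    "            {\n",
    "                \"unique_id\": uid,\n",
    "                \"ds\": pd.date_range(\"2023-01-01\", periods=60, freq=\"D\"),\n",
    "                \"y\": range(60),\n",
    "            }\n",
    "        )\n",
    "        for uid in (\"series_a\", \"series_b\")\n",
    "    ],\n",
    "    ignore_index=True,\n",
    ")\n",
    "\n",
    "# Only call the API when an api_key is configured.\n",
    "if os.environ.get(\"NIXTLA_API_KEY\"):\n",
    "    from nixtla import NixtlaClient\n",
    "\n",
    "    client = NixtlaClient()  # api_key inferred from NIXTLA_API_KEY\n",
    "    cv_df = client.cross_validation(cv_input, h=7, n_windows=2, freq=\"D\")"
   ]
  },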
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/markdown": [
       "---\n",
       "\n",
       "## NixtlaClient.detect_anomalies\n",
       "\n",
       ">      NixtlaClient.detect_anomalies (df:pandas.core.frame.DataFrame,\n",
       ">                                     freq:Optional[str]=None,\n",
       ">                                     id_col:str='unique_id', time_col:str='ds',\n",
       ">                                     target_col:str='y',\n",
       ">                                     level:Union[int,float]=99,\n",
       ">                                     clean_ex_first:bool=True,\n",
       ">                                     validate_api_key:bool=False,\n",
       ">                                     date_features:Union[bool,List[str]]=False,\n",
       ">                                     date_features_to_one_hot:Union[bool,List[s\n",
       ">                                     tr]]=True, model:str='timegpt-1',\n",
       ">                                     num_partitions:Optional[int]=None)\n",
       "\n",
       "Detect anomalies in your time series using TimeGPT.\n",
       "\n",
       "|    | **Type** | **Default** | **Details** |\n",
       "| -- | -------- | ----------- | ----------- |\n",
       "| df | DataFrame |  | The DataFrame on which the function will operate. Expected to contain at least the following columns:<br>- time_col:<br>    Column name in `df` that contains the time indices of the time series. This is typically a datetime<br>    column with regular intervals, e.g., hourly, daily, monthly data points.<br>- target_col:<br>    Column name in `df` that contains the target variable of the time series, i.e., the variable we<br>    wish to predict or analyze.<br>Additionally, you can pass multiple time series (stacked in the dataframe) considering an additional column:<br>- id_col:<br>    Column name in `df` that identifies unique time series. Each unique value in this column<br>    corresponds to a unique time series. |\n",
       "| freq | Optional | None | Frequency of the data. By default, the freq will be inferred automatically.<br>See [pandas' available frequencies](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases). |\n",
       "| id_col | str | unique_id | Column that identifies each serie. |\n",
       "| time_col | str | ds | Column that identifies each timestep, its values can be timestamps or integers. |\n",
       "| target_col | str | y | Column that contains the target. |\n",
       "| level | Union | 99 | Confidence level between 0 and 100 for detecting the anomalies. |\n",
       "| clean_ex_first | bool | True | Clean exogenous signal before making forecasts<br>using TimeGPT. |\n",
       "| validate_api_key | bool | False | If True, validates api_key before<br>sending requests. |\n",
       "| date_features | Union | False | Features computed from the dates.<br>Can be pandas date attributes or functions that will take the dates as input.<br>If True automatically adds most used date features for the<br>frequency of `df`. |\n",
       "| date_features_to_one_hot | Union | True | Apply one-hot encoding to these date features.<br>If `date_features=True`, then all date features are<br>one-hot encoded by default. |\n",
       "| model | str | timegpt-1 | Model to use as a string. Options are: `timegpt-1`, and `timegpt-1-long-horizon`.<br>We recommend using `timegpt-1-long-horizon` for forecasting<br>if you want to predict more than one seasonal<br>period given the frequency of your data. |\n",
       "| num_partitions | Optional | None | Number of partitions to use.<br>If None, the number of partitions will be equal<br>to the available parallel resources in distributed environments. |\n",
       "| **Returns** | **pandas.DataFrame** |  | **DataFrame with anomalies flagged with 1 detected by TimeGPT.** |"
      ],
      "text/plain": [
       "---\n",
       "\n",
       "## NixtlaClient.detect_anomalies\n",
       "\n",
       ">      NixtlaClient.detect_anomalies (df:pandas.core.frame.DataFrame,\n",
       ">                                     freq:Optional[str]=None,\n",
       ">                                     id_col:str='unique_id', time_col:str='ds',\n",
       ">                                     target_col:str='y',\n",
       ">                                     level:Union[int,float]=99,\n",
       ">                                     clean_ex_first:bool=True,\n",
       ">                                     validate_api_key:bool=False,\n",
       ">                                     date_features:Union[bool,List[str]]=False,\n",
       ">                                     date_features_to_one_hot:Union[bool,List[s\n",
       ">                                     tr]]=True, model:str='timegpt-1',\n",
       ">                                     num_partitions:Optional[int]=None)\n",
       "\n",
       "Detect anomalies in your time series using TimeGPT.\n",
       "\n",
       "|    | **Type** | **Default** | **Details** |\n",
       "| -- | -------- | ----------- | ----------- |\n",
       "| df | DataFrame |  | The DataFrame on which the function will operate. Expected to contain at least the following columns:<br>- time_col:<br>    Column name in `df` that contains the time indices of the time series. This is typically a datetime<br>    column with regular intervals, e.g., hourly, daily, monthly data points.<br>- target_col:<br>    Column name in `df` that contains the target variable of the time series, i.e., the variable we<br>    wish to predict or analyze.<br>Additionally, you can pass multiple time series (stacked in the dataframe) considering an additional column:<br>- id_col:<br>    Column name in `df` that identifies unique time series. Each unique value in this column<br>    corresponds to a unique time series. |\n",
       "| freq | Optional | None | Frequency of the data. By default, the freq will be inferred automatically.<br>See [pandas' available frequencies](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases). |\n",
       "| id_col | str | unique_id | Column that identifies each serie. |\n",
       "| time_col | str | ds | Column that identifies each timestep, its values can be timestamps or integers. |\n",
       "| target_col | str | y | Column that contains the target. |\n",
       "| level | Union | 99 | Confidence level between 0 and 100 for detecting the anomalies. |\n",
       "| clean_ex_first | bool | True | Clean exogenous signal before making forecasts<br>using TimeGPT. |\n",
       "| validate_api_key | bool | False | If True, validates api_key before<br>sending requests. |\n",
       "| date_features | Union | False | Features computed from the dates.<br>Can be pandas date attributes or functions that will take the dates as input.<br>If True automatically adds most used date features for the<br>frequency of `df`. |\n",
       "| date_features_to_one_hot | Union | True | Apply one-hot encoding to these date features.<br>If `date_features=True`, then all date features are<br>one-hot encoded by default. |\n",
       "| model | str | timegpt-1 | Model to use as a string. Options are: `timegpt-1`, and `timegpt-1-long-horizon`.<br>We recommend using `timegpt-1-long-horizon` for forecasting<br>if you want to predict more than one seasonal<br>period given the frequency of your data. |\n",
       "| num_partitions | Optional | None | Number of partitions to use.<br>If None, the number of partitions will be equal<br>to the available parallel resources in distributed environments. |\n",
       "| **Returns** | **pandas.DataFrame** |  | **DataFrame with anomalies flagged with 1 detected by TimeGPT.** |"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "#| echo: false\n",
    "show_doc(NixtlaClient.detect_anomalies, title_level=2)"
   ]
  }
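  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal, illustrative sketch of calling `detect_anomalies` (not part of the generated reference). It assumes a valid `NIXTLA_API_KEY` and network access; the `sensor_1` id and the injected spike are hypothetical."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "import pandas as pd\n",
    "\n",
    "# One daily series with an obvious spike injected at position 45.\n",
    "series = pd.DataFrame(\n",
    "    {\n",
    "        \"unique_id\": \"sensor_1\",\n",
    "        \"ds\": pd.date_range(\"2023-01-01\", periods=90, freq=\"D\"),\n",
    "        \"y\": [float(i % 7) for i in range(90)],\n",
    "    }\n",
    ")\n",
    "series.loc[45, \"y\"] = 100.0\n",
    "\n",
    "# Only call the API when an api_key is configured.\n",
    "if os.environ.get(\"NIXTLA_API_KEY\"):\n",
    "    from nixtla import NixtlaClient\n",
    "\n",
    "    client = NixtlaClient()  # api_key inferred from NIXTLA_API_KEY\n",
    "    anomalies_df = client.detect_anomalies(series, level=99, freq=\"D\")\n",
    "    # Rows where the anomaly flag equals 1 are the points TimeGPT flagged."
   ]
  }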
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "python3",
   "language": "python",
   "name": "python3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
