hetfit / module_name.md

:orange[PINN]

PINN.pinns

PINNd_p Objects

class PINNd_p(nn.Module)

$d \mapsto P$

PINNhd_ma Objects

class PINNhd_ma(nn.Module)

$h, d \mapsto m_a$

PINNT_ma Objects

class PINNT_ma(nn.Module)

$m_a, U \mapsto T$
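These mappings (diameter to power, geometry to mass flow, mass flow and discharge voltage to thrust) are all small single-output regressions. A minimal sketch of what such a module might look like; the layer sizes and activation here are illustrative assumptions, not the package's actual implementation:

```python
import torch
import torch.nn as nn

class PINNSketch(nn.Module):
    """Hypothetical stand-in for the PINN modules above, e.g. d -> P."""

    def __init__(self, in_features: int = 1, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden),
            nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_features) inputs, e.g. the diameter d for PINNd_p
        return self.net(x)

model = PINNSketch()
P = model(torch.rand(8, 1))  # (8, 1) predicted targets
```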


:orange[utils]

utils.test

utils.dataset_loader

get_dataset

def get_dataset(raw: bool = False,
                sample_size: int = 1000,
                name: str = 'dataset.pkl',
                source: str = 'dataset.csv',
                boundary_conditions: list = None) -> _pickle

Gets the augmented dataset.

Arguments:

  • raw bool, optional - whether to use the raw source data instead of the augmented set. Defaults to False.
  • sample_size int, optional - sample size of the augmented set. Defaults to 1000.
  • name str, optional - name of the desired dataset. Defaults to 'dataset.pkl'.
  • source str, optional - source data file. Defaults to 'dataset.csv'.
  • boundary_conditions list, optional - boundary values y1, y2, x1, x2. Defaults to None.

Returns:

  • _pickle - pickle buffer
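A usage sketch with the documented defaults (whether the returned object is a DataFrame or some other structure depends on how the pickle was written):

```python
from utils.dataset_loader import get_dataset

# Augmented dataset from the default files; pass raw=True to work
# from the source CSV instead of the augmented pickle.
data = get_dataset(sample_size=1000, name='dataset.pkl', source='dataset.csv')
```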

utils.ndgan

DCGAN Objects

class DCGAN()

define_discriminator

def define_discriminator(inputs=8)

Returns the compiled discriminator model.

generate_latent_points

def generate_latent_points(latent_dim, n)

Generates points in the latent space as input for the generator.

define_gan

def define_gan(generator, discriminator)

Defines the combined generator-discriminator (GAN) model.

summarize_performance

def summarize_performance(epoch, generator, discriminator, latent_dim, n=200)

Evaluates the discriminator and plots real and fake samples.

train_gan

def train_gan(g_model,
              d_model,
              gan_model,
              latent_dim,
              num_epochs=2500,
              num_eval=2500,
              batch_size=2)

Trains the GAN model.
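Wiring these methods together might look like the sketch below. The class constructor arguments and a generator-building counterpart to define_discriminator are not shown in these docs, so the bare DCGAN() call and define_generator are assumptions:

```python
from utils.ndgan import DCGAN

latent_dim = 8
gan = DCGAN()  # constructor arguments, if any, are not documented here

d_model = gan.define_discriminator(inputs=8)
g_model = gan.define_generator(latent_dim)  # hypothetical; not shown in these docs
gan_model = gan.define_gan(g_model, d_model)

gan.train_gan(g_model, d_model, gan_model, latent_dim,
              num_epochs=2500, num_eval=2500, batch_size=2)
```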

utils.data_augmentation

dataset Objects

class dataset()

Creates a dataset from the input source.

__init__

def __init__(number_samples: int,
             name: str,
             source: str,
             boundary_conditions: list = None)

Initializes the dataset.

Arguments:

  • number_samples int - number of samples to generate.
  • name str - name of the dataset file.
  • source str - source data file.
  • boundary_conditions list, optional - boundary values y1, y2, x1, x2. Defaults to None.
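A usage sketch with the documented signature; the boundary values below are illustrative placeholders:

```python
from utils.data_augmentation import dataset

ds = dataset(number_samples=1000,
             name='dataset.pkl',
             source='dataset.csv',
             boundary_conditions=[0.0, 1.0, 0.0, 1.0])  # y1, y2, x1, x2
```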

:orange[nets]

nets.envs

SCI Objects

class SCI()

data_flow

def data_flow(columns_idx: tuple = (1, 3, 3, 5),
              idx: tuple = None,
              split_idx: int = 800) -> torch.utils.data.DataLoader

Data preparation pipeline.

Arguments:

  • columns_idx tuple, optional - Columns to be selected (sliced 1:2, 3:4) for feature fitting. Defaults to (1, 3, 3, 5).
  • idx tuple, optional - 2 or 3 indexes to be selected for feature fitting. Defaults to None. Use either idx or columns_idx (idx for F: R -> R, columns_idx for F: R -> R^2).
  • split_idx int - index at which to split the data for training. Defaults to 800.

Returns:

  • torch.utils.data.DataLoader - Torch native dataloader
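A usage sketch; SCI's constructor arguments are not shown in these docs, so the bare call is an assumption:

```python
from nets.envs import SCI

env = SCI()  # constructor arguments, if any, are not documented here

# Slice column pairs for F: R -> R^2 fitting ...
loader = env.data_flow(columns_idx=(1, 3, 3, 5), split_idx=800)

# ... or pick explicit feature indexes for F: R -> R fitting.
loader = env.data_flow(idx=(3, 1))
```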

init_seed

def init_seed(seed)

Initializes the seed for torch. Optional.

compile

def compile(columns: tuple = None,
            idx: tuple = None,
            optim: torch.optim = torch.optim.AdamW,
            loss: nn = nn.L1Loss,
            model: nn.Module = dmodel,
            custom: bool = False) -> None

Builds the model, loss, and optimizer. Defaults are provided.

Arguments:

  • columns tuple, optional - Columns to be selected for feature fitting. Defaults to (1, 3, 3, 5).
  • idx tuple, optional - indexes to be selected for feature fitting. Defaults to None.
  • optim torch.optim, optional - torch optimizer. Defaults to torch.optim.AdamW.
  • loss nn, optional - torch loss function. Defaults to nn.L1Loss.
  • model nn.Module, optional - model to compile. Defaults to dmodel.
  • custom bool, optional - whether a custom model is supplied. Defaults to False.

train

def train(epochs: int = 10) -> None

Trains the model. If the model is an sklearn instance, its .fit() method is used.

inference

def inference(X: tensor, model_name: str = None) -> np.ndarray

Runs inference with the (pre-)trained model.

Arguments:

  • X tensor - input data within the training domain.
  • model_name str, optional - name of the model to use. Defaults to None.

Returns:

  • np.ndarray - predictions
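The documented workflow chains these calls; a sketch under the same assumption about the constructor:

```python
import torch
from nets.envs import SCI

env = SCI()              # constructor arguments not documented here
env.init_seed(42)        # optional, for reproducibility
env.compile(idx=(3, 1))  # defaults: dmodel, AdamW, L1 loss
env.train(epochs=100)

preds = env.inference(torch.rand(4, 1))  # np.ndarray of predictions
```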

RCI Objects

class RCI(SCI)

data_flow

def data_flow(columns_idx: tuple = (1, 3, 3, 5),
              idx: tuple = None,
              split_idx: int = 800) -> torch.utils.data.DataLoader

Data preparation pipeline.

Arguments:

  • columns_idx tuple, optional - Columns to be selected (sliced 1:2, 3:4) for feature fitting. Defaults to (1, 3, 3, 5).
  • idx tuple, optional - 2 or 3 indexes to be selected for feature fitting. Defaults to None. Use either idx or columns_idx (idx for F: R -> R, columns_idx for F: R -> R^2).
  • split_idx int - index at which to split the data for training. Defaults to 800.

Returns:

  • torch.utils.data.DataLoader - Torch native dataloader

compile

def compile(columns: tuple = None,
            idx: tuple = (3, 1),
            optim: torch.optim = torch.optim.AdamW,
            loss: nn = nn.L1Loss,
            model: nn.Module = PINNd_p,
            lr: float = 0.001) -> None

Builds the model, loss, and optimizer. Defaults are provided.

Arguments:

  • columns tuple, optional - Columns to be selected for feature fitting. Defaults to None.
  • idx tuple, optional - indexes to be selected. Defaults to (3, 1).
  • optim torch.optim, optional - torch optimizer. Defaults to torch.optim.AdamW.
  • loss nn, optional - torch loss function. Defaults to nn.L1Loss.
  • model nn.Module, optional - model to compile. Defaults to PINNd_p.
  • lr float, optional - learning rate. Defaults to 0.001.
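RCI mirrors the SCI workflow but defaults to the physics-informed PINNd_p model; a sketch under the same constructor assumption:

```python
from nets.envs import RCI

env = RCI()                       # constructor arguments not documented here
env.compile(idx=(3, 1), lr=1e-3)  # physics-informed PINNd_p by default
env.train(epochs=100)             # train() is inherited from SCI
```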

nets.dense

Net Objects

class Net(nn.Module)

Four-layer model with different activations and neuron counts per layer.

__init__

def __init__(input_dim: int = 2, hidden_dim: int = 200)

Init

Arguments:

  • input_dim int, optional - Defaults to 2.
  • hidden_dim int, optional - Defaults to 200.
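A minimal instantiation with the documented defaults; the output dimensionality is not stated in these docs, so no shape is claimed for the result:

```python
import torch
from nets.dense import Net

net = Net(input_dim=2, hidden_dim=200)
out = net(torch.rand(16, 2))  # forward pass on a batch of 16 two-feature samples
```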

nets.design

B_field_norm

def B_field_norm(Bmax, L, k=16, plot=True)

Returns the vector $B_z$.

Arguments:

  • Bmax any - maximum magnetic field in the thruster.
  • L any - length over which the field profile is evaluated.
  • k int, optional - magnetic field profile number. Defaults to 16.
  • plot bool, optional - whether to plot the profile. Defaults to True.
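The docstring gives only the signature, not the functional form. Purely as an illustrative stand-in, a bell-shaped axial profile that peaks near the channel exit (the exponent and peak location are assumptions):

```python
import numpy as np
import matplotlib.pyplot as plt

def B_field_norm_sketch(Bmax, L, k=16, plot=True):
    """Illustrative stand-in for B_field_norm; mirrors the signature only."""
    z = np.linspace(0.0, L, 200)
    B = Bmax * np.exp(-k * (z / L - 1.0) ** 2)  # assumed: peak at the exit z = L
    if plot:
        plt.plot(z, B)
        plt.xlabel('z')
        plt.ylabel('B_z')
        plt.show()
    return B
```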

nets.deep_dense

dmodel Objects

class dmodel(nn.Module)

Four-layer Torch model with ReLU activations; hidden layers share the same size.

__init__

def __init__(in_features=1, hidden_features=200, out_features=1)

Init

Arguments:

  • in_features int, optional - Input features. Defaults to 1.
  • hidden_features int, optional - Hidden dims. Defaults to 200.
  • out_features int, optional - Output dims. Defaults to 1.
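And a matching instantiation with the documented defaults:

```python
import torch
from nets.deep_dense import dmodel

m = dmodel(in_features=1, hidden_features=200, out_features=1)
y = m(torch.rand(32, 1))  # (32, 1) output for a batch of scalar inputs
```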