Download the Fashion MNIST dataset

```python
fashion_mnist = tf.keras.datasets.fashion_mnist

(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

# Adding a dimension to the array -> new shape == (28, 28, 1)
# We are doing this because the first layer in our model is a convolutional
# layer and it requires a 4D input (batch_size,...
```
site/zh-cn/tutorials/distribute/custom_training.ipynb
tensorflow/docs-l10n
apache-2.0
Create a strategy to distribute the variables and the graph

How does the `tf.distribute.MirroredStrategy` strategy work?

- All of the variables and the model graph are replicated on the replicas.
- Input is evenly distributed across the replicas.
- Each replica calculates the loss and gradients for the input it received.
- The gradients are synced across all the replicas by summing them.
- After the sync, the same update is made to the copies of the variables on each replica.

Note: You can put all of the code below inside a single cell; it is split into several code cells here for illustration purposes.

```python
# If the list of devices is not specified in the
# `tf.distribute.MirroredStrategy` constructor, it will be auto-detected.
strategy = tf.distribute.MirroredStrategy()

print('Number of devices: {}'.format(strategy.num_replicas_in_sync))
```
Set up the input pipeline

Export the graph and the variables to the platform-agnostic SavedModel format. After your model is saved, you can load it with or without the scope.

```python
BUFFER_SIZE = len(train_images)

BATCH_SIZE_PER_REPLICA = 64
GLOBAL_BATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync

EPOCHS = 10
```
Create the datasets and distribute them:

```python
train_dataset = tf.data.Dataset.from_tensor_slices((train_images, train_labels)).shuffle(BUFFER_SIZE).batch(GLOBAL_BATCH_SIZE)
test_dataset = tf.data.Dataset.from_tensor_slices((test_images, test_labels)).batch(GLOBAL_BATCH_SIZE)

train_dist_dataset = strategy.experimental_distribute_dataset(train_dataset)
test_dist_...
```
Create the model

Create a model using `tf.keras.Sequential`. You can also use the Model Subclassing API to do this.

```python
def create_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation='relu'),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation='relu'),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation='rel...
```
Define the loss function

Normally, on a single machine with one GPU/CPU, the loss is divided by the number of examples in the batch of input. So, how should the loss be calculated when using a `tf.distribute.Strategy`?

For example, say you have 4 GPUs and a batch size of 64. One batch of input is distributed across the replicas (4 GPUs), with each replica getting an input of size 16. The model on each replica does a forward pass with its respective input and calculates the loss. Now, instead of dividing the loss by the number of examples in its respective input (BATCH_SIZE_PER_REPLICA = 16), the loss should be divided by the GLOBAL_BATCH_SIZE (64).

Why do this? It is needed because after the gradients are calculated on each replica, they are synced across the replicas by summing...

```python
with strategy.scope():
    # Set reduction to `none` so we can do the reduction afterwards and divide by
    # global batch size.
    loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
        from_logits=True,
        reduction=tf.keras.losses.Reduction.NONE)

    def compute_loss(labels, predictions):
        per_example_lo...
```
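The scaling rule described above (divide by the global batch size, then sum across replicas) can be checked with plain numpy; the replica count and batch sizes below are the ones from the example, and the per-example losses are random stand-ins:

```python
import numpy as np

GLOBAL_BATCH_SIZE = 64
NUM_REPLICAS = 4
PER_REPLICA = GLOBAL_BATCH_SIZE // NUM_REPLICAS  # 16, as in the text

rng = np.random.default_rng(0)
per_example_losses = rng.random(GLOBAL_BATCH_SIZE)

# Single-device baseline: plain mean over the whole batch.
single_device = per_example_losses.mean()

# Distributed: each replica sums its 16 per-example losses and divides
# by the GLOBAL batch size; summing across replicas recovers the mean.
shards = per_example_losses.reshape(NUM_REPLICAS, PER_REPLICA)
per_replica = shards.sum(axis=1) / GLOBAL_BATCH_SIZE
distributed = per_replica.sum()

assert np.isclose(single_device, distributed)
```

Dividing by BATCH_SIZE_PER_REPLICA instead would overcount the loss by a factor equal to the number of replicas.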
Define the metrics to track loss and accuracy

These metrics track the test loss and the training and test accuracy. You can use `.result()` to get the accumulated statistics at any time.

```python
with strategy.scope():
    test_loss = tf.keras.metrics.Mean(name='test_loss')

    train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
        name='train_accuracy')
    test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
        name='test_accuracy')
```
Training loop

```python
# model, optimizer, and checkpoint must be created under `strategy.scope`.
with strategy.scope():
    model = create_model()

    optimizer = tf.keras.optimizers.Adam()

    checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=model)

    def train_step(inputs):
        images, labels = inputs

        with tf.GradientTape() as tape:...
```
Things to note in the example above:

- We iterate over train_dist_dataset and test_dist_dataset using a `for x in ...` construct.
- The scaled loss is the return value of distributed_train_step. This value is aggregated across replicas with `tf.distribute.Strategy.reduce`, and then across batches by summing those per-batch results.
- `tf.keras.Metrics` should be updated inside train_step and test_step, which are executed by `tf.distribute.Strategy.experimental_run_v2`.

Restore the latest checkpoint and test

Use tf.distribute....

```python
eval_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
    name='eval_accuracy')

new_model = create_model()
new_optimizer = tf.keras.optimizers.Adam()

test_dataset = tf.data.Dataset.from_tensor_slices((test_images, test_labels)).batch(GLOBAL_BATCH_SIZE)

@tf.function
def eval_step(images, labels):
    prediction...
```
Alternate ways of iterating over a dataset

Using an iterator: if you want to iterate over the dataset for a given number of steps rather than through the entire dataset, you can create an iterator with `iter` and explicitly call `next` on it. You can choose to iterate over the dataset both inside and outside the tf.function. Here is a small snippet demonstrating iteration outside the tf.function using an iterator.

```python
for _ in range(EPOCHS):
    total_loss = 0.0
    num_batches = 0
    train_iter = iter(train_dist_dataset)

    for _ in range(10):
        total_loss += distributed_train_step(next(train_iter))
        num_batches += 1
    average_train_loss = total_loss / num_batches

    template = ("Epoch {}, Loss: {}, Accuracy: {}")
    print (template...
```
Iterating inside a tf.function

You can also iterate over the entire input train_dist_dataset inside a tf.function using a `for x in ...` construct, or by creating an iterator as we did above. The example below demonstrates wrapping one epoch of training in a tf.function and iterating over train_dist_dataset inside it.

```python
@tf.function
def distributed_train_epoch(dataset):
    total_loss = 0.0
    num_batches = 0
    for x in dataset:
        per_replica_losses = strategy.run(train_step, args=(x,))
        total_loss += strategy.reduce(
            tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)
        num_batches += 1
    return total_loss / tf.cast(...
```
plot_images() is used to plot several images in the same figure. It supports many configurations and has many options available to customize the resulting output. The function returns a list of matplotlib axes, which can be used to further customize the figure. Some examples are given below. Default usage A common usag...
```python
import scipy.ndimage

image = hs.signals.Image(np.random.random((2, 3, 512, 512)))
for i in range(2):
    for j in range(3):
        image.data[i,j,:] = scipy.misc.ascent()*(i+0.5+j)
axes = image.axes_manager
axes[2].name = "x"
axes[3].name = "y"
axes[2].units = "nm"
axes[3].units = "nm"
image.metadata.Gen...
```
hyperspy/tests/drawing/test_plot_image.ipynb
to266/hyperspy
gpl-3.0
Specified labels By default, plot_images() will attempt to auto-label the images based on the Signal titles. The labels (and title) can be customized with the label and suptitle arguments. In this example, the axes labels and ticks are also disabled with axes_decor:
```python
import scipy.ndimage

image = hs.signals.Image(np.random.random((2, 3, 512, 512)))
for i in range(2):
    for j in range(3):
        image.data[i,j,:] = scipy.misc.ascent()*(i+0.5+j)
axes = image.axes_manager
axes[2].name = "x"
axes[3].name = "y"
axes[2].units = "nm"
axes[3].units = "nm"
image.metadata.Gen...
```
List of images plot_images() can also be used to easily plot a list of Images, comparing different Signals, including RGB images. This example also demonstrates how to wrap labels using labelwrap (for preventing overlap) and using a single colorbar for all the Images, as opposed to multiple individual ones:
```python
import scipy.ndimage

# load red channel of raccoon as an image
image0 = hs.signals.Image(scipy.misc.ascent()[:,:,0])
image0.metadata.General.title = 'Rocky Raccoon - R'
axes0 = image0.axes_manager
axes0[0].name = "x"
axes0[1].name = "y"
axes0[0].units = "mm"
axes0[1].units = "mm"

# load lena into 2x3 hyperimage
image...
```
Real-world use Another example for this function is plotting EDS line intensities. Using a spectrum image with EDS data, one can use the following commands to get a representative figure of the line intensities. This example also demonstrates changing the colormap (with cmap), adding scalebars to the plots (with scaleb...
```python
from urllib import urlretrieve

url = 'http://cook.msm.cam.ac.uk//~hyperspy//EDS_tutorial//'
urlretrieve(url + 'core_shell.hdf5', 'core_shell.hdf5')
si_EDS = hs.load("core_shell.hdf5")
im = si_EDS.get_lines_intensity()
hs.plot.plot_images(
    im, tight_layout=True, cmap='RdYlBu_r', axes_decor='off',
    colorbar='sing...
```
Predicting fuel efficiency: regression

```python
# Install the seaborn package for drawing the scatterplot matrix
!pip install seaborn

import pathlib

import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

print(tf.__version__)
```
site/ko/tutorials/keras/regression.ipynb
tensorflow/docs-l10n
apache-2.0
The Auto MPG dataset

The dataset is available from the UCI Machine Learning Repository.

Get the data. First, download the dataset.

```python
dataset_path = keras.utils.get_file("auto-mpg.data", "http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data")
dataset_path
```
Read the data using pandas.

```python
column_names = ['MPG','Cylinders','Displacement','Horsepower','Weight',
                'Acceleration', 'Model Year', 'Origin']
raw_dataset = pd.read_csv(dataset_path, names=column_names,
                          na_values = "?", comment='\t',
                          sep=" ", skipinitialspace=True)

dataset = raw_dataset.co...
```
Clean the data

The dataset contains some missing values.

```python
dataset.isna().sum()
```
To keep this example simple, drop the rows with missing values.

```python
dataset = dataset.dropna()
```
"Origin" 열은 수치형이 아니고 범주형이므로 원-핫 인코딩(one-hot encoding)으로 변환하겠습니다:
origin = dataset.pop('Origin') dataset['USA'] = (origin == 1)*1.0 dataset['Europe'] = (origin == 2)*1.0 dataset['Japan'] = (origin == 3)*1.0 dataset.tail()
site/ko/tutorials/keras/regression.ipynb
tensorflow/docs-l10n
apache-2.0
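The same one-hot encoding can also be done in a single call with `pd.get_dummies`; a small sketch on a toy stand-in for the Origin column (the mapping 1=USA, 2=Europe, 3=Japan follows the tutorial):

```python
import pandas as pd

# Toy data, not the real Auto MPG column
df = pd.DataFrame({'Origin': [1, 3, 2, 1]})

origin = df.pop('Origin')
dummies = pd.get_dummies(origin.map({1: 'USA', 2: 'Europe', 3: 'Japan'}))
df = df.join(dummies.astype(float))
# get_dummies sorts the new columns alphabetically: Europe, Japan, USA
```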
Split the data into a training set and a test set

Now split the dataset into a training set and a test set. The test set is used for the final evaluation of the model.

```python
train_dataset = dataset.sample(frac=0.8,random_state=0)
test_dataset = dataset.drop(train_dataset.index)
```
Inspect the data

Select a few columns from the training set and build a scatterplot matrix to take a quick look.

```python
sns.pairplot(train_dataset[["MPG", "Cylinders", "Displacement", "Weight"]], diag_kind="kde")
```
Also check the overall statistics:

```python
train_stats = train_dataset.describe()
train_stats.pop("MPG")
train_stats = train_stats.transpose()
train_stats
```
Split features from labels

Separate the target value, or "label", from the features. This label is the value the model will be trained to predict.

```python
train_labels = train_dataset.pop('MPG')
test_labels = test_dataset.pop('MPG')
```
Normalize the data

Look again at the train_stats block above and note how different the range of each feature is. It is good practice to normalize features that use different scales and ranges. Although the model might converge without feature normalization, it makes training harder, and it makes the resulting model dependent on the units of the inputs.

Note: the statistics are intentionally generated from the training set only. The same statistics are then used to normalize the test set as well, so that the test set is projected into the same distribution the model was trained on.

```python
def norm(x):
    return (x - train_stats['mean']) / train_stats['std']

normed_train_data = norm(train_dataset)
normed_test_data = norm(test_dataset)
```
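The normalization rule in miniature with numpy (toy numbers, not the Auto MPG statistics): the mean and standard deviation come from the training split only and are reused unchanged on the test split.

```python
import numpy as np

train = np.array([1.0, 2.0, 3.0, 4.0])
test = np.array([2.0, 6.0])

mu, sd = train.mean(), train.std(ddof=1)  # statistics from the TRAINING set only

normed_train = (train - mu) / sd
normed_test = (test - mu) / sd            # the SAME statistics, reused for test
```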
We will use this normalized data to train the model.

Caution: the statistics used to normalize the inputs here (the mean and standard deviation) must be applied to any other data fed to the model, along with the one-hot encoding. That includes the test set as well as live data when the model is used in production.

The model

Build the model. Let's build our model. Here we'll use a Sequential model with two densely connected hidden layers, and an output layer that returns a single continuous value. The model-construction steps are wrapped in a build_model function so that a second model can be created later...

```python
def build_model():
    model = keras.Sequential([
        layers.Dense(64, activation='relu', input_shape=[len(train_dataset.keys())]),
        layers.Dense(64, activation='relu'),
        layers.Dense(1)
    ])

    optimizer = tf.keras.optimizers.RMSprop(0.001)

    model.compile(loss='mse',
                  optimizer=optimizer,
                  ...
```
Inspect the model

Use the `.summary` method to print a simple description of the model.

```python
model.summary()
```
Now try out the model. Take a batch of 10 examples from the training set and call `model.predict` on it.

```python
example_batch = normed_train_data[:10]
example_result = model.predict(example_batch)
example_result
```
It seems to be working: the result has the expected shape and type.

Train the model

Train the model for 1,000 epochs, and record the training and validation accuracy in the history object.

```python
# Display training progress by printing a single dot for each completed epoch
class PrintDot(keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs):
        if epoch % 100 == 0: print('')
        print('.', end='')

EPOCHS = 1000

history = model.fit(
    normed_train_data, train_labels,
    epochs=EPOCHS, validation_split = 0.2, verbose=0,
    callbacks=[Prin...
```
Visualize the model's training progress using the statistics stored in the history object.

```python
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
hist.tail()

import matplotlib.pyplot as plt

def plot_history(history):
    hist = pd.DataFrame(history.history)
    hist['epoch'] = history.epoch

    plt.figure(figsize=(8,12))

    plt.subplot(2,1,1)
    plt.xlabel('Epoch')
    plt.ylabel('Mean Abs Error [MPG]'...
```
This graph shows little improvement after a few hundred epochs. Let's update the `model.fit` call to automatically stop training when the validation score stops improving. We'll use an EarlyStopping callback that checks a training condition at every epoch; if a set number of epochs passes without improvement, training stops automatically. You can learn more about this callback here.

```python
model = build_model()

# The patience parameter is the number of epochs to check for improvement
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)

history = model.fit(normed_train_data, train_labels, epochs=EPOCHS,
                    validation_split = 0.2, verbose=0,
                    callbacks=[early_stop, PrintDot()])

plot_history(history)
```
The graph shows that the average error on the validation set is about +/- 2 MPG. Is this good? We'll leave that decision up to you.

Let's check how well the model performs on the test set, which we did not use when training the model. This tells us how well we can expect the model to predict when it is used in the real world:

```python
loss, mae, mse = model.evaluate(normed_test_data, test_labels, verbose=2)

print("Mean absolute error on the test set: {:5.2f} MPG".format(mae))
```
Make predictions

Finally, predict MPG values using samples from the test set:

```python
test_predictions = model.predict(normed_test_data).flatten()

plt.scatter(test_labels, test_predictions)
plt.xlabel('True Values [MPG]')
plt.ylabel('Predictions [MPG]')
plt.axis('equal')
plt.axis('square')
plt.xlim([0,plt.xlim()[1]])
plt.ylim([0,plt.ylim()[1]])
_ = plt.plot([-100, 100], [-100, 100])
```
It looks like the model predicts reasonably well. Let's take a look at the error distribution.

```python
error = test_predictions - test_labels
plt.hist(error, bins = 25)
plt.xlabel("Prediction Error [MPG]")
_ = plt.ylabel("Count")
```
As always, let's do imports and initialize a logger and a new Bundle.
```python
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt

logger = phoebe.logger('error')

b = phoebe.default_binary()
```
2.3/tutorials/ltte.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Now let's add a light curve dataset to see how ltte affects the timings of eclipses.
b.add_dataset('lc', times=phoebe.linspace(-0.05, 0.05, 51), dataset='lc01')
Relevant Parameters The 'ltte' parameter in context='compute' defines whether light travel time effects are taken into account or not.
print(b['ltte@compute'])
Comparing with and without ltte In order to have a binary system with any noticeable ltte effects, we'll set a somewhat extreme mass-ratio and semi-major axis.
```python
b['sma@binary'] = 100
b['q'] = 0.1
```
We'll just ignore the fact that this will be a completely unphysical system since we'll leave the radii and temperatures alone despite somewhat ridiculous masses - but since the masses and radii disagree so much, we'll have to abandon atmospheres and use blackbody.
```python
b.set_value_all('atm', 'blackbody')
b.set_value_all('ld_mode', 'manual')
b.set_value_all('ld_func', 'logarithmic')

b.run_compute(irrad_method='none', ltte=False, model='ltte_off')
b.run_compute(irrad_method='none', ltte=True, model='ltte_on')

afig, mplfig = b.plot(show=True)
```
OMA, 2nd midterm exam 2012/2013

Problem 1. A rectangle ABCD is inscribed in a semicircle of radius 1 so that vertices A and B lie on the diameter and vertices C and D lie on the arc of the semicircle. What should the side lengths of the rectangle be so that its area is maximal?

```
%%tikz s 400,400 -sc 1.2 -f png
\draw [domain=0:180] plot ({cos(\x)}, {sin(\x)});
\draw (-1,0) -- (1, 0);
\draw [color=red] (-0.5, 0) -- node[below, color=black] {2a} ++ (1, 0);
\draw [color=red] (-0.5, 0.8660254037844386) -- (0.5, 0.8660254037844386);
\draw [color=red] (-0.5, 0) -- node[left, color=black] {b} ++ (0, 0...
```
oma/kolokviji/OMA, 2. kolokvij 2012_2013.ipynb
mrcinv/matpy
gpl-2.0
We maximize the function $P=2ab$, where also $a^2 + b^2 = 1$. Instead of the area we will maximize its square (which attains its maximum at the same point as the original function).

```python
P = sympy.symbols('P', cls=sympy.Function)
eq1 = Eq(P(b), (2*a*b)**2)
eq2 = Eq(a**2+b**2, 1)
equation = Eq(P(b), solve([eq1, eq2], P(b), a**2)[P(b)])
equation
P = sympy.lambdify(b, equation.rhs)
x = sympy.symbols('x', positive=True)
solve(Eq(P(x).diff(x), 0))[0]
```
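The answer can be double-checked by eliminating the constraint directly: with $b=\sqrt{1-a^2}$, maximizing $P^2 = 4a^2(1-a^2)$ gives $a=b=1/\sqrt{2}$, so the optimal sides are $2a=\sqrt{2}$ and $b=1/\sqrt{2}$. A short sympy sketch (variable names are mine):

```python
import sympy

a = sympy.symbols('a', positive=True)
b = sympy.sqrt(1 - a**2)        # constraint a**2 + b**2 == 1
P2 = (2*a*b)**2                 # square of the area P = 2*a*b

# Stationary points of P^2 in a; the positive root is the optimum
crit = sympy.solve(sympy.diff(P2, a), a)
```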
Problem 2. Let $$f(x,y)=3x^2-3y^2+8xy-6x-8y+3.$$ Compute the gradient of $f(x,y)$.

```python
x, y = sympy.symbols('x y')
f = lambda x, y: 3*x**2 - 3*y**2 + 8*x*y-6*x-8*y+3
f(x,y).diff(x), f(x,y).diff(y)
```
Compute the stationary points of $f(x,y)$.

```python
sympy.solve([f(x,y).diff(x), f(x,y).diff(y)])
```
Problem 3. Compute the derivative of $$\frac{\cos(x)}{\sin(x)}.$$

```python
x = sympy.symbols('x')
f = lambda x: sympy.cos(x)/sympy.sin(x)
sympy.simplify(f(x).diff())
```
*Using a substitution, compute the indefinite integral $$\int \frac{\cos(x)}{\sin(x)}\,dx.$$*

```python
sympy.simplify(f(x).integrate())
```
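The antiderivative can be verified by differentiation; a quick sympy check (the absolute value is dropped here, since differentiation does not see it):

```python
import sympy

x = sympy.symbols('x')
antideriv = sympy.log(sympy.sin(x)**2) / 2   # = log|sin x| up to a constant

# The derivative should reduce back to cos(x)/sin(x)
diff_check = sympy.simplify(antideriv.diff(x) - sympy.cos(x)/sympy.sin(x))
```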
In the result above, besides the constant, an absolute value is missing inside the $\log$ (sympy works over the complex numbers), so the correct result is $$ \frac{1}{2}\log(\sin^2(x)) + C = \log(|\sin(x)|) + C.$$

Using the rule for integration by parts, compute $$\int\frac{x}{\sin^2(x)}\,dx.$$

```python
x = sympy.symbols('x')
f = lambda x: x/sympy.sin(x)**2
sympy.simplify(f(x).integrate())
```
This solution, too, can be simplified, to $$ \int\frac{x}{\sin^2(x)}\,dx = \log(|\sin(x)|) - x\cot(x) + C.$$

Problem 4. Draw the region bounded by the curves $y=e^{2x}$ and $y=-e^{2x}+4$. Compute the area of the region.

```python
from matplotlib import pyplot as plt
import numpy as np

x = sympy.symbols('x')
f = lambda x: np.exp(2*x)
g = lambda x: -np.exp(2*x)+4

fig, ax = plt.subplots()
xs = np.linspace(0,0.6)
ax.fill_between(xs, f(xs),g(xs),where = f(xs)>=g(xs), facecolor='green',interpolate=True)
ax.fill_between(xs, f(xs), g(xs), where = f(xs)...
```
We have to compute the area of the red region.

```python
x = sympy.symbols('x', real=True)
f = lambda x: sympy.E**(2*x)
g = lambda x: -sympy.E**(2*x)+4
intersection = sympy.solve(sympy.Eq(f(x), g(x)))[0]
result = sympy.integrate(g(x)-f(x), (x, 0, intersection))
result
result.evalf()
```
Hamilton (1989) switching model of GNP This replicates Hamilton's (1989) seminal paper introducing Markov-switching models. The model is an autoregressive model of order 4 in which the mean of the process switches between two regimes. It can be written: $$ y_t = \mu_{S_t} + \phi_1 (y_{t-1} - \mu_{S_{t-1}}) + \phi_2 (y_...
```python
# Get the RGNP data to replicate Hamilton
dta = pd.read_stata('https://www.stata-press.com/data/r14/rgnp.dta').iloc[1:]
dta.index = pd.DatetimeIndex(dta.date, freq='QS')
dta_hamilton = dta.rgnp

# Plot the data
dta_hamilton.plot(title='Growth rate of Real GNP', figsize=(12,3))

# Fit the model
mod_hamilton = sm.tsa.Mar...
```
v0.12.1/examples/notebooks/generated/markov_autoregression.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
We plot the filtered and smoothed probabilities of a recession. Filtered refers to an estimate of the probability at time $t$ based on data up to and including time $t$ (but excluding time $t+1, ..., T$). Smoothed refers to an estimate of the probability at time $t$ using all the data in the sample. For reference, the ...
```python
fig, axes = plt.subplots(2, figsize=(7,7))

ax = axes[0]
ax.plot(res_hamilton.filtered_marginal_probabilities[0])
ax.fill_between(usrec.index, 0, 1, where=usrec['USREC'].values, color='k', alpha=0.1)
ax.set_xlim(dta_hamilton.index[4], dta_hamilton.index[-1])
ax.set(title='Filtered probability of recession')

ax = axes[1...
```
From the estimated transition matrix we can calculate the expected duration of a recession versus an expansion.
print(res_hamilton.expected_durations)
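The expected durations follow directly from the transition matrix: the time spent in regime $i$ is geometric with continuation probability $p_{ii}$, so its mean is $1/(1-p_{ii})$. A quick check with illustrative numbers (not Hamilton's estimates):

```python
import numpy as np

# P(stay in recession), P(stay in expansion) -- made-up persistence probabilities
p_stay = np.array([0.75, 0.90])

# Mean of a geometric distribution with continuation probability p
expected_durations = 1.0 / (1.0 - p_stay)
# -> 4 quarters in recession, 10 in expansion, for these illustrative numbers
```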
In this case, it is expected that a recession will last about one year (4 quarters) and an expansion about two and a half years. Kim, Nelson, and Startz (1998) Three-state Variance Switching This model demonstrates estimation with regime heteroskedasticity (switching of variances) and no mean effect. The dataset can be...
```python
# Get the dataset
ew_excs = requests.get('http://econ.korea.ac.kr/~cjkim/MARKOV/data/ew_excs.prn').content
raw = pd.read_table(BytesIO(ew_excs), header=None, skipfooter=1, engine='python')
raw.index = pd.date_range('1926-01-01', '1995-12-01', freq='MS')

dta_kns = raw.loc[:'1986'] - raw.loc[:'1986'].mean()

# Plot the ...
```
Below we plot the probabilities of being in each of the regimes; only in a few periods is a high-variance regime probable.
```python
fig, axes = plt.subplots(3, figsize=(10,7))

ax = axes[0]
ax.plot(res_kns.smoothed_marginal_probabilities[0])
ax.set(title='Smoothed probability of a low-variance regime for stock returns')

ax = axes[1]
ax.plot(res_kns.smoothed_marginal_probabilities[1])
ax.set(title='Smoothed probability of a medium-variance regime f...
```
Filardo (1994) Time-Varying Transition Probabilities This model demonstrates estimation with time-varying transition probabilities. The dataset can be reached at http://econ.korea.ac.kr/~cjkim/MARKOV/data/filardo.prn. In the above models we have assumed that the transition probabilities are constant across time. Here w...
```python
# Get the dataset
filardo = requests.get('http://econ.korea.ac.kr/~cjkim/MARKOV/data/filardo.prn').content
dta_filardo = pd.read_table(BytesIO(filardo), sep=' +', header=None, skipfooter=1, engine='python')
dta_filardo.columns = ['month', 'ip', 'leading']
dta_filardo.index = pd.date_range('1948-01-01', '1991-04-01', fr...
```
The time-varying transition probabilities are specified by the exog_tvtp parameter. Here we demonstrate another feature of model fitting - the use of a random search for MLE starting parameters. Because Markov switching models are often characterized by many local maxima of the likelihood function, performing an initia...
```python
mod_filardo = sm.tsa.MarkovAutoregression(
    dta_filardo.iloc[2:]['dlip'], k_regimes=2, order=4, switching_ar=False,
    exog_tvtp=sm.add_constant(dta_filardo.iloc[1:-1]['dmdlleading']))

np.random.seed(12345)
res_filardo = mod_filardo.fit(search_reps=20)
res_filardo.summary()
```
Below we plot the smoothed probability of the economy operating in a low-production state, and again include the NBER recessions for comparison.
```python
fig, ax = plt.subplots(figsize=(12,3))

ax.plot(res_filardo.smoothed_marginal_probabilities[0])
ax.fill_between(usrec.index, 0, 1, where=usrec['USREC'].values, color='gray', alpha=0.2)
ax.set_xlim(dta_filardo.index[6], dta_filardo.index[-1])
ax.set(title='Smoothed probability of a low-production state');
```
Using the time-varying transition probabilities, we can see how the expected duration of a low-production state changes over time:
```python
res_filardo.expected_durations[0].plot(
    title='Expected duration of a low-production state', figsize=(12,3));
```
Import the AlignIO package, the Biopython package for manipulating files containing multiple alignments in various formats (including clustal, the format of the input file).
from Bio import AlignIO
laboratorio/lezione17-09dic21/esercizio4-biopython.ipynb
bioinformatica-corso/lezioni
cc0-1.0
Read the input alignment

The AlignIO package provides the read function for reading an alignment: AlignIO.read(input_file_name, format). It returns a MultipleSeqAlignment object, which is an iterable containing SeqRecord objects, one for each row of the alignment that was read...

```python
alignment = AlignIO.read("mafft-alignments.clustalw", "clustal")
```
The length of the input alignment (the number of columns of the alignment matrix) is:

```python
alignment.get_alignment_length()
```
Convert the object into a list of SeqRecord objects.

```python
alignment = list(alignment)
alignment
```
Remove the leading gaps. Find the longest prefix made only of - symbols among the rows of the alignment. Supposing that prefix has length g, remove the prefix of length g from every row of the alignment. For example, the following alignment of three rows: GTATGTGTCATGTTTTTGCTA --ATGTGTCATG-TTT----- --...

```python
import re

gap_list = [re.findall('^-+', str(row.seq)) for row in alignment]
gap_size_list = [len(gap[0]) for gap in gap_list if gap]
gap_size_list[:0] = [0]
leading_gaps = max(gap_size_list)
alignment = [row[leading_gaps:] for row in alignment]
alignment
```
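The leading-gap trimming can be exercised on plain strings (toy rows, not the real alignment); every row loses a prefix whose length equals the longest leading run of '-' found in any row:

```python
import re

rows = ["----ACGTACGT",
        "--GTACGTACGT",
        "ACGTACGTACGT"]

# Length of the leading '-' run in each row (0 when there is none)
leading = [len(m.group(0)) if (m := re.match('-+', r)) else 0 for r in rows]
g = max(leading)               # 4 for these toy rows
trimmed = [r[g:] for r in rows]
```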
Remove the trailing gaps. Find the longest suffix made only of - symbols among the rows of the alignment. Supposing that suffix has length g, remove the suffix of length g from every row. For example, the following alignment of three rows: GTGTCATGTTTTTGCTA GTGTCATG-TTT----- GTGTCATGTTTTTG---...

```python
gap_list = [re.findall('-+$', str(row.seq)) for row in alignment]
gap_size_list = [len(gap[0]) for gap in gap_list if gap]
gap_size_list[:0] = [0]
trailing_gaps = max(gap_size_list)
alignment = [row[:len(row)-trailing_gaps] for row in alignment]
alignment
```
Create the list of genome identifiers

```python
index_list = [row.id for row in alignment]
index_list
```
Create the dictionary containing the data used to build the data frame: key: 1-based position of the variant (position of the column in the input alignment); value: list of the aligned symbols involved in the variant (the first symbol must be the one of the reference, while if a genome does not present a diffe...

```python
df_data = {}
reference = alignment.pop(0)

for (i,c) in enumerate(reference):
    variant_list = []
    is_variant = False
    for row in alignment:
        variant = ''
        if row[i] != c and row[i] in {'A', 'C', 'G', 'T'}:
            is_variant = True
            variant = row[i]
        variant_li...
```
Create the data frame with df = pd.DataFrame(df_data, index = index_list)

```python
import pandas as pd

df = pd.DataFrame(df_data, index = index_list)
df
```
Extract the genome with the most variants and the one with the fewest

Determine the list of the number of variants per genome (for all genomes except the reference).

```python
variants_per_genome = [len(list(filter(lambda x: x!='', list(row)))) for row in df.values]
variants_per_genome.pop(0)
variants_per_genome
```
Alternatively:

```python
variants_per_genome = [df.shape[1]-list(df.loc[index]).count('') for index in index_list[1:]]
variants_per_genome
```
Extract the genome with the most variants.

```python
index_list[variants_per_genome.index(max(variants_per_genome))+1]
```
Extract the genome with the fewest variants.

```python
index_list[variants_per_genome.index(min(variants_per_genome))+1]
```
Alternatively, to extract the genome with the fewest variants:

```python
null_df = pd.DataFrame((df == '').sum(axis=1), columns=['difference'])
null_df[1:][null_df[1:]['difference'] == null_df[1:]['difference'].max()]
```
Alternatively, to extract the genome with the most variants:

```python
null_df[1:][null_df[1:]['difference'] == null_df[1:]['difference'].min()]
```
Determine the data frame of the "complete" variants

Select from the previous data frame only the columns corresponding to "complete" variants.

```python
df_complete = df[[col for col in df.columns if all(df[col] != '')]]
df_complete
```
Determine the data frame of the "stable" variants

Select from the previous data frame only the columns corresponding to "stable" variants.

```python
df_stable = df_complete[[col for col in df_complete.columns if len(df_complete[col][1:].unique()) == 1]]
df_stable
```
Get the list of positions where there is a gap in the reference genome.

```python
ref_gaps = [col for col in df.columns if df[col][0] == '-']
ref_gaps
```
Get the list of positions where there is a gap in at least one of the genomes (other than the reference).

```python
other_gaps = [col for col in df.columns if any(df[col][1:] == '-')]
other_gaps
```
As a rule, I always conduct statistical simulations to make sure the functions I have written actually perform the way I expect them to when the null is known. If you can't get your method to work on a data generating procedure of your choosing, it should not leave the statistical laboratory! In the simulations below, ...
# Parameters of simulations nsim = 100000 alpha = 0.05 nlow, nhigh = 25, 75 n1, n2 = np.random.randint(nlow, nhigh+1, nsim), np.random.randint(nlow, nhigh+1, nsim) se1, se2 = np.exp(np.random.randn(nsim)), np.exp(np.random.randn(nsim)) mu_seq = np.arange(0,0.21,0.01) tt_seq, method_seq = np.repeat(['eq','neq'],2), np.t...
_rmd/extra_unequalvar/unequalvar.ipynb
erikdrysdale/erikdrysdale.github.io
mit
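The same validation idea can be sketched with SciPy's reference implementations: the pooled and Welch t-tests computed from raw data should match the ones computed from summary statistics alone (this is a minimal check, not the `tdist_2dist` function itself):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(0, 1, 50)
y = rng.normal(0, 3, 100)  # deliberately unequal variance

# Pooled (equal-variance) and Welch (unequal-variance) t-tests from raw data.
t_eq, p_eq = stats.ttest_ind(x, y, equal_var=True)
t_neq, p_neq = stats.ttest_ind(x, y, equal_var=False)

# The pooled test reproduced from summary statistics alone.
t_eq2, p_eq2 = stats.ttest_ind_from_stats(
    x.mean(), x.std(ddof=1), len(x),
    y.mean(), y.std(ddof=1), len(y),
    equal_var=True,
)
```

Agreement between the raw-data and summary-statistics versions is exactly the kind of invariant the simulations check at scale.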
Figure 1 above shows that the tdist_2dist function is working as expected. When the variances of $x$ and $y$ are equivalent, there is no difference in performance between approaches. When the mean difference is zero, the probability of rejecting the null is exactly equivalent to the level of the test (5%). However, whe...
n1, n2 = 25, 75 se1 = 1 se2a, se2b = se1, se1 + 1 var1, var2a, var2b = se1**2, se2a**2, se2b**2 # ddof under different assumptions nu_a = n1 + n2 - 2 nu_b = (var1/n1 + var2b/n2)**2 / ( (var1/n1)**2/(n1-1) + (var2b/n2)**2/(n2-1) ) mu_seq = np.round(np.arange(0, 1.1, 0.1),2) # Pre-calculate power crit_ub_a, crit_lb_a = ...
_rmd/extra_unequalvar/unequalvar.ipynb
erikdrysdale/erikdrysdale.github.io
mit
Figure 2 shows that the power calculations line up exactly with the analytical expectations for both equal and unequal variances. Having thoroughly validated the type-I and type-II errors of this function, we can now move on to testing whether the means from multiple normal distributions are equal. (3) F-test for equali...
def fdist_anova(mus, ses, ns, var_eq=False): lshape = len(mus.shape) assert lshape <= 2 assert mus.shape == ses.shape if len(ns.shape) == 1: ns = cvec(ns.copy()) else: assert ns.shape == mus.shape if lshape == 1: mus = cvec(mus.copy()) ses = cvec(ses.copy()) v...
_rmd/extra_unequalvar/unequalvar.ipynb
erikdrysdale/erikdrysdale.github.io
mit
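Before trusting a summary-statistics ANOVA in simulation, the classical equal-variance F statistic can be checked by hand against `scipy.stats.f_oneway`. A sketch with made-up groups (this re-derives the textbook formula, not the `fdist_anova` internals):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
groups = [rng.normal(0, 1, 30), rng.normal(0.5, 1, 40), rng.normal(1, 1, 50)]

# Classical one-way ANOVA (assumes equal variances across groups).
k = len(groups)
n = sum(len(g) for g in groups)
grand = np.concatenate(groups).mean()
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
p_val = stats.f.sf(f_stat, k - 1, n - k)

# SciPy's reference implementation should agree.
f_ref, p_ref = stats.f_oneway(*groups)
```

The between/within sums of squares are the only inputs the F statistic needs, which is why the test can also be computed from group means, standard deviations, and counts alone.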
The simulations in Figure 3 show a similar finding to that of the t-test: when the ground-truth variances are equal, there are almost no differences between the tests, and the expected 5% false positive rate occurs when the means are equal. However, for the unequal variance situation, the assumption of homoskedasticity l...
from sklearn import datasets ix, iy = datasets.load_iris(return_X_y=True) v1, v2 = ix[:,0], ix[:,1] k = 1 all_stats = [stats.ttest_ind(v1, v2, equal_var=True)[k], tdist_2dist(v1.mean(), v2.mean(), v1.std(ddof=1), v2.std(ddof=1), len(v1), len(v2), var_eq=True)[k], stats.ttest_ind(v1, v2, equal_...
_rmd/extra_unequalvar/unequalvar.ipynb
erikdrysdale/erikdrysdale.github.io
mit
So far so good. Next, we'll use rpy2 to get the results in R, which supports equal and unequal variances via two different functions.
import rpy2.robjects as robjects moments_x = pd.DataFrame({'x':ix[:,0],'y':iy}).groupby('y').x.describe()[['mean','std','count']] all_stats = [np.array(robjects.r('summary(aov(Sepal.Length~Species,iris))[[1]][1, 5]'))[0], fdist_anova(moments_x['mean'], moments_x['std'], moments_x['count'], var_eq=True)[1...
_rmd/extra_unequalvar/unequalvar.ipynb
erikdrysdale/erikdrysdale.github.io
mit
Once again the results are identical to the benchmark functions. (5) Application to AUROC inference The empirical AUROC has an asymptotically normal distribution. Consequently, the difference between two AUROCs will also have an asymptotically normal distribution. For small sample sizes, the Hanley and McNeil adjustmen...
n1, n0 = 100, 200 n = n1 + n0 n1n0 = n1 * n0 mu_seq = np.round(np.arange(0, 1.01, 0.1),2) def se_auroc_hanley(auroc, n1, n0): q1 = (n1 - 1) * ((auroc / (2 - auroc)) - auroc ** 2) q0 = (n0 - 1) * ((2 * auroc ** 2) / (1 + auroc) - auroc ** 2) se_auroc = np.sqrt((auroc * (1 - auroc) + q1 + q0) / (n1 * n0)) ...
_rmd/extra_unequalvar/unequalvar.ipynb
erikdrysdale/erikdrysdale.github.io
mit
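The Hanley–McNeil standard error from the cell above can be exercised on its own; the helper is reproduced here so the sketch is self-contained, and the input values are purely illustrative:

```python
import numpy as np

def se_auroc_hanley(auroc, n1, n0):
    # Hanley & McNeil (1982) standard error for an empirical AUROC
    # computed from n1 positives and n0 negatives.
    q1 = (n1 - 1) * ((auroc / (2 - auroc)) - auroc ** 2)
    q0 = (n0 - 1) * ((2 * auroc ** 2) / (1 + auroc) - auroc ** 2)
    return np.sqrt((auroc * (1 - auroc) + q1 + q0) / (n1 * n0))

# The SE shrinks as either class count grows.
se_small = se_auroc_hanley(0.7, 50, 100)
se_large = se_auroc_hanley(0.7, 500, 1000)
```

Plugging these standard errors into the two-distribution t-test machinery above is what enables inference on the difference of two AUROCs.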
Example 2
# Example 2
data = ['ACME', 50, 91.1, (2012, 12, 21)]
name, shares, price, date = data
print(name)
print(date)
name, shares, price, (year, mon, day) = data
print(name)
print(year)
print(mon)
print(day)
notebooks/ch01/01_unpacking_a_sequence_into_variables.ipynb
tuanavu/python-cookbook-3rd
mit
Example 3 If there is a mismatch in the number of elements, you’ll get an error
# Example 3
# error with mismatch in number of elements
p = (4, 5)
x, y, z = p
notebooks/ch01/01_unpacking_a_sequence_into_variables.ipynb
tuanavu/python-cookbook-3rd
mit
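The mismatch in Example 3 raises an ordinary `ValueError`, so it can be caught like any other exception. A small sketch:

```python
# A mismatch in the number of elements raises ValueError,
# which can be handled explicitly.
p = (4, 5)
try:
    x, y, z = p
except ValueError as err:
    msg = str(err)
print(msg)
```

In CPython 3 the message spells out the counts, e.g. "not enough values to unpack (expected 3, got 2)".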
Example 4 Unpacking actually works with any object that happens to be iterable, not just tuples or lists. This includes strings, files, iterators, and generators.
# Example 4: string
s = 'Hello'
a, b, c, d, e = s
print(a)
print(b)
print(e)
notebooks/ch01/01_unpacking_a_sequence_into_variables.ipynb
tuanavu/python-cookbook-3rd
mit
Example 5 Discard certain values
# Example 5
# discard certain values
data = ['ACME', 50, 91.1, (2012, 12, 21)]
_, shares, price, _ = data
print(shares)
print(price)
notebooks/ch01/01_unpacking_a_sequence_into_variables.ipynb
tuanavu/python-cookbook-3rd
mit
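Related, though beyond the examples above: Python 3's star expressions capture (or discard) a whole run of values without naming each one:

```python
# Star unpacking captures a variable number of items (Python 3 only).
record = ['ACME', 50, 91.1, (2012, 12, 21)]
name, *middle, date = record
print(name)    # first item
print(middle)  # everything in between, as a list
print(date)    # last item
```

This avoids having to know the sequence length in advance when only the ends matter.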
From Wikipedia: In the mathematics of shuffling playing cards, the Gilbert–Shannon–Reeds model is a probability distribution on riffle shuffle permutations that has been reported to be a good match for experimentally observed outcomes of human shuffling, and that forms the basis for a recommendation that a deck of car...
def get_random_number_for_right_deck(n: int, seed: int=None, ) -> int: """ Return the number of cards to split into the right sub-deck. :param n: one above the highest number that could be returned by this function. :param seed: optional seed for the random number generator to enable ...
Gilbert-Shannon-Reeds.ipynb
proinsias/gilbert-shannon-reeds
mit
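Under the Gilbert–Shannon–Reeds model, the cut point itself is binomially distributed: a deck of n cards is split at position k with probability C(n, k) / 2^n, so cuts concentrate near the middle of the deck. A minimal sketch of that distribution, independent of the helper above (the function name here is hypothetical):

```python
import numpy as np

def gsr_cut(n: int, rng: np.random.Generator) -> int:
    # Size of one sub-deck under GSR: Binomial(n, 1/2).
    return int(rng.binomial(n, 0.5))

rng = np.random.default_rng(42)
cuts = [gsr_cut(52, rng) for _ in range(10_000)]
```

For a 52-card deck the average cut lands near 26 cards.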
Next, define a function to determine which hand to drop a card from.
def should_drop_from_right_deck(n_left: int, n_right:int, seed: int=None, ) -> bool: """ Determine whether we drop a card from the right or left sub-deck. Either `n_left` or `n_right` (or both) must be greater than zero. :param n_left: the number of cards in the left sub-deck. :param n_rig...
Gilbert-Shannon-Reeds.ipynb
proinsias/gilbert-shannon-reeds
mit
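In the GSR riffle, the next card falls from whichever sub-deck is larger with proportionally higher probability: the right pile drops a card with probability n_right / (n_left + n_right). A sketch of that rule (the function name is hypothetical, not the helper above):

```python
import random

def drop_from_right(n_left: int, n_right: int, rng: random.Random) -> bool:
    # Drop from the right pile with probability n_right / (n_left + n_right).
    assert n_left > 0 or n_right > 0
    return rng.random() < n_right / (n_left + n_right)

rng = random.Random(0)
# Degenerate cases are deterministic: an empty pile never drops a card.
always_right = drop_from_right(0, 5, rng)
never_right = drop_from_right(5, 0, rng)
```

Because `random.random()` returns values in [0, 1), the probability-1 and probability-0 cases resolve deterministically.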
Now we can implement the 'Gilbert–Shannon–Reeds' shuffle.
def shuffle(deck: np.array, seed: int=None, ) -> np.array: """ Shuffle the input 'deck' using the Gilbert–Shannon–Reeds method. :param seq: the input sequence of integers. :param seed: optional seed for the random number generator to enable deterministic behavior. :return: A new de...
Gilbert-Shannon-Reeds.ipynb
proinsias/gilbert-shannon-reeds
mit
Finally, we run some experiments to confirm the recommendation of seven shuffles for a deck of 52 cards.
num_cards = 52 max_num_shuffles = 20 num_decks = 10000 # Shuffling the cards using a uniform probability # distribution results in the same expected frequency # for each card in each deck position. uniform_rel_freqs = np.full( shape=[num_cards, num_cards], fill_value=1./num_cards, ) def calculate_differences(...
Gilbert-Shannon-Reeds.ipynb
proinsias/gilbert-shannon-reeds
mit
The KS statistics are of most use here. You can see how the statistic approaches its maximum value around num_shuffles = 7.
fs = 14 fig, ax = plt.subplots(figsize=(8, 6), dpi=300) ax.scatter(range(1, max_num_shuffles + 1), kstests, ); ax.xaxis.set_major_locator(matplotlib.ticker.MaxNLocator(integer=True)) ax.set_xlabel('Number of Shuffles', fontsize=fs, ) ax.set_ylabel('Kolmogorov-Smirnov Statistic', fontsize=fs, ) ax.set_xlim([0, max_num_...
Gilbert-Shannon-Reeds.ipynb
proinsias/gilbert-shannon-reeds
mit
As always, let's do imports and initialize a logger and a new bundle.
import phoebe
import numpy as np

b = phoebe.default_binary()
2.3/examples/eccentric_ellipsoidal.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Now we need a highly eccentric system that nearly overflows at periastron and is slightly eclipsing.
b.set_value('q', value=0.7) b.set_value('period', component='binary', value=10) b.set_value('sma', component='binary', value=25) b.set_value('incl', component='binary', value=0) b.set_value('ecc', component='binary', value=0.9) print(b.filter(qualifier='requiv*', context='component')) b.set_value('requiv', component=...
2.3/examples/eccentric_ellipsoidal.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Adding Datasets We'll add light curve, orbit, and mesh datasets.
b.add_dataset('lc', compute_times=phoebe.linspace(-2, 2, 201), dataset='lc01') b.add_dataset('orb', compute_times=phoebe.linspace(-2, 2, 201)) anim_times = phoebe.linspace(-2, 2, 101) b.add_dataset('mesh', compute_times=anim_times, coordinates='uvw', ...
2.3/examples/eccentric_ellipsoidal.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Running Compute
b.run_compute(irrad_method='none')
2.3/examples/eccentric_ellipsoidal.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Plotting
afig, mplfig = b.plot(kind='lc', x='phases', t0='t0_perpass', show=True)
2.3/examples/eccentric_ellipsoidal.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Now let's make a nice figure. Let's go through these options: * time: make the plot at this single time * z: by default, orbits plot in 2d, but since we're overplotting with a mesh, we want the z-ordering to be correct, so we'll have them plot with w-coordinates in the z-direction. * c: (will be ignored by the mesh): s...
afig, mplfig = b.plot(time=0.0, z={'orb': 'ws'}, c={'primary': 'blue', 'secondary': 'red'}, fc={'primary': 'blue', 'secondary': 'red'}, ec='face', uncover={'orb': True}, trail={'orb': 0...
2.3/examples/eccentric_ellipsoidal.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Now let's animate the same figure in time. We'll use the same arguments as the static plot above, with the following exceptions: times: pass our array of times that we want the animation to loop over. pad_aspect: pad_aspect doesn't work with animations, so we'll disable it to avoid the warning messages. animate: self-ex...
afig, mplfig = b.plot(times=anim_times, z={'orb': 'ws'}, c={'primary': 'blue', 'secondary': 'red'}, fc={'primary': 'blue', 'secondary': 'red'}, ec='face', uncover={'orb': True}, trail={...
2.3/examples/eccentric_ellipsoidal.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
The following are all Theano defined types:
import theano.tensor as T

A = T.matrix('A')
b = T.scalar('b')
v = T.vector('v')
print(A.type)
print(b.type)
print(v.type)
TheanoLearning/TheanoLearning/theano_demo.ipynb
shengshuyang/PCLCombinedObjectDetection
gpl-2.0