docs | category | thread | href | question | context | marked |
---|---|---|---|---|---|---|
tensorflow | Research & Models | Where should I place dropout layer in the network | https://discuss.tensorflow.org/t/where-should-i-place-dropout-layer-in-the-network/5909 | How and where can I add a dropout layer in the following code:
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, MaxPool2D, Conv2DTranspose, Concatenate, Input
from tensorflow.keras.models import Model
from tensorflow.keras.applications import ResNet50
def conv_block(input, num_filters):
x = Conv2D(num_filters, 3, padding="same")(input)
x = BatchNormalization()(x)
x = Activation(“relu”)(x)
x = Conv2D(num_filters, 3, padding="same")(x)
x = BatchNormalization()(x)
x = Activation("relu")(x)
return x
def decoder_block(input, skip_features, num_filters):
x = Conv2DTranspose(num_filters, (2, 2), strides=2, padding="same")(input)
x = Concatenate()([x, skip_features])
x = conv_block(x, num_filters)
return x
def resnet50_unet(input_shape):
“”" Input “”"
inputs = Input(input_shape)
""" Pre-trained ResNet50 Model """
resnet50 = ResNet50(include_top=False, weights=None, input_tensor=inputs)
""" Encoder """
s1 = resnet50.layers[0].output ## (512 x 512)
s2 = resnet50.get_layer("conv1_relu").output ## (256 x 256)
s3 = resnet50.get_layer("conv2_block3_out").output ## (128 x 128)
s4 = resnet50.get_layer("conv3_block4_out").output ## (32 x 32)
""" Bridge """
b1 = resnet50.get_layer("conv4_block6_out").output ## (32 x 32)
""" Decoder """
d1 = decoder_block(b1, s4, 256) ## (64 x 64)
d2 = decoder_block(d1, s3, 128) ## (128 x 128)
d3 = decoder_block(d2, s2, 64) ## (256 x 256)
d4 = decoder_block(d3, s1, 32) ## (512 x 512)
""" Output """
outputs = Conv2D(1, 1, padding="same", activation="sigmoid")(d4)
model = Model(inputs, outputs, name="ResNet50_U-Net")
return model
if __name__ == "__main__":
input_shape = (256, 256, 3)
model = resnet50_unet(input_shape)
#model.summary() | @Aleena_Suhail,
Can you take a look at this thread, which might give you some insights on the same? | 0 |
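For the dropout question above, a hedged sketch (one common placement, not necessarily what the linked thread recommends) is to insert Dropout after the activations inside conv_block; the rate of 0.2 below is an arbitrary illustration:

from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, Dropout

def conv_block(inputs, num_filters, dropout_rate=0.2):
    # Same block as in the question, with Dropout added after each activation.
    x = Conv2D(num_filters, 3, padding="same")(inputs)
    x = BatchNormalization()(x)
    x = Activation("relu")(x)
    x = Dropout(dropout_rate)(x)  # regularize the first conv's activations
    x = Conv2D(num_filters, 3, padding="same")(x)
    x = BatchNormalization()(x)
    x = Activation("relu")(x)
    x = Dropout(dropout_rate)(x)  # and the second conv's activations
    return x

For segmentation models, SpatialDropout2D (which drops whole feature maps) is a frequently used alternative to plain Dropout.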
tensorflow | Research & Models | How to improve accuracy of a CNN_LSTM binary classifier in TF 2.4 | https://discuss.tensorflow.org/t/how-to-improve-accuracy-of-a-cnn-lstm-binary-classifier-in-tf-2-4/4764 | I am trying to build a CNN-LSTM classifier for 1D sequential data. The input is of length 20 and contains 4 features.
I have trained the model and saved it. However, I am unable to get good performance on either the training or the test data:
Below is my code for the tensorflow model.
model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv1D(filters=128, kernel_size=8, padding = 'same', activation='relu', input_shape = (20,4)))
model.add(tf.keras.layers.Conv1D(filters=128, kernel_size=5, padding = 'same', activation='relu', kernel_regularizer=tf.keras.regularizers.l2(0.01)))
model.add(tf.keras.layers.Conv1D(filters=128, kernel_size=3, padding = 'same', activation='relu', kernel_regularizer=tf.keras.regularizers.l2(0.01)))
model.add(tf.keras.layers.MaxPooling1D(pool_size=2))
model.add(tf.keras.layers.LSTM(units = 128))
model.add(tf.keras.layers.Dense(units = 1, activation = 'sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics = 'accuracy')
model.build()
model.summary()
history = model.fit(X_tf, y_tf, epochs=60, batch_size=256, validation_data = (X_tf_,y_tf_))
Here are the logs that I am getting while training.
Epoch 5/60 19739/19739 [==============================] - 1212s 61ms/step - loss: 0.5858 - accuracy: 0.7055 - val_loss: 0.5854 - val_accuracy: 0.7062
I need help with how I can further improve the performance. What are the various techniques that I can apply to sequential data?
My training dataset has 4.8 million rows and test set has 1.2 million rows. | You can make the model bigger: add more LSTM layers, increase the number of units in the layers, make them bidirectional, add dense layers with activations after the last LSTM or experiment with other architectures.
Other way is to change the number of epochs, batch size and learning rate and see how it affects the results.
If nothing helps, check for class imbalance and how both classes are distributed between the train and validation sets. Apply some basic techniques for imbalanced data, like using sample weights or generating synthetic data for the underrepresented class.
Add more features, if it is possible, or generate new features from existing ones. | 0 |
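A hedged sketch of two of the suggestions above - stacking bidirectional LSTMs and using class weights for imbalance - assuming the same (20, 4) input shape as in the question (the layer sizes are illustrative only):

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(128, 8, padding='same', activation='relu', input_shape=(20, 4)),
    tf.keras.layers.Conv1D(128, 5, padding='same', activation='relu'),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss='binary_crossentropy', metrics=['accuracy'])

# Class weights for an imbalanced 0/1 target (y_tf as a NumPy array of labels):
# neg, pos = np.bincount(y_tf.astype(int))
# class_weight = {0: (neg + pos) / (2.0 * neg), 1: (neg + pos) / (2.0 * pos)}
# model.fit(X_tf, y_tf, epochs=60, batch_size=256, class_weight=class_weight,
#           validation_data=(X_tf_, y_tf_))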
tensorflow | Research & Models | [Research ] Weather forecasting for up to 12 hours with MetNet-2 (Google Research) | https://discuss.tensorflow.org/t/research-weather-forecasting-for-up-to-12-hours-with-metnet-2-google-research/5878 | Meteorological Neural Network 2 (MetNet-2) - a new probabilistic weather model featuring a forecasting range of up to 12 hours of lead time at a frequency of 2 minutes.
Blog post: Google AI Blog: MetNet-2: Deep Learning for 12-Hour Precipitation Forecasting 7
Within weather forecasting, deep learning techniques have shown particular promise for nowcasting 1 — i.e., predicting weather up to 2-6 hours ahead. Previous work has focused on using direct neural network models for weather data 1, extending neural forecasts from 0 to 8 hours with the MetNet architecture, generating continuations of radar data for up to 90 minutes ahead, and interpreting the weather information learned by these neural networks. Still, there is an opportunity for deep learning to extend improvements to longer-range forecasts.
To that end, in “Skillful Twelve Hour Precipitation Forecasts Using Large Context Neural Networks”, we push the forecasting boundaries of our neural precipitation model to 12 hour predictions while keeping a spatial resolution of 1 km and a time resolution of 2 minutes. By quadrupling the input context, adopting a richer weather input state, and extending the architecture to capture longer-range spatial dependencies, MetNet-2 substantially improves on the performance of its predecessor, MetNet. Compared to physics-based models, MetNet-2 outperforms the state-of-the-art HREF ensemble model for weather forecasts up to 12 hours ahead.
Interpreting What MetNet-2 Learns About Weather
Because MetNet-2 does not use hand-crafted physical equations, its performance inspires a natural question: What kind of physical relations about the weather does it learn from the data during training? Using advanced interpretability tools 1, we further trace the impact of various input features on MetNet-2’s performance at different forecast timelines. Perhaps the most surprising finding is that MetNet-2 appears to emulate the physics described by Quasi-Geostrophic Theory, which is used as an effective approximation of large-scale weather phenomena. MetNet-2 was able to pick up on changes in the atmospheric forces, at the scale of a typical high- or low-pressure system (i.e., the synoptic scale), that bring about favorable conditions for precipitation, a key tenet of the theory.
Conclusion
MetNet-2 represents a step toward enabling a new modeling paradigm for weather forecasting that does not rely on hand-coding the physics of weather phenomena, but rather embraces end-to-end learning from observations to weather targets and parallel forecasting on low-precision hardware. Yet many challenges remain on the path to fully achieving this goal, including incorporating more raw data about the atmosphere directly (rather than using the pre-processed starting state from physical models), broadening the set of weather phenomena, increasing the lead time horizon to days and weeks, and widening the geographic coverage beyond the United States.
(Figure: MetNet-2 architecture - see the paper)
Also, check out:
[Research ] Nowcasting the Next Hour of Rain (by DeepMind) Research & Models
GitHub: https://github.com/deepmind/deepmind-research/tree/master/nowcasting
Colab: Google Colab
Paper: Skilful precipitation nowcasting using deep generative models of radar | Nature
Recently introduced deep learning methods use radar to directly predict future rain rates, free of physical constraints5,6. While they accurately predict low-intensity rainfall, their operational utility is limited because their lack of constraints produces blurry nowcasts at longer lead times, yielding poor … | 1 |
tensorflow | Research & Models | Model training error | https://discuss.tensorflow.org/t/model-training-error/5622 | def expend_as(tensor, rep):
return layers.Lambda(lambda x, repnum: K.repeat_elements(x, repnum, axis=3),
arguments={'repnum': rep})(tensor)
def double_conv_layer(x, filter_size, size, dropout, batch_norm=False):
axis = 3
conv = layers.Conv2D(size, (filter_size, filter_size), padding='same')(x)
if batch_norm is True:
conv = layers.BatchNormalization(axis=axis)(conv)
conv = layers.Activation('relu')(conv)
conv = layers.Conv2D(size, (filter_size, filter_size), padding='same')(conv)
if batch_norm is True:
conv = layers.BatchNormalization(axis=axis)(conv)
conv = layers.Activation('relu')(conv)
if dropout > 0:
conv = layers.Dropout(dropout)(conv)
shortcut = layers.Conv2D(size, kernel_size=(1, 1), padding='same')(x)
if batch_norm is True:
shortcut = layers.BatchNormalization(axis=axis)(shortcut)
res_path = layers.add([shortcut, conv])
return res_path
def gating_signal(input, out_size, batch_norm=False):
“”"
resize the down layer feature map into the same dimension as the up layer feature map
using 1x1 conv
:param input: down-dim feature map
:param out_size:output channel number
:return: the gating feature map with the same dimension of the up layer feature map
“”"
x = layers.Conv2D(out_size, (1, 1), padding=‘same’)(input)
if batch_norm:
x = layers.BatchNormalization()(x)
x = layers.Activation(‘relu’)(x)
return x
def attention_block(x, gating, inter_shape):
shape_x = K.int_shape(x)
shape_g = K.int_shape(gating)
theta_x = layers.Conv2D(inter_shape, (2, 2), strides=(2, 2), padding='same')(x) # 16
shape_theta_x = K.int_shape(theta_x)
phi_g = layers.Conv2D(inter_shape, (1, 1), padding='same')(gating)
upsample_g = layers.Conv2DTranspose(inter_shape, (3, 3),
strides=(shape_theta_x[1] // shape_g[1], shape_theta_x[2] // shape_g[2]),
padding='same')(phi_g) # 16
concat_xg = layers.add([upsample_g, theta_x])
act_xg = layers.Activation('relu')(concat_xg)
psi = layers.Conv2D(1, (1, 1), padding='same')(act_xg)
sigmoid_xg = layers.Activation('sigmoid')(psi)
shape_sigmoid = K.int_shape(sigmoid_xg)
upsample_psi = layers.UpSampling2D(size=(shape_x[1] // shape_sigmoid[1], shape_x[2] // shape_sigmoid[2]))(sigmoid_xg) # 32
upsample_psi = expend_as(upsample_psi, shape_x[3])
y = layers.multiply([upsample_psi, x])
result = layers.Conv2D(shape_x[3], (1, 1), padding='same')(y)
result_bn = layers.BatchNormalization()(result)
return result_bn
def Attention_ResUNet(input_shape, NUM_CLASSES=1, dropout_rate=0.0, batch_norm=True):
FILTER_NUM = 64 # number of basic filters for the first layer
FILTER_SIZE = 3 # size of the convolutional filter
UP_SAMP_SIZE = 2
# input data
# dimension of the image depth
inputs = layers.Input((512, 512, 3), dtype=tf.float32)
axis = 3
# Downsampling layers
# DownRes 1, double residual convolution + pooling
conv_512 = double_conv_layer(inputs, 3, 64, dropout_rate, batch_norm)
pool_256 = layers.MaxPooling2D(pool_size=(2,2))(conv_512)
# DownRes 2
conv_256 = double_conv_layer(pool_256, 3, 2*64, dropout_rate, batch_norm)
pool_128 = layers.MaxPooling2D(pool_size=(2,2))(conv_256)
# DownRes 3
conv_128 = double_conv_layer(pool_128, 3, 4*64, dropout_rate, batch_norm)
pool_64 = layers.MaxPooling2D(pool_size=(2,2))(conv_128)
# DownRes 4
conv_64 = double_conv_layer(pool_64, 3, 8*64, dropout_rate, batch_norm)
pool_32 = layers.MaxPooling2D(pool_size=(2,2))(conv_64)
# DownRes 5, convolution only
conv_32 = double_conv_layer(pool_32, 3, 16*64, dropout_rate, batch_norm)
# Upsampling layers
# UpRes 6, attention gated concatenation + upsampling + double residual convolution
gating_64 = gating_signal(conv_32, 8*64, batch_norm)
att_64 = attention_block(conv_64, gating_64, 8*64)
up_64 = layers.UpSampling2D(size=(2, 2), data_format="channels_last")(conv_32)
up_64 = layers.concatenate([up_64, att_64], axis=axis)
up_conv_64 = double_conv_layer(up_64, 3, 8*64, dropout_rate, batch_norm)
# UpRes 7
gating_128 = gating_signal(up_conv_64, 4*64, batch_norm)
att_128 = attention_block(conv_128, gating_128, 4*64)
up_128 = layers.UpSampling2D(size=(2, 2), data_format="channels_last")(up_conv_64)
up_128 = layers.concatenate([up_128, att_128], axis=axis)
up_conv_128 = double_conv_layer(up_128, 3, 4*64, dropout_rate, batch_norm)
# UpRes 8
gating_256 = gating_signal(up_conv_128, 2*64, batch_norm)
att_256 = attention_block(conv_256, gating_256, 2*64)
up_256 = layers.UpSampling2D(size=(2, 2), data_format="channels_last")(up_conv_128)
up_256 = layers.concatenate([up_256, att_256], axis=axis)
up_conv_256 = double_conv_layer(up_256, 3, 2*64, dropout_rate, batch_norm)
# UpRes 9
gating_512 = gating_signal(up_conv_128, 64, batch_norm)
att_512 = attention_block(conv_512, gating_512, 64)
up_512 = layers.UpSampling2D(size=(2, 2), data_format="channels_last")(up_conv_256)
up_512 = layers.concatenate([up_512, att_512], axis=axis)
up_conv_512 = double_conv_layer(up_512, 3, 64, dropout_rate, batch_norm)
# 1*1 convolutional layers
# valid padding
# batch normalization
# sigmoid nonlinear activation
conv_final = layers.Conv2D(NUM_CLASSES, kernel_size=(1,1))(up_conv_512)
conv_final = layers.BatchNormalization(axis=axis)(conv_final)
conv_final = layers.Activation('sigmoid')(conv_final)
# Model integration
model = models.Model(inputs, conv_final, name="AttentionResUNet")
return model
input_shape=(512,512,3)
model=Attention_ResUNet( input_shape, NUM_CLASSES=1,dropout_rate=0.0, batch_norm=True)
model.summary()
The code for training:
import os
import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2" # set to 1 for warnings and errors
import numpy as np
import cv2
import keras
import keras.utils
from glob import glob
from sklearn.utils import shuffle
import tensorflow as tf
from tensorflow.keras.callbacks import ModelCheckpoint, CSVLogger, ReduceLROnPlateau, EarlyStopping, TensorBoard
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.metrics import Recall, Precision
H = 512
W = 512
from focal_loss import BinaryFocalLoss # for tough-to-classify segment classes
def create_dir(path):
“”" Create a directory. “”"
if not os.path.exists(path):
os.makedirs(path)
def shuffling(x, y):
x, y = shuffle(x, y, random_state=42)
return x, y
def load_data(path):
x = sorted(glob(os.path.join(path, "image", "*.png")))
y = sorted(glob(os.path.join(path, "mask", "*.png")))
return x, y
def read_image(path):
path = path.decode()
x = cv2.imread(path, cv2.IMREAD_COLOR)
x = x/255.0
x = x.astype(np.float32)
return x
def read_mask(path):
path = path.decode()
x = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
x = x/255.0
x = x > 0.5
x = x.astype(np.float32)
x = np.expand_dims(x, axis=-1)
return x
def tf_parse(x, y):
def _parse(x, y):
x = read_image(x)
y = read_mask(y)
return x, y
x, y = tf.numpy_function(_parse, [x, y], [tf.float32, tf.float32])
x.set_shape([H, W, 3])
y.set_shape([H, W, 1])
return x, y
def tf_dataset(x, y, batch=8):
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.map(tf_parse)
dataset = dataset.batch(batch)
dataset = dataset.prefetch(10)
return dataset
if __name__ == "__main__":
""" Seeding """
np.random.seed(42)
tf.random.set_seed(42)
""" Directory for storing files """
create_dir("files")
""" Hyperparameters """
batch_size = 2
lr = 0.002
num_epochs = 60
model_path = os.path.join("files", "model.h5")
csv_path = os.path.join("files", "data.csv")
""" Dataset """
train_path = os.path.join("/content/drive/MyDrive/Data_brain/train/")
valid_path = os.path.join("/content/drive/MyDrive/Data_brain/test/")
train_x, train_y = load_data(train_path)
train_x, train_y = shuffling(train_x, train_y)
valid_x, valid_y = load_data(valid_path)
print(f"Train: {len(train_x)} - {len(train_y)}")
print(f"Valid: {len(valid_x)} - {len(valid_y)}")
train_dataset = tf_dataset(train_x, train_y, batch=batch_size)
valid_dataset = tf_dataset(valid_x, valid_y, batch=batch_size)
""" Model """
model = Attention_ResUNet(input_shape)
metrics = [jacard_coef, Recall(), Precision()]
model.compile(loss=BinaryFocalLoss(gamma=2), optimizer=Adam(lr), metrics=metrics)
callbacks = [
ModelCheckpoint(model_path, verbose=1, save_best_only=True),
#ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=10, min_lr=1e-7, verbose=1),
CSVLogger(csv_path),
TensorBoard(),
#EarlyStopping(monitor='val_loss', patience=50, restore_best_weights=False),
]
model.fit(
train_dataset,
epochs=num_epochs,
validation_data=valid_dataset,
callbacks=callbacks,
shuffle=False)
I am getting the following output while training the model (and the results are absurd):
Train: 1280 - 1280
Valid: 32 - 32
/usr/local/lib/python3.7/dist-packages/keras/utils/generic_utils.py:497: CustomMaskWarning: Custom mask layers require a config and must override get_config. When loading, the custom mask layer must be passed to the custom_objects argument.
category=CustomMaskWarning)
Epoch 1/60
640/640 [==============================] - 279s 414ms/step - loss: 0.0857 - jacard_coef: 0.0047 - recall: 0.0555 - precision: 0.0049 - val_loss: 0.0365 - val_jacard_coef: 0.0044 - val_recall: 0.0000e+00 - val_precision: 0.0000e+00
Epoch 00001: val_loss improved from inf to 0.03647, saving model to files/model.h5
Epoch 2/60
640/640 [==============================] - 263s 411ms/step - loss: 0.0235 - jacard_coef: 0.0045 - recall: 0.0000e+00 - precision: 0.0000e+00 - val_loss: 0.0159 - val_jacard_coef: 0.0043 - val_recall: 0.0000e+00 - val_precision: 0.0000e+00
Epoch 00002: val_loss improved from 0.03647 to 0.01592, saving model to files/model.h5
Epoch 3/60
39/640 [>…] - ETA: 4:05 - loss: 0.0159 - jacard_coef: 0.0045 - recall: 0.0000e+00 - precision: 0.0000e+00 | Hi Aleena,
It's a little bit hard to understand your question. Can you rephrase it a little bit, and maybe highlight the key parts? | 0 |
tensorflow | Research & Models | A TF repository for point-cloud segmentation | https://discuss.tensorflow.org/t/a-tf-repository-for-point-cloud-segmentation/5446 | I wanted to share a project I have been collaborating on with Soumik 1.
We take the problem of segmenting 3D point clouds that are important for modeling geometric properties from data. Today, we are delighted to open-source our repository 4 that implements the PointNet [1] model family for this purpose. We provide TensorFlow implementations with full support for TPUs and distributed training with mixed-precision (for GPUs). We provide models pre-trained on the four categories of the ShapeNet core dataset [2]. Here’s also a blog post 2 we have prepared to make it easier for getting started.
(Example segmentation renders: car, airplane)
As always, don’t hesitate to reach out if you have any questions.
References:
[1] https://arxiv.org/abs/1612.00593 2
[2] https://shapenet.org/ 2 | This is great! I’m always interested in deep learning for new domains and data types. | 0 |
tensorflow | Research & Models | Which solution work best for this case | https://discuss.tensorflow.org/t/which-solution-work-best-for-this-case/5389 | Hello all,
I have a question about which solution fits this case best:
Case:
I have 3 kinds of text blocks: ingredients, preparation and dosage, and I want to classify these types.
I have a lot of data that is already categorized for training.
I hope someone has some papers, GitHub links or, even better, experience with this case.
best regards !! | You can start with this tutorial: Classify structured data with feature columns | TensorFlow Core 2
It demonstrates how to deal with various data types. | 0 |
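Since the question above is specifically about classifying three kinds of text blocks (ingredients, preparation, dosage), a minimal text-classification sketch may be a closer starting point than the structured-data tutorial. The texts and labels below are placeholders; TextVectorization lives under tf.keras.layers in TF 2.6+ (tf.keras.layers.experimental.preprocessing in earlier releases):

import tensorflow as tf

texts = ["100 g sugar, 2 eggs", "Mix and bake for 30 minutes", "Take one tablet twice daily"]
labels = [0, 1, 2]  # 0 = ingredients, 1 = preparation, 2 = dosage

vectorizer = tf.keras.layers.TextVectorization(max_tokens=20000, output_sequence_length=64)
vectorizer.adapt(texts)

model = tf.keras.Sequential([
    vectorizer,
    tf.keras.layers.Embedding(20000, 64, mask_zero=True),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(tf.constant(texts), tf.constant(labels), epochs=3)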
tensorflow | Research & Models | A bit of attention for tfjs #3835 (angles from face mesh) | https://discuss.tensorflow.org/t/a-bit-of-attention-for-tfjs-3835-angles-from-face-mesh/5482 | There is a community ticket, tfjs #3835 - it asks to add a rotation angle for the face mesh.
I have submitted an MR for it, #844.
I certainly understand the tfjs team has its own priorities; however, my MR has been dangling there for two weeks already.
I would be glad if some response could be given, so I can continue with the MR. | Thank you for reminding us. This is a highly sought-after feature and we are working on it. We replied on the ticket. | 0 |
tensorflow | Research & Models | False prediction in-between frames w/ Fast RCNN object detection model | https://discuss.tensorflow.org/t/false-prediction-in-between-frames-w-fast-rcnn-object-detection-model/5396 | Hi,
I trained a Fast RCNN Model to detect water puddle, and the model predicted well. However, there is an issue with the model on decoding a video stream running at 30fps. As shown in the attached images,
frame# 476 - detected a puddle with 100% confidence level
frame# 477 - did not detect anything
frame# 478 - detected the same puddle again at 100% confidence level.
I would like to know if anyone has had a similar experience with the Fast RCNN model, and what you did to fix it?
FYI, I also did training with two other models, MobileNet v2 SSD and ResNet. These two models gave gradual prediction results (the confidence level fluctuates) as the camera is panned over the subject. Fast RCNN behaves erratically; for the most part, the confidence level of the detected object is either > 98% or close to zero. Please share if there is a way to fix this!
frame 476: frame476.jpg - Google Drive 1
frame 477: frame477.jpg - Google Drive 1 | here is the third image:
frame 478: frame478.jpg - Google Drive 1 | 0 |
tensorflow | Research & Models | TF Image Processing? | https://discuss.tensorflow.org/t/tf-image-processing/5166 | I’m looking into implementing high-quality image-processing operations using TF. For example, I’d like to have a higher-quality downsampling method, like Lanczos as a TF model. Please forward any references to this sort of work you are aware of.
For example, a basic Gaussian blur can be implemented by passing a custom-width kernel to tf.conv2d() (I'm using TFJS). This works great, but has the expected issues along the image boundary. Production-quality image processing tools solve this edge problem in one of a few ways, typically by adjusting the kernel weights outside the image to zero. However, I'm not experienced enough to know how to set up different kernels along the image boundaries.
Can anyone provide some tips?
For more context, here’s code that does a simple NxN Gaussian blur, without handling the borders. I’d love to figure out how to enhance this code to provide different kernels along the boundary rows and columns to do a better job of handling the edges (ie. not blending with zero).
const lanczos = (x, a) => {
if (x === 0) return 1
if (x >= -a && x < a) {
return (a * Math.sin(Math.PI * x) * Math.sin(Math.PI * (x / a))) / (Math.PI * Math.PI * x * x)
}
return 0
}
const gaussian = (x, theta = 1 /* ~ -3 to 3 */) => {
const C = 1 / Math.sqrt(2 * Math.PI * theta * theta)
const k = -(x * x) / (2 * theta * theta)
return C * Math.exp(k)
}
const filters = {
Lanczos3: x => lanczos(x, 3),
Lanczos2: x => lanczos(x, 2),
Gaussian: x => gaussian(x, 1),
Bilinear: () => 1,
Nearest: () => 1,
}
const normalizedValues = (size, filter) => {
let total = 0
const values = []
for (let y = -size; y <= size; ++y) {
const i = y + size
values[i] = []
for (let x = -size; x <= size; ++x) {
const j = x + size
values[i][j] = []
const f = filter(x) * filter(y)
total += f
for (let c = 0; c < 3; ++c) {
values[i][j][c] = [ f, f, f ]
}
}
}
const kernel = values.map(row => row.map(col => col.map(a => a.map(b => b / total))))
// for (let x = -size; x <= size; ++x) values[x + size] = filter(x)
// const kernel = tf.einsum('i,j->ij', values, values)
// const sum = tf.sum(values)
const normalized = tf.div(kernel, total * 3)
return normalized
}
const frame = async (tensor, args) => {
const filter = filters[args.filter]
// const [ height, width ] = tensor.shape
// const res = args.resolution === 'Source' ? [ width, height ] : resolutions[args.resolution]
// const strides = [ width / res[0], height / res[1] ]
const { zoom, kernelWidth } = args
const strides = Math.max(1, zoom)
const size = Math.max(3, kernelWidth) * strides
const kernel = normalizedValues(size, filter)
const pad = 'valid' // sample to the edge, even when filter extends beyond image
const dst = tf.conv2d(tensor, kernel, strides, pad)
return { tensor: dst }
} | Dan_Wexler:
I’d love to figure out how to enhance this code to provide different kernels along the boundary rows and columns to do a better job of handling the edges (ie. not blending with zero)
Could you be more specific here?
You always have the option to slice the image into (overlapping) center, edges, and corners, apply the conv to the center portion, apply modified convs to the edges and corners, and then stack them back together in 2D. But there may be a shortcut depending on how you plan to modify the kernel at the edges. | 0 |
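Another common trick, different from the slicing approach suggested above and sketched here in Python/TF rather than TFJS, is to blur with 'SAME' padding and divide by the blur of an all-ones mask; near the border this renormalizes the part of the kernel that falls inside the image, which is equivalent to zeroing the out-of-image weights:

import tensorflow as tf

def gaussian_kernel(size=7, sigma=1.0):
    # Separable 2D Gaussian as a [size, size, 3, 1] depthwise kernel (RGB).
    x = tf.range(-(size // 2), size // 2 + 1, dtype=tf.float32)
    g = tf.exp(-(x ** 2) / (2.0 * sigma ** 2))
    k2d = tf.tensordot(g, g, axes=0)
    k2d = k2d / tf.reduce_sum(k2d)
    return tf.tile(k2d[:, :, None, None], [1, 1, 3, 1])

def blur_with_edge_renorm(image, size=7, sigma=1.0):
    # image: float32 tensor [batch, height, width, 3]
    kernel = gaussian_kernel(size, sigma)
    blurred = tf.nn.depthwise_conv2d(image, kernel, strides=[1, 1, 1, 1], padding='SAME')
    ones = tf.ones_like(image)
    weight = tf.nn.depthwise_conv2d(ones, kernel, strides=[1, 1, 1, 1], padding='SAME')
    return blurred / weight  # weight < 1 near the border, so this renormalizes

image = tf.random.uniform([1, 64, 64, 3])
print(blur_with_edge_renorm(image).shape)  # (1, 64, 64, 3)

The same idea carries over to TFJS with tf.depthwiseConv2d, since only standard convolution ops are involved.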
tensorflow | Research & Models | Is it possible to crack human idea using tensorflow 2.5 or 2.6 | https://discuss.tensorflow.org/t/is-it-possible-to-crack-human-idea-using-tensorflow-2-5-or-2-6/5326 | The benefits of this model would be in the medical field, for example coma. | What is your goal? Can you give an example of your input and expected output? | 0 |
tensorflow | Research & Models | Trying to use Tracker in Movenet Pose detector | https://discuss.tensorflow.org/t/trying-to-use-tracker-in-movenet-pose-detector/4265 | Hi, I'm trying to make a motion-tracking application with MoveNet in React Native.
I confirmed that keypoints are detected and show up in the console, but I am having trouble enabling the tracker.
How can I enable the built-in keypoint tracker in MoveNet?
My source code is attached below:
import React, { useState, useEffect, useCallback, useMemo } from 'react';
import { View, StyleSheet, Platform, TouchableOpacity, Text } from 'react-native';
import Icon from 'react-native-vector-icons/Ionicons'
import { Colors } from 'react-native-paper';
import { Camera } from 'expo-camera';
import * as tf from '@tensorflow/tfjs';
import {cameraWithTensors} from '@tensorflow/tfjs-react-native';
import * as poseDetection from '@tensorflow-models/pose-detection';
import '@tensorflow/tfjs-backend-webgl';
import '@mediapipe/pose';
let coords = []
export const CameraView = () => {
const [hasPermission, setHasPermission] = useState(null);
const [poseDetector, setPoseDetector] = useState(null);
const [frameworkReady, setFrameworkReady] = useState(false);
const backCamera = Camera.Constants.Type.back
const frontCamera = Camera.Constants.Type.front
const [camType, setCamType] = useState(backCamera)
const TensorCamera = cameraWithTensors(Camera);
let requestAnimationFrameId = 0;
const textureDims = Platform.OS === "ios"? { width: 1080, height: 1920 } : { width: 1600, height: 1200 };
const tensorDims = { width: 152, height: 200 };
const iconPressed = useCallback(() => camType === backCamera? setCamType(frontCamera):setCamType(backCamera),[camType])
const model = poseDetection.SupportedModels.MoveNet;
const detectorConfig = {
modelType: poseDetection.movenet.modelType.MULTIPOSE_LIGHTNING,
enableTracking: true,
trackerType: poseDetection.TrackerType.Keypoint,
trackerConfig: {maxTracks: 4,
maxAge: 1000,
minSimilarity: 1,
keypointTrackerParams:{
keypointConfidenceThreshold: 1,
keypointFalloff: [],
minNumberOfKeypoints: 4
}
}
}
const detectPose = async (tensor) =>{
if(!tensor) return
const poses = await poseDetector.estimatePoses(tensor)
if (poses[0] !== undefined) {
const points = poses[0].keypoints.map(point => [point.x,point.y,point.name])
console.log(points)
coords = points
} else {
coords = []
}
///console.log(coords)
}
const handleCameraStream = (imageAsTensors) => {
const loop = async () => {
const nextImageTensor = await imageAsTensors.next().value;
await detectPose(nextImageTensor);
requestAnimationFrameId = requestAnimationFrame(loop);
};
if (true) loop();
}
useEffect(() => {
if(!frameworkReady) {
;(async () => {
const { status } = await Camera.requestPermissionsAsync();
console.log(`permissions status: ${status}`);
setHasPermission(status === 'granted');
await tf.ready();
setPoseDetector(await poseDetection.createDetector(model, detectorConfig))
setFrameworkReady(true);
})();
}
}, []);
useEffect(() => {
return () => {
cancelAnimationFrame(requestAnimationFrameId);
};
}, [requestAnimationFrameId]);
return(
<View style={styles.cameraView}>
<TensorCamera
style={styles.camera}
type={camType}
zoom={0}
cameraTextureHeight={textureDims.height}
cameraTextureWidth={textureDims.width}
resizeHeight={tensorDims.height}
resizeWidth={tensorDims.width}
resizeDepth={3}
onReady={(imageAsTensors) => handleCameraStream(imageAsTensors)}
autorender={true}
>
</TensorCamera>
<TouchableOpacity style={[styles.absoluteView]} activeOpacity={0.1}>
<Icon name="camera-reverse-outline" size={40} color="white" onPress={iconPressed}/>
</TouchableOpacity>
</View>
)
}
const styles = StyleSheet.create({
camera:{flex:1},
cameraView:{flex:1},
absoluteView:{
position:'absolute',
right:30,
bottom: Platform.select({ios:40, android:30}),
padding: 10,
},
tracker:{
position:'absolute',
width:10,
height:10,
borderRadius:5,
backgroundColor: Colors.blue500
}
}) | Hi @11130, you need to lower minSimilarity. The semantics of this field: if the similarity between the current pose and a tracked pose is larger than minSimilarity, then we consider them the same person. 1 is the largest possible similarity score, so if you set it to 1, we only consider it the same person if the current pose is exactly the same as before. The default minSimilarity is 0.15; if you want to use the default, you can omit this field. Same for keypointConfidenceThreshold, where the default is 0.3. You also need to set values for keypointFalloff; the default is [0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072, 0.062, 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089]. I suggest just using the defaults; to do that, simply omit keypointTrackerParams. | 0 |
tensorflow | Research & Models | Implementing Sparse Ternary Compression inside Tensorflow Federated simple_fedavg example | https://discuss.tensorflow.org/t/implementing-sparse-ternary-compression-inside-tensorflow-federated-simple-fedavg-example/5063 | Hi all,
I'm actually trying to implement an algorithm that I found by reading the paper Robust and Communication-Efficient Federated Learning from Non-IID Data by [Simon Wiedemann] [Klaus-Robert Müller] [Wojciech].
I wanted to try to implement it inside the simple_fedavg example offered by TensorFlow Federated. I have already created the algorithm and it seems to work fine in test cases; the real problem is putting it inside the simple_fedavg project. I don't see where I could change what the client sends to the server and what the server expects to receive.
So, basically, from client_update I don't want to send weights_delta; instead I want to send a simple list like [ [list of negative indexes] [list of positive indexes] [average value] ], and then on the server side I will recreate the weights as explained in the paper. But I can't understand how to change this behaviour.
English is not my main language, so I hope I have explained the problem well enough.
test = weights_delta.copy()
for index in range(len(weights_delta)):
original_shape = tf.shape(weights_delta[index])
tensor = tf.reshape(test[index], [-1])
negatives, positives, average = test_stc.stc_compression(tensor, sparsification_rate)
test[index] = test_stc.stc_decompression(negatives, positives, average, tensor.get_shape().as_list(), original_shape)
test[index] = test_stc.stc_compression(tensor, sparsification_rate)
client_weight = tf.cast(num_examples, tf.float32)
return ClientOutput(test, client_weight, loss_sum / client_weight)
This is the behaviour that I would like; stc_compression returns a tuple. Then I would like to access each "test" variable sent from a client on the server side and recreate all the weights. | @lgusm Can we have some TF Federated team members subscribed to this federated tag?
Thanks | 0 |
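For readers trying to follow the question above, here is a hedged, standalone illustration of what such a sparse ternary compression step could look like for one flat tensor. This is my own sketch, not the test_stc module from the post, and the exact sparsification/thresholding in the paper may differ; it only shows the [negative indexes, positive indexes, mean magnitude] representation, not how to wire it into simple_fedavg:

import tensorflow as tf

def stc_compress(flat, sparsity=0.01):
    # Keep the top-k largest-magnitude entries, split them by sign,
    # and represent all of them by the mean absolute value.
    k = max(1, int(sparsity * int(tf.size(flat))))
    _, idx = tf.math.top_k(tf.abs(flat), k=k)
    values = tf.gather(flat, idx)
    mu = tf.reduce_mean(tf.abs(values))
    neg_idx = tf.boolean_mask(idx, values < 0)
    pos_idx = tf.boolean_mask(idx, values >= 0)
    return neg_idx, pos_idx, mu

def stc_decompress(neg_idx, pos_idx, mu, flat_size):
    out = tf.zeros([flat_size])
    out = tf.tensor_scatter_nd_update(out, tf.expand_dims(pos_idx, -1), tf.fill(tf.shape(pos_idx), mu))
    out = tf.tensor_scatter_nd_update(out, tf.expand_dims(neg_idx, -1), tf.fill(tf.shape(neg_idx), -mu))
    return out

delta = tf.random.normal([1000])
neg, pos, mu = stc_compress(delta, sparsity=0.05)
restored = stc_decompress(neg, pos, mu, 1000)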
tensorflow | Research & Models | Efficientdet vs ResNet: Resnet outperforms Efficientdet-B3 | https://discuss.tensorflow.org/t/efficientdet-vs-resnet-resnet-outperforms-efficientdet-b3/4273 | So I tried to swap out ResNet for Efficientdet-B3 in the Eager Few Shot OD Training TF2 tutorial 3.
Now, based on all the positive feedback EfficientDet got, I am very surprised that ResNet outperformed EfficientDet on this tutorial. In total EfficientDet got trained on 1700 batches in the tutorial, while I ran ResNet through the standard 100 batches.
EfficientDet-B3, for the last 1000 of the 1700 batches I ran:
batch 950 of 1000, loss=0.21693243
batch 955 of 1000, loss=0.18070191
batch 960 of 1000, loss=0.1715184
batch 965 of 1000, loss=0.23656633
batch 970 of 1000, loss=0.16813375
batch 975 of 1000, loss=0.23602965
batch 980 of 1000, loss=0.14852181
batch 985 of 1000, loss=0.18400437
batch 990 of 1000, loss=0.22741726
batch 995 of 1000, loss=0.20477971
Done fine-tuning!
ResNet for 100 batches:
batch 0 of 100, loss=1.1079819
batch 10 of 100, loss=0.07644452
batch 20 of 100, loss=0.08746071
batch 30 of 100, loss=0.019333005
batch 40 of 100, loss=0.0071129226
batch 50 of 100, loss=0.00465827
batch 60 of 100, loss=0.0041421074
batch 70 of 100, loss=0.0026128457
batch 80 of 100, loss=0.0023376464
batch 90 of 100, loss=0.002139934
Done fine-tuning!
Why does EfficientDet need so much more training time than ResNet? Is it because the number of parameters is only about 12 million for EfficientDet-B3 (the one I tested) versus about 25 million for ResNet50? Or are there other reasons?
The end result (the .gif at the end of the tutorial) also shows a huge difference in accuracy, where ResNet performs much better.
Thanks for any input! | Hello! When you say "outperform", are you referring to the evaluation/test metrics? You should always monitor the gap between the training and validation losses; they should behave similarly. Then evaluate to see the final performance on a test set. Otherwise ResNet might just be overfitting the data, which is why you get an extremely small loss. Are the losses you posted training losses? | 0 |
tensorflow | Research & Models | [Research ] Nowcasting the Next Hour of Rain (by DeepMind) | https://discuss.tensorflow.org/t/research-nowcasting-the-next-hour-of-rain-by-deepmind/4816 | Deepmind
Nowcasting 3
Our latest research and state-of-the-art model advances the science of Precipitation Nowcasting.
GitHub: https://github.com/deepmind/deepmind-research/tree/master/nowcasting 14
Colab: Google Colab 14
Paper: Skilful precipitation nowcasting using deep generative models of radar | Nature 9
Recently introduced deep learning methods use radar to directly predict future rain rates, free of physical constraints5,6. While they accurately predict low-intensity rainfall, their operational utility is limited because their lack of constraints produces blurry nowcasts at longer lead times, yielding poor performance on rarer medium-to-heavy rain events. Here we present a deep generative model for the probabilistic nowcasting of precipitation from radar that addresses these challenges.
| This is super cool and there's a colab for you to try the model: https://colab.sandbox.google.com/github/deepmind/deepmind-research/blob/master/nowcasting/Open_sourced_dataset_and_model_snapshot_for_precipitation_nowcasting.ipynb#scrollTo=wFD0zFFyuHzH | 0 |
tensorflow | Research & Models | Query while building one Model in tf2 | https://discuss.tensorflow.org/t/query-while-building-one-model-in-tf2/4780 | I am stuck on one problem while defining a model in tf2. I am using one dense layer with a softmax activation function. Now I want to extract the index with the highest probability value from that softmax layer output, so that I can use that index for a later layer definition while building the model. Please help me implement this; waiting for your quick reply. | tf.math.argmax is the TensorFlow operation that will allow you to do this. If you have a Keras model you can use a custom layer, as described in the tutorial: Custom layers | TensorFlow Core
The final code will look something like:
class IndexOfMaxLayer(tf.keras.layers.Layer):
def __init__(self):
super(IndexOfMaxLayer, self).__init__()
def call(self, inputs):
return tf.math.argmax(inputs, axis=-1) # index of the max value along the class axis, per example
Now you can use this code after whichever dense / softmax layer you want to extract the max value from. | 0 |
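A short usage sketch of the layer above in a functional model. One caveat worth noting for the original question: argmax is not differentiable, so gradients will not flow through the extracted index - it can drive routing or be exposed as an output, but not act as a trainable signal:

import tensorflow as tf

class IndexOfMaxLayer(tf.keras.layers.Layer):
    def call(self, inputs):
        return tf.math.argmax(inputs, axis=-1)

inputs = tf.keras.Input(shape=(16,))
probs = tf.keras.layers.Dense(4, activation='softmax')(inputs)
best_class = IndexOfMaxLayer()(probs)  # int64 index of the highest probability per example
model = tf.keras.Model(inputs, [probs, best_class])

p, idx = model(tf.random.normal([2, 16]))
print(idx.numpy())  # e.g. [3 1]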
tensorflow | Research & Models | MLP-Mixers are now on TF-Hub! | https://discuss.tensorflow.org/t/mlp-mixers-are-now-on-tf-hub/4696 | As of today, different variants of MLP-Mixers [1] are now available on TensorFlow Hub. Below are the details:
Models: https://tfhub.dev/sayakpaul/collections/mlp-mixer/1 10
Code: https://git.io/JzR68 7
Here is some example usage:
References:
[1] MLP-Mixer: An all-MLP Architecture for Vision by Tolstikhin et al. 4 | Well done Sayak!!! Great work!! | 0 |
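The usage example referenced above appears to have been a screenshot that did not survive here. Below is a minimal sketch of how a classifier from that TF-Hub collection is typically loaded with hub.KerasLayer; the handle is a placeholder to be replaced with a real one from the collection page, and the 224x224 input size is an assumption that should be checked against the chosen model's documentation:

import tensorflow as tf
import tensorflow_hub as hub

# Placeholder handle: pick an actual model from the mlp-mixer collection on TF Hub.
handle = "https://tfhub.dev/sayakpaul/<mixer-model-of-your-choice>/1"

model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(224, 224, 3)),
    hub.KerasLayer(handle, trainable=False),
])
predictions = model(tf.random.uniform([1, 224, 224, 3]))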
tensorflow | Research & Models | Tensorflow problem: The loss return None, and show Error message:Attempting to capture an EagerTensor without building a function | https://discuss.tensorflow.org/t/tensorflow-problem-the-loss-return-none-and-show-error-message-attempting-to-capture-an-eagertensor-without-building-a-function/4734 | Hi guys, I am trying to implement a model in TensorFlow 2.5.0, but when I run it, the printed loss is 'None' and it shows the error message: "RuntimeError: Attempting to capture an EagerTensor without building a function".
(screenshot of the error output)
Hope you guys can help me find the bug.
This is my model code:
encode model: (screenshot)
decode model: (screenshot)
discriminator model: (screenshot)
training step: (screenshot)
loss function: (screenshots)
Here is what I have checked:
I checked my dataset; there is no None data.
I checked my loss function; there are no np.arrays, I changed them to tf.Tensors.
This is my first time asking a question on this website; if I need to provide other code or information to solve the problem, I will upload it. Thanks. | If you can share a running Colab to reproduce this, it would be ideal. | 0 |
tensorflow | Research & Models | How can I predict the next month with time series? | https://discuss.tensorflow.org/t/how-can-i-predict-the-next-month-with-time-series/4604 | Hello everyone, I have a question related to time series. I have a year of fuel-consumption data for a specific machine, and I would like to predict an approximate total consumption value for the next month. Following the TensorFlow example, the prediction works for me, but only for periods that are already known (checking the prediction); I have modified the time window but have not been able to predict a future date.
I am following this procedure:
https://www.tensorflow.org/tutorials/structured_data/time_series
Any recommendation or idea? I'd really appreciate it!
Thanks, Ricardo. | Hey @Ricardo, can you share the requirements and specifics of your needs, especially regarding the data?
Modeling a time series problem comes down to how you outline the samples and labels. Roughly, from a high-level view, there are two different ways:
Single-step prediction
Multi-step prediction
This is my point of view.
Actually it is about how you want to choose the time window to train your model. | 0 |
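To make the single-step vs. multi-step distinction above concrete, here is a hedged sketch of framing a month-ahead forecast from a year of daily consumption values. The series, window sizes and model are all made up for illustration; the idea is that the label window covers the 30 future days you want, so the model learns to predict beyond the known period:

import numpy as np
import tensorflow as tf

series = np.random.rand(365).astype(np.float32)  # toy daily fuel consumption
input_width, label_width = 90, 30  # ~3 months of history -> next 30 days

ds = tf.keras.preprocessing.timeseries_dataset_from_array(
    data=series, targets=None,
    sequence_length=input_width + label_width, batch_size=32)
ds = ds.map(lambda w: (w[:, :input_width], w[:, input_width:]))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(label_width),  # predicts all 30 future days at once
])
model.compile(loss='mse', optimizer='adam')
model.fit(ds, epochs=5)

# Forecast from the most recent window, then sum for an approximate monthly total.
forecast = model(series[np.newaxis, -input_width:])
print(float(tf.reduce_sum(forecast)))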
tensorflow | Research & Models | Vision Transformers on TF Hub | https://discuss.tensorflow.org/t/vision-transformers-on-tf-hub/4526 | Ever wanted to use Vision Transformers with TFHub and Keras? Well, pull your socks up now and get started. 16 different models are available for classification and fine-tuning. More details:
GitHub - sayakpaul/ViT-jax2tf: This repository hosts code for converting the original Vision Transformer models (JAX) to TensorFlow.
The story does not end here. Using this notebook you can convert any supported model from the AugReg pool (> 50,000 models!) and use that inside TFHub and Keras: https://colab.research.google.com/github/sayakpaul/ViT-jax2tf/blob/main/conversion.ipynb 20.
(code screenshot) | Well done Sayak! Great work as usual!
Just to complement, here is the link to the TFHub collection: TensorFlow Hub | 0 |
tensorflow | Research & Models | How to calculate FLOPs of transformer in tensorflow? | https://discuss.tensorflow.org/t/how-to-calculate-flops-of-transformer-in-tensorflow/4592 | I know that
flops = tf.profiler.profile(graph, options=tf.profiler.ProfileOptionBuilder.float_operation())
can calculate the FLOPs.
But where can I find the graph of transformer?
Please help me. | There Is a quite long thread for this in TF 2.x:
github.com/tensorflow/tensorflow
TF 2.0 Feature: Flops calculation 26
opened
Sep 25, 2019
pzobel
stat:awaiting tensorflower
type:feature
comp:tfdbg
TF 2.0
<em>Please make sure that this is a feature request. As per our [GitHub Policy](…https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md), we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:feature_template</em>
**System information**
- TensorFlow version (you are using): TF 2.0 RC2
- Are you willing to contribute it (Yes/No):
**Describe the feature and the current behavior/state.**
I am missing the opportunity to compute the number of floating point operations of a tf.keras Model in TF 2.0.
In TF 1.x tf.profiler was available [see here](https://stackoverflow.com/questions/45085938) but I can find anything equivalent for TF 2.0 yet.
**Will this change the current api? How?**
**Who will benefit with this feature?**
Everbody interested in the computational complexity of a TensorFlow model.
**Any Other info.** | 0 |
tensorflow | Research & Models | Model for sentiment analysis in Nigerian (and other) languages | https://discuss.tensorflow.org/t/model-for-sentiment-analysis-in-nigerian-and-other-languages/4360 | Hi. Very new to this forum and more to Tensorflow, and I had a quick question that I hope someone can help me in.
Basically, I’m working with a set of languages (hausa, yoruba and igbo) that do not have a reliable sentiment analysis model to process text with - unless I missed something. What I want is to create a custom model for each of these languages where the model scores and returns the sentiment of a sentence as accurately as possible.
I'm not sure how to approach this. What I did first was get a training dataset of text with human-scored sentiment, vectorize the text, and create a model (using an Embedding layer). The accuracy wasn't the best, but I don't know if that is the way to continue. Selecting the right hyperparameters seems like a separate job on its own.
Can anyone recommend on how you might approach this? And if there’s documentation on how a sentiment analysis model using these languages (or any non-English) language is created?
Any help would be appreciated. Thanks. | Hi, welcome to the forum, Is this for
https://lacunafund.org/language-2020-awards/ 2 | 0 |
tensorflow | Research & Models | KerasTuner & TensorFlow Decision Forests | https://discuss.tensorflow.org/t/kerastuner-tensorflow-decision-forests/4213 | Hi,
Is there a way to use KerasTuner on tensorflow_decision_forests?
Any tutorial?
Thanks
Fadi | Out of interest I checked that the basic KerasTuner logic described in the tutorial here (Getting started with KerasTuner 5) works with decision forest model the same way as with neural networks.
def build_model(hp):
"""Function initializes the model and defines search space.
:param hp: Hyperparameters
:return: Compiled TensorFlow model
"""
model = tfdf.keras.GradientBoostedTreesModel(
num_trees=hp.Int('num_trees', min_value=10, max_value=510, step=50),
max_depth=hp.Int('max_depth', min_value=3, max_value=16, step=1))
model.compile(metrics=['accuracy'])
return model
tuner = kt.RandomSearch(
build_model,
objective='val_loss',
max_trials=5)
tuner.search(X_train, y_train, epochs=1, validation_data=(X_valid, y_valid)) | 0 |
tensorflow | Research & Models | Help understanding fine-tuning tutorial | https://discuss.tensorflow.org/t/help-understanding-fine-tuning-tutorial/4200 | Sorry for spamming the forum, but I have problems understanding the Eager Few Shot OD Training TF2 tutorial.
For this part:
detection_model = model_builder.build(
model_config=model_config, is_training=True)
# Set up object-based checkpoint restore --- RetinaNet has two prediction
# `heads` --- one for classification, the other for box regression. We will
# restore the box regression head but initialize the classification head
# from scratch (we show the omission below by commenting out the line that
# we would add if we wanted to restore both heads)
fake_box_predictor = tf.compat.v2.train.Checkpoint(
_base_tower_layers_for_heads=detection_model._box_predictor._base_tower_layers_for_heads,
# _prediction_heads=detection_model._box_predictor._prediction_heads,
# (i.e., the classification head that we *will not* restore)
_box_prediction_head=detection_model._box_predictor._box_prediction_head,
)
fake_model = tf.compat.v2.train.Checkpoint(
_feature_extractor=detection_model._feature_extractor,
_box_predictor=fake_box_predictor)
ckpt = tf.compat.v2.train.Checkpoint(model=fake_model)
ckpt.restore(checkpoint_path).expect_partial()
# Run model through a dummy image so that variables are created
image, shapes = detection_model.preprocess(tf.zeros([1, 640, 640, 3]))
prediction_dict = detection_model.predict(image, shapes)
_ = detection_model.postprocess(prediction_dict, shapes)
print('Weights restored!')
I don't see how we actually restore the weights. As far as I can understand, we create a checkpoint called fake_model that takes features from the model itself (the bare ssd_resnet50 architecture with no weights, except for random initial values).
We run restore on the provided checkpoint, but this is not linked to the model (detection_model) that is going to be trained in any way? Hence, we call restore on a checkpoint that is not linked to the model we are going to train?
So the model (detection_model) does not contain any of the weights from the checkpoint file.
In my mind this should be:
fake_box_predictor = tf.compat.v2.train.Checkpoint(
_base_tower_layers_for_heads=detection_model._box_predictor._base_tower_layers_for_heads,
# _prediction_heads=detection_model._box_predictor._prediction_heads,
# (i.e., the classification head that we *will not* restore)
_box_prediction_head=detection_model._box_predictor._box_prediction_head,
)
fake_model = tf.compat.v2.train.Checkpoint(
_feature_extractor=detection_model._feature_extractor,
_box_predictor=fake_box_predictor,
model=detection_model)
fake_model.restore(checkpoint_path).expect_partial()
Thanks for any help and clarification! | It took me some time to understand this too!
Think of it like this:
detection_model is loaded from a configuration with random weights
this structure is used as the base for fake_box_predictor and fake_model.
the weights are loaded on fake_model. detection_model is part of fake_model, so its weights will also be populated on the load.
finally, run a fake image through detection_model so that everything is structured properly
Does it make sense? | 0 |
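The key point in the explanation above is that tf.train.Checkpoint restores by object reference, so restoring into a wrapper also populates the layers it points to. A tiny standalone demonstration of that mechanism (unrelated to the detection model itself):

import tensorflow as tf

layer = tf.keras.layers.Dense(2)
layer.build((None, 3))
layer.kernel.assign(tf.ones_like(layer.kernel))

# Save through one wrapper object...
path = tf.train.Checkpoint(my_layer=layer).save('/tmp/obj_ckpt_demo')

# ...zero the weights, then restore through a *new* wrapper that points at the
# same layer object: the layer's variables get repopulated.
layer.kernel.assign(tf.zeros_like(layer.kernel))
tf.train.Checkpoint(my_layer=layer).restore(path)
print(layer.kernel.numpy())  # back to all ones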
tensorflow | Research & Models | [Research ] Finetuned Language Models Are Zero-Shot Learners (by Google Research) | https://discuss.tensorflow.org/t/research-finetuned-language-models-are-zero-shot-learners-by-google-research/4206 | arXiv: [2109.01652] Finetuned Language Models Are Zero-Shot Learners 31
This paper explores a simple method for improving the zero-shot learning abilities of language models. We show that instruction tuning—finetuning language models on a collection of tasks described via instructions—substantially boosts zero-shot performance on unseen tasks. We take a 137B parameter pretrained language model and instruction-tune it on over 60 NLP tasks verbalized via natural language instruction templates. We evaluate this instruction tuned model, which we call FLAN, on unseen task types. FLAN substantially improves the performance of its unmodified counterpart and surpasses zero-shot 175B GPT-3 on 19 of 25 tasks that we evaluate. FLAN even outperforms few-shot GPT-3 by a large margin on ANLI, RTE, BoolQ, AI2-ARC, OpenbookQA, and StoryCloze. Ablation studies reveal that number of tasks and model scale are key components to the success of instruction tuning.
Language models (LMs) at scale, such as GPT-3 (Brown et al., 2020), have been shown to perform few-shot learning remarkably well. They are less successful at zero-shot learning, however. For example, GPT-3’s zero-shot performance is much worse than few-shot performance on tasks such as reading comprehension, question answering, and natural language inference. One potential reason is that, without few-shot exemplars, it is harder for models to perform well on prompts that are not similar to the format of the pretraining data.
…
Our empirical results underscore the ability of language models to perform tasks described using natural language instructions. More broadly, as shown in Figure 2, instruction tuning combines appealing characteristics of the pretrain–finetune and prompting paradigms by using supervision via finetuning to improve the ability of language models to respond to inference-time text interactions.
Model architecture and pretraining. In our experiments, we use a dense left-to-right, decoder-only transformer language model of 137B parameters. This model is pretrained on a collection of web documents (including those with computer code), dialog data, and Wikipedia tokenized into 2.81T BPE tokens with a vocabulary of 32K tokens using the SentencePiece library (Kudo & Richardson, 2018). Approximately 10% of the pretraining data was non-English. This dataset is not as clean as the GPT-3 training set and also has a mixture of dialog and code, and so we expect the zero and few-shot performance of this pretrained LM on NLP tasks to be slightly lower. We henceforth refer to this pretrained model as Base LM. This same model was also previously used for program synthesis (Austin et al., 2021).
Instruction tuning procedure. FLAN is the instruction-tuned version of Base LM. Our instruction tuning pipeline mixes all datasets and randomly samples examples from each dataset. Some datasets have more than ten million training examples (e.g., translation), and so we limit the number of training examples per dataset to 30,000. Other datasets have few training examples (e.g., CommitmentBank only has 250), and so to prevent these datasets from being marginalized, we follow the examples-proportional mixing scheme (Raffel et al., 2020) with a mixing rate maximum of 3,000.3 We finetune all models for 30,000 gradient updates at a batch size of 8,192 using the Adafactor Optimizer (Shazeer & Stern, 2018) with a learning rate of 3e-5. The input and target sequence lengths used in our finetuning procedure are 1024 and 256 respectively. We use packing (Raffel et al., 2020) to combine multiple training examples into a single sequence, separating inputs from targets using a special end-of-sequence token.
| Source code for loading the instruction tuning dataset used for FLAN is made publicly available at https://github.com/google-research/flan. But it isn't. | 0 |
tensorflow | Research & Models | Unable to read TFRecord using tf.data.TFRecordDataset | https://discuss.tensorflow.org/t/unable-to-read-tfrecord-using-tf-data-tfrecorddataset/3962 | I am trying to read a TFRecord file like this:
dataset = tf.data.TFRecordDataset("./tfrecords/train.record").map(_extract_fn).batch(3)
However, when I run
features, labels = iter(dataset).next()
I get this error:
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot add tensor to the batch: number of elements does not match. Shapes are: [tensor]: [4], [batch]: [2] [Op:IteratorGetNext]
This is the function that parses the TFRecord file:
features = {
'image/height': tf.io.FixedLenFeature([],tf.int64),
'image/width': tf.io.FixedLenFeature([], tf.int64),
'image/filename': tf.io.VarLenFeature(tf.string),
'image/id': tf.io.FixedLenFeature([], tf.string),
'image/encoded': tf.io.FixedLenFeature([],tf.string),
'image/format': tf.io.FixedLenFeature([], tf.string),
'image/object/class/text': tf.io.VarLenFeature(tf.string),
'image/object/class/label': tf.io.VarLenFeature(tf.int64),
'image/object/bbox/xmin': tf.io.VarLenFeature(tf.float32),
'image/object/bbox/xmax': tf.io.VarLenFeature(tf.float32),
'image/object/bbox/ymin': tf.io.VarLenFeature(tf.float32),
'image/object/bbox/ymax': tf.io.VarLenFeature(tf.float32),
}
sample = tf.io.parse_single_example(tfrecord, features)
data = {}
data["image/encoded"] = tf.image.decode_jpeg(sample["image/encoded"], channels=3)
label = sample['image/object/class/label'].values
return data,label
If I write return data instead and only set features = iter(dataset).next() it works fine.
What is the issue here?
Thanks for any help! | The label has a variable length:
TensorOverflow:
'image/object/class/label': tf.io.VarLenFeature(tf.int64),
So when you try to .batch the dataset it can’t pack the different sized tensors together.
You either want to use .padded_batch or .apply(tf.data.experimental.dense_to_ragged_batch(...)) | 0 |
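For example, with the parsing function from the question left unchanged, the two options could look like the sketch below. dense_to_ragged_batch keeps the variable-length labels (and differently sized images) as ragged tensors, while padded_batch pads each component to the largest shape in the batch:

import tensorflow as tf

dataset = tf.data.TFRecordDataset("./tfrecords/train.record").map(_extract_fn)

# Option 1: ragged batching
ragged = dataset.apply(tf.data.experimental.dense_to_ragged_batch(batch_size=3))

# Option 2: pad labels (and images) to the largest shape in each batch
padded = dataset.padded_batch(
    3, padded_shapes=({"image/encoded": [None, None, 3]}, [None]))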
tensorflow | Research & Models | ValueError: DepthwiseConv2D requires the stride attribute to contain 4 values, but got: 3 | https://discuss.tensorflow.org/t/valueerror-depthwiseconv2d-requires-the-stride-attribute-to-contain-4-values-but-got-3/3965 | I'm trying to import into TensorFlow an ONNX saved model from a PyTorch implementation of EfficientDet-D0, and I get the following error when I try to run a prediction or export the model:
ValueError: DepthwiseConv2D requires the stride attribute to contain 4 values, but got: 3 for '{{node depthwise}} = DepthwiseConv2dNative[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], explicit_paddings=[], padding="VALID", strides=[1, 1, 1]](transpose_4, Reshape_2)' with input shapes: [?,?,?,?], [3,3,32,1].
Code:
onnx_model = onnx.load(PATH + 'model.onnx')
onnx.checker.check_model(onnx_model)
tf_rep = prepare(onnx_model)
tf_rep.export_graph("d0_s256_b32_ep400_TF.pb")
or
onnx_model = onnx.load(PATH + 'model.onnx')
onnx.checker.check_model(onnx_model)
tf_rep = prepare(onnx_model)
tf_rep.run(Variable(torch.randn(32,3,256,256, dtype=torch.float32))) | I suggest you post this on the Linux Foundation ONNX Slack:
https://onnx.ai/slack.html 6 | 0 |
tensorflow | Research & Models | Error while converting keras model to tensorflow saved model | https://discuss.tensorflow.org/t/error-while-converting-keras-model-to-tensorflow-saved-model/3930 | I am able to get the Keras model in h5 format for Mask R-CNN.
I have tried 2 approaches:
1. While trying to convert the Keras model to a TensorFlow saved_model using GitHub - bendangnuksung/mrcnn_serving_ready: 🛠 Converting Mask R-CNN Keras model to Tensorflow, I get the following error: File "main.py", line 113, in
model.load_weights(H5_WEIGHT_PATH, by_name=True)
File "/home/ubuntu/Downloads/Blaize/mrcnn_serving_ready-master/model.py", line 2131, in load_weights
saving.load_weights_from_hdf5_group_by_name(f, layers)
File "/home/ubuntu/Downloads/venv/lib/python3.6/site-packages/keras/engine/saving.py", line 1328, in load_weights_from_hdf5_group_by_name
str(weight_values[i].shape) + '.')
ValueError: Layer #389 (named "mrcnn_bbox_fc"), weight <tf.Variable 'mrcnn_bbox_fc/kernel:0' shape=(1024, 24) dtype=float32, numpy=
array([[-0.05486 , -0.03290214, 0.05897582, …, -0.05898178,
-0.06868616, 0.05374715],
[-0.06710163, -0.03682471, -0.03057443, …, -0.05611433,
-0.04561458, 0.05178914],
[-0.0041154 , -0.07344876, -0.06137543, …, 0.0011842 ,
0.04365869, -0.05199062],
…,
[ 0.06231805, -0.02443966, -0.00532094, …, -0.01833269,
-0.02245103, -0.01552512],
[-0.04047406, -0.06753345, 0.02390008, …, 0.01883602,
-0.04362615, -0.05265519],
[ 0.00530255, 0.04341973, 0.03085093, …, -0.07011634,
0.01440722, 0.02777647]], dtype=float32)> has shape (1024, 24), but the saved weight has shape (1024, 324).
2. In the second method, using https://github.com/amir-abdi/keras_to_tensorflow, I got an error:
ValueError('Unknown ' + printable_module_name + ': ' + class_name)
ValueError: Unknown layer: BatchNorm | I’m not sure I understand your question correctly. Does the error occur when you are saving the keras model into .h5 file or loading it from it?
If you are loading a model from a file and see messages about unknown layers, it means that the original model contained some custom layers. You need to import them or initialize them from scratch in your new code before loading the model. | 0 |
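For the "Unknown layer: BatchNorm" part, a hedged sketch of passing the custom layer via custom_objects when loading (the module path and file name below are assumptions based on the Matterport-style mrcnn code):
import tensorflow as tf
from mrcnn.model import BatchNorm  # assumption: the custom layer is defined in the repo's model.py

model = tf.keras.models.load_model(
    "mask_rcnn_model.h5",                      # hypothetical path to your saved model
    custom_objects={"BatchNorm": BatchNorm},   # register the custom layer by name
    compile=False,
)
For the first error, the (1024, 24) vs (1024, 324) shape mismatch usually means the number of classes in the config (apparently 6 here) does not match the checkpoint (81, i.e. COCO's 80 classes plus background), since mrcnn_bbox_fc has NUM_CLASSES * 4 outputs; the config has to match the saved weights.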
tensorflow | Research & Models | Poisson matrix factorization in TFP with variational inference and surrogate posterior | https://discuss.tensorflow.org/t/poisson-matrix-factorization-in-tfp-with-variational-inference-and-surrogate-posterior/3985 | Dear Tensorflow community,
I am new to TensorFlow and have thus encountered a problem when I try to implement Poisson matrix factorization (on the Wisconsin Breast Cancer dataset). The problem is the following:
I wish to build a gamma/poisson version of the PPCA model described here:
TensorFlow
Probabilistic PCA | TensorFlow Probability 1
I have defined my model with gamma and Poisson distributions for the posterior, i.e. the target log joint probability, as well as initialising the two latent variables in my model (u, v). That is:
N, M = x_train.shape
L = 5
min_scale = 1e-5
mask = 1-holdout_mask
# Number of data points = N, data dimension = M, latent dimension = L
def pmf_model(M, L, N, gamma_prior = 0.1, mask = mask):
v = yield tfd.Gamma(concentration = gamma_prior * tf.ones([L, M]),
rate = gamma_prior * tf.ones([L, M]),
name = "v") # parameter
u = yield tfd.Gamma(concentration = gamma_prior * tf.ones([N, L]),
rate = gamma_prior * tf.ones([N, L]),
name = "u") # local latent variable
x = yield tfd.Poisson(rate = tf.multiply(tf.matmul(u, v), mask), name="x") # (modeled) data
pmf_model(M = M, L = L, N = N, mask = mask)
concrete_pmf_model = functools.partial(pmf_model,
M = M,
L = L,
N = N,
mask = mask)
model = tfd.JointDistributionCoroutineAutoBatched(concrete_pmf_model)
# Initialize v and u as a tensorflow variable
v = tf.Variable(tf.random.gamma([L, M], alpha = 0.1))
u = tf.Variable(tf.random.gamma([N, L], alpha = 0.1))
# target log joint porbability
target_log_prob_fn = lambda v, u: model.log_prob((v, u, x_train))
# Initialize v and u as a tensorflow variable
v = tf.Variable(tf.random.gamma([L, M], alpha = 0.1))
u = tf.Variable(tf.random.gamma([N, L], alpha = 0.1))
Then I need to state trainable variables/ parameters, which I do in the following (possibly wrong) way:
qV_variable0 = tf.Variable(tf.random.uniform([L, M]))
qU_variable0 = tf.Variable(tf.random.uniform([N, L]))
qV_variable1 = tf.maximum(tfp.util.TransformedVariable(tf.random.uniform([L, M]),
bijector=tfb.Softplus()), min_scale)
qU_variable1 = tf.maximum(tfp.util.TransformedVariable(tf.random.uniform([N, L]),
bijector=tfb.Softplus()), min_scale)
Ultimately, I make my model for the surrogate posterior and estimate the losses and trainable parameters:
def factored_pmf_variational_model():
qv = yield tfd.TransformedDistribution(distribution = tfd.Normal(loc = qV_variable0,
scale = qV_variable1),
bijector = tfb.Exp(),
name = "qv")
qu = yield tfd.TransformedDistribution(distribution = tfd.Normal(loc = qU_variable0,
scale = qU_variable1),
bijector = tfb.Exp(),
name = "qu")
surrogate_posterior = tfd.JointDistributionCoroutineAutoBatched(
factored_pmf_variational_model)
losses = tfp.vi.fit_surrogate_posterior(
target_log_prob_fn,
surrogate_posterior=surrogate_posterior,
optimizer=tf.optimizers.Adam(learning_rate=0.05),
num_steps=500)
My code does NOT give an error; however, after running the entire script stated here, my trainable parameters qV_variable0 and qU_variable0 are NaN. Could a kind person tell me why it goes wrong? It would be lovely to see a demonstration of how to use bijectors in the correct manner in TensorFlow Probability distributions with models estimated using variational inference. Please also let me know if it is my target model or my surrogate posterior understanding that is wrong.
Thank you so much in advance! | maybe @Christopher_Suter might be able to help | 0 |
tensorflow | Research & Models | Custom loss function | https://discuss.tensorflow.org/t/custom-loss-function/3934 | I want to use a pretrained TensorFlow model inside a custom loss function to train another model. Is it possible to do that? it seems that I cannot run a sess in a graph. | Hi Leonard, can you clarify a little bit.
Do you want to use the pretrained TF model to be the ground truth and those values are compared to what your model is generating?
If so, could you instead cache the pretrained model's results and use that instead? | 0 |
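In TF2 there is no session to run; a frozen pretrained network can simply be called inside the loss function. A hedged sketch (the choice of VGG16 and the perceptual-loss setup are illustrative, not from the thread):
import tensorflow as tf

feature_net = tf.keras.applications.VGG16(include_top=False, pooling="avg")
feature_net.trainable = False  # keep the pretrained weights frozen

def perceptual_loss(y_true, y_pred):
    # compare predictions and targets in the pretrained model's feature space
    f_true = feature_net(tf.keras.applications.vgg16.preprocess_input(y_true * 255.0))
    f_pred = feature_net(tf.keras.applications.vgg16.preprocess_input(y_pred * 255.0))
    return tf.reduce_mean(tf.square(f_true - f_pred))

# model.compile(optimizer="adam", loss=perceptual_loss)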
tensorflow | Research & Models | Open Source MoveNet? | https://discuss.tensorflow.org/t/open-source-movenet/3829 | Hi MoveNet developers,
I've been following MoveNet for a while and I love how you are creating really useful and performant models for mobile applications. I have a use case where I would like to fine-tune/modify the model (e.g. modify the number of keypoints, provide more targeted training from a custom dataset).
Is there any plan to open-source/enable fine-tuning of the MoveNet models? I'm relatively new to the ML field (coming from SW), and looking at TF Hub suggests that it's not fine-tunable. (I did try to import/inspect the model in TF and it looked pretty black-box.) I searched around for open-source implementations and there's one written in PyTorch:
github.com
GitHub - lee-man/movenet: Un-official implementation of MoveNet from Google 20
Un-official implementation of MoveNet from Google. Contribute to lee-man/movenet development by creating an account on GitHub.
I've no idea how accurate this implementation is, but I guess I have a few options.
Use PyTorch implementation as baseline
Port this implementation/build in TF
Ask original devs for generosity
Let me know what you guys have planned, and any suggestions!
Thanks,
Daniel | Hello Daniel,
Thanks a lot for your interest in using MoveNet! You are right the MoveNet releases are currently not fine-tunable. But the model implementation is actually based on the Tensorflow Object Detection API 7 so you should be able to access the model code and modify it for your own purpose. As mentioned in the documentation, the MoveNet model is based on CenterNet with some modifications in the postprocessing logics and ops re-write to boost the performance. The closest model architecture can be found here 24. You can also find some example training configs in the /models/research/object_detection/configs/tf2/ folder.
I’d recommend you to start with the above codebase and modify from it. There are a few internal modifications which have not been released yet. We are still working on it and will let you know once it is there. Thanks. | 0 |
tensorflow | Research & Models | Fine-tuning Tensorflow 2 Model Zoo models? | https://discuss.tensorflow.org/t/fine-tuning-tensorflow-2-model-zoo-models/3911 | Hi,
Can someone from the Tensorflow Team please provide information on how to fine-tune a model from the Tensorflow 2 Model Zoo when training it for object detection?
I have read that there is a freeze_variables parameter in the pipeline.config file, but is this still usable?
Let's say you want to freeze all the layers except the top 10; how can this be done when doing object detection?
Thanks! | You could check Adding community contributed guides by sayakpaul · Pull Request #9271 · tensorflow/models · GitHub 20 | 0 |
tensorflow | Research & Models | Can we talk about TF-Hub downloading to TensorflowJS | https://discuss.tensorflow.org/t/can-we-talk-about-tf-hub-downloading-to-tensorflowjs/3842 | Can someone comment on V5 downloading of TF-HUB models to TensorflowJS, I am not sure but things seem to have changed in the last year. I am most interested in fine-tuning which I thought needed to be uploaded using
const model = await tf.loadLayersModel
and frozen models converted from Tensorflow saved models could only be uploaded using
const model = await tf.loadGraphModel(modelUrl, {fromTFHub: true});
but V5 on TF-HUB seems to have fine-tunable Tensorflow Saved models, but the JS v1 and V3 versions only show the GraphModel load method which I thought was for frozen models.
tf-hub link here 1
I guess my question is, how do I find fine-tunable Layers models from TF-HUB?
Searching more I did find a JS(V5) here 1
but it is using the Graph Model Load which as far as I know is frozen
const model = await tf.loadGraphModel(
'https://tfhub.dev/google/tfjs-model/imagenet/mobilenet_v3_large_100_224/feature_vector/5/default/1',
{ fromTFHub: true });
Any opinions? I want to search, find and load layers models from TF-HUB? | @lgusm May know more about TFHub discoverability for models | 0 |
tensorflow | Research & Models | I want to develop an astronomical app using tensorflow to detect exoplanets | https://discuss.tensorflow.org/t/i-want-to-develop-an-astronomical-app-using-tensorflow-to-detect-exoplanets/3335 | Greetings everyone,
I am a computer science student. For my graduation thesis, I want to work on a project that can detect exoplanets using machine learning. If you want to join me, want to support me, please do not hesitate to contact me. | Hi. It will be a pleasure to help you in your project | 0 |
tensorflow | Research & Models | Feature extraction at specific height | https://discuss.tensorflow.org/t/feature-extraction-at-specific-height/3752 | I will be using the EfficeintNet-D7 model through TensorFlow to count the number of objects in an image taken from the sky. I want to extract objects that only appear at 3ft or higher in an image (the object matches the ground so the model keeps incorrectly selecting it). How would I write this into the model? | Do you mean something like this?
arXiv.org
IM2HEIGHT: Height Estimation from Single Monocular Imagery via Fully Residual... 2
In this paper we tackle a very novel problem, namely height estimation from a single monocular remote sensing image, which is inherently ambiguous, and a technically ill-posed problem, with a large source of uncertainty coming from the overall scale.... | 0 |
tensorflow | Research & Models | Deep Learning for signal processing problem | https://discuss.tensorflow.org/t/deep-learning-for-signal-processing-problem/2280 | Hi there,
I am working on a loudspeaker that can generate audible sound waves in a special medium, a fluid gel. Since the equations for the signal processing part are not completely understood yet, I want to use a deep learning script to help. It is not complicated; the only problem is that I am very new to TensorFlow and do not really have an idea of how to implement it, although I have been working on it for a week.
So my problem is this: the loudspeaker has to generate ultrasonic modulated signals; the medium is nonlinear and therefore demodulates them, so they become audible again in the medium. I can make many samples of some random signals and recorded samples, or hook the PC up to the loudspeaker and microphone so that TensorFlow can "learn" by itself.
I just have no clue how to do this; are there any sample projects you know of? I am very thankful for any help | I don't know if you could find this useful for your project:
github.com
magenta/ddsp 12
DDSP: Differentiable Digital Signal Processing. Contribute to magenta/ddsp development by creating an account on GitHub. | 0 |
tensorflow | Research & Models | Using deep learning to color blood smear images | https://discuss.tensorflow.org/t/using-deep-learning-to-color-blood-smear-images/3596 | Hi, i wanted to build a deep learning model to color white blood cells and platelets, without actually staining them. For instance, i input a blood smear image without any colors added, and the model should output the WBCs and platelets colored as if it was stained by hand. What would be the best way to build such a model?
Thanks! | You can start with:
https://lup.lub.lu.se/student-papers/search/publication/8998594 4
But generally you can adapt and experiment with many segmentation models.
E.g. for U-NET style:
http://www.worldascience.com/journals/index.php/wassn/article/view/24/14 2 | 0 |
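As a starting point, a minimal hedged sketch of the U-Net-style idea for this image-to-image task (unstained image in, "virtually stained" image out); real work would need more depth, data augmentation and a carefully aligned dataset of unstained/stained pairs:
import tensorflow as tf
from tensorflow.keras import layers

def tiny_unet(input_shape=(256, 256, 3)):
    inp = tf.keras.Input(shape=input_shape)
    # encoder
    c1 = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(64, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D()(c2)
    # bottleneck
    b = layers.Conv2D(128, 3, padding="same", activation="relu")(p2)
    # decoder with skip connections
    u1 = layers.Concatenate()([layers.UpSampling2D()(b), c2])
    c3 = layers.Conv2D(64, 3, padding="same", activation="relu")(u1)
    u2 = layers.Concatenate()([layers.UpSampling2D()(c3), c1])
    c4 = layers.Conv2D(32, 3, padding="same", activation="relu")(u2)
    # 3-channel "virtually stained" output
    out = layers.Conv2D(3, 1, activation="sigmoid")(c4)
    return tf.keras.Model(inp, out)

model = tiny_unet()
model.compile(optimizer="adam", loss="mae")  # pixel-wise loss against hand-stained targets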
tensorflow | Research & Models | ZeroDivisionError with Dormand-Price ODE Solver Gradients | https://discuss.tensorflow.org/t/zerodivisionerror-with-dormand-price-ode-solver-gradients/3377 | Hello,
I’m trying to implement a mechanistic model using TensorFlow which will be used as part of a GAN, based on the approach shown in this paper: [2009.08267] Integration of AI and mechanistic modeling in generative adversarial networks for stochastic inverse problems 1
The mechanistic model uses a TF Dormand-Prince solver to solve a set of differential equations which yield pressure waveforms for different regions of the cardiovascular system. I want to get gradients of the waveforms with respect to parameters of the mechanistic model for training the generator of the GAN.
A couple of my differential equations incorporate a variable which is time-varying (piecewise but continuous, no “sharp corners”) and which is computed from a subset of the parameters to the mechanistic model. If I set this variable to a constant, I can get gradients of the waveforms wrt model parameters. However, if I keep this variable as time-varying, then I get a ZeroDivisionError when I try to compute the gradients.
Any idea why this error might appear? I have included a stack trace below.
Thanks a lot for your help!
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)
in
----> 1 dy6_dX = tape.gradient(y6, X)
~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow/python/eager/backprop.py in gradient(self, target, sources, output_gradients, unconnected_gradients)
1078 output_gradients=output_gradients,
1079 sources_raw=flat_sources_raw,
→ 1080 unconnected_gradients=unconnected_gradients)
1081
1082 if not self._persistent:
~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow/python/eager/imperative_grad.py in imperative_grad(tape, target, sources, output_gradients, sources_raw, unconnected_gradients)
75 output_gradients,
76 sources_raw,
—> 77 compat.as_str(unconnected_gradients.value))
~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow/python/ops/custom_gradient.py in actual_grad_fn(*result_grads)
472 “@custom_gradient grad_fn.”)
473 else:
→ 474 input_grads = grad_fn(*result_grads)
475 variable_grads = []
476 flat_grads = nest.flatten(input_grads)
~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow_probability/python/math/ode/base.py in grad_fn(*dresults, **kwargs)
454 initial_time=result_time_array.read(initial_n),
455 initial_state=make_augmented_state(initial_n,
→ 456 terminal_augmented_state),
457 )
458
~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow_probability/python/math/ode/dormand_prince.py in _initialize_solver_internal_state(self, ode_fn, initial_time, initial_state)
307 p = self._prepare_common_params(initial_state, initial_time)
308
→ 309 initial_derivative = ode_fn(p.initial_time, p.initial_state)
310 initial_derivative = tf.nest.map_structure(tf.convert_to_tensor,
311 initial_derivative)
~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow_probability/python/math/ode/base.py in augmented_ode_fn(backward_time, augmented_state)
388 adjoint_constants_ode) = tape.gradient(
389 adjoint_dot_derivatives, (state, tuple(variables), constants),
→ 390 unconnected_gradients=tf.UnconnectedGradients.ZERO)
391 return (negative_derivatives, adjoint_ode, adjoint_variables_ode,
392 adjoint_constants_ode)
~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow/python/eager/backprop.py in gradient(self, target, sources, output_gradients, unconnected_gradients)
1078 output_gradients=output_gradients,
1079 sources_raw=flat_sources_raw,
→ 1080 unconnected_gradients=unconnected_gradients)
1081
1082 if not self._persistent:
~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow/python/eager/imperative_grad.py in imperative_grad(tape, target, sources, output_gradients, sources_raw, unconnected_gradients)
75 output_gradients,
76 sources_raw,
—> 77 compat.as_str(unconnected_gradients.value))
~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow/python/eager/backprop.py in _gradient_function(op_name, attr_tuple, num_inputs, inputs, outputs, out_grads, skip_input_indices, forward_pass_name_scope)
157 gradient_name_scope += forward_pass_name_scope + “/”
158 with ops.name_scope(gradient_name_scope):
→ 159 return grad_fn(mock_op, *out_grads)
160 else:
161 return grad_fn(mock_op, *out_grads)
~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow/python/ops/array_grad.py in _ConcatGradV2(op, grad)
228 def _ConcatGradV2(op, grad):
229 return _ConcatGradHelper(
→ 230 op, grad, start_value_index=0, end_value_index=-1, dim_index=-1)
231
232
~/.conda/envs/mpk/lib/python3.7/site-packages/tensorflow/python/ops/array_grad.py in _ConcatGradHelper(op, grad, start_value_index, end_value_index, dim_index)
117 # in concat implementation to be within the allowed [-rank, rank) range.
118 non_neg_concat_dim = (
→ 119 concat_dim._numpy().item(0) % input_values[0]._rank()) # pylint: disable=protected-access
120 # All inputs are guaranteed to be EagerTensors in eager mode
121 sizes = pywrap_tfe.TFE_Py_TensorShapeSlice(input_values,
ZeroDivisionError: integer division or modulo by zero | I was able to resolve this issue! I realized that the parameters which were giving me errors were passed into the function using them as 0-dimensional tensors rather than 1-d tensors; changing to pass them in as 1-d tensors alone solved the issue.
Not sure why such a minor difference (0-d vs 1-d) would result in a ZeroDivisionError - if anyone has suggestions on why, I’d really appreciate it! Thank you so much! | 0 |
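For reference, a hedged illustration of the 0-d versus 1-d difference that mattered here (the variable names are made up, not taken from the original model):
import tensorflow as tf

param_0d = tf.Variable(3.0)       # shape () - the form that triggered the error in this setup
param_1d = tf.Variable([3.0])     # shape (1,) - the form that worked

# a 0-d parameter can be lifted to 1-d before being handed to the ODE function
param_fixed = tf.reshape(param_0d, [1])
print(param_0d.shape, param_1d.shape, param_fixed.shape)  # (), (1,), (1,)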
tensorflow | Research & Models | Decision_forest | https://discuss.tensorflow.org/t/decision-forest/3284 | The new TF Decision Forests library is really cool in Colab, and I’m trying to get it working in a Kaggle notebook. Please find below a link to a Kaggle notebook where I’ve successfully used the new Decision Forest library. However, there is an issue with the model_plotter (on bottom of the Kaggle Kernel).
Any suggestion on what can be done to plot the model as it worked without a problem in Google Colab? Thank you!
kaggle.com
TF-DF Car Evaluation 2
Explore and run machine learning code with Kaggle Notebooks | Using data from [Private Datasource] | Hi NNCV,
Thanks (for the positive feedback and alert).
TF-DF uses D3 for plotting, and it seems (looking at the console of the web browser's developer tools) that Jupyter does not support the way TF-DF imports D3.
A solution is to re-import D3 in a Jupyter notebook cell:
%%javascript
require.config({
paths: {
d3: "https://d3js.org/d3.v7.min"
}
});
require(["d3"], function(d3) {
window.d3 = d3;
});
then, the following line should work fine
tfdf.model_plotter.plot_model_in_colab(model, tree_idx=0, max_depth=3)
Note that you might have to clear the cell output and reload the webpage.
Alternatively, if you don't have a Jupyter or a Colab notebook, the model plot can be exported to an HTML file and visualized separately. For example:
with open("/tmp/model.html", "w") as f:
f.write(tfdf.model_plotter.plot_model(model, tree_idx=0, max_depth=3))
# Then, open "/tmp/model.html" in any web browser. | 0 |
tensorflow | Research & Models | Score for human Activity Recognition? | https://discuss.tensorflow.org/t/score-for-human-activity-recognition/3177 | Hello,
I recently tried to start a project based on the excellent link below; I call it the example project.
Medium – 16 May 19
Human Activity Recognition using LSTMs on Android — TensorFlow for Hackers... 3
Ever wondered how your smartphone, smartwatch or wristband knows when you’re walking, running or sitting?
In my new project, I plan to collect human activity movement data from some "expert" people to build an "expert" model and save it.
So, for example, when I do some activities, the expert model can not only predict which activity I am doing, but also tell me the "similarity".
That means I can see how much difference there is between me and the experts for the same activity movement.
Can anyone tell me if my idea is workable, or give me any hints?
Are there any methods I can use in TensorFlow?
Any suggestions will be appreciated.
Thanks sincerely… | Hi Anthony,
This is a very interesting idea. Some parts that might be hard:
Have the comparison (pro vs user) be on the same perspective and position (rotation wise)
You might need a model (or algorithm) to pair the movement execution and then compare the differences.
There’s also some discussion on the topic here: https://discuss.tensorflow.org/t/what-model-s-for-a-sequence-of-human-poses | 0 |
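One hedged way to turn this into a similarity score is to take an embedding from the trained network (for example the LSTM output just before the classifier) and compare the expert and user embeddings with cosine similarity; the layer name below is hypothetical:
import tensorflow as tf

# model: the trained activity classifier; "lstm" is an assumed layer name
embedding_model = tf.keras.Model(model.input, model.get_layer("lstm").output)

expert_vec = embedding_model(expert_window)  # expert_window: one (1, timesteps, features) batch
user_vec = embedding_model(user_window)      # user_window: the user's matching window

# tf.keras.losses.cosine_similarity returns the negative cosine similarity, so negate it
cos_sim = -tf.keras.losses.cosine_similarity(expert_vec, user_vec)
print(float(cos_sim))  # 1.0 means the movements look identical to the model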
tensorflow | Research & Models | Image similarity with efficientnet | https://discuss.tensorflow.org/t/image-similarity-with-efficientnet/2995 | Hello!
I’m using the efficientnet model and the 1280 vector output for image similarity. It’s working great and from testing is the best model I’ve found with the amount of data that is used.
Sometimes it just does weird things though.
Here are the images in question: Imgur: The magic of the Internet 26
The first image is the input, the second is the one that should be found, and the third is the one actually returned as closest.
I’m using a compare script and these are the results of said images:
input against image that should be found (img1 vs img2)
features shape: (1280,)
euclidean: 0.839167594909668
hamming: 1.0
chebyshev: 0.14557853
correlation: 0.36508870124816895
cosine: 0.35210108757019043
minkowski: 0.839167594909668
input against image that is incorrectly returned as closest (img1 vs img3)
features shape: (1280,)
euclidean: 0.7945413589477539
hamming: 1.0
chebyshev: 0.11684865
correlation: 0.32784974575042725
cosine: 0.3156479597091675
minkowski: 0.7945413589477539
I’m not understanding how img3 can be closer to the input than the other. It’s working most of the time and this is a weird outlier.
Any ideas how this can be solved?
Thanks! | Was it trained only on your own dataset? | 0 |
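For what it's worth, one thing to try with outliers like this is L2-normalising the 1280-d feature vectors before comparing them, so that euclidean and cosine distances rank images consistently. A hedged sketch of the comparison (the input shapes and preprocessing are assumptions):
import numpy as np
import tensorflow as tf
from scipy.spatial.distance import cosine

model = tf.keras.applications.EfficientNetB0(include_top=False, pooling="avg")  # 1280-d features

def embed(img_batch):
    # img_batch: float array of shape (n, 224, 224, 3), loaded elsewhere
    feats = model.predict(tf.keras.applications.efficientnet.preprocess_input(img_batch))
    return feats / np.linalg.norm(feats, axis=1, keepdims=True)  # L2-normalise each vector

distance = cosine(embed(img1)[0], embed(img2)[0])  # smaller = more similar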
tensorflow | Research & Models | Word Embeddings Not Accurate | https://discuss.tensorflow.org/t/word-embeddings-not-accurate/3041 | I am trying to build my own word2vec model using the code provided here
Link: - Word2Vec | TensorFlow Core 3
I have even tried increasing the amount of data used to train the word embeddings, and I am able to achieve good model accuracy, but when I plot the word vectors on the Embedding Projector the distances between words (the word similarity) are really bad; even if I use the cosine distance formula between very similar words, the result is poor.
Whereas if the same data is used to train embeddings with the Gensim library (not pre-trained), the distance and similarity results are way better, including on the Embedding Projector.
Can someone please help me with this? I want to use only the Word2Vec code provided by TensorFlow, but I am not able to get good results for word distance and word similarity. | Could there be a problem in how you are serializing the embedding vectors and the associated words?
Also, can you confirm there is no difference in the hyperparameters that you are using in TensorFlow and Gensim? | 0 |
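On the serialization question: the Word2Vec tutorial exports two TSV files for the Embedding Projector, and a mismatch there is a common cause of odd-looking neighbours. A hedged sketch (the layer name "w2v_embedding" and the vocab variable follow the tutorial and may differ in your code):
import io

weights = word2vec.get_layer("w2v_embedding").get_weights()[0]  # (vocab_size, embedding_dim)
vocab = vectorize_layer.get_vocabulary()                        # must be in the same order as the weight rows

out_v = io.open("vectors.tsv", "w", encoding="utf-8")
out_m = io.open("metadata.tsv", "w", encoding="utf-8")
for index, word in enumerate(vocab):
    if index == 0:
        continue  # skip the padding token
    out_v.write("\t".join(str(x) for x in weights[index]) + "\n")
    out_m.write(word + "\n")
out_v.close()
out_m.close()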
tensorflow | Research & Models | Freeze layers in Tensorflow Object Detection API | https://discuss.tensorflow.org/t/freeze-layers-in-tensorflow-object-detection-api/3042 | Is it possible to freeze specific layers when using Tensorflow Object Detection API?
For example, I am using EfficientDet downloaded from the Tensorflow 2 Model Zoo. When I train the model, I am going to make it predict whether an object is a car, plane or motorcycle. The model is already trained on these types of objects (COCO 2017 dataset), but I want to train it more on my specific use case. But, since it is already trained on these types of objects, I do not want it to "forget" what it has already learned. Hence, I need to freeze some of the layers. So, is this possible? And, if it is possible, how do I know which layers I actually need to freeze? I have found that I might be able to use the freeze_variables parameter in the pipeline.config file?
Thanks for any help! | This might be what you’re after
Transfer Learning example 10
Specifically these lines:
base_model.trainable = True
# Let's take a look to see how many layers are in the base model
print("Number of layers in the base model: ", len(base_model.layers))
# Fine-tune from this layer onwards
fine_tune_at = 100
# Freeze all the layers before the `fine_tune_at` layer
for layer in base_model.layers[:fine_tune_at]:
layer.trainable = False | 0 |
tensorflow | Research & Models | Recommender Dataset Structures (Tutorial) | https://discuss.tensorflow.org/t/recommender-dataset-structures-tutorial/2986 | Just getting into Tensorflow and i want to run the tensorflow recommender tutorials with my own data.
However, the data structures there (dict-like, where tensors can be addressed by "keys") do not match the dataset structures in the tf.data starters.
Can someone please point me to where I can get a better overview of the dataset types (sub-classes) and which work best with recommenders? | I am not sure exactly what you want.
You can get Tensor from csv file via pandas like this.
df = pd.read_csv('movie.csv')
tf.data.Dataset.from_tensor_slices(dict(df)) | 0 |
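To expand slightly on that answer: once the DataFrame is wrapped this way, every element is a dict, so features are addressed by key exactly as in the TFRS tutorials. A hedged sketch (the column names are assumptions about your CSV):
import pandas as pd
import tensorflow as tf

df = pd.read_csv("movie.csv")  # hypothetical columns: user_id, movie_title
ds = tf.data.Dataset.from_tensor_slices(dict(df))

ratings = ds.map(lambda x: {
    "user_id": x["user_id"],
    "movie_title": x["movie_title"],
})
for row in ratings.take(1):
    print(row)  # a dict of tensors keyed by column name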
tensorflow | Research & Models | Effective ML for hyperspectral Images | https://discuss.tensorflow.org/t/effective-ml-for-hyperspectral-images/2742 | Dear all!
Thank you for reading this far
I was wondering if it is possible to make some kind of machine learning model that makes an inference on each of the horizontal lines of an image. That is, if I have a 1920x1080 image where each of the 1920 lines represents a spectrum, the model outputs 1080 predictions. I believe that it could be beneficial for the training if I could input this prior knowledge into the model.
I want to be able to do this so I can port it via high-level synthesis to an FPGA, and I hoped to be able to generate one (1) TensorFlow model for efficient edge computing. The idea was to use PYNQ.
If anyone anywhere has anything to add to this idea, e.g. it is a bad idea, I would love to hear from you!
Best,
Sivert Bakken | I think that the first point to check Is the status of TF support on this board:
PYNQ – 29 Sep 20
Apply Tensorflow model on pynq? 3
@nfmzl @rock Despite that this is a Pynq-Z2, is another option to leverage DPU-PYNQ in conjunction with TVM, assuming that nfmzl can port DPU-PYNQ to the Z2? You should also be able to consider use of the TVM runtime / compiler with no DPU. I...
I also suggest to check this:
github.com
tanmay-ty/SpectralNET 4
Contribute to tanmay-ty/SpectralNET development by creating an account on GitHub. | 0 |
tensorflow | Research & Models | Object detection for ambiguous objects | https://discuss.tensorflow.org/t/object-detection-for-ambiguous-objects/2577 | I am trying to train a model with tf2 to find ambiguous objects such as small damages or small deformations in vehicles. I have tried several models but I have to lower the threshold to a lot so that it detects something … do you advise me some special model or some configuration for the pipeline.config? Thank you very much | From the my understanding, if the classes are very similar, you might need a very good dataset with lot’s of examples.
How many images/boxes are you using? | 0 |
tensorflow | Research & Models | Implementing a CNN LSTM architecture for audio segmentation | https://discuss.tensorflow.org/t/implementing-a-cnn-lstm-architecture-for-audio-segmentation/2425 | Hi everyone,
I’m trying to implement a part of this paper: https://people.kth.se/~ghe/pubs/pdf/szekely2019casting.pdf 18
This part specifically:
Mel-spectrograms were extracted using the Librosa Python package with a window width of 20 ms and 2.5 ms hop length. The resulting spectrograms for two seconds of audio have 128×800 pixels. Zero crossing rates were calculated on the same windows. The neural network was implemented in Keras following the architecture in Figure 1. The first convolutional layer used 16 2D filters (size 3×3, stride 1×1) and ReLU nonlinearities, followed by batch normalisation and 5×4 max-pooling in both time and frequency. The second 2D convolutional layer used 8 filters in the frequency domain (4×1) and ReLU, followed by batch norm and 6×5 max pooling. Due to downsampling by the pooling layers, this produced 40 1×1 cells with 8 channels at a rate of 20 times per second. These were fed into a bidirectional LSTM layer of 8 hidden units in each direction, followed by a softmax output layer. The network was randomly initialised and trained for 40 epochs to minimise cross-entropy using Adadelta (with default parameters) batches of 16 two-second spectrogram excerpts. The softmax outputs can be interpreted as estimated per-frame class probabilities and used to automatically annotate the held-out episodes. Prior to further processing by either method, the temporal coherence of the automatic annotations was improved by merging mixed speech after a single-speaker segment into that speaker’s speech.
This is what I have :
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
model = keras.Sequential(
[
keras.Input(shape=(128, 800, 2)),
layers.Conv2D(16, (3, 3), activation='relu'),
layers.BatchNormalization(),
layers.MaxPooling2D(pool_size=(5, 4)),
layers.Conv2D(8, (4, 1), activation='relu'),
layers.BatchNormalization(),
layers.MaxPooling2D(pool_size=(6, 5)),
layers.Bidirectional(layers.LSTM(8)),
layers.Dense(7),
]
)
model.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer = keras.optimizers.Adadelta(),
metrics = ["accuracy"],
)
model.fit(x_train, y_train, epochs=40, batch_size=16)
Can someone please help? | What do you need in particular?
Do you need to prepare audio data?
TensorFlow
Audio Data Preparation and Augmentation | TensorFlow I/O 6 | 0 |
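On the architecture itself, one hedged way to wire the 2D convolution stack into the bidirectional LSTM is to permute frequency and time and then flatten each time step into a feature vector; the pool sizes follow the question and the exact frame counts are approximate:
import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(128, 800, 2))        # (frequency, time, channels)
x = layers.Conv2D(16, (3, 3), padding="same", activation="relu")(inputs)
x = layers.BatchNormalization()(x)
x = layers.MaxPooling2D((5, 4))(x)                  # -> (25, 200, 16)
x = layers.Conv2D(8, (4, 1), padding="same", activation="relu")(x)
x = layers.BatchNormalization()(x)
x = layers.MaxPooling2D((6, 5))(x)                  # -> (4, 40, 8)
x = layers.Permute((2, 1, 3))(x)                    # -> (40, 4, 8): time axis first
x = layers.Reshape((40, 4 * 8))(x)                  # one feature vector per frame
x = layers.Bidirectional(layers.LSTM(8, return_sequences=True))(x)
outputs = layers.Dense(7, activation="softmax")(x)  # per-frame class probabilities

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adadelta",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])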
tensorflow | Research & Models | Parameter not updated with gradients at the end | https://discuss.tensorflow.org/t/parameter-not-updated-with-gradients-at-the-end/2543 | Hello, I just tried implementing my own version of Scene GCNN from this paper (Holistic 3D Scene Understanding from a Single Image
with Implicit Representation, by Zhang et al., arXiv: 2103.06422) using the TF 2.0 framework (previous experience only in PyTorch):
When I run it, the warning I get is:
WARNING:tensorflow:Gradients do not exist for variables [‘scene_gcnn/scene_gcn_conv_2/weight_rs/kernel:0’, ‘scene_gcnn/scene_gcn_conv_2/weight_rs/bias:0’, ‘scene_gcnn/scene_gcn_conv_2/weight_rd/kernel:0’, ‘scene_gcnn/scene_gcn_conv_2/weight_rd/bias:0’] when minimizing the loss.
Here’s my code down below:
import tensorflow as tf
from tensorflow.keras import activations, regularizers, constraints, initializers
import numpy as np
dot = tf.matmul
spdot = tf.sparse.sparse_dense_matmul
class Scene_GCNConv(tf.keras.layers.Layer):
def __init__(self,
activation=lambda x: x,
use_bias=True,
kernel_initializer='glorot_uniform',
kernel_regularizer=None,
kernel_constraint=None,
bias_initializer='ones',
bias_regularizer=None,
bias_constraint=None,
activity_regularizer=None,
weight_shape=None,
**kwargs):
self.activation = activations.get(activation)
self.use_bias = use_bias
self.kernel_initializer = initializers.get(kernel_initializer)
self.bias_initializer = initializers.get(bias_initializer)
self.kernel_regularizer = regularizers.get(kernel_regularizer)
self.bias_regularizer = regularizers.get(bias_regularizer)
self.activity_regularizer = regularizers.get(activity_regularizer)
self.kernel_constraint = constraints.get(kernel_constraint)
self.bias_constraint = constraints.get(bias_constraint)
self.weight_shape = weight_shape
self.weight_sd = tf.keras.layers.Dense(self.weight_shape[1], activation=None, use_bias=self.use_bias, name="weight_sd")
self.weight_sr = tf.keras.layers.Dense(self.weight_shape[1], activation=None, use_bias=self.use_bias, name="weight_sr")
self.weight_dr = tf.keras.layers.Dense(self.weight_shape[1], activation=None, use_bias=self.use_bias, name="weight_dr")
self.weight_rs = tf.keras.layers.Dense(self.weight_shape[1], activation=None, use_bias=self.use_bias, name="weight_rs")
self.weight_rd = tf.keras.layers.Dense(self.weight_shape[1], activation=None, use_bias=self.use_bias, name="weight_rd")
super(Scene_GCNConv, self).__init__()
def call(self, z_o_prev, z_r_prev):
# TODO : switched without z_o and z_r, and included all ones adjacent matrix in initialization
# adjacent_matrix = 1 - tf.eye(z_o_prev.shape[1]) #which is N + 1
dim = z_o_prev.shape[1]
adjacent_matrix = 1 - tf.eye(dim) #which is N + 1
z_o = self.update_object_nodes(z_o_prev, z_r_prev, adjacent_matrix)
z_r = self.update_relationship_nodes(z_o_prev, z_r_prev, adjacent_matrix)
output = [z_o, z_r]
return output
def update_object_nodes(self, object_nodes, relationship_nodes, adjacent_matrix):
z_o = object_nodes
z_r = relationship_nodes
dim = adjacent_matrix.shape
adjacent_matrix_r_compatible = tf.concat([adjacent_matrix, tf.ones([(dim[0]-1)*dim[0], dim[1]])], axis=0)
first_term = self.weight_sd(z_o)
second_term = dot(adjacent_matrix_r_compatible,self.weight_sr(z_r), transpose_a = True)
third_term = dot(adjacent_matrix_r_compatible, self.weight_dr(z_r), transpose_a = True)
z_o = self.activation(first_term + second_term + third_term)
return z_o
def update_relationship_nodes(self, object_nodes, relationship_nodes, adjacent_matrix):
z_o = object_nodes
z_r = relationship_nodes
dim = adjacent_matrix.shape
adjacent_matrix_o_compatible = tf.concat([adjacent_matrix, tf.ones([dim[0], (dim[1]-1)*dim[1]])], axis=1)
first_term = dot(adjacent_matrix_o_compatible, self.weight_rs(z_o), transpose_a = True)
second_term = dot(adjacent_matrix_o_compatible, self.weight_rd(z_o),transpose_a = True)
z_r = self.activation(first_term + second_term)
return z_r
## separate embedding transformation that should be inside a overall Scene Graph Conv Net
class Scene_GCNN(tf.keras.layers.Layer):
def __init__(self,
activation=lambda x: x,
use_bias=True,
kernel_initializer='glorot_uniform',
kernel_regularizer=None,
kernel_constraint=None,
bias_initializer='ones',
bias_regularizer=None,
bias_constraint=None,
activity_regularizer=None,
weight_shape_array=None,
**kwargs):
super(Scene_GCNN, self).__init__()
self.activation = activations.get(activation)
self.use_bias = use_bias
self.kernel_initializer = initializers.get(kernel_initializer)
self.bias_initializer = initializers.get(bias_initializer)
self.kernel_regularizer = regularizers.get(kernel_regularizer)
self.bias_regularizer = regularizers.get(bias_regularizer)
self.activity_regularizer = regularizers.get(activity_regularizer)
self.kernel_constraint = constraints.get(kernel_constraint)
self.bias_constraint = constraints.get(bias_constraint)
## Initialize number of layers
self.weight_shape_array = weight_shape_array
self.num_iterations = len(weight_shape_array)
self.sgcnn_layers = []
for i, weight_shape in enumerate(self.weight_shape_array):
self.sgcnn_layers.append(Scene_GCNConv(
activation=self.activation,
use_bias=self.use_bias,
kernel_initializer=self.kernel_initializer,
kernel_regularizer=self.kernel_regularizer,
kernel_constraint=self.kernel_constraint,
bias_initializer=self.bias_initializer,
bias_regularizer=self.bias_regularizer,
bias_constraint=self.bias_constraint,
activity_regularizer=self.activity_regularizer,
weight_shape=weight_shape))
d = self.weight_shape_array[0][0]
embed_relationship = tf.keras.models.Sequential()
embed_relationship.add(tf.keras.Input(shape=(6,)))
embed_relationship.add(tf.keras.layers.Dense(d, activation='relu'))
embed_relationship.add(tf.keras.layers.Dense(d, activation=None))
self.embed_relationship = embed_relationship
embed_background = tf.keras.models.Sequential()
embed_background.add(tf.keras.Input(shape=(3,)))
embed_background.add(tf.keras.layers.Dense(d, activation='relu'))
embed_background.add(tf.keras.layers.Dense(d, activation=None))
self.embed_background = embed_background
embed_slots = tf.keras.models.Sequential()
embed_slots.add(tf.keras.Input(shape=(21,)))
embed_slots.add(tf.keras.layers.Dense(d, activation='relu'))
embed_slots.add(tf.keras.layers.Dense(d, activation=None))
self.embed_slots = embed_slots
final_embed_background = tf.keras.models.Sequential()
final_embed_background.add(tf.keras.Input(shape=(21,)))
final_embed_background.add(tf.keras.layers.Dense(3, activation=tf.keras.layers.LeakyReLU(alpha=0.01)))
self.final_embed_background = final_embed_background
# def call(self, slots, background_latent,):
def call(self, inputs):
slots = inputs[0]
background_latent = inputs[1]
#slots [B, num_obj, 21]
#background_latent [B, 1, 3]
background_latent = background_latent[:,None,:]
object_nodes = self.get_object_nodes(slots, background_latent)
relationship_nodes = self.get_relationship_nodes(slots)
for i in range(self.num_iterations):
object_nodes, relationship_nodes = self.sgcnn_layers[i](object_nodes, relationship_nodes)
#object_nodes [B, num_object + 1, 21]
slots = object_nodes[:,0:-1,:]
background_latent = self.final_embed_background(object_nodes[:,-1,:])
# output = [object_nodes, relationship_nodes]
output = [slots, background_latent]
return output
def get_object_nodes(self, slots=None, background_latent = None):
#Embedding of slot
slots_embedded = self.embed_slots(slots)
#Embedding of background
background_latent_embedded = self.embed_background(background_latent)
object_nodes = tf.concat([slots_embedded, background_latent_embedded], axis = 1)
return object_nodes
def get_relationship_nodes(self, slots):
#Relationship nodes, between background and slots
# For nodes connecting two different objects, the geometry feature [20, 49] of 2D object bounding
# boxes and the box corner coordinates of both connected objects normalized by the image height and width are used as
# features.
# In our example, we use x,y,z as values from each slot to get (N+1)^2 x 2d matrix where d=(x,y,z)
# The coordinates are flattened and concatenated in
# the order of source-destination, which differentiate the relationships of different directions.
# For nodes connecting
# objects and layouts, since the relationship is presumably
# different from object-object relationship, we initialize the
# representations with constant values, leaving the job of inferring reasonable relationship representation to SGCN
slots_extended = tf.concat([slots[:,:,18:21], tf.ones([slots.shape[0], 1, 3])],axis=1)
A = tf.repeat(slots_extended, axis = 1,repeats=slots_extended.shape[1])
#Add [B,1,latent_size] to both A and B to include layout
B = tf.tile(slots_extended, multiples=[1,slots_extended.shape[1],1])
relationship_nodes = tf.concat([A,B], axis=2)
relationship_latent_embedded = self.embed_relationship(relationship_nodes)
relationship_nodes = relationship_latent_embedded
return relationship_nodes
if(__name__ == "__main__"):
weight_shape_array=[(64,128),(128,64),(64,21)]
scene_gcnn = Scene_GCNN(
activation='sigmoid',
use_bias=True,
kernel_initializer='glorot_uniform',
kernel_regularizer=None,
kernel_constraint=None,
bias_initializer='glorot_normal',
bias_regularizer=None,
bias_constraint=None,
activity_regularizer=None,
weight_shape_array=weight_shape_array)
slots = tf.random.uniform([8,3,21])
background_latent = tf.random.uniform([8,3])
print(background_latent)
print(scene_gcnn([slots, background_latent]))
#Made up learning rate
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-7)
is_training = True
with tf.GradientTape() as tape:
output = scene_gcnn([slots, background_latent])
#Made up losses, don't think about the metric
losses_background = tf.reduce_sum(tf.random.uniform([8,3]) - output[1])
losses_foreground = tf.reduce_sum(tf.random.uniform([8,3,21]) - output[0])
losses = losses_background + losses_foreground
if is_training:
variables = scene_gcnn.trainable_variables
gradients = tape.gradient(losses, variables)
optimizer.apply_gradients(zip(gradients, variables))
How do I avoid and solve this warning?
I know it's a long code sample, but I included a dummy test example which you can run by calling python on it (TF 2.0 required). I have tried debugging this and looking for where my holes are, and it has been an ongoing struggle to understand it, with no results so far. | You can use:
optimizer.apply_gradients([
(grad, var)
for (grad, var) in zip(gradients, variables)
if grad is not None
]) | 0 |
tensorflow | Research & Models | Semantic segmentation: How to maximize precision for the minor class in an imbalanced training dataset | https://discuss.tensorflow.org/t/semantic-segmentation-how-to-maximize-precision-for-the-minor-class-in-an-imbalanced-training-dataset/2504 | I am using a deep neural network for a semantic segmentation task. The training data comprise 17,000 images that have been interpreted manually to generate two classes: MMD, and non-MMDs.
While I have good confidence in the accuracy of the interpreted MMDs (high accuracy for true positives), there is still a chance that we misclassified some MMDs as non-MMD in the training data (even though it's not very frequent). Further, the ratio of MMD to non-MMD samples is not balanced, and only ~20% of the training images are covered by MMDs. As a result, the training step can get biased toward the non-MMD class and show high accuracy for identifying the non-MMDs, which cover 80% of the image. I am currently using the following parameters:
• softmax as the activation function for the last layer
• Adam as the optimizer in Keras
• soft dice as the loss function to account for the imbalanced training data
• precision (tf.keras.metrics.Precision) as the metric to find the optimum model during training.
Obviously, my objective is to focus the training on maximizing the precision for the MMD class, and punish the training for generating false positives specifically for the MMD class. However, the above configuration does not seem to produce the desired results.
My question is: what would you do differently to achieve this objective? All suggestions are welcome! | I suggest taking a look at:
github.com
JunMa11/SegLoss 11
A collection of loss functions for medical image segmentation | 0 |
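One pattern from collections like that which fits this problem is combining soft dice with a class-weighted cross-entropy, so that MMD pixels (and false positives on them) carry more weight. A hedged sketch for a two-channel softmax output, where channel 1 is assumed to be the MMD class and the weight value is a free parameter:
import tensorflow as tf

def dice_loss(y_true, y_pred, eps=1e-6):
    num = 2.0 * tf.reduce_sum(y_true * y_pred, axis=[1, 2])
    den = tf.reduce_sum(y_true + y_pred, axis=[1, 2]) + eps
    return 1.0 - tf.reduce_mean(num / den)

def combined_loss(y_true, y_pred, mmd_weight=4.0):
    # y_true, y_pred: (batch, H, W, 2) one-hot / softmax maps, channel 1 = MMD
    pixel_weights = 1.0 + (mmd_weight - 1.0) * y_true[..., 1]      # up-weight MMD pixels
    ce = tf.keras.losses.categorical_crossentropy(y_true, y_pred)  # (batch, H, W)
    weighted_ce = tf.reduce_mean(pixel_weights * ce)
    return dice_loss(y_true[..., 1], y_pred[..., 1]) + weighted_ce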
tensorflow | Research & Models | How to improve classification prediction models accuracy? | https://discuss.tensorflow.org/t/how-to-improve-classification-prediction-models-accuracy/2300 | Hi,
I have applied several classification methods; unfortunately, the developed models never exceed 62% accuracy.
Here I attached a comparison table of the developed models.
I’m wondering how I can improve the models’ accuracy!?
[Image: accuracy comparison table of the developed models] | The first question I suggest to you is:
what are your performances on the training set? | 0 |
tensorflow | Research & Models | How to train custom models? | https://discuss.tensorflow.org/t/how-to-train-custom-models/1514 | Hi, new to tfjs, just tested out tfjs-node mobilenet with pre-trained model. I would like to learn how to train custom models using exiting video footage. I came across roboflow.com 5 which looks like to give me what I want, however it only offer formats in TFRecords and Tensorflow Object Detection. Any suggestions what tools I should use? | What specific frameworks are you looking for? Object Detection API is a well-developed and appreciated framework among the vision community. It comes with performance benefits, a rich repository of SOTA model implementations many of which are TFLite-compatible.
But here’s a standalone tutorial from keras.io 2 that shows you how to train a RetinaNet from scratch:
keras.io
Keras documentation: Object Detection with RetinaNet 8 | 0 |
streamlit | Using Streamlit | Button on_click behavior | https://discuss.streamlit.io/t/button-on-click-behavior/20682 | I have a button with an on_click() to run a function which executes a query. It seems the on_click() is triggered when the parameters change, even if the button is not clicked. Example below - when queryText is changed via a text_area control, the on_click() function is executed.
queryText = st.sidebar.text_area("SQL to execute:", value="", height=3, max_chars=None )
if queryText:
# issue - on_click executes when queryText is changed , even if not clicked
btnResult= st.sidebar.button('Run', key='RunBtn', help=None, on_click=runQuery(queryText,False), args=None, kwargs=None)
if btnResult:
st.sidebar.text('Button pushed') | Hi @MarkSim.,
Based on your simplified example I don’t think you should use state for this. I actually think you just need to use a form, that collects the SQL query and then has a form_submit_button that runs the typed query:
with st.sidebar.form("Input"):
queryText = st.text_area("SQL to execute:", height=3, max_chars=None)
btnResult = st.form_submit_button('Run')
if btnResult:
st.sidebar.text('Button pushed')
# run query
st.write(queryText)
In my token example, I simply write the query to the page but you would be able to actually send the queryText to get the data.
Let me know if this solves your issue!
Happy Streamlit-ing
Marisa | 0 |
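For completeness, the behaviour in the original snippet comes from calling runQuery(queryText, False) at widget-creation time instead of passing the function; if a callback-style button is still wanted, a hedged sketch would be:
btnResult = st.sidebar.button(
    "Run",
    key="RunBtn",
    on_click=runQuery,         # pass the function itself, do not call it here
    args=(queryText, False),   # its arguments go in args/kwargs
)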
streamlit | Using Streamlit | Can you select rows in a table? | https://discuss.streamlit.io/t/can-you-select-rows-in-a-table/737 | Is it possible to select rows of a table and then get the selected data? Can be st.table() or st.write(dataframe) | Hi @gaboc4. Unfortunately that functionality is not yet supported but we are designing it! In the meantime, a workaround would be to use an st.multiselect, e.g.:
selected_indices = st.multiselect('Select rows:', data.index)
Which will work up to about 1000 rows or so and which will look like this:
[Screenshot of the multiselect widget above the dataframe]
You can see an example in this gist 411.
This feature request is being tracked here 455. Please feel free to follow the issue to keep up with the latest developments. Also please feel free to comment on it so that we can better understand your use-case.
Thanks for using Streamlit!! | 0 |
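As a small follow-up sketch, the selected indices can then be used to slice the frame (data is the DataFrame from the answer above):
selected_indices = st.multiselect("Select rows:", data.index)
selected_rows = data.loc[selected_indices]
st.write("Selected rows", selected_rows)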
streamlit | Using Streamlit | Text input strange behavior when key present | https://discuss.streamlit.io/t/text-input-strange-behavior-when-key-present/21438 | I wanted text input in a loop. st rightly complained about the fields having the same key. When I insert key using random, it stops printing values outside.
The following works fine:
import random
import string
for i in range(1): # single field
foo = random.choice(string.ascii_uppercase)+str(random.randint(0,1000))
myc2 = st.text_input('')
st.write(myc2,myc2[::-1])
The following complains about same key:
import random
import string
for i in range(3): # loop and no key
foo = random.choice(string.ascii_uppercase)+str(random.randint(0,1000))
myc2 = st.text_input('')
st.write(myc2,myc2[::-1])
The following stops printing the string and its reverse:
import random
import string
for i in range(3): # loop and key
foo = random.choice(string.ascii_uppercase)+str(random.randint(0,1000))
myc2 = st.text_input('',key=foo)
st.write(myc2,myc2[::-1])
The issue seems to be with the key since it also fails to print for loop of 1 with key
import random
import string
for i in range(1): # no loop; key yes
foo = random.choice(string.ascii_uppercase)+str(random.randint(0,1000))
myc2 = st.text_input('',key=foo)
st.write(myc2,myc2[::-1])
Baffled ;-(
Do others see the same thing? | Hi @AshishMahabal, welcome to the Streamlit community!
What you are seeing is one of the underlying mechanisms of how Streamlit works. In your first code snippet, you are creating a single st.text_input widget, so Streamlit can determine with certainty which one you are referring to.
In the second snippet, now you define a st.text_input 3 times; this does not overwrite the text_input, but rather orphans the references to the other ones (but the front end is still keeping track). So you get a key error.
If you want to keep writing over the same widget, put an st.empty() outside of your loop:
docs.streamlit.io
st.empty - Streamlit Docs
st.empty inserts a single-element container.
If your code has to be this way for some reason, you can add a key argument to st.text_input…set that value to your loop value i to make distinct text_input widgets.
Best,
Randy | 0 |
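Following that last suggestion, a minimal sketch that uses the loop index as the key; unlike a random key, it stays stable across reruns, which is why the random-key version misbehaves:
import streamlit as st

for i in range(3):
    myc2 = st.text_input("Enter text", key=f"input_{i}")
    if myc2:
        st.write(myc2, myc2[::-1])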
streamlit | Using Streamlit | How can i plot a GIF or video on streamlit? | https://discuss.streamlit.io/t/how-can-i-plot-a-gif-or-video-on-streamlit/21384 | I have a code that generates an animation of an orbit. How can I configure to plot the same video in a streamlit webapp? I tried something dot “st.pyplot(ani)” at the end, but the result generated an empty white square. Below is the part referring to the plot of my code.
fig = plt.figure()
plt.xlabel("x (km)")
plt.ylabel("y (km)")
plt.gca().set_aspect('equal')
ax = plt.axes()
ax.set_facecolor("black")
circle = Circle((0, 0), rs_sun, color='dimgrey',linewidth=0)
plt.gca().add_patch(circle)
plt.axis([- (rs_sun / 2.0) / u1 , (rs_sun / 2.0) / u1 , - (rs_sun / 2.0) / u1 , (rs_sun / 2.0) / u1 ])
# GIF
graph, = plt.plot([], [], color="gold", markersize=3, label='Tempo: 0 s')
L = plt.legend(loc=1)
plt.close()
def animate(i):
lab = 'Tempo: ' + str(round(dt*i * (rs_sun / 2.0) * 3e-5 , -int(math.floor(math.log10(abs(dt*(rs_sun / 2.0)*3e-5)))))) + ' s'
graph.set_data(x[:i], y[:i])
L.get_texts()[0].set_text(lab) # Update the legend on every frame
return graph,
skipframes = int(len(x)/200)
if skipframes == 0:
skipframes = 1
ani = animation.FuncAnimation(fig, animate, frames=range(0,len(x),skipframes), interval=30, blit = True, repeat = False)
HTML(ani.to_jshtml())
st.pyplot(ani) | isa.rsn:
HTML(ani.to_jshtml())
st.pyplot(ani)
Hi @isa.rsn, welcome to the Streamlit community!
What does HTML() represent? Is that the Jupyter HTML function? Streamlit has it’s own html function that might work:
docs.streamlit.io
Components API - Streamlit Docs 2
Roughly
import streamlit.components.v1 as components
# ..all the other code
components.html(ani.to_jshtml())
Best,
Randy | 1 |
streamlit | Using Streamlit | How to solve a error generated in a gif in streamlit? | https://discuss.streamlit.io/t/how-to-solve-a-error-generated-in-a-gif-in-streamlit/21459 | I added import streamlit.components.v1 as components at the beginning of my code and replaced st.pyplot(ani) with components.html(ani.to_jshtml()) . However, apparently the animation image was cut in half (pictured below). I’ve looked at the documentation and still haven’t been able to fix this error. Do you know what it could be?
Apart from that, adding a new question, my code is indeed a bit heavy and takes a few seconds to run. Is there any way to decrease plot time? Maybe if I previously make a text file with all the calculated data, or some configuration?
fig = plt.figure()
plt.xlabel("x (km)")
plt.ylabel("y (km)")
plt.gca().set_aspect('equal')
ax = plt.axes()
ax.set_facecolor("black")
circle = Circle((0, 0), rs_sun, color='dimgrey',linewidth=0)
plt.gca().add_patch(circle)
plt.axis([- (rs_sun / 2.0) / u1 , (rs_sun / 2.0) / u1 , - (rs_sun / 2.0) / u1 , (rs_sun / 2.0) / u1 ])
# Build the GIF
graph, = plt.plot([], [], color="gold", markersize=3, label='Tempo: 0 s')
L = plt.legend(loc=1)
plt.close() # Do not display the background figure
def animate(i):
lab = 'Tempo: ' + str(round(dt*i * (rs_sun / 2.0) * 3e-5 , -int(math.floor(math.log10(abs(dt*(rs_sun / 2.0)*3e-5)))))) + ' s'
graph.set_data(x[:i], y[:i])
L.get_texts()[0].set_text(lab) # Update the legend on every frame
return graph,
skipframes = int(len(x)/200)
if skipframes == 0:
skipframes = 1
ani = animation.FuncAnimation(fig, animate, frames=range(0,len(x),skipframes), interval=30, blit = True, repeat = False)
HTML(ani.to_jshtml())
components.html(ani.to_jshtml())
[Screenshots: how the animation looks in Google Colab versus how it appears cut in half in Streamlit] | Thank you!! | 1 |
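The thread does not show the final fix, but two hedged things to try: give components.html an explicit height so the frame is not clipped, and wrap the heavy orbit computation in a function decorated with @st.cache so reruns are fast.
import streamlit.components.v1 as components

# `ani` is the FuncAnimation built in the question
components.html(ani.to_jshtml(), height=650, scrolling=True)  # adjust height to the figure size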
streamlit | Using Streamlit | Caption to matplotlib figure just like an image has | https://discuss.streamlit.io/t/caption-to-matplotlib-figure-just-like-an-image-has/21360 | Hi Streamlit team!
Is it possible to add a caption to a st.pyplot just like a caption is nicely added to st.image? I do not see that option yet on the documentation page of st.pyplot - Streamlit Docs 1
Thanks in advance! | Hi @stefanbloemheuvel -
This isn't a functionality we had considered…st.pyplot is a pretty simple wrapper, as we assume that whatever people are doing in matplotlib, the user will create that sort of caption via code as well.
You could do something like saving a PNG from matplotlib and passing it to st.image to achieve what you are looking for.
Best,
Randy | 1 |
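A hedged sketch of that workaround, rendering the figure to an in-memory PNG so st.image can attach the caption:
import io
import matplotlib.pyplot as plt
import streamlit as st

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [4, 1, 7])

buf = io.BytesIO()
fig.savefig(buf, format="png", bbox_inches="tight")
buf.seek(0)
st.image(buf, caption="Figure 1: example plot rendered with matplotlib")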
streamlit | Using Streamlit | Filter logic | https://discuss.streamlit.io/t/filter-logic/3085 | Hi everyone, I currently have a view with a dataframe and 5 filters. The problem I have is that I can’t find a way to build a dynamic logic (users have the possibility to only use only 1 filter), and the only way I found is with the ‘if elif’ statement for all the combinations possible (not very convenient).
Is there a better way to solve this?
Thanks!
Pierre | Hi pvtaisne,
See below for an example with multiple filters for one dataframe. Let me know if it helps you
import streamlit as st
import pandas as pd
import numpy as np
np.random.seed(0) # Seed so random arrays do not change on each rerun
n_rows = 1000
random_data = pd.DataFrame(
{"A": np.random.random(size=n_rows), "B": np.random.random(size=n_rows)}
)
sliders = {
"A": st.sidebar.slider(
"Filter A", min_value=0.0, max_value=1.0, value=(0.0, 1.0), step=0.01
),
"B": st.sidebar.slider(
"Filter B", min_value=0.0, max_value=1.0, value=(0.0, 1.0), step=0.01
),
}
filter = np.full(n_rows, True) # Initialize filter as only True
for feature_name, slider in sliders.items():
# Here we update the filter to take into account the value of each slider
filter = (
filter
& (random_data[feature_name] >= slider[0])
& (random_data[feature_name] <= slider[1])
)
st.write(random_data[filter]) | 1 |
streamlit | Using Streamlit | Create new page | https://discuss.streamlit.io/t/create-new-page/21357 | How to create new pages by only using st.button? | st.button will trigger a page rerun, so I do not recommend using it for this.
If you want to create a new page, you can use st.radio or streamlit-option-menu.
See here for streamlit-option-menu:
Streamlit-option-menu is a simple Streamlit component that allows users to select a single item from a list of options in a menu Show the Community!
Happy new year everyone!
I discovered Streamlit a couple of weeks ago and fell in love with it immediately. To help myself learn how to make custom components, I created streamlit-option-menu, whose functions, though very simple, are not found in existing components. Hopefully it can be of use to someone else as well!
streamlit-option-menu is a simple Streamlit component that allows users to select a single item from a list of options in a menu.
[menu]
It is similar in function to st.select… | 0 |
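A minimal sketch of the st.radio approach (the page names are placeholders):
import streamlit as st

page = st.sidebar.radio("Navigate", ["Home", "Dashboard", "About"])
if page == "Home":
    st.title("Home")
elif page == "Dashboard":
    st.title("Dashboard")
else:
    st.title("About")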
streamlit | Using Streamlit | Refreshing the screen without stacking images(tables) in a while loop application | https://discuss.streamlit.io/t/refreshing-the-screen-without-stacking-images-tables-in-a-while-loop-application/21427 | Hi everyone,
I've started using Streamlit recently and I'm still trying to find my way around to produce the result I need from my application.
My program has the following structure:
in the main function I am retrieving data via api, then do a lot of processing with pandas and I get four final tables which I am displaying on the Streamlit app.
this process runs in an infinite loop, so every time I retrieve new data from the API, I need to redo the calculations and call the Streamlit display function to print the updated tables.
My main problem is that the tables keep stacking up, instead of the window being cleared and the tables printed on a fresh empty screen for each cycle of the loop.
This is what I have inside the StreamlitDisplay function:
st.empty()
col1, col2 = st.columns((3, 1))
fig1 = go.Figure(data=go.Table(header=dict(values=['<b>Symbol</b>', '<b>Price</b>'], fill_color='paleturquoise'), cells=dict(values=[table1['Symbol'], table1['Price']])))
with col1:
st.header('My Table1')
st.write(fig1)
I simplified the version of my function with just one table (and only two columns), so you have an idea about the structure of it. After I do this table printing job, the function returns to the main where the loop restarts with new api calls.
My expectation was that calling st.empty() when I call this display function will clear the screen allowing printing as if it was the first iteration of the loop. In reality I get images adding up to the bottom without end.
Using plotly.graph_objects I managed to do really nice customization to the layout of my tables, with colors and so on, so this is basically the last thing remaining to have the job done.
Is there any way to fix this, or how can I approach this type of application involving an infinite loop where things are recalculated on each iteration and new results have to be updated on the screen?
Thank you! | Hi @mg1795, welcome to the Streamlit community!
You need to define st.empty() outside of your function call, so that you can continue to write over that object.
St.empty for images Using Streamlit
Hi @Bonian, welcome to the Streamlit community!
Yes, what you are asking is how st.empty is intended to work. However, you should have the st.empty container defined outside of the loop if you want to keep reusing the same position. Our documentation has a couple of examples:
Best,
Randy
Best,
Randy | 1 |
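A minimal sketch of the pattern described in this answer: create the st.empty() placeholder once, outside the loop, and redraw into it on every iteration so the tables replace each other instead of stacking. The fetch_and_process function is a hypothetical stand-in for the poster's API call and pandas processing:

import time
import pandas as pd
import streamlit as st

def fetch_and_process():
    # Stand-in for the real API call + pandas processing from the question.
    return pd.DataFrame({"Symbol": ["AAA", "BBB"], "Price": [10.0, 11.5]})

placeholder = st.empty()  # created once, before the loop

while True:
    table1 = fetch_and_process()
    with placeholder.container():  # everything drawn here replaces the previous draw
        st.header("My Table1")
        st.dataframe(table1)
    time.sleep(5)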
streamlit | Using Streamlit | Multiselect boxes in columns | https://discuss.streamlit.io/t/multiselect-boxes-in-columns/21398 | I have 8 multi-select boxes and I want them to be put in 2 columns (4 per column). I'm using a dataset that I have connected. I've tried using the st.columns statement and I keep getting an error 'list' object has no attribute 'columns'.
How can I proceed with this? | Hi @Harsha_vardhan_Mudia, welcome to the Streamlit community!
Can you post the code you ran that doesn’t work?
Best,
Randy | 0 |
streamlit | Using Streamlit | Plotly not responsive mobile screen | https://discuss.streamlit.io/t/plotly-not-responsive-mobile-screen/2731 | I am trying to build a lineplot using plotly. However, whenever it is accessed from smaller screens, such as a mobile, it does not apply the responsive widths and forces you to horizontally scroll the page, in order to visualize the whole plot.
Everything else is responsive (apart from the plotly graphs).
Additionally, I tried to change the width of the plotly structure, and it seems to work fine! Although, I don’t have any way to retrieve the dynamic screen width (so I could calculate the length I would need, and thus solve the responsive issue myself).
This is the best I could reach so far.
Something as simple as width = st.screen.width would solve my point, as well as many others that require a minimum javascript aperture to work.
What should I do to guarantee a mobile-friendly app?
Thanks! | Hello @caio.hc.oliveira, welcome to the forums !
Would the default behavior of st.plotly_chart(fig, use_container_width=True) answer this ?
image422×502 13 KB
Regards | 1 |
streamlit | Using Streamlit | Option showed on Ag-Grid | https://discuss.streamlit.io/t/option-showed-on-ag-grid/21347 | I have a large valued option like 1012562,1012563,1012564,1012565,1012566, 1016243,1016244,1016245,1016246,1016247. And I used aggrid but the option is showed like 1012562,1012563,1012564,1012… but I need the entire option to be shown in the screen. I will be grateful if someone give me a solution. | Please suggest me a solution. | 0 |
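No solution was posted in this thread. One option worth trying (an assumption on my part, not from the thread) is to let the cell wrap and auto-size through the underlying ag-Grid column properties, which GridOptionsBuilder.configure_column forwards:

import pandas as pd
from st_aggrid import AgGrid, GridOptionsBuilder

df = pd.DataFrame({"option": ["1012562,1012563,1012564,1012565,1012566,1016243,1016244"]})

gb = GridOptionsBuilder.from_dataframe(df)
# wrapText/autoHeight are plain ag-Grid column options passed through by configure_column.
gb.configure_column("option", wrapText=True, autoHeight=True)
AgGrid(df, gridOptions=gb.build(), fit_columns_on_grid_load=True)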
streamlit | Using Streamlit | Reset data in Ag Grid after edits | https://discuss.streamlit.io/t/reset-data-in-ag-grid-after-edits/21390 | So, I'm using @PablocFonseca's EXCELLENT ag-grid component, but I'm up against a wall. I have an editable grid set up. But after the user has made some changes, I'd like to have a button that resets the contents. I'm just not sure what to set. Thanks.
gb = ag.GridOptionsBuilder.from_dataframe(orig_inputs)
gb.configure_default_column(groupable=True, value=True, enableRowGroup=True, aggFunc='sum', editable=True, type=["numericColumn", "numberColumnFilter"])
gb.configure_column('FlowDate', enableRowGroup=True, aggFunc='sum', editable=False, type=["dateColumnFilter", "customDateTimeFormat"], custom_format_string='yyyy-MM-dd', pivot=True)
go = gb.build()
inputs = ag.AgGrid(orig_inputs, gridOptions=go, update_mode='MANUAL')['data'] | Answer my own question:
AgGrid has a parameter called "reload_data": if set to True, it will reload the data from the input. (Otherwise it ignores the input after the initial load.)
this is my code:
reload_data = False
if st.button('Reset Inputs'):
inputs = orig_inputs.copy()
reload_data = True
inputs = ag.AgGrid(inputs, gridOptions=go, reload_data=reload_data, update_mode='MANUAL')['data']
reload_data = False | 1 |
streamlit | Using Streamlit | React custom component: ./frontend/build, and hasn’t received its “streamlit:componentReady” message | https://discuss.streamlit.io/t/react-custom-component-frontend-build-and-hasnt-received-its-streamlit-componentready-message/21406 | Hello,
I tried to embed React DataGrid table to streamlit (thanks for the examples here 1 and this tutorial 1), the development environment works fine, however the production shows the following error.
Screenshot 2022-01-26 at 11.39.43729×290 32.6 KB
I googled similar issues. My Streamlit version is the latest, 1.4.0 (don’t think it caused by timeout), and didn’t use st.columns in the code. It shows the same error in Streamlit Cloud. Couldn’t find any other possible answers for this problem…
Much appreciate your help! | Hi,
I’m having the same issue with 1.3.1 version.
When I run my React component locally it works fine, but when I’m setting the _RELEASE to point to “frontend/build” path - I get the “streamlit:componentReady”** message.
(In my case, it shows ** and not the path to the build folder).
Any help would be appreciated!
Screen Shot 2022-01-26 at 2.45.05 PM942×227 28 KB | 0 |
streamlit | Using Streamlit | What is the best way to publish my webapp made with streamlit? | https://discuss.streamlit.io/t/what-is-the-best-way-to-publish-my-webapp-made-with-streamlit/21388 | I want to release a code that generates orbit animations for educational purposes. Does streamlite allow me to publicize the site to be used by students and teachers? Also, is its interface independent? So that if more than one person is accessing the webapp, the parameter choices for example will not be affected or influenced by another user who is also accessing the website. | Hi @isa.rsn -
We at Streamlit believe that Streamlit Cloud 2 is the best way to deploy a Streamlit app.
isa.rsn:
Does streamlite allow me to publicize the site to be used by students and teachers?
By default, an app deployed from the Community tier of Streamlit Cloud is publicly available, so you can share the URL with whomever you like. Recently, we’ve added the ability to launch one app from a private GitHub repo; in that case, you would need to either make the app public or invite individual people.
isa.rsn:
Also, is its interface independent? So that if more than one person is accessing the webapp, the parameter choices for example will not be affected or influenced by another user who is also accessing the website.
Yes, Streamlit is designed to be multi-user, with the caveat that anything you are caching is currently global. This is usually what people want, such that if you load a large dataset, the data becomes available for everybody. If that’s not what you want, then if you don’t cache things the app should be independent for all users.
How many simultaneous users is a function of what the app does, i.e. how much RAM and computation the app needs to supply to each user. The larger the app, the fewer the users (or, upgrade to a paid tier where more resources are provided).
Best,
Randy | 1 |
streamlit | Using Streamlit | Altair pie chart error | https://discuss.streamlit.io/t/altair-pie-chart-error/21345 | I am getting the following error when trying to create an Altair pie chart. I have Altair bar charts, line charts and stacked bar charts all working properly. However, as soon as I try to create a pie chart I run into a fatal error. To eliminate any issues with my typing and incorrect encodings etc. I copied the example from the Altair documentation (Pie Chart — Altair 4.2.0 documentation). This also fails. I am not sure what I am doing incorrectly.
Screenshot 2022-01-24 1835211055×276 10.5 KB | Make sure you have Altair 4.2 installed, which is when the mark_arc method was added:
https://altair-viz.github.io/releases/changes.html#version-4-2-0-released-dec-29-2021
Upgrading my local instance from Altair 4.1 to 4.2 with your code snippet displays a pie chart for me
Best,
Randy | 1 |
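For reference, a minimal version of the Altair documentation's pie-chart example rendered through Streamlit once Altair 4.2 is installed:

import altair as alt
import pandas as pd
import streamlit as st

source = pd.DataFrame({"category": [1, 2, 3, 4, 5, 6],
                       "value": [4, 6, 10, 3, 7, 8]})

chart = (
    alt.Chart(source)
    .mark_arc()  # mark_arc requires Altair >= 4.2
    .encode(theta=alt.Theta(field="value", type="quantitative"),
            color=alt.Color(field="category", type="nominal"))
)
st.altair_chart(chart, use_container_width=True)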
streamlit | Using Streamlit | Autoplay and controls properties for audio and video | https://discuss.streamlit.io/t/autoplay-and-controls-properties-for-audio-and-video/1657 | Problem:
I have an app with a few audios (OGG bytes-like stored in a db). Works just fine if user press the play button using st.audio(mybytes). But I need this to autoplay, without showing those controls.
I tried to hack the tag using js code, but couldn’t have it done even if using unsafe_allow_html. Probably injection of js is not yet allowed.
string="""
<button onclick='myFunction()'>Try it</button>
<script>function myFunction() {alert('Hello, world!');}</ script >
"""
st.markdown(string, unsafe_allow_html=True)
I then tried to create my own HTML using markdown, but in this case, I can’t use my bytes-like object without prior writing it to disk.
Expected (sth like):
st.audio(mybytes, format='audio/ogg', autoplay=True, controls=False)
Has anyone solved this kind of problem? | Solution:
mymidia_bytes is a blob object stored in my sqlite database [sqlite.Bytes(my_audio_bytes)], loaded to a pandas dataframe (row.Audio).
To autoplay it in the streamlit app:
import base64
mymidia_placeholder = st.empty()
mymidia_str = "data:audio/ogg;base64,%s"%(base64.b64encode(mymidia_bytes).decode())
mymidia_html = """
<audio autoplay class="stAudio">
<source src="%s" type="audio/ogg">
Your browser does not support the audio element.
</audio>
"""%mymidia_str
mymidia_placeholder.empty()
time.sleep(1)
mymidia_placeholder.markdown(mymidia_html, unsafe_allow_html=True) | 0 |
streamlit | Using Streamlit | NLTK error on deployment | Module cannot be found | https://discuss.streamlit.io/t/nltk-error-on-deployment-module-cannot-be-found/21393 | So I’m trying to create a dictionary website using streamlit and wordnet from nltk inside an environment called “nlp” which already includes full nltk packages. Everything works properly except on deployment. I got a weird error message when I try to search a word on the deployed website, saying something like “this ‘wordnet’ package cannot be found” although I already added the nltk package inside the requirements.txt
Here’s the full error message
Resource wordnet not found. Please use the NLTK Downloader to obtain the resource:
>>> import nltk
>>> nltk.download('wordnet')
For more information see: NLTK :: Installing NLTK Data
Attempted to load corpora/wordnet
Searched in: - ‘/home/appuser/nltk_data’ - ‘/home/appuser/venv/nltk_data’ - ‘/home/appuser/venv/share/nltk_data’ - ‘/home/appuser/venv/lib/nltk_data’ - ‘/usr/share/nltk_data’ - ‘/usr/local/share/nltk_data’ - ‘/usr/lib/nltk_data’ - ‘/usr/local/lib/nltk_data’
Note: NLTK is installed inside the nlp conda environment
I’m completely stuck at this. Help me :’) | Hi @Dylan_Mac,
The error message also states that in addition to installing NLTK in the nlp conda environment (as you have done), you need to additionally download the ‘wordnet’ package in your app.
Include the following in your app:
import nltk
nltk.download('wordnet')
Source: NLTK :: Installing NLTK Data
Best,
Snehan | 1 |
streamlit | Using Streamlit | Disabled parameter resets the widget value | https://discuss.streamlit.io/t/disabled-parameter-resets-the-widget-value/21377 | Hi Streamlit community!
I just noticed that changing the disabled parameter of widgets resets their value. Not sure if this behavior is intended or not, but is there a way to disable a widget while not resetting its value?
Here a code example of the behavior:
import streamlit as st
def toggle_widget():
if st.session_state.widget_disabled:
st.session_state.widget_disabled = False
else:
st.session_state.widget_disabled = True
if 'widget_disabled' not in st.session_state:
st.session_state.widget_disabled = False
widget = st.number_input('Select a number between 1 and 6',min_value=1,max_value=6,value=3,disabled=st.session_state.widget_disabled)
st.button('Toggle widget',on_click=toggle_widget)
st.write(widget)
Thanks in advance! | Hey marduk,
import streamlit as st
def toggle_widget():
if st.session_state.widget_disabled:
st.session_state.widget_disabled = False
else:
st.session_state.widget_disabled = True
if 'widget_disabled' not in st.session_state:
st.session_state.widget_disabled = False
if 'def_val' not in st.session_state:
defaultValue=3
else:
defaultValue=st.session_state.def_val
widget = st.number_input('Select a number between 1 and 6',min_value=1,max_value=6,value=defaultValue,key='def_val',disabled=st.session_state.widget_disabled)
st.button('Toggle widget',on_click=toggle_widget)
st.write(widget) | 1 |
streamlit | Using Streamlit | Are you using HTML in Markdown? Tell us why! | https://discuss.streamlit.io/t/are-you-using-html-in-markdown-tell-us-why/96 | As described in this Github issue 83, it is very easy to write unsafe apps when developers have the ability to code in HTML. The idea is that users should always be able to trust Streamlit apps. So we have turned HTML off by default in Streamlit and will be removing the option to turn it on at some point in the future. (Update: We heard you, and have no plans to deprecate allow_unsafe_html! )
However, there are some legitimate use cases for writing HTML today to get around limitations in Streamlit’s current feature set. Some examples are:
Layout: laying items side by side, changing margin/padding/alignment properties, etc.
Text-based diagrams: see this example 557 from a user working on NLP.
(Rest assured, we are currently working on solutions to the use-cases above. You will hear about them very soon!)
If you’re using HTML in Streamlit, we would like to hear from you! Let us know:
Why are you using it?
Any thoughts on what you’d like to see in Streamlit if you did not have the ability to use HTML?
Thanks! | I am working in NLP and want to highlight passages of text in different ways.
E.g. highlight negated words in red.
Afaik this is not possible in markdown, and when introducing custom HTML elements as text they get moved into their own span objects even with unsafe_allow_html turned on.
It is possible to do this, we just can’t use quotes or style tags in the element. The solution was to also print the css in tags and avoid styling individual elements. | 0 |
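A small illustration of the kind of inline highlighting discussed in this thread; in recent Streamlit versions an inline style inside st.markdown with unsafe_allow_html=True generally renders as expected (this is a sketch, not the poster's exact workaround):

import streamlit as st

sentence = 'The model did <span style="color:red">not</span> flag the negated word.'
st.markdown(sentence, unsafe_allow_html=True)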
streamlit | Using Streamlit | How to make one column in dataframe as checkbox | https://discuss.streamlit.io/t/how-to-make-one-column-in-dataframe-as-checkbox/21311 | Hello,
How to display dataframe like below and also trap the event when checkbox is checked/unchecked?
Accept col1 col2
-------- ---- ----
checkbox row1 row1
checkbox row2 row2
Did find this:
https://www.jscodetips.com/examples/how-to-add-a-checkbox-in-pandas-dataframe 4
But it’s just showing the html text in the cell.
Also perhaps, there’s other way.
TIA. | Why don’t you try aggrid with a checkbox column?
Cheers | 0 |
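A minimal sketch of the aggrid-with-checkbox suggestion above: configure_selection adds a checkbox to each row, and the checked rows come back in the grid's return value (the column names are just the ones from the question's mock-up):

import pandas as pd
import streamlit as st
from st_aggrid import AgGrid, GridOptionsBuilder

df = pd.DataFrame({"col1": ["row1", "row2"], "col2": ["row1", "row2"]})

gb = GridOptionsBuilder.from_dataframe(df)
gb.configure_selection(selection_mode="multiple", use_checkbox=True)
grid = AgGrid(df, gridOptions=gb.build())

# React to what the user has checked on this rerun.
st.write("Accepted rows:", grid["selected_rows"])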
streamlit | Using Streamlit | Create link to open file from local system | https://discuss.streamlit.io/t/create-link-to-open-file-from-local-system/15934 | Hi All,
In jupyter notebook
%%html
href="…/files/Downloads/jaipur.pdf#page=10" target="_blank">Link to a pdf
creates a link pointing towards pdf file and when clicked opens pdf in new tab.
How can I create an html link to open a file using Streamlit? | Hi @kamal_kumawat
Have you checked this hack from @ash2shukla?
Creating a PDF file generator Using Streamlit
Hi @Pavan_Cheyutha,
You sure can !
Though streamlit doesnt support PDF generation out of the box for obvious reasons but you can look into PyFPDF and coupled with streamlit it can do the job. Adding a snippet that I just tried for reference.
import streamlit as st
from fpdf import FPDF
import base64
report_text = st.text_input("Report Text")
export_as_pdf = st.button("Export Report")
def create_download_link(val, filename):
b64 = base64.b64encode(val) # val looks like b'...'
retu…
Thanks
Charly | 1 |
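As an alternative to the base64 markdown link in the quoted snippet, newer Streamlit releases (0.88 and later) include st.download_button, which can serve a local PDF without any HTML tricks (the file path below is an example, not the poster's real path):

import streamlit as st

with open("jaipur.pdf", "rb") as f:  # example path; point this at your own file
    st.download_button(label="Download the PDF",
                       data=f.read(),
                       file_name="jaipur.pdf",
                       mime="application/pdf")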
streamlit | Using Streamlit | Request_rerun() function | https://discuss.streamlit.io/t/request-rerun-function/21323 | Hello, I have a question regarding rerunning my streamlit app.
I have been using streamlit v0.72.0 with no problems, but when I upgrade to v1.3.0 I get the following error:
TypeError: request_rerun() missing 1 required positional argument: ‘client_state’
I am not really sure what it means, but this occurs when I call the “sync” function from the class below:
class SessionState:
def __init__(self, session, hash_funcs):
"""Initialize SessionState instance."""
self.__dict__["_state"] = {
"data": {},
"hash": None,
"hasher": streamlit.legacy_caching.hashing._CodeHasher(hash_funcs),
"is_rerun": False,
"session": session,
}
def __call__(self, **kwargs):
"""Initialize state data once."""
for item, value in kwargs.items():
if item not in self._state["data"]:
self._state["data"][item] = value
def __getitem__(self, item):
"""Return a saved state value, None if item is undefined."""
return self._state["data"].get(item, None)
def __getattr__(self, item):
"""Return a saved state value, None if item is undefined."""
return self._state["data"].get(item, None)
def __setitem__(self, item, value):
"""Set state value."""
self._state["data"][item] = value
def __setattr__(self, item, value):
"""Set state value."""
self._state["data"][item] = value
def clear(self):
"""Clear session state and request a rerun."""
self._state["data"].clear()
self._state["session"].experimental_rerun()
def sync(self):
"""Rerun the app with all state values up to date from the beginning to fix rollbacks."""
# Ensure to rerun only once to avoid infinite loops
# caused by a constantly changing state value at each run.
#
# Example: state.value += 1
if self._state["is_rerun"]:
self._state["is_rerun"] = False
elif self._state["hash"] is not None:
if self._state["hash"] != self._state["hasher"].to_bytes(self._state["data"], None):
self._state["is_rerun"] = True
self._state["session"].experimental_rerun()
self._state["hash"] = self._state["hasher"].to_bytes(self._state["data"], None)
As I said this works perfectly fine in v.0.72.0 and only occurs when I upgrade.
Any help would be greatly appreciated. | Any ideas on this one please? | 0 |
streamlit | Using Streamlit | JSME React Component | https://discuss.streamlit.io/t/jsme-react-component/6918 | Hi Folks,
I would like to include the open source molecule editor JSME 13 into a Streamlit app.
I’ve found out that there is already a React component: https://github.com/DouglasConnect/jsme-react 19
However, I’m not getting it to display in my app.
I used the component template and added something like:
import { Jsme } from 'jsme-react'
class MyComponent extends StreamlitComponentBase {
public render = (): ReactNode => {
return (
<div>
<h1>Hello World</h1>
<Jsme height="500px" width="800px" />
</div>
)
}
}
export default withStreamlitConnection(MyComponent);
The “Hello World” shows up but not the rest.
I am very new to React, JS, etc.
Anyone any idea how I could get it working?
Many thanks in advance.
Cheers,
LeChuck | Nobody? | 0 |
streamlit | Using Streamlit | Streamlit app working with dev server and not in production | https://discuss.streamlit.io/t/streamlit-app-working-with-dev-server-and-not-in-production/13729 | I have created an app that uses Vue for the frontend. I followed the documentation and the available reactless-template in the GitHub repo 1.
My code is working locally when running npm run start which internally invokes the script vue-cli-service serve. However, when I build the code for deploying with npm run build which uses vue-cli-service build (I rename the build folder from dist to build) it doesn’t work any more and it gives me the following message:
Your app is having trouble loading the streamlit_app.terminal component.
(The app is attempting to load the component from *, and has not received its "streamlit:componentReady" message.)
If this is a development build, have you started the dev server?
If this is a release build, have you compiled the frontend?
For more troubleshooting help, please see the Streamlit Component docs or visit our forums.
Is it possible that it is not working because of the npm run start?
P.S. my code is available on GH 3 as well, it might give a better idea!.
Any ideas or comments are welcome | Hey there,
Did you manage to solve this?
I have the same issue.
Thanks! | 0 |
streamlit | Using Streamlit | Module not found ReportThread | https://discuss.streamlit.io/t/module-not-found-reportthread/5657 | There are examples everywhere using streamlit.ReportThread, however streamlit.ReportThread throws a module not found error on multiple machines. streamlit.report_thread does exist but none of the functions work if i use streamlit.report_thread. How do i fix this error? do i need a specific version of streamlit so that streamlit.ReportThread does exist and the examples work?
Thanks for the help!!
Here is an example that previously worked but no longer does because ReportThread is not found
import streamlit.ReportThread as ReportThread
To add a little more to this. streamlit version 0.63.0 works with
from streamlit import ReportThread
If ReportThread was changed to streamlit.report_thread after version 0.63.0 then why are all examples still showing
from streamlit import ReportThread
Also, if i try to update from ReportThread to report_thread in later versions streamlit then it breaks code that used ReportThread. | I was running into the same problem and I found this gist to be helpful: A possible design for doing per-session persistent state in Streamlit · GitHub 83
Particularly this section for this specific problem:
try:
import streamlit.ReportThread as ReportThread
from streamlit.server.Server import Server
except Exception:
# Streamlit >= 0.65.0
import streamlit.report_thread as ReportThread
from streamlit.server.server import Server
But I also found this section useful and related:
s = session_info.session
if (
# Streamlit < 0.54.0
(hasattr(s, '_main_dg') and s._main_dg == ctx.main_dg)
or
# Streamlit >= 0.54.0
(not hasattr(s, '_main_dg') and s.enqueue == ctx.enqueue)
or
# Streamlit >= 0.65.2
(not hasattr(s, '_main_dg') and s._uploaded_file_mgr == ctx.uploaded_file_mgr)
):
this_session = s | 0 |
streamlit | Using Streamlit | Unexpected st.columns behaviour | https://discuss.streamlit.io/t/unexpected-st-columns-behaviour/21370 | Hey Streamlit community,
I am getting rather unexpected behaviour from the st.columns method. I would expect the layout of the page to be identical on each run; however, if I refresh the page a few times, the layout is likely to be different on some of the runs. See the attached image as an example of when things go wrong. Is there a solution to this problem?
st.set_page_config(layout="wide")
col1, col2, col3, col4, col5 = st.columns([1, .1, 1, .1, 1])
col1.write('Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.')
image1894×330 15.4 KB | I am using Chrome btw. | 0 |
streamlit | Using Streamlit | Alignment is different in local version and cloud version | https://discuss.streamlit.io/t/alignment-is-different-in-local-version-and-cloud-version/21061 | Hello Streamlit!
First of all, amazing work on this package, it is great!
However, one small issue that i am facing is alignment issues when i deploy my app online with streamlit share cloud.
The following screenshots use the same code, but as is visible, the cloud version (bottom screenshot) stops nicely aligning my table in the middle.
localversion1215×403 30 KB
onlineversion1169×438 29.6 KB
The code used is the folowing:
col1, col2, col3 = st.columns([1,2.5,1])
with col2:
st.table(pd.DataFrame({
'Time': ['21-12-21 10:00:00', '21-12-21 10:00:01','21-12-21 10:00:02','21-12-21 10:00:03'],
'Sensor1': [10, 10, 11, 10],
'Sensor2': [14,15,14,14]
}).style.applymap(color_column, subset=['Time']))
How could i fix this? | You can set the Python version in the Advanced Settings menu:
docs.streamlit.io
Deploy an app - Streamlit Docs
Best,
Randy | 1 |
streamlit | Using Streamlit | Get_query_params() not working with ‘#’ instead of ‘?’ | https://discuss.streamlit.io/t/get-query-params-not-working-with-instead-of/20314 | I am trying to integrate the Streamlit frontend with a Supabase backend. The great thing is that I can build upon the user sign_up, password_recovery, etc. logic already build into Supabase. This requires me to read the URL of the Streamlit report and check all parameters. Supabase uses a # sign behind its confirmation URL.
<SITE_URL>#access_token=x&refresh_token=y&expires_in=z&token_type=bearer&type=recovery
See also:
supabase.com
Reset Password (Email) | Supabase 1
Sends a reset request to an email address.
Unfortunately the get_query_params() function does not parse this url into a correct query_string. The query_string only works with a ? as separator. Is there some workaround for this issue? Or could get_query_params() be updated to work with a # as separator? | get_query_params() only uses data available to the server. I’m trying to use Supabase too, I’ll try releasing some useful bits soon. | 0 |
streamlit | Using Streamlit | Notify a user | https://discuss.streamlit.io/t/notify-a-user/21312 | How to notify user that the code is running in streamlit? I edit the theme so the “running” icon is not clearly shown in the top right corner. | You can use a while-loop to show an animation while something is loading. So something like:
while (condition):
st.image('https://media.giphy.com/media/gu9XBXiz60HlO5p9Nz/giphy.gif')
So while a function is doing its thing you can display something for the user. If statements also work … If button == True: st.image(‘https://media.giphy.com/media/gu9XBXiz60HlO5p9Nz/giphy.gif’)
If you mean audio feedback then there is also a python library for that e.g. winsound. There you can also apply the same logic as before.
Edit: And don’t forget “show the community” is more to showcase your finished work rather than discussing specific problems. “Using streamlit” is for discussing problems. | 1 |
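Besides the gif/audio ideas above, st.spinner is a built-in way to tell the user that the script is still working (time.sleep stands in for your own long-running code):

import time
import streamlit as st

with st.spinner("Crunching the numbers..."):
    time.sleep(5)  # stand-in for the long-running computation
st.success("Done!")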
streamlit | Using Streamlit | 50MB dataset limitation when using Plotly.py | https://discuss.streamlit.io/t/50mb-dataset-limitation-when-using-plotly-py/9464 | Hi there,
I’m using Plotly.py to show figures in my Streamlit app with big datasets. And now I get a notification of dataset oversizing, bigger than 50.0MB, is there anyway to solve this? | Thanks for the reply! I found this github link before I posted this issue, but it didn’t solve my problem.
But later my friends and I found this error message in ~/site-packages/streamlit/server/server_util.py, and changed the parameter MESSAGE_LIMIT_SIZE to 200*1e6, making the write limit now 200 MB | 1 |
streamlit | Using Streamlit | Where to set page width when set into non widescreeen mode? | https://discuss.streamlit.io/t/where-to-set-page-width-when-set-into-non-widescreeen-mode/959 | Hi,
please is there any parameter i could tweak, where I could set the app width?
Widescreen is too wide and compact mode is too small.
Take a look at how it looks on ultrawide.
screen3440×1306 137 KB | Hi @Dominik_Novotny
You can insert custom css using the style tag. The below snippet is from the Layout Experiments app in the gallery at awesome-streamlit.org 604
st.markdown(
f"""
<style>
.reportview-container .main .block-container{{
max-width: {max_width}px;
padding-top: {padding_top}rem;
padding-right: {padding_right}rem;
padding-left: {padding_left}rem;
padding-bottom: {padding_bottom}rem;
}}
.reportview-container .main {{
color: {COLOR};
background-color: {BACKGROUND_COLOR};
}}
</style>
""",
unsafe_allow_html=True,
)
Visit the gallery if you wan’t to see it in action and use a slider to experiment with the max-width setting.
image1962×1043 193 KB | 0 |
streamlit | Using Streamlit | Page redirect | https://discuss.streamlit.io/t/page-redirect/19263 | Is there a way to redirect from page app_1 to page app_2, and hide page app_1 | Here’s a simple implementation of a multipage app that you can modify for your use. Hope this helps:
import streamlit as st
def App1page():
    st.write("Showing app 1")
    if st.button("Return to Main Page"):
        st.session_state.runpage = main_page
        st.experimental_rerun()

def App2page():
    st.write("Showing app 2")
    if st.button("Return to Main Page"):
        st.session_state.runpage = main_page
        st.experimental_rerun()

def main_page():
    st.write("This is my main menu page")
    btn1 = st.button("Show App1")
    btn2 = st.button("Show App2")
    if btn1:
        st.session_state.runpage = App1page
        st.session_state.runpage()
        st.experimental_rerun()
    if btn2:
        st.session_state.runpage = App2page
        st.session_state.runpage()
        st.experimental_rerun()

if 'runpage' not in st.session_state:
    st.session_state.runpage = main_page
st.session_state.runpage() | 0 |
streamlit | Using Streamlit | Problem using multipage with If and Multiselect Widget | https://discuss.streamlit.io/t/problem-using-multipage-with-if-and-multiselect-widget/21295 | Hi guys. I'm facing a problem while trying to create a multipage app through "if conditions". First of all, I created a main.py file which contains only a few lines of code plus a menu. In this menu there are a few options the user can select. After selecting one of them, the user is redirected to that page. On this new page I added a multiselect widget so the user can have another three options.
This new page is located in another .py file named variables_control.
To summarize:
The main page is shown in the figure below (it is in main.py). After I select an option in the selectbox, I go to another page, which is in variables_control.py.
11135×895 83.8 KB
Here on the variables_control page, there is a multiselect widget which I added, so the user can decide what they want to do:
1923×477 17.4 KB
But after I selected an option in this widget, it isn't showing the message I created for the case where it works…
I’ll be waiting for a hand!!! | See if this link helps you.
Page redirect Using Streamlit
Here’s a simple implementation of a multipage app that you can modify for your use. Hope this helps:
import streamlit as st
def App1page():
st.write(“Showing app 1”)
if st.button(“Return to Main Page”):
st.session_state.runpage = main_page
st.experimental_rerun()
def App2page():
st.write(“Showing app 2”)
if st.button(“Return to Main Page”):
st.session_state.runpage = main_page
st.experimental_rerun()
def main_page():
st.write(“This is my main menu page”)
btn1 = st.button(“Show App1…
Cheers | 0 |
streamlit | Using Streamlit | Connection timed out | https://discuss.streamlit.io/t/connection-timed-out/4091 | When accessing my Streamlit app, via a Satellite Internet connection, I get “Please wait” then, after a while, a popup saying “Connection Error” - “Connection times out”. I do NOT have any issues connecting to and using this application via other forms internet access. And my co-workers can access it without issue.ScreenClip1715×637 37.4 KB
Note that this Steamlit app is running in a container in a GCP VM inside our VPN.
I tried the solutions provided at Symptom #2: The app says “Please wait…” forever 19 to no avail.
I also output all the streamlit configs but am unable to find any type of ‘timeout’ params etc.
I am guessing this has something to do with the high latency of Satellite internet.
Streamlit version=0.62.1
Any help with this will be greatly appreciated. Thanks. | Hi @dplutchok -
This sort of sounds like what this issue is describing:
github.com/streamlit/streamlit
Improve experience when end user has an intermittent connection. 147
opened
Jun 24, 2020
karriebear
Problem
Communications currently happen via a session opened through a websocket. There are cases when an user has an intermittent connection that...
enhancement
needs triage
spec needed
Not sure if there are any other specifics that @karriebear might need, but she might be able to work with you to solve your solution and inform the larger solution.
Best,
Randy | 0 |
streamlit | Using Streamlit | Ubuntu 20.04.3 LTS, streamlit 1.4 will not start | https://discuss.streamlit.io/t/ubuntu-20-04-3-lts-streamlit-1-4-will-not-start/21126 | Hi,
I just started learning Python programming, and I am learning to use Streamlit as well.
I also managed to write a simple little program. Streamlit runs nicely under 0.52. However, if I upgrade the version, no Streamlit commands will start. The error message: Invalid instruction.
If I install the "Streamlit pages" pip package, the error goes away until version 0.84, but then unfortunately it will not start with this pip package; it gives the error message "It is no longer supported after version 0.84". Please help me with a detailed description (I am a complete beginner) of how I could make the newer versions work.
My system: Ubuntu 20.04.3 LTS Linux username 5.4.0-94-generic # 106-Ubuntu SMP Thu Jan 6 23:58:14 UTC 2022 x86_64 x86_64 x86_64 GNU / Linux
python version: 3.9.9
and I use Pycharm.
Sorry if my English isn’t really perfect.
And thank you in advance for all the help | Hi @CsorbaTomi, welcome to the Streamlit community!
CsorbaTomi:
Streamlit also runs nicely under 0.52.
If you are going as far back as Streamlit 0.52, then you’re on the wrong track unfortunately. That is probably two years old at this point. The current version of Streamlit is 1.4, and a lot has changed between now and then.
Are you using a 64-bit version of Python? Because of the talk of version 0.84 or so, I suspect you might be using a 32-bit version of Python, which Streamlit does not support, since one of our key dependencies (Apache Arrow) does not support 32-bit versions of Python.
Best,
Randy | 0 |
streamlit | Using Streamlit | How to add WordCloud graph in Streamlit | https://discuss.streamlit.io/t/how-to-add-wordcloud-graph-in-streamlit/818 | I want to add the wordcloud to streamlit app to show thw related words in the tweets. | Great question and you absolutely can do this with st.pyplot() and wordcloud. Here’s a simple example:
import streamlit as st
from wordcloud import WordCloud
import matplotlib.pyplot as plt
# Create some sample text
text = 'Fun, fun, awesome, awesome, tubular, astounding, superb, great, amazing, amazing, amazing, amazing'
# Create and generate a word cloud image:
wordcloud = WordCloud().generate(text)
# Display the generated image:
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
plt.show()
st.pyplot()
Let me know if that works for you! | 0 |
streamlit | Using Streamlit | Toggle, hide sidebar | https://discuss.streamlit.io/t/toggle-hide-sidebar/6661 | Hi.
After selecting some options on the sidebar of my app, I have a button to make the prediction.
I would like to know if there is a way to toggle/hide the sidebar.
On mobile, that gets in the way of usability.
Thanks. | Hi @SantiagoBF,
First welcome to the Streamlit community!
Currently, there is no built-in way to remove the "X" on the sidebar. But if this is a feature you're looking for, you can open a feature request on our GitHub!
GitHub
Issues · streamlit/streamlit 16
Streamlit — The fastest way to build data apps in Python - Issues · streamlit/streamlit
Happy Streamlit-ing!
Marisa | 1 |
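A related built-in option, if the goal is simply to start with the sidebar hidden (for example on mobile), is the initial_sidebar_state argument of st.set_page_config; it collapses the sidebar on load rather than removing the toggle entirely:

import streamlit as st

# Must be the first Streamlit call in the script.
st.set_page_config(initial_sidebar_state="collapsed")

st.sidebar.selectbox("Options", ["a", "b", "c"])
st.write("Main content")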
streamlit | Using Streamlit | Network URL not working on other computers | https://discuss.streamlit.io/t/network-url-not-working-on-other-computers/2691 | Summary
When I run streamlit run xxx.py, it works very well in my local browser. When I try to access the web app from another computer using the provided network URL, Streamlit shows:
Safari can't open the page "http://192.168.1.100:8501" because the server where this page is located isn't responding.
Steps to reproduce
streamlit run xxxx.py
Local URL: http://localhost:8501 works!
Network URL: http://192.168.1.100:8501 - works on local computer, but not on other computers.
config.toml
[global]
# By default, Streamlit checks if the Python watchdog module is available and, if not, prints a warning asking for you to install it. The watchdog module is not required, but highly recommended. It improves Streamlit's ability to detect changes to files in your filesystem.
# If you'd like to turn off this warning, set this to True.
# Default: false
disableWatchdogWarning = false
# Configure the ability to share apps to the cloud.
# Should be set to one of these values: - "off" : turn off sharing. - "s3" : share to S3, based on the settings under the [s3] section of this config file. - "file" : share to a directory on the local machine. This is meaningful only for debugging Streamlit itself, and shouldn't be used for production.
# Default: "off"
sharingMode = "off"
# If True, will show a warning when you run a Streamlit-enabled script via "python my_script.py".
# Default: true
showWarningOnDirectExecution = true
# Level of logging: 'error', 'warning', 'info', or 'debug'.
# Default: 'info'
logLevel = "info"
[client]
# Whether to enable st.cache.
# Default: true
caching = true
# If false, makes your Streamlit script not draw to a Streamlit app.
# Default: true
displayEnabled = true
[runner]
# Allows you to type a variable or string by itself in a single line of Python code to write it to the app.
# Default: true
magicEnabled = true
# Install a Python tracer to allow you to stop or pause your script at any point and introspect it. As a side-effect, this slows down your script's execution.
# Default: false
installTracer = false
# Sets the MPLBACKEND environment variable to Agg inside Streamlit to prevent Python crashing.
# Default: true
fixMatplotlib = true
[server]
# List of folders that should not be watched for changes. This impacts both "Run on Save" and @st.cache.
# Relative paths will be taken as relative to the current working directory.
# Example: ['/home/user1/env', 'relative/path/to/folder']
# Default: []
folderWatchBlacklist = []
# Change the type of file watcher used by Streamlit, or turn it off completely.
# Allowed values: * "auto" : Streamlit will attempt to use the watchdog module, and falls back to polling if watchdog is not available. * "watchdog" : Force Streamlit to use the watchdog module. * "poll" : Force Streamlit to always use polling. * "none" : Streamlit will not watch files.
# Default: "auto"
fileWatcherType = "auto"
# If false, will attempt to open a browser window on start.
# Default: false unless (1) we are on a Linux box where DISPLAY is unset, or (2) server.liveSave is set.
headless = false
# Immediately share the app in such a way that enables live monitoring, and post-run analysis.
# Default: false
liveSave = false
# Automatically rerun script when the file is modified on disk.
# Default: false
runOnSave = false
# The address where the server will listen for client and browser connections. Use this if you want to bind the server to a specific address. If set, the server will only be accessible from this address, and not from any aliases (like localhost).
# Default: (unset)
#address =
# The port where the server will listen for browser connections.
# Default: 8501
port = 8501
# The base path for the URL where Streamlit should be served from.
# Default: ""
baseUrlPath = ""
# Enables support for Cross-Origin Request Sharing, for added security.
# Default: true
enableCORS = false
# Max size, in megabytes, for files uploaded with the file_uploader.
# Default: 200
maxUploadSize = 200
[browser]
# Internet address where users should point their browsers in order to connect to the app. Can be IP address or DNS name and path.
# This is used to: - Set the correct URL for CORS purposes. - Show the URL on the terminal - Open the browser - Tell the browser where to connect to the server when in liveSave mode.
# Default: 'localhost'
serverAddress = "192.168.1.100"
# Whether to send usage statistics to Streamlit.
# Default: true
gatherUsageStats = true
# Port where users should point their browsers in order to connect to the app.
# This is used to: - Set the correct URL for CORS purposes. - Show the URL on the terminal - Open the browser - Tell the browser where to connect to the server when in liveSave mode.
# Default: whatever value is set in server.port.
serverPort = 8501
[mapbox]
# Configure Streamlit to use a custom Mapbox token for elements like st.deck_gl_chart and st.map. If you don't do this you'll be using Streamlit's own token, which has limitations and is not guaranteed to always work. To get a token for yourself, create an account at https://mapbox.com. It's free! (for moderate usage levels)
# Default: ""
token = ""
[s3]
# Name of the AWS S3 bucket to save apps.
# Default: (unset)
#bucket =
# URL root for external view of Streamlit apps.
# Default: (unset)
#url =
# Access key to write to the S3 bucket.
# Leave unset if you want to use an AWS profile.
# Default: (unset)
#accessKeyId =
# Secret access key to write to the S3 bucket.
# Leave unset if you want to use an AWS profile.
# Default: (unset)
#secretAccessKey =
# The "subdirectory" within the S3 bucket where to save apps.
# S3 calls paths "keys" which is why the keyPrefix is like a subdirectory. Use "" to mean the root directory.
# Default: ""
keyPrefix = ""
# AWS region where the bucket is located, e.g. "us-west-2".
# Default: (unset)
#region =
# AWS credentials profile to use.
# Leave unset to use your default profile.
# Default: (unset)
#profile = | Welcome to the community @TheClub4!
Usually an issue like this indicates a networking issue, not a Streamlit one. If there is a firewall on the computer you are trying to access over the network, temporarily disabling it completely will allow you to understand if that’s the issue.
If disabling the firewall works, then you can re-enable the firewall with a rule that allows access to port 8501. How this is done depends on which operating system you are using and your local network setup, so unfortunately I can’t give too much more detail than that. | 0 |
streamlit | Using Streamlit | Session State Variables are re-initialized to initial state going through pages | https://discuss.streamlit.io/t/session-state-variables-are-re-initialized-to-initial-state-going-through-pages/19129 | hi all,
I am having issues with my session state variables with the following code.
I instantiate the session state variables
In the sidebar, I can select the pages to which I want to navigate to
In each page, there is a widget that takes a value and saves it in the session state.
Problem: when going through different pages, the values that were given are being removed from the session state.
Here is my code:
# Instantiate the Session State Variables
if 'name' not in st.session_state:
st.session_state.name = ''
if 'age' not in st.session_state:
st.session_state.age = ''
if 'gender' not in st.session_state:
st.session_state.gender = ''
# Sidebar Widgets
sidebar_Title = st.sidebar.markdown('# Streamlit')
sidebar_pages = st.sidebar.radio('Menu', ['Page 1', 'Page 2', 'Page 3'])
# Page 1
def page1():
name = st.text_input('What is your name?', key = 'name' )
# Page 2
def page2():
age = st.text_input('What is your age?', key = 'age')
# Page 3
def page3():
gender = st.text_input('What is your gender?', key = 'gender')
# Navigate through pages
if sidebar_pages == 'Page 1':
page1()
elif sidebar_pages == 'Page 2':
page2()
else:
page3()
st.write(st.session_state) | hello,
I have the same issue, and I don’t understand why.
The doc:
https://docs.streamlit.io/library/advanced-features/widget-semantics
seems to say that it should be the way to save widgets states through session.
An (ugly?) workaround found is to put the following lines somewhere in the page
for k in st.session_state.keys():
st.session_state[k] = st.session_state[k]
but I am not clear if we should really do that… ? | 0 |
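For context, a minimal sketch of where the workaround above usually goes: re-assign every existing key at the top of the script, before any widgets are created, so their values survive switching pages (the page layout mirrors the simplified app from the question):

import streamlit as st

# Re-assign existing keys early so widget values persist across page switches.
for k in st.session_state.keys():
    st.session_state[k] = st.session_state[k]

page = st.sidebar.radio("Menu", ["Page 1", "Page 2"])
if page == "Page 1":
    st.text_input("What is your name?", key="name")
else:
    st.text_input("What is your age?", key="age")
st.write(st.session_state)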
streamlit | Using Streamlit | Date picker component | https://discuss.streamlit.io/t/date-picker-component/20938 | Hello friends
Is it possible to create a date picker for the "date_input" widget (for example, a Persian/Jalali calendar) using a component?
Do you know of any example? | Hi @Mohammadseif, welcome back!
Yes, it would be a great example of a Streamlit Component. That would be an example of a bi-directional Streamlit Component 2, where you use a pre-existing JavaScript library to pass information back to Python.
Unfortunately, I’m not a JS developer nor familiar with the Persian calendar, but it’s definitely possible to implement.
Best,
Randy | 1 |
streamlit | Using Streamlit | Multiple tabs in streamlit | https://discuss.streamlit.io/t/multiple-tabs-in-streamlit/1100 | Hi,
I am new to this - please bear with me if I have gravely missed something.
One feature I would love to see is multiple tabs so the app can host multiple windows. Is it available? I looked through the online documents as well as the community site, but could not find it.
Thanks!
-jc | Not at the moment. What you can do for now is to use radiobuttons like in the image below
image1144×626 57 KB
There are some feature requests related to this like
github.com/streamlit/streamlit
Feature request: Tabs 472
opened
Oct 2, 2019
ines
Problem
Sorry if this has been discussed before (couldn't find anything here on on the forum). While building my app, I thought...
FR:Layout
enhancement
github.com/streamlit/streamlit
Please add st.navigation widget 350
opened
Nov 21, 2019
MarcSkovMadsen
Problem
I would like to build multi page apps with a nice layout and style for use internally in an enterprise and...
enhancement
It would be awesome if you casted a vote or added some comments in the issues on what you need. | 0 |
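A minimal sketch of the radio-button workaround shown in the screenshot above (the section names are made up):

import streamlit as st

tab = st.sidebar.radio("Navigation", ["Overview", "Charts", "Data"])

if tab == "Overview":
    st.write("Overview content")
elif tab == "Charts":
    st.line_chart([1, 3, 2, 4])
else:
    st.write("Data content")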
streamlit | Using Streamlit | Help with “UnpicklingError: invalid load key, ‘v’.” | https://discuss.streamlit.io/t/help-with-unpicklingerror-invalid-load-key-v/8592 | Hi and happy new year,
May I ask for your help with the following upickling error on my Streamlit App:
Screen Shot 2021-01-07 at 2.42.50 PM1496×560 41.6 KB
These pickle files were uploaded to github using git-lfs, as they are large files–might that be involved in the error?
My python version is 3.7.4
My streamlit version is 0.72.0
My github repo is GitHub - amaze2/concordiensis: Concordiensis Project 11
I would really appreciate your help with this. Thank you in advance! | Actually, I take that back…it looks like it could be in by end of January/early February, as it’s a pretty often requested feature. | 1 |
streamlit | Using Streamlit | AttributeError: ‘NoneType’ object has no attribute ‘update’ fastai and streamlit | https://discuss.streamlit.io/t/attributeerror-nonetype-object-has-no-attribute-update-fastai-and-streamlit/21261 | Hello
I want to create a web app using Streamlit that classifies dogs, but I can't get it to work.
I tried every trick in the book but got nowhere. Here is my code:
%%writefile app.py
from fastai.vision.widgets import *
from fastai.vision.all import *
from pathlib import Path
import streamlit as st
class Predict:
def __init__(self, filename):
self.learn_inference = load_learner(Path()/filename)
self.img = self.get_image_from_upload()
if self.img is not None:
self.display_output()
self.get_prediction()
@staticmethod
def get_image_from_upload():
uploaded_file = st.file_uploader("Upload Files",type=['png','jpeg', 'jpg'])
if uploaded_file is not None:
return PILImage.create((uploaded_file))
return None
def display_output(self):
st.image(self.img.to_thumb(500,500), caption='Uploaded Image')
def get_prediction(self):
if st.button('Classify') :
self.pred, self.pred_idx, self.probs=self.learn_inference.predict(self.img)
st.write(f'Prediction: {self.pred}; Probability: {self.probs[pred_idx]:.04f}')
else:
return st.write(f'Click the button to classify')
if __name__=='__main__':
file_name='dog.pkl'
predictor = Predict(file_name)
After I compile
!streamlit run app.py & npx localtunnel --port 8501
and I click on classify
I get :
AttributeError: 'NoneType' object has no attribute 'update'
Any help would be much appreciated.
Thanks | Traceback
<IPython.core.display.HTML object>
2022-01-18 18:17:18.723 Traceback (most recent call last):
File “/usr/local/lib/python3.7/dist-packages/streamlit/script_runner.py”, line 379, in run_script
exec(code, module.dict)
File “/content/app.py”, line 38, in
Predict(file_name)
File “/content/app.py”, line 14, in init
self.get_prediction()
File “/content/app.py”, line 26, in get_prediction
self.pred, self.pred_idx, self.probs=self.learn_inference.predict(self.img)
File “/usr/local/lib/python3.7/dist-packages/fastai/learner.py”, line 266, in predict
inp,preds,,dec_preds = self.get_preds(dl=dl, with_input=True, with_decoded=True)
File “/usr/local/lib/python3.7/dist-packages/fastai/learner.py”, line 253, in get_preds
self._do_epoch_validate(dl=dl)
File “/usr/local/lib/python3.7/dist-packages/fastai/learner.py”, line 203, in _do_epoch_validate
with torch.no_grad(): self._with_events(self.all_batches, ‘validate’, CancelValidException)
File “/usr/local/lib/python3.7/dist-packages/fastai/learner.py”, line 163, in with_events
try: self(f’before{event_type}’); f()
File “/usr/local/lib/python3.7/dist-packages/fastai/learner.py”, line 141, in call
def call(self, event_name): L(event_name).map(self._call_one)
File “/usr/local/lib/python3.7/dist-packages/fastcore/foundation.py”, line 155, in map
def map(self, f, *args, gen=False, **kwargs): return self._new(map_ex(self, f, *args, gen=gen, **kwargs))
File “/usr/local/lib/python3.7/dist-packages/fastcore/basics.py”, line 698, in map_ex
return list(res)
File “/usr/local/lib/python3.7/dist-packages/fastcore/basics.py”, line 683, in call
return self.func(*fargs, **kwargs)
File “/usr/local/lib/python3.7/dist-packages/fastai/learner.py”, line 145, in _call_one
for cb in self.cbs.sorted(‘order’): cb(event_name)
File “/usr/local/lib/python3.7/dist-packages/fastai/callback/core.py”, line 45, in call
if self.run and _run: res = getattr(self, event_name, noop)()
File “/usr/local/lib/python3.7/dist-packages/fastai/callback/progress.py”, line 26, in before_validate
def before_validate(self): self._launch_pbar()
File “/usr/local/lib/python3.7/dist-packages/fastai/callback/progress.py”, line 35, in _launch_pbar
self.pbar.update(0)
File “/usr/local/lib/python3.7/dist-packages/fastprogress/fastprogress.py”, line 56, in update
self.update_bar(0)
File “/usr/local/lib/python3.7/dist-packages/fastprogress/fastprogress.py”, line 76, in update_bar
else: self.on_update(val, f’{100 * val/self.total:.2f}% [{val}/{self.total} {elapsed_t}<{remaining_t}{end}]’)
File “/usr/local/lib/python3.7/dist-packages/fastprogress/fastprogress.py”, line 125, in on_update
if self.display: self.out.update(HTML(self.progress))
AttributeError: ‘NoneType’ object has no attribute 'update | 0 |
streamlit | Using Streamlit | Session state initialization error | https://discuss.streamlit.io/t/session-state-initialization-error/20989 | The session state initialization error shows up, even when the session state is properly initialized.
The code taken ‘as is’ from the Example 1 Streamlit Docs 1
import streamlit as st
st.title('Counter Example')
if 'count' not in st.session_state:
    st.session_state.count = 0
increment = st.button('Increment')
if increment:
    st.session_state.count += 1
st.write('Count = ', st.session_state.count)
Returns the following error
AttributeError: st.session_state has no attribute “count”. Did you forget to initialize it? More info: Add statefulness to apps - Streamlit Docs 1
The same is happening with my own code. I see similar questions in the forum, not yet addressed, so decided to raise a new one. Would appreciate any help… Thank you.
Streamlit ver. 1.4 | I do:
if 'count' not in st.session_state:
    st.session_state['count'] = 0
Have you tried that?
Dinesh | 0 |
streamlit | Using Streamlit | Form/start/stop button not working | https://discuss.streamlit.io/t/form-start-stop-button-not-working/21245 | Hi,
I am new to Streamlit. Was trying it out on my local machine, but ran into problems of streamlit not rerunning as per my understanding. Any help would be much appreciated. I am using Python 3.9. Following is the test case code that I run with the command:
streamlit run testcase.py
It opens the browser fine with the required forms and buttons. First time around the “Start” button works and it starts the program. But if I do any changes to the form values, it does not take effect when I again press the “Start” button. Also, when I press the “Stop” button, the program does not stop. I rolled back the streamlit to 0.86 version from the latest one, then also it does not work. Not sure what I am missing.
Regards.
testcase.py:
from time import sleep
import streamlit as st
bell = 'ON'
form = st.sidebar.form(key="myform")
stop_loss_target_total = form.number_input("SL Target", value=int(600))
profit_target_total = form.number_input("Profit Target", value=int(800))
sleep_interval = form.number_input("Sleep Interval", value=int(3))
start = form.form_submit_button("Start")
stop = st.sidebar.button("Stop")

def print_parameter_values():
    print(f'\tSleep Interval: {sleep_interval} Secs')
    print(f'\tProfit Target (per lot): {profit_target_total}')
    print(f'\tStop Loss (per lot): {stop_loss_target_total}')
    print(f'\tBell: {bell}')
    # print(f'\n')

print('Waiting for connection…')
print("Connected…")
if bell == 'ON':
    print(f'Bell: {bell}')
print(start, stop)

def run_strategy():
    i = 0
    while True:
        print(i, start, stop)
        print(f'Current Parameters For The Run:')
        print_parameter_values()
        print(f'Hi2 {start}, {stop}')
        sleep(sleep_interval)
        i = i + 1

if start:
    print('In start')
    print(start)
    run_strategy()
if stop:
    print('In stop')
    print(stop)
    st.write('Exiting…') | It is working now… no longer an issue. This can be closed. Thanks. | 1 |
streamlit | Using Streamlit | How to take Text Input from a user | https://discuss.streamlit.io/t/how-to-take-text-input-from-a-user/187 | Hi Team ,
The package looks great!. I wanted to know if there is a simple way to take a text input from the user and use the input to filter or create views on streamlit. Since there are many millions of data points , i am unable to use the slider or dropdown option for this
Thanks and regards,
Aravind | Hi Aravind_K_R
There are two ways to get text input from users.
First, there’s st.text_input 1.1k for when you only need a single line of text:
user_input = st.text_input("label goes here", default_value_goes_here)
Then there’s st.text_area 323 for then you want multiple lines of text:
user_input = st.text_area("label goes here", default_value_goes_here)
In both cases, the default_value_goes_here argument is optional. You can find more about these in our API documentation 1.4k.
Let me know if this answers your question! | 0 |
streamlit | Using Streamlit | Move the cursor to the bottom after logging in st.text_area | https://discuss.streamlit.io/t/move-the-cursor-to-the-bottom-after-logging-in-st-text-area/19288 | Hi,
I am using st.text_area to display logs from my background process, i.e. every time someone clicks a refresh button, I read the log file and write it using st.text_area.
The problem is that the cursor/scroll bar is at the top of the text_area. But in a log file we are most interested in what is at the end of the file (which is the most recent update). Is there some way to specify that the text_area scroll bar should scroll down to the bottom.
Thanks. | Any pointers on this will be helpful. Thanks. | 0 |
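No answer was posted here; since there is no built-in way to control the scroll position of st.text_area, one common workaround (an assumption, not from the thread) is to show only the last N lines of the log so the newest entries are visible without scrolling:

import streamlit as st

N = 50  # number of most recent log lines to display
with open("run.log") as f:  # example path to the background process's log file
    tail = "".join(f.readlines()[-N:])
st.text_area("Log (most recent entries)", value=tail, height=400)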
streamlit | Using Streamlit | Disable reloading of image every time a widget is updated | https://discuss.streamlit.io/t/disable-reloading-of-image-every-time-a-widget-is-updated/1612 | Hi,
First of all big kudos to the Streamlit team for this awesome piece of software.
I have run into a mild annoyance. I am creating an app to do image labeling (with multiple label classes). The categories are binary, and each label category is represented by a checkbox. The labels are stored and the next image is shown when upon clicking a ‘Submit’ button.
The problem:
Whenever a checkbox is checked, the app makes a new call to st.image(picture) rerenders the (same) picture, which makes the app ‘fade’ for a moment. I would like to avoid this fading if possible, or at least minimize it. Right now, it seems that the caching isn’t really doing a lot for the speed. I see some approaches.
1: Somehow pause rerendering until submit button is hit. I.e. ability to change the value of the widget without impacting the state of the app before clicking a ‘submit’ button (preferred option).
2: Make the caching work properly so that the image does not rerender (perhaps I am making some mistake regarding how caching is used?)
Can any of you good people give me some pointers?
Minimal example:
from PIL import Image
from collections import OrderedDict
import streamlit as st
categories= ['a', 'b', 'c']
checkboxes = OrderedDict({category: st.sidebar.checkbox(category) for category in categories})
### Omitted code with submit button and writing labels to file ###
@st.cache
def load_image(img_id):
image = Image.open(f"images/{img_id}.jpg")
return image
image = load_image("example")
st.image(image, use_column_width=True) | Hey @PeterT, welcome to Streamlit, and thanks for the kind words!
As you’ve noted, this fade happens every time your app re-runs (and therefore re-renders), if it takes longer than 1 second to complete the re-render. There isn’t a real workaround for this right now, but we have an open GitHub issue 212 that you can follow. There have been a number of requests for “form-like” widget behavior, and it’s something we’re discussing internally!
In the meantime, there are several sub-optimal workarounds if this is a real show-stopper for you:
Make your app run faster. (This may not be possible! It looks like you’re already using caching. Are there other long-running operations that could be cached?)
Modify the .stale-element CSS selector in ReportView.scss 39, extending the duration of the opacity transition (or removing the opacity change altogether). This would require you to build your own Streamlit, which is more work, so I’d certainly not recommend it to most users, but it’s an escape hatch if needed.
Tim | 0 |
streamlit | Using Streamlit | Streamlit Cloud Git LFS Support | https://discuss.streamlit.io/t/streamlit-cloud-git-lfs-support/21175 | Wassup Guys,
I am currently writing a transcriptor app. I've had it deployed to Streamlit Cloud for a couple of versions now.
However, I added a few files to Git LFS because they're too big to be uploaded normally.
I store the logic for my models as .npy, .pth, .bin files.
Everything works normally locally, but the cloud project seems to have issues cloning my stored files.
How can I solve this? I don’t want to use an ugly workaround. | Hi @fuccDebugging -
Streamlit Cloud supports using GitLFS, but if I’m not mistaken it does require credits to be purchased (from GitHub) over a certain size. How large are the files you are using?
Best,
Randy | 0 |