Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
121546
|
2
| null |
121403
|
0
| null |
I discussed this question with a few people, and the recommendation was to go with
- Search for products via image and text similarity separately and then look at a combined similarity (we could just sum/average the two similarity scores)
as it should be the easiest and most flexible option. My initial idea would most likely work as well, but it is less flexible.
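For illustration, a minimal sketch of what combining the two scores could look like (the similarity values and array names here are made up):
```
import numpy as np

# hypothetical similarity scores of a query against each candidate product
image_sim = np.array([0.91, 0.40, 0.75])
text_sim = np.array([0.80, 0.55, 0.60])

combined = (image_sim + text_sim) / 2   # simple average of the two scores
ranking = np.argsort(-combined)         # candidates ordered by combined similarity
print(ranking)
```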
Hopefully this helps somebody in the future.
| null |
CC BY-SA 4.0
| null |
2023-05-15T09:24:52.803
|
2023-05-15T09:24:52.803
| null | null |
67308
| null |
121547
|
2
| null |
121535
|
1
| null |
If you are getting a noisy trend, it may be an indicator of your data being nonstationary. In that case, you cannot decompose your series using a deterministic function.
There are approaches for testing nonstationarity, such as the augmented Dickey-Fuller test, the Phillips-Perron test, and the KPSS test, among others (to my knowledge, the first is the most common).
You can difference until your series becomes stationary (do not overdo it though, because each differencing operation loses information) and then reconsider the linear trend function.
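As a minimal sketch of that workflow (assuming statsmodels; the series here is a synthetic random walk just for illustration):
```
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
y = pd.Series(np.cumsum(rng.normal(size=500)))  # synthetic nonstationary series

p_value = adfuller(y)[1]           # ADF null hypothesis: the series has a unit root
if p_value > 0.05:                 # cannot reject nonstationarity
    y = y.diff().dropna()          # difference once, then re-test before differencing again
    p_value = adfuller(y)[1]
print(p_value)
```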
| null |
CC BY-SA 4.0
| null |
2023-05-15T12:16:54.863
|
2023-05-15T12:16:54.863
| null | null |
92451
| null |
121548
|
1
|
121633
| null |
0
|
28
|
I want to make an RNN that has, for example, more hidden layers or layer normalization.
I know that it is possible to make a custom RNN by subclassing nn.Module, but with this approach it is not possible to do efficient batch processing with a PackedSequence object (with variable-length sequences) in the same way, and with the same efficiency, as torch.nn.RNN.
I thought maybe the solution could be to subclass nn.RNN, but I don't know how to do that.
|
How to make an RNN model in PyTorch that has a custom hidden layer(s) and that is compatible with PackedSequence
|
CC BY-SA 4.0
| null |
2023-05-15T12:53:03.943
|
2023-05-19T09:42:43.183
| null | null |
149882
|
[
"machine-learning",
"rnn",
"pytorch"
] |
121550
|
1
| null | null |
0
|
54
|
I can't find built-in solutions for this in Keras and TensorFlow. On the site
[https://keras.io/api/applications/](https://keras.io/api/applications/) they report Time (ms) per inference step (CPU), but they do not describe how it was calculated or which function they used.
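For reference, a rough sketch of how one could measure average inference time per step (this is my own assumption of a reasonable procedure, not necessarily what keras.io used):
```
import time
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights=None)   # any model works here
x = np.random.rand(32, 224, 224, 3).astype("float32")

model.predict(x, verbose=0)                                # warm-up run
n_runs = 20
start = time.perf_counter()
for _ in range(n_runs):
    model.predict(x, verbose=0)
elapsed = (time.perf_counter() - start) / n_runs
print(f"{1000 * elapsed:.2f} ms per inference step (batch of {len(x)})")
```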
|
How to correctly measure the inference time and FLOPs of a model?
|
CC BY-SA 4.0
| null |
2023-05-15T14:13:56.007
|
2023-05-18T09:53:18.920
| null | null |
149880
|
[
"keras",
"tensorflow",
"cnn",
"model-evaluations",
"metric"
] |
121551
|
1
| null | null |
0
|
25
|
First-time poster here, so please forgive me and correct me on my posting mistakes...
I'm trying to teach the databricks/dolly-v2-3b LLM some data, which is just one sentence.
In the future I would like to be able to teach it larger amounts of text.
The thing is, I have gone through a tutorial from Sam Witteveen on YouTube and everything executes OK, but if I ask a question about my one-sentence data in the prompt, I get a weird answer. Not only that, even if I prompt "Hi" I get a weird answer, whereas with databricks/dolly-v2-3b in vanilla form I get the answer "Hello." when I prompt "Hi".
```
Enter your input (or 'q' to quit): hi
A decoder-only architecture is being used, but right-padding was detected! For correct generation results, please set `padding_side='left'` when initializing the tokenizer.
Model's response: fallCommentCommentCommentCommentCommentCommentCommentCommentakingatformatformatform],[@ 83atformreek Victoriaсп Victoriaсп Victoriaсп Victoriaсп Victoriaсп Victoriaсп Victoriaсп Victoriaсп Victoriaсп Victoria442fall pulled}(}(}(}(}(}(}(}(}(}(}(}(}(}(}(}(}(}( associateaxel二 Victoria yabatimylated unfortunate grievances PresFinding browyster}( associateaxel二 Victoria yabatimylated unfortunate grievances choirignignignignignignignignignignignignignignignign Borgramble
```
Can anyone direct me to where my issue might be?
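As a side note on the right-padding warning shown above, a minimal sketch of the tokenizer setting it refers to (assuming the Hugging Face transformers tokenizer from the tutorial; this addresses the warning, not necessarily the gibberish itself):
```
from transformers import AutoTokenizer

# pass padding_side at load time ...
tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-3b", padding_side="left")
# ... or set it on an already-created tokenizer object
tokenizer.padding_side = "left"
```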
|
dolly-v2-3b machine learning using trainer.train
|
CC BY-SA 4.0
| null |
2023-05-15T14:25:47.990
|
2023-05-15T15:49:53.910
|
2023-05-15T15:49:53.910
|
75157
|
149886
|
[
"machine-learning"
] |
121552
|
1
| null | null |
0
|
22
|
I have process data as a time series (0 min, 1 min, ... 999 min). I don't know what the variables mean; they are just labelled X1, X2, ... X52. Each row is the data at one time point. At a certain point the process becomes abnormal, and the data after that point are all abnormal. If the class value is 0 for normal data and 1 for abnormal data, the labels would look like:
0, 0, 0, ..., 1, 1, 1, 1
So I want to know when the value changes to 1. I will use change point detection (with the Python 'ruptures' library). In this situation, should I standardize my data?
I wonder (1) whether standardization is needed or not, and (2) if standardization downgrades the performance of the model, why that is.
(3) If standardization is not needed, is there any advantage to log-transforming the data? As far as I know, log transformation is used to make the distribution closer to a normal distribution (correcting skewness). Is that true?
I would appreciate answers to even just part of these questions.
|
Do I need to standardize time series data in change point detection?
|
CC BY-SA 4.0
| null |
2023-05-15T15:34:58.710
|
2023-05-16T06:53:11.887
|
2023-05-15T17:31:20.890
|
65153
|
149888
|
[
"machine-learning",
"time-series"
] |
121553
|
2
| null |
121538
|
2
| null |
One definition of data leakage is providing the model with data during training that would not be available at a future prediction time. The variable "total target achieved/units purchased as on date" is not data leakage according to that definition.
Your problem might be better framed as a time series prediction than a tabular prediction.
| null |
CC BY-SA 4.0
| null |
2023-05-15T17:08:04.050
|
2023-05-15T17:08:04.050
| null | null |
1330
| null |
121554
|
1
|
121566
| null |
0
|
32
|
I have time series data coming at 10-second intervals from a passenger counter in a bus: [10,10,10,10,9,9,9,5,5,5,10,10 ...]. I need to estimate the total number of passengers carried in 1 hour. When the count decreases, it means one or more people got off, and when it increases it means new people got on.
|
Which algorithm can I use to estimate total number of passengers carried from time series of passenger counts
|
CC BY-SA 4.0
| null |
2023-05-15T20:01:44.220
|
2023-05-16T08:27:02.153
| null | null |
149892
|
[
"time-series",
"counts"
] |
121555
|
2
| null |
111778
|
0
| null |
Maybe too late for the OP, but I had the same issue (the same code on the console sees a GPU but nothing in Jupyter); here is what I did:
- Check that your Python is the same for Jupyter and the console: `!which python` (Jupyter) must match `which python` (console).
- Check GPU compatibility with TensorFlow; you need to install CUDA and cuDNN (apparently you already have this; in my case I installed cuDNN with `pip install --user nvidia-cudnn-cu11` and checked the CUDA version with `nvcc --version`).
- Uninstall Jupyter and reinstall it. If you installed TensorFlow with sudo, do the same with Jupyter and then open it with `sudo jupyter notebook`; I personally use `pip install --user jupyter`. If you have problems removing Jupyter, install `pip-autoremove`. Whichever of sudo or the `--user` flag you chose, be consistent; you can also create a new Python venv, which is recommended.
I didn't install `tensorflow-gpu` and the GPU was still detected. For more information:
- https://www.tensorflow.org/install/pip
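A quick check from inside the notebook (standard TensorFlow API) to confirm whether the GPU is visible:
```
import tensorflow as tf

print(tf.__version__)
print(tf.config.list_physical_devices("GPU"))   # an empty list means no GPU is visible
```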
| null |
CC BY-SA 4.0
| null |
2023-05-15T21:38:08.450
|
2023-05-15T22:39:50.673
|
2023-05-15T22:39:50.673
|
84020
|
84020
| null |
121556
|
2
| null |
121538
|
2
| null |
Data leakage occurs when you train a model with data that is not available for future testing/inference, or when you use the same piece of data for training and then for validation and/or testing. This short [Kaggle article](https://www.kaggle.com/code/alexisbcook/data-leakage) sums it up nicely.
If you have a feature (e.g. `target_year_x`) that somehow quantifies how much of the target goals are currently at year `x` achieved, I fear that this could introduce bias in your model, and may technically be data leakage. High values for that feature indicate that the project is close to meeting its goals, and is more likely to meet its target; thus the model would learn (the very obvious thing) that high values for `target_year_x` are highly predictive for the projects' success.
My suggestion is to maybe try multiple models, i.e., one model to predict success in first year, one in second, etc. Or, separate model for separate project phases, if you can somehow logically split the projects. If you try that, be careful not to include features that relate to latter phases for the earlier models (e.g., don't include features that provide information about the projects' second year performance, for the model that predicts in the first year).
Or, as the other answer by Brian Spiering suggests, which is also a good option IMO, you might want to consider to frame it as a time series prediction problem if you need multiple chronological predictions per project, rather than a binary classification one.
| null |
CC BY-SA 4.0
| null |
2023-05-15T22:59:46.950
|
2023-05-15T22:59:46.950
| null | null |
142205
| null |
121557
|
2
| null |
121554
|
0
| null |
First you will need to aggregate your data by the hour so that it is in the right format. It should be in the following format `(t, c, x)` where `t` is the hour-timestamp, `c` is the passenger count for that hour, and `x` is any other feature you might have that you think can help better estimate/predict the count. `x` can also be empty, i.e., none.
Then, you have a myriad of algorithms that you can apply. See this [Wikipedia list](https://en.wikipedia.org/wiki/Time_series#Models) for an example. You can choose the algorithm based on (1) your expertise and (2) the data's statistical properties.
I have personally used [autoregressive moving average](https://en.wikipedia.org/wiki/Autoregressive_integrated_moving_average) models and their many variants, and some deep learning models, like this [tutorial](https://www.tensorflow.org/tutorials/structured_data/time_series) here. I've learned that the correct choice of model/algorithm depends on the problem itself, so my suggestion is for you to try out some of the algorithms presented in these links and see what works for you. If you have a problem with applying a particular algorithm, then you can ask a more specific question.
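As a minimal sketch of the aggregation step (assuming pandas; the column names and counter values are illustrative):
```
import pandas as pd

df = pd.DataFrame({
    "timestamp": pd.date_range("2023-05-15", periods=360, freq="10s"),
    "count": 10,   # placeholder counter readings
})
hourly = (
    df.set_index("timestamp")["count"]
      .resample("1h")
      .sum()       # or .mean()/.max(), depending on how you define the hourly count c
)
print(hourly.head())
```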
| null |
CC BY-SA 4.0
| null |
2023-05-15T23:16:16.467
|
2023-05-15T23:16:16.467
| null | null |
142205
| null |
121558
|
2
| null |
121467
|
1
| null |
It depends on the architecture chosen, but generally speaking, they do have some differences that can be measured as follows:
- How they learn and the training speed
Variational Autoencoders (VAEs) learn by modelling explicit densities. GANs, on the other hand, are a min-max game and therefore learn based on competition.
Because of the non-cooperative nature of GANs, their convergence is harder to ensure, so during training you can observe more oscillations and variability. Nevertheless, this is not necessarily bad; it depends on the variability that you want to introduce into your synthetic data.
GANs are harder to optimize and tend to suffer from mode collapse. This can usually be mitigated with the right loss function (i.e., you want to ensure more cooperative training).
The two are not mutually exclusive, and there are architectures that combine the pros and cons of both worlds, such as TimeGAN.
- Sampling space
VAEs generate new records by reconstructing the data from a low-dimensional representation of the original records. This process introduces some noise but allows the generation of high-quality data.
GANs generate data from any random input, which allows the generation of more diverse samples compared to VAEs. This is a big benefit for augmentation, for instance.
In a nutshell, GANs are harder to train, but when well fine-tuned they can generate outputs with more variability that are also more realistic compared to VAEs. The choice mainly depends on the objective of the generated data: if you want to stress-test a model or augment fraud cases, GANs are better candidates; if you just want to replicate more of the same data, for compression purposes for instance, VAEs are a great way to go.
Attention models can also be very interesting indeed, but that will depend on the data types you want to focus on (structured data, images, text, etc.).
| null |
CC BY-SA 4.0
| null |
2023-05-16T02:37:35.647
|
2023-05-16T02:37:35.647
| null | null |
149901
| null |
121559
|
2
| null |
87933
|
0
| null |
You can try ydata-profiling ([https://github.com/ydataai/ydata-profiling](https://github.com/ydataai/ydata-profiling)).
There's a property that measures whether the classes are imbalanced or not based on entropy, which might be helpful.
[https://github.com/ydataai/ydata-profiling/blob/master/src/ydata_profiling/model/pandas/imbalance_pandas.py](https://github.com/ydataai/ydata-profiling/blob/master/src/ydata_profiling/model/pandas/imbalance_pandas.py)
The concept for validating imbalanced classes is pretty straightforward: on a dataset of $n$ instances, if you have $k$ classes of size $C_i$, you can compute the Shannon entropy as follows:
$$H = -\sum_{i=1}^{k} \frac{C_i}{n} \log\left(\frac{C_i}{n}\right)$$
It is one of the most precise metrics I've found to validate whether a dataset is imbalanced, given that Shannon entropy is commonly used to measure the impurity or uncertainty within a set of data.
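As a small sketch of that computation (my own illustration, not the ydata-profiling source; normalizing by $\log(k)$ gives a balance score of 1 for perfectly balanced classes):
```
import numpy as np
import pandas as pd

y = pd.Series(["a"] * 90 + ["b"] * 10)           # toy labels, heavily imbalanced
p = y.value_counts(normalize=True).to_numpy()    # class proportions C_i / n
entropy = -(p * np.log(p)).sum()                 # Shannon entropy
balance = entropy / np.log(len(p))               # in [0, 1]; low values indicate imbalance
print(round(balance, 3))
```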
| null |
CC BY-SA 4.0
| null |
2023-05-16T02:50:00.247
|
2023-05-21T23:23:41.837
|
2023-05-21T23:23:41.837
|
149901
|
149901
| null |
121560
|
2
| null |
121536
|
1
| null |
You can run a contingency table chi-squared test.
```
import pandas as pd
import scipy.stats as stats
contingency_table = pd.crosstab(df['categorical'], df['y'])  # counts for each (category, class) pair
chi2_result = stats.chi2_contingency(contingency_table)      # test statistic, p-value, dof, expected counts
```
| null |
CC BY-SA 4.0
| null |
2023-05-16T03:09:27.990
|
2023-05-16T03:09:27.990
| null | null |
71218
| null |
121561
|
1
| null | null |
0
|
10
|
If I understand the math behind the classic SVM for non-separable data correctly, the addition of a non-support vector (non-SV) should theoretically not alter the solution. My reasoning is that since its slack variable is zero (because it is correctly classified) and its Lagrange multiplier is also zero (because it is a non-SV), it does not negatively affect either the value of $w^Tw$ ($w$ = weight vector) or the sum of the slack variables in the primal. Next, since the SVM objective function results in a convex problem, the solution should stay the same. My first question is: is my interpretation of the mathematics behind SVMs correct?
Now, I sometimes do observe changes in the solution (i.e., different values for the bias term b and the Lagrange multipliers alpha) when I look at e.g. this applet ([https://cs.stanford.edu/people/karpathy/svmjs/demo/](https://cs.stanford.edu/people/karpathy/svmjs/demo/)) (see example below). The only explanation I could come up with is that this is due to the use of a numerical solver that stops if it is 'close enough' to the global solution.
My second question is: is this a reasonable explanation?
As an illustration:
BEFORE:
[](https://i.stack.imgur.com/njcpG.png)
AFTER (more vertical line and shifted):
[](https://i.stack.imgur.com/GJn41.png)
|
Can the addition of a non-support vector change the SVM solution?
|
CC BY-SA 4.0
| null |
2023-05-16T04:26:25.860
|
2023-05-16T04:26:25.860
| null | null |
146114
|
[
"machine-learning",
"svm"
] |
121562
|
1
| null | null |
0
|
13
|
I have a Mask R-CNN model for instance segmentation with a ResNet-50 FPN backbone, trained in detectron2, and I want to extract embedding/feature vectors for visualizing inputs and hopefully detecting outliers. Which place would be best to extract the embedding vector from? I tried p5 of the FPN, whose shape is (1, 256, 51, 68), but since it is a tensor, should I flatten it and feed it into the visualizer?
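For illustration, a minimal sketch (plain PyTorch, not detectron2-specific) of turning such a feature map into a flat vector via global average pooling instead of naive flattening:
```
import torch
import torch.nn.functional as F

p5 = torch.randn(1, 256, 51, 68)                     # stand-in for the FPN p5 output
embedding = F.adaptive_avg_pool2d(p5, 1).flatten(1)  # shape (1, 256)
print(embedding.shape)
```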
|
Embedding vector of MaskRCNN (Resnet with FPN)
|
CC BY-SA 4.0
| null |
2023-05-16T05:15:06.437
|
2023-05-16T05:15:17.853
|
2023-05-16T05:15:17.853
|
131709
|
131709
|
[
"machine-learning",
"deep-learning",
"pytorch",
"computer-vision",
"image-segmentation"
] |
121563
|
2
| null |
121552
|
0
| null |
I am not very familiar with the library and the problem that you are facing, but I took a look at the [scientific publication](http://www.laurentoudre.fr/publis/TOG-SP-19.pdf) in the Github [documentation](https://github.com/deepcharles/ruptures#welcome-to-ruptures) about `ruptures`. On page 31, section 8. Presentation of the Python package, under Constraints it says:
```
All methods can be used whether the number of change points is known or not. In
particular, ruptures implements change point detection under a cost budget and with a linear
penalty term [17, 111].
```
And for Input it states:
```
Change point detection can be performed on any univariate or multivariate signal that
fits into a Numpy array. A few standard non-stationary signal generators are included.
```
Based on this I would say that you don't necessarily need to [standardize](https://en.wikipedia.org/wiki/Normalization_(statistics)) your data, if that's what you meant by standardize. Standardization is mainly used to speed up the training process for algorithms like logistic/linear regression, neural nets, etc., and I am quite certain that it does not affect the learning outcome of the model.
My suggestion is to try different evaluation metrics and cost functions, plot the results, and compare. You can also try standardization as you planned, but IMO it wouldn't give you a better model.
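For reference, a minimal sketch of feeding such a multivariate signal to `ruptures`, with optional standardization (the array shape mirrors the 1000 x 52 setup in the question; the penalty value is arbitrary):
```
import numpy as np
import ruptures as rpt

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0, 1, (600, 52)),     # "normal" regime
                    rng.normal(2, 1, (400, 52))])    # "abnormal" regime

X_std = (X - X.mean(axis=0)) / X.std(axis=0)         # optional standardization

algo = rpt.Pelt(model="rbf").fit(X_std)              # or .fit(X) for the raw data
change_points = algo.predict(pen=10)                 # indices of detected breakpoints
print(change_points)
```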
| null |
CC BY-SA 4.0
| null |
2023-05-16T06:53:11.887
|
2023-05-16T06:53:11.887
| null | null |
142205
| null |
121564
|
1
| null | null |
0
|
16
|
I'm training an SGD classifier. Before I apply scaling it only gives an accuracy of 0.02; after I apply scaling, the accuracy is 0.85. What could be the problem?
```
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import classification_report

clf = SGDClassifier(loss="hinge", penalty="l2", n_jobs=-1, max_iter=1000).fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(classification_report(y_test, y_pred))
```
[](https://i.stack.imgur.com/IMy2p.png)
[](https://i.stack.imgur.com/GcwPp.png)
|
Linear SGD Classifier not training without data scaling?
|
CC BY-SA 4.0
| null |
2023-05-16T07:01:18.993
|
2023-05-16T10:09:49.430
|
2023-05-16T10:09:49.430
|
83980
|
83980
|
[
"classification"
] |
121565
|
1
| null | null |
0
|
19
|
I usually perform sensitivity analysis on physical systems. So, for one configuration I have one answer, and I can build, for example, a design of experiments and compute the sensitivity of each parameter.
But this time, I would like to perform a sensitivity analysis on a time series: if I change a parameter, it can have an impact later, not immediately. I would like to analyse data over a long period and not only in the short term. Do you know of existing methods that perform such an analysis?
Or do I have to split my time series into time slots and study each slot separately?
|
sensitivity analysis on time series
|
CC BY-SA 4.0
| null |
2023-05-16T08:15:07.477
|
2023-05-19T09:35:59.383
| null | null |
52972
|
[
"time-series"
] |
121566
|
2
| null |
121554
|
0
| null |
Maybe I'm missing something, but it seems to me that, to know the total number of people that have been in a bus during an hour, you just need to start with the initial value of people for that hour and add all the increments (not the decrements) over that hour.
For instance, if during one hour we had the following counter values:
```
10, 10, 10, 10, 9, 9, 9, 5, 5, 5, 10
```
We would first compute the successive differences (starting at the first value):
```
10, 0, 0, 0, -1, 0, 0, -4, 0, 0, +5
```
And then we would add only the positive values together: 10 + 5 = 15
Please, clarify if my understanding of the problem is not correct.
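A small sketch of that computation with numpy, using the counter readings above:
```
import numpy as np

counts = np.array([10, 10, 10, 10, 9, 9, 9, 5, 5, 5, 10])
increments = np.diff(counts)                           # successive differences
total = counts[0] + increments[increments > 0].sum()   # initial value plus positive increments
print(total)                                           # 15
```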
| null |
CC BY-SA 4.0
| null |
2023-05-16T08:27:02.153
|
2023-05-16T08:27:02.153
| null | null |
14675
| null |
121567
|
2
| null |
121564
|
0
| null |
From the User Guide Tips for Practical Use:
[https://scikit-learn.org/stable/modules/sgd.html#tips-on-practical-use](https://scikit-learn.org/stable/modules/sgd.html#tips-on-practical-use)
Stochastic Gradient Descent is sensitive to feature scaling, so it is highly recommended to scale your data. For example, scale each attribute on the input vector X to [0,1] or [-1,+1], or standardize it to have mean 0 and variance 1. Note that the same scaling must be applied to the test vector to obtain meaningful results. This can be easily done using StandardScaler:
```
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X_train) # Don't cheat - fit only on training data
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test) # apply same transformation to test data
```
Without seeing your data and your model, it's hard to say what's going on. For example, is your data set skewed? Looking at precision/recall/F1 scores as well as the confusion matrix can also sometimes help you understand what is going well and what is going wrong with a classifier. HTH
| null |
CC BY-SA 4.0
| null |
2023-05-16T08:59:50.663
|
2023-05-16T08:59:50.663
| null | null |
146483
| null |
121568
|
1
| null | null |
0
|
32
|
I'm trying to understand how the optimal Bayes classifier works. Given that the function we try to maximize when making a new prediction does not seem to depend on the instance we are trying to classify, is it correct to state that the optimal Bayes classifier would always predict the most probable class, no matter what the input is?
EDIT:
I've been studying the subject in "Machine Learning, Tom Mitchell, McGraw Hill, 1997", where it is stated that the prediction for a new instance is the class given by $\underset{v_j \in V}{\operatorname{arg max}}\sum_{h_i \in H}{P}(v_j|h_i){P}(h_i|D)$
Where $V$ is the set of all possible classes, $H$ is the space of the hypothesis and $D$ is the train dataset.
|
Does a classifier based on optimal bayes classifier equation classify every new instance the same way?
|
CC BY-SA 4.0
| null |
2023-05-16T09:36:17.680
|
2023-05-16T11:26:41.023
|
2023-05-16T10:25:41.900
|
149910
|
149910
|
[
"machine-learning",
"bayesian"
] |
121569
|
1
| null | null |
1
|
27
|
I am trying to train a MobileNetV2 on a custom dataset for an image classification task.
The dataset has 864 images, split 70%/20%/10% and balanced between the 3 different classes.
Weights are pre-loaded from ImageNet; I froze the net and added on top a GlobalAveragePooling layer, a Dropout (with 50% drop probability), and a Dense layer with 3 units and softmax as the activation function, since I want the output layer to give me an output like (1,0,0) if the inference image is from the first class, and so on (a minimal sketch of this setup follows the list below).
- image size: 96x96 (I normalized, too)
- batch_size: 32
- Learning rate: 0.001
- trainable params: 3843
- optimizer: sgd ('adam' doesn't improve my accuracy)
- loss: categorical cross entropy
- metrics: accuracy
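A minimal sketch of the setup described above (my reconstruction in Keras, assuming TensorFlow; not the exact original code):
```
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet"
)
base.trainable = False                          # freeze the backbone

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.5),
    layers.Dense(3, activation="softmax"),
])
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
```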
Training for 20 epochs gives me these results:
[](https://i.stack.imgur.com/Sh1Cr.png)
After that I decided to try some fine-tuning, by freezing only the first 100 layers of the net.
Trained again for 10 epochs, that's what I get:
[](https://i.stack.imgur.com/5z2AB.png)
My net is overfitting, but I don't know why it's happening or what I should do to improve my accuracy.
Edit: I also tried increasing the number of images with additional source images and with data augmentation, up to more than 3K images, but it didn't help.
|
MobileNet validation loss not decreasing over time
|
CC BY-SA 4.0
| null |
2023-05-16T10:09:46.893
|
2023-05-16T10:35:00.183
|
2023-05-16T10:35:00.183
|
149908
|
149908
|
[
"machine-learning",
"deep-learning",
"neural-network",
"image-classification",
"overfitting"
] |
121570
|
2
| null |
121528
|
1
| null |
I fixed the problem as indicated in my comment above. I installed python 3.8.0 as part of creating the environment, as it is required for installing tensorflow.
Not directly related to the original question, but a few wrinkles that could help others:
- Using conda gave me errors about unresolvable issues several times, e.g. when installing TensorFlow. Using the option "--experimental-solver=libmamba" with conda solved these issues.
- "conda search" only returned version 1.2.0 of tensorflow-datasets. I needed 4.6.0 and had to use pip.
Further issues with TensorFlow specific to the developer certificate installation requirements:
- Uninstalled (using conda again) and reinstalled TensorFlow from pip to overcome it not registering my physical graphics card.
- Rolled back TensorFlow from 2.10.0 to 2.9.0 using pip because of an incompatibility of 2.10.0 with the installed numpy version.
| null |
CC BY-SA 4.0
| null |
2023-05-16T10:56:58.063
|
2023-05-18T10:07:40.750
|
2023-05-18T10:07:40.750
|
143103
|
143103
| null |
121571
|
2
| null |
121568
|
1
| null |
What you seem to miss is that each hypothesis is itself a function that maps input to outputs. According to [this post](https://machinelearningmastery.com/what-is-a-hypothesis-in-machine-learning#AdThrive_Content_3_desktop), a hypothesis is "an instance or specific candidate model that maps inputs to outputs and can be evaluated and used to make predictions."
Therefore, the $\arg \max$ in the optimization problem above is over different functions mapping inputs to outputs, taking into account the entire training set.
Once the optimal hypothesis within the hypothesis space is found, it has the ability to classify individual instances to their correct classes.
| null |
CC BY-SA 4.0
| null |
2023-05-16T11:26:41.023
|
2023-05-16T11:26:41.023
| null | null |
135316
| null |
121572
|
1
|
121573
| null |
0
|
41
|
I came across this question asking about the number of parameters in an RNN layer. From my understanding it is the number of weights and biases, which in this case is five. Can someone confirm this? [](https://i.stack.imgur.com/XuBRZ.png)
Question: How many parameters are there in this RNN (i.e. weights and bias values)?
|
How many parameters in an RNN?
|
CC BY-SA 4.0
| null |
2023-05-16T13:30:57.197
|
2023-05-16T14:17:41.753
| null | null |
149915
|
[
"rnn"
] |
121573
|
2
| null |
121572
|
1
| null |
Normally, when counting the number of parameters in a neural network, you are referring to the total size of the matrices, not the number of matrices.
With the given data, we have that:
- $U \in \mathbb{R^{4\times3}}$ → 12 parameters
- $W \in \mathbb{R^{3\times3}}$ → 9 parameters
- $b \in \mathbb{R^3}$ → 3 parameters
- $V \in \mathbb{R^{3\times2}}$ → 6 parameters
- $c \in \mathbb{R^2}$ → 2 parameters
Totalling 32 parameters ( = 9 + 12 + 3 + 6 + 2).
| null |
CC BY-SA 4.0
| null |
2023-05-16T14:17:41.753
|
2023-05-16T14:17:41.753
| null | null |
14675
| null |
121574
|
2
| null |
107905
|
0
| null |
Since the link from the previous answer doesn't work anymore, you can download the dataset from here now:
[https://github.com/MrHeadbang/machineLearning/blob/main/mnist.zip](https://github.com/MrHeadbang/machineLearning/blob/main/mnist.zip)
| null |
CC BY-SA 4.0
| null |
2023-05-16T14:34:23.483
|
2023-05-16T14:34:23.483
| null | null |
132035
| null |
121575
|
1
| null | null |
0
|
9
|
When performing statistical calculations like variance, mean squared error, and other equations, the approach differs depending on whether the data represents a sample from a population or the entire population itself. Specifically, the summation is divided by 'n' for population data and by 'n-1' for sample data. I'm curious about how these concepts are implemented in numpy, pandas, and scikit-learn. I tested the mean_squared_error function from scikit-learn, and it seems to divide by 'n' instead of 'n-1'. Treating a dataset of 300 cars as population data seems unreasonable. Is this difference in treatment between sample and population data not significant? Is it actually not that important, and have I dived too deep into this detail while we generally treat our datasets as populations by convention? Should we be concerned about sample/population measurements in machine learning?
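For illustration, a small example of how the divisor is controlled explicitly in numpy and pandas via the `ddof` (delta degrees of freedom) argument:
```
import numpy as np
import pandas as pd

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

print(np.var(x))            # divides by n   (population variance, ddof=0 by default)
print(np.var(x, ddof=1))    # divides by n-1 (sample variance)
print(pd.Series(x).var())   # pandas defaults to ddof=1
```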
|
calculation for population vs for group
|
CC BY-SA 4.0
| null |
2023-05-16T15:02:03.400
|
2023-05-16T15:02:03.400
| null | null |
149495
|
[
"statistics",
"mathematics"
] |
121576
|
1
| null | null |
0
|
155
|
I was trying to create a pipeline using LangChain and GPT4All (gpt4all-converted.bin). The pipeline ran fine when we tried it on a Windows system, but now when I try to run the same code on a RHEL 8 AWS (p3.8x) instance it generates a gibberish response.
This is my code -
```
from langchain import PromptTemplate, LLMChain
from langchain.llms import GPT4All
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
gpt4all_path = 'Models/gpt4all-converted.bin'
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
llm = GPT4All(model=gpt4all_path, callback_manager=callback_manager, verbose=True, temp=0, n_predict=512, n_ctx=2048)
prompt_temp = """
Below is an instruction that describes a task. Write a response that appropriately completes the request.
> How many letters are there in the English alphabet?
There 26 letters in the English Alphabet
> Question: {question}
> Reply:
"""
prompt = PromptTemplate(template=prompt_temp, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
response = llm_chain.run("write me a story about a lonely computer?")
```
And this is what I am getting -
```
print(response)
'\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f#\x05\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f#\x05\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f# thealst\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f'
```
Can anyone help me to understand the problem here?
|
GPT4All with Langchain generating gibberish in RHEL 8
|
CC BY-SA 4.0
| null |
2023-05-16T15:12:54.907
|
2023-05-16T15:12:54.907
| null | null |
108477
|
[
"python",
"nlp",
"gpt",
"aws"
] |
121577
|
1
|
121578
| null |
2
|
74
|
I'm working on a Classification problem as a side project and I'm receiving results contrary to what I'd expect.
With 100,000 records, each with 7 components for X, the model performs much better with 70% of the data being used for testing, rather than what I'd expect: that a 70% training split would work better.
Has anyone had this before or know why this could be? I'm wondering if maybe the large size of the data is worsening the model somehow.
|
Random Forest Classification model performing much better with 70:30 TEST:TRAIN rather than the opposite
|
CC BY-SA 4.0
| null |
2023-05-16T15:49:54.960
|
2023-05-16T17:57:34.340
| null | null |
149919
|
[
"machine-learning",
"classification",
"random-forest"
] |
121578
|
2
| null |
121577
|
1
| null |
Is this data imbalanced, like 95% target A versus 5% target B? If it is, I would suggest that the test set sample was a poor representation of the under-represented target to be classified. Could you augment the data set to increase its size? E.g. if it's a time series, use other data points; for image recognition, rotate the images or shift the hues, contrast, or orientation. Dealing with imbalance has alternative solutions if that's the issue.
---
From the comments:
The issue is 92% for a 30:70 train:test split and 80% for a 70:30 train:test split.
You could simply say 80% is good enough and proceed with the orthodox 70:30 split. If you are proceeding with the 30:70 split you would need to be clear about that; if it's a manuscript, the reviewer would likely return it. Personally, I don't think it's cool.
I get the impression that 3 of the targets under classification have approximately equal proportions (just guessing). The issue is whether there is a minority part of the classification which is getting misrepresented in the testing split.
There are two approaches I would use (as a data scientist):
- Reduce the problem to the 3 majority categories and see if the discrepancy between 30:70 and 70:30 continues
- Augment the data and use a standard 70:30 split; now the 30% is more like the original 70% due to augmentation.
My suspicion is that in point 1 the discrepancy will disappear; thus you've identified the problem and can consider whether it's worth moving to point 2.
If that is correct, the question becomes what the fourth category represents and how important it is to you. For example, in cancer that 4th category (the smallest) could be really important because it carries the highest mortality. If it's just not important - a minority variant that no-one cares about - you just state that the classification for this category needs further development (which might never happen).
It's area-specific. In my problems I can't discount a minority classification, but that's because it might become the variant that takes over the world and I've just missed it (I do evolutionary selection). In your problem, and I get the impression in many business-related analytics, you can.
| null |
CC BY-SA 4.0
| null |
2023-05-16T16:16:16.947
|
2023-05-16T17:57:34.340
|
2023-05-16T17:57:34.340
|
67203
|
67203
| null |
121579
|
2
| null |
121577
|
2
| null |
The results you are receiving may be affected by variance.
When you evaluate the model on only 30% of the data, you will have low bias but more variance.
The imbalance of the target should not be a problem as long as you stratify your split.
Alternatives to consider:
- Use the out-of-bag (OOB) score of the random forest, that is, the score on the samples that are excluded during bootstrap. That will give you a good approximation of the test performance, and since those samples are always approximately 30% of the training data, it will be a fairer evaluation (see the sketch after this list).
- Create learning curves
Evaluate the [model performance](https://www.datacamp.com/tutorial/tutorial-learning-curves) as a function of sample size, so you can see if the model actually benefits from adding more data to the training set.
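A minimal sketch of the OOB evaluation mentioned in the first point (assuming scikit-learn; the data here is synthetic, just mirroring the 7-feature setup):
```
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=7, random_state=0)

clf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
clf.fit(X, y)
print(clf.oob_score_)   # accuracy on the samples each tree did not see during bootstrap
```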
Hope it helps!
| null |
CC BY-SA 4.0
| null |
2023-05-16T16:43:28.800
|
2023-05-16T16:43:28.800
| null | null |
92050
| null |
121580
|
1
| null | null |
0
|
15
|
I'm training a Keras model with the layers included below.
The problem I face is that the val_loss is constant and does not decrease.
The model is meant to classify diabetic retinopathy.
The code snippet is as below:
```
input_layer = Input(shape = (224,224,3))
base_model = ResNet50(include_top = False, input_tensor = input_layer, weights = "imagenet")
x = GlobalAveragePooling2D()(base_model.output)
x = Dropout(0.5)(x)
x = Dense(256, activation = "relu")(x)
x = Dropout(0.3)(x)
x = Dense(128, activation = "relu")(x)
#x = Dropout(0.5)(x)
out = Dense(5, activation = 'softmax')(x)
model = Model(inputs = input_layer, outputs = out)
```
And,
```
optimizer = keras.optimizers.Adam(learning_rate = 3e-4)
es = EarlyStopping(monitor='val_loss', mode='min', patience = 8, restore_best_weights=True)
rlrop = ReduceLROnPlateau(monitor='val_loss', mode='min', patience = 3, factor = 0.5, min_lr=1e-6)
callback_list = [es, rlrop]
model.compile(optimizer = optimizer, loss = "categorical_crossentropy", metrics = ["accuracy"])
```
The main problem is the overfitting of the model; please help, I'm not very experienced with this.
Please let me know if there's anything more I need to add for reference.
|
How to improve the val_loss in a keras sequential model for classification purpose
|
CC BY-SA 4.0
| null |
2023-05-16T17:27:51.233
|
2023-05-16T17:29:43.580
|
2023-05-16T17:29:43.580
|
149917
|
149917
|
[
"keras"
] |
121581
|
1
| null | null |
0
|
8
|
I am trying to use q-learning for a discrete observation space that is represented by:
- buffer: list of 200 integer values in [0,10]
- discard_counter: list of 200 integer values in [0, 4]
- capacity: list of 30 integer values in [-1,10]
I think buffer and discard counter can be combined into one 2D array as for every buffer entry there is a value and a discard counter.
So in order to represent all states I use the following method:
```
def obs_to_state(self):
#maps obs to int value, because i am unable to think in 6 dimensions
obs = self.env.get_obs()
arr1 = np.zeros((200, 11, 5)) # array representing buffer and discard_counter
for i in range(len(obs['buffer'])):
dc = obs['discard_counter'][i]
val = obs['buffer'][i]
arr1[i, val, dc] = 1
arr1 = list(arr1.flatten())
arr2 = np.zeros((30, 12)) # capacity array
for i, v in enumerate(obs['capacity']):
arr2[i, v] = 1
arr2 = list(arr2.flatten())
return arr1 + arr2
```
this method gives me a list of length 200*11*5 + 30*12 = 11360. However, each of these values can be either 0 or 1 (and most combinations are possible states that can be reached). This results in 2**11360 possible states overall, which is definitely too big for using q-learning with a q-table. In order to create the q-table I have to create an array of size (number of states) x 2 (there are only 2 actions). Am I missing something, or is q-learning not a good idea for this task?
Here is the description of the task and the code I have so far:
A startup wants to run multiple workflows simultaneously. Given the low budget of the
company, only one local resource (LR) and some EC2 instances are available to run
all the workflow tasks. The scheduler that coordinates the execution wants to learn
when a particular task should be executed locally or offloaded to the cloud.
Consider the following restrictions:
Tasks durations are discrete values between 1 and 10 time slots.
LR capacity is 30 time slots. If capacity is not enough, tasks will be discarded.
Processing rate of LR is 2 time slots per Q-learning iteration.
Offloading to cloud costs 4 time slots.
An unlimited buffer can be used to queue tasks until LR is available. Buffered tasks
are discarded after 4 time slots.
The task arrival frequency is totally up to you.
- Create a custom Gym environment out of your MDP model.
- Implement Q-learning and obtain the optimal policy that maximize the
cumulative reward.
Code:
The Agent class is a work in progress and num_states would need to be 2**11360, but my laptop cannot handle this.
```
import gym
from gym import spaces
from gym.envs.registration import register
import random
from gym.utils.env_checker import check_env
import numpy as np
from itertools import product
# source: https://www.gymlibrary.dev/content/environment_creation/
# source: https://www.gymlibrary.dev/content/basic_usage/
class Env(gym.Env):
def __init__(self) -> None:
super().__init__()
self.N = 200
self.step_cnt = 0
self.schedule_time = 2
self.reward = 0
self.observation_space = spaces.Dict({
"capacity": spaces.Box(low = 0, high = 10, shape=(30,), dtype=int),
"buffer": spaces.Box(low = 0, high = 10, shape=(self.N,), dtype=int),
"discard_counter": spaces.Box(low = 1, high = self.schedule_time, shape=(self.N,), dtype=int)
})
# execute (1) or offload (2)
self.action_space = spaces.Discrete(2)
def get_obs(self):
return({
"capacity": self.capacity,
"buffer": self.buffer,
"discard_counter": self.discard_counter
})
def create_tasks(self):
result = []
while self.N > 0:
# Generate a random self.N between 1 and 10 (inclusive)
value = random.randint(1, 10)
value = min(value, self.N)
result.append(value)
self.N -= value
return result
def reset(self, seed = None, options = None):
#super.reset(seed = seed) super has no reset method...
self.capacity = [0 for x in range(30)]
self.buffer = [0 for x in range(self.N)]
self.discard_counter = [0 for x in range(self.N)]
self.tasks = self.create_tasks()
observation = self.get_obs()
return observation
def is_terminated(self):
if(sum(self.capacity) + sum(self.buffer) + sum(self.tasks) == 0):
return True
return False
def update_buffer(self):
# deal with discarded tasks
for i, t in enumerate(self.discard_counter):
if t == 0 and self.buffer[i] > 0:
self.buffer[i] = 0
self.reward -= 1000
elif self.buffer[i] != 0:
self.discard_counter[i] -= 1
# move tasks to next position if possible
if sum(self.buffer) != 0:
while self.buffer[0] == 0:
for i in range(len(self.buffer) - 1):
self.buffer[i] = self.buffer[i+1]
self.discard_counter[i] = self.discard_counter[i+1]
self.buffer[-1] = 0
def update_capacity(self, action):
# fill capacity from buffer
#while sum(self.buffer) > 0 and sum(self.capacity) < len(self.capacity) + 1:
for i, buffer_value in enumerate(self.buffer):
if buffer_value == 0:
break
if sum(self.capacity) + buffer_value > len(self.capacity):
break
for j, x in enumerate(self.capacity):
if x == 0:
self.capacity[j] = buffer_value
self.discard_counter[i] = 0
self.buffer[i] = 0
self.update_buffer()
# capacity empty
if sum(self.capacity) < 1:
return
# move tasks to next position if possible
while self.capacity[0] < 1:
for i in range(len(self.capacity)-1):
self.capacity[i] = self.capacity[i+1]
self.capacity[-1] = 0
if action == 1: # execute
self.capacity[0] -= 2
else: # offload to EC2
self.capacity[0] = 2
def add_task(self):
# if enough capacity add task to LR
if sum(self.capacity) + self.tasks[self.step_cnt // self.schedule_time] < len(self.capacity) + 1:
for i in range(len(self.capacity)):
if self.capacity[i] == 0:
self.capacity[i] = self.tasks[self.step_cnt // self.schedule_time]
return
else:
for i in range(len(self.buffer)):
if(self.buffer[i] == 0):
self.buffer[i] = self.tasks[self.step_cnt // self.schedule_time]
self.discard_counter[i] = self.schedule_time
return
def step(self, action):
'''
in theory I should secure that tasks that are already offloaded are not offloaded again
however the agent will (should) learn this.
'''
self.reward -= 1
self.update_capacity(action)
if self.step_cnt % self.schedule_time == 0 and self.step_cnt // self.schedule_time < len(self.tasks):
self.add_task()
self.step_cnt += 1
return self.get_obs(), self.reward, self.is_terminated(), {} #, False
def close(self):
# close files or windows opened if there are any
# not required here
pass
class Agent():
def __init__(self, env: Env):
self.env = env
self.epsilon = 0.7
self.learning_rate = 0.1
num_states = 200*11*5 + 30*12
self.Q_table = np.zeros((num_states, 2))
def obs_to_state(self):
#maps obs to int value, because i am unable to think in 6 dimensions
obs = self.env.get_obs()
arr1 = np.zeros((200, 11, 5)) # array representing buffer and discard_counter
for i in range(len(obs['buffer'])):
dc = obs['discard_counter'][i]
val = obs['buffer'][i]
arr1[i, val, dc] = 1
arr1 = list(arr1.flatten())
arr2 = np.zeros((30, 12)) # capacity array
for i, v in enumerate(obs['capacity']):
arr2[i, v] = 1
arr2 = list(arr2.flatten())
return arr1 + arr2
def get_action(self):
if random.random() < self.epsilon:
# exploit
state = self.obs_to_state()
return np.argmax(self.Q_table[state]) + 1
else:
# explore
return random.randint(1,2)
if __name__ == "__main__":
gym.register(
id='Env-v0', # Unique ID for your environment
entry_point=__name__ + ':Env', # Entry point to your custom environment class
)
# skipped step: https://www.gymlibrary.dev/content/environment_creation/#creating-a-package
env = gym.make('Env-v0')
agent = Agent(env)
obs = env.reset()
test = agent.obs_to_state()
print(test)
```
|
find q-table for discrete action space
|
CC BY-SA 4.0
| null |
2023-05-16T19:46:59.763
|
2023-05-16T19:46:59.763
| null | null |
149925
|
[
"reinforcement-learning",
"q-learning",
"openai-gym"
] |
121583
|
1
|
121585
| null |
3
|
59
|
I am doing a binary classification task with Keras and my model directly outputs either 0 or 1. Typically I compile the model as below:
```
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), metrics=['accuracy'])
```
The dataset I have is imbalanced, only ~10% of samples are positive. I am aware that in this case accuracy is not a good metric and I can see a 90% accuracy even if the model is the same as random guessing.
The problem is, it seems Keras does not provide an F1 score as an alternative in the `metrics` parameter of its `compile()` method (the list of metrics Keras provides is [here](https://www.tensorflow.org/api_docs/python/tf/keras/metrics)). What else can I pass to the `metrics` parameter so that I can have a better understanding of the model's performance during training?
EDIT1
To make the question more complete, I included a sample model definition below:
```
input_shape = image_size + (3,)
num_classes = 2
model = Sequential([
layers.Rescaling(1./255, input_shape=input_shape),
layers.Conv2D(filters=64, kernel_size=(3, 3), activation='relu'),
layers.Conv2D(filters=64, kernel_size=(3, 3), activation='relu'),
layers.MaxPooling2D(pool_size=(2, 2),strides=(2, 2)),
layers.Dense(4096, activation='relu'),
layers.Dense(4096, activation='relu'),
layers.Dense(num_classes, activation='softmax')
])
model.build((None,) + input_shape)
optimizer = keras.optimizers.Adam()
model.compile(
optimizer=optimizer,
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy']
)
```
EDIT2
Following @noe's answer and some posts [here](https://stackoverflow.com/questions/48851558/tensorflow-estimator-valueerror-logits-and-labels-must-have-the-same-shape), I can make AUC work now. A few parameters must be set correctly:
- layers.Dense(1, activation='sigmoid')
- loss=tf.keras.losses.BinaryCrossentropy(),
- metrics=['AUC']
Among them, `layers.Dense(1, activation='sigmoid')` seems to be the most critical: we need `sigmoid` to map the output into the (0, 1) range to make AUC work.
```
input_shape = image_size + (3,)
num_classes = 2
model = Sequential([
layers.Rescaling(1./255, input_shape=input_shape),
layers.Conv2D(filters=64, kernel_size=(3, 3), activation='relu'),
layers.Conv2D(filters=64, kernel_size=(3, 3), activation='relu'),
layers.MaxPooling2D(pool_size=(2, 2),strides=(2, 2)),
layers.Dense(4096, activation='relu'),
layers.Dense(4096, activation='relu'),
layers.Dense(1, activation='sigmoid')
])
model.build((None,) + input_shape)
optimizer = keras.optimizers.Adam()
model.compile(
optimizer=optimizer,
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=['AUC']
)
```
|
Which metric to use for imbalanced data in TensorFlow/Keras
|
CC BY-SA 4.0
| null |
2023-05-17T01:58:20.377
|
2023-06-01T03:11:54.767
|
2023-06-01T03:11:54.767
|
149935
|
149935
|
[
"keras",
"tensorflow",
"metric"
] |
121584
|
1
| null | null |
0
|
25
|
In deep learning, one way to determine whether the training has converged is to observe the movement of the loss values over iterations or epochs. One can choose any $\epsilon$ threshold and any metric. If the value is less than $\epsilon$, then the training has converged.
My question is: how big is the $\epsilon$ value that is usually used? Are there examples of papers that specifically state the threshold?
|
How big is the threshold that is usually used in determining the convergence of loss values in deep learning?
|
CC BY-SA 4.0
| null |
2023-05-17T03:24:23.430
|
2023-05-17T09:36:52.010
| null | null |
149431
|
[
"deep-learning",
"convergence"
] |
121585
|
2
| null |
121583
|
1
| null |
AUC: Area Under the ROC Curve. Check some references: [1](https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc), [2](https://stats.stackexchange.com/q/260164/40048), [3](https://datascience.stackexchange.com/a/94654/14675).
AUC = 0.5 means the classifier is random guessing. AUC = 1 is the perfect classifier.
| null |
CC BY-SA 4.0
| null |
2023-05-17T04:50:52.587
|
2023-05-17T04:50:52.587
| null | null |
14675
| null |
121586
|
1
|
121592
| null |
0
|
40
|
I'm trying to find conferences that publish applied data science papers. I'm only interested in top-ranked conferences, and I notice quite a number of them are quite theoretical, e.g. IJCAI, NIPS, etc.
Thanks
|
Where can I find the applied data science research papers?
|
CC BY-SA 4.0
| null |
2023-05-17T05:11:30.340
|
2023-05-17T09:32:49.577
| null | null |
121222
|
[
"machine-learning",
"research",
"artificial-intelligence"
] |
121587
|
2
| null |
121584
|
0
| null |
There is no threshold reference value. Different tasks, losses and datasets lead to radically different loss value ranges and different amounts of noise. This is a somewhat experimentally defined thing.
Also, the training stop criterion is not always based on a threshold. Many times, you just set the training to last for N epochs.
| null |
CC BY-SA 4.0
| null |
2023-05-17T05:13:17.913
|
2023-05-17T09:36:52.010
|
2023-05-17T09:36:52.010
|
14675
|
14675
| null |
121588
|
1
| null | null |
0
|
18
|
So I have around 462 images and I can't really get more. I am using a pretrained MobileNetV3 model with the corresponding weights. I am facing a huge overfitting problem and have no real solution to it. I have not personally used MobileNet much, but I am looking into it as I need to deploy it on a device, since it is lightweight.
I have tried dropout along with weight decay and tuning the learning rate. Additionally, I have used Gaussian noise as well, but it made no difference whatsoever. Could this be a data problem or a model issue?
Additionally, I got a test accuracy of 88%, but I don't think that translated into anything, as the images were mostly misclassified.
[](https://i.stack.imgur.com/t81OL.png)
This is my code so far:
```
# Set the random seed for reproducibility
random.seed(42)
IMG_WIDTH = 232
IMG_HEIGHT = 232
class AddGaussianNoise(object):
def __init__(self, mean=0.0, std=1.0):
self.std = std
self.mean = mean
def __call__(self, tensor):
return tensor + torch.randn(tensor.size()) * self.std + self.mean
def __repr__(self):
return self.__class__.__name__ + "(mean={0}, std={1})".format(
self.mean, self.std
)
# Define the data transformation
transform = transforms.Compose(
[
transforms.Resize(
(IMG_HEIGHT, IMG_WIDTH), interpolation=Image.BILINEAR
), # Resize the images
# transforms.CenterCrop(224), # Crop the images to 224x224 about the center
transforms.ToTensor(), # Convert the image to PyTorch Tensor data type
transforms.Normalize(
mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]
), # Normalize the images
# AddGaussianNoise(0.0, 2.0),
]
)
# Load the dataset
dataset = datasets.ImageFolder(root="real_data/", transform=transform)
test_dataset = datasets.ImageFolder(root="test-set/", transform=transform)
print(len(dataset))
# Split the dataset into training and validation sets
train_size = 0.7
val_size = 0.3
train_len = int(len(dataset) * train_size)
print(train_len)
val_len = len(dataset) - train_len
print(val_len)
train_dataset, val_dataset = torch.utils.data.random_split(
dataset, [train_len, val_len]
)
# Create the training and validation dataloaders
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=16, shuffle=True)
val_loader = torch.utils.data.DataLoader(val_dataset, batch_size=16, shuffle=False)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=16, shuffle=False)
# Define the loss function
criterion = nn.CrossEntropyLoss()
model = models.mobilenet_v3_large(
pretrained=True, weights="MobileNet_V3_Large_Weights.IMAGENET1K_V2", dropout=0.3
)
model.eval()
# Define the optimizer as Adam with weight decay correction
optimizer = optim.Adam(model.parameters(), lr=0.0001, weight_decay=0.0001)
# scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)
train_loss = []
train_accuracy = []
val_loss = []
val_accuracy = []
# Define the early stopping criteria
patience = 20
min_delta = 0.001
best_val_loss = float("inf")
counter = 0
def train_epoch(epoch):
start_time = time.time()
# Train the model for one epoch
for i, (images, labels) in enumerate(train_loader):
# Forward pass
outputs = model(images)
# Calculate the loss
loss = criterion(outputs, labels)
# Backward pass
optimizer.zero_grad()
loss.backward()
# Update the parameters
optimizer.step()
# Calculate the accuracy
correct = (outputs.argmax(1) == labels).sum()
accuracy = correct / len(labels)
# Add the loss and accuracy to the lists
train_loss.append(loss.item())
train_accuracy.append(accuracy)
end_time = time.time()
return end_time - start_time
def test_accuracy(model, test_loader):
model.eval()
correct = 0
total = 0
with torch.no_grad():
for images, labels in test_loader:
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
accuracy = correct / total
return accuracy
for epoch in range(60):
# Train the model
train_time = train_epoch(epoch)
# Validate the model
with torch.no_grad():
total_val_loss = 0
total_val_accuracy = 0
total_samples = 0
for images, labels in val_loader:
outputs = model(images)
loss = criterion(outputs, labels)
correct = (outputs.argmax(1) == labels).sum()
accuracy = correct / len(labels)
total_val_loss += loss.item() * len(labels)
total_val_accuracy += accuracy.item() * len(labels)
total_samples += len(labels)
val_loss.append(total_val_loss / total_samples)
val_accuracy.append(total_val_accuracy / total_samples)
# if val_loss[-1] < best_val_loss - min_delta:
# best_val_loss = val_loss[-1]
# counter = 0
# # Save the model checkpoint
# torch.save(model.state_dict(), "model4.pth")
# else:
# counter += 1
# if counter >= patience:
# print("Early stopping")
# break
# Print the loss and accuracy
print(
f"Epoch {epoch + 1}: Train Loss {train_loss[-1]}, Train Accuracy {train_accuracy[-1]}, Val Loss {val_loss[-1]}, Val Accuracy {val_accuracy[-1]}, Train Time {train_time}"
)
test_acc = test_accuracy(model, test_loader)
print(f"Test Accuracy: {test_acc}")
# Save the model
torch.save(model.state_dict(), "model4.pth")
```
|
How can I prevent mobilenetv3 from overfitting with less data?
|
CC BY-SA 4.0
| null |
2023-05-17T06:49:09.367
|
2023-05-17T06:49:09.367
| null | null |
138954
|
[
"deep-learning",
"image-classification",
"pytorch",
"computer-vision"
] |
121590
|
2
| null |
121586
|
2
| null |
I think the Open Data Science Conference [ODSC](https://odsc.com/) is what you are looking for - industry leaders present some tools that they use in their work. The ones you've listed are research oriented, the material presented there comes mostly from research (R&D) departments from companies.
There are many other conferences as well, but IMO they are technology specific, e.g., like the Apache Flink Forward [conference](https://www.flink-forward.org/seattle-2023). You can search for the technology you are interested in and might find some events.
Also, the Journal of Open Source Software [JOSS](https://joss.theoj.org/) might be of interest.
| null |
CC BY-SA 4.0
| null |
2023-05-17T08:47:25.537
|
2023-05-17T08:47:25.537
| null | null |
142205
| null |
121591
|
2
| null |
110209
|
0
| null |
This [paper](https://www.researchgate.net/publication/354984828_Early_intermediate_and_late_fusion_strategies_for_robust_deep_learning-based_multimodal_action_recognition) talks about Early, Middle, and Late Fusion and compares them. Could be helpful to you.
| null |
CC BY-SA 4.0
| null |
2023-05-17T09:24:29.223
|
2023-05-17T09:24:29.223
| null | null |
149946
| null |
121592
|
2
| null |
121586
|
2
| null |
If you're looking for conferences that focus on applied data science and have a high ranking, there are several options you can consider. While it's true that some conferences may have a more theoretical emphasis, there are also reputable conferences that highlight practical and applied aspects of data science. Here are a few suggestions:
- ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD): KDD is one of the premier conferences in data mining and knowledge discovery. It covers a wide range of topics including applied data science, machine learning, data mining, and big data analytics.
- IEEE International Conference on Data Mining (ICDM): ICDM is another top conference in the field of data mining. It brings together researchers and practitioners to discuss the latest advancements in data mining and its applications.
- International Conference on Machine Learning (ICML): While ICML does have a theoretical focus, it also accepts and features applied data science papers. It is a leading conference in the machine learning community and covers a broad range of topics.
- International Joint Conference on Artificial Intelligence (IJCAI): IJCAI is a prestigious conference in the field of artificial intelligence. While it does include theoretical research, it also accepts and showcases applied data science papers.
- International Conference on Data Science and Advanced Analytics (DSAA): DSAA focuses specifically on data science and advanced analytics. It welcomes submissions related to practical applications, data-driven solutions, and real-world case studies.
These conferences are known for their rigorous review process and attract top researchers and practitioners in the field. Keep in mind that acceptance rates for these conferences can be highly competitive, so ensure that your work aligns well with the conference's scope and requirements.
Additionally, you can also explore domain-specific conferences in areas such as healthcare, finance, or industry-specific data science conferences. These conferences often highlight applied research and real-world applications within their respective domains.
Remember to check the websites of these conferences for the most up-to-date information on submission deadlines, conference dates, and paper requirements.
| null |
CC BY-SA 4.0
| null |
2023-05-17T09:32:49.577
|
2023-05-17T09:32:49.577
| null | null |
149947
| null |
121593
|
1
| null | null |
0
|
19
|
I thought that an LSTM could accept sequences of any length as input, as long as the shape of each time step is fixed, but I encountered some anomalous behavior.
The following code gives the error that I expected, since the input shape is different from the one defined in the LSTM model:
```
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, LSTM
model = Sequential()
model.add(LSTM(50, input_shape=(None, 15), return_sequences=True))
model.add(Dense(1, activation='linear'))
N_start = 16
inputs = np.zeros((1000, 50, N_start))
model.predict(inputs)
```
and the error is:
`ValueError: Input 0 of layer "sequential" is incompatible with the layer: expected shape=(None, None, 15), found shape=(None, 50, 16)`
But then, if I use the following code, the model works even at different shapes:
```
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, LSTM
model = Sequential()
model.add(LSTM(50, input_shape=(None, 15), return_sequences=True))
model.add(Dense(1, activation='linear'))
N_start = 14
inputs = np.zeros((1000, 50, N_start))
add = np.ones((1000, 50, 1))
for i in range(10):
inputs = np.concatenate((inputs,add), axis = 2)
print(np.shape(inputs))
model.predict(inputs)
```
The output is
```
(1000, 50, 15)
2023-05-17 10:46:19.392818: I tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:428] Loaded cuDNN version 8401
32/32 [==============================] - 2s 6ms/step
(1000, 50, 16)
32/32 [==============================] - 0s 6ms/step
(1000, 50, 17)
32/32 [==============================] - 0s 6ms/step
```
I can't understand why it works for inputs with last dimension different from 15.
|
LSTM can accept inputs of different shapes in some cases
|
CC BY-SA 4.0
| null |
2023-05-17T13:05:27.487
|
2023-05-17T13:05:27.487
| null | null |
142938
|
[
"machine-learning",
"neural-network",
"lstm"
] |
121594
|
2
| null |
121477
|
1
| null |
I worked on the same dataset some days ago.
You can try:
- use the same data length as the video trials (1 minute per trial)
- use the vertical EOG channels, or the Fp1 and Fp2 EEG channels
- use as threshold: (max(data_filtered) - min(data_filtered)) / 2, where data_filtered is the signal band-pass filtered between 1 and 10 Hz (a rough sketch is shown below)
Bye
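For reference, here is a minimal sketch of the band-pass filter and threshold step (my own illustration; it assumes a 1-D NumPy array `signal` and a sampling rate `fs` in Hz):
```
import numpy as np
from scipy.signal import butter, filtfilt

def blink_threshold(signal, fs):
    # 4th-order Butterworth band-pass between 1 and 10 Hz
    b, a = butter(N=4, Wn=[1, 10], btype="bandpass", fs=fs)
    data_filtered = filtfilt(b, a, signal)
    # Threshold as suggested above: half of the peak-to-peak amplitude
    thresh = (np.max(data_filtered) - np.min(data_filtered)) / 2
    return data_filtered, thresh
```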
| null |
CC BY-SA 4.0
| null |
2023-05-17T16:37:50.667
|
2023-05-17T17:26:03.333
|
2023-05-17T17:26:03.333
|
149959
|
149959
| null |
121595
|
1
| null | null |
1
|
47
|
## Initial Information
I built a Neural Network Model (Logistic Regression) to classify Lung Cancer based on the patient's (user) symptoms
My dataset is kind of small (only about 276 samples).
Here is the illustration for my dataset:
```
data.head()
```
|GENDER |AGE |SMOKING |YELLOW_FINGERS |ANXIETY |PEER_PRESSURE |CHRONIC DISEASE |FATIGUE |ALLERGY |WHEEZING |ALCOHOL CONSUMING |COUGHING |SHORTNESS OF BREATH |SWALLOWING DIFFICULTY |CHEST PAIN |LUNG_CANCER |
|------|---|-------|--------------|-------|-------------|---------------|-------|-------|--------|-----------------|--------|-------------------|---------------------|----------|-----------|
|M |69 |1 |2 |2 |1 |1 |2 |1 |2 |2 |2 |2 |2 |2 |YES |
|M |74 |2 |1 |1 |1 |2 |2 |2 |1 |1 |1 |2 |2 |2 |YES |
|F |59 |1 |1 |1 |2 |1 |2 |1 |2 |1 |2 |2 |1 |2 |NO |
|M |63 |2 |2 |2 |1 |1 |1 |1 |1 |2 |1 |1 |2 |2 |NO |
Here's how I preprocessed the dataset:
- I dropped the duplicate rows
- I encoded the [GENDER] and [LUNG_CANCER] values
- I changed the 2/1 values to 1/0
- I scaled the AGE feature using StandardScaler()
- I resampled the training set using RandomOverSampler().fit_resample
Here's my Neural Network Model:
```
model = Sequential(
[
Dense(3, activation = 'sigmoid', input_shape=[15]),
Dense(1, activation = 'sigmoid'),
])
model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
metrics=['accuracy'])
history = model.fit(X_train, y_train,
epochs=50, batch_size=16,
validation_split=0.2,
shuffle=True)
```
Here's the Training Result:
[](https://i.stack.imgur.com/90DkL.png)
[](https://i.stack.imgur.com/IbZm4.png)
Here's the test result:
```
test_loss, test_acc = model.evaluate(X_test, y_test)
print("Test loss:", test_loss)
print("Test accuracy:", test_acc)
3/3 [==============================] - 0s 4ms/step - loss: 0.1746 - accuracy: 0.9420
Test loss: 0.17462675273418427 Test accuracy: 0.9420289993286133
```
Here's the confusion matrix and classification report:
[](https://i.stack.imgur.com/XRVGF.png)
```
precision recall f1-score support
0 0.98 0.95 0.97 60
1 0.73 0.89 0.80 9
accuracy 0.94 69
macro avg 0.86 0.92 0.88 69
weighted avg 0.95 0.94 0.94 69
```
My QUESTION:
- Is my model result (accuracy) good enough considering I want to build a Cancer Detection app using this model?
- If my model result (accuracy) is not good enough, how could I improve this model? What parameter should I tweak or maybe should I reduce or add the Neural layer?
Note: Please enlighten me, I'm kind of new to Machine Learning :)
|
What should I Improve from my Neural Network Model (Logistic Regression)
|
CC BY-SA 4.0
| null |
2023-05-17T18:01:39.213
|
2023-05-18T06:49:50.167
|
2023-05-17T18:07:34.120
|
149960
|
149960
|
[
"machine-learning",
"neural-network",
"keras",
"tensorflow",
"logistic-regression"
] |
121596
|
1
| null | null |
0
|
42
|
Sorry if this is the wrong SE, but in my mind it made the most sense to ask this here.
My question is related to specifically collecting information about a target demographic, not individuals which is obviously unethical.
For example, say that you’re starting a business selling brown leather shoes, and you want to know what kind of demographic likes brown leather shoes, and you find some data leak that (for some reason) has a bunch of descriptors for people who buy brown leather shoes (like age, how much they paid for their shoes, their general location, etc.).
Would collecting that data in aggregate to inform a predictive model be unethical, since no one individual’s privacy is violated in your usage of the data?
|
Is it unethical to gather data from data leaks about demographics?
|
CC BY-SA 4.0
| null |
2023-05-17T18:44:58.497
|
2023-05-17T21:14:57.610
| null | null |
145929
|
[
"data-leakage",
"privacy",
"ethical-ai"
] |
121598
|
2
| null |
121596
|
3
| null |
Even if no personally identifiable information is revealed about individuals, collecting data from data leaks can still raise ethical and legal concerns.
This is because such data is often sensitive information about a specific group of people. Using this data without their knowledge or consent can lead to potential harm, especially if it is used to inform a predictive model that could potentially be used to target this group of people for marketing or other purposes. It is important to consider the ethical implications of any data collection and use, and to ensure that individuals' privacy and consent are respected.
Additionally, the laws and regulations surrounding the use of data obtained without consent vary by country. In the European Union, the General Data Protection Regulation (GDPR) requires explicit consent for the processing of personal data. In the United States, there are various federal and state laws that govern the use of personal data, such as the California Consumer Privacy Act (CCPA) and the Health Insurance Portability and Accountability Act (HIPAA). Other countries such as Canada, Australia, and Japan also have their own privacy laws. It is important to research and understand the laws and regulations in your jurisdiction before collecting and using any data.
While most large companies have policies in place to ensure ethical data use, it's not always the case that they follow them. Some companies may prioritize profits over ethics and may be willing to take risks when it comes to data collection. However, there are consequences for companies that violate data privacy laws, including fines and damage to their reputation. It's important for companies to prioritize ethical data use and for individuals to hold them accountable when they don't.
It is always best to obtain data through ethical means such as opt-in surveys or public data sources. It's also important to be transparent about your use of the data and to obtain informed consent from individuals if possible.
I hope that provides some more insight to your question!
| null |
CC BY-SA 4.0
| null |
2023-05-17T21:14:57.610
|
2023-05-17T21:14:57.610
| null | null |
149968
| null |
121599
|
1
| null | null |
1
|
38
|
I'm getting different results between R and Python for a classification problem using ordinal categorical features.
I would like to ask you what do you think is the best classification algorithm in Python to use for ordinal categorical data?
In R this seems to be a machine learning algorithm based on "ctree". In Python, DecisionTreeClassifier, for example, is based on "CART". I didn't find any "ctree"-based machine learning algorithm for Python.
My dataset is survey results. All my features have values from 1 to 5 (bad to good), and the target variable is the NPS which takes values from 1, 2 and 3 (detractor, neutral and promoter).
Thank you.
|
Best classifier for ordinal categorical variables
|
CC BY-SA 4.0
| null |
2023-05-17T22:01:35.913
|
2023-05-18T22:47:38.060
|
2023-05-17T22:33:36.080
|
126156
|
126156
|
[
"machine-learning",
"python",
"scikit-learn"
] |
121600
|
2
| null |
120649
|
0
| null |
I've prepared a code snippet that could potentially help.
The provided code introduces a GymWrapper class, specifically designed to adapt any (gym) environment for seamless integration with TF-Agents. By using this GymWrapper class, you can easily bridge the gap between anytrading and TF-Agents.
It ensures that all the required methods and attributes are properly implemented, enabling you to use TF-Agents in your anytrading environment.
Take a look at the code snippet below:
```
from tf_agents.environments.tf_py_environment import TFPyEnvironment
from tf_agents.trajectories import time_step as ts
from tf_agents.environments import py_environment
from tf_agents.specs import array_spec
import gym  # needed below for gym.spaces and gym.make
import gym_anytrading.datasets as DS
class GymWrapper(py_environment.PyEnvironment):
def __init__(self, gym_env):
super(GymWrapper, self).__init__()
self._gym_env = gym_env
self._action_spec = self._get_action_spec()
self._observation_spec = self._get_observation_spec()
def _get_action_spec(self):
action_space = self._gym_env.action_space
if isinstance(action_space, gym.spaces.Box):
return array_spec.BoundedArraySpec(
shape=action_space.shape,
dtype=action_space.dtype,
minimum=action_space.low,
maximum=action_space.high
)
elif isinstance(action_space, gym.spaces.Discrete):
return array_spec.BoundedArraySpec(
shape=(),
dtype=action_space.dtype,
minimum=0,
maximum=action_space.n-1
)
else:
raise ValueError(f"Unsupported action space type: {type(action_space)}")
def _get_observation_spec(self):
observation_space = self._gym_env.observation_space
return array_spec.ArraySpec(
shape=observation_space.shape,
dtype=observation_space.dtype
)
def action_spec(self):
return self._action_spec
def observation_spec(self):
return self._observation_spec
def _reset(self):
return ts.restart(self._gym_env.reset())
def _step(self, action):
obs, reward, done, info = self._gym_env.step(action)
if done:
return ts.termination(obs, reward)
else:
return ts.transition(obs, reward)
env = gym.make('forex-v0', df=DS.FOREX_EURUSD_1H_ASK, window_size=10, frame_bound=(10, len(DS.FOREX_EURUSD_1H_ASK) - 1), unit_side='right')
train_py_env = GymWrapper(env)
eval_py_env = GymWrapper(env)
train_env = TFPyEnvironment(train_py_env)
eval_env = TFPyEnvironment(eval_py_env)
```
Notes:
- Frame bound: when using the frame_bound parameter, make sure to set it as frame_bound=(10, len(DS.FOREX_EURUSD_1H_ASK) - 1). This ensures that the frame bounds align with the size of your anytrading dataset; deviating from this format might result in errors (when I used frame_bound=(10, 300) I received an error).
- Resetting the environment: it's important to call train_env.reset() at the beginning or end of your training loop to properly initialize the environment's state. This ensures that each episode starts with a clean state.
| null |
CC BY-SA 4.0
| null |
2023-05-18T04:35:41.270
|
2023-05-18T04:35:41.270
| null | null |
149974
| null |
121601
|
2
| null |
121595
|
1
| null |
- It depends on the application. This is more of a business question than a technical one. Also, are you sure accuracy is a good metric for your application?
- Get more data, and try more different models; consider ensembling (a minimal sketch is shown below).

Do not spend more time training or add more layers; the model is already overfitting, so that won't do you any good.
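Here is a minimal sketch of the ensembling idea with scikit-learn's soft-voting ensemble (my own illustration; `X_train`, `y_train`, `X_test`, `y_test` are assumed to come from your own split and preprocessing):
```
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svc", SVC(probability=True, random_state=0)),
    ],
    voting="soft",  # average the predicted probabilities of the base models
)
ensemble.fit(X_train, y_train)
print(ensemble.score(X_test, y_test))
```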
| null |
CC BY-SA 4.0
| null |
2023-05-18T06:44:45.590
|
2023-05-18T06:49:50.167
|
2023-05-18T06:49:50.167
|
113067
|
113067
| null |
121602
|
1
| null | null |
0
|
21
|
I want to customize the vanilla loss function used by scikit-learn classifiers, like the Logistic Regression classifier, etc.
For example, if the vanilla empirical risk minimization formulation is as follows,
[](https://i.stack.imgur.com/qZPfq.jpg)
how can I change it to something like this?
[](https://i.stack.imgur.com/kr2qR.jpg)
Thanks.
|
How complex can I make a classifier's loss function in Scikit-Learn?
|
CC BY-SA 4.0
| null |
2023-05-18T08:21:39.227
|
2023-05-18T09:20:20.787
| null | null |
149979
|
[
"machine-learning",
"machine-learning-model",
"loss-function"
] |
121603
|
2
| null |
121602
|
1
| null |
Looking at this GitHub [issue](https://github.com/scikit-learn/scikit-learn/discussions/21614) I am afraid that's not possible.
In my experience working with the library, I have never worked on a use case that required such modification, I mostly change the values for the arguments in the constructor of the estimators.
I think that if you want to customise an algorithm (e.g., logistic regression) more, then you'd have to implement it yourself. On the other hand, I've found this StackOverflow [question](https://stackoverflow.com/questions/54267745/implementing-custom-loss-function-in-scikit-learn) that specifies how you can implement a custom loss function for `sklearn` estimators, so maybe there's a workaround for it, even though it seems that the library was not designed to be used like that.
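If you do go down the "implement it yourself" route, a minimal sketch (my own illustration, not a scikit-learn feature) could look like the following, with the extra term as a placeholder for your custom penalty:
```
import numpy as np
from scipy.optimize import minimize

def fit_custom_logreg(X, y, penalty_weight=1.0):
    """Logistic regression with a custom penalty; y is expected to be in {0, 1}."""
    n_features = X.shape[1]

    def loss(w):
        margin = (2 * y - 1) * (X @ w)
        log_loss = np.mean(np.logaddexp(0.0, -margin))    # log(1 + exp(-margin))
        custom_penalty = penalty_weight * np.sum(w ** 2)  # replace with your own term
        return log_loss + custom_penalty

    w0 = np.zeros(n_features)
    return minimize(loss, w0, method="L-BFGS-B").x
```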
| null |
CC BY-SA 4.0
| null |
2023-05-18T09:20:20.787
|
2023-05-18T09:20:20.787
| null | null |
142205
| null |
121604
|
2
| null |
121550
|
0
| null |
```
def get_flops(model):
if isinstance(model,(keras.engine.functional.Functional,keras.engine.training.Model)):
run_meta=tf.compat.v1.RunMetadata()
opts=tf.compat.v1.profiler.ProfileOptionBuilder.float_operation()
from tensorflow.python.framework.convert_to_constants import (convert_variables_to_constants_v2_as_graph)
inputs=[tf.TensorSpec([1]+inp.shape[1:],inp.dtype) for inp in model.inputs]
real_model=tf.function(model).get_concrete_function(inputs)
frozen_func,_=convert_variables_to_constants_v2_as_graph(real_model)
flops=tf.compat.v1.profiler.profile(graph=frozen_func.graph,run_meta=run_meta,cmd="scope",options=opts)
return flops.total_float_ops
```
from [https://github.com/tokusumi/keras-flops/blob/master/keras_flops/flops_calculation.py](https://github.com/tokusumi/keras-flops/blob/master/keras_flops/flops_calculation.py)
| null |
CC BY-SA 4.0
| null |
2023-05-18T09:53:18.920
|
2023-05-18T09:53:18.920
| null | null |
149982
| null |
121605
|
1
| null | null |
1
|
30
|
- I have been reading on the capabilities of LLM based conversational agents and have been wondering if there is even possibility for any further enhancement with the addition of NER to such system.
- If so, in which case could a conversational agents powered by an LLM like say Dolly 2.0 be enhanced by NER?
|
LLM powered chat bot enhanced by NER
|
CC BY-SA 4.0
| null |
2023-05-18T10:26:08.927
|
2023-05-19T09:28:34.350
| null | null |
139208
|
[
"nlp",
"named-entity-recognition",
"language-model"
] |
121606
|
1
| null | null |
0
|
19
|
Consider a dataset with 5 numerical features: A, B, C, D and E. Can we train a deep learning / machine learning model which can learn the constraints between (A, B) and (C, D, E)? I.e., I want to learn a function f such that (A, B) = f(C, D, E), using the data samples in the dataset.
|
Domain Constraints
|
CC BY-SA 4.0
| null |
2023-05-18T10:34:12.960
|
2023-05-19T09:10:52.197
| null | null |
149983
|
[
"dataset"
] |
121607
|
1
| null | null |
0
|
7
|
I want to build an ML model to detect the number of standing bowling pins, here is an example of an image from the dataset:
[](https://i.stack.imgur.com/BrAaI.jpg)
Do you think it's a hard task? (Considering that bowling pins can be hidden by others, and that some half-fallen pins could be mistaken for a standing pin.)
I have looked at models like Faster R-CNN or YOLO, I mainly need an opinion about the difficulty of this task.
Thank you
|
Best model / approach to detect the number of standing bowling pins?
|
CC BY-SA 4.0
| null |
2023-05-18T12:10:10.437
|
2023-05-18T12:10:10.437
| null | null |
138158
|
[
"computer-vision"
] |
121608
|
1
| null | null |
1
|
18
|
I'm working through a paper titled "[Understanding Black-box Predictions via Influence Functions](http://proceedings.mlr.press/v70/koh17a)" where they introduce the notion of influence functions from robust statistics to approximate the change in parameters when removing a training example.
They start off by establishing that the parameters that minimize the loss is given by
$$
\hat \theta = \arg \min_{\theta \in \Theta} \frac{1}{n} \sum_{i=1}^n L(z_i, \theta)
$$
Now they state that we would like to know what happens to the parameters when we remove a single training example $z = (x, y)$, which would be $\hat{\theta}_{-z} - \hat\theta$ where $\hat{\theta}_{-z}$ is the parameters obtained from training on every example except $z$.
$$
\hat \theta_{-z} = \arg \min_{\theta \in \Theta} \sum_{z_i \neq z} L(z_i, \theta)
$$
What they introduce now is that the notion of influence functions can be used to upweight the training example $z$ by some small $\epsilon$ which approximates the effect of removing it which results in new parameters
$$
\hat{\theta}_{\epsilon, z} = \arg \min_{\theta \in \Theta} \frac{1}{n} \sum_{i=1}^n L(z_i, \theta) + \epsilon L(z, \theta)
$$
Here's where I'm unsure of what happens exactly. They write:
>
A classic result (Cook & Weisberg, 1982) tells us that the influence of upweighting $z$ on the parameters $\hat{\theta}$ is given by $\frac{d\hat\theta_{\epsilon, z}}{d\epsilon} \bigg|_{\epsilon=0} = - \mathbf{H}_{\hat \theta}^{-1} \nabla_{\theta} L(z, \hat \theta)$
What is this classic result? I have been trying to go through their [reference](https://scholar.googleusercontent.com/scholar?q=cache:-ANkB7mkHbAJ:scholar.google.com/%20cook%20and%20weisberg%201982&hl=en&as_sdt=0,5) but can't seem to find anything that I can relate to the paper.
My question is then where this classic result is stated and how it is arrived at? Additionally, what are the steps that are taken to arrive at $- \mathbf{H}_{\hat \theta}^{-1} \nabla_{\theta} L(z, \hat \theta)$ in their expression?
Thanks in advance!
|
Influence functions on neural networks: Help with understanding of result and derivation
|
CC BY-SA 4.0
| null |
2023-05-18T12:20:29.917
|
2023-05-18T12:20:29.917
| null | null |
149985
|
[
"deep-learning",
"statistics",
"mathematics",
"research"
] |
121609
|
1
| null | null |
0
|
3
|
I am trying to understand the learning target of DDPM. Trying to understand $D_{KL}(q(x_{1:T}|x_0)||p_\theta(x_{1:T}|x_0)$ in following line.
$$
-\log p_\theta(x_0) <= -\log p_\theta(x_0) + D_{KL}(q(x_{1:T}|x_0)||p_\theta(x_{1:T}|x_0))
$$
$x_0$ is the observed variable, and $x_T$ is a variable from the latent space $\sim N(0,1)$
Should $x_t$ ($t<T$) be seen as latent variables as well? But they are not following standard normal distribution.
Our target is to generate new image using $p(x|z)$, why does it learn $p(z|x)$ instead? $z$ represents latent variable and $x$ represents observable variable here.
Details: [https://lilianweng.github.io/posts/2021-07-11-diffusion-models/#nice](https://lilianweng.github.io/posts/2021-07-11-diffusion-models/#nice)
|
Learning target of Denoising Diffusion Probabilistic Model
|
CC BY-SA 4.0
| null |
2023-05-18T12:31:13.423
|
2023-05-18T13:12:26.103
|
2023-05-18T13:12:26.103
|
139169
|
139169
|
[
"loss-function",
"diffusion"
] |
121610
|
1
| null | null |
0
|
26
|
There is this paper that I have been trying to reproduce ([https://arxiv.org/pdf/2205.11482.pdf](https://arxiv.org/pdf/2205.11482.pdf)) as part of my master's thesis. It uses T5 to learn facts from the training set where either the object or the subject is masked with a sentinel token. An example of a training sample (called abstracts) can be seen here:
```
Input: "Animal Farm is an allegorical and dystopian novella by <extra_id_0>, first published in England on 17 August 1945."
Target: "<extra_id_0> George Orwell"
```
The entire dataset can be found here [https://huggingface.co/datasets/ekinakyurek/ftrace](https://huggingface.co/datasets/ekinakyurek/ftrace)
The thing I'm wondering is that in the docs, the use of sentinel tokens are as specified:
```
Input: "The <extra_id_0> walks in <extra_id_1> park"
Target: "<extra_id_0> cute dog <extra_id_1> the <extra_id_2>"
```
i.e. a sort of inverse of each other's masking.
You will notice that this is not the case for the example from the dataset that I'm working on. If I'm right the target should be `"<extra_id_0> George Orwell <extra_id_1>"` since the input mask is in the middle of the abstract.
It is far from the only case as you will see if you explore the dataset.
This has left me to wonder how this "not-so-perfect" placement and formatting of sentinel tokens might affect training of T5? Should it be considered a serious data-quality issue or does its implications sort of go away with training on a lot of data?
Thanks for reading through my question! Hope that someone will be able to clarify my doubts:)
|
Importance of sentinel token placement in T5
|
CC BY-SA 4.0
| null |
2023-05-18T13:17:46.877
|
2023-05-21T11:40:49.760
|
2023-05-21T11:40:49.760
|
149985
|
149985
|
[
"deep-learning",
"nlp",
"data"
] |
121611
|
2
| null |
121599
|
2
| null |
`ctree` or a conditional tree is a non-parametric decision tree. [DecisionTreeClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html) in Python is the closest, but there isn't a direct equivalent of a conditional tree in Python's scikit-learn, which appears to be the context of the question. It may be possible to modify scikit-learn's decision tree to be a conditional tree - in theory a conditional tree is a subset of a decision tree - but I don't know whether this has been achieved.
Purely personally I wouldn't use scikit-learn's decision tree because it is prone to overtraining. If I received a manuscript using only a decision tree, I would return it requesting alternative models to at least supplement the decision tree analysis, and request checks for overtraining. Thus in this regard it doesn't have a good reputation. A random forest model would be used instead. `ctree` is cool.
So to answer your question directly, I would not replace a conditional tree with scikit-learn's decision tree. If you do, then you need to check for overfitting (IMO). However, there has been talk of implementing a conditional tree in scikit-learn.
A random forest averages many trees, thus I personally would see it as the preferred Python model for replacing a `ctree`, but it's not the same thing.
---
What would I do? Well if you are happy with `ctree` and `cforest` then I would use sci-kit learn's random forest.
Beyond that I don't know enough about your data to make a call (I work in evolution). Random forests and conditional trees are good with sparse data, so this could be relevant if there are lots of questions that have been skipped.
Random forests don't deal with imbalance very well. I don't know your targets, but I could imagine the outcomes could be imbalanced because, at a wild guess, there are more neutrals than any other category.
xgboost, or extreme gradient boosting, is trendy. It will replace random forests (IMO), especially to overcome imbalance, and it copes with high variance. On the surface it's just as easy to use. The parameterisation can be computationally demanding, but there are short-cuts, and there's an interaction analysis for examining the features of the survey. This would be particularly useful if you want to improve your survey. However, it's more complex to grasp all the caveats.
I know about surveys in clinical data, and getting a good survey is not trivial. The survey and even the investigator can generate bias, but I suspect it will also depend on the morbidities of the patients. In new clinical surveys a model survey is used that is known to be robust. It's definitely not an easy area of investigation.
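For completeness, a minimal sketch of the random-forest route on 1-5 ordinal survey answers with a 3-class NPS target (synthetic data, purely illustrative):
```
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(500, 8))   # ordinal features with values 1..5
y = rng.integers(1, 4, size=500)        # target: 1, 2, 3 (detractor/neutral/promoter)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Tree ensembles only use the ordering of the values, so the 1..5 codes can be fed directly
clf = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=0)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
```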
| null |
CC BY-SA 4.0
| null |
2023-05-18T14:32:56.500
|
2023-05-18T22:47:38.060
|
2023-05-18T22:47:38.060
|
67203
|
67203
| null |
121612
|
1
| null | null |
0
|
12
|
take this example of a small dataset
[](https://i.stack.imgur.com/hwQci.png)
Here there was a question: instead of initializing the weight vector as zeros, what if we initialize it to [1000, -1000] (there is no offset, i.e. the classifiers pass through the origin)? Will the number of mistakes that the perceptron algorithm makes until it converges increase or decrease?
Answer: it would significantly increase the number of mistakes.
Intuition that I have:
The learning rate parameter affects only the scale of the weight vector, not the direction, so even if we take theta = [1000, -1000] we cannot determine whether it makes fewer mistakes than [0, 0] or not.
|
What happens when the weights in the perceptron algorithm are initialized with random values that are very distant from the correct values?
|
CC BY-SA 4.0
| null |
2023-05-18T14:57:34.173
|
2023-05-23T10:13:25.897
|
2023-05-23T10:13:25.897
|
144880
|
144880
|
[
"machine-learning",
"neural-network",
"perceptron"
] |
121614
|
2
| null |
121583
|
1
| null |
When using AUC on strongly imbalanced datasets, if you care mainly about the accuracy on the minority class, [it's much more robust to use the AUC of the Precision-Recall (PR) curve than on the ROC curve](https://machinelearningmastery.com/roc-curves-and-precision-recall-curves-for-imbalanced-classification/).
>
One common scenario is a highly imbalanced dataset where the fraction of positive class, which we want to find (like in fraud detection), is small.
Here it shows how to choose the PR curve in the [TF document](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/AUC)
```
metrics=tf.keras.metrics.AUC(curve='PR')
```
| null |
CC BY-SA 4.0
| null |
2023-05-18T15:39:49.207
|
2023-05-18T15:39:49.207
| null | null |
7950
| null |
121615
|
1
| null | null |
0
|
12
|
I'm working on an NLP school project in which I have to build a model that takes a text and a claim and outputs whether the text supports or opposes the claim.
The data that I have is quite small, and it contains paragraphs classified as supporting or opposing for only 10 claims.
My problem is that I'm used to working on text processing and classification, but introducing another variable like the claim into the classification is confusing for me. I don't know how I am supposed to handle this issue: should I work only on the 10 claims that I have, knowing that the data is really small, or is it possible to build a model that is more general?
|
Argument classification based on a given claim
|
CC BY-SA 4.0
| null |
2023-05-18T16:05:09.190
|
2023-05-18T16:05:09.190
| null | null |
119825
|
[
"classification",
"nlp",
"text-mining",
"text-classification",
"text"
] |
121618
|
1
| null | null |
0
|
41
|
[](https://i.stack.imgur.com/EG4Iq.png)
When I try to run sns.heatmap(df.corr(), annot=True) in my Jupyter notebook, this error occurs. I cannot understand this problem; please help me.
|
corr() is giving an error. Please help me out with this problem and tell me what this error is about
|
CC BY-SA 4.0
| null |
2023-05-18T21:14:43.420
|
2023-05-18T21:49:06.290
| null | null |
149995
|
[
"machine-learning",
"pandas",
"data-science-model",
"correlation"
] |
121619
|
1
| null | null |
0
|
21
|
We are trying to execute a deep learning model on a Linux workstation that by all accounts has 2 NVIDIA GPUs installed. The model runs fine on our HPC cluster, but when we try to run it locally on the Linux workstation, we get an error initializing the model (see attached tl-error.png), which I think is caused by some incompatibility between the GPUs and PyTorch. We're wondering if anyone has run into this before and can help.
[](https://i.stack.imgur.com/3Mtaz.png)
This is what we've done:
Confirmed that GPUs are installed and got the driver / cuda version
(command: `nvidia-smi`):`NVIDIA-SMI 510.60.02 Driver Version: 510.60.02 CUDA Version: 11.6`
Based on the above, created a new conda environment (torch_env) with the recommended installation instructions from the pytorch website.
```
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.6 -c pytorch -c nvidia
```
However, when we activate the environment and run `torch.cuda.is_available()` and `torch.cuda.device_count()` we get False and 0, respectively.
|
Pytorch fails to detect workstation installed nvidia GPUs
|
CC BY-SA 4.0
| null |
2023-05-18T21:24:55.027
|
2023-05-18T21:24:55.027
| null | null |
147564
|
[
"pytorch",
"gpu",
"nvidia",
"cuda"
] |
121620
|
2
| null |
121618
|
0
| null |
For df.corr() you would have to pass floats/integers instead of strings.
You can check this answer on Stack Overflow as well - [https://stackoverflow.com/questions/51241575/calculate-correlation-between-columns-of-strings](https://stackoverflow.com/questions/51241575/calculate-correlation-between-columns-of-strings).
To convert the strings to integers/floats you can use feature extraction techniques used in NLP like CountVectorizer, TfidfVectorizer, or word embeddings. You can refer to this documentation - [https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html)
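As a simpler alternative to the vectorizers above (my own illustration), you can either restrict df.corr() to the numeric columns or integer-encode the categorical columns before correlating:
```
import pandas as pd

df = pd.DataFrame({
    "age": [23, 45, 31, 35],
    "salary": [40000, 85000, 56000, 60000],
    "city": ["NY", "LA", "NY", "SF"],   # string column that breaks df.corr()
})

# Option 1: compute the correlation matrix on numeric columns only
print(df.select_dtypes(include="number").corr())

# Option 2: encode the categorical column as integer codes first
df["city_code"] = df["city"].astype("category").cat.codes
print(df[["age", "salary", "city_code"]].corr())
```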
| null |
CC BY-SA 4.0
| null |
2023-05-18T21:49:06.290
|
2023-05-18T21:49:06.290
| null | null |
144743
| null |
121621
|
1
| null | null |
0
|
17
|
### Problem setting: MTS Classification with CNN architecture
I have a multivariate time series (MTS) dataset that contains 30 features. The goal is to solve a classification problem on this MTS dataset. It is important that I have as much explainability as possible, hence extracting features from the dataset and classifying on them is undesired.
Every MTS in the dataset has a different length. I zero-padded the MTS in the dataset to make them equal length (to length 131).
I use the CNN architecture that is described in the paper [XCM: An Explainable Convolutional Neural Network for Multivariate Time Series Classification](https://inria.hal.science/hal-03469487/document). I used the Python implementation in the [tsai](https://github.com/timeseriesAI/tsai/blob/main/tsai/models/XCM.py) package. I used per-channel standardization on my data.

### Problem: GRAD-CAM gives feature attribution to padding
Since this is a CNN architecture, I can use GRAD-CAM on the first Conv 1D layer after the input, indicated in red in the figure above. Visualising the result for a time series of length 27 (+ padding to 131) from the test set, I get the following result, where the y-axis has no meaning, of course, and the x-axis is the time:

The zero-padding receives a remarkable attribution from GRAD-CAM. I tried this for different test set instances, and the results are equal or sometimes even worse.
My question: How do I interpret this result? Does the classifier undesirably use the padding for its classification? Can I ignore the feature attribution to the padding and just focus on time stamps 0 to 27? How can I avoid this result?
---
### What I already considered
- During my search on the Internet, I found no way of making a CNN ignore padding during training or inference. Only RNNs seem to have this ability, but they suffer from poor explainability.
- The paper Effects of padding on LSTMs and CNNs only examines the effect on classification accuracy.
- The paper Time series classification for varying length series suggest uniform scaling as a pre-processing alternative. However, due to the nature of my time series and the goal of the explainability, this is not a solution for me. Other propositions of the paper are only applicable to univariate time series.
- The most related information I found is a question on this website (here) that encounters the same problem for images. There is no satisfactory answer.
- Applying GRAD-CAM on the 2d convolutional block suffers from the same problem.
- Applying GRAD-CAM on another convolutional MTSC algorithm, such as InceptionTime, suffers from the same problem.
|
How do I interpret GRAD-CAM's feature attribution to time series zero-padding in a CNN classifier?
|
CC BY-SA 4.0
| null |
2023-05-19T00:48:26.310
|
2023-05-19T02:08:37.860
|
2023-05-19T02:08:37.860
|
149998
|
149998
|
[
"deep-learning",
"time-series",
"convolutional-neural-network",
"explainable-ai"
] |
121622
|
1
| null | null |
3
|
178
|
I wish to perform sentence tokenization on text without punctuation; below is the code:
```
import nltk
def segment_sentences(text):
# Download the Punkt tokenizer if necessary
nltk.download('punkt')
# Tokenize the text into sentences
sentences = nltk.sent_tokenize(text)
return sentences
input_text = "hello how are you today i hope you're doing well have a great day"
sentences = segment_sentences(input_text)
# Print the segmented sentences
for sentence in sentences:
print(sentence)
```
Desired output
```
hello how are you today
i hope you're doing well
have a great day
```
But current output
```
hello how are you today i hope you're doing well have a great day
```
How should I address it?
|
Sentence tokenization for sentence without punctuation
|
CC BY-SA 4.0
| null |
2023-05-19T03:33:35.333
|
2023-05-19T06:14:45.407
| null | null |
7812
|
[
"python",
"nlp",
"python-3.x",
"nltk",
"spacy"
] |
121623
|
2
| null |
121622
|
4
| null |
You can apply a previous step to add punctuation and proper casing to the text, and then segment the sentences. For this, you may use [Re-punctuate](https://huggingface.co/SJ-Ray/Re-Punctuate). When applied to your text, Re-punctuate gives the following output:
```
Hello, how are you today? I hope you're doing well. Have a great day.
```
Then, if we apply your NLTK code to it, we obtain this output:
```
Hello, how are you today?
I hope you're doing well.
Have a great day.
```
| null |
CC BY-SA 4.0
| null |
2023-05-19T06:14:45.407
|
2023-05-19T06:14:45.407
| null | null |
14675
| null |
121625
|
2
| null |
63055
|
0
| null |
While it is true that we can compute the [sample](https://databasecamp.de/en/statistics/population-and-sample) [mean](https://databasecamp.de/en/statistics/expected-value) and [sample](https://databasecamp.de/en/statistics/population-and-sample) [standard deviation](https://databasecamp.de/en/statistics/standard-deviation) from a set of random variables, the Maximum Likelihood Estimation provides a formal framework for estimating the [population](https://databasecamp.de/en/statistics/population-and-sample) parameters based on observed data.
In most cases, we are more interested in what we can imply for the [population](https://databasecamp.de/en/statistics/population-and-sample) than what we know about the [sample](https://databasecamp.de/en/statistics/population-and-sample), this is why we use the estimation, since the [sample](https://databasecamp.de/en/statistics/population-and-sample) usually has some flaws in how it is created.
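As a small illustration (my own example): for normally distributed data, the MLE of the mean coincides with the sample mean, while the MLE of the variance uses the 1/n (biased) form rather than the 1/(n-1) sample variance:
```
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=1000)

mu_mle = np.mean(x)             # MLE of the mean = sample mean
var_mle = np.var(x, ddof=0)     # MLE of the variance (divides by n)
var_sample = np.var(x, ddof=1)  # unbiased sample variance (divides by n - 1)

print(mu_mle, var_mle, var_sample)
```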
| null |
CC BY-SA 4.0
| null |
2023-05-19T06:57:12.130
|
2023-05-19T06:57:12.130
| null | null |
130460
| null |
121626
|
2
| null |
85096
|
0
| null |
Shortly said, they differ in the value function they optimize:
- Q-learning estimates the optimal action-value function (Q-function) and directly learns the values associated with state-action pairs.
- G-learning estimates the optimal value function (V-function) and focuses on learning the values associated with states.
These [reinforcement learning](https://databasecamp.de/en/ml/reinforcement-learnings) algorithms are comparable in that they try to find an optimal policy by optimising their value functions. [Q-Learning](https://databasecamp.de/en/ml/q-learnings) is used when you have a discrete action space. This is why it includes the actions in its value function. G-Learning is more suitable for continuous action spaces.
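For concreteness, here is the standard tabular Q-learning update as a minimal sketch (my own illustration; `Q` is a dict keyed by (state, action) pairs and `actions` is the discrete action set):
```
def q_update(Q, state, action, reward, next_state, actions, alpha=0.1, gamma=0.99):
    # Q-learning: move Q(s, a) towards r + gamma * max_a' Q(s', a')
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    td_target = reward + gamma * best_next
    td_error = td_target - Q.get((state, action), 0.0)
    Q[(state, action)] = Q.get((state, action), 0.0) + alpha * td_error
    return Q
```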
| null |
CC BY-SA 4.0
| null |
2023-05-19T07:07:25.247
|
2023-05-19T07:07:25.247
| null | null |
130460
| null |
121627
|
1
| null | null |
0
|
33
|
I'm trying to find the best set of hyperparameters for my YOLOv8 model on my custom dataset with Ray Tune. I wanted to train the model with model.train() and return some of the evaluation metrics, such as mAP50 (for demonstration purposes, I write 'acc' in my code below). Whenever I call the model_train() function from tune.run(), model_train() returns from the model.train() call itself and never reaches the return {'acc': 0.8} statement. How can I save the model's metrics in a variable such as trained_model = model.train() instead of returning from the function automatically? Thank you so much.
```
from ultralytics import YOLO  # YOLOv8 is provided by the ultralytics package
import os
import ray
from ray import tune
from ray.tune.schedulers import ASHAScheduler
from ray.tune.suggest.hyperopt import HyperOptSearch
model = YOLO("yolov8n.pt")
data = './data.yaml'
def model_train(config):
trained_model = model.train( data=data,
batch=16,
epochs=2,
workers=8,
imgsz=640,
patience=1,
save=False,
)
return {'acc': 0.8}
ray.init()
# Define the search space for hyperparameters
search_space = {
"learning_rate": tune.loguniform(1e-4, 1e-1),
"batch_size": tune.choice([8, 16, 32]),
"num_layers": tune.choice([3, 4, 5]),
# Add other hyperparameters for YOLOv7
# ...
}
# Define the hyperparameter search algorithm
search_alg = HyperOptSearch(metric="acc", mode="max")
# Define the scheduler
scheduler = ASHAScheduler(max_t=10, grace_period=1)
# Perform hyperparameter tuning
analysis = tune.run(
model_train,
config=search_space,
num_samples=10, # Number of hyperparameter combinations to try
search_alg=search_alg,
scheduler=scheduler
)
# Get the best hyperparameters and evaluation metric
best_hyperparameters = analysis.get_best_config(metric="acc", mode="max")
best_metric = analysis.best_result["acc"]
print("Best hyperparameters:", best_hyperparameters)
print("Best evaluation metric:", best_metric)
# Clean up Ray resources
ray.shutdown()
```
|
Model.train( ) returns automatically in YOLO v8
|
CC BY-SA 4.0
| null |
2023-05-19T07:12:23.490
|
2023-05-19T07:12:23.490
| null | null |
137463
|
[
"object-detection",
"transfer-learning",
"yolo"
] |
121628
|
2
| null |
92389
|
0
| null |
The number of hidden layers does not have a direct effect on the performance of the activation function. However, with more hidden layers, the probability of the vanishing gradient problem increases, meaning that the gradient takes on very small values, making it hard for the [neural network](https://databasecamp.de/en/ml/artificial-neural-networks) to learn properly.
The sigmoid function is known to have a higher risk of the vanishing gradient in larger networks, which is why [ReLU](https://databasecamp.de/en/ml/relu-en) or Leaky ReLU are preferred in larger architectures. So the effect is more indirect.
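A small numerical illustration of this indirect effect (my own simplified example, ignoring the weights): the sigmoid derivative is at most 0.25, so a gradient pushed back through many sigmoid layers shrinks roughly geometrically with depth, while ReLU passes a gradient of 1 through its active units:
```
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

depth = 20
x = 0.5
grad_sigmoid = 1.0
for _ in range(depth):
    s = sigmoid(x)
    grad_sigmoid *= s * (1 - s)   # sigmoid'(x) = s * (1 - s) <= 0.25

grad_relu = 1.0 ** depth          # ReLU derivative is 1 for positive inputs

print(grad_sigmoid, grad_relu)    # the sigmoid gradient becomes vanishingly small
```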
| null |
CC BY-SA 4.0
| null |
2023-05-19T07:15:26.423
|
2023-05-19T07:15:26.423
| null | null |
130460
| null |
121629
|
1
| null | null |
0
|
25
|
I am facing an issue with my pretrained mobilenetv3 model: it is quite strange how the validation loss is behaving; it starts low but then goes up ridiculously high. I have normalized my images as well, and I am using TF with Keras.
```
25/25 [==============================] - 68s 2s/step - loss: 0.5515 - accuracy: 0.9213 - val_loss: 0.0361 - val_accuracy: 0.9898
Epoch 2/50
25/25 [==============================] - 57s 2s/step - loss: 0.0490 - accuracy: 0.9797 - val_loss: 0.2195 - val_accuracy: 0.9746
Epoch 3/50
25/25 [==============================] - 62s 2s/step - loss: 0.0315 - accuracy: 0.9898 - val_loss: 0.7404 - val_accuracy: 0.9898
Epoch 4/50
25/25 [==============================] - 57s 2s/step - loss: 0.0093 - accuracy: 0.9962 - val_loss: 1.6772 - val_accuracy: 0.9797
Epoch 5/50
25/25 [==============================] - 56s 2s/step - loss: 0.0124 - accuracy: 0.9962 - val_loss: 1.5263 - val_accuracy: 0.9898
Epoch 6/50
25/25 [==============================] - 57s 2s/step - loss: 0.0303 - accuracy: 0.9937 - val_loss: 24.9216 - val_accuracy: 0.9137
Epoch 7/50
25/25 [==============================] - 59s 2s/step - loss: 0.0056 - accuracy: 0.9975 - val_loss: 281.5234 - val_accuracy: 0.7157
Epoch 8/50
25/25 [==============================] - 55s 2s/step - loss: 0.0042 - accuracy: 0.9975 - val_loss: 93.2665 - val_accuracy: 0.8274
Epoch 9/50
25/25 [==============================] - 57s 2s/step - loss: 0.0051 - accuracy: 0.9975 - val_loss: 50.2444 - val_accuracy: 0.8832
Epoch 10/50
25/25 [==============================] - 62s 2s/step - loss: 4.7321e-04 - accuracy: 1.0000 - val_loss: 90.5489 - val_accuracy: 0.8528
Epoch 11/50
25/25 [==============================] - 57s 2s/step - loss: 0.0012 - accuracy: 1.0000 - val_loss: 73.0347 - val_accuracy: 0.8528
```
My code:
```
IMG_HEIGHT = 224
IMG_WIDTH = 224
IMG_CHANNELS = 3
EPOCHS = 50
BATCH_SIZE = 32
NUM_CLASSES = 2
LEARNING_RATE = 0.001
DATA_DIR = "real_data/"
# Load the dataset
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
DATA_DIR,
validation_split=0.2,
subset="training",
seed=42,
image_size=(IMG_HEIGHT, IMG_WIDTH),
batch_size=BATCH_SIZE,
)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
DATA_DIR,
validation_split=0.2,
subset="validation",
seed=42,
image_size=(IMG_HEIGHT, IMG_WIDTH),
batch_size=BATCH_SIZE,
)
train_ds = train_ds.map(lambda x, y: (preprocess_input(x), y))
val_ds = val_ds.map(lambda x, y: (preprocess_input(x), y))
base_model = tf.keras.applications.MobileNetV3Large(input_shape=(IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS), include_top=True, weights="imagenet")
base_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE), loss="sparse_categorical_crossentropy", metrics=["accuracy"])
base_model.fit(train_ds, validation_data=val_ds, epochs=EPOCHS)
```
|
Why does my validation loss go so high after few epochs?
|
CC BY-SA 4.0
| null |
2023-05-19T07:18:51.223
|
2023-05-19T07:18:51.223
| null | null |
138954
|
[
"keras",
"tensorflow",
"image-classification",
"computer-vision"
] |
121630
|
2
| null |
121606
|
0
| null |
Yes, it is possible to train a deep learning/machine learning model to learn the relationship between (A, B) and (C, D, E). One approach could be to use a neural network architecture, such as a multi-layer perceptron (MLP) or a convolutional neural network (CNN), to learn the mapping between the input variables (C, D, E) and the output variables (A, B).
The training process involves feeding the model with labeled data samples and adjusting the model's parameters to minimize the difference between the predicted outputs and the true outputs. This process is commonly referred to as optimization or training.
It's worth noting that the success of the model depends on the quality and quantity of the training data and the chosen architecture and hyperparameters of the model.
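As a minimal sketch (my own illustration) of learning A, B = f(C, D, E) with an MLP in scikit-learn, where the three inputs jointly predict the two outputs:
```
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Placeholder for your dataset: an (n_samples, 5) array with columns A, B, C, D, E
data = np.random.rand(1000, 5)
X = data[:, 2:5]   # inputs:  C, D, E
y = data[:, 0:2]   # outputs: A, B

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X_train, y_train)   # MLPRegressor supports multi-output targets
print(model.score(X_test, y_test))
```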
I hope this answer helps!
| null |
CC BY-SA 4.0
| null |
2023-05-19T09:10:52.197
|
2023-05-19T09:10:52.197
| null | null |
149968
| null |
121631
|
2
| null |
121605
|
1
| null |
It's probably best to look into some research, but here is some information which might help in the meantime, based on some general thoughts:
Adding NER to an LLM can enhance the model's ability to understand and respond to user input. NER can help the system identify and extract important contextual information such as names, locations, and dates from the user's input, which can then be used to generate more accurate and relevant responses. For example, if a user asks for information about a specific event, the system can use NER to identify the date and location of the event and provide more detailed information.
You may want to also look into the following methods or use cases;
- Topic Extraction: NER can help identify relevant keywords and topics from user input, which can be used to personalize the conversation and provide more relevant responses.
- Sentiment Analysis: By identifying named entities related to emotions or sentiment, NER can help classify the overall tone of the conversation and adjust responses accordingly.
- Filtering Out Irrelevant Content: NER can be used to filter out irrelevant content from user input, such as stop words, prepositions, and conjunctions, which can improve the accuracy of the conversational agent's responses.
- Entity Linking: NER can help link named entities to external sources of information, such as Wikipedia or Google Knowledge Graph, and provide more informative responses to user queries.
A combination of these with prompt pre-processing and filtering may provide better responses.
I hope this helps in some way!
| null |
CC BY-SA 4.0
| null |
2023-05-19T09:28:34.350
|
2023-05-19T09:28:34.350
| null | null |
149968
| null |
121632
|
2
| null |
121565
|
0
| null |
Performing sensitivity analysis on time series data requires a different approach. One way to do this is by using dynamic linear models (DLMs). DLMs can capture the effect of a parameter change over time and can be used to perform sensitivity analysis.
Another way is to use a time series model such as autoregressive integrated moving average (ARIMA) or seasonal autoregressive integrated moving average (SARIMA) to analyze the data. These models can be used to identify the impact of a parameter change on the time series over a longer period.
Otherwise, you could also split your time series into smaller time periods and analyze each period separately. This approach could also help identify the impact of a parameter change over a specific time period. However, it is important to keep in mind that this approach may not capture the full picture of the time series behavior.
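As a rough sketch of the ARIMA route with statsmodels (my own illustration on a synthetic series; replace it with your own data and a sensible (p, d, q) order):
```
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic placeholder series; replace with your own time series
idx = pd.date_range("2022-01-01", periods=200, freq="D")
series = pd.Series(np.random.randn(200).cumsum(), index=idx)

result = ARIMA(series, order=(1, 1, 1)).fit()   # (p, d, q) chosen only as an example
print(result.summary())

# Refit on a perturbed version of the series (or on sub-periods) and compare the
# estimated coefficients / forecasts to gauge sensitivity to the parameter change.
```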
| null |
CC BY-SA 4.0
| null |
2023-05-19T09:35:59.383
|
2023-05-19T09:35:59.383
| null | null |
149968
| null |
121633
|
2
| null |
121548
|
0
| null |
Assuming you're using Python, it is possible to do (relatively) efficient batch processing with a `PackedSequence` object. Here is some example code:
```
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
class CustomRNN(nn.Module):
def __init__(self, input_size, hidden_size, num_layers, bidirectional=False):
super(CustomRNN, self).__init__()
self.num_layers = num_layers
self.bidirectional = bidirectional
self.rnn = nn.RNN(input_size, hidden_size, num_layers, bidirectional=bidirectional, batch_first=True)
self.layer_norm = nn.LayerNorm(hidden_size * 2 if bidirectional else hidden_size)
def forward(self, x, lengths):
packed_seq = pack_padded_sequence(x, lengths, batch_first=True, enforce_sorted=False)
output, hidden = self.rnn(packed_seq)
output, _ = pad_packed_sequence(output, batch_first=True)
output = self.layer_norm(output)
return output, hidden
```
Here, CustomRNN takes in the `input_size`, `hidden_size`, `num_layers`, and `bidirectional` parameters just like nn.RNN. In the forward method, the input sequence `x` and the corresponding lengths are first packed into a `PackedSequence` object using `pack_padded_sequence`. The packed sequence is then passed through the RNN and the output is obtained. The output is then unpacked using `pad_packed_sequence` and layer normalization is applied to it using nn.LayerNorm. Finally, the normalized output and hidden state are returned.
With this implementation, you can efficiently process variable-length sequences using a `PackedSequence` object while also incorporating layer normalization into the RNN.
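For example, usage could look like this (a hypothetical continuation of the sketch above with random data):
```
batch_size, max_len, input_size, hidden_size = 4, 10, 8, 16
model = CustomRNN(input_size, hidden_size, num_layers=2, bidirectional=True)

x = torch.randn(batch_size, max_len, input_size)   # padded batch
lengths = torch.tensor([10, 7, 5, 3])              # true sequence lengths

output, hidden = model(x, lengths)
print(output.shape)   # (batch_size, max_len, hidden_size * 2) for a bidirectional RNN
```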
| null |
CC BY-SA 4.0
| null |
2023-05-19T09:42:43.183
|
2023-05-19T09:42:43.183
| null | null |
149968
| null |
121634
|
2
| null |
121525
|
0
| null |
You can create a new column for each layer of location, such as city, area, phase, and tower/building. Then, split the data in the original column by the ">" symbol and assign each layer to its corresponding new column. For the properties with less than five layers, you can assign NaN values to the rest of the layers. After that, you can encode the categorical data using one-hot encoding or label encoding to make it numerical. This will enable you to use the new columns as features for your ML/DL model.
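A minimal pandas sketch of that idea (my own illustration; the column name and location strings are made up):
```
import pandas as pd

df = pd.DataFrame({"location": [
    "Karachi > DHA > Phase 6 > Tower A",
    "Lahore > Gulberg > Block C",        # fewer layers -> NaN for the missing levels
]})

levels = ["city", "area", "phase", "tower"]
parts = df["location"].str.split(">", expand=True)
parts.columns = levels[: parts.shape[1]]
parts = parts.apply(lambda col: col.str.strip())

df = pd.concat([df, parts], axis=1)

# One-hot encode the new categorical columns for use as model features
df_encoded = pd.get_dummies(df, columns=[c for c in levels if c in df.columns])
print(df_encoded.head())
```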
| null |
CC BY-SA 4.0
| null |
2023-05-19T09:46:04.890
|
2023-05-19T09:46:04.890
| null | null |
149968
| null |
121635
|
2
| null |
121439
|
0
| null |
The below `getFeatureAssignment` function first generates a hash value for the customer ID using the `hashCode` function. It then performs a modulo operation on the hash value with the total number of partitions to get the partition number. Finally, it compares the partition number with the threshold value to determine whether the customer is assigned to the feature or not.
The `hashCode` function is used to generate a hash value for the customer ID. It iterates over each character in the string and accumulates a rolling multiply-by-31 hash (the same scheme as Java's String.hashCode). However, you can replace this as needed.
```
function getFeatureAssignment(customerId: string, numPartitions: number, threshold: number): boolean {
  const hash = hashCode(customerId);
  // hashCode can return a negative number, so take the absolute value before the modulo
  const partition = Math.abs(hash) % numPartitions;
  return partition / numPartitions < threshold;
}

function hashCode(str: string): number {
  let hash = 0;
  for (let i = 0; i < str.length; i++) {
    const char = str.charCodeAt(i);
    hash = ((hash << 5) - hash) + char; // hash * 31 + char
    hash = hash & hash;                 // force 32-bit integer overflow semantics
  }
  return hash;
}
```
This implementation ensures consistent group assignments for each feature and allows for easy addition of new features. Hashing the customer ID and taking the value modulo the number of partitions spreads customers roughly evenly across the partitions.
The new groups would obtain splits that are independent of the previous splits for every feature in a deterministic and reproducible way. The threshold value can be adjusted for each feature to ensure that any slight imbalances in partition sizes are accounted for, without affecting the deterministic and reproducible nature of the group assignments.
| null |
CC BY-SA 4.0
| null |
2023-05-19T10:04:44.610
|
2023-05-19T10:04:44.610
| null | null |
149968
| null |
121636
|
2
| null |
121322
|
0
| null |
To train a model like `RoBERTa` on this dataset, you can first preprocess the data by tokenizing the names, types, and signatures using a tokenizer specific to the model architecture you want to use. You can then convert the tokenized data into numerical values that can be input into the model.
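As a minimal sketch (my own illustration) of the tokenization step with the Hugging Face tokenizer for RoBERTa:
```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# Hypothetical product-name examples; replace them with rows from your dataset
names = ["Acme SuperWidget 3000", "Globex Mini Gadget"]
encoded = tokenizer(names, padding=True, truncation=True, return_tensors="pt")

print(encoded["input_ids"].shape)       # (batch_size, max_sequence_length)
print(encoded["attention_mask"].shape)  # same shape: 1 for real tokens, 0 for padding
```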
However, there are several other approaches to training a model to detect possible signatures for product names which may be easier to implement.
One approach is to use rule-based systems, which involve creating a set of rules that identify patterns in the text that are likely to correspond to product names. For example, you could define rules that look for common prefixes or suffixes for product names.
Another approach is to use unsupervised learning techniques, such as clustering or topic modeling, to group similar text together and identify groups that are likely to correspond to product names. However, these approaches may not be as accurate as supervised learning with labeled examples.
Additionally, you could use a combination of these techniques to create a more robust rule-based system for detecting product name signatures.
Keep in mind that the quality of the data and choice of model architecture can impact accuracy, so it is important to experiment with different configurations and that having a large dataset is also important for generalization.
| null |
CC BY-SA 4.0
| null |
2023-05-19T10:26:45.337
|
2023-05-19T10:26:45.337
| null | null |
149968
| null |
121637
|
1
| null | null |
0
|
11
|
I would like to train an LSTM network that takes 5 files as input and predicts the 6th file. Each file contains 810,000 data points (precipitation values), and each data point corresponds to a location. So I have precipitation values for 810,000 locations.
Usually, an LSTM network takes sequential data as input, and in my case a file is generated every 5 minutes. Now, I have created an array of 810,000 values for each file, but I am unable to figure out how I can train the model using these files and how I can split the train set and test set out of the available files. Below is the attached sample code:
```
def model(self, folder_name):
files = os.listdir(folder_name)
files_list = []
output = []
for i in range(len(files)):
files_list.append(files[i])
if len(files_list) == 6:
output.append(files_list)
files_list = []
if files_list:
output.append(files_list)
if len(output) > 0 and len(output[-1]) < 6:
remaining = output.pop()
last_list = output[-1] + remaining
output[-1] = last_list
for _filelist in output:
X = _filelist[:5]
y = _filelist[-1]
train_data = np.empty((len(X), 810000))
for i in range(len(X)):
binary_reader = BinaryReader()
train_data = np.array(
binary_reader.file_reader(f"{folder_name}\\{X[i]}")
)
y = np.array(binary_reader.file_reader(f"{folder_name}\\{y}")).reshape(
(-1,)
)
```
output:
```
[[0.]
[0.]
[0.]
...
[0.]
[0.]
[0.]] [0. 0. 0. ... 0. 0. 0.]
[[0.]
[0.]
[0.]
...
[0.]
[0.]
[0.]] [0. 0. 0. ... 0. 0. 0.]
[[0.]
[0.]
[0.]
...
[0.]
[0.]
[0.]] [0. 0. 0. ... 0. 0. 0.]
```
In the sample folder, 20 files are available, so I am giving the model the first 5 files as input.
```
from keras.models import Sequential
from keras.layers import *
from keras.callbacks import ModelCheckpoint
from keras.losses import MeanSquaredError
from keras.metrics import RootMeanSquaredError
from keras.optimizers import Adam
model1 = Sequential()
model1.add(InputLayer((5,1)))
model1.add(LSTM(64))
model1.add(Dense(8,'relu'))
model1.add(Dense(1,'linear'))
model1.summary()
```
Now I want to feed the LSTM network with the available data in the required input shape. Can somebody help me out with this situation?
|
Preparing LSTM Network input from multiple files
|
CC BY-SA 4.0
| null |
2023-05-19T10:33:34.383
|
2023-05-19T10:33:34.383
| null | null |
150019
|
[
"lstm",
"machine-learning-model",
"rnn",
"preprocessing"
] |
121638
|
1
| null | null |
0
|
18
|
I'm looking to use a logistic regression model to predict who is most likely to suffer a heart attack within a population.
I have a dependent variable flag for has heart attack along with some other data such as age, obesity flag, smoking status etc.
The issue is that the population data is the latest snapshot of the current population, not the data as it was at the time the event happened. My concern is that somebody might have turned their life around since the event: while at the time they suffered a heart attack they smoked and had an unhealthy BMI, they are now a non-smoker with a healthy BMI, which could potentially lead to an inaccurate model.
Would a solution to this be to replace the values where the dependent variable is true with the data that would have been correct at the time the event took place?
|
Getting The Data Right For A Model
|
CC BY-SA 4.0
| null |
2023-05-19T10:38:05.733
|
2023-05-19T10:38:05.733
| null | null |
150018
|
[
"data",
"logistic-regression"
] |
121639
|
1
| null | null |
1
|
130
|
I am doing a POC on LLM text generation. I have one AWS p3.8xlarge instance, which has 4 GPUs of 16 GB each. I am pretty new to using LLMs and GPUs. When I try to load a pretrained LLM (WizardLM) onto a GPU, it says that 16 GB is not sufficient for this. So my question is: how can I load the model using all 64 GB?
|
Load an LLM in multiple GPUs
|
CC BY-SA 4.0
| null |
2023-05-19T12:15:42.300
|
2023-05-19T12:42:16.843
| null | null |
108477
|
[
"python",
"nlp",
"pytorch",
"transformer",
"gpu"
] |
121640
|
2
| null |
121639
|
1
| null |
Using multiple GPUs usually means that the whole model is copied into the memory of each of them. In Pytorch this is achieved with [nn.DataParallel](https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html) or [nn.parallel.DistributedDataParallel](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html). This, however, is not what you want.
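For illustration, a minimal sketch (with a hypothetical toy model, not an actual LLM) of what `nn.DataParallel` does: it replicates the full set of weights on every visible GPU, so it does not reduce the per-GPU memory requirement:
```
import torch
import torch.nn as nn

# Hypothetical toy model standing in for a real network.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024))

if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)  # a full copy of the weights ends up on each GPU
model = model.to("cuda" if torch.cuda.is_available() else "cpu")

x = torch.randn(8, 1024, device=next(model.parameters()).device)
y = model(x)  # the batch is split across GPUs; the weights are not
print(y.shape)
```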
It is possible to load parts of a model into different GPUs and distribute the computation among them. This, however, needs specific code logic to distribute and coordinate the different parts. It is not possible to automagically split a model into parts among different GPUs.
Your options are:
- Use a smaller model that fits in 16 GB.
- Use a GPU with enough memory to fit your current model.
- Use a quantized version of your model that is small enough.
- Perform CPU inference. This may be very slow. You may check if there is a C++ implementation for your model using parallelized CPU instruction sets to make inference fast; for instance, for Llama you can use llama.cpp.
| null |
CC BY-SA 4.0
| null |
2023-05-19T12:26:22.650
|
2023-05-19T12:42:16.843
|
2023-05-19T12:42:16.843
|
14675
|
14675
| null |
121641
|
2
| null |
65266
|
1
| null |
The solution of using `survival::basehaz()` with a coxph model and estimating a constant C, as implemented by [survXgboost](https://github.com/IyarLin/survXgboost), should be used with caution. When you have binary predictors, `coxph` coefficients explode, leading to a heavily overestimated baseline hazard; the constant C will not help much, and the performance of xgboost will look much worse than it really is.
The `gbm` package has a function `gbm::basehaz` which skips the model, avoiding the compatibility problem that you have in `survival::basehaz()`, and uses the `predict()` results to estimate the baseline hazard. It is more reliable and the (cumulative) baseline hazard is as expected.
| null |
CC BY-SA 4.0
| null |
2023-05-19T12:49:53.657
|
2023-05-19T14:53:36.417
|
2023-05-19T14:53:36.417
|
150021
|
150021
| null |
121642
|
2
| null |
121322
|
1
| null |
Given the nature of the problem, it might not be amenable to machine learning. The structure of the data drives how it can be modeled. The features are "name" (assumed to be a string) and "type" (assumed to be hierarchical categories). The target is called "signature". Typically, signatures are unique. Labeling unique items is commonly called identification (i.e., mapping different occurrences to the same individual instance). That is different than categorization (i.e., finding features common among a discrete group). Machine learning tends to focus on the generalization problem of categorization, not on identification.
| null |
CC BY-SA 4.0
| null |
2023-05-19T13:06:16.807
|
2023-05-22T11:21:01.700
|
2023-05-22T11:21:01.700
|
1330
|
1330
| null |
121643
|
1
| null | null |
3
|
111
|
In the Titanic dataset, there are two features, "SibSp" and "Parch," which have an impact on the survival rate. For instance, the survival rate tends to increase when the values of "SibSp" range from 0 to 2, but it decreases from 2 onwards. I intend to use logistic regression for predictions and I am unsure if I can utilize these features. Although they do not show a linear relationship with the survival rate, I wonder if logistic regression can still be applied or if it is not suitable. One idea I have is to OneHotEncode these features. This way, logistic regression could identify patterns by creating new features for each class in "SibSp" and "Parch." For example, we could check if "is SibSp = 1?" and assign 1 for "yes" and 0 for "no."
|
Logistic regression and non-linear relationship
|
CC BY-SA 4.0
| null |
2023-05-19T14:18:12.210
|
2023-05-20T06:26:08.273
| null | null |
149495
|
[
"logistic-regression"
] |
121644
|
2
| null |
121643
|
2
| null |
These features can indeed be used for regression, and in my experience they helped a little bit (it's been a while since I looked at Titanic ;). I ended up combining them:
```
train_set['Relatives'] = train_set.SibSp + train_set.Parch
test_set['Relatives'] = test_set.SibSp + test_set.Parch
```
though I was using SVMs so your mileage may vary, hth.
| null |
CC BY-SA 4.0
| null |
2023-05-19T15:31:17.343
|
2023-05-19T15:31:17.343
| null | null |
146483
| null |
121645
|
1
| null | null |
1
|
56
|
I’ve got some data by postal zone that includes:
- Postal zone code
- Average rent value per square foot
- Brand affinity 1
- Brand affinity 2
- Brand affinity 3
- Brand affinity 4
…and so on
The brand affinity data is a value from 0 to 100 that shows how much affinity the people living in that postal zone code have with a particular brand. There are about 50 brands.
I’m running a bit low on inspiration for this one. Does anyone have any ideas as to what could be done with this data?
Specifically, any ideas for data analysis or ML approaches would be welcome.
Thank you!
|
Looking for a couple of ideas please
|
CC BY-SA 4.0
| null |
2023-05-19T16:07:44.297
|
2023-05-20T14:50:09.937
|
2023-05-19T19:35:34.283
|
150026
|
150026
|
[
"machine-learning",
"regression",
"data-science-model",
"data-analysis",
"ai"
] |
121646
|
2
| null |
121643
|
1
| null |
The features can be used for regression, but my advice would be to perform feature selection to get the best results for the model.
For logistic regression, you can, for instance, choose an appropriate solver or use regularization (L1/L2) for this purpose.
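As a minimal sketch of the regularization idea (the synthetic data and hyperparameters below are illustrative assumptions only), an L1 penalty drives the coefficients of uninformative features to zero, which acts as a built-in form of feature selection:
```
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic data: 20 features, only 5 of which are informative.
X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)

clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5),  # L1 shrinks weak coefficients to 0
)
clf.fit(X, y)

coefs = clf.named_steps["logisticregression"].coef_.ravel()
print("Non-zero coefficients:", (coefs != 0).sum(), "of", coefs.size)
```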
| null |
CC BY-SA 4.0
| null |
2023-05-19T17:57:48.057
|
2023-05-19T17:57:48.057
| null | null |
149901
| null |
121647
|
2
| null |
85221
|
0
| null |
The most recommended would be precision, recall and F1-Score, but there are others like [F-Beta score](https://hasty.ai/docs/mp-wiki/metrics/f-beta-score) or other threshold metrics.
In any case, based on my experience, the choice of metric depends on your use case and on the conditions under which the classifier will be deployed (e.g., if your production data has a different class ratio than your evaluation data, the reported metric values can be misleading).
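For reference, a minimal sketch of how these metrics are computed with scikit-learn (the labels below are made up; `beta > 1` weights recall more heavily, `beta < 1` weights precision):
```
from sklearn.metrics import precision_score, recall_score, f1_score, fbeta_score

y_true = [0, 0, 1, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 1, 1, 0, 0, 1, 0, 1, 0]

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("F1:       ", f1_score(y_true, y_pred))
print("F2:       ", fbeta_score(y_true, y_pred, beta=2))  # recall-weighted F-beta
```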
| null |
CC BY-SA 4.0
| null |
2023-05-19T18:04:40.147
|
2023-05-19T18:04:40.147
| null | null |
149901
| null |
121649
|
2
| null |
121645
|
0
| null |
One idea for analyzing this data would be to explore correlations between the average rent value and brand affinities. This could involve using statistical methods like regression analysis to see if there is a relationship between the two variables.
Additionally, clustering algorithms could be used to group postal zones based on their brand affinities, which could provide insights into consumer behavior and help identify potential target markets for different brands.
Another approach would be to use machine learning models like decision trees or random forests to predict brand affinities based on other variables like the average rent value or demographic data. This could be useful for marketing and advertising purposes.
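As a rough sketch of the clustering idea above (the column names and values are hypothetical assumptions about how the data might be laid out):
```
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical layout: one row per postal zone, one column per brand affinity.
df = pd.DataFrame({
    "postal_zone": ["A1", "A2", "B1", "B2"],
    "avg_rent_sqft": [42.0, 55.5, 30.2, 28.9],
    "brand_1": [80, 75, 20, 15],
    "brand_2": [10, 20, 70, 65],
})

affinity_cols = [c for c in df.columns if c.startswith("brand_")]
X = StandardScaler().fit_transform(df[affinity_cols])

# Group postal zones into segments with similar brand-affinity profiles.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
df["segment"] = kmeans.fit_predict(X)
print(df[["postal_zone", "segment"]])
```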
| null |
CC BY-SA 4.0
| null |
2023-05-19T21:49:59.017
|
2023-05-19T21:49:59.017
| null | null |
149968
| null |
121650
|
2
| null |
121537
|
0
| null |
Yes, it is possible. By using a GAN to generate additional samples with a class prompt, you can increase the amount of training data available for your image classification model, which can help improve its robustness, calibration, and overall precision/recall.
This is particularly useful when you have limited training data available, as it allows you to generate synthetic data that can help improve the performance of your model. However, it's important to note that the quality of the generated samples will have an impact on the effectiveness of this approach, so you'll need to ensure that your GAN is generating high-quality images that are representative of the target distribution.
There are other approaches you may want to consider for improving the performance of your image classification model:
One way is to increase the size and complexity of the model, allowing it to learn more nuanced features and better distinguish between classes.
Another approach is to use transfer learning, where a pre-trained model is fine-tuned on the specific dataset, resulting in faster and more accurate training.
Additionally, adjusting the learning rate, using different activation functions, or adding regularization techniques like dropout can also improve the model's performance. It is important to experiment with different approaches and monitor the model's performance to determine the best strategy.
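As a minimal sketch combining a couple of these ideas, namely a frozen pre-trained backbone plus dropout (the MobileNetV2 backbone, input size, class count, and dataset names are assumptions for illustration):
```
import tensorflow as tf
from tensorflow.keras import layers, models

# Pre-trained feature extractor, frozen for the first stage of fine-tuning.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet"
)
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),                    # regularization for robustness
    layers.Dense(4, activation="softmax"),  # hypothetical 4-class problem
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds_with_gan_samples, validation_data=val_ds, epochs=10)  # real + GAN-generated images
```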
| null |
CC BY-SA 4.0
| null |
2023-05-19T22:10:45.457
|
2023-05-19T22:10:45.457
| null | null |
149968
| null |
121651
|
1
| null | null |
0
|
7
|
I am using pytorch geometric, and I need to create several masks for different experiments. I am creating a mask in the `process` function of my dataset class. Here is a simplified snippet:
```
import os
import torch
from torch_geometric.utils import from_networkx

data = from_networkx(graph)  # `graph` is a networkx graph built elsewhere
mask = torch.zeros(data.num_nodes, dtype=torch.bool)
for i in range(data.num_nodes):
if data.attr[i]==attr:
mask[i]=True
else:
mask[i]=False
data.mask= mask
torch.save(data, os.path.join(self.processed_dir, f'{filename}.pt'))
```
However, I want to access the subgraph induced by the masked nodes, i.e., I want to keep all nodes in the mask and all edges between nodes in the mask. It seems like there should already be a convenient function for doing this, but do I also need to create a mask for the edge indices?
I could write the code to achieve this myself, but I imagine there is a standard procedure for creating such a mask, mostly using built-in functions. If so, I would prefer not to create my own method. Any suggestion on the standard way to achieve this?
|
What is the standard way to incorporate a mask in a custom pytorch geometric dataset?
|
CC BY-SA 4.0
| null |
2023-05-20T02:05:30.733
|
2023-05-20T02:05:30.733
| null | null |
104546
|
[
"pytorch",
"pytorch-geometric"
] |
121653
|
1
| null | null |
0
|
6
|
I am using OpenAI Gym Atari to train my Pacman agent. This is part of my code:
```
import gym
import torch
import torch.nn as nn

class DQN(nn.Module):
def __init__(self, action_space):
super(DQN, self).__init__()
self.conv1 = nn.Conv2d(3, 16, kernel_size=8, stride=4)
self.conv2 = nn.Conv2d(16, 32, kernel_size=4, stride=2)
self.conv3 = nn.Conv2d(32, 64, kernel_size=3, stride=1)
self.fc = nn.Linear(64 * 22 * 16, action_space)
def forward(self, x):
x = torch.relu(self.conv1(x))
x = torch.relu(self.conv2(x))
x = torch.relu(self.conv3(x))
x = x.view(x.size(0), -1)
x = self.fc(x)
return x
#checkpoint = torch.load("C:\\Users\\tuyen\\Downloads\\hoang_chat.h5",map_location=torch.device('cpu'))
env = gym.make('ALE/MsPacman-v5',render_mode='human')
n_actions = env.action_space.n
state, info = env.reset()
target = DQN(n_actions)
#target.load_state_dict(checkpoint)
target.eval()
done = False
while not done:
action = target(torch.tensor(state, dtype=torch.float32).permute(2, 0, 1).unsqueeze(0))
max_action = torch.argmax(action)
action_index = max_action.item()
observation, reward, terminated, truncated, _ = env.step(action_index)
done = terminated or truncated
state = observation
env.close()
```
Then I realized that state and observation are always the same when I added this line:
```
...
print(state == observation)
state = observation
...
```
How should I fix this?
|
State and next state is the same on openai gym atari
|
CC BY-SA 4.0
| null |
2023-05-20T04:07:49.833
|
2023-05-20T04:07:49.833
| null | null |
150032
|
[
"reinforcement-learning",
"dqn",
"openai-gym"
] |
121654
|
1
|
121659
| null |
0
|
49
|
I've read the paper on [ALiBi](https://arxiv.org/pdf/2108.12409.pdf), and I understand that these models add a bias to the scores produced by the query/key multiplication.
But from my understanding, when I build the actual model I give it `N` input nodes. When I train a model I give it vectors of length `N`. How then at inference can I give it vectors of length greater than `N`? Am I misunderstanding how the multiplication of key and query works? Can there be keys of any length?
Edit: I guess my question also includes: why isn't there a multiplication error when I use longer keys at inference?
|
How can models like Mosaic's MPT-7b or Bloombergs BLOOMGPT take in so many tokens?
|
CC BY-SA 4.0
| null |
2023-05-20T04:58:42.643
|
2023-05-20T08:05:41.090
| null | null |
53916
|
[
"language-model"
] |
121655
|
1
| null | null |
0
|
15
|
This is my code:
```
import numpy as np
from sklearn.model_selection import ParameterGrid
from tensorflow.keras.applications.inception_v3 import InceptionV3
from tensorflow.keras.layers import Flatten, Dense, Dropout
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import SGD, RMSprop, Adagrad, Adadelta, Adam, Adamax, Nadam
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# Define the hyperparameters to search
param_grid = {
'batch_size': [8, 16, 32],
'optimizer': [SGD(), RMSprop(), Adagrad(), Adadelta(), Adam(), Adamax(), Nadam()],
'epochs': [10, 50, 100]
}
best_score = 0.0
best_params = {}
# Perform grid search
for params in ParameterGrid(param_grid):
# Initializing the pre-trained model
image_size = 229
IMG_SHAPE = (image_size, image_size, 3)
pre_trained_model = InceptionV3(
input_shape=IMG_SHAPE,
include_top=False,
weights='imagenet'
)
for layer in pre_trained_model.layers:
layer.trainable = False
last_layer = pre_trained_model.get_layer('mixed5')
last_output = last_layer.output
x = Flatten()(last_output)
# Dense hidden layer
x = Dense(512, activation='relu')(x)
x = Dropout(0.2)(x)
# Output neuron.
x = Dense(4, activation='softmax')(x)
model = Model(pre_trained_model.input, x)
model.compile(optimizer=params['optimizer'], loss='categorical_crossentropy', metrics=['accuracy'])
# Train the model
history = model.fit(
train_generator,
epochs=params['epochs'],
steps_per_epoch=train_generator.samples // params['batch_size'],
validation_data=validation_generator,
validation_steps=validation_generator.samples // params['batch_size'],
verbose=0
)
# Evaluate the model
score = model.evaluate(validation_generator, steps=validation_steps, verbose=0)[1]
# Check if this is the best score so far
if score > best_score:
best_score = score
best_params = params
# Print the best combination of hyperparameters
print("Best: %f using %s" % (best_score, best_params))
best_model = Model(inputs=base_model.input, outputs=predictions)
best_model.compile(optimizer=best_params['optimizer'], loss='categorical_crossentropy', metrics=['accuracy'])
best_model.fit(train_generator, epochs=best_params['epochs'])
# Save the best model
best_model.save('my_inceptionskinmodel.hdf5')
```
|
Why does it give Node: 'model_8/dense_23/Relu' Matrix size-incompatible?
|
CC BY-SA 4.0
| null |
2023-05-20T04:59:19.200
|
2023-05-21T03:04:19.617
|
2023-05-21T03:04:19.617
|
14518
|
150035
|
[
"hyperparameter-tuning"
] |