Dataset schema (column · dtype · observed range):
- CreationDate · string · 23 characters
- Users Score · int64 · -42 to 1.15k
- Tags · string · 6 to 105 characters
- AnswerCount · int64 · 1 to 64
- A_Id · int64 · 518 to 76.4M
- Title · string · 11 to 150 characters
- Q_Id · int64 · 337 to 73M
- is_accepted · bool · 2 classes
- ViewCount · int64 · 7 to 6.81M
- Question · string · 15 to 29.1k characters
- Score · float64 · -1 to 1.2
- Q_Score · int64 · 0 to 6.79k
- Available Count · int64 · 1 to 31
- Answer · string · 6 to 11.6k characters
2022-07-10T11:20:00.000
0
python,memory
1
72,928,156
Run an encrypted program without saving it with python
72,928,096
false
50
I have an AES-encrypted file in Python; let's call it encrypted.exe. I have the key for the encryption, so I can use Python to decrypt my file (the Cipher module) and obtain decrypted.exe, then save the file (using write). Later I can run the file by clicking on it or through cmd or PowerShell. What I want is to be able to run decrypted.exe without saving it. For instance, I thought about decrypting with my AES key, then loading the result into RAM and running it from there. I don't know how to do that, nor whether it is possible with Python.
0
1
1
I would say you can, but it's more trouble than it's worth. It's a really complex task, so I would recommend saving the file to the hard drive, using the subprocess or os library to execute it, and then deleting it.
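A minimal sketch of that save-run-delete approach, assuming pycryptodome for the AES part (the question only mentions "the Cipher module"); the key, the CBC mode, and the prepended-IV layout are all illustrative assumptions:

```python
import os
import subprocess
import tempfile
from Crypto.Cipher import AES  # pycryptodome; AES-CBC assumed for illustration

with open("encrypted.exe", "rb") as f:
    iv, ciphertext = f.read(16), f.read()   # assumes the IV is prepended

key = b"0123456789abcdef"                    # placeholder 16-byte key
decrypted = AES.new(key, AES.MODE_CBC, iv).decrypt(ciphertext)
# (a real implementation would also strip the block padding here)

# Write to a temp file, run it, then delete it.
with tempfile.NamedTemporaryFile(suffix=".exe", delete=False) as tmp:
    tmp.write(decrypted)
try:
    subprocess.run([tmp.name], check=True)
finally:
    os.remove(tmp.name)
```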
2022-07-10T12:44:00.000
2
python,numpy,convolution,ndimage,numpy-memmap
2
72,928,780
Performing ndimage.convolve on big numpy.memmap: Unable to allocate 56.0 GiB for an array
72,928,565
false
73
While trying to run ndimage.convolve on a big numpy.memmap, an exception occurs: Exception has occurred: _ArrayMemoryError: Unable to allocate 56.0 GiB for an array with shape (3710, 1056, 3838) and data type float32. It seems that convolve creates a regular numpy array which won't fit into memory. Could you please tell me if there is a workaround? Thank you for any input.
0.197375
0
1
Scipy and Numpy often create new arrays to store their return values. Such a temporary array is held in RAM even when the input array is stored on a storage device and accessed with memmap. There is an output parameter to control this in many functions (including ndimage.convolve). However, this does not prevent internal in-RAM temporary arrays from being created (though such arrays are not frequent and often not huge). There is not much more you can do if the output parameter is not present or a big internal array is created; the only option then is to write your own implementation that does not allocate huge in-RAM arrays. C modules, Cython and Numba are pretty good for this. Note that doing efficient convolutions is far from simple when the kernel is not trivial, and there are many research papers addressing this problem.
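A small sketch of the output parameter route: both the input and the result live in disk-backed memmaps, so convolve's result never needs a full in-RAM array (internal temporaries may still be allocated). The file names and kernel are placeholders matching the question's shape:

```python
import numpy as np
from scipy import ndimage

src = np.memmap("input.dat", dtype=np.float32, mode="r",
                shape=(3710, 1056, 3838))
dst = np.memmap("output.dat", dtype=np.float32, mode="w+", shape=src.shape)
kernel = np.ones((3, 3, 3), dtype=np.float32) / 27.0  # simple box filter

ndimage.convolve(src, kernel, output=dst)  # result goes to the disk-backed array
dst.flush()                                 # make sure it hits the disk
```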
2022-07-10T19:46:00.000
0
python,django,database,postgresql,backend
3
72,931,447
Where do I store user personal information
72,931,426
false
66
I am creating an application in which a user registers with a username, email, and password. This data is stored in one table. But then the user enters profile information such as photo, bio, activity and such. Do I store this data in the same table as the username, email, and password? Or do I create another table in which I link those in some way?
0
0
1
You want to extend the User model with other information. The recommended approach for this is to create another model, e.g. Profile, and set up a OneToOne relation to the User.
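A minimal sketch of that recommended pattern; the extra fields are illustrative:

```python
from django.conf import settings
from django.db import models

class Profile(models.Model):
    # One profile row per user; deleting the user removes the profile too.
    user = models.OneToOneField(settings.AUTH_USER_MODEL,
                                on_delete=models.CASCADE)
    photo = models.ImageField(upload_to="profiles/", blank=True)
    bio = models.TextField(blank=True)
```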
2022-07-10T23:02:00.000
2
python,html,graph,networkx,pyvis
1
72,967,866
Interactive Pyvis Diagram Search function
72,932,416
false
231
Hi, I am creating an interactive graph in Pyvis with more than 200 nodes, and I want to add search bar functionality, so that I can type a node name into the search bar and have that node selected in the graph. Has anyone done anything like this?
0.379949
1
1
I am currently working on the same thing. One tool I have used in the past is a piece of software called Gephi. It allows you to load in node and edge tables, which can then be displayed in an HTML page through sigma.js. If you find anything for Pyvis, let me know :)
2022-07-11T07:35:00.000
2
python,object-detection,unsupervised-learning
2
72,935,735
Unsupervised object detection
72,935,129
false
258
I am trying to detect unique/foreign objects on a conveyor. The problem in our case is that we don't know which kind of object will pass through the conveyor along with the raw material. I am familiar with object detection techniques such as YOLO and Detectron, which detect objects based on the features we annotate, but in our case we don't know the features of the object. I am wondering about generic object proposal models for detection. Is there any pre-trained unsupervised model that suits this, or some method or algorithm I could go with? I hope I have explained my problem well enough. Thanks in advance.
0.197375
2
1
I think I understood your issue well. If you do not want to train an object detection model because you may not have the bounding boxes corresponding to the objects, you have several options. However, I do not think there is a pretrained model that fits your problem out of the box, since you would have to fine-tune it, and for that you would need some annotations. One thing you could do, as Virgaux Pierre said, is use some classic clustering-based segmentation. On the other hand, you could use a weakly-supervised approach, which only needs image-level labels instead of bounding boxes; this could fit well if you do not need a high mAP. You could use CAM, Grad-CAM or other techniques to obtain activation maps. Furthermore, these approaches are easy to implement with a simple NN and some forward/backward hooks. Hope it helps.
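A rough Grad-CAM-style sketch of the hooks idea, assuming torchvision's ResNet-18 and a random input; this illustrates the technique, it is not a drop-in solution for the conveyor problem:

```python
import torch
from torchvision import models

model = models.resnet18(weights=None).eval()
store = {}

def hook(module, inputs, output):
    output.retain_grad()   # keep gradients on this intermediate feature map
    store["act"] = output

model.layer4.register_forward_hook(hook)  # last conv block

x = torch.randn(1, 3, 224, 224)           # dummy image batch
model(x)[0].max().backward()              # backprop the top class score

act = store["act"]                                 # (1, 512, 7, 7)
w = act.grad.mean(dim=(2, 3), keepdim=True)        # per-channel importance
cam = torch.relu((w * act).sum(dim=1)).detach()    # (1, 7, 7) activation map
```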
2022-07-11T08:52:00.000
1
python,visual-studio-code,terminal
1
73,166,215
How to run the selection in VSCode in the active (IPython) terminal instead of creating a new terminal every time?
72,935,914
true
224
I would like to execute the selection in VSCode (Shift + Enter) in my current (IPython) terminal. Instead, the shortcut creates a new terminal every time and doesn't run the selection in the active (IPython) terminal. Which setting do I have to change to fix this?
1.2
1
1
I have solved it myself: adding "python.terminal.launchArgs": ["-m", "IPython", "--no-autoindent"] to the settings JSON file does the job, as mentioned in the question's second comment (note the -m flag). I had an error in my JSON file which hadn't been flagged, so the snippet wasn't being applied; using an online JSON validator helped me find that error.
2022-07-11T10:26:00.000
0
python,mysql,aws-lambda,mysql-python,mysqlcheck
1
72,955,365
What is the alternative of mysqlcheck in python3?
72,937,101
false
48
I need the mysqlcheck command for optimizing tables in the database. I've created a Lambda function in Python for the whole process; to start, I need to optimize all tables of the database. I'm using the PyMySQL module for connecting to the DB, but the ability to optimize tables doesn't seem to be provided by PyMySQL. I then tried to use the subprocess module to run the OS command mysqlcheck, but got the following error: [ERROR] FileNotFoundError: [Errno 2] No such file or directory: 'mysqlcheck'. Can you tell me whether any alternative to mysqlcheck exists in Python, or how I can run the mysqlcheck command in AWS Lambda? Thank you.
0
1
1
The alternative is to move your tables away from ENGINE=MyISAM (which sometimes needs OPTIMIZE) to ENGINE=InnoDB (which takes care of itself).
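For completeness, mysqlcheck's optimize mode boils down to issuing OPTIMIZE TABLE statements, which PyMySQL can send as plain SQL if you do want to keep MyISAM for now; host and credentials below are placeholders:

```python
import pymysql

conn = pymysql.connect(host="db-host", user="user",
                       password="secret", database="mydb")
with conn.cursor() as cur:
    cur.execute("SHOW TABLES")
    for (table,) in cur.fetchall():
        cur.execute(f"OPTIMIZE TABLE `{table}`")  # what mysqlcheck -o runs
        print(table, cur.fetchall())              # status rows per table
conn.close()
```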
2022-07-11T11:01:00.000
0
python,pandas,numpy,xgboost
1
72,968,427
XGBoostError:[18:46:19] D:\Build\xgboost\xgboost-1.6-1.git\src\data\data.cc:487:Check failed: valid: Label contains NaN, infinity or a value too large
72,937,490
false
210
I am trying to use XGBRegressor on my data but keep getting the above error when calling model.fit. I have tried np.any(np.isnan(df)) and np.all(np.isfinite(df)), which are both True. I tried getting rid of the inf and null values using df.replace([np.inf, -np.inf], np.nan, inplace=True) and df.fillna(0, inplace=True), but the error still occurs, and np.all(np.isfinite(df)) still shows True. Most errors I found on the web say "Input contains..." and not "Label contains...".
0
0
1
This is a long shot, but I was having a similar error and couldn't figure it out. It turns out I was doing a log transform right before I tossed my data into the regressor, and I had negative values in my output that were going to infinity. I didn't catch it because I looked for NAs/infinite values before it hit the log transform part of the pipeline.
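A quick way to check for exactly this failure mode: inspect the label vector after any transform, not just the features. The column name and toy values below are made up:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"target": [10.0, 0.0, 5.0, -2.0]})  # toy data with bad values
y = np.log(df["target"].to_numpy())   # log emits -inf for 0 and NaN for negatives
print(np.isfinite(y).all())           # False: the labels now contain NaN/inf
print(df["target"][~np.isfinite(y)])  # the offending rows
```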
2022-07-11T12:30:00.000
2
python,pip,package,site-packages
1
72,938,775
Problems with installing, uninstalling, and upgrading packages with pip
72,938,633
true
32
Whenever I want to update/upgrade packages with pip I see the following message: WARNING: Ignoring invalid distribution -ip (c:\python39\lib\site-packages). When I go to the mentioned path I see folders like ~rgcomplete-1.12.2.dist-info that contain some files I don't know what to do with. Please advise.
1.2
0
1
It's a problem with a folder name under c:\python39\lib\site-packages. Since I don't know the contents I can't say exactly what is wrong, but deleting the folders whose names start with characters like ~ should work; such folders are typically leftovers from interrupted pip operations.
2022-07-11T15:22:00.000
2
python,pytorch,autograd
1
72,944,357
Forward function with multiple outputs?
72,940,912
true
223
Typically the forward function in a PyTorch nn.Module computes and returns predictions for the inputs of the forward pass. Sometimes, though, intermediate computations might be useful to return: for example, for an encoder, one might need to return both the encoding and the reconstruction in the forward pass to be used later in the loss. Question: can the forward function of PyTorch's nn.Module return multiple outputs, e.g. a tuple of outputs consisting of predictions and intermediate values? Does such a return value mess up backward propagation or autograd? If it does, how would you handle cases where multiple functions of the input are incorporated in the loss function? (The question should be valid in TensorFlow too.)
1.2
0
1
"The question should be valid in Tensorflow too", but PyTorch and Tensorflow are different frameworks. I can answer for PyTorch at least. Yes you can return a tuple containing any final and or intermediate result. And this does not mess up back propagation since the graph is saved implicitly from the tensors outputs using callbacks and cached tensors.
2022-07-12T12:30:00.000
1
python,pip,jupyter-notebook,jupyterhub
1
72,952,391
Can we Restrict pip install in jupyter cell/notebook
72,952,346
false
69
Can we restrict or disable pip install in Jupyter? I don't want anyone to be able to install packages in my Jupyter notebook. Does anyone have an idea about this?
0.197375
0
1
I don't know what environment you are running on, but I guess you can change the permissions of the user that runs the notebook process so that it doesn't have permission to run pip.
2022-07-12T16:31:00.000
0
python,scrapy,middleware
1
72,955,692
Scrapy: Pass variable from Middleware to Spider itself
72,955,582
false
50
I am trying to capture the raw request payload and request headers for the purpose of tracking inside my database. I know about the response.request.headers but that is the returned request headers. Would it be possible to create a middleware that captures the request.headers and payload (body) and send that to the spider as a meta tag or anything like that?
0
0
1
I found a way to do it (granted, without the middleware): store the scrapy.Request() in a variable, e.g. req; store req.headers.to_unicode_dict() in self.req_headers; store req.body in self.req_body; then execute yield req to send the request.
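A sketch of those steps inside a spider; the URL is a placeholder. Note this captures the headers as constructed, before any downloader middleware could alter them:

```python
import scrapy

class TrackingSpider(scrapy.Spider):
    name = "tracking"

    def start_requests(self):
        req = scrapy.Request("https://example.com", callback=self.parse)
        self.req_headers = req.headers.to_unicode_dict()  # headers as built
        self.req_body = req.body                          # raw payload
        yield req

    def parse(self, response):
        self.logger.info("sent headers: %s", self.req_headers)
```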
2022-07-13T01:57:00.000
0
python,django
2
72,961,858
Is it okay to abuse @classmethod and @staticmethod In python?(Django)
72,960,099
false
74
I have used the Spring framework for developing server applications, and now I am starting to learn Django. I have one question: is there any issue with memory when using @classmethod or @staticmethod in Python? In Spring (Java) there are concerns about abusing static, and Spring also supports controlling objects (an IoC container). But in Django there is no such decorator or setting for objects; I just use @classmethod and call it via the class name (e.g. AccountService.join()).
0
0
1
In "normal use", this is fine. Class methods do not get access to instance data, and static methods don't get access to the parent class. If these are not issues for you, then you can use @classmethod or @staticmethod respectively. In reality though, semantic/readability benefits aside, this is likely premature optimisation, especially in a web application context where the lion's share of latency/resources go to other things.
2022-07-13T05:37:00.000
0
python-3.x,django,cpanel
2
72,992,653
Cpanel python detecting wrong version
72,961,387
false
73
I'm trying to deploy a Django application on cPanel. I set up Python 3.7.12, but it is detecting Python 2.6.6. I've tried everything I can think of; please help me.
0
0
2
If you have deployed Python 3.7 via the Python Selector inside cPanel, it actually creates a virtual environment with that version. Python 2.6 is most probably the default global one on your hosted server. In order to be able to use the 3.7 version, you will need to enter inside the virtual environment. To do that, go to your cPanel -> Setup Python App, edit your newly deployed app, and at the top of the page, you will have a command that you can copy/paste in SSH to enter into that environment.
2022-07-13T05:37:00.000
0
python-3.x,django,cpanel
2
73,004,549
Cpanel python detecting wrong version
72,961,387
false
73
I'm trying to deploy a Django application on cPanel. I set up Python 3.7.12, but it is detecting Python 2.6.6. I've tried everything I can think of; please help me.
0
0
2
You have to edit the .htaccess file: in it you can set the Python 3 virtual environment path. If you have root privileges you can also edit the application.json file and change the default Python path, but editing .htaccess is easier.
2022-07-13T08:07:00.000
0
python,docusignapi
1
72,966,545
How to access the number of envelopes per user per time period on DocuSign?
72,962,889
false
26
I would like simply to access the number of envelopes (per user and per period) for billing purposes. Any idea how to do this in a fast way? Thank you!
0
0
1
Probably best is to set up an account-level webhook via the Connect feature. Your software can then be notified anytime a user in your account sends an envelope. You can add the information to your own BI (Business Intelligence) system and create derivative reports such as envelopes per user per week, etc.
2022-07-13T08:41:00.000
0
python,django,single-sign-on,openid-connect,openid
3
73,008,679
How to implement OpenID Connect with multiple providers in Django?
72,963,273
false
472
I'm trying to implement multiple SSO (OpenID Connect) logins in my application, besides the regular one. The current provider requests are Azure and Okta, but there will be more. For every bigger customer using my application, I want to be able to enable them a custom SSO login that they can setup in the admin panel. All the libraries I've tried using for this are either using settings.py and local django authentication, or they are deprecated. The flow is like this: User chooses their company and SSO login button -> Gets redirected to login -> I send the client id, secret etc. (which they entered in the admin panel when registering an sso connection) -> I get a token in return with the users name and email -> with this info (email) I find the already existing user in my local database and log him in
0
1
1
"User chooses their company and SSO login button": OK, you can simply put the buttons on your website. "Gets redirected to login": you can implement a /redirect endpoint and prepare it to fetch user information from the OAuth2 provider. "I send the client id, secret etc. (which they entered in the admin panel when registering an SSO connection)": this is a continuation of step 2, but I don't know how to connect it with the Django admin panel, sorry. "I get a token in return with the user's name and email": use the OAuth2 provider's user-info API, which gives you whatever user information the provider exposes. "With this info (email) I find the already existing user in my local database and log him in": just write a function against your database; it is not hard. I think that's enough to implement this. As for "that they can set up in the admin panel": I can't be sure you can customize the ready-made Django admin panel that far; that may be hard to do.
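For the admin-panel part, one plausible shape (field names are purely hypothetical) is a model holding each customer's OIDC settings, registered with the admin like any other model:

```python
from django.db import models

class SSOProvider(models.Model):
    company = models.CharField(max_length=100, unique=True)
    issuer_url = models.URLField()   # e.g. the Okta/Azure OIDC issuer
    client_id = models.CharField(max_length=200)
    client_secret = models.CharField(max_length=200)

    def __str__(self):
        return self.company

# admin.py would then expose it:
#   from django.contrib import admin
#   admin.site.register(SSOProvider)
```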
2022-07-13T11:11:00.000
1
python,assembly,nasm,obfuscation,shellcode
1
72,968,829
Generate shorter shellcode
72,965,314
false
204
Is there a way to generate short shellcode from a malware executable (PE/.exe)? Malware executables are sometimes big, and when converted to shellcode this leads to longer shellcode, making analysis and obfuscation difficult and time-intensive. Imagine trying to obfuscate shellcode generated from a 1.5 KB trojan by inserting new instructions before, after, and between existing ones, replacing existing instructions with alternatives, and inserting jump instructions that change the execution flow and randomly divide the shellcode into separate blocks. Performing these insertion operations on such large shellcode takes many hours. If anyone has an idea on how to shorten such long shellcode I will be grateful.
0.197375
0
1
While I hate helping people that do these kinds of things, I have a feeling you won't get anywhere anyway. Your payload is your payload. You don't try to minimize a payload. You find a way to encode it, a way that suits you. You can compress it of course but you must treat a payload as a completely opaque blob of data, it could be almost incompressible as far as you know. For example, a simple way to encode arbitrary data in a shellcode is by applying any transformation T to it (e.g. compress it) and then converting the result to a modified base64 where arbitrary letter pairs are swapped. This prevents antiviruses from detecting the payload (checking memory in real-time is too expensive so the final payload won't be checked), uses only printable characters, lets you reduce the payload size if possible (thanks to T), and is easily automated. If you need to have a shorter payload, then reduce its size and not the size of the payload plus the shellcode that bootstraps it. However, what is usually done is to adopt the well-known kill-chain: vector -> dropper -> packer -> malware. The vector is how you gain execution in a particular context (e.g. a malicious MS Office macro or a process injection) and the dropper is a piece of code or an executable that will download or load the payload. Your shellcode should act as a dropper, shellcodes are typically very constrained (in size and shape) so they are kept short by loading the payload from somewhere else. If you need to embed your payload in the shellcode then analyze the constraints and work on the payload. If your payload can't satisfy them, you need to change it. I've only seen plain PE/ELF payloads mostly in process injections, where the attacker can allocate remote memory for the payload and the code (which is often called a shellcode but it is not really one). All shellcodes used in real binary exploitation either needed no payload (eg: spawn a reverse shell) or were droppers.
2022-07-13T12:02:00.000
2
python,numpy,numpy-ndarray,tensor
2
72,966,392
Matrix by Vector multiplication using numpy dot product
72,965,968
false
214
I have a matrix m = np.array([[3,4], [5,6], [7,5]]) and a vector v = np.array([1,2]), and these two tensors can be multiplied. For multiplication of the above two tensors, the number of columns of m must be equal to the number of rows of v. The shapes of m and v are (3,2) and (2,), respectively. How is the multiplication possible, if m has 3 rows and 2 columns whereas v has 1 row and 2 columns?
0.197375
1
1
In NumPy, I would recommend not thinking too much about "rows" and "columns". An array in numpy can have any number of dimensions: you can make 1-dimensional, 2-dimensional or 100-dimensional arrays. A 100-dimensional array doesn't have "rows" and "columns", and neither does a 1-dimensional array. The simple rule for multiplying 1- or 2-dimensional arrays is: the last axis/dimension of the first array has to have the same size as the first axis/dimension of the second array. So you can multiply: a (3,) array by a (3, 2) array; a (3, 2) array by a (2, 3) array; a (2, 3) array by a (3,) array.
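A quick demonstration with the question's own arrays: v is 1-D, so the product contracts m's last axis against v's only axis:

```python
import numpy as np

m = np.array([[3, 4], [5, 6], [7, 5]])  # shape (3, 2)
v = np.array([1, 2])                    # shape (2,): a 1-D array, no "rows"
print(m @ v)          # [11 17 17] -- m's last axis matches v's only axis
print((m @ v).shape)  # (3,)
```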
2022-07-13T14:35:00.000
0
python
2
72,968,247
How to write to file special characters in other languages (ą,ę,ć,ż...) to file in python?
72,968,093
false
48
After writing text from a site to a .txt file, special characters are replaced by the � symbol. How can I write the original letters to the file correctly?
0
0
1
Please check that your text editor also supports the encoding in order to view the actual characters on screen. You might be writing correctly to the file, but your text editor is unable to display special characters.
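On the writing side, the usual fix is an explicit encoding; a minimal round-trip sketch (the file name is arbitrary):

```python
# Open the file with an explicit UTF-8 encoding so characters like the
# Polish diacritics from the question survive the round trip.
with open("out.txt", "w", encoding="utf-8") as f:
    f.write("ąęćż\n")

with open("out.txt", "r", encoding="utf-8") as f:
    print(f.read())  # prints the original characters, not �
```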
2022-07-13T18:25:00.000
0
python,jupyter-notebook
4
72,971,053
How do I average monthly data to get yearly values in Python?
72,970,982
false
127
I have a dataset that looks like this:

Date     Value
1871-01  4.5
1871-02  10.7
1871-03  8.9
1871-04  1.3

all the way to 2021-12. How do I get the average value for each year in Python? For example, the 1871 average would be the average of all of the values from 1871-01 to 1871-12, and I would like it for all years from 1871 to 2021.
0
0
1
It depends on what format the data is given to you in: is it JSON? CSV? If you already know how to import and read the data with Python, you just need to group the values by year and average them: (x1 + x2 + x3) / (number of averaged values).
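Assuming the data lands in pandas (a guess; the question is only tagged jupyter-notebook), a grouped mean does it. The frame below reuses the question's sample rows:

```python
import pandas as pd

df = pd.DataFrame({"Date": ["1871-01", "1871-02", "1871-03", "1871-04"],
                   "Value": [4.5, 10.7, 8.9, 1.3]})
df["Date"] = pd.to_datetime(df["Date"], format="%Y-%m")
yearly = df.groupby(df["Date"].dt.year)["Value"].mean()
print(yearly)  # 1871    6.35  -- one mean per calendar year
```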
2022-07-13T19:56:00.000
0
python,django,model
1
72,972,788
modify max_length models python django
72,971,938
false
96
I'm working on a Python/Django project. If I have a model class where one of its attributes has max_length=100, how can I change it to max_length=5000, for instance? Thank you!
0
0
1
Basically, when you edit a field on a model, you need to run makemigrations and migrate for the changes to take effect in the database: python manage.py makemigrations <optional_appname> (if not provided, it affects all apps), then python manage.py migrate <optional_appname>.
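A before/after sketch with hypothetical model and field names; after editing the field, run the two commands from the answer above:

```python
# models.py -- Article and body are hypothetical names
from django.db import models

class Article(models.Model):
    body = models.CharField(max_length=5000)  # was: max_length=100
```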
2022-07-13T21:54:00.000
2
python
4
72,973,060
Why print(5 ** 2 ** 0 ** 1) = 5 in Python?
72,973,033
false
190
Can somebody explain to me why print(5 ** 2 ** 0 ** 1) prints 5 in Python? I am trying to learn the language and I am not quite sure how this math is done. Thank you in advance.
0.099668
2
1
** is exponentiation, and it is right-associative, so the expression groups as 5 ** (2 ** (0 ** 1)). 0 raised to the power of 1 is 0, so we can rewrite the statement as print(5 ** 2 ** 0) without changing the result. 2 raised to the power of 0 is 1, so we can rewrite it as print(5 ** 1). And 5 raised to the power of 1 is 5, so the statement is equivalent to print(5).
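A quick check of that grouping in the REPL:

```python
print(5 ** 2 ** 0 ** 1)      # 5 -- evaluated as 5 ** (2 ** (0 ** 1))
print(5 ** (2 ** (0 ** 1)))  # 5 -- identical grouping made explicit
print((5 ** 2) ** 0 ** 1)    # 1 -- forcing the left pair first gives 25 ** 0
```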
2022-07-13T23:00:00.000
1
python,algorithm,data-structures,hash
1
72,973,870
More efficient data structure/algorithm to find similar imagehashes in database
72,973,529
false
105
I am writing a small python program that tries to find images similar enough to some already in a database (to detect duplicates that have been resized/recompressed/etc). I am using the imagehash library and average hashing, and want to know if there is a hash in a known database that has a hamming distance lower than, say, 3 or 4. I am currently just using a dictionary that matches hashes to filenames and use brute force for every new image. However, with tens or hundreds of thousands of images to compare to, performance is starting to suffer. I believe there must be data structures and algorithms that can allow me to search a lot more efficiently but wasn’t able to find much that would match my particular use case. Would anyone be able to suggest where to look? Thanks!
0.197375
3
1
Here's a suggestion. You mention a database, so initially I will assume we can use that (and don't have to read it all into memory first). If your new image has a hash of 3a6c6565498da525, think of it as 4 parts: 3a6c 6565 498d a525. For a hamming distance of 3 or less any matching image must have a hash where at least one of these parts is identical. So you can start with a database query to find all images whose hash contains the substring 3a6c or 6565 or 498d or a525. This should be a tiny subset of the full dataset, so you can then run your comparison on that. To improve further you could pre-compute all the parts and store them separately as additional columns in the database. This will allow more efficient queries. For a bigger hamming distance you would need to split the hash into more parts (either smaller, or you could even use parts that overlap). If you want to do it all in a dictionary, rather than using the database you could use the parts as keys that each point to a list of images. Either a single dictionary for simplicity, or for more accurate matching, a dictionary for each "position". Again, this would be used to get a much smaller set of candidate matches on which to run the full comparison.
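A minimal sketch of the in-memory dictionary variant, assuming 64-bit hashes rendered as 16-character hex strings (imagehash's default string form). With four 16-bit parts, any match within Hamming distance 3 must share at least one part, since 3 differing bits can touch at most 3 of the 4 parts:

```python
from collections import defaultdict

index = defaultdict(list)   # (part position, part value) -> entries

def add(h, filename):
    for i in range(4):
        index[(i, h[4*i:4*i+4])].append((h, filename))

def candidates(h):
    found = set()
    for i in range(4):
        found.update(index.get((i, h[4*i:4*i+4]), []))
    return found   # run the exact Hamming comparison only on this small set
```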
2022-07-14T06:53:00.000
0
python
2
72,977,168
Divide a number to many groups based on several factor using Python
72,976,377
true
60
I have retail sales data for the whole of Germany, for example beer revenue. Now I want to find a way to divide that number among the 596 cities of Germany based on each city's GDP per capita and consumer spending, so that I end up with the beer revenue of each single city in Germany. My assumption is: city beer = city consumer spending * x + city GDP per capita * y, with the sum of city beer = national beer. Could you please advise which kind of algorithm, or which approach in Python, would do this? Thank you so much.
1.2
0
1
Your assumption is not so good: some cities may spend a bigger fraction of their total consumption on beer. I think a better assumption is that each city consumes a variable fraction of the national beer consumption; say city i consumes a fraction xi of the national beer consumption, where xi depends on the city's GDP and consumer spending. To find xi, first scale the GDP values into [delta, 1 - delta], where delta is a positive quantity very close to zero, keeping their relative order: with GDPmax the biggest GDP and GDPmin the smallest, map each GDPi to scaleGDPi = [(GDPi - GDPmin) * (1 - 2 * delta) / (GDPmax - GDPmin)] + delta. In a similar way, scale the consumer spending into [delta, 1 - delta]. Then set xi = scaleGDPi * scaleSpendingi * x, which gives (city beer)i = scaleGDPi * scaleSpendingi * x * national beer. By imposing that the sum of city beer equals national beer, you get x = 1 / (sum of scaleGDPi * scaleSpendingi), so (city beer)i = (scaleGDPi * scaleSpendingi * national beer) / (sum of scaleGDPi * scaleSpendingi). I think this would be a more adequate model of your problem.
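A small numeric sketch of this scheme with made-up figures:

```python
import numpy as np

gdp = np.array([30_000.0, 45_000.0, 60_000.0])       # toy GDP per capita
spending = np.array([18_000.0, 22_000.0, 25_000.0])  # toy consumer spending
national_beer = 1_000_000.0
delta = 1e-3                                          # keeps weights positive

def scale(a):  # map values into [delta, 1 - delta], preserving order
    return (a - a.min()) * (1 - 2 * delta) / (a.max() - a.min()) + delta

w = scale(gdp) * scale(spending)
city_beer = w / w.sum() * national_beer
print(city_beer, city_beer.sum())  # city figures sum to the national total
```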
2022-07-14T07:12:00.000
1
python,nlp,text-classification,multiclass-classification
3
72,980,134
Is it necessary to mitigate class imbalance problem in multiclass text classification?
72,976,599
false
273
I am performing multi-class text classification using BERT in Python. The dataset that I am using for retraining my model is highly imbalanced. Now, I am very clear that class imbalance leads to a poor model and that one should balance the training set by undersampling, oversampling, etc. before model training. However, it is also a fact that the distribution of the training set should be similar to the distribution of the production data. Now, if I am sure that the data thrown at me in the production environment will also be imbalanced, i.e., the samples to be classified are more likely to belong to some classes than to others, should I balance my training set? Or should I keep the training set as it is, since I know that its distribution is similar to the distribution of the data I will encounter in production? Please give me some ideas, or point me to some blogs or papers for understanding this problem.
0.066568
2
2
P(label | sample) is not the same as P(label); P(label | sample) is your training goal. With gradient-based learning on mini-batches and models with a large parameter space, rare labels leave a small footprint on training, so your model ends up fitting P(label). To avoid fitting P(label), you can balance batches: over all batches of an epoch, the data then looks like an up-sampled minority class. The goal is a better loss function whose gradients move the parameters toward a better classification goal. UPDATE: I don't have any proof of this here; it is perhaps not an accurate statement. With enough training data (relative to the complexity of the features) and enough training steps you may not need balancing. But most language tasks are quite complex and there is not enough data for training; that was the situation I imagined in the statements above.
2022-07-14T07:12:00.000
1
python,nlp,text-classification,multiclass-classification
3
72,979,672
Is it necessary to mitigate class imbalance problem in multiclass text classification?
72,976,599
false
273
I am performing multi-class text classification using BERT in Python. The dataset that I am using for retraining my model is highly imbalanced. Now, I am very clear that class imbalance leads to a poor model and that one should balance the training set by undersampling, oversampling, etc. before model training. However, it is also a fact that the distribution of the training set should be similar to the distribution of the production data. Now, if I am sure that the data thrown at me in the production environment will also be imbalanced, i.e., the samples to be classified are more likely to belong to some classes than to others, should I balance my training set? Or should I keep the training set as it is, since I know that its distribution is similar to the distribution of the data I will encounter in production? Please give me some ideas, or point me to some blogs or papers for understanding this problem.
0.066568
2
2
This depends on the goal of your classification. Do you want a high probability that a random sample is classified correctly? Then do not balance your training set. Do you want a high probability that a random sample from a rare class is classified correctly? Then balance your training set, or apply weighting during training that increases the weights of the rare classes. For example, in web applications seen by clients it is important that most samples are classified correctly, disregarding rare classes, whereas in anomaly detection/classification it is very important that rare classes are classified correctly. Keep in mind that a model trained on a highly imbalanced dataset tends toward always predicting the majority class, so increasing the number or weights of rare-class samples can be a good idea, even without perfectly balancing the training set.
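If you go the weighting route with PyTorch (an assumption; the question only says BERT in Python), a minimal sketch with made-up class counts:

```python
import torch
import torch.nn as nn

counts = torch.tensor([900.0, 80.0, 20.0])       # samples per class (toy)
weights = counts.sum() / (len(counts) * counts)  # inverse-frequency weights
criterion = nn.CrossEntropyLoss(weight=weights)  # rare classes weigh more
```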
2022-07-14T11:18:00.000
-3
python,function,rounding
4
72,979,851
Python Round not working as expected for some values
72,979,675
false
95
In Python 3, I'm trying to round the value 4800.5, so I was expecting to get 4801, but it's giving me 4800. I'm not able to work out why this is happening. Any help will be appreciated.
-0.148885
0
1
Python 3's round() uses banker's rounding (round half to even): a value exactly halfway between two integers is rounded to the nearest even integer, so round(4800.5) gives 4800 while round(4801.5) gives 4802. If you want classic round-half-up behaviour, use something like math.floor(x + 0.5) or the decimal module with ROUND_HALF_UP.
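A quick demonstration:

```python
import math

print(round(4800.5))  # 4800 -- the .5 tie goes to the nearest *even* integer
print(round(4801.5))  # 4802 -- again the even neighbour wins
print(math.floor(4800.5 + 0.5))  # 4801 -- classic round-half-up instead
```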
2022-07-14T11:18:00.000
-1
python-3.x,docker
1
72,981,126
how to execute root command inside non-root process running inside docker container
72,979,676
false
86
I have a Docker container running which starts a few daemon processes at startup under a normal (non-root) user ID. A process running as that user has to create some files and directories under /dev inside the container, via a Python function that executes os.system('mkdir -p /dev/some_dir'). However, these calls fail and the directory is not created. I can run those commands from the container's bash prompt, where my ID is uid=0(root) gid=0(root) groups=0(root). Even putting sudo before the command, as in os.system('sudo mkdir -p /dev/some_dir'), does not work. Is there any way I can make this work? I cannot run the process with a root user ID due to security implications, but I still need to create this directory. Thanks for your pointers.
-0.197375
0
1
You should give the /dev directory write permission for your non-root user.
2022-07-14T11:33:00.000
0
python,centos,vps
2
74,793,748
How to make python 3.9 my default python interpreter on centos
72,979,838
false
355
I recently installed Python 3 on my VPS, and I want to make it the default, so that typing python gives me Python 3. I think the problem is that it's installed in /usr/local/bin instead of /usr/bin. Typing python in the terminal opens Python 2; typing python3 returns bash: python3: command not found. Most answers I have seen are a bit confusing, as I am not a CentOS expert.
0
0
1
For a simple fix, you can use an alias. Add it to your .bashrc file (sudo vi ~/.bashrc), then put this at the bottom: alias python="python3.9". After that, typing python will give you Python 3.
2022-07-14T12:19:00.000
0
python,android,kivy,google-colaboratory,extension-methods
1
72,989,784
I am converting a Python/Kivy project to an APK in Google Colab and some file extensions are not working, so which extensions should I use for audio/font files?
72,980,383
false
62
I use a JPG file for an image, but when I run my app on the phone it won't open; it works with a PNG file. Likewise for audio: with MP3 or OGG files my app won't open, and with a WAV file the app works but no sound comes out. And for fonts, neither the TTF nor the OTF extension works, i.e. my app won't open.
0
0
1
I solved my problem partially: the WAV file now works correctly because I added the WAV extension in the buildozer.spec file that Google Colab created after compiling my files. But the problem with JPG is still there, and with MP3 as well.
2022-07-14T13:06:00.000
0
python,math,optimization,physics,curve-fitting
1
72,994,215
How to find center of the circle if the data points are in curved coordinates system(Horizontal - AltAz)
72,981,025
false
94
I have 6 points with their coordinates in the Cartesian plane, XY, placed on one side of a circle. Using the least square method it is relatively easy to fit a circle to those 6 points and to find the radius and the center of the fitted circle again in XY coordinates.. However, I also have Altitude Azimuth coordinates for those 6 points, because those points are on the sky, so I was wondering is it possible to fit a curve to those curved coordinates and then find the center of that circle.
0
1
1
Project your points onto the unit sphere and compute the best-fitting plane. The normal vector of the plane points towards the center of that circle. The radius of your circle will be sqrt(1 - d^2), where d is the distance between the plane and the origin, or acos(d) if you want the angle between the center and a point of the circle (since we're doing spherical geometry). Edit: use orthogonal regression for the plane fit, because otherwise the z-axis could be favored over the others, or vice versa.
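A sketch of that recipe with NumPy's SVD (which performs the orthogonal regression mentioned in the edit); the alt/az values are made up, so the fit is approximate:

```python
import numpy as np

alt = np.radians([30.0, 32.0, 35.0, 33.0, 31.0, 34.0])   # toy observations
az = np.radians([10.0, 40.0, 70.0, 100.0, 130.0, 160.0])
pts = np.column_stack([np.cos(alt) * np.cos(az),          # unit-sphere coords
                       np.cos(alt) * np.sin(az),
                       np.sin(alt)])

centroid = pts.mean(axis=0)
_, _, vt = np.linalg.svd(pts - centroid)   # orthogonal (total) least squares
normal = vt[-1]                            # plane normal = center direction
d = abs(centroid @ normal)                 # plane-to-origin distance
print(normal, np.degrees(np.arccos(d)))    # center direction, angular radius
```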
2022-07-14T14:45:00.000
1
python,windows,cmd
1
72,986,425
How do i copy files to a read-only folder in python
72,982,424
false
65
I understand that I need to use shutil.copyfile() but I keep getting an error when actually trying to copy a file to said read-only folder. I tried turning off read-only manually at the folder but it just turns itself back on. Error: PermissionError: [Errno 13] Permission Denied: [long folder path] I also tried running as administrator but that did nothing. Also note that I am using a windows 11 pc Edit: I have also tried using os.system("attrib -r " + path) which led to the same exact error.
0.197375
1
1
Found an answer! As it turns out, after lots and lots of research: every folder on Windows has that read-only box toggled, but it doesn't actually do anything. Weird, huh? So that wasn't really the issue. The actual issue had something to do with the shutil.copyfile() method: if you use shutil.copy2() instead, it works. Most likely this is because copyfile() requires the destination to be a full file path, while copy2() also accepts a destination directory; pointing copyfile() at a folder raises PermissionError on Windows.
2022-07-14T14:45:00.000
0
python,python-3.x,pytest
1
72,989,877
Pythonw gui does not run - how to debug
72,982,425
false
78
I am new to Python and trying to figure out why a GUI doesn't launch when clicking on a .pyw file. On my Windows 10 machine I have Python 3.5 installed and the environment path is set. I was given a set of Python files (.py) and some that look like shortcut files (.pyw), and was told that double-clicking a file would launch a GUI. Some .pyw files work and the GUI launches; however, some fail: after the double click a cmd window opens briefly and closes automatically, and no GUI appears. I want to know what the cause is and how to debug it. From the properties of the .pyw file, it points to one of the .py files. Let me know if posting the .py file helps; I will then post it here.
0
0
1
Managed to resolve it by selecting the correct Python launcher to open the file.
2022-07-14T15:24:00.000
1
python,runtime
3
72,983,071
What does the phrase "created at runtime" mean?
72,982,976
false
61
My supervisor asked me the other day if a log file in a Python script is created at runtime or not. Is that another way of asking: is a new log file created every time the script is run? Because to me everything happens "at runtime" and it can be no other way. How else could it be?
0.066568
1
1
Yeah, I think you got it right! However, if you were, for example, only appending to an already existing log file, it would not be considered "created at runtime".
2022-07-14T15:30:00.000
0
python,tkinter,dialog,save
1
72,983,110
How to save tkinter dialog data?
72,983,088
false
36
I want the user to enter something in the text box, and if he exits the program and opens it again at any time, the words he entered should still be there. (Python programmers - Save Dialog - tkinter)
0
0
1
If you want to save the state of the program, you could save it in a file or a database, or put your data in one place and use pickle. Then reload the state on program start.
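A minimal file-based sketch (the state filename is arbitrary): the text is written out when the window closes and read back on the next start:

```python
import tkinter as tk
from pathlib import Path

STATE = Path("textbox_state.txt")

root = tk.Tk()
text = tk.Text(root, width=40, height=5)
text.pack()
if STATE.exists():
    text.insert("1.0", STATE.read_text(encoding="utf-8"))  # restore

def on_close():
    # "end-1c" drops the trailing newline tkinter always appends
    STATE.write_text(text.get("1.0", "end-1c"), encoding="utf-8")
    root.destroy()

root.protocol("WM_DELETE_WINDOW", on_close)
root.mainloop()
```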
2022-07-14T16:40:00.000
0
python,django,shell,virtualenv
1
72,984,159
Cannot make virtual environment
72,983,949
false
141
I have virtualenvwrapper-win installed from the command line. When I try to do virtualenv env (env is the name of my virtual environment I'd like to set up) I get this: 'virtualenv' is not recognized as an internal or external command, operable program or batch file I'm not sure what's wrong. Anyone know how to fix?
0
0
1
Try these commands: mkvirtualenv venv_name to create a virtual env, lsvirtualenv to list all virtual envs, and workon venv_name to activate one.
2022-07-14T18:48:00.000
1
python-3.x,django,django-models,django-rest-framework
2
72,985,640
Validate soft foreign key Django
72,985,351
true
53
I have a model in Django with a JSON field holding a list of IDs into a different model. How can we validate that the inputs to that field are valid foreign keys, without using a many-to-many field or a separate joining model?
1.2
0
2
This seems like a strange way of doing things but your best option would be to get a list of your valid IDs with MyModel.objects.all().values_list('id', flat=True) and compare your JSON data with the resulting list
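A hypothetical validator built on that idea; MyModel stands in for whatever model the IDs reference:

```python
from django.core.exceptions import ValidationError

def validate_soft_fk(id_list):
    # One query: which of the submitted IDs actually exist?
    valid = set(MyModel.objects.filter(id__in=id_list)  # MyModel: your target model
                               .values_list("id", flat=True))
    missing = set(id_list) - valid
    if missing:
        raise ValidationError(f"Unknown IDs: {sorted(missing)}")
```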
2022-07-14T18:48:00.000
0
python-3.x,django,django-models,django-rest-framework
2
72,985,885
Validate soft foreign key Django
72,985,351
false
53
I have a model in Django with a JSON field holding a list of IDs into a different model. How can we validate that the inputs to that field are valid foreign keys, without using a many-to-many field or a separate joining model?
0
0
2
Another way to do it is to compare the JSON list's length with this count: MyModel.objects.filter(id__in=jsonlist).count()
2022-07-14T19:25:00.000
0
python,scipy,interpolation,spline
1
74,988,616
Thin plate spline interpolation of 3D stack python
72,985,698
false
123
My problem is as follows. I have a 2D image of some tissue and a 3D stack of the same region of the tissue, plus more tissue that does not appear in my 2D image. Now, the 3D stack is slightly rotated with respect to the 2D image, but it also has some local deformation, so I can't simply apply a rigid rotation transformation. I can scroll through the 3D stack and find individual features that are common to the 2D image. I want to apply a nonlinear transformation such that in the end I can find my source 2D image as a flat plane in the 3D stack. My intuition is that I should use a thin plate spline for this, maybe the scipy RBF interpolator, but my brain stops working when I try to implement it. I would use as input arguments, say, 3 points (x1, y1, 0), (x2, y2, 0) and (x3, y3, 0) with some landmarks on the 2D image, and then (x1', y1', z1'), (x2', y2', z2') and (x3', y3', z3') for the corresponding points in the 3D stack. Then I get a transformation, but how do I actually apply it to an image? The bit that confuses me is that I'm working with a 3D matrix of intensities, not a meshgrid.
0
1
1
scipy RBF is designed to interpolate scattered data, it's just a spline interpolator. To warp a domain, however, you need to find another library or write TPS (thin plate spline) yourself; scipy doesn't do it. I recommend you check VTK, for example. You feed your landmark information of the reference image and the target image to a vtkThinPlateSplineTransform object. Then you can get the transformation matrix and feed it to a vtkImageReslice object, which warps your image accordingly.
2022-07-14T19:45:00.000
0
python,tensorflow
3
72,986,147
Does the TensorFlow save function automatically overwrite old models? If not, how does the save/load system work?
72,985,903
false
64
I've tried finding information regarding this online but the word overwrite does not show up at all in the official Tensorflow documentation and all the Stack Overflow questions are related to changing the number of copies saved by the model. I would just like to know whether or not the save function overwrites at all. If I re-train a model and would like to re-run the save function will the newer model load in when I use the load_model function? Or will it be a model that is trained on the same data twice? Do older iterations get stored somewhere?
0
0
2
You can use model.save('./model.h5'), which saves the model to a file, and model = tf.keras.models.load_model('./model.h5') to load it back. Saving to the same path again simply overwrites the existing file (save()'s overwrite argument defaults to True), so load_model returns the most recently saved model.
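A sketch of that round trip; saving to the same path twice just replaces the file in place:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.Input(shape=(4,)),
                             tf.keras.layers.Dense(1)])
model.compile(optimizer="sgd", loss="mse")
model.save("model.h5")   # first save
# ... retrain here ...
model.save("model.h5")   # overwrites the previous file
restored = tf.keras.models.load_model("model.h5")  # the latest version
```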
2022-07-14T19:45:00.000
0
python,tensorflow
3
74,481,711
Does the TensorFlow save function automatically overwrite old models? If not, how does the save/load system work?
72,985,903
false
64
I've tried finding information regarding this online but the word overwrite does not show up at all in the official Tensorflow documentation and all the Stack Overflow questions are related to changing the number of copies saved by the model. I would just like to know whether or not the save function overwrites at all. If I re-train a model and would like to re-run the save function will the newer model load in when I use the load_model function? Or will it be a model that is trained on the same data twice? Do older iterations get stored somewhere?
0
0
2
I think Eyal's answer is a good starting point. However, if you want to be sure, you can have your program delete the previous model or change its name on the fly. I also observed different results when deleting a model versus not, but this could also be an effect of the different training process, due to random initialization and weight updates.
2022-07-15T03:15:00.000
1
python,numpy,opencv,pixel,connected-components
1
72,993,128
How do you access a pixel's label ID that is given to each pixel after connected component labeling?
72,988,695
false
126
I used OpenCV's connectedComponentsWithStats function to do connected component labeling on an image. I would now like to be able to access all the pixels that have a certain label, or check the label that is assigned to a particular pixel while iterating through the image. How can I do this? I plan on iterating through the image using a nested for loop.
0.197375
0
1
connectedComponents* literally gives you a "labels map". You look up the pixel's position in there and you get the label for that pixel. If you need a mask for one specific label, you calculate mask = (labels_map == specific_label) Do not "iterate" through images. Python loops are slow. Whatever you do, consider how to express that with library functions (numpy, OpenCV, ...). There are ways to speed up python loops but that's advanced and likely not the right solution for your problem.
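A tiny end-to-end sketch of the lookup and the mask:

```python
import cv2
import numpy as np

img = np.zeros((8, 8), np.uint8)   # toy binary image with two blobs
img[1:3, 1:3] = 255
img[5:7, 4:7] = 255

n, labels, stats, centroids = cv2.connectedComponentsWithStats(img)
print(labels[1, 1])                # label ID at pixel (row=1, col=1)
mask = (labels == labels[5, 4])    # boolean mask of that pixel's component
print(mask.sum(), "pixels carry this label")
```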
2022-07-15T05:41:00.000
0
python,django,pycharm
1
72,989,606
Pycharm : Python was not found; run without arguments to install from the Microsoft Store, or disable this shortcut from Settings
72,989,495
true
2,157
I was trying to get Django installed through PyCharm. In the terminal I typed python -m pip install Django. When I pressed Enter, it told me: Python was not found; run without arguments to install from the Microsoft Store, or disable this shortcut from Settings > Manage App Execution Aliases. I already have Python installed and set it as the base interpreter for the project. Not sure why PyCharm wants me to install it from the Microsoft Store? Please help!
1.2
0
1
Either search for the exe and run it like {pathtoexe}/python.exe -m pip install Django, or try python3 instead of python. In addition, you can check your PATH variable, where you should find the actual command.
2022-07-15T05:49:00.000
0
python,php,selenium,heroku,digital-ocean
1
72,993,335
Run Python script On Heroku or DG Ocean when click button on web
72,989,547
false
28
Is it possible to run a Python script, which I will upload to Heroku or a DigitalOcean droplet (whichever is most convenient for what I am trying to do), from a button on an external website? My scenario: I have a scraper and I want to run it when a user clicks a button on my webpage, so the scraper will scrape current data and show it to the user. Is this possible, or is there another way?
0
0
1
For others who run into the same thing in the future: I am going to make a Flask app on a DigitalOcean droplet or Heroku, then embed the Flask app in my web project using an iframe with just a button; this way I can use my scraper on my webpage. Thanks.
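A minimal sketch of such a Flask endpoint; run_scraper is a stand-in for the real scraping logic:

```python
from flask import Flask

app = Flask(__name__)

def run_scraper():
    # placeholder for the actual scraping code
    return "fresh data"

@app.route("/scrape")
def scrape():
    # the button on the external site points the iframe/request here
    return run_scraper()

if __name__ == "__main__":
    app.run()
```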
2022-07-15T05:55:00.000
0
python,python-3.x,pip
1
72,990,816
Install requirements from pip with minimum possible version
72,989,594
false
1,017
I am trying to install a complicated project on Python 3.4.3. The problem is that a lot of dependencies have dependencies that use the dependency>=version syntax. This ends up with pip trying to install the greatest available package version satisfying each constraint. I was wondering if there is a general pip option that lets me treat dependency>=version as dependency==version, basically installing the minimum possible version.
0
0
1
After a bit of research, I came across a tool called pip-compile, included in the pip-tools package. Using pip-compile requirements.txt I was able to generate a pinned requirements.txt file easily.
2022-07-15T08:12:00.000
0
python,django,virtualenv,filepath,directory-structure
1
72,993,441
Python Virtual Environment: changed path of enviroment - can't find packages anymore (How to show the enviroment where its packages are located)
72,990,965
false
68
I have an issue with my virtual environment and I couldn't find a clear and straightforward answer for it. I had a fully working virtual environment with a lot of packages. My directory changed from "../Desktop/.." to "../Cloud_Name/Desktop/..", and let's assume I can't change that anymore. I'm still able to cd into my environment and activate it, but if I then run any command I get: Fatal error in launcher: Unable to create process using "C: ..." "C: ..." the system cannot find the specified file. So far I tried changing the directory in "environment/Scripts/activate" and "environment/Scripts/activate.bat", but it doesn't work. I don't want to install a new environment. I'd be very thankful if someone has a working solution to show my environment where its packages are. Thank you in advance for your time and have a great day!
0
1
1
If you are able to activate your virtual environment, I suggest storing the installed packages (names and versions) in a requirements file by running pip freeze > requirements.txt, then recreating a new environment, after which you can reinstall your previous packages with pip install -r requirements.txt. Virtualenv usually creates symbolic links and hard-coded paths to reference package locations, and after you changed the environment's location these references were not updated (it usually does not update them), which is why the launcher can no longer find the right files.
2022-07-15T10:58:00.000
0
python,integer
5
72,993,000
Get the next multiple of a string in python
72,992,943
false
40
I have the length of a string. If this length is, say, 49, I want to find the next multiple of 16, which should be 64. I've researched and found the round function; however, this rounds down to 48 instead of up to 64. So: the number returned from the length of a string should be rounded to the NEXT multiple of 16, not the nearest. If anyone has any suggestions, please let me know. Many thanks.
0
0
1
The math.ceil() function might fix your issue. Not sure if that's gonna work in your case.
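It does work here: divide by 16, take the ceiling, multiply back. A sketch:

```python
import math

def next_multiple_of_16(n):
    return math.ceil(n / 16) * 16  # rounds *up* to the next multiple

print(next_multiple_of_16(49))  # 64
print(next_multiple_of_16(48))  # 48 -- an exact multiple stays put
```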
2022-07-15T14:30:00.000
0
python,qt,pyqt,progress-bar,qslider
1
72,995,624
PyQt: How to set QSlider moves automatically over time?
72,995,591
false
31
I have a GUI for a network of nodes. There are time-based data logs that I read from to make changes to the nodes. In the GUI I plot the nodes as QGraphicsItems. Based on the logs, the nodes change for example their colors and positions over time. I want to add a QSlider that works like a video progress bar, i.e. it moves automatically when reading data from the logs. How can this be implemented? Thanks in advance.
0
0
1
I think you can read the data in a QThread and update the QSlider for every line you read, or every X log entries. You can use signals and slots to communicate between the QThread and the main UI thread (the application thread).
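A rough PyQt5 sketch of that pattern; the log source is simulated with a sleep loop:

```python
import sys, time
from PyQt5.QtCore import QThread, pyqtSignal
from PyQt5.QtWidgets import QApplication, QSlider

class LogReader(QThread):
    progress = pyqtSignal(int)

    def run(self):
        for i in range(101):        # pretend each step is one log entry
            time.sleep(0.05)
            self.progress.emit(i)   # thread-safe UI update via the signal

app = QApplication(sys.argv)
slider = QSlider()
slider.setRange(0, 100)
reader = LogReader()
reader.progress.connect(slider.setValue)  # slot runs in the UI thread
reader.start()
slider.show()
sys.exit(app.exec_())
```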
2022-07-15T16:43:00.000
0
python,csv
3
72,997,258
Python script to check csv columns for empty cells that will be used with multiple excels
72,997,237
false
282
I've done research and can't find anything that has solved my issue. I need a python script to read csv files using a folder path. This script needs to check for empty cells within a column and then display a popup statement notifying users of the empty cells. Anything helps!!
0
1
1
Use the pandas library (pip install pandas). You can import each CSV file as a DataFrame and then check its cells for missing values.
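One way to assemble the folder scan, the empty-cell check and the popup (tkinter for the popup and the folder name are assumptions):

```python
import tkinter as tk
from tkinter import messagebox
from pathlib import Path
import pandas as pd

root = tk.Tk()
root.withdraw()  # hide the empty main window; only the popup is wanted

for csv_path in Path("data_folder").glob("*.csv"):  # folder name is a placeholder
    df = pd.read_csv(csv_path)
    empties = df.isna().sum()          # empty cells per column
    if empties.any():
        messagebox.showwarning("Empty cells",
                               f"{csv_path.name}:\n{empties[empties > 0]}")
```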
2022-07-15T18:43:00.000
1
python,pandas,dataframe
2
72,998,940
In pandas, how can I filter for rows where ALL values are higher than a certain threshold? And keep the index columns with the output?
72,998,419
false
1,613
In pandas, how can I filter for rows where ALL values are higher than a certain threshold? Say I have a table that looks as follows:

City  Bird species one  Bird species two  Bird species three  Bird species four
A     7                 11                13                  16
B     11                12                13                  14
C     20                21                22                  23
D     8                 6                 4                   5

Now I only want to get rows that have ALL counts greater than 10; here that would be rows B and C. So my desired output is:

City  Bird species one  Bird species two  Bird species three  Bird species four
B     11                12                13                  14
C     20                21                22                  23

So even if a single value is false, I want that row dropped; in the example table, row A has only one value less than 10 but it is dropped. I tried df.iloc[:,1:] >= 10, which creates a boolean table, and df[df.iloc[:,1:] >= 10] gives a table showing which cells satisfy the condition; but since the first column is a string it is all labelled False, so I lose data there, and the cells that are False stay in as well. I tried df[(df.iloc[:,2:] >= 10).any(1)], which behaves the same as the iloc method and does not remove the rows that have at least one False value. How can I get my desired output? Please note I want to keep the first column. Edit: the table above is an example, a scaled-down version of my real table. My real table has 109 columns and is the first of many future tables, so supplying all column names by hand is not a valid solution and makes scripting unfeasible.
0.099668
1
1
df[(df[df.columns[1:]] > x).all(axis=1)], where x should be replaced with the value one wants to test against, turns out to be the easiest answer for me. This makes it possible to filter the DataFrame without having to manually type out the column names. It assumes that all of your columns other than the first one are integers; please note the other answer above about checking dtypes if you have mixed data. I only slightly changed Rodrigo Laguna's answer above.
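A quick demonstration on the question's own table:

```python
import pandas as pd

df = pd.DataFrame({"City": ["A", "B", "C", "D"],
                   "s1": [7, 11, 20, 8], "s2": [11, 12, 21, 6],
                   "s3": [13, 13, 22, 4], "s4": [16, 14, 23, 5]})
x = 10
print(df[(df[df.columns[1:]] > x).all(axis=1)])  # keeps rows B and C only
```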
2022-07-15T20:37:00.000
0
python,spotify,spotipy
1
73,023,708
Spotify API: Restricting recommendations to specific artists?
72,999,367
true
110
I'm working on an app using the Spotipy library and would like to use the Spotify recommendations call to get recommended songs by only a specific artist. I know there are tuneable attributes for more specific characteristics but is there a parameter/argument that would allow you to restrict the results by artists? Scoured the docs and couldn't find anything.
1.2
2
1
This would require access to the underlying representation of tracks/artists (to be able to compute similarity with specific input artists as you'd like, for example) used in Spotify's own recommender models, which is unfortunately not public. As a workaround, something you could try is to leverage the seeds in the Recommendation endpoint, and add to these some tracks from the desired artist in the hope that this will also steer recommendations toward the artist.
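A sketch of that seeding workaround with Spotipy; the IDs are placeholders and credentials are expected in the usual SPOTIPY_* environment variables:

```python
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials())
recs = sp.recommendations(seed_artists=["ARTIST_ID"],            # placeholder IDs
                          seed_tracks=["TRACK_ID_1", "TRACK_ID_2"],
                          limit=20)
for t in recs["tracks"]:
    print(t["artists"][0]["name"], "-", t["name"])
```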
2022-07-15T23:59:00.000
0
python-3.x,cryptography,modulenotfounderror,control-m,authlib
2
73,001,788
Authlib module not found on bmc control-m
73,000,588
false
55
I am running code which uses the authlib module, and it runs fine with all dependencies met; but when I try to automate it through BMC Control-M it gives a module-not-found error. Any help would be really appreciated.
0
0
2
Are you using the same userid for the Control-M job as when running directly? Also, verify the running mode of the Control-M Agent - does it run as root or via a fixed userid?
2022-07-15T23:59:00.000
1
python-3.x,cryptography,modulenotfounderror,control-m,authlib
2
73,029,303
Authlib module not found on bmc control-m
73,000,588
false
55
I am running code which uses the authlib module, and it runs fine with all dependencies met; but when I try to automate it through BMC Control-M it gives a module-not-found error. Any help would be really appreciated.
0.099668
0
2
So the issue is resolved: I installed the authlib and cryptography modules by running the shell as administrator on the prod server.
2022-07-16T03:04:00.000
-1
python,visual-studio-code
3
73,001,335
vscode "Run Without Debugging" doesn't open Python Debug Console
73,001,241
false
211
VSCode's "Run Without Debugging" runs the python file (I can tell from time passing) but it doesn't open any panel/terminal to show the output. I have uninstalled/reinstalled the Microsoft python extension, and run the above experiment outside of any workspace, in a brand new directory with a single "test.py" file, after quitting/reopening VSCode. How do I get VSCode to open the "Python Debug Console" upon "Run Without Debugging"?
-0.066568
0
1
At the top of the menu bar there's a tab named "Terminal"; click that and then hit "New Terminal".
2022-07-16T09:32:00.000
0
python,django
3
73,004,031
How to select a specific part of a string in Django?
73,003,090
false
43
I have strings in the database that are names of countries and cities, for example Italy-Milan or France-Paris. How can I select only the city part, i.e. what comes after the '-', using Python?
0
0
1
If data is the variable holding your city value and the other info, then to get only the city name: data.split("-")[1]
2022-07-16T11:07:00.000
0
python,tkinter
2
73,005,630
How to work on same tkinter window from different python file?
73,003,656
false
71
For example if I have a main.py file which has a root = tk.Tk() line for main window of GUI. And another file menu.py which I want to use to add menu bar to the root window in main.py.
0
0
1
This is actually really easy, so don't worry. First ensure that main.py and menu.py are in the same folder, then at the beginning of main.py add from menu import *. This imports all the code from the other file, so you can work in that file as if you were working in the main one.
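A small two-file sketch; unlike the wildcard import above, it passes the root explicitly, which keeps the dependency direction obvious:

```python
# menu.py -- attaches a menu bar to whatever root it is given
import tkinter as tk

def add_menu(root):
    menubar = tk.Menu(root)
    filemenu = tk.Menu(menubar, tearoff=0)
    filemenu.add_command(label="Quit", command=root.destroy)
    menubar.add_cascade(label="File", menu=filemenu)
    root.config(menu=menubar)

# main.py would then be:
#   import tkinter as tk
#   from menu import add_menu
#
#   root = tk.Tk()
#   add_menu(root)   # the same root object is shared across files
#   root.mainloop()
```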
2022-07-16T13:36:00.000
0
python,numpy,2d,subset
3
75,346,242
Subsetting 2D NumPy Arrays
73,004,655
false
166
Hello, I am on the path of learning Python and I am struggling to understand this problem; can you please help me solve it? "Print out the 50th row of np_baseball." Why is the answer to this exercise [49, :]? From my perspective, if it's asking for the 50th row it should be just [49]; why is there an additional :? I will be extremely glad for your response.
0
0
1
# baseball is available as a regular list of lists
import numpy as np                  # import the numpy package
np_baseball = np.array(baseball)    # create np_baseball (2 cols)
print(np_baseball[49:50])           # print out the 50th row of np_baseball
np_weight_lb = np_baseball[:, 1]    # select the entire second column
print(np_baseball[123, 0])          # print out height of the 124th player
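For the asker's actual confusion: on a 2-D array the two forms are equivalent; the trailing ":" just spells out "all columns". A quick check with a stand-in array:

```python
import numpy as np

np_baseball = np.arange(300).reshape(100, 3)  # stand-in 2-D array
row_a = np_baseball[49]       # 50th row
row_b = np_baseball[49, :]    # same thing, columns made explicit
print(np.array_equal(row_a, row_b))  # True
```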
2022-07-16T17:38:00.000
1
python
1
73,006,412
How to get .isalpha to not count accent letters?
73,006,393
false
66
My code is:

def return_num_characters(s):
    count = 0
    for i in s:
        if i.isalpha():
            count += 1
    return count

It overestimates the number of English alphabetic characters in an input string (which should exclude spaces, punctuation, numbers, and all other characters not a-z). How do I get it to return the right count? The output is 994 when it should be 974 for the test case below. Test case: case_character_count Wie schön bist du, Freundliche Stille, himmlische Ruh! - Sehet, wie die klaren Sterne Wandeln in des Himmels [Aun]1, Und auf uns [herniederschaun]2, Schweigend aus der blauen Ferne. Wie schön bist du, Freundliche Stille, himmlische Ruh! - Schweigend naht des Lenzes Milde Sich der Erde weichem Schooß, Kränzt den Silberquell mit Moos, Und mit Blumen die Gefilde. Wie schön bist du, Freundliche Stille, himmlische Ruh! - Wenn nicht mehr des Wetters Wogen Um den Himmel tobend ziehn, Donner krachen, Blitze sprühn, Blüht des Friedens stiller Bogen. Wie schön bist du, Freundliche Stille, himmlische Ruh! - Wo der Wellen rauh Getümmel Schweigt, des Meeres Brausen ruht, In der unbewegten Fluth Glänzt der klare, blaue Himmel. Wie schön bist du, Freundliche Stille, himmlische Ruh! - Nicht zu Salems hohen Thoren, Zu der Königsstädte Pracht Stieg die heil'ge Wundernacht, Aus des Urlichts Quell gebohren. Wie schön bist du, Freundliche Stille, himmlische Ruh! - Engelchöre sangen Lieder In des Nachthauchs leisem Wehn, Und auf Bethlehms stille Höhn Schwebten Seraphim hernieder. - Wie schön bist du, Freundliche Stille, himmlische Ruh! - In des Kindes zarter Hülle, In der heil'gen Mutter Schooß, Auf der Krippe weichem Moos Lag des ew'gen Lichtes Fülle!
0.197375
0
1
Use i.isascii() and i.isalpha() in place of just i.isalpha().
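Applied to the question's function, a sketch:

```python
def return_num_characters(s):
    # Count only ASCII letters a-z/A-Z; accented letters like ö fail isascii().
    return sum(1 for ch in s if ch.isascii() and ch.isalpha())

print(return_num_characters("schön"))  # 4 -- the ö is excluded
```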
2022-07-16T19:02:00.000
0
python,machine-learning,pytorch,iteration,dataloader
1
73,011,459
Is it more beneficial to read many small files or fewer large files of the exact same data?
73,006,936
true
448
I am working on a project where I am combining 300,000 small files together to form a dataset to be used for training a machine learning model. Because each of these files do not represent a single sample, but rather a variable number of samples, the dataset I require can only be formed by iterating through each of these files and concatenating/appending them to a single, unified array. With this being said, I unfortunately cannot avoid having to iterate through such files in order to form the dataset I require. As such, the process of data loading prior to model training is very slow. Therefore my question is this: would it be better to merge these small files together into relatively larger files, e.g., reducing the 300,000 files to 300 (merged) files? I assume that iterating through less (but larger) files would be faster than iterating through many (but smaller) files. Can someone confirm if this is actually the case? For context, my programs are written in Python and I am using PyTorch as the ML framework. Thanks!
1.2
0
1
Usually, working with one bigger file is faster than working with many small files. Each file requires its own open/read/close calls, and each of those takes time: checking that the file exists, checking that you have privileges to access it, fetching the file's metadata from disk (where it starts, its size, etc.), seeking to its beginning, and creating a system buffer for the data (the system reads ahead into the buffer, so later read() calls can be served partially from memory instead of from disk). With many files all of this happens once per file, and disk is much slower than an in-memory buffer.
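A hedged sketch of the merging idea for the asker's setup, assuming the samples are .npy arrays that can be concatenated along the first axis; paths and chunk size are placeholders:

```python
import numpy as np
from pathlib import Path

files = sorted(Path("small_files").glob("*.npy"))
chunk = 1000   # turns ~300,000 small files into ~300 merged ones
Path("merged").mkdir(exist_ok=True)

for i in range(0, len(files), chunk):
    arrays = [np.load(f) for f in files[i:i + chunk]]
    np.save(f"merged/part_{i // chunk:04d}.npy", np.concatenate(arrays))
```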