Q_CreationDate | Title | Question | Answer | Score | Is_accepted | N_answers | Q_Id
---|---|---|---|---|---|---|---|
2019-05-15 23:51:31.380 | How to use a Pyenv virtualenv from within Eclipse? | I am using Eclipse on Linux to develop C applications, and the build system I have makes use of make and python. I have a custom virtualenv installed and managed by pyenv, and it works fine from the command line if I pre-select the virtualenv with, say pyenv shell myvenv.
However I want Eclipse to make use of this virtualenv when building (via "existing makefile") from within Eclipse. Currently it runs my Makefile but uses the system python in /usr/bin/python, which is missing all of the packages needed by the build system.
It isn't clear to me how to configure Eclipse to use a custom Python interpreter such as the one in my virtualenv. I have heard talk of setting PYTHONPATH; however, this seems to be for finding site-packages rather than the interpreter itself. My virtualenv is based on python 3.7 and my system python is 2.7, so setting this alone probably isn't going to work.
I am not using PyDev (this is a C project, not a Python project) so there's no explicit support for Python in Eclipse. I'd prefer not to install PyDev if I can help it.
I've noticed that pyenv adds its plugins, shims and bin directories to PATH when activated. I could explicitly add these to PATH in Eclipse, so that Eclipse uses pyenv to find an interpreter. However I'd prefer to point directly at a specific virtualenv rather than use the pyenv machinery to find the current virtualenv. | Typing CMD+SHIFT+. will show you dotfiles & directories that begin with dot in any Mac finder dialog box... | -0.135221 | false | 3 | 6,092 |
2019-05-15 23:51:31.380 | How to use a Pyenv virtualenv from within Eclipse? | I am using Eclipse on Linux to develop C applications, and the build system I have makes use of make and python. I have a custom virtualenv installed and managed by pyenv, and it works fine from the command line if I pre-select the virtualenv with, say pyenv shell myvenv.
However I want Eclipse to make use of this virtualenv when building (via "existing makefile") from within Eclipse. Currently it runs my Makefile but uses the system python in /usr/bin/python, which is missing all of the packages needed by the build system.
It isn't clear to me how to configure Eclipse to use a custom Python interpreter such as the one in my virtualenv. I have heard talk of setting PYTHONPATH; however, this seems to be for finding site-packages rather than the interpreter itself. My virtualenv is based on python 3.7 and my system python is 2.7, so setting this alone probably isn't going to work.
I am not using PyDev (this is a C project, not a Python project) so there's no explicit support for Python in Eclipse. I'd prefer not to install PyDev if I can help it.
I've noticed that pyenv adds its plugins, shims and bin directories to PATH when activated. I could explicitly add these to PATH in Eclipse, so that Eclipse uses pyenv to find an interpreter. However I'd prefer to point directly at a specific virtualenv rather than use the pyenv machinery to find the current virtualenv. | I had the same trouble, and after some digging there are two solutions: project-wide and workspace-wide. I prefer the project-wide one, as it will be saved in the git repository and the next person doesn't have to pull their hair out.
For the project-wide solution, add /Users/${USER}/.pyenv/shims: to the start of "Project properties > C/C++ Build > Environment > PATH".
I couldn't figure out the workspace-wide method fully (mostly because I'm happy with the other one), but it should be possible by modifying "Eclipse preferences > C/C++ > Build > Environment". You should change the radio button and add a PATH variable. | 0 | false | 3 | 6,092 |
2019-05-15 23:51:31.380 | How to use a Pyenv virtualenv from within Eclipse? | I am using Eclipse on Linux to develop C applications, and the build system I have makes use of make and python. I have a custom virtualenv installed and managed by pyenv, and it works fine from the command line if I pre-select the virtualenv with, say pyenv shell myvenv.
However I want Eclipse to make use of this virtualenv when building (via "existing makefile") from within Eclipse. Currently it runs my Makefile but uses the system python in /usr/bin/python, which is missing all of the packages needed by the build system.
It isn't clear to me how to configure Eclipse to use a custom Python interpreter such as the one in my virtualenv. I have heard talk of setting PYTHONPATH; however, this seems to be for finding site-packages rather than the interpreter itself. My virtualenv is based on python 3.7 and my system python is 2.7, so setting this alone probably isn't going to work.
I am not using PyDev (this is a C project, not a Python project) so there's no explicit support for Python in Eclipse. I'd prefer not to install PyDev if I can help it.
I've noticed that pyenv adds its plugins, shims and bin directories to PATH when activated. I could explicitly add these to PATH in Eclipse, so that Eclipse uses pyenv to find an interpreter. However I'd prefer to point directly at a specific virtualenv rather than use the pyenv machinery to find the current virtualenv. | For me, the following steps worked (macOS 10.12, Eclipse Photon, with the PyDev plugin):
Project -> properties
Pydev-Interpreter/Grammar
Click here to configure an interpreter not listed (under the interpreter combobox)
open interpreter preference page
Browse for python/pypy exe -> my virtualenv directory/bin/python
Then the chosen python interpreter path should show up (for me it was still not pointing to my virtual env, but I typed my path explicitly here and it worked)
In the bottom libraries section, you should be able to see the site-packages from your virtual env
Extra tip - on my macOS the virtual env path started with .pyenv; since it's a hidden directory, I was not able to select it and I did not know how to view hidden directories in the Eclipse file explorer. Therefore I created a symlink (without any . in the name) to the hidden directory (.pyenv) and then I was able to select the symlink. | -0.135221 | false | 3 | 6,092 |
2019-05-16 10:29:25.553 | the clustering of mixed data using python | I am trying to cluster a data set containing mixed data (nominal and ordinal) using k-prototype clustering based on Huang, Z.: Clustering large data sets with mixed numeric and categorical values.
My question is: how do I find the optimal number of clusters? | There is not one optimal number of clusters, but dozens. Every heuristic will suggest a different "optimal" number for another poorly defined notion of what is "optimal", which likely has no relevance for the problem that you are trying to solve in the first place.
Rather than being overly concerned with "optimality", explore and experiment more. Study what you are actually trying to achieve, and how to get this into a mathematical form, to be able to compute what is solving your problem and what is solving someone else's... | 0 | false | 1 | 6,093 |
2019-05-17 07:14:43.650 | How to predict different data via neural network, which is trained on the data with 36x60 size? | I was training a neural network with images of an eye that are shaped 36x60. So I can only predict the result using a 36x60 image? But in my application I have a video stream; this stream is divided into frames, and for each frame 68 landmark points are predicted. In the eye region I can select the eye points, and using the 'boundingRect' function from OpenCV it is very easy to get a cropped image. But this image does not have the shape 36x60. What is the correct way to get 36x60 data that can be used for prediction? Or how can I use a neural network for data of another shape? | Neural networks (insofar as I've encountered) have a fixed input shape, freedom permitted only to batch size. This (probably) goes for every amazing neural network you've ever seen. Don't be too afraid of reshaping your image with off-the-shelf sampling to the network's expected input size. Robust computer-vision networks are generally trained on augmented data; randomly scaled, skewed, and otherwise transformed in order to---among other things---broaden the network's ability to handle this unavoidable scaling situation.
There are caveats, of course. An input for prediction should be as similar to the dataset it was trained on as possible, which is to say that a model should be applied to the data for which it was designed. For example, consider an object detection network made for satellite applications. If that same network is then applied to drone imagery, the relative size of objects may be substantially larger than the objects for which the network (specifically its anchor-box sizes) was designed.
Tl;dr: Assuming you're using the right network for the job, don't be afraid to scale your images/frames to fit the network's inputs. | 1.2 | true | 1 | 6,094 |
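A minimal sketch of the resizing step suggested in the answer above, assuming OpenCV and NumPy are available; the variable names (frame, x, y, w, h) and the bounding box values are illustrative, not from the original post:

```python
import cv2
import numpy as np

# Hypothetical eye crop obtained earlier with cv2.boundingRect on the eye landmarks.
frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a video frame
x, y, w, h = 100, 200, 90, 45                      # stand-in bounding box of the eye

eye_crop = frame[y:y + h, x:x + w]

# cv2.resize expects (width, height), so a 36x60 (rows x cols) target becomes (60, 36) here.
eye_resized = cv2.resize(eye_crop, (60, 36), interpolation=cv2.INTER_AREA)

# Add the batch (and, if the model expects it, channel) dimension before predicting.
batch = eye_resized[np.newaxis, ...]
# prediction = model.predict(batch)   # model is the trained network from the question
```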
2019-05-17 13:19:41.943 | How to configure alerts for employee contract expiration in odoo 11? | I'm using Odoo 11 and I want to know how I can configure Odoo so that the HR manager and the employee receive an alert before the expiration of a contract.
Is it possible to do it? Any ideas, please? | This type of scenario can only be achieved by developing a custom addon.
In the custom addon you have to define a cron job which will automatically fire an action on a regular basis, and which will send an email notification to the HR manager that some employees' contracts are about to expire. | 0 | false | 1 | 6,095 |
2019-05-17 14:56:30.123 | User word2vec model output in larger kmeans project | I am attempting a rather large unsupervised learning project and am not sure how to properly utilize word2vec. We're trying to cluster groups of customers based on some stats about them and what actions they take on our website. Someone recommended I use word2vec and treat each action a user takes as a word in a "sentence". The reason this step is necessary is because a single customer can create multiple rows in the database (roughly same stats, but new row for each action on the website in chronological order). In order to perform kmeans on this data we need to get that down to one row per customer ID. Hence the previous idea to collapse down the actions as words in a sentence "describing the user's actions"
My question is I've come across countless tutorials and resources online that show you how to use word2vec (combined with kmeans) to cluster words on their own, but none of them show how to use the word2vec output as part of a larger kmeans model. I need to be able to use the word2vec model along side other values about the customer. How should I go about this? I'm using python for the clustering if you want to be specific with coding examples, but I could also just be missing something super obvious and high level. It seems the word2vec outputs vectors, but kmeans needs straight numbers to work, no? Any guidance is appreciated. | There are two common approaches.
Taking the average of all words. That is easy, but the resulting vectors tend to be, well, average. They are not similar to the keywords of the document, but rather similar to the most average and least informative words... My experiences with this approach are pretty disappointing, despite this being the most mentioned approach.
paragraph2vec/doc2vec. You add a "word" for each user to all its contexts, in addition to the neighbor words, during training. This way you get a "predictive" vector for each paragraph/document/user, the same way you get a word vector in plain word2vec. These are supposedly more informative but require much more effort to train - you can't download a pretrained model, because they are computed during training. | 0 | false | 1 | 6,096 |
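A rough sketch of the first (averaging) approach from the answer above, combined with the per-customer numeric stats before k-means; the embedding, action names and customer records here are all stand-ins, not real data:

```python
import numpy as np
from sklearn.cluster import KMeans

dim = 8  # embedding size; real word2vec models are usually 100-300 dimensional

# Stand-in for a trained word2vec model: action name -> vector.
word_vectors = {a: np.random.rand(dim) for a in ["view", "add_to_cart", "checkout"]}

def average_action_vector(actions):
    """Average the embeddings of a customer's actions (their 'sentence')."""
    vecs = [word_vectors[a] for a in actions if a in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

# Stand-in customer data: (numeric stats, chronological list of actions) per customer ID.
customers = [
    (np.array([3.0, 120.0]), ["view", "view", "add_to_cart"]),
    (np.array([1.0, 40.0]), ["view"]),
    (np.array([7.0, 300.0]), ["view", "add_to_cart", "checkout"]),
]

# One row per customer: numeric stats concatenated with the averaged action embedding.
X = np.vstack([np.hstack([stats, average_action_vector(actions)])
               for stats, actions in customers])

labels = KMeans(n_clusters=2, random_state=0, n_init=10).fit_predict(X)
print(labels)
```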
2019-05-17 18:10:01.270 | Conversion from pixel to general Metric(mm, in) | I am using openCV to process an image and use houghcircles to detect the circles in the image under test, and also calculating the distance between their centers using euclidean distance.
Since this would be in pixels, I need the absolute distances in mm or inches, can anyone let me know how this can be done
Thanks in advance. | The image formation process implies taking a 2D projection of the real, 3D world, through a lens. In this process, a lot of information is lost (e.g. the third dimension), and the transformation is dependent on lens properties (e.g. focal distance).
The transformation between the distance in pixels and the physical distance depends on the depth (distance between the camera and the object) and the lens. The complex, but more general way, is to estimate the depth (there are specialized algorithms which can do this under certain conditions, but require multiple cameras/perspectives) or use a depth camera which can measure the depth. Once the depth is known, after taking into account the effects of the lens projection, an estimation can be made.
You do not give much information about your setup, but the transformation can be measured experimentally. You simply take a picture of an object of known dimensions and you determine the physical dimension of one pixel (e.g. if the object is 10x10 cm and in the picture it has 100x100px, then 10px is 1mm). This is strongly dependent on the distance to the camera from the object.
An approach a bit more automated is to use a certain pattern (e.g. checkerboard) of known dimensions. It can be automatically detected in the image and the same transformation can be performed. | 0 | false | 1 | 6,097 |
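To make the calibration step concrete, a tiny worked example under the assumption used in the answer above (a 10x10 cm object spanning 100x100 px at a fixed camera-to-object distance):

```python
# Calibration: a 100 mm wide object spans 100 px in the image at this working distance.
object_width_mm = 100.0
object_width_px = 100.0
mm_per_px = object_width_mm / object_width_px   # 1.0 mm per pixel

# Any pixel distance measured in the same plane can now be converted.
center_distance_px = 42.5
center_distance_mm = center_distance_px * mm_per_px
print(center_distance_mm)  # 42.5 mm (valid only for the calibrated plane/distance)
```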
2019-05-19 00:17:45.020 | How to run Airflow dag with more than 100 thousand tasks? | I have an airflow DAG that has over 100,000 tasks.
I am able to run only up to 1000 tasks. Beyond that the scheduler hangs, the webserver cannot render tasks and is extremely slow on the UI.
I have tried increasing, min_file_process_interval and processor_poll_interval config params.
I have set num_duration to 3600 so that scheduler restarts every hour.
Any limits I'm hitting on the webserver or scheduler? In general, how to deal with a large number of tasks in Airflow? Any config settings, etc would be very helpful.
Also, should I be using SubDagOperator at this scale or not? please advice.
Thanks, | I was able to run more than 165,000 airflow tasks!
But there's a catch. Not all the tasks were scheduled and rendered in a single Airflow Dag.
The problems I faced when I tried to schedule more and more tasks are that of scheduler and webserver.
The memory and cpu consumption on scheduler and webserver dramatically increased as more and more tasks were being scheduled (it is obvious and makes sense). It went to a point where the node couldn't handle it anymore (scheduler was using over 80GB memory for 16,000+ tasks)
I split the single dag into 2 dags. One is a leader/master. The second one being the worker dag.
I have an airflow variable that says how many tasks to process at once (for example, num_tasks=10,000). Since I have over 165,000 tasks, the worker dag will process 10k tasks at a time in 17 batches.
The leader dag, all it does is trigger the same worker dag over and over with different sets of 10k tasks and monitor the worker dag run status. The first trigger operator triggers the worker dag for the first set of 10k tasks and keeps waiting until the worker dag completes. Once it's complete, it triggers the same worker dag with the next batch of 10k tasks and so on.
This way, the worker dag keeps being reused and never has to schedule more than num_tasks (X) tasks at once.
The bottom line is, figure out the max_number of tasks your Airflow setup can handle. And then launch the dags in leader/worker fashion for max_tasks over and over again until all the tasks are done.
Hope this was helpful. | 1.2 | true | 1 | 6,098 |
2019-05-19 02:12:09.703 | Let high priority python thread to enter the critical section while low priority thread is execution in the critical section | I have set of threads which can execute a synchronized method in python. Currently when a thread comes to critical section it enters to the critical section if no thread is executing the critical section. Otherwise wait and enter the critical section after lock is released. (it works as synchronization supposed to work). But I have a high priority thread which should enter the critical section whether a low priority thread is in the critical section or not. Is this possible? If so how can I implement this? | As another answer described very well, this is not possible, there is no way to do it.
What you can and often should do is prevent another lower priority thread from entering this critical section first, before high priority thread.
I.e. if a critical section is being held by some thread, this thread needs to exit it first. But by that time there might be multiple threads waiting for this critical section, some low and some high priority. You may want to ensure higher priority thread gets the critical section first in such situation. | 0.386912 | false | 1 | 6,099 |
2019-05-19 21:14:23.120 | Replace os.system with os.popen for security purposes | Is it possible to use os.popen() to achieve a result similar to os.system? I know that os.popen() is more secure, but I want to know how to be able to actually run the commands through this function. When using os.system(), things can get very insecure and I want to be able to have a secure way of accessing terminal commands. | Anything that uses the shell to execute commands is insecure for obvious reasons (you don't want someone running rm -rf / in your shell :). Both os.system and os.popen use the shell.
For security, use the subprocess module with shell = False
Either way, both of those functions have been deprecated since Python 2.6 | 1.2 | true | 1 | 6,100 |
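A short sketch of the recommendation above, using subprocess without the shell; the command and arguments are only illustrative:

```python
import subprocess

# shell=False (the default) passes the argument list directly to the OS,
# so user-supplied values are never interpreted by a shell.
result = subprocess.run(
    ["ls", "-l", "/tmp"],          # illustrative command; no shell string is built
    capture_output=True,
    text=True,
    check=False,
)
print(result.returncode)
print(result.stdout)
```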
2019-05-20 11:26:16.497 | "SSL: CERTIFICATE_VERIFY_FAILED" error in my telegram bot | My Telegram bot code was working fine for weeks and I didn't change anything, but today I suddenly got a [SSL: CERTIFICATE_VERIFY_FAILED] error and my bot code is no longer working on my PC.
I use Ubuntu 18.04 and I'm using the telepot library.
What is wrong and how do I fix it?
Edit: I'm using the getMe method, I don't know where the certificate is or how to renew it, and I didn't import requests in my bot code. I'm using the telepot API by importing telepot in my code. | Probably your certificate expired; that is why it worked fine earlier. Just renew it and all should be good. If you're using requests under the hood you can just pass verify=False to the post or get method, but that is unwise.
The renewal procedure depends on where you get your certificate from. If you're using Let's Encrypt, for example with certbot, issuing the sudo certbot renew command from a shell will suffice. | 0.386912 | false | 1 | 6,101 |
2019-05-20 16:35:33.720 | Use C# DLL in Python | I have a driver which is written in C#, .NET 4.7.0, and built as a DLL. I don't have the sources for this driver. I want to use this driver in a python application.
I wrapped some functionality from the driver into a method of another C# project. Then I built it into a DLL. I used RGiesecke.DllExport to make one method available in python. When I call this method from python using ctypes, I get WinError -532462766 Windows Error 0xe0434352.
If I exclude the driver code and keep only the wrapper code in the exported method, everything runs fine.
Could you please give me some advice on how to make this work, or help me find a better solution? Moving from python to IronPython is no option here.
Thank you. | PROBLEM CAUSE:
Python didn't run the wrapper from the directory where it was stored together with the driver. That caused the problem with loading the driver. | 1.2 | true | 1 | 6,102 |
2019-05-20 17:25:47.853 | How to make multiple y axes zoomable individually | I have a bokeh plot with multiple y axes. I want to be able to zoom in one y axis while having the other one's displayed range stay the same. Is this possible in bokeh, and if it is, how can I accomplish that? | Bokeh does not support this, twin axes are always linked to maintain their original relative scale. | 1.2 | true | 1 | 6,103 |
2019-05-21 07:05:10.423 | Unsupported major.minor version when running a java program from shell script which is executed by a python program | I have a shell script that runs some java program on a remote server.
But this shell script is to be executed by a python script which is on my local machine.
Here's the flow : Python script executes the shell script (with paramiko), the shell script then executes a java class.
I am getting an error : 'The java class could not be loaded. java.lang.UnsupportedClassVersionError: (Unsupported major.minor version 50.0)' whenever I run python code.
Limitations: I cannot make any changes to the shell script.
I believe this is a java version issue. But I don't know how to explicitly have a python program run in a specific java environment.
Please suggest how I can get rid of this error.
The java version of unix machine (where shell script executes) : 1.6.0
Java version of my local machine (where python script executes): 1.7.0 | The shell script can stay the same, update java on the remote system to java 1.7 or later. Then it should work.
Another possibility could be to compile the java application for java 1.6 instead. The java compiler (javac) has the arguments -source and -target and adding -source 1.6 -target 1.6 when compiling the application should solve this issue, too (but limits the application to use java 1.6 features).
Also be aware: If you use a build system like gradle or maven, then you have a different way to set source and target version. | 0.386912 | false | 1 | 6,104 |
2019-05-21 07:54:56.323 | Accidentally deleted /usr/bin/python instead of /usr/local/bin/python on OS X/macOS, how to restore? | I had so many Python installations that it was getting frustrating, so I decided to do a full reinstall. I removed the /Library/Frameworks/Python.Frameworks/ folder, and meant to remove the /usr/local/bin/python folder too, but I accidentally removed the /usr/bin/python instead. I don't see any difference, everything seems to be working fine for now, but I've read multiple articles online saying that I should never touch /usr/bin/python as OS X uses it and things will break.
I tried Time Machine but there are no viable recovery options. How can I manually "restore" what was deleted? Do I even need to, since everything seems to be working fine for now? I haven't restarted the Mac yet, in fear that things might break.
I believe the exact command I ran was rm -rf /usr/bin/python*, and I don't have anything python related in my /usr/bin/ folder.
I'm running on macOS Mojave 10.14.5 | Items can't be recovered when you perform rm -rf. However, you can try the following:
cp /usr/local/bin/python* /usr/bin
This would copy user local python to usr bin and most probably will bail you out.
Don't worry, nothing will happen to your OS. It should work fine :) | 0.201295 | false | 1 | 6,105 |
2019-05-21 09:29:34.240 | How to build a resnet with Keras that trains and predicts the subclass from the main class? | I would like to implement a hierarchical resnet architecture. However, I could not find any solution for this. For example, my data structure is like:
class A
Subclass 1
Subclass 2
....
class B
subclass 6
........
So I would like to train and predict the main class and then the subclass of the chosen/predicted main class. Can someone provide a simple example of how to do this with generators? | The easiest way to do so would be to train multiple classifiers and build a hierarchical system yourself.
One classifier detects class A, B, etc. After that, make a new prediction for the subclasses.
If you want only one single classifier:
What about just killing the first hierarchy of parent classes? That should also be quite easy. If you really want a model where the hierarchy is learned, take a look at Hierarchical Multi-Label Classification Networks. | 0 | false | 1 | 6,106 |
2019-05-22 21:18:51.313 | AzureDataFactory Incremental Load using Python | How do I create azure datafactory for incremental load using python?
Where should I mention file load option(Incremental Load:LastModifiedOn) while creating activity or pipeline??
We can do that using the UI by selecting the File Load Option. But how do we do the same programmatically using python?
Does python api for datafactory support this or not? | My investigations suggest that the Python SDK has not yet implemented this feature. I used the SDK to connect to my existing instance and fetched two example datasets. I did not find anything that looked like the 'last modified date'. I tried dataset.serialize() , dataset.__dict__ , dataset.properties.__dict__ . I also tried .__slots__ .
Trying serialize() is significant because there ought to be parity between the JSON generated in the GUI and the JSON generated by the Python. The lack of parity suggests the SDK version lags behind the GUI version.
UPDATE: The SDK's are being updated. | 0 | false | 1 | 6,107 |
2019-05-22 22:30:47.140 | Can I connect my IBM Cloudant Database as the callback URL for my Twilio IBM STT add-on service? | I have a Watson voice assistant instance connected using SIP trunk to a Twilio API. I want to enable to the IBM Speech-To-Text add-on from the Twilio Marketplace which will allow me to obtain full transcriptions of phone calls made to the Watson Assistant bot. I want to store these transcriptions in a Cloudant Database I have created in IBM Cloud. Can I use the endpoint of my Cloudant Database as the callback URL for my Twilio add-on so that when the add-on is activated, the transcription will be added as a document in my Cloudant Database?
It seems that I should be able to somehow call a transcription service through IBM Cloud's STT service in IBM Cloud, but since my assistant is connected through Twilio, this add-on seems like an easier option. I am new to IBM Cloud and chat-bot development so any information is greatly appreciated. | Twilio developer evangelist here.
First up, I don't believe that you can enable add-ons for voice services that are served through Twilio SIP trunking.
Unless I am mistaken and you are making a call through a SIP trunk to a Twilio number that is responding with TwiML. In this case, then you can add the STT add-on. I'm not sure it would be the best idea to set the webhook URL to your Cloudant DB URL as the webhook is not going to deliver the data in the format that Cloudant expects.
Instead I would build out an application that can provide an endpoint to receive the webhook, transform the data into something Cloudant will understand and then send it on to the DB.
Does that help at all? | 0 | false | 1 | 6,108 |
2019-05-23 04:26:35.540 | How corrupt checksum over TCP/IP | I am connecting to my slave via TCP/IP; everything looks fine. Using the Wireshark software I can validate that the CRC checksum is always valid ("good"), but I am wondering how I can corrupt the CRC checksum so I can see the checksum reported as "Invalid". Any suggestion on how I can get this done, maybe Python code or any other way if possible?
Thank you all
Tariq | I think you use a library that computes the CRC. You can form the Modbus packet without it if you want to simulate a bad-CRC condition. | 0.201295 | false | 1 | 6,109 |
2019-05-23 20:27:53.837 | How to identify Plaid transactions if transaction ID's change | I noticed that the same transaction had a different transaction ID the second time I pulled it. Why is this the case? Is it because pending transactions have different transaction IDs than those same transactions once posted? Does anyone have recommendations for how I can identify unique transactions if the trx IDs are in fact changing? | Turns out that the transaction ID often does change. When a transaction is posted (stops pending), the original transaction ID becomes the pending transaction ID, and a new transaction ID is assigned. | 1.2 | true | 1 | 6,110 |
2019-05-24 20:15:57.300 | How to implement neural network pruning? | I trained a model in keras and I'm thinking of pruning my fully connected network. I'm a little bit lost on how to prune the layers.
The authors of 'Learning both Weights and Connections for Efficient Neural Networks' say that they add a mask to threshold the weights of a layer. I can try to do the same and fine-tune the trained model. But how does it reduce the model size and the number of computations? | If you add a mask, then only a subset of your weights will contribute to the computation, hence your model will be pruned. For instance, autoregressive models use a mask to mask out the weights that refer to future data so that the output at time step t only depends on time steps 0, 1, ..., t-1.
In your case, since you have a simple fully connected layer, it is better to use dropout. It randomly turns off some neurons at each iteration step so it reduces the computation complexity. However, the main reason dropout was invented is to tackle overfitting: by having some neurons turned off randomly, you reduce neurons' co-dependencies, i.e. you avoid having some neurons rely on others. Moreover, at each iteration, your model will be different (different number of active neurons and different connections between them), hence your final model can be interpreted as an ensemble (collection) of several different models, each specialized (we hope) in the understanding of a specific subset of the input space. | 0.673066 | false | 1 | 6,111 |
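As a concrete illustration of the dropout suggestion in the answer above (not the masking approach from the paper), a minimal Keras sketch; the layer sizes and dropout rates are arbitrary:

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Dense(256, activation="relu", input_shape=(100,)),
    layers.Dropout(0.5),          # randomly zeroes 50% of activations, at training time only
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```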
2019-05-25 15:12:30.457 | Controlling Kodi from Browser | I am currently building a media website using node js. I would like to be able to control Kodi, which is installed on the server computer, remotely from the website browser. How would I go about doing this?
My first idea was to simply see if I could somehow pipe the entire Kodi GUI into the browser, such that the full program stays on the server and just the GUI is piped to the browser, sending commands back to the server; however, I could find little documentation on how to do that.
Second, I thought of making a script (e.g. Python) that would be able to control Kodi and just interface node js with the Python script, but again, I could find little documentation on that.
Any help would be much appreciated.
Thank You! | Can't you just go to settings -> services -> control and then the 'remote control via http' settings? I use this to log in to my local ip, e.g. 192.168.1.150:8080 (you can set the port on this page), from my browser and I can do anything from there. | 0 | false | 1 | 6,112 |
2019-05-25 20:48:52.037 | How to i split a String by first and last character in python | I have a list of strings
my_list = ['1Jordan1', '2Michael2', '3Jesse3'].
If I want to delete the first and last character of each string, how would I do it in python? | You would use slicing. I would use [1:-1]. | 0 | false | 1 | 6,113 |
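A minimal sketch of the slicing answer applied to the list from the question:

```python
my_list = ['1Jordan1', '2Michael2', '3Jesse3']

# s[1:-1] drops the first and last character of each string.
trimmed = [s[1:-1] for s in my_list]
print(trimmed)  # ['Jordan', 'Michael', 'Jesse']
```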
2019-05-26 02:09:50.387 | Comparing results of neural net on two subsets of features | I am running a LSTM model on a multivariate time series data set with 24 features. I have ran feature extraction using a few different methods (variance testing, random forest extraction, and Extra Tree Classifier). Different methods have resulted in a slightly different subset of features. I now want to test my LSTM model on all subsets to see which gives the best results.
My problem is that the test/train RMSE scores for my 3 models are all very similar, and every time I run my model I get slightly different answers. This question is coming from a person who is naive and still learning the intricacies of neural nets, so please help me understand: in a case like this, how do you go about determining which model is best? Can you do seeding for neural nets? Or some type of averaging over a certain amount of trials? | Since you have mentioned that using the different feature extraction methods, you are only getting slightly different feature sets, so the results are also similar. Also since your LSTM model is then also getting almost similar RMSE values, the models are able to generalize well and learn similarly and extract important information from all the datasets.
The best model depends on your future data, the computation time and load of different methods and how well they will last in production. Setting a seed is not really a good idea in neural nets. The basic idea is that your model should be able to reach the optimal weights no matter how they start. If your models are always getting similar results, in most cases, it is a good thing. | 1.2 | true | 1 | 6,114 |
2019-05-27 23:00:17.260 | Setting legend entries manually | I am using openpyxl to create charts. For some reason, I do not want to insert row names when adding data. So, I want to edit the legend entries manually. I am wondering if anyone know how to do this.
More specifically:
class openpyxl.chart.legend.Legend(legendPos='r', legendEntry=(), layout=None, overlay=None, spPr=None, txPr=None, extLst=None). I want to edit the legendEntry field. | You cannot do that. You need to set the rows when creating the plots. That will create the titles for your charts. | 1.2 | true | 1 | 6,115 |
2019-05-28 00:51:27.173 | Time series prediction: need help using series with different periods of days | There's this event that my organization runs, and we have the ticket sales historic data from 2016, 2017, 2018. This data contains the quantity of tickets sold by day, considering all the sales period.
For the 2019 edition of this event, I was asked to make a prediction of the quantity of tickets sold by day, considering all the sales period, sort of to guide us through this period, meaning we would know whether we are above or below the expected sales average.
The problem is that the historic data has a different size of sales period in days:
In 2016, the total sales period was 46 days.
In 2017, 77 days.
In 2018, 113 days.
In 2019 we are planning 85 days. So how do I adjust that historical data, in a logical/statistical way, so I could use it as input to a statistical predictive model (such as an ARIMA model)?
Also, I'm planning to do this on Python, so if you have any suggestions about that, I would love to hear them too!
Thank you! | Based on what I understand after reading your question, I would approach this problem in the following way.
For each day, find how far out the event is from that day. The max value for this number is 46 in 2016, 77 in 2017, etc. Scale this value by the max day.
Use the above variable, along with day of the month, day of the week, etc. as extraneous variables.
Additionally, use lag information from ticket sales. You can try a one-day lag, a one-week lag, etc.
You would be able to generate all this data from the sale start until the end.
Use the generated variables as predictors for each day, use ticket sales as the target variable, and build a machine learning model instead of forecasting.
Use the machine learning model along with the generated variables to predict future sales. | 0 | false | 1 | 6,116 |
2019-05-28 05:42:52.493 | Why can't I push from PyCharm | I'm a new GitHub user, and this question may be a trivial newbie problem. So I apologize in advance.
I'm using PyCharm for a Python project. I've set up a Git repository for the project and uploaded the files manually through the Git website. I also linked the repository to my PyCharm project.
When I modify a file, PyCharm allows me to "commit" it, but when I try to "push" it, I get a PyCharm pop-up error message saying "Push rejected." No further information is provided. How do I figure out what went wrong -- and how to fix it?
Thanks. | If you manually uploaded files to the Github by dropping them, it now likely has a different history than your local files.
One way you could get around this is to store all of your changes in a different folder, do a git pull in pycharm, abandoning your changes so you are up to date with origin/master, then commit the files and push as you have been doing. | 0.386912 | false | 1 | 6,117 |
2019-06-01 11:32:58.337 | How to choose a split variables for continous features for decision tree | I am currently implementing a decision tree algorithm. If I have continuous-valued features, how do I decide on a splitting point? I came across a few resources which say to choose midpoints between every two points, but considering I have 8000 rows of data this would be very time consuming. The output/feature label is categorical. Is there any approach where I can perform this operation more quickly? | A decision tree works by calculating entropy and information gain to determine the most important feature. Indeed, 8000 rows is not too much for a decision tree. Generally, random forest is similar to a decision tree; it works as an ensemble. You can review and try it. Moreover, maybe the slowness is related to something else. | 0 | false | 1 | 6,118 |
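To illustrate the midpoint idea mentioned in the question (candidate split points for one continuous feature), a small NumPy sketch; on 8000 rows this is fast because only the sorted unique values are considered. The feature values are illustrative:

```python
import numpy as np

feature = np.array([2.3, 5.1, 2.3, 7.8, 5.1, 9.0])  # one continuous column (illustrative)

values = np.unique(feature)                  # sorted unique values
candidates = (values[:-1] + values[1:]) / 2  # midpoints between consecutive unique values
print(candidates)                            # [3.7, 6.45, 8.4]

# Each candidate threshold would then be scored (e.g. by information gain)
# and the best one kept as the split point for this feature.
```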
2019-06-02 09:05:49.710 | What is a scalable way of creating cron jobs on Amazon Web Services? | This is my first question so I appologize if it's not the best quality.
I have a use case: User creates a monitoring task which sends an http request to a website every X hours. User can have thousands of these tasks and can add/modify and delete them. When a user creates a task, django signals create a Celery periodic task which then is running periodically.
I'm searching for a more scalable solution using AWS. I've read about using Lambda + Cloudwatch Events.
My question is: how do I approach this to let my users create tens of thousands of these tasks in the cheapest / most scalable way?
Thank you for reading my question!
Peter | There is no straight forward solution to your problem .You have to proceed step by step with some plumbing along the way .
Event management
1- Create a lambda function that creates a cloudwatch schedule.
2 - Create a lambda function that deletes a cloudwatch schedule.
3 - Persist any event created using dynamodb
4 - Create 2 API gateway that will invoke the 2 lambda above.
5 - Create another lambda function (used by cloudwatch) that will invoke the API gateway below.
6 - Create API gateway that will invoke the website via http request.
When the user creates an event from the app, there will be a chain of calls as follows:
4 -> 1,3 -> 5-> 6
Now there are two other parameters to take into consideration :
Lambda concurrency: you can't run more than 1000 lambdas simultaneously in the same region.
Cloudwatch: you cannot create more than 100 rules per region. A rule is where you define the schedule. | 0 | false | 1 | 6,119 |
2019-06-03 00:39:25.213 | Run python script from another computer without installing packages/setting up environment? | I have a Jupyter notebook script that will be used to teach others how to use python.
Instead of asking each participant to install the required packages, I would like to provide a folder with the environment ready from the start.
How can I do this?
What is the easiest way to teach python without running into technical problems with packages/environments etc.? | You would need to use a program such as py2exe, pyinstaller, or cx_freeze to package the file, the modules, and a lightweight interpreter. The result will be an executable which does not require the user to have any modules or even python installed to access it; however, because of the built-in interpreter, it can get quite large (which is why Python is not commonly used to make executables). | 0.201295 | false | 2 | 6,120 |
2019-06-03 00:39:25.213 | Run python script from another computer without installing packages/setting up environment? | I have a Jupyter notebook script that will be used to teach others how to use python.
Instead of asking each participant to install the required packages, I would like to provide a folder with the environment ready from the start.
How can I do this?
What is the easiest way to teach python without running into technical problems with packages/environments etc.? | The easiest way I have found to package python files is to use pyinstaller which packages your python file into an executable file.
If it's a single file I usually run pyinstaller main.py --onefile
Another option is to have a requirements file
This reduces installing all packages to one command: pip install -r requirements.txt | 0.201295 | false | 2 | 6,120 |
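A minimal example of the requirements-file workflow mentioned in the answer above (shell commands, run from the project folder):

```
# On your machine, record the packages the notebook needs:
pip freeze > requirements.txt

# Each participant then installs everything in one step:
pip install -r requirements.txt
```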
2019-06-04 10:52:09.683 | how to use python-gitlab to upload file with newline? | I'm trying to use python-gitlab projects.files.create to upload a string content to gitlab.
The string contains '\n', which I'd like to be a real newline char in the gitlab file, but it just writes '\n' as a literal string to the file, so after uploading, the file contains only one line.
I'm not sure how and at what point should I fix this, I'd like the file content to be as if I print the string using print() in python.
Thanks for your help.
EDIT---
Sorry, I'm using python 3.7 and the string is actually a csv content, so it's basically like:
',col1,col2\n1,data1,data2\n'
So when I upload it the gitlab file I want it to be:
,col1,col2
1,data1,data2 | I figured it out by saving the string to a file and reading it again; this way the \n in the string is translated to the actual newline char.
I'm not sure if there's another way of doing this, but this is just for someone who encounters a similar situation. | 0 | false | 1 | 6,121 |
2019-06-04 17:28:52.047 | How do you install Django 2x on pip when python 2x is your default version of python but you use python 3x on Bash | I need to install Django 2.2.2 on my MacBook Pro (latest generation), and I am a user of python 3x. However, my default version of python is python 2x and I cannot pip install Django version 2x when I am using python 2x. Could anyone explain how to change the default version of python on a MacBook? I have looked at many other questions on this site and none have worked. All help is appreciated, thank you :) | You can simply use pip3 instead of pip to install Python 3 packages. | 1.2 | true | 1 | 6,122 |
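For example (assuming Python 3 is installed alongside the system Python 2):

```
pip3 install Django==2.2.2
# or, equivalently, pin the interpreter explicitly:
python3 -m pip install Django==2.2.2
```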
2019-06-05 17:11:10.447 | Example of using hylang with python multiprocessing | I am looking for an example of using python multiprocessing (i.e. a process-pool/threadpool, job queue etc.) with hylang. | Note that a straightforward translation runs into a problem on macOS (which is not officially supported, but mostly works anyway): Hy sets sys.executable to the Hy interpreter, and multiprocessing relies on that value to start up new processes. You can work around that particular problem by calling (multiprocessing.set_executable hy.sys_executable), but then it will fail to parse the file containing the Hy code itself, which it does again for some reason in the child process. So there doesn't seem to be a good solution for using multiprocessing with Hy running natively on a Mac.
Which is why we have Docker, I suppose. | 0 | false | 1 | 6,123 |
2019-06-05 23:27:18.603 | How to use python with qr scanner devices? | I want to create a program that can read and store the data from a QR scanning device, but I don't know how to get the input from the barcode scanner as an image or save it in a variable to read it afterwards with OpenCV. | Typically a barcode scanner automatically outputs to the screen, just like a keyboard (except really quickly), and there is an end of line character at the end (like an enter).
Using a python script all you need to do is start the script, connect a scanner, scan something, and get the input (STDIN) of the script. If you built a script that was just always receiving input and storing or processing it, you could do whatever you please with the data.
A QR code is read in the same way that a barcode scanner works, immediately outputting the encoded data as text. Just collect this using the STDIN of a python script and you're good to go! | 0 | false | 1 | 6,124 |
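A minimal sketch of the approach in the answer above: the scanner "types" the decoded text followed by Enter, so a plain input() loop collects each scan. Storage here is just an in-memory list, purely for illustration:

```python
scans = []

print("Scan a code (Ctrl-D / Ctrl-Z to stop)...")
try:
    while True:
        code = input()          # one scanned code per line, exactly as the scanner "types" it
        if code:
            scans.append(code)  # store or process the decoded data here
            print(f"stored: {code}")
except EOFError:
    print(f"{len(scans)} codes collected")
```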
2019-06-06 06:24:25.907 | What are the available estimators which we can use as estimator in onevsrest classifier? | I want to know briefly about all the available estimators, like logistic regression, multinomial regression or SVMs, which can be used for classification problems.
These are the three I know. Are there any others like these? And, relatively, how long do they take to run and how accurate can they get compared to these? | The following can be used for classification problems:
Logistic Regression
SVM
RandomForest Classifier
Neural Networks | 0.201295 | false | 1 | 6,125 |
2019-06-06 07:55:29.040 | How to use data obtained from a form in Django form? | I'm trying to create a form in Django using Django form.
I need two types of forms.
A form that collects data from the user, does some calculations and shows the results to the user without saving the data to the database. I want to show the result to the user once he/she presses a (calculate) button next to it, not on a different page.
A form that collects data from the user, looks for it in a column in a Google sheet, and, if it's unique, adds it to the column; otherwise it warns the user that the data is not unique.
Thanks | You could use AJAX and javascript to achieve this, but I suggest doing this only via javascript. This means you will have to rewrite the math in JS and output it directly in the element.
Please let me know if you need any help :)
Jasper | 0 | false | 2 | 6,126 |
2019-06-06 07:55:29.040 | How to use data obtained from a form in Django form? | I'm trying to create a form in Django using Django form.
I need two types of forms.
A form that collects data from the user, does some calculations and shows the results to the user without saving the data to the database. I want to show the result to the user once he/she presses a (calculate) button next to it, not on a different page.
A form that collects data from the user, looks for it in a column in a Google sheet, and, if it's unique, adds it to the column; otherwise it warns the user that the data is not unique.
Thanks | Start by writing it in a way that the user submits the form (like any normal django form), you process it in your view, do the calculation, and return the same page with the calculated values (render the template). That way you know everything is working as expected, using just Django/python.
Then once that works, refactor to make your form submit the data using AJAX and your view to just return the calculation results in JSON. Your AJAX success handler can then insert the results in the current page.
The reason I suggest you do this in 2 steps is that you're a beginner with javascript, so if you directly try to build this with AJAX, and you're not getting the results you expect, it's difficult to understand where things go wrong. | 0 | false | 2 | 6,126 |
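A rough sketch of the second step described in the answer above (an AJAX endpoint that validates the form and returns the calculation as JSON); the form class, field names and the calculation itself are hypothetical, not from the original post:

```python
from django import forms
from django.http import JsonResponse

class CalcForm(forms.Form):          # hypothetical form with two numeric inputs
    a = forms.FloatField()
    b = forms.FloatField()

def calculate(request):
    form = CalcForm(request.POST)
    if not form.is_valid():
        return JsonResponse({"errors": form.errors}, status=400)
    result = form.cleaned_data["a"] * form.cleaned_data["b"]   # placeholder calculation
    return JsonResponse({"result": result})
```

The AJAX success handler on the page would then read the "result" field from the JSON and write it into the element next to the calculate button.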
2019-06-06 09:28:12.653 | Cassandra write throttling with multiple clients | I have two clients (separate docker containers) both writing to a Cassandra cluster.
The first is writing real-time data, which is ingested at a rate that the cluster can handle, albeit with little spare capacity. This is regarded as high-priority data and we don't want to drop any. The ingestion rate varies quite a lot from minute to minute. Sometimes data backs up in the queue from which the client reads and at other times the client has cleared the queue and is (briefly) waiting for more data.
The second is a bulk data dump from an online store. We want to write it to Cassandra as fast as possible at a rate that soaks up whatever spare capacity there is after the real-time data is written, but without causing the cluster to start issuing timeouts.
Using the DataStax Python driver and keeping the two clients separate (i.e. they shouldn't have to know about or interact with each other), how can I throttle writes from the second client such that it maximises write throughput subject to the constraint of not impacting the write throughput of the first client? | The solution I came up with was to make both data producers write to the same queue.
To meet the requirement that the low-priority bulk data doesn't interfere with the high-priority live data, I made the producer of the low-priority data check the queue length and then add a record to the queue only if the queue length is below a suitable threshold (in my case 5 messages).
The result is that no live data message can have more than 5 bulk data messages in front of it in the queue. If messages start backing up on the queue then the bulk data producer stops queuing more data until the queue length falls below the threshold.
I also split the bulk data into many small messages so that they are relatively quick to process by the consumer.
There are three disadvantages of this approach:
There is no visibility of how many queued messages are low priority and how many are high priority. However we know that there can't be more than 5 low priority messages.
The producer of low-priority messages has to poll the queue to get the current length, which generates a small extra load on the queue server.
The threshold isn't applied strictly because there is a race between the two producers from checking the queue length to queuing a message. It's not serious because the low-priority producer queues only a single message when it loses the race and next time it will know the queue is too long and wait. | 1.2 | true | 1 | 6,127 |
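A rough sketch of the low-priority producer's check described in the answer above, with a hypothetical queue client; queue.queue_length() and queue.publish() stand in for whatever your queue library actually provides:

```python
import time

MAX_QUEUE_LENGTH = 5   # at most 5 low-priority messages ahead of any live message

def send_bulk_messages(queue, messages):
    """Enqueue low-priority messages only while the queue is short."""
    for msg in messages:
        # Back off while the queue is at or above the threshold.
        while queue.queue_length() >= MAX_QUEUE_LENGTH:
            time.sleep(0.5)
        queue.publish(msg)   # note: small race with other producers, as described above
```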
2019-06-07 13:05:17.733 | Pardot Visit query API - generic query not available | I am trying to extract/sync data through Pardot API v4 into a local DB. Most APIs were fine, just used the query method with a created_after search criterion. But the Visit API does not seem to support either a generic query of all visit data or a created_after search criterion to retrieve new items.
As far as I can see I can only query Visits in the context of a Visitor or a Prospect.
Any ideas why, and how could I implement synchronisation? (sorry, no access to Pardot DB...)
I have been using pypardot4 python wrapper for convenience but would be happy to use the API natively if it makes any difference. | I managed to get a response from Pardot support, and they have confirmed that such response filtering is not available on the Visits API. I asked for a feature request, but hardly any chance to get enough up-votes to be considered :( | 1.2 | true | 1 | 6,128 |
2019-06-08 19:04:33.077 | How can I stop networkx to change the source and the target node? | I make a Graph (not Digraph) from a data frame (Huge network) with networkx.
I used this code to creat my graph:
nx.from_pandas_edgelist(R,source='A',target='B',create_using=nx.Graph())
However, in the output when I check the edge list, my source node and target node have been changed based on the sort, and I don't know how to keep them the way they were in the dataframe (I need the source and target nodes to stay as they were in the dataframe). | If you mean the order has changed, check out nx.OrderedGraph. | 0 | false | 1 | 6,129 |
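If insertion order is what matters, the answer's suggestion would look like the sketch below (nx.OrderedGraph exists in networkx 2.x; in recent networkx versions the plain Graph already preserves insertion order, so this is version-dependent). The dataframe here is illustrative:

```python
import pandas as pd
import networkx as nx

R = pd.DataFrame({"A": ["n3", "n1", "n2"], "B": ["n1", "n2", "n3"]})  # illustrative edges

# nx.OrderedGraph (networkx 2.x) keeps nodes/edges in insertion order when iterating.
G = nx.from_pandas_edgelist(R, source="A", target="B", create_using=nx.OrderedGraph())
print(list(G.edges()))
```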
2019-06-08 19:37:59.810 | How to fail a Control M job when running a python function | I have a Control-M job that calls a python script. The python script contains a function that returns True or False.
Is it possible to make the job fail when the function returns False?
Do I have to use a shell script for this? If yes, how should I create it?
Thank you | Return a non-zero value -- i.e. call sys.exit(1) when the function returns False, and sys.exit(0) otherwise. | 1.2 | true | 1 | 6,130 |
2019-06-11 02:31:46.400 | How to load NTU rgbd dataset? | We are working on early action prediction, but we are unable to understand the dataset itself. The NTU RGB+D dataset is 1.3 TB; my laptop hard disk is 931 GB.
First problem: how to deal with such a big dataset?
Second problem: how to understand the dataset?
Third problem: how to load the dataset?
Thanks for the help | The overall size of the dataset is 1.3 TB and this size will decrease after processing the data and converting it into numpy arrays or something else.
But I do not think you will work on the entire dataset; which part of the dataset do you want to work on? | 0 | false | 1 | 6,131 |
2019-06-11 08:49:12.827 | How do I install Pytorch offline? | I need to install Pytorch on a computer with no internet connection.
I've tried finding information about this online but can't find a single piece of documentation.
Do you know how I can do this? Is it even possible? | An easy way with pip:
Create an empty folder
pip download torch using the connected computer. You'll get the pytorch package and all its dependencies.
Copy the folder to the offline computer. You must be using the same python setup on both computers (this goes for virtual environments as well)
pip install * on the offline computer, in the copied folder. This installs all the packages in the correct order. You can then use pytorch.
Note that this works for (almost) any kind of python package. | 0.999995 | false | 1 | 6,132 |
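The steps above as commands, one common variant of step 4 being an offline install from the copied folder (folder name is illustrative; run the download on the online machine and the install on the offline one):

```
# On the machine with internet access:
pip download torch -d ./torch_pkgs

# Copy ./torch_pkgs to the offline machine, then:
pip install --no-index --find-links=./torch_pkgs torch
```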
2019-06-11 16:14:57.993 | Graphing multiple csv lists into one graph in python | I have 5 csv files that I am trying to put into one graph in python. In the first column of each csv file, all of the numbers are the same, and I want to treat these as the x values for each csv file in the graph. However, there are two more columns in each csv file (to make 3 columns total), but I just want to graph the second column as the 'y-values' for each csv file on the same graph, and ideally get 5 different lines, one for each file. Does anyone have any ideas on how I could do this?
I have already uploaded my files to the variable file_list | Read the first file and create a list of lists in which each list is filled with two columns of this file. Then read the other files one by one and append their y column to the corresponding index of this list. | 0 | false | 1 | 6,133 |
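A small pandas/matplotlib sketch of the idea, assuming file_list holds the five CSV paths and the files have no header row (adjust header handling and column indices if yours differ):

```python
import pandas as pd
import matplotlib.pyplot as plt

file_list = ["a.csv", "b.csv", "c.csv", "d.csv", "e.csv"]  # illustrative paths

for path in file_list:
    df = pd.read_csv(path, header=None)   # column 0: shared x values, column 1: y values
    plt.plot(df[0], df[1], label=path)     # one line per file on the same axes

plt.xlabel("x (column 1 of each file)")
plt.ylabel("y (column 2 of each file)")
plt.legend()
plt.show()
```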
2019-06-11 16:32:40.920 | Password authentication failed when trying to run django application on server | I have downloaded postgresql as well as django and python but when I try running the command "python manage.py runserver" it gives me an error saying "Fatal: password authentication failed for user" . I am trying to run it locally but am unable to figure out how to get past this issue.
I was able to connect to the server in pgAdmin but am still getting the password authentication error message | You need to change the password used to connect to your local database; this can be done by modifying the "DATABASES" object in your settings.py file. | 0 | false | 1 | 6,134 |
2019-06-12 14:54:28.927 | How to get a voxel array from a list of 3D points that make up a line in a voxalized volume? | I have a list of points that represent a needle/catheter in a 3D volume. This volume is voxalized. I want to get all the voxels that the line that connects the point intersects. The line needs to go through all the points.
Ideally, since the round needle/catheter has a width I would like to be able to get the voxels that intersect the actual three dimensional object that is the needle/catheter. (I imagine this is much harder so if I could get an answer to the first problem I would be very happy!)
I am using the latest version of Anaconda (Python 3.7). I have seen some similar problems, but the code is always in C++ and none of it seems to be what I'm looking for. I am fairly certain that I need to use raycasting or a 3D Bresenham algorithm, but I don't know how.
I would appreciate your help! | I ended up solving this problem myself. For anyone who is wondering how, I'll explain it briefly.
First, since all the catheters point in the general direction of the z-axis, I got the thickness of the slices along that axis. Both input points land on a slice. I then got the coordinates of every intersection between the line between the two input points and the z-slices. Next, since I know the radius of the catheter and I can calculate the angle between the two points, I was able to draw ellipse paths on each slice around the points I had previously found (when you cut a cone at and angle, the cross-section is an ellipse). Then I got the coordinates of all the voxels on every slice along the z-axis and checked which voxels where within my ellipse paths. Those voxels are the ones that describe the volume of the catheter. If you would like to see my code please let me know. | 0 | false | 1 | 6,135 |
2019-06-12 21:34:00.557 | Hash a set of three indexes in a key for dictionary? | I have a matrix of data with three indexes: i, j and k
I want to enter some of the data in this matrix into a dictionary, and be able to find them afterwards in the dictionary.
The data itself can not be the key for the dict.
I would like the i,j,k set of indexes to be the key.
I think I need to "hash" (with some sort of hash) the indexes into one number from which I can get back the i,j,k. I need the resulting keys to be ordered so that:
key1 for 1,2,3 is greater than
key2 for 2,1,3 is greater than
key3 for 2,3,1
Do you know any algorithms to get the keys from this set of indexes? Or is there a better structure in python to do what I want to do?
I can't know before I store the data how much I will get, so I think I cannot just append the data with its indexes. | Only immutable elements can be used as dictionary keys
This means you can't use a list (a mutable data type), but you can use a tuple as the key of your dictionary: dict_name[(i, j, k)] = data | 1.2 | true | 1 | 6,136 |
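A short example of the accepted tuple-key approach, plus ordered retrieval of the keys (the stored values are made up):

    data = {}
    data[(1, 2, 3)] = "value A"
    data[(2, 1, 3)] = "value B"
    data[(2, 3, 1)] = "value C"

    print(data[(2, 1, 3)])            # direct lookup by the (i, j, k) indexes
    for key in sorted(data):          # tuples sort lexicographically: (1,2,3) < (2,1,3) < (2,3,1)
        print(key, data[key])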
2019-06-12 23:20:49.637 | Chatterbot sqlite store in repl.it | I'm wondering how sqlite3 works when working in something like repl.it? I've been working on learning chatterbot on my own computer through a Jupyter notebook. I'm a pretty amateur coder, and I have never worked with databases or SQL. When working from my own computer, I pretty much get the concept that when setting up a new bot with chatterbot, it creates a sqlite3 file, and then saves conversations to it to improve the chatbot. However, if I create a chatbot the same way only through repl.it and give lots of people the link, is the sqlite3 file saved online somewhere? Is it big enough to save lots of conversations from many people to really improve the bot well? | I am not familiar with repl.it, but for all the questions you have asked, the answer is yes. For example, I have made a simple web page that uses the chatterbot library. Then I used my own computer as a server using ngrok and gathered training data from users. | 0 | false | 1 | 6,137 |
2019-06-14 03:35:14.117 | Command prompt does not recognize changes in PATH. How do I fix this? | I am attempting to download modules in Python through pip. No matter how many times I edit the PATH to show the pip.exe, it shows the same error:
'pip' is not recognized as an internal or external command,
operable program or batch file.
I have changed the PATH many different times and ways to make pip usable, but these changes go unnoticed by the command prompt terminal.
How should I fix this? | Are you using PyCharm? If yes, change the environment to your desired directory and desired interpreter, if you have multiple interpreters available | 0 | false | 1 | 6,138 |
2019-06-14 21:20:00.560 | Using python on android tablet | Learning python workflow on android tablet
I have been using Qpython3 but find it unsatisfactory
Can anybody tell me how best to learn the Python workflow using an Android tablet... that is, what IDE works best with Android, and any links to pertinent information. Thank you. | Try Pydroid 3 instead of QPython. It has almost all scientific Python libraries like NumPy, scikit-learn, matplotlib, pandas, etc. All you have to do is download the scripting library. You can save your file with the extension '.py' and then upload it to Drive and then to Colab
Hope this will help....... | 0.201295 | false | 1 | 6,139 |
2019-06-16 01:10:00.687 | Wing IDE The debug server encountered an error in probing locals | I am running Wing IDE 5 with Python 2.4. Everything was fine until I tried to debug and set a breakpoint. Arriving at the breakpoint I get an error message:
"The debug server encountered an error in probing locals or globals..."
And the Stack Data display looks like:
locals
globals
I am not, to my knowledge, using a server client relationship or anything special, I am simply debugging a single threaded program running directly under the IDE. Anybody seen this or know how to fix it?
Wing IDE 5.0.9-1 | That's a pretty old version of Wing and likely a bug that's been fixed since then, so trying a newer version of Wing may solve it.
However, if you are stuck with Python 2.4 then that's the latest that supports it (except that unofficially Wing 6 may work with Python 2.4 on Windows).
A work-around would be to inspect data from the Debug Probe and/or Watch tools (both available in the Tools menu).
Also, Clear Stored Value Errors in the Debug menu may allow Wing to load the data in a later run if the problem doesn't reoccur. | 1.2 | true | 1 | 6,140 |
2019-06-18 12:13:26.103 | How to change a OneToOneField into ForeignKey in django model with data in both table? | I am having a model Employee with a OneToOneField relationship with Django USER model. Now for some reason, I want to change it to the ManyToOne(ForeignKey) relationship with the User model.
Both these tables have data filled. Without losing the data how can I change it?
Can I simply change the relationship and migrate? | makemigrations in this case would only correspond to an ALTER-field SQL statement. You can see the result of makemigrations; the same SQL will be executed when you migrate the model, so the data would not be affected | 0 | false | 1 | 6,141 |
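A hedged sketch of that change; the model and field names are illustrative, not taken from the original project:

    # models.py  (before: OneToOneField; after: ForeignKey)
    from django.conf import settings
    from django.db import models

    class Employee(models.Model):
        user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)

    # then generate and apply the ALTER-field migration:
    #   python manage.py makemigrations
    #   python manage.py migrate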
2019-06-18 14:09:10.777 | Does escaping work differently in Python shell? (Compared to code in file) | In a Python 3.7 shell I get some unexpected results when escaping strings, see examples below. Got the same results in the Python 2.7 shell.
A quick read in the Python docs seems to say that escaping can be done in strings, but doesn't seem to say it can't be used in the shell. (Or I have missed it).
Can someone explain why escaping doesn't seem to work as expected.
Example one:
input:
>>> "I am 6'2\" tall"
output:
'I am 6\'2" tall'
while >>> print("I am 6'2\" tall")
returns (what I expected):
I am 6'2" tall
(I also wonder how the backslash, in the unexpected result, ends up behind the 6?)
Another example:
input:
>>> "\tI'm tabbed in."
output:
"\tI'm tabbed in."
When inside print() the tab is replaced with a proper tab. (Can't show it, because Stack Overflow seems to remove the tab/spaces in front of the line I use inside a code block). | The interactive shell will give you a representation of the return value of your last command. It gives you that value using the repr() method, which tries to give a valid source code representation of the value; i.e. something you could copy and paste into code as is.
print, on the other hand, prints the contents of the string to the console, without regard to whether it would be valid source code or not. | 0.386912 | false | 1 | 6,142 |
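A small illustration of that difference, runnable as a script:

    s = "I am 6'2\" tall"
    print(repr(s))   # what the interactive shell echoes: 'I am 6\'2" tall'
    print(s)         # what print shows:                  I am 6'2" tall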
2019-06-18 15:01:36.913 | Writing to a file in C while reading from it in python | I am working with an Altera DE1-SoC board where I am reading data from a sensor using a C program. The data is being read continually, in a while loop and written to a text file. I want to read this data using a python program and display the data.
The problem is that I am not sure how to avoid collision during the read/write from the file as these need to happen simultaneously. I was thinking of creating a mutex, but I am not sure how to implement it so the two different program languages can work with it.
Is there a simple way to do this? Thanks. | You could load a C library into Python using cdll.LoadLibrary and call a function to get the status of the C mutex. Then in Python, if the C mutex is locked, don't read; if it is unlocked, it is safe to read. | 0.135221 | false | 2 | 6,143 |
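A hedged ctypes sketch of that suggestion; the library name and the exported function mutex_is_locked are hypothetical and would have to exist in your C code:

    import ctypes

    lib = ctypes.CDLL("./libsensor.so")          # hypothetical shared library
    lib.mutex_is_locked.restype = ctypes.c_int   # hypothetical exported function

    if not lib.mutex_is_locked():
        with open("data.txt") as f:              # read only while the C side is not writing
            print(f.read())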
2019-06-18 15:01:36.913 | Writing to a file in C while reading from it in python | I am working with an Altera DE1-SoC board where I am reading data from a sensor using a C program. The data is being read continually, in a while loop and written to a text file. I want to read this data using a python program and display the data.
The problem is that I am not sure how to avoid collision during the read/write from the file as these need to happen simultaneously. I was thinking of creating a mutex, but I am not sure how to implement it so the two different program languages can work with it.
Is there a simple way to do this? Thanks. | The operating system will take care of this as long as you can open that file twice (once for reading and once for writing). Just remember to flush from the C code to make sure your data is actually written to disk instead of being kept in an in-memory cache. | 0.135221 | false | 2 | 6,143 |
2019-06-20 06:34:56.463 | Building a Windows application from a bunch of python files | I have written a bunch of Python files and I want to make a Windows application from them.
The structure looks like this:
Say a.py, b.py and c.py are there. a.py is the file that I want the application to open; it is basically a GUI which has import commands for "b.py" and "c.py".
I know this might be a very basic problem, but I have just started packaging and deployment with Python. Please tell me how to do that, or whether there is any way to do it with py2exe or PyInstaller.
I have tried to do it with py2exe and PyInstaller from the info available on the internet, but that seems to create an app which runs only "a.py". It is not able to then use "b" and "c" as well. | I am not sure how you do this with py2exe. I have used py2app before, which is very similar, but it is for Mac applications. For Mac there is a way to view the contents of the application. In there you can add the files you want into the resources folder (where you would put your 'b.py' and 'c.py').
I hope there is something like this in Windows and hope it helps. | 0 | false | 1 | 6,144 |
2019-06-20 13:04:42.747 | How to display number of epochs in tensorflow object detection api with Faster Rcnn? | I am using Tensorflow Object detection api. What I understood reading the faster_rcnn_inception_v2_pets.config file is that num_steps mean the total number of steps and not the epochs. But then what is the point of specifying batch_size?? Lets say I have 500 images in my training data and I set batch size = 5 and num_steps = 20k. Does that mean number of epochs are equal to 200 ??
When I run model_main.py it shows only the global_steps loss. So if these global steps are not the epochs then how should I change the code to display train loss and val loss after each step and also after each epoch. | So you are right with your assumption, that you have 200 epochs.
I had a similar problem with the loss not being shown.
My solution was to go to the model_main.py file and then insert
tf.logging.set_verbosity(tf.logging.INFO)
after the import statements.
Then it shows you the loss after every 100 steps.
You could change the set_verbosity arguments if you want to have it after every epoch ;) | 0.386912 | false | 1 | 6,145 |
2019-06-20 13:36:08.230 | how can i search for facebook users ,using facebook API(V3.3) in python 3 | I want to be able to search for any user using facebook API v3.3 in python 3.
I have written a function that can only return my details, and that's fine, but now I want to search for any user and I am not succeeding so far; it seems as if in v3.3 I can only search for places and not users.
The following function searches for and returns a place; how can I modify it so that I am able to search for any Facebook user?
def search_friend():
graph = facebook.GraphAPI(token)
find_user = graph.search(q='Durban north beach',type='place')
print(json.dumps(find_user, indent=4)) | You can not search for users any more, that part of the search functionality has been removed a while ago.
Plus you would not be able to get any user info in the first place, unless the user in question logged in to your app first, and granted it permission to access at least their basic profile info. | 0.386912 | false | 1 | 6,146 |
2019-06-21 07:53:06.587 | Record Audio from Peppers Tablet Microphone | I would like to use the microphone of peppers tablet to implement speech recognition.
I already do speech recognition with the microphones in the head.
But the audio I get from the head microphones is noisy due to the fans in the head and peppers joints movement.
Does anybody know how to capture the audio from peppers tablet?
I am using Pepper 2.5. and would like to solve this with python.
Thanks! | With NAOqi 2.5 on Pepper it is not possible to access the tablet's microphone.
You can either upgrade to 2.9.x and use the Android API for this, or stay in 2.5 and use Python to get the sound from Pepper's microphones. | 0 | false | 1 | 6,147 |
2019-06-21 10:48:44.013 | How to create .mdb file? | I am new to zarr, HDF5 and LMDB. I have converted data from HDF5 to Zarr but I got many files with the extension .n (n from 0 to 31). I want to have just one file with the .zarr extension. I tried to use LMDB (the zarr.LMDBStore function) but I don't understand how to create the .mdb file. Do you have an idea how to do that?
Thank you ! | @kish When trying your solution I got this error:
from comtypes.gen import Access
ImportError: cannot import name 'Access' | 0 | false | 1 | 6,148 |
2019-06-21 15:20:04.737 | How to remove regularisation from pre-trained model? | I've got a partially trained model in Keras, and before training it any further I'd like to change the parameters for the dropout, l2 regularizer, gaussian noise etc. I have the model saved as a .h5 file, but when I load it, I don't know how to remove these regularizing layers or change their parameters. Any clue as to how I can do this? | Create a model with your required hyper-parameters and load the weights into the model using load_weights(). | 0 | false | 1 | 6,149 |
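A hedged sketch of that idea, assuming you can rebuild the architecture in code; build_model and the file name are placeholders, not from the original question:

    from tensorflow import keras

    # Rebuild the same architecture but with the regularisation settings you want
    # (e.g. lower dropout rate, no l2 kernel_regularizer, no GaussianNoise layer).
    new_model = build_model(dropout_rate=0.0, l2=0.0)     # hypothetical builder function

    # Copy the trained weights from the saved file; by_name helps when layers match by name.
    new_model.load_weights("partially_trained.h5", by_name=True)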
2019-06-21 17:20:48.640 | How to display contact without company in odoo? | How to display a contact without the company in Odoo 11? For example: if Mister X is in Company Y, Odoo displays the contact and company as "Y, X", but I want only "X". Thanks. | That name comes via the name_get method written inside res.partner. You need to extend that method in your custom module and remove the company name as a prefix from the contact name. | 0.386912 | false | 1 | 6,150 |
2019-06-22 20:23:45.847 | Python3 script exit with any traceback? | I have one Python3 script that exits without any traceback from time to time.
Some said in another question that it was caused by calling sys.exit, but I am not quite sure whether this is the case.
So how can I make the Python3 script always exit with a traceback, except of course when it is killed with signal 9? | It turns out that the script crashed when calling some function from an underlying .so (shared library), and crashed without any traceback. | 1.2 | true | 1 | 6,151 |
2019-06-24 14:06:10.280 | Could not install packages due to an EnvironmentError: Could not find a suitable TLS CA certificate bundle, invalid path | I get this error:
Could not install packages due to an EnvironmentError: Could not find a suitable TLS CA certificate bundle, invalid path: /home/yosra/Desktop/CERT.RSA
When I run: $ virtualenv venv
So I put a random CERT.RSA on the Desktop which worked and I created my virtual environment, but then when I run: pip install -r requirements.txt
I got this one:
Could not install packages due to an EnvironmentError: HTTPSConnectionPool(host='github.com', port=443): Max retries exceeded with url: /KristianOellegaard/django-hvad/archive/2.0.0-beta.tar.gz (Caused by SSLError(SSLError(0, 'unknown error (_ssl.c:3715)'),))
I feel that these 2 errors are linked to each other, but I want to know how can I fix the first one? | I received this error while running the command as "pip install flask" in Pycharm.
If you look at the error, you will see that it points to "packages due to an EnvironmentError: Could not find a suitable TLS CA certificate bundle -- Invalid path".
I solved this by removing the environment variable "REQUESTS_CA_BUNDLE" OR you can just change the name of the environment variable "REQUESTS_CA_BUNDLE" to some other name.
Restart your Pycharm and this should be solved.
Thank you ! | 0.081452 | false | 2 | 6,152 |
2019-06-24 14:06:10.280 | Could not install packages due to an EnvironmentError: Could not find a suitable TLS CA certificate bundle, invalid path | I get this error:
Could not install packages due to an EnvironmentError: Could not find a suitable TLS CA certificate bundle, invalid path: /home/yosra/Desktop/CERT.RSA
When I run: $ virtualenv venv
So I put a random CERT.RSA on the Desktop which worked and I created my virtual environment, but then when I run: pip install -r requirements.txt
I got this one:
Could not install packages due to an EnvironmentError: HTTPSConnectionPool(host='github.com', port=443): Max retries exceeded with url: /KristianOellegaard/django-hvad/archive/2.0.0-beta.tar.gz (Caused by SSLError(SSLError(0, 'unknown error (_ssl.c:3715)'),))
I feel that these 2 errors are linked to each other, but I want to know how I can fix the first one? | We get this all the time for various 'git' actions. We have our own CA + intermediary and we don't customize our software installations enough to accommodate that fact.
Our general fix is to update your ca-bundle.crt with the CA cert PEMs via either concatenation or replacement.
e.g. cat my_cert_chain.pem >> $(python -c "import certifi; print(certifi.where())")
This works great if you have an /etc/pki/tls/certs directory, but with python the python -c "import certifi; print(certifi.where())" tells you the location of python's ca-bundle.crt file.
Although it's not a purist Python answer, since we're not adding a new file / path, it solves a lot of other certificate problems with other software once you understand the underlying issue.
I recommend concatenating in this case as I don't know what else the file is used for vis-a-vis PyPI. | 0 | false | 2 | 6,152 |
2019-06-24 20:31:18.433 | Running Python Code in .NET Environment without Installing Python | Is it possible to productionize Python code in a .NET/C# environment without installing Python and without converting the Python code to C#, i.e. just deploy the code as is?
I know installing the Python language would be the reasonable thing to do but my hesitation is that I just don't want to introduce a new language to my production environment and deal with its testing and maintenance complications, since I don't have enough manpower who know Python to take care of these issues.
I know IronPython is built on CLR, but don't know how exactly it can be hosted and maintained inside .NET. Does it enable one to treat Python code as a "package" that can be imported into C# code, without actually installing Python as a standalone language? How can IronPython make my life easier in this situation? Can python.net give me more leverage? | IronPython is limited compared to running Python with C-based libraries needing the Python interpreter, not the .NET DLR. I suppose it depends how you are using the Python code; if you want to use a lot of third-party Python libraries, I doubt that IronPython will fit your needs.
What about building a full Python application but running it all from Docker?
That would require your environments to have Docker installed, but you could then also deploy your .NET applications using Docker too, and they would all be isolated and not dirty your 'environment'.
There are base Docker images out there that are specifically for building Python and .NET projects and also for running them. | 0.386912 | false | 1 | 6,153 |
2019-06-25 13:23:01.350 | No module named 'numpy' Even When Installed | I'm using windows with Python 3.7.3, I installed NumPy via command prompt with "pip install NumPy", and it installed NumPy 1.16.4 perfectly. However, when I run "import numpy as np" in a program, it says "ModuleNotFoundError: No module named 'numpy'"
I only have one version of python installed, and I don't know how I can fix this. How do I fix this? | python3 is not supported under NumPy 1.16.4. Try to install a more recent version of NumPy:
pip uninstall numpy
pip install numpy | 1.2 | true | 1 | 6,154 |
2019-06-26 02:23:52.283 | How do I get the error log generated from a flask app installed on cPanel? | I have a flask application installed on cPanel and it's giving me some error while the application is running. The application makes an ajax request to the server, but the server returns the response with a 500 error. I have no idea how to get information about what caused this error.
There's no information in the cPanel error log. Is it possible to create some log file that logs errors, when they occur, in the same application folder or something? | When you log into cPanel go to the Errors menu and it will give a more detailed response to your errors there. You can also try and check: /var/log/apache/error.log or /var/log/daemon.log | 0.386912 | false | 1 | 6,155 |
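Beyond cPanel's Errors menu, a common way to get a per-application log file from Flask is the standard logging module; this is a sketch and the log file name is an assumption:

    import logging
    from flask import Flask

    app = Flask(__name__)

    handler = logging.FileHandler("app_errors.log")   # created in the application folder
    handler.setLevel(logging.ERROR)
    app.logger.addHandler(handler)

    @app.route("/")
    def index():
        return "ok"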
2019-06-26 06:12:06.163 | How can I output to a v4l2 driver using FFMPEG's avformat_write_header? | I'm trying to use PyAV to output video to a V4l2 loopback device (/dev/video1), but I can't figure out how to do it. It uses the avformat_write_header() from libav* (ffmpeg bindings).
I've been able to get ffmpeg to output to the v4l2 device from the CLI but not from python. | Found the solution. The way to do this is:
Set the container format to v4l2
Set the stream format as "rawvideo"
Set the framerate (if it's a live stream, set the framerate to 1 fps higher than the stream is so that you don't get an error)
Set pixel format to either RGB24 or YUV420 | 0 | false | 1 | 6,156 |
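A hedged PyAV sketch following those steps; the device path, resolution and frame rate are assumptions, and the pixel format may need adjusting for your loopback device:

    import av
    import numpy as np

    container = av.open("/dev/video1", mode="w", format="v4l2")
    stream = container.add_stream("rawvideo", rate=30)
    stream.width, stream.height = 640, 480
    stream.pix_fmt = "yuv420p"

    img = np.zeros((480, 640, 3), dtype=np.uint8)           # dummy RGB frame
    frame = av.VideoFrame.from_ndarray(img, format="rgb24")
    for packet in stream.encode(frame):
        container.mux(packet)
    container.close()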
2019-06-26 09:33:29.453 | How to set up data collection for small-scale algorithmic trading software | This is a question on a conceptual level.
I'm building a piece of small-scale algorithmic trading software, and I am wondering how I should set up the data collection/retrieval within that system. The system should be fully autonomous.
Currently my algorithm that I want to trade live is doing so on a very low frequency, however I would like to be able to trade with higher frequency in the future and therefore I think that it would be a good idea to set up the data collection using a websocket to get real time trades straight away. I can aggregate these later if need be.
My first question is: considering the fact that the data will be real time, can I use a CSV-file for storage in the beginning, or would you recommend something more substantial?
In any case, the data collection would proceed as a daemon in my application.
My second question is: are there any frameworks available to handle real-time incoming data to keep the database constant while the rest of the software is querying it to avoid conflicts?
My third and final question is: do you believe it is a wise approach to use a websocket in this case or would it be better to query every time data is needed for the application? | CSV is a nice exchange format, but as it is based on a text file, it is not good for real-time updates. Only my opinion, but I cannot imagine a reason to prefer that to a database.
In order to handle real-time conflicts, you will later need a professional-grade database. PostgreSQL has the reputation of being robust, and MariaDB is probably a correct choice too. You could use a lighter database in development mode, like SQLite, but beware of the slight differences: it is easy to write something that will work on one database and will break on another one. On the other hand, if portability across databases is important, you should use at least 2 databases: one at development time and a different one at integration time.
A question to ask yourself immediately is whether you want a relational database or a NoSQL one. The former ensures ACID (Atomicity, Consistency, Isolation, Durability) transactions; the latter offers greater scalability. | 1.2 | true | 1 | 6,157 |
2019-06-27 05:15:37.540 | So when I run my python selenium script through jenkins, how should I write the 'driver = webdriver.Chrome()'? | So when I run my python selenium script through Jenkins, how should I write the driver = webdriver.Chrome()
How should I put the chrome webdriver EXE in jenkins?
Where should I put it? | If you have added your repository path in Jenkins during job configuration, Jenkins will create a virtual copy of your workspace. So, as long as the webdriver file is somewhere in your project folder structure, and as long as you are using a relative path to reference it in your code, there shouldn't be any issues with respect to driver invocation.
Your question also depends on several parameters, like:
1. Whether you are using Maven to run the test
2. Whether you are running tests on Jenkins locally or on a remote machine using Selenium Grid Architecture. | 1.2 | true | 1 | 6,158 |
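A minimal sketch of the relative-path idea from the answer; the driver location inside the repository is an assumption, and executable_path is the Selenium 3 style (newer Selenium versions use a Service object):

    import os
    from selenium import webdriver

    driver_path = os.path.join(os.path.dirname(__file__), "drivers", "chromedriver.exe")
    driver = webdriver.Chrome(executable_path=driver_path)   # resolved inside the Jenkins workspace
    driver.get("https://example.com")
    driver.quit()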
2019-06-27 08:57:46.013 | Multiple header in Pandas DataFrame to_excel | I need to export my DataFrame to Excel. Everything is good, but I need two "rows" of headers in my output file. That means I need two levels of column headers. I don't know how to export it and make double headers in the DataFrame. My DataFrame is created from a dictionary, but I need to add an extra header above.
I tried a few dumb things but nothing gave me a good result. I want to have, on the first level, a header for every three columns and, on the second level, a header for each column. They must be different.
I expect output with two headers above columns. | Had a similar issue. Solved by persisting cell-by-cell using worksheet.write(i, j, df.iloc[i,j]), with i starting after the header rows. | 0 | false | 1 | 6,159 |
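An alternative to cell-by-cell writing that pandas supports directly: a MultiIndex on the columns produces two header rows in to_excel. The header labels below are made up:

    import pandas as pd

    df = pd.DataFrame([[1, 2, 3], [4, 5, 6]])
    df.columns = pd.MultiIndex.from_tuples(
        [("Group A", "x"), ("Group A", "y"), ("Group A", "z")]
    )
    df.to_excel("output.xlsx")   # writes both header levels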
2019-06-27 09:51:08.853 | How can I find the index of a tuple inside a numpy array? | I have a numpy array as:
groups=np.array([('Species1',), ('Species2', 'Species3')], dtype=object).
When I ask np.where(groups == ('Species2', 'Species3')) or even np.where(groups == groups[1]) I get an empty reply: (array([], dtype=int64),)
Why is this and how can I get the indexes for such an element? | It does not mean searching for the tuple ('Species2', 'Species3') in groups when you use
np.where(groups == ('Species2', 'Species3'))
it means searching for 'Species2' and 'Species3' separately, as if you had a complete array like this
groups=np.array([('Species1',''), ('Species2', 'Species3')], dtype=object) | 0.101688 | false | 1 | 6,160 |
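To get the index of the whole tuple despite that element-wise behaviour, a plain comparison per element works; a small sketch:

    import numpy as np

    groups = np.array([('Species1',), ('Species2', 'Species3')], dtype=object)
    target = ('Species2', 'Species3')

    indexes = [i for i, g in enumerate(groups) if g == target]
    print(indexes)   # [1]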
2019-06-27 12:15:35.603 | Aggregate Ranking using Khatri-Rao product | I have constructed 2 graphs and calculated the eigenvector centrality of each node. Each node can be considered as an individual project contributor. Consider 2 different rankings of project contributors. They are ranked based on the eigenvector of the node.
Ranking #1:
Rank 1 - A
Rank 2 - B
Rank 3 - C
Ranking #2:
Rank 1 - B
Rank 2 - C
Rank 3 - A
This is a very small example but in my case, I have almost 400 contributors and 4 different rankings. My question is how can I merge all the rankings and get an aggregate ranking. Now I can't just simply add the eigenvector centralities and divide it by the number of rankings. I was thinking to use the Khatri-Rao product or Kronecker Product to get the result.
Can anyone suggest how I can achieve this?
Thanks in advance. | Rank both graphs separately so each node gets a rank in both graphs, then do simple matrix addition and normalize the resulting rank. This should keep relationships like rank1 > rank2 > rank3 > rank4 true, and relationships like rank1 + rank1 > rank1 + rank2 true. I don't see how taking the Khatri-Rao product of the matrices would help you; that would make you end up with more than 400 nodes, and then you would need to compress them back to 400 nodes in order to have 400 ranked nodes at the end. Who told you to use the Khatri-Rao product? | 0 | false | 1 | 6,161 |
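A small NumPy sketch of that add-then-normalize idea, using made-up centrality vectors for the same ordered set of contributors:

    import numpy as np

    ranking1 = np.array([0.50, 0.30, 0.20])   # centralities in ranking #1 (made up)
    ranking2 = np.array([0.20, 0.45, 0.35])   # centralities in ranking #2 (made up)

    combined = ranking1 + ranking2
    combined /= combined.sum()                # normalize so the aggregate sums to 1
    order = np.argsort(-combined)             # contributor indices, best first
    print(combined, order)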
2019-06-28 16:20:58.207 | How to use Javascript in Spyder IDE? | I want to write code in JavaScript, in the Spyder IDE, that is meant for Python. I have read that Spyder supports multiple languages but I'm not sure how to use it. I have downloaded Node.js and added it to the environment variables. I'd like to know how to get JavaScript syntax colouring, possibly auto-completion and Help options as well, and I'd also like to know how to conveniently execute the .js file and see the results in a console. | (Spyder maintainer here) Sorry, but for now we only support Python for all the functionality that you are looking for (code completion, help and code execution).
Our next major version (Spyder 4, to be released later in 2019) will have the ability to give code completion and linting for other programming languages, but it'll be more of a power-user feature than something anyone can use. | 1.2 | true | 1 | 6,162 |
2019-06-29 20:41:24.083 | How to view pyspark temporary tables on Thrift server? | I'm trying to make a temporary table a create on pyspark available via Thrift. My final goal is to be able to access that from a database client like DBeaver using JDBC.
I'm testing first using beeline.
This is what i'm doing.
Started a cluster with one worker in my own machine using docker and added spark.sql.hive.thriftServer.singleSession true on spark-defaults.conf
Started Pyspark shell (for testing sake) and ran the following code:
from pyspark.sql import Row
l = [('Ankit',25),('Jalfaizy',22),('saurabh',20),('Bala',26)]
rdd = sc.parallelize(l)
people = rdd.map(lambda x: Row(name=x[0], age=int(x[1])))
people = people.toDF().cache()
peebs = people.createOrReplaceTempView('peebs')
result = sqlContext.sql('select * from peebs')
So far so good, everything works fine.
On a different terminal I initialize spark thrift server:
./sbin/start-thriftserver.sh --hiveconf hive.server2.thrift.port=10001 --conf spark.executor.cores=1 --master spark://172.18.0.2:7077
The server appears to start normally and I'm able to see both pyspark and thrift server jobs running on my spark cluster master UI.
I then connect to the cluster using beeline
./bin/beeline
beeline> !connect jdbc:hive2://172.18.0.2:10001
This is what I got
Connecting to jdbc:hive2://172.18.0.2:10001
Enter username for jdbc:hive2://172.18.0.2:10001:
Enter password for jdbc:hive2://172.18.0.2:10001:
2019-06-29 20:14:25 INFO Utils:310 - Supplied authorities: 172.18.0.2:10001
2019-06-29 20:14:25 INFO Utils:397 - Resolved authority: 172.18.0.2:10001
2019-06-29 20:14:25 INFO HiveConnection:203 - Will try to open client transport with JDBC Uri: jdbc:hive2://172.18.0.2:10001
Connected to: Spark SQL (version 2.3.3)
Driver: Hive JDBC (version 1.2.1.spark2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Seems to be ok.
When I list show tables; I can't see anything.
Two interesting things I'd like to highlight is:
When I start pyspark I get these warnings
WARN ObjectStore:6666 - Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
WARN ObjectStore:568 - Failed to get database default, returning NoSuchObjectException
WARN ObjectStore:568 - Failed to get database global_temp, returning NoSuchObjectException
When I start the thrift server I get these:
rsync from spark://172.18.0.2:7077
ssh: Could not resolve hostname spark: Name or service not known
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: unexplained error (code 255) at io.c(235) [Receiver=3.1.2]
starting org.apache.spark.sql.hive.thriftserver.HiveThriftServer2, logging to ...
I've been through several posts and discussions. I see people saying we can't have temporary tables exposed via thrift unless you start the server from within the same code. If that's true how can I do that in python (pyspark)?
Thanks | createOrReplaceTempView creates an in-memory table. The Spark thrift server needs to be started on the same driver JVM where we created the in-memory table.
In the above example, the driver on which the table is created and the driver running STS(Spark Thrift server) are different.
Two options
1. Create the table using createOrReplaceTempView in the same JVM where the STS is started.
2. Use a backing metastore, and create tables using org.apache.spark.sql.DataFrameWriter#saveAsTable so that tables are accessible independent of the JVM (in fact, without any Spark driver).
Regarding the errors:
1. Relates to client and server metastore version.
2. Seems like some rsync script trying to decode spark:\\ url
Neither seems to be related to the issue. | 0 | false | 1 | 6,163 |
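A hedged PySpark sketch of option 2 from the answer, reusing the example DataFrame from the question; the table name is arbitrary:

    # Persist through the metastore instead of an in-memory temp view,
    # so the table is visible to the separately started Thrift server / beeline.
    people.write.mode("overwrite").saveAsTable("peebs")

    # then, in beeline:  show tables;  select * from peebs;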
2019-06-30 00:14:55.133 | How do I store information about a front-end button on the Django server? | Basically I want to store a buttons service server-side that way it can persist through browser closes and page refresh.
Here's what the user is trying to do
The user searches in a search bar for a list of products.
When the results show up, they are shown a button that triggers an action for each individual product. They are also shown a master button that can trigger the same action for each product that is listed.
Upon clicking the button, I want to disable it for 30 seconds and have this persist through page refreshes and browser close.
What I've done
Currently I have this implemented using AJAX calls on the client side, but if the page refreshes it resets the button and they can click it again. So I looked into using javascript's localStorage function, but in my situation it would be better just to store this on the server.
What I think needs to happen
Create a model in my Django app for a button. Its attributes would be its status and maybe some meta data (last clicked, etc).
Whenever the client requests a list of products, the views will send the list of products and it will be able to query the database for the respective button's status and implement a disabled attribute directly into the template.
If the button is available to be pressed then the client side will make an AJAX POST call to the server and the server will check the buttons status. If it's available it will perform the action, update the buttons status to disabled for 30 seconds, and send this info back to the client in order to reflect it in the DOM.
A couple questions
Is it just a matter of creating a model for the buttons and then querying the database like normal?
How do I have Django update the database after 30 seconds to make a button's status go from disabled back to enabled?
When the user presses the button it's going to make it disabled, but it will only be making it disabled in the database. What is the proper way to actually disable the button without a page refresh on the client side? Do I just disable the button in javascript for 30 seconds, and then if they try to refresh the page then the views will see the request for the list of products and it will check the database for each button's status and it will serve the button correctly?
Thank you very much for the help!! | Is it just a matter of creating a model for the buttons and then
querying the database like normal?
Model could be something like Button (_id, last_clicked as timestamp, user_id)
While querying you could simply sort by timestamp and LIMIT 1 to get the last click. By not overwriting the original value it would ensure a bit faster write.
If you don't want the buttons to behave similarly for each user you will have to create a mapping of the button with the user who clicked it. Even if your current requirements don't need them, create an extensible solution where mapping the user with this table is quite easy.
How do I have Django update the database after 30 seconds to make a
button's status go from disabled back to enabled?
I avoid changing the database without a client request mapped to the change. This ensures the concurrency and access controls. And also has higher predictability for the current state of data. Following that, I would suggest not to update the db after the time delta(30 sec).
Instead of that you could simply compare the last_clicked timestamp and calculate the delta either server side before sending the response or in client side.
This decision could be important, consider a scenario when the client has a different time on his system than the server time.
When the user presses the button it's going to make it disabled, but
it will only be making it disabled in the database. What is the proper
way to actually disable the button without a page refresh on the
client side? Do I just disable the button in javascript for 30
seconds, and then if they try to refresh the page then the views will
see the request for the list of products and it will check the
database for each button's status and it will serve the button
correctly?
You'd need to do a POST request to communicate the button press timestamp with the db. You'd also need to ensure that the POST request is successful as an unsuccessful request would not persist the data in case of browser closure.
After doing the above two you could disable the button only from the client side, without trying to get the button's last_clicked timestamp. | 1.2 | true | 1 | 6,164 |
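A hedged Django sketch of the model and the 30-second comparison described above; the names and the cooldown constant are assumptions:

    from datetime import timedelta
    from django.db import models
    from django.utils import timezone

    class ButtonPress(models.Model):
        product_id = models.CharField(max_length=64)
        last_clicked = models.DateTimeField()

        def is_disabled(self, cooldown=timedelta(seconds=30)):
            # Compare against the stored timestamp instead of scheduling a db update.
            return timezone.now() - self.last_clicked < cooldown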
2019-06-30 05:53:43.637 | Using a Discord bot, how do I get a string of an embed message from another bot | I am creating a Discord bot that needs to check all messages to see if a certain string is in an embed message created by any other Discord bot. I know I can use message.content to get a string of the message a user has sent but how can I do something similar with bot embeds in Python? | Use message.embeds instead to get the embed string content | 1.2 | true | 1 | 6,165 |
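A short sketch of the accepted answer, assuming the discord.py library (older, intent-free style); the target string is a placeholder:

    import discord

    client = discord.Client()

    @client.event
    async def on_message(message):
        for embed in message.embeds:
            text = (embed.title or "") + " " + (embed.description or "")
            if "certain string" in text:          # placeholder string to look for
                print("found it in an embed")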
2019-07-01 09:49:35.957 | Python magic is not recognizing the correct content | I have parsed the content of a file to a variable that looks like this;
b'8,092436.csv,,20f85'
I would now like to find out what kind of filetype this data is coming from, with;
print(magic.from_buffer(str(decoded, 'utf-8'), mime=True))
This prints;
application/octet-stream
Anyone know how I would be able to get a result saying 'csv'? | Use magic on the original file.
You also need to take into account that CSV is really just a text file that uses particular characters to delimit the content. There is no explicit identifier that indicates that the file is a CSV file. Even then the CSV module needs to be configured to use the appropriate delimiters.
The delimiter specification of a CSV file is either defined by your program or needs to be configured (see importing into Excel as an example, you are presented with a number of options to configure the type of CSV to import). | 1.2 | true | 1 | 6,166 |
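If the goal is just to confirm that a text buffer looks like CSV, the standard library's csv.Sniffer is one option (a sketch, not part of the accepted answer):

    import csv

    sample = "8,092436.csv,,20f85"
    try:
        dialect = csv.Sniffer().sniff(sample)
        print("looks like CSV, delimiter:", repr(dialect.delimiter))
    except csv.Error:
        print("does not look like CSV")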
2019-07-01 15:01:58.223 | Configure proxy with python | I am looking to use a public API running on a distant server from within my company. For security reasons, I am supposed to redirect all the traffic via the company's PROXY. Does anyone know how to do this in Python? | Set the HTTP_PROXY environment variable before starting your python script
e.g. export HTTP_PROXY=http://proxy.host.com:8080 | 0.201295 | false | 2 | 6,167 |
2019-07-01 15:01:58.223 | Configure proxy with python | I am looking to use a public API running on a distant server from within my company. For security reasons, I am supposed to redirect all the traffic via the company's PROXY. Does anyone know how to do this in Python? | Directly in python you can do :
os.environ["HTTP_PROXY"] = http://proxy.host.com:8080.
Or as it has been mentioned before launching by @hardillb on a terminal :
export HTTP_PROXY=http://proxy.host.com:8080 | 0.386912 | false | 2 | 6,167 |
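If the calls are made with the requests library, the proxy can also be passed per request instead of via the environment; a sketch using the same placeholder proxy URL:

    import requests

    proxies = {
        "http": "http://proxy.host.com:8080",
        "https": "http://proxy.host.com:8080",
    }
    response = requests.get("https://api.example.com/data", proxies=proxies)
    print(response.status_code)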
2019-07-03 17:35:26.190 | How can I find domain that has been used from a client to reach my server in python socket? | I just wonder how an Apache server can know the domain you come from; you can see that in the VHost configuration | By a reverse DNS lookup of the IP; socket.gethostbyaddr().
Results vary; many IPs from consumer ISPs won't resolve to anything interesting, because of NAT and just not maintaining a generally informative reverse zone. | 0 | false | 1 | 6,168 |
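A tiny example of the lookup the answer mentions (the IP address is arbitrary):

    import socket

    hostname, aliases, addresses = socket.gethostbyaddr("8.8.8.8")
    print(hostname)   # e.g. dns.google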
2019-07-03 22:16:01.657 | How to write each dataframe partition into different tables | I am using Databricks to connect to an Eventhub, where each message comming from the EventHub may be very different from another.
In the message, I have a body and an id.
I am looking for performance, so I am avoiding collecting data or doing unecessary processings, also I want to do the saving in parallel by partition. However I am not sure on how to do this in a proper way.
I want to append the body of each ID in a different AND SPECIFIC table in batches, the ID will give me the information I need to save in the right table. So in order to do that I have been trying 2 approachs:
Partitioning: Repartition(numPartitions, ID) -> ForeachPartition
Grouping: groupBy('ID').apply(myFunction) #@pandas_udf GROUPED_MAP
Approach 1 doesn't look very attractive to me; the repartition process looks kind of unnecessary, and I saw in the docs that even if I set a column as a partition, it may save many ids of that column in a single partition. It only guarantees that all data related to that id is in the partition and not split.
Approach 2 forces me to output, from the pandas_udf, a dataframe with the same schema as the input, which is not going to happen since I am transforming the eventhub message from CSV to a dataframe in order to save it to the table. I could return the same dataframe that I received, but it sounds weird.
Is there any nice approach I am not seeing? | If your ID has a limited number of distinct values (a type/country kind of column) you can use partitionBy to store the data, and thereby saving them to different tables will be faster.
Otherwise create a derived column (using withColumn) from your ID column, using the same logic you want to use while dividing data across tables. Then you can use that derived column as a partition column in order to have a faster load. | 1.2 | true | 1 | 6,169 |
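A hedged PySpark sketch of the partitioned write the answer suggests; the output path and format are assumptions:

    from pyspark.sql import functions as F

    # Derive a partition column from the id if needed, then write partitioned by it.
    out = df.withColumn("table_key", F.col("ID"))
    out.write.mode("append").partitionBy("table_key").parquet("/mnt/output/messages")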
2019-07-04 09:21:11.737 | how to get the members of a telegram group greater than 10000 | I am getting only up to 10,000 members when using Telethon; how do I get more than 10,000?
I tried running it multiple times to check whether it is returning a random 10,000 members, but most of them are the same; only a few changed, and even those did not cross two digits.
Expected: greater than 10,000
but the actual result is 10,000 | There is no simple way. You can play with queries like 'a*', 'b*' and so on | 0 | false | 1 | 6,170 |
2019-07-04 12:44:56.280 | Keras preprocessing for 3D semantic segmentation task | For semantic image segmentation, I understand that you often have a folder with your images and a folder with the corresponding masks. In my case, I have gray-scale images with the dimensions (32, 32, 32). The masks naturally have the same dimensions. The labels are saved as intensity values (value 1 = label 1, value 2 = label 2 etc.). 4 classes in total. Imagine I have found a model that was built with the keras model API. How do I know how to prepare my label data for it to be accepted by the model? Does it depend on the loss function? Is it defined in the model (Input parameter). Do I just add another dimension (4, 32, 32, 32) in which the 4 represents the 4 different classes and one-hot code it?
I want to build a 3D convolutional neural network for semantic segmentation but I fail to understand how to feed in the data correctly in keras. The predicted output is supposed to be a 4-channel 3D image, each channel showing the probability values of each pixel to belong to a certain class. | The Input() function defines the shape of the input tensor of a given model. For 3D images, often a 5D Tensor is expected, e.g. (None, 32, 32, 32, 1), where None refers to the batch size. Therefore the training images and labels have to be reshaped. Keras offers the to_categorical function to one-hot encode the label data (which is necessary). The use of generators helps to feed in the data. In this case, I cannot use the ImageDataGenerator from keras as it can only deal with RGB and grayscale images and therefore have to write a custom script. | 1.2 | true | 1 | 6,171 |
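A small sketch of the label preparation the answer describes, for 4 classes and (32, 32, 32) volumes, assuming the labels have been shifted to start at 0; the arrays below are dummy data:

    import numpy as np
    from tensorflow.keras.utils import to_categorical

    masks = np.random.randint(0, 4, size=(10, 32, 32, 32))       # integer labels 0..3 (dummy data)
    labels = to_categorical(masks, num_classes=4)                 # -> (10, 32, 32, 32, 4)

    images = np.random.rand(10, 32, 32, 32, 1).astype("float32") # (batch, x, y, z, channels)
    print(images.shape, labels.shape)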
2019-07-04 23:38:40.930 | How to fix "UnsatisfiableError: The following specifications were found to be incompatible with each other: - pip -> python=3.6" | So, I am trying to install the ECMWF API client with the command conda install -c conda-forge ecmwf-api-client, then the warning in the title shows up. I don't know how to proceed
(base) C:\Users\caina>conda install -c conda-forge ecmwf-api-client
Collecting package metadata (current_repodata.json): done
Solving environment: failed
Collecting package metadata (repodata.json): done
Solving environment: failed
UnsatisfiableError: The following specifications were found to be incompatible with each other:
pip -> python=3.6 | Simply go to Anaconda navigator.
Go to Environments, Select Installed (packages, etc.) and then click the version of Python. Downgrade it to a lower version. In your case Python 3.6 | -0.101688 | false | 2 | 6,172 |
2019-07-04 23:38:40.930 | How to fix "UnsatisfiableError: The following specifications were found to be incompatible with each other: - pip -> python=3.6" | So, I am trying to install the ECMWF API client with the command conda install -c conda-forge ecmwf-api-client, then the warning in the title shows up. I don't know how to proceed
(base) C:\Users\caina>conda install -c conda-forge ecmwf-api-client
Collecting package metadata (current_repodata.json): done
Solving environment: failed
Collecting package metadata (repodata.json): done
Solving environment: failed
UnsatisfiableError: The following specifications were found to be incompatible with each other:
pip -> python=3.6 | Install into a new environment instead of the conda base environment. Recent Anaconda and Miniconda installers have Python 3.7 in the base environment, but you're trying to install something that requires Python 3.6. | 0.386912 | false | 2 | 6,172 |
2019-07-06 19:03:17.943 | how to run my python code on google cloud without fear of getting disconnected - an absolute beginner? | I have been trying to use python 3 for text mining on a 650 MB csv file, which my computer was not powerful enough to do. My second solution was to reach out to google cloud. I have set up my VMs and my jupyter notebook on google cloud, and it works perfectly well. The problem, however, is that I am in constant fear of getting disconnected. As a matter of fact, my connection with google server was lost a couple of time and so was my whole work.
My question: Is there a way to have the cloud run my code without fear of getting disconnected? I need to be able to have access to my csv file and also the output file.
I know there is more than one way to do this and have read a lot of material. However, they are too technical for a beginner like me to understand. I really appreciate a more dummy-friendly version. Thanks!
UPDATE: here is how I get access to my jupyter notebook on google cloud:
1- I run my instance on google cloud
2- I click on SSH
3- in the window that appears, I type the following:
jupyter notebook --ip=0.0.0.0 --port=8888 --no-browser &
I have seen people recommend adding nohup to the beginning of the same command. I have tried it and got this message:
nohup: ignoring input and appending output to 'nohup.out'
And nothing happens. | If I understand your problem correctly, you could just run the program inside a screen instance:
After connecting via ssh type screen
Run your command
Press ctrl + a, ctrl + d
Now you can disconnect from ssh and your code will continue to run. You can reconnect to the screen via screen -r | 1.2 | true | 1 | 6,173 |
2019-07-08 15:13:46.557 | Importing sknw on jupyter ModuleNotFoundError | On my jupyter notebook, running import sknw throws a ModuleNotFoundError error.
I have tried pip install sknw and pip3 install sknw and python -m pip install sknw. It appears to have downloaded successfully, and I get "requirement already satisfied" if I try to download it again.
Any help on how to get the sknw package to work in a Jupyter notebook would be very helpful! | Check in which environment you are using pip. | 1.2 | true | 1 | 6,174 |
2019-07-08 16:03:06.770 | What is the best approach to scrape a big website? | Hello, I am developing a web scraper and I am using it on a particular website; this website has a lot of URLs, maybe more than 1,000,000, and for scraping and getting the information I have the following architecture.
One set to store the visited sites and another set to store the non-visited sites.
For scraping the website I am using multithreading with a limit of 2000 threads.
This architecture has a problem with a memory size and can never finish because the program exceeds the memory with the URLs
Before putting a URL in the set of non-visited, I check first if this site is in visited, if the site was visited then I will never store in the non-visited sites.
For doing this I am using python, I think that maybe a better approach would be storing all sites in a database, but I fear that this can be slow
I can fix part of the problem by storing the set of visited URLs in a database like SQLite, but the problem is that the set of the non-visited URL is too big and exceeds all memory
Any idea about how to improve this, with another tool, language, architecture, etc...?
Thanks | 2000 threads is too many. Even 1 may be too many. Your scraper will probably be thought of as a DOS (Denial Of Service) attack and your IP address will be blocked.
Even if you are allowed in, 2000 is too many threads. You will bottleneck somewhere, and that chokepoint will probably lead to going slower than you could if you had some sane threading. Suggest trying 10. One way to look at it -- Each thread will flip-flop between fetching a URL (network intensive) and processing it (cpu intensive). So, 2 times the number of CPUs is another likely limit.
You need a database under the covers. This will let you stop and restart the process. More importantly, it will let you fix bugs and release a new crawler without necessarily throwing away all the scraped info.
The database will not be the slow part. The main steps:
Pick a page to go for (and lock it in the database to avoid redundancy).
Fetch the page (this is perhaps the slowest part)
Parse the page (or this could be the slowest)
Store the results in the database
Repeat until no further pages -- which may be never, since the pages will be changing out from under you.
(I did this many years ago. I had a tiny 0.5GB machine. I quit after about a million analyzed pages. There were still about a million pages waiting to be scanned. And, yes, I was accused of a DOS attack.) | 0.201295 | false | 2 | 6,175 |
2019-07-08 16:03:06.770 | What is the best approach to scrape a big website? | Hello I am developing a web scraper and I am using in a particular website, this website has a lot of URLs, maybe more than 1.000.000, and for scraping and getting the information I have the following architecture.
One set to store the visited sites and another set to store the non-visited sites.
For scraping the website I am using multithreading with a limit of 2000 threads.
This architecture has a problem with a memory size and can never finish because the program exceeds the memory with the URLs
Before putting a URL in the set of non-visited, I check first if this site is in visited, if the site was visited then I will never store in the non-visited sites.
For doing this I am using python, I think that maybe a better approach would be storing all sites in a database, but I fear that this can be slow
I can fix part of the problem by storing the set of visited URLs in a database like SQLite, but the problem is that the set of the non-visited URL is too big and exceeds all memory
Any idea about how to improve this, with another tool, language, architecture, etc...?
Thanks | First of all, I have never crawled pages using Python. My preferred language is C#, but Python should be as good, or better.
Ok, the first thing you detected is quite important. Just operating in memory will NOT work. Implementing a way to work on your hard drive is important. If you just want to work in memory, think about the size of the pages.
In my opinion, you already have the best (or at least a good) architecture for web scraping/crawling. You need some kind of list which represents the URLs you have already visited, and another list in which you can store the new URLs you found. Just two lists is the simplest way you could go, because it means you are not implementing any particular crawling strategy. If you are not looking for something like that, ok. But think about it, because that could optimize the memory usage. For that you should look at something like a depth-first or breadth-first crawl, or a recursive crawl, representing each branch as its own list, or a dimension of an array.
Further, what is the problem with storing your not-visited URLs in a database too? You only need them per thread. If your problem with putting them in a db is that it could take some time to sweep through it, then you should think about using multiple tables, one for each part of the page.
That means, you could use one table for each substring in url:
wwww.example.com/
wwww.example.com/contact/
wwww.example.com/download/
wwww.example.com/content/
wwww.example.com/support/
wwww.example.com/news/
So if your url is:"wwww.example.com/download/sweetcats/", then you should put it in the table for wwww.example.com/download/.
When you have a set of URLs, you first have to look for the correct table. Afterwards you can sweep through that table.
And at the end, I have just one question. Why are you not using a library or a framework which already supports these features? I think there should be something available for Python. | 1.2 | true | 2 | 6,175 |
2019-07-08 17:12:59.247 | How to set Referrer in driver selenium python? | I need to scrape a web page, but the problem is that when I click on the link on the website, it works fine; however, when I go through the link manually by typing the URL into the browser, it gives an Access Denied error, so maybe they are validating the referrer on their end. Can you please tell me how I can sort this issue out using Selenium in Python?
Or any idea that can solve this issue? I am unable to scrape the page because it's giving an Access Denied error.
PS. I am working with python3
Waiting for help.
Thanks | I solved it myself by using selenium-wire ;) Selenium doesn't support setting headers, but selenium-wire does, so that solved my issue.
Thanks | 0 | false | 1 | 6,176 |
2019-07-08 23:22:52.727 | While querying data (web scraping) from a website with Python, how to avoid being blocked by the server? | I was trying to use python requests and mechanize to gather information from a website. This process needs me to post some information then get the results from that website. I automate this process using a for loop in Python. However, after ~500 queries, I was told that I am blocked due to a high query rate. It takes about 1 sec to do each query. I was using some software online where they query multiple data without problems. Could anyone help me figure out how to avoid this issue? Thanks!
No idea how to solve this.
--- I am looping this process (by auto changing case number) and export data to csv....
After some queries, I was told that my IP was blocked. | Optimum randomized delay time between requests.
Randomized real user-agents for each request.
Enabling cookies.
Using a working proxy pool and selecting a random proxy for each request. | 0 | false | 1 | 6,177 |
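A minimal requests sketch combining the delay, user-agent and cookie points above; the URL and user-agent strings are placeholders:

    import random
    import time
    import requests

    user_agents = ["Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
                   "Mozilla/5.0 (X11; Linux x86_64)"]       # placeholder strings

    session = requests.Session()                             # keeps cookies enabled
    for case_number in range(100):
        headers = {"User-Agent": random.choice(user_agents)}
        r = session.post("https://example.com/query", data={"case": case_number}, headers=headers)
        time.sleep(random.uniform(2, 6))                     # randomized delay between requests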
2019-07-09 21:53:09.597 | What is the purpose of concrete methods in abstract classes in Python? | I feel like this subject is touched in some other questions but it doesn't get into Python (3.7) specifically, which is the language I'm most familiar with.
I'm starting to get the hang of abstract classes and how to use them as blueprints for subclasses I'm creating.
What I don't understand though, is the purpose of concrete methods in abstract classes.
If I'm never going to instantiate my parent abstract class, why would a concrete method be needed at all? Shouldn't I just stick with abstract methods to guide the creation of my subclasses and make the expected behavior explicit?
Thanks. | This question is not Python specific, but general object oriented.
There may be cases in which all your sub-classes need a certain method with a common behavior. It would be tedious to implement the same method in all your sub-classes. If you instead implement the method in the parent class, all your sub-classes inherit this method automatically. Even callers may call the method on your sub-class, although it is implemented in the parent class. This is one of the basic mechanics of class inheritance. | 0.386912 | false | 1 | 6,178 |
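A tiny illustration of that point with Python's abc module (the class names are made up, not from the answer itself):

    from abc import ABC, abstractmethod

    class Exporter(ABC):
        @abstractmethod
        def render(self):            # each subclass must implement this
            ...

        def save(self, path):        # concrete: shared by every subclass
            with open(path, "w") as f:
                f.write(self.render())

    class CsvExporter(Exporter):
        def render(self):
            return "a,b,c"

    CsvExporter().save("out.csv")    # save() is inherited, render() is the subclass's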
2019-07-10 09:24:32.763 | Building Kivy Android app with Tensorflow | Recently, I wanted to deploy a deep learning model (TensorFlow) on mobile (Android/iOS), and I found that Kivy Python is a good choice for writing cross-platform apps. (I am not familiar with Java Android.)
But I don't know how to integrate the TensorFlow libs when building the .apk file.
The guide for writing a "buildozer recipe" is quite complicated for this case.
Is there any solution for this problem without using native Java Android and TensorFlow Lite? | Fortunately I found someone facing the same issues as I am, but unfortunately I found that Kivy couldn't compile the TensorFlow library yet. In other words, it is not supported yet. I don't know when they will update the features. | 0 | false | 1 | 6,179 |
2019-07-10 11:12:13.937 | how do I produce unique random numbers as an array in Python? | I have an array with size (4,4) that can have values 0 and 1, so I can have 65536 different arrays. I need to produce all these arrays without repetition. I use wt_random=np.random.randint(2, size=(65536,4,4)) but I am worried they are not unique. Could you please tell me whether this code is correct or not, and what I should do to produce all possible arrays? Thank you. | If you need all possible arrays in random order, consider enumerating them in any arbitrary deterministic order and then shuffling them to randomize the order. If you don't want all arrays in memory, you could write a function to generate the array at a given position in the deterministic list, then shuffle the positions. Note that Fisher-Yates may not even need a dense representation of the list to shuffle... if you keep track of where the already shuffled entries end up you should have enough. | 0 | false | 1 | 6,180 |
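A short sketch of the deterministic-enumeration-plus-shuffle idea for this exact case (all 65536 binary 4x4 arrays, each exactly once):

    import numpy as np

    n = np.arange(65536, dtype=np.uint32)
    bits = (n[:, None] >> np.arange(16)) & 1           # each row: the 16 bits of one index
    arrays = bits.reshape(65536, 4, 4).astype(np.uint8)

    rng = np.random.default_rng()
    arrays = arrays[rng.permutation(65536)]            # random order, no repetitions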
2019-07-10 11:43:17.540 | How to add a column of seconds to a column of times in python? | I have a file contain some columns which the second column is time. Like what I show below. I need to add a column of time which all are in seconds like this: "2.13266 2.21784 2.20719 2.02499 2.16543", to the time column in the first file (below). My question is how to add these two time to each other. And maybe in some cases when I add these times, then it goes to next day, and in this case how to change the date in related row.
2014-08-26 19:49:32 0
2014-08-28 05:43:21 0
2014-08-30 11:47:54 0
2014-08-30 03:26:10 0 | Probably the easiest way is to read your file into a pandas data-frame and parse each row as a datetime object. Then you create a datetime.timedelta object passing the fractional seconds.
A datetime object + a timedelta handles wrapping around for days quite nicely so this should work without any additional code. Finally, write back your updated dataframe to a file. | 0 | false | 2 | 6,181 |
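A hedged pandas sketch of that approach; the seconds values are the first four from the question (matching the four sample rows), while the column names and file layout are assumptions:

    import pandas as pd

    df = pd.read_csv("times.txt", sep=r"\s+", header=None,
                     names=["date", "time", "flag"])                  # assumed layout
    timestamps = pd.to_datetime(df["date"] + " " + df["time"])

    seconds = [2.13266, 2.21784, 2.20719, 2.02499]                     # from the question
    df["shifted"] = timestamps + pd.to_timedelta(seconds, unit="s")    # day rollover handled automatically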
2019-07-10 11:43:17.540 | How to add a column of seconds to a column of times in python? | I have a file contain some columns which the second column is time. Like what I show below. I need to add a column of time which all are in seconds like this: "2.13266 2.21784 2.20719 2.02499 2.16543", to the time column in the first file (below). My question is how to add these two time to each other. And maybe in some cases when I add these times, then it goes to next day, and in this case how to change the date in related row.
2014-08-26 19:49:32 0
2014-08-28 05:43:21 0
2014-08-30 11:47:54 0
2014-08-30 03:26:10 0 | Ok. Finally it is done via this code:
d = 2.13266
dd = pd.to_timedelta(int(d), unit='s')
df = pd.Timestamp('2014-08-26 19:49:32')
new = df + dd | 0 | false | 2 | 6,181 |