Title | A_Id | Users Score | Q_Score | ViewCount | Database and SQL | Tags | Answer | GUI and Desktop Applications | System Administration and DevOps | Networking and APIs | Other | CreationDate | AnswerCount | Score | is_accepted | Q_Id | Python Basics and Environment | Data Science and Machine Learning | Web Development | Available Count | Question |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
How do I enable local modules when running a python script as a cron tab? | 31,189,359 | 0 | 1 | 1,207 | 0 | python,cron,beautifulsoup,crontab | ~/.local paths (populated by pip install --user) are available automatically i.e., it is enough if the cron job belongs to the corresponding user.
To configure an arbitrary path, you can set the PYTHONPATH environment variable in the crontab. Do not modify sys.path inside your script. | 0 | 1 | 0 | 1 | 2015-07-02T12:52:00.000 | 2 | 0 | false | 31,185,207 | 0 | 0 | 0 | 1 | I just wrote a small python script that uses BeautifulSoup in order to extract some information from a website.
Everything runs fine whenever the script is run from the command line. However, when it runs from a crontab, the server returns this error:
Traceback (most recent call last):
File "/home/ws/undwv/mindfactory.py", line 7, in <module>
from bs4 import BeautifulSoup
ImportError: No module named bs4
Since I do not have any root access to the server, BeautifulSoup was installed at the user directory: $HOME/local/lib/python2.7/site-packages
I suppose the cron tab does not look for modules in the user directory. Any ideas how to solve that? |
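A minimal diagnostic sketch for the cron/import problem above: log which interpreter and search path the cron job actually sees, so it can be compared with the interactive shell. The log file path is only an illustrative assumption.

    import os
    import sys

    # Write the interpreter path, PYTHONPATH and sys.path to a log file
    # so the cron environment can be compared with the interactive shell.
    with open('/tmp/cron_env_debug.log', 'w') as log:
        log.write('executable: %s\n' % sys.executable)
        log.write('PYTHONPATH: %s\n' % os.environ.get('PYTHONPATH', '<not set>'))
        for p in sys.path:
            log.write('sys.path entry: %s\n' % p)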
what is the maximum number of workers and concurrency can be configured in celery | 31,204,413 | 2 | 1 | 1,364 | 0 | python,celery,celery-task | That's like asking 'how long is a piece of string' and I'm sure there isn't a single simple answer. Certainly it will be more than 8 threads, with a useful upper limit at the maximum concurrent I/O tasks needed, maybe determined by the number of remote users of your service that the I/O tasks are communicating with. Presumably at some number of tasks 'manipulating the data' will start to load up your processor and you won't be i/o bound any more. | 0 | 1 | 0 | 0 | 2015-07-03T10:13:00.000 | 1 | 0.379949 | false | 31,204,230 | 0 | 0 | 0 | 1 | If I'm scheduling IO bound task in celery and if my server spec was like Quad Core with 8GB RAM, How many workers and concurrency I can use.
If CPU-bound processes are advised to use 4 workers and a concurrency of 8 on a quad-core processor, what is the recommendation for I/O-bound processes?
In my task I will be performing API calls, manipulating the received data and storing the processed data on the server. |
subprocess.popen( ) executing Python script but not writing to a file | 31,207,419 | 1 | 0 | 572 | 0 | python,pipe,subprocess,popen | The absolute path of Python in self.runcmd should do the magic!
Try using the absolute path of the file name when opening the file in write mode. | 0 | 1 | 0 | 0 | 2015-07-03T10:56:00.000 | 1 | 1.2 | true | 31,205,122 | 0 | 0 | 0 | 1 | I am trying to run a Python program from inside another Python program using these commands:
subprocess.call(self.runcmd, shell=True);
subprocess.Popen(self.runcmd, shell=True); and
self.runcmd = " python /home/john/createRecordSet.py /home/john/sampleFeature.dish "
Now the script runs fine, but the file it's supposed to write to is not even getting created. I'm using "w" mode for creating and writing. |
HiveMQ and IoT control | 31,220,724 | 1 | 0 | 257 | 0 | python,gpio,messagebroker,iot,hivemq | Start HiveMQ with the following: ./bin/run.sh &
Yes, it is possible to subscribe to two topics from the same application, but you need to create separate subscribers within your Python application. | 0 | 1 | 0 | 1 | 2015-07-03T13:32:00.000 | 2 | 0.099668 | false | 31,208,102 | 0 | 0 | 0 | 1 | I recently installed HiveMQ on an Ubuntu machine and everything works fine. Being new to Linux (I am more of a Windows guy), I am stuck with the following questions.
I started HiveMQ with the command ./bin/run.sh. A window opens and confirms that HiveMQ is running. Great! I started this over PuTTY, and when I close PuTTY, HiveMQ also stops. How do I make HiveMQ run all the time?
I am using HiveMQ for my IoT projects (Raspberry Pi). I know how to subscribe and publish to the HiveMQ broker from Python, but what confuses me is: should I be running the Python program continuously to make this work? Assuming I need to trigger two or more GPIOs on the Pi, can I write one program and keep it running by making it subscribe to two or more topics for trigger events?
Any help is greatly appreciated.
Thanks |
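As a sketch of the pattern the answer above describes (one long-running Python process subscribed to several topics), here is a minimal example using the paho-mqtt client; the broker address, topic names and GPIO pin numbers are assumptions, and the RPi.GPIO calls only work on a Raspberry Pi.

    import paho.mqtt.client as mqtt
    import RPi.GPIO as GPIO

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(17, GPIO.OUT)   # assumed pin for topic "home/led1"
    GPIO.setup(27, GPIO.OUT)   # assumed pin for topic "home/led2"

    def on_connect(client, userdata, flags, rc):
        # Subscribe to both trigger topics once connected.
        client.subscribe([("home/led1", 0), ("home/led2", 0)])

    def on_message(client, userdata, msg):
        pin = 17 if msg.topic == "home/led1" else 27
        GPIO.output(pin, GPIO.HIGH if msg.payload == b"on" else GPIO.LOW)

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect("broker.example.com", 1883, 60)  # assumed broker address
    client.loop_forever()                            # keeps the program running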
Using Homebrew python instead of system provided python | 31,891,599 | 0 | 1 | 504 | 0 | python,homebrew | This happened to me when I installed Python 2.7.10 using brew. My PATH was set to /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin and which python returned /usr/local/bin/python (which is symlinked to Python 2.7.10.)
The problem went away when I closed and restarted the Terminal application. | 0 | 1 | 0 | 0 | 2015-07-03T14:55:00.000 | 1 | 0 | false | 31,209,635 | 0 | 0 | 0 | 0 | I used Homebrew to install python, the version is 2.7.10, and the system provided version is 2.7.6. My PATH environment variable is set to /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin, so my terminal DOES know to look at the Homebrew bin folder first!
However, when I run python, it still defaults to 2.7.6, the system provided version (the interpreter that shows up says 2.7.6 at the top). If I run /usr/local/bin/python, it runs 2.7.10, which is what I want.
If my PATH variable is properly set, then how is it possible that terminal still finds /usr/bin/python first? |
Python alternative to os.kill with a return code? | 31,216,258 | 2 | 0 | 1,632 | 0 | python,linux | This question is based on a mistaken understanding of how kill -9 PID behaves (or kill with any other signal -- even though -9 can't be overridden by a process's signal handler, it can still be delayed if, for instance, the target is in a blocking syscall).
Thus: kill -9 "$pid", in shell, doesn't tell you when the signal is received either. A return code of 0 just means that the signal was sent, same as what Python's os.kill() returning without an exception does.
The underlying kill() system call -- invoked by both os.kill() and the kill shell command -- has no way of returning result information about the target's fate. Thus, that information is not available in any language. | 0 | 1 | 0 | 0 | 2015-07-04T02:13:00.000 | 2 | 0.197375 | false | 31,216,203 | 1 | 0 | 0 | 0 | Is there an alternative to the os.kill function in Python 3 that will give me a return code? I'd like to verify that a process and its children actually do get killed before restarting them.
I could probably put a kill -0 loop afterwards or do a subprocess.call(kill -9 pid) if I had to but I'm curious if there's a more elegant solution. |
Python alternative to os.kill with a return code? | 31,216,218 | 0 | 0 | 1,632 | 0 | python,linux | os.kill() sends a signal to the process. The return code will still be sent to the parent process. | 0 | 1 | 0 | 0 | 2015-07-04T02:13:00.000 | 2 | 0 | false | 31,216,203 | 1 | 0 | 0 | 2 | Is there an alternative to the os.kill function in Python 3 that will give me a return code? I'd like to verify that a process and it's children actually do get killed before restarting them.
I could probably put a kill -0 loop afterwards or do a subprocess.call(kill -9 pid) if I had to but I'm curious if there's a more elegant solution. |
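A minimal sketch of the "kill -0 loop" idea from the question, in pure Python: send SIGTERM (or SIGKILL), then poll with signal 0, which raises an error once the pid no longer exists. This checks a single pid; child processes would have to be checked the same way.

    import errno
    import os
    import signal
    import time

    def wait_for_exit(pid, timeout=10.0):
        """Return True once `pid` no longer exists, False on timeout."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                os.kill(pid, 0)              # signal 0: existence check only
            except OSError as e:
                if e.errno == errno.ESRCH:   # no such process
                    return True
                # EPERM means it exists but belongs to another user; keep waiting
            time.sleep(0.1)
        return False

    # usage: os.kill(pid, signal.SIGTERM); then wait_for_exit(pid)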
mac os X 10.6.8 python 2.7.10 issues with direct typing of two-bytes utf-8 characters | 31,230,459 | 0 | 2 | 61 | 0 | python-2.7,utf-8,interactive-mode | I've found a partial solution to that issue: in the terminal.app settings, checking the 'escape non-ascii input' option lets python grab any utf-8 char; unfortunately, it prevents using them at the tcsh prompt as before; yet bash sees them as it should...
goodbye, tcsh! | 0 | 1 | 0 | 0 | 2015-07-05T12:32:00.000 | 1 | 0 | false | 31,230,376 | 0 | 0 | 0 | 1 | I'm running Mac OS X 10.6.8;
I had been using python 2.5.4 for 8 years and had NO problem, and neither had I with python 2.6 and python 3.1 as well;
but I recently had to install python 2.7.10, which has become the default interpreter, and now there are issues when the interpreter is running and I need to enter expressions with utf-8 chars in interactive mode: the terminal rings its bell, and, of course, the characters do not show;
yet any python script containing expressions involving utf-8 strings would still be interpreted as usual; it's just that I cannot type directly anything but 7-bit chars, even though I tweaked the site.py script to make sure sys.getdefaultencoding() would yield the 'utf-8' value;
at the tcsh or bash prompt, typing utf-8 works all right, even as arguments to a python -c command; it's just that no python interpreter likes it: none of them — 2.5, 2.6, 2.7... although I haven't given python 3 a try yet!
Can anybody help? |
How to downgrade python version on CentOS? | 31,235,259 | -1 | 1 | 8,690 | 0 | python,python-2.7,centos,sha | You can always install a different version of Python from source using make altinstall, and then run it either in a virtual environment, or just run your commands with the version-specific python<version> command.
A considerable amount of CentOS itself is written in Python, so changing the core version will most likely break system functionality. | 0 | 1 | 0 | 0 | 2015-07-05T21:14:00.000 | 2 | 1.2 | true | 31,235,059 | 0 | 0 | 0 | 0 | I have a dedicated web server which runs CentOS 6.6
I am running some script that uses Python SHA module and I think that this module is deprecated in the current Python version.
I am consider downgrading my Python installation so that I can use this module.
Is there a better option? If not, how should I do it?
These are my Python installation details:
rpm-python-4.8.0-38.el6_6.x86_64
dbus-python-0.83.0-6.1.el6.x86_64
gnome-python2-2.28.0-3.el6.x86_64
gnome-python2-canvas-2.28.0-3.el6.x86_64
libreport-python-2.0.9-21.el6.centos.x86_64
gnome-python2-applet-2.28.0-5.el6.x86_64
gnome-python2-gconf-2.28.0-3.el6.x86_64
gnome-python2-bonobo-2.28.0-3.el6.x86_64
python-urlgrabber-3.9.1-9.el6.noarch
python-tools-2.6.6-52.el6.x86_64
newt-python-0.52.11-3.el6.x86_64
python-ethtool-0.6-5.el6.x86_64
python-pycurl-7.19.0-8.el6.x86_64
python-docs-2.6.6-2.el6.noarch
gnome-python2-libegg-2.25.3-20.el6.x86_64
python-iwlib-0.1-1.2.el6.x86_64
libxml2-python-2.7.6-17.el6_6.1.x86_64
gnome-python2-gnome-2.28.0-3.el6.x86_64
python-iniparse-0.3.1-2.1.el6.noarch
gnome-python2-libwnck-2.28.0-5.el6.x86_64
libproxy-python-0.3.0-10.el6.x86_64
python-2.6.6-52.el6.x86_64
gnome-python2-gnomevfs-2.28.0-3.el6.x86_64
gnome-python2-desktop-2.28.0-5.el6.x86_64
gnome-python2-extras-2.25.3-20.el6.x86_64
abrt-addon-python-2.0.8-26.el6.centos.x86_64
at-spi-python-1.28.1-2.el6.centos.x86_64
python-libs-2.6.6-52.el6.x86_64
python-devel-2.6.6-52.el6.x86_64 |
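Rather than downgrading Python, the deprecated sha module can usually be replaced with hashlib, which has been available since Python 2.5; a minimal sketch of the swap:

    import hashlib

    # Old, deprecated style:
    #   import sha
    #   digest = sha.new(b'some data').hexdigest()

    # hashlib equivalent (works on Python 2.6 and later):
    digest = hashlib.sha1(b'some data').hexdigest()
    print(digest)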
how can we wire up cluster-based software using chef? | 31,248,915 | 2 | 5 | 348 | 0 | python,automation,chef-infra,orchestration | If you have a chef server, you can do a search for the node that runs the ambari-server recipe. Then you use the IP of that machine. Alternately, you can use a DNS name for the ambari-server, and then update your DNS entry to point to the new server when it is available.
Other options include using confd with etcd, or using consul. Each would allow you to update your config post-chef with the ip of the server. | 0 | 1 | 0 | 0 | 2015-07-06T08:54:00.000 | 3 | 0.132549 | false | 31,241,531 | 0 | 0 | 0 | 1 | As part of a platform setup orchestration we are using our python package to install various software packages on a cluster of machines in cloud.
We have the following scenario:
Out of many software packages, one is Ambari (which helps in managing the Hadoop platform).
It works as follows: 'n' cluster machines report to 1 ambari-server.
For each cluster machine to do that reporting, we have to install ambari-agent on it, modify its properties file with the ambari server it is supposed to report to, and start ambari-agent.
What we are able to do--
We were successful in installing the ambari server and the ambari agents separately on our cluster machines with the help of separate chef cookbooks.
What we are not able to do--
How can we modify each machine's ambari-agent properties file so that it points to our ambari server IP? In general, what is an elegant way to wire up cluster-based software as part of chef orchestration?
NB: the ambari-server is created on the fly, and hence its IP is only known at run time.
Is it possible? are there any alternatives to above problem?
Thanks |
Launch Python script from Swift App | 31,248,808 | 3 | 1 | 3,016 | 0 | python,swift,nstask | This should work:
system("python EXECUTABLE_PATH")
Josh | 0 | 1 | 0 | 0 | 2015-07-06T12:49:00.000 | 1 | 1.2 | true | 31,246,335 | 0 | 0 | 0 | 1 | I'm new to swift and I'm trying to run a Python file from it.
I already got the full path to the file, and my tries with NStask failed so far.
Now I'm somehow stuck launching the python executable with the path to the script as a parameter :-/ I already thought of just creating an .sh file with the appropriate command in it (python $filename) and launch that, but isn't there another way?
Of course I'm running OS X 10.10
Thanks for any help! |
Bad CPU type in executable when doing arch -i386 pip2 install skype4py | 31,281,242 | 0 | 1 | 1,867 | 0 | python,macos,segmentation-fault,skype4py | OK, I was not able to solve the problem with Skype4Py on Mac OS. But perhaps it will be useful for someone to know that I have found a replacement. I used a Ruby gem called skype. It works well on Mac OS. So, if you want to send a message from a script or anything else, just run gem install skype and start writing some ruby code :) | 0 | 1 | 0 | 0 | 2015-07-06T23:26:00.000 | 3 | 0 | false | 31,257,354 | 1 | 0 | 0 | 0 | I have a problem with the Skype4Py lib on Mac OS. As I know from the documentation on GitHub, on macOS skype4py must be installed with a specific arch. But when I try to use arch -i386 pip2 install skype4py I get the error message Bad CPU type in executable. I am not an experienced macOS user (I am working over remote control in TeamViewer), but what am I doing wrong? Also I tried using virtualenv, and at the start everything was OK, but when I call client.Attach() in the shell I get a segfault. Please help. Thanks in advance.
Async Tasks for Django and Gunicorn | 31,272,086 | 1 | 1 | 420 | 0 | python,django,multithreading,celery | I'm assuming you don't want to wait because you are using an external service (outside of your control) for sending email. If that's the case, then set up a local SMTP server as a relay. Many services such as Amazon SES, SendGrid, Mandrill/Mailchimp have directions on how to do it. The application will only have to wait on the delivery to localhost (which should be fast and is within your control). The final delivery will be forwarded on asynchronously to the request/response. SMTP servers are already built to handle delivery failures with retries, which is what you might gain by moving to Celery. | 0 | 1 | 0 | 0 | 2015-07-07T12:24:00.000 | 1 | 1.2 | true | 31,268,494 | 0 | 0 | 1 | 0 | I have a use case where I have to send_email to the user in my views. Now the user who submitted the form will not receive an HTTP response until the email has been sent. I do not want to make the user wait on the send_mail. So I want to send the mail asynchronously without caring about email errors. I am using celery for sending mail async, but I have read that it may be overkill for simpler tasks like this. How can I achieve the above task without using celery?
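A minimal sketch of the local-relay idea from the answer above: point Django's email backend at a relay on localhost and keep the send_mail call in the view. Host, port and addresses are assumptions.

    # settings.py -- deliver to a local relay (e.g. Postfix or an SES/SendGrid relay)
    EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
    EMAIL_HOST = 'localhost'   # assumed local relay
    EMAIL_PORT = 25

    # views.py -- the view only waits for the fast hand-off to localhost
    from django.core.mail import send_mail

    def submit_form(request):
        # ... handle the form here ...
        send_mail(
            subject='Thanks for your submission',
            message='We received your form.',
            from_email='noreply@example.com',
            recipient_list=['user@example.com'],
            fail_silently=True,   # ignore email errors, per the question
        )
        # return an HttpResponse / redirect here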
How to modify crontab to run python script? | 31,286,520 | 0 | 0 | 902 | 0 | python,linux,crontab,redhat | Thank you all guys , but I did a little research and I have found a solution , first you have to test sudo python to see if it works with the module , if not you have to do alias for the sudo you put it inside /etc/bashrc [ to make it system wide alias ] , alias sudo='sudo env PATH=$PATH LD_LIBRARY_PATH=$LD_LIBRARY_PATH ORACLE_HOME=$ORACLE_HOME TNS_ADMIN=$TNS_ADMIN'
Then you have to change crontab to call a script to assign these values to the variables , using source /the script && /usr/bin/python script.py | 0 | 1 | 0 | 1 | 2015-07-07T16:48:00.000 | 1 | 1.2 | true | 31,274,717 | 0 | 0 | 0 | 1 | I am using redhat linux platform
I was wondering why, when I use a python script inside crontab to run every 2 minutes, it won't work, even though when I monitor the crond logs using
tail /etc/sys/cron it shows that it called the script. I tried to add the path of python [I am using python2.6 -- so the path would be /usr/bin/python2.6]
in crontab -e [tried user and root, same problem]
*/2 * * * * /usr/bin/python2.6 FULLPATH/myscript.py |
How to set buffer size in pypcap | 31,293,136 | 0 | 2 | 840 | 0 | python,packet,sniffer | I studied the source code of pypcap and as far as I could see there was no way to set the buffer size from it.
Because pypcap is using the libpcap library, I changed the default buffer size in the source code of libpcap and reinstalled it from source. That solved the problem as it seems.
Tcpdump sets the buffer size by calling the set_buffer_size() method of libpcap, but it seems that pypcap cannot do that.
Edit: The buffer size variable is located in the pcap-linux.c file, and the name is opt.buffer_size. It is 2MB by default (2*1024*1024 in source code) | 0 | 1 | 1 | 0 | 2015-07-08T09:59:00.000 | 2 | 1.2 | true | 31,289,288 | 0 | 0 | 0 | 0 | I created a packet sniffer using the pypcap Python library (in Linux). Using the .stats() method of the pypcap library, I see that from time to time a few packets get dropped by the kernel when the network is busy. Is it possible to increase the buffer size for the pypcap object so that fewer packets get dropped (as is possible in tcpdump)?
How should a Twisted AMP Deferred be cancelled? | 31,305,323 | 2 | 1 | 186 | 0 | python,twisted,deferred,asynchronous-messaging-protocol | No. There is no way, presently, to cancel an AMP request.
You can't cancel AMP requests because there is no way defined in AMP at the wire-protocol level to send a message to the remote server telling it to stop processing. This would be an interesting feature-addition for AMP, but if it were to be added, you would not add it by allowing users to pass in their own cancellers; rather, AMP itself would have to create a cancellation function that sent a "cancel" command.
Finally, adding this feature would have to be done very carefully because once a request is sent, there's no guarantee that it would not have been fully processed; chances are usually good that by the time the cancellation request is received and processed by the remote end, the remote end has already finished processing and sent a reply. So AMP should implement asynchronous cancellation. | 0 | 1 | 1 | 0 | 2015-07-08T22:15:00.000 | 1 | 1.2 | true | 31,304,788 | 0 | 0 | 0 | 1 | I have a Twisted client/server application where a client asks multiple servers for additional work to be done using AMP. The first server to respond to the client wins -- the other outstanding client requests should be cancelled.
Deferred objects support cancel() and a cancellor function may be passed to the Deferred's constructor. However, AMP's sendRemote() api doesn't support passing a cancellor function. Additionally, I'd want the cancellor function to not only stop the local request from processing upon completion but also remove the request from the remote server.
AMP's BoxDispatcher does have a stopReceivingBoxes method, but that causes all deferreds to error out (not quite what I want).
Is there a way to cancel AMP requests? |
No module named google.protobuf | 45,141,001 | 20 | 32 | 96,128 | 0 | python,installation,protocols,protocol-buffers,deep-dream | Locating the google directory in the site-packages directory (for the proper latter directory, of course) and manually creating an (empty) __init__.py resolved this issue for me.
(Note that within this directory is the protobuf directory but my installation of Python 2.7 did not accept the new-style packages so the __init__.py was required, even if empty, to identify the folder as a package folder.)
...In case this helps anyone in the future. | 0 | 1 | 0 | 0 | 2015-07-09T05:24:00.000 | 9 | 1 | false | 31,308,812 | 0 | 0 | 0 | 5 | I am trying to run Google's deep dream. For some odd reason I keep getting
ImportError: No module named google.protobuf
after trying to import protobuf. I have installed protobuf using sudo install protobuf. I am running python 2.7 OSX Yosemite 10.10.3.
I think it may be a deployment location issue but i cant find anything on the web about it. Currently deploying to /usr/local/lib/python2.7/site-packages. |
No module named google.protobuf | 31,325,403 | 2 | 32 | 96,128 | 0 | python,installation,protocols,protocol-buffers,deep-dream | According to your comments, you have multiple versions of python.
What could have happened is that you installed the package with the pip of another python.
pip is actually a link to a script that downloads and installs your package.
Two possible solutions:
go to $(PYTHONPATH)/Scripts and run pip from that folder; that way you ensure
you use the correct pip
create an alias to pip which points to $(PYTHONPATH)/Scripts/pip and then run pip install
How will you know it worked?
Simple: if the new pip is used, the package will be installed successfully; otherwise the package is already installed | 0 | 1 | 0 | 0 | 2015-07-09T05:24:00.000 | 9 | 0.044415 | false | 31,308,812 | 0 | 0 | 0 | 0 | I am trying to run Google's deep dream. For some odd reason I keep getting
ImportError: No module named google.protobuf
after trying to import protobuf. I have installed protobuf using sudo install protobuf. I am running python 2.7 OSX Yosemite 10.10.3.
I think it may be a deployment location issue but i cant find anything on the web about it. Currently deploying to /usr/local/lib/python2.7/site-packages. |
No module named google.protobuf | 45,384,713 | 0 | 32 | 96,128 | 0 | python,installation,protocols,protocol-buffers,deep-dream | In my case, MacOS has the permission control.
sudo -H pip3 install protobuf | 0 | 1 | 0 | 0 | 2015-07-09T05:24:00.000 | 9 | 0 | false | 31,308,812 | 0 | 0 | 0 | 5 | I am trying to run Google's deep dream. For some odd reason I keep getting
ImportError: No module named google.protobuf
after trying to import protobuf. I have installed protobuf using sudo install protobuf. I am running python 2.7 OSX Yosemite 10.10.3.
I think it may be a deployment location issue but i cant find anything on the web about it. Currently deploying to /usr/local/lib/python2.7/site-packages. |
No module named google.protobuf | 52,287,475 | 3 | 32 | 96,128 | 0 | python,installation,protocols,protocol-buffers,deep-dream | when I command pip install protobuf, I get the error:
Cannot uninstall 'six'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
If you have the same problem as me, you should do the following commands.
pip install --ignore-installed six
sudo pip install protobuf | 0 | 1 | 0 | 0 | 2015-07-09T05:24:00.000 | 9 | 0.066568 | false | 31,308,812 | 0 | 0 | 0 | 5 | I am trying to run Google's deep dream. For some odd reason I keep getting
ImportError: No module named google.protobuf
after trying to import protobuf. I have installed protobuf using sudo install protobuf. I am running python 2.7 OSX Yosemite 10.10.3.
I think it may be a deployment location issue but i cant find anything on the web about it. Currently deploying to /usr/local/lib/python2.7/site-packages. |
No module named google.protobuf | 46,490,849 | 0 | 32 | 96,128 | 0 | python,installation,protocols,protocol-buffers,deep-dream | I had this problem too when I had a google.py file in my project files.
It is quite easy to reproduce.
main.py: import tensorflow as tf
google.py: print("Protobuf error due to google.py")
Not sure if this is a bug and where to report it. | 0 | 1 | 0 | 0 | 2015-07-09T05:24:00.000 | 9 | 0 | false | 31,308,812 | 0 | 0 | 0 | 5 | I am trying to run Google's deep dream. For some odd reason I keep getting
ImportError: No module named google.protobuf
after trying to import protobuf. I have installed protobuf using sudo install protobuf. I am running python 2.7 OSX Yosemite 10.10.3.
I think it may be a deployment location issue but i cant find anything on the web about it. Currently deploying to /usr/local/lib/python2.7/site-packages. |
AWS ETL with python scripts | 31,363,788 | 0 | 1 | 671 | 0 | python,amazon-web-services,amazon-s3,amazon-emr,amazon-data-pipeline | The first thing you want to do is turn on 'termination protection' on the EMR cluster as soon as it is launched by Data Pipeline (this can be scripted too).
Then you can log on to the 'Master instance'. This is under 'hardware' pane under EMR cluster details. (you can also search in EC2 console by cluster id).
You also have to define a 'key' so that you can SSH to the Master.
Once you log on to the master, you can look under /mnt/var/log/hadoop/steps/ for logs - or /mnt/var/lib/hadoop/.. for actual artifacts. You can browse hdfs using HDFS utils.
The logs (if they are written to stdout or stderr), are already moved to S3. If you want to move additional files, you have to have write a script and run it using 'script-runner'. You can copy large amount of files using 's3distcp'. | 0 | 1 | 0 | 0 | 2015-07-10T16:41:00.000 | 1 | 0 | false | 31,346,102 | 0 | 0 | 0 | 1 | I am trying to create a basic ETL on AWS platform, which uses python.
In a S3 bucket (lets call it "A") I have lots of raw log files, gzipped.
What I would like to do is to have it periodically (=data pipeline) unzipped, processed by a python script which will reformat the structure of every line, and output it to another S3 bucket ("B"), preferably as gzips of the same log files originating in the same gzip in A, but that's not mandatory.
I wrote the python script which does with it needs to do (receives each line from stdin) and outputs to stdout (or stderr, if a line isn't valid. in this case, i'd like it to be written to another bucket, "C").
I was fiddling around with the data pipeline, tried to run a shell command job and also a hive job for sequencing with the python script.
The EMR cluster was created, ran, finished, no fails or errors, but also no logs created, and I can't understand what is wrong.
In addition, I'd like the original logs be removed after processed and written to the destination or erroneous logs buckets.
Does anyone have any experience with such a configuration, or words of advice? |
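A minimal sketch of the kind of line filter the question describes (read stdin, write reformatted lines to stdout, send invalid lines to stderr); the reformat_line logic is a placeholder assumption.

    import sys

    def reformat_line(line):
        # placeholder: restructure the raw log line; raise ValueError if invalid
        parts = line.rstrip('\n').split(' ')
        if len(parts) < 3:
            raise ValueError('too few fields')
        return '\t'.join(parts[:3])

    for line in sys.stdin:
        try:
            sys.stdout.write(reformat_line(line) + '\n')
        except ValueError:
            sys.stderr.write(line)   # collect these separately (bucket "C")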
python astonishing IOError on windows creating files - Errno 13 Permission | 31,347,743 | 0 | 0 | 786 | 0 | python,windows,csv | Damn it, it's already working!, it has been like saying i cannot find my glasses and to have them on.
THanks Brian, it wasn't that the error. The problem was that in my code i was dealing with ubuntu separator besides the full path to the csv output file was completely correct. But I replaced it with os.sep , and started working like a charm :)
Thanks again! | 0 | 1 | 0 | 0 | 2015-07-10T17:58:00.000 | 2 | 1.2 | true | 31,347,339 | 0 | 0 | 0 | 1 | I have to run my python script on windows too, and then it began the problems.
Here I'm scraping html locally saved files, and then saving their .csv versions with the data I want. I ran it on my ubuntu and goes for +100k files with no problems. But when I go on windows, it says:
IOError: [Errno 13] Permission denied
It is not a permissions problem; I've rechecked it and run it with 'Administrator' privileges, and it makes no difference.
It breaks exactly on the line where I open the file:
with open(of, 'w') as output:
...
I've tried to create the same first file of the 100k from the python console and from a new blank stupid script in the same directory as my code, and it works...
So, it seems it is doable.
Then I've tried output = open(of, 'w') instead of the above code, but nothing.
The weird thing is that it creates a directory with the same name as the file, and then breaks with the IOError.
I've started thinking that it could be a csv thing... naaaeehh. Apart from other tries that didn't help me, the most interesting thing is that with the following code:
with open(of + '.txt', 'w') as output:
...
the astonishing thing happens: it creates a directory ending in .csv AND a file ending in .csv.txt with the right data!
Aargh!
Changing the file open mode to 'w+' or 'wb' didn't make a difference either.
Any ideas? |
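A small sketch of the fix described in the answer above: build the output path with os.path.join (or os.sep) instead of hard-coding the Ubuntu '/' separator, so the same code works on Windows; the directory and file names are assumptions.

    import os

    out_dir = os.path.join('C:\\', 'data', 'csv_out')   # assumed output directory
    if not os.path.isdir(out_dir):
        os.makedirs(out_dir)

    of = os.path.join(out_dir, 'page_0001.csv')          # assumed file name
    with open(of, 'w') as output:
        output.write('col_a,col_b\n')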
Docker not responding to CTRL+C in terminal | 31,355,539 | 1 | 22 | 14,239 | 0 | linux,centos,docker,ipython-notebook | @maybeg's answer already explains very well why this might be happening.
Regarding stopping the unresponsive container, another solution is to simply issue a docker stop <container-id> in another terminal. As opposed to CTRL-C, docker stop does not send a SIGINT but a SIGTERM signal, to which the process might react differently.
Usage: docker stop [OPTIONS] CONTAINER [CONTAINER...]
Stop a running container by sending SIGTERM and then SIGKILL after a grace period
If that fails, use docker kill <container-id> which sends a SIGKILL immediately. | 0 | 1 | 0 | 0 | 2015-07-10T21:10:00.000 | 5 | 0.039979 | false | 31,350,335 | 0 | 0 | 0 | 1 | Having an issue with Docker at the moment; I'm using it to run an image that launches an ipython notebook on startup. I'm looking to make some edits to ipython notebook itself, so I need to close it after launch.
However, hitting CTRL+C in the terminal just inputs "^C" as a string. There seems to be no real way of using CTRL+C to actually close the ipython notebook instance.
Would anyone have any clues as to what can cause this, or know of any solutions for it? |
kill a shell created by vim when ctrl-C doesn't work | 31,391,726 | 1 | 3 | 371 | 0 | python,shell,unix,vim | When you do :! in Vim, you effectively put Vim into background and the running process, in this case py.test, gets the focus. That means you can't tell Vim to kill the process for you since Vim is not getting keystrokes from you.
Ctrl-Z puts Vim into background while running py.test because Vim is the parent process of py.test. Thus the shell goes through the chain then puts all children as well as the parent into background.
I would suggest that you open another terminal window and do all the housekeeping chores there. | 0 | 1 | 0 | 1 | 2015-07-13T04:56:00.000 | 1 | 0.197375 | false | 31,375,628 | 0 | 0 | 0 | 1 | I'm writing some threaded python code in vim. When I run my tests, with
:! py.test test_me.py
Sometimes they hang and cannot be killed with ctrl-C. So I have to background vim (actually the shell the tests are running in) and pkill py.test. Is there a better way to kill the hanging test suite?
I tried mapping :map ,k:! pkill py.test but this doesn't work since while the tests are running my input is going to the shell running the test, not vim.
EDIT:
I'm looking for a way to kill the test process that is quicker than ctrl-Z, pkill py.test, fg <cr> to return to editing. Ideally just a hotkey. |
GAE middlewares for modules? | 31,397,559 | 0 | 0 | 59 | 0 | google-app-engine,middleware,google-app-engine-python | The way I approached such scenario (in a python-only project, donno about php) was to use a custom handler (inheriting webapp2.RequestHandler which I was already using for session support). In its customized dispatch() method the user info is collected and stored in the handler object itself.
The implementation of the handler exists in only one version controlled file, but which is symlinked (for GAE accessibility) in each module that references the handler. This way I don't have to manage multiple independent copies of the user and session verification code. | 0 | 1 | 0 | 0 | 2015-07-13T08:08:00.000 | 1 | 0 | false | 31,378,288 | 0 | 0 | 1 | 1 | Assume that I have few modules on my GAE project (say A, B, C). They shares the users database and sessions.
For example: module A will manage the login/logout actions (through cookies), module B,C will handle other actions. FYI, those modules are developed in both PHP and Python.
Now, I do not want to duplicate the user & session verification code in all 3 modules.
Is there any way for me to put a middleware that runs before all 3 modules for each request? Such as X: it will add headers to each request to set the user id and some user information if the user has logged in.
I.e., after I implement the above idea, each request will run through 1 of the 3 cases below:
X, A
X, B
X, C
What do you say?
Thanks
Update 1: more information
By middleware, I mean request middleware.
If X is a middleware, then it will be run before the request is passed to the app (or module); it will only change the request, such as:
Do some authentication actions
Add some headers:
X-User-Id: for authorized user id
X-User-Scopes: for scopes of authorized user
etc ...
And of course, it is independent of the module's internal language (PHP or Python or Java or ...)
The X middleware should be configured at app.yaml. |
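A minimal sketch of the base-handler approach from the accepted answer, using webapp2: dispatch() loads the user once and stores it on the handler, so every module that symlinks this file gets the same behaviour. The cookie name and lookup helper are assumptions.

    import webapp2

    def lookup_user(user_id):
        # assumed helper: load the user entity from the shared datastore
        return {'id': user_id} if user_id else None

    class BaseHandler(webapp2.RequestHandler):
        def dispatch(self):
            # Collect user info before the real handler runs.
            user_id = self.request.cookies.get('session_user_id')  # assumed cookie
            self.current_user = lookup_user(user_id)
            webapp2.RequestHandler.dispatch(self)

    class TasksHandler(BaseHandler):
        def get(self):
            if self.current_user is None:
                self.abort(401)
            self.response.write('hello %s' % self.current_user['id'])

    app = webapp2.WSGIApplication([('/tasks', TasksHandler)])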
Accept "Content-Encoding: gzip" in Tornado | 31,399,949 | 0 | 2 | 1,594 | 0 | python,tornado | The only way is to change the parse_body_arguments function in the tornado.httputil file. Otherwise, remove the Content-Encoding entry from the request headers. | 0 | 1 | 0 | 0 | 2015-07-14T06:56:00.000 | 2 | 0 | false | 31,399,735 | 0 | 0 | 0 | 0 | I'm processing requests in Tornado that come with a Content-Encoding: gzip header in the request body. The problem is that Tornado shows a warning:
[W 150713 17:22:11 httputil:687] Unsupported Content-Encoding: gzip
I'm doing the unzip operation inside the code and it works like a charm but I'd like to get rid of the message.
Is there any way of accepting that Content-Encoding in Tornado?
Thanks! |
Accept "Content-Encoding: gzip" in Tornado | 31,408,024 | 4 | 2 | 1,594 | 0 | python,tornado | You must opt-in to handling of gzipped requests by passing decompress_request=True to the HTTPServer constructor (or Application.listen). | 0 | 1 | 0 | 0 | 2015-07-14T06:56:00.000 | 2 | 0.379949 | false | 31,399,735 | 0 | 0 | 0 | 2 | I'm processing requests in Tornado that comes with Content-Encoding: gzip header in the body request. The problem is that Tornado shows a warning:
[W 150713 17:22:11 httputil:687] Unsupported Content-Encoding: gzip
I'm doing the unzip operation inside the code and it works like a charm but I'd like to get rid of the message.
Is there any way of accepting that Content-Encoding in Tornado?
Thanks! |
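A minimal sketch of the opt-in described in the answer above; with decompress_request=True Tornado inflates gzipped request bodies itself and the warning goes away (Tornado 4+). The handler and port are assumptions.

    import tornado.httpserver
    import tornado.ioloop
    import tornado.web

    class UploadHandler(tornado.web.RequestHandler):
        def post(self):
            # self.request.body is already decompressed here
            self.write('got %d bytes' % len(self.request.body))

    app = tornado.web.Application([(r'/upload', UploadHandler)])
    server = tornado.httpserver.HTTPServer(app, decompress_request=True)
    server.listen(8888)
    tornado.ioloop.IOLoop.current().start()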
Is there any parallel way of accessing Netcdf files in Python | 31,568,148 | 2 | 5 | 1,241 | 0 | python,io,parallel-processing,netcdf | It's too bad PyPnetcdf is not a bit more mature. I see hard-coded paths and abandoned domain names. It doesn't look like it will take a lot to get something compiled, but then there's the issue of getting it to actually work...
in setup.py you should change the library_dirs_list and include_dirs_list to point to the places on your system where Northwestern/Argonne Parallel-NetCDF is installed and where your MPI distribution is installed.
then one will have to go through and update the way pypnetcdf calls pnetcdf. A few years back (quite a few, actually) we promoted a lot of types to larger versions. | 0 | 1 | 0 | 0 | 2015-07-15T03:16:00.000 | 2 | 0.197375 | false | 31,420,879 | 0 | 0 | 0 | 1 | Is there any way of doing parallel IO for Netcdf files in Python?
I understand that there is a project called PyPNetCDF, but apparently it's old, not updated and doesn't seem to work at all. Has anyone had any success with parallel IO with NetCDF in Python at all?
Any help is greatly appreciated |
travis setup heroku command on Windows 7 64 bit | 31,445,471 | 0 | 1 | 456 | 0 | python,ruby,windows,heroku,travis-ci | If you hadn't had Heroku Toolbelt setup to the $PATH environment variable during installation, here are some steps to check:
Check if Heroku toolbelt is set in PATH variable. If not, cd to your Heroku toolbelt installation folder, then click on the address bar and copy it.
Go to the Control Panel, then click System and Advanced System Protection.
Go to Environment Variables, then look for $PATH in the System Variables
After the last program in the variable, put a ; then paste in your Heroku CLI folder and click OK. (This requires cmd to be restarted manually)
Login to Heroku CLI
grab the token key from heroku auth:token
run travis setup heroku; if the setup goes smoothly, you shouldn't get the "command not found" error, and it will prompt you for the Heroku auth key. It will ask whether you want to encrypt the auth key (highly recommended) and verify the information you provided with the toolbelt and Travis CLI.
commit changes
you should be able to get your app up and running within your tests. | 0 | 1 | 0 | 1 | 2015-07-15T04:55:00.000 | 1 | 0 | false | 31,421,793 | 0 | 0 | 1 | 1 | Hi there I'm trying to deploy my python app using Travis CI but I'm running into problems when I run the "travis setup heroku" command in the cmd prompt.
I'm in my project's root directory, there is an existing ".travis.yml" file in that root directory.
I've also installed ruby correctly and travis correcty because when I run:
"ruby -v" I get "ruby 2.2.2p95 (2015-04-13 revision 50295) [x64-mingw32]"
"travis -v" I get "1.7.7"
When I run "travis setup heroku" I get this message "The system cannot find the path specified" then prompts me for a "Heroku API token:"
What's the issue? |
How to copy first 100 files from a directory of thousands of files using python? | 31,427,309 | -1 | 1 | 2,371 | 0 | python | You may try to read the directory directly (as a file) and pick data from there. How successful that would be is a question of which filesystem you are on. First try the ls or dir commands to see which returns faster: os.listdir() or that funny little program. You'll see that both are in trouble. The key here is that your directory is flooded with new files, which creates a kind of bottleneck. | 0 | 1 | 0 | 0 | 2015-07-15T09:28:00.000 | 2 | -0.099668 | false | 31,426,536 | 1 | 0 | 0 | 0 | I have a huge directory that keeps getting updated all the time. I am trying to list only the latest 100 files in the directory using python. I tried using os.listdir(), but when the size of the directory approaches 100,000 files, it seems as though listdir() crashes (or I have not waited long enough). I only need the first 100 files (or filenames) for further processing, so I don't want listdir() to be filled with all 100,000 files. Is there a good way of doing this in Python?
PS: I am very new to programming |
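A sketch of one way to grab just the first 100 entries without building the full 100,000-entry list, assuming Python 3.5+ (or the scandir backport on 2.7); note that getting the truly latest files by modification time would still require scanning everything.

    import os
    from itertools import islice

    def first_n_entries(path, n=100):
        # os.scandir is a lazy iterator, so only the first n entries are touched
        return [entry.name for entry in islice(os.scandir(path), n)]

    print(first_n_entries('/data/huge_dir'))   # assumed directory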
/usr/include folder missing in mac | 53,036,986 | 4 | 1 | 3,414 | 0 | python,xcode,macos,terminal | Try on 10.14:
sudo installer -pkg /Library/Developer/CommandLineTools/Packages/macOS_SDK_headers_for_macOS_10.14.pkg -target / | 0 | 1 | 0 | 0 | 2015-07-15T14:37:00.000 | 2 | 0.379949 | false | 31,433,422 | 0 | 0 | 0 | 1 | I've tried pretty much everything on stackoverflow and other forums to get the /usr/include/ folder on my mac (currently using OS X 10.9.5)
Re-installed Xcode and command line tools (actually, command line tool wasn't one of the downloads available - so I'm guessing it's was already downloaded)
tried /Applications/Install Xcode.app command line on terminal
I haven't tested if there is no standard library on Xcode, but I'm only trying to build cloudera/hue from github and it won't install because there is no /usr/include/python2.7 (and couldn't really ask their forum because the error isn't coming from cloudera/hue).
How do I get the /usr/include folder? |
Where should I put the .pdbrc file on windows so that it is globally visible? | 35,230,270 | 1 | 2 | 274 | 0 | python | If putting the file in C:\Users\<your_user> doesn't work, additionally try setting your HOME environment variable to C:\Users\<your_user>. Worked for me.
Thanks to @WayneWerner for the solution. | 0 | 1 | 0 | 0 | 2015-07-15T18:39:00.000 | 2 | 0.099668 | false | 31,438,478 | 1 | 0 | 0 | 2 | I am using .pdbrc to store my debugging alias. And I want it to be available globally. Where should this file be on windows? |
Where should I put the .pdbrc file on windows so that it is globally visible? | 31,438,577 | 1 | 2 | 274 | 0 | python | After several tries, I found it.
You can put it in C:\users\your_win_user\.pdbrc | 0 | 1 | 0 | 0 | 2015-07-15T18:39:00.000 | 2 | 0.099668 | false | 31,438,478 | 1 | 0 | 0 | 2 | I am using .pdbrc to store my debugging alias. And I want it to be available globally. Where should this file be on windows? |
Where to run python file on Remote Debian Sever | 31,448,678 | 0 | 0 | 85 | 0 | python,debian,remote-server,directory-structure | Basically you're stuffed.
Your problem is:
You have a script, which produces no error messages, no logging, and no other diagnostic information other than a single timestamp, on an output file.
Something has gone wrong.
In this case, you have no means of finding out what the issue was. I suggest any of the following:
either adding logging or diagnostic information to the script.
Contacting the developer of the script and getting them to find a way of determining the issue.
Delete the evidently worthless script if you can't do either option 1, or 2, above, and consider an alternative way of doing your task.
Now, if the script does have logging, or other diagnostic data, but you delete or throw them away, then that's your problem and you need to stop discarding this useful information.
EDIT (following comment).
At a basic level, you should print to either stdout, or to stderr, that alone will give you a huge amount of information. Just things like, "Discovered 314 records, we need to save 240 records", "Opened file name X.csv, Open file succeeded (or failed, as the case may be)", "Error: whatever", "Saved 2315 records to CSV". You should be able to determine if those numbers make sense. (There were 314 records, but it determined 240 of them should be saved, yet it saved 2315? What went wrong!? Time for more logging or investigation!)
Ideally, though, you should take a look at the logging module in python as that will let you log stack traces effectively, show line numbers, the function you're logging in, and the like. Using the logging module allows you to specify logging levels (eg, DEBUG, INFO, WARN, ERROR), and to filter them or redirect them to file or the console, as you may choose, without changing the logging statements themselves.
When you have a problem (crash, or whatever), you'll be able to identify roughly where the error occurred, giving you information to either increase the logging in that area, or to be able to reason about what must have happened (though you should probably then add enough logging so that the logging will tell you what happened clearly and unambiguously). | 0 | 1 | 0 | 1 | 2015-07-16T07:30:00.000 | 1 | 0 | false | 31,447,971 | 0 | 0 | 0 | 1 | I have written a python script that is designed to run forever. I load the script into a folder that I made on my remote server, which is running Debian Wheezy 7.0. The code runs, but it will only run for 3 to 4 hours and then it just stops, and I do not have any log information on it stopping. I come back and check the running process and it's not there. Is this a problem with where I am running the python file from? The script simply has a while loop and writes to an external csv file. The file runs from /var/pythonscript. The folder is a custom folder that I made. There is no error that I receive, and the only way I know how long the code runs is by the timestamp on the csv file. I run the .py file by ssh-ing to the server and running sudo python scriptname. I also would like to know the best place in the Linux Debian directory tree to run python files from and the limitations concerning that. Any help would be much appreciated. |
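A minimal sketch of the logging setup suggested in the answer above, so the long-running loop leaves a trail (including full tracebacks) instead of dying silently; the file path and loop body are assumptions.

    import logging
    import time

    logging.basicConfig(
        filename='/var/log/myscript.log',             # assumed log location
        level=logging.INFO,
        format='%(asctime)s %(levelname)s %(message)s',
    )

    logging.info('script starting')
    while True:
        try:
            # ... do the real work and write the csv row here ...
            logging.info('wrote one csv row')
        except Exception:
            # logs the full stack trace, then keeps the loop going
            logging.exception('iteration failed')
        time.sleep(60)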
Access local files from locally running http server | 66,746,290 | 0 | 3 | 3,484 | 0 | python,web | Keep the files you want to access from localhost in some folder.
In the command prompt, go to that location and type
python -m http.server 8080
Now type localhost:8080 in the browser and you will be able to access the files in that folder.
If you want to use some js files for particular html files, then
<script src="http://localhost:8080/main.js"></script>
make sure you run this program on another port | 0 | 1 | 0 | 0 | 2015-07-16T20:56:00.000 | 4 | 0 | false | 31,464,366 | 0 | 0 | 0 | 1 | I want to access files in my local machine by using urls. For example "file:///usr/local/home/thapaliya/constants.py". What would be the best way to achieve this? |
Installation of py2 | 31,472,969 | -2 | 1 | 83 | 0 | python,python-2.7,py2exe | Just unpack this exe with tool like 7-zip and you can run py2exe from resulting folder. | 0 | 1 | 0 | 0 | 2015-07-17T09:42:00.000 | 1 | -0.379949 | false | 31,472,881 | 1 | 0 | 0 | 1 | I want to transform a python script in a executable file. That is why, I want to install py2exe
When I try to install the file "py2exe-0.6.9.win32-py2.7.exe", I got the message "Python version 2.7 required, which was not found in the registry"
I suspect that py2exe is not finding my python.exe file (it asks me for the python directory but I cannot enter anything).
Python 2.7.9 is installed on my laptop in the My Documents folder (and I cannot move that path)!
I use Windows 8.
Thank you a lot for your help and for your answer |
Computing an index that accounts for score and date within Google App Engine Datastore | 31,478,203 | 2 | 0 | 68 | 0 | python,google-app-engine,google-bigquery,google-cloud-datastore,google-prediction | Such a system is often called "frecency", and there's a number of ways to do it. One way is to have votes 'decay' over time; I've implemented this in the past on App Engine by storing a current score and a last-updated; any vote applies an exponential decay to the score based on the last-updated time, before storing both, and a background process runs a few times a day to update the score and decay time of any posts that haven't received votes in a while. Thus, a post's score always tends towards 0 unless it consistently receives upvotes.
Another, even simpler system, is to serial-number posts. Whenever someone upvotes a post, increment its number. Thus, the natural ordering is by creation order, but votes serve to 'reshuffle' things, putting more upvoted posts ahead of newer but less voted posts. | 0 | 1 | 0 | 0 | 2015-07-17T14:11:00.000 | 1 | 1.2 | true | 31,477,842 | 0 | 0 | 1 | 1 | I'm working on an Google App Engine (python) based site that allows for user generated content, and voting (like/dislike) on that content.
Our designer has, rather nebulously, spec'd that the front page should be a balance between recent content and popular content, probably with the assumption that these are just creating a score value that weights likes/dislikes vs time-since-creation. Ultimately, the goals are (1) bad content gets filtered out somewhat quickly, (2) content that continues to be popular stays up longer, and (3) new content has a chance at staying long enough to get enough votes to determine if its good or bad.
I can easily compute a score based on likes/dislikes. But incorporating the time factor to produce a single score that can be indexed doesn't seem feasible. I would essentially need to reindex all the content every day to adjust its score, which seems cost prohibitive once we have any sizable amount of content. So, I'm at a loss for potential solutions.
I've also suggested something where where we time box it (all time, daily, weekly), but he says users are unlikely to look at the tabs other than the default view. Also, if I filtered based on the last week, I'd need to sort on time, and then the secondary popularity sort would essentially be meaningless since submissions times would be virtually unique.
Any suggestions on solutions that I might be overlooking?
Would something like Google's Prediction API or BigQuery be able to handle this better? |
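A small sketch of the "decay on vote" bookkeeping described in the first answer: each stored item keeps a score and a last-updated timestamp, and every vote first decays the old score before adding the new one. The half-life constant is an assumption to tune.

    import math
    import time

    HALF_LIFE_SECONDS = 2 * 24 * 3600.0   # assumed: score halves every two days

    def apply_vote(score, last_updated, vote, now=None):
        """Decay `score` from `last_updated` to now, then add `vote` (+1/-1)."""
        now = time.time() if now is None else now
        elapsed = max(0.0, now - last_updated)
        decayed = score * math.exp(-math.log(2) * elapsed / HALF_LIFE_SECONDS)
        return decayed + vote, now

    # usage: store the returned (score, timestamp) back on the post entity
    score, ts = apply_vote(score=4.2, last_updated=time.time() - 86400, vote=1)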
Running python in the background and feeding data | 31,479,959 | 0 | 0 | 49 | 0 | python,linux,ssh,sftp | You can try to use a client-server or sockets approach. Your remote PC runs a server that listens for commands or data coming in. Your client or local computer can send commands to the port and IP that the remote PC is listening on. The server then parses the data coming in, looks at whatever commands you have defined and executes them accordingly. | 0 | 1 | 0 | 1 | 2015-07-17T15:49:00.000 | 1 | 0 | false | 31,479,763 | 0 | 0 | 0 | 1 | I have this python setup using objects that can perform specific tasks for the I2C protocol. Rather than having the script create objects and run a single task when run from a command line command, is there a way to have the objects 'stay alive' in the background and somehow feed the program new data from the command line?
The general idea is to have something running on a remote pc and use ssh to send commands (new data) over to the program.
One idea I had was to have the program constantly check (infinite loop) for a data file containing a set of tasks to perform and run those when it exists. But it seems like it could go awry if I were to sftp a new data file over because the program could be reading the one that already exists and cause undesirable effects.
I'm sure there are many better ways to go about a task like this, any help will be appreciated. |
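A minimal sketch of the client-server idea from the answer above: a small TCP server keeps the long-lived I2C objects in memory and executes simple text commands sent over ssh port-forwarding or netcat; the command protocol, task registry and port are assumptions.

    import socket

    i2c_tasks = {}   # assumed: name -> already-constructed task object with .run()

    def handle_command(line):
        # e.g. "run read_temperature" -- purely illustrative protocol
        parts = line.strip().split()
        if len(parts) == 2 and parts[0] == 'run' and parts[1] in i2c_tasks:
            i2c_tasks[parts[1]].run()
            return 'ok\n'
        return 'unknown command\n'

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(('0.0.0.0', 9999))   # assumed port
    server.listen(1)
    while True:
        conn, _ = server.accept()
        data = conn.recv(1024).decode('utf-8', 'ignore')
        conn.sendall(handle_command(data).encode('utf-8'))
        conn.close()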
subprocess.popen detached from master (Linux) | 31,484,229 | 0 | 3 | 5,358 | 0 | python,linux,subprocess,popen | fork the subprocs using the NOHUP option | 0 | 1 | 0 | 0 | 2015-07-17T18:27:00.000 | 4 | 0 | false | 31,482,397 | 0 | 0 | 0 | 1 | I am trying to open a subprocess but have it be detached from the parent script that called it. Right now if I call subprocess.popen and the parent script crashes the subprocess dies as well.
I know there are a couple of options for windows but I have not found anything for *nix.
I also don't need to call this using subprocess. All I need is to be able to call another process detached and get the pid. |
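A minimal sketch of detaching a child on Linux without nohup, by putting it in its own session so it survives the parent crashing; the command is an assumption, and start_new_session requires Python 3.2+ (on 2.x, preexec_fn=os.setsid does the same thing).

    import os
    import subprocess

    with open(os.devnull, 'wb') as devnull:
        proc = subprocess.Popen(
            ['python', '/path/to/worker.py'],   # assumed command
            stdin=devnull,
            stdout=devnull,
            stderr=devnull,
            start_new_session=True,             # detaches from our session
        )

    print('detached child pid:', proc.pid)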
How do I redirect and pass my Google API data after handling it in my Oauth2callback handler on Google App Engine | 31,496,649 | 1 | 0 | 37 | 0 | python-2.7,google-app-engine,oauth-2.0 | I think I found a better way of doing it, I just use the oauth callback to redirect only with no data, and then on the redirect handler I access the API data. | 0 | 1 | 1 | 0 | 2015-07-18T23:34:00.000 | 1 | 0.197375 | false | 31,496,583 | 0 | 0 | 1 | 1 | My Oauth2Callback handler is able to access the Google API data I want - I want to know the best way to get this data to my other handler so it can use the data I've acquired.
I figure I can add it to the datastore, or also perform redirect with the data. Is there a "best way" of doing this? For a redirect is there a better way than adding it to query string? |
osx - dyld: Library not loaded Reason: image not found - Python Google Speech Recognition API | 31,508,159 | 0 | 4 | 4,422 | 0 | python-2.7,pycharm,speech | Figured it out - I just forgot to install Homebrew | 0 | 1 | 0 | 0 | 2015-07-19T01:44:00.000 | 2 | 0 | false | 31,497,217 | 0 | 0 | 0 | 1 | When I try to use the Google Speech Rec API I get this error message. Any help?
dyld: Library not loaded: /usr/local/Cellar/flac/1.3.1/lib/libFLAC.8.dylib
Referenced from: /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/speech_recognition/flac-mac
Reason: image not found
I'm using PyCharm.
I have tried copy pasting and uninstalling and reinstalling but to no avail. HELP :) My whole project is to get the user to say something, and have google translate translate it and have it say the answer. I have the translating and speaking covered, but the Speech Recognition is what I am having trouble with now. Thanks in advance
Here are more error messages.
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/speech_recognition/__main__.py", line 12, in <module>
    audio = r.listen(source)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/speech_recognition/__init__.py", line 264, in listen
    buffer = source.stream.read(source.CHUNK)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyaudio.py", line 605, in read
    return pa.read_stream(self._stream, num_frames)
IOError: [Errno Input overflowed] -9981 |
How can I set number of parameters from jenkins running Python script? | 31,500,756 | 0 | 0 | 366 | 0 | python-2.7,jenkins | Does Jenkins also provide the information about 'which users', or just the 'number of users', so that you would have to work out the 'which users' part on your own? I don't have a Jenkins installation with administrative access, so I cannot check this myself. | 0 | 1 | 0 | 0 | 2015-07-19T08:35:00.000 | 1 | 0 | false | 31,499,363 | 0 | 0 | 0 | 0 | I am running a Python job from Jenkins... now my question is as follows:
I am setting the number of users as an external parameter; for example, I am passing this command:
python /home/py_version/single_run.py $number_of_users
I want to be able to choose which users (in this case, user ids) from Jenkins or from the script itself...
thanks! |
PIP install unable to find ffi.h even though it recognizes libffi | 31,508,671 | 3 | 90 | 93,372 | 0 | python,linux,pip | You need to install the development package for libffi.
On RPM based systems (Fedora, Redhat, CentOS etc) the package is named libffi-devel.
Not sure about Debian/Ubuntu systems, I'm sure someone else will pipe up with that. | 0 | 1 | 0 | 0 | 2015-07-20T03:54:00.000 | 8 | 0.07486 | false | 31,508,612 | 1 | 0 | 0 | 3 | I have installed libffi on my Linux server as well as correctly set the PKG_CONFIG_PATH environment variable to the correct directory, as pip recognizes that it is installed; however, when trying to install pyOpenSSL, pip states that it cannot find file 'ffi.h'. I know both thatffi.h exists as well as its directory, so how do I go about closing this gap between ffi.h and pip? |
PIP install unable to find ffi.h even though it recognizes libffi | 31,508,663 | 266 | 90 | 93,372 | 0 | python,linux,pip | You need to install the development package as well.
libffi-dev on Debian/Ubuntu, libffi-devel on Redhat/Centos/Fedora. | 0 | 1 | 0 | 0 | 2015-07-20T03:54:00.000 | 8 | 1 | false | 31,508,612 | 1 | 0 | 0 | 3 | I have installed libffi on my Linux server as well as correctly set the PKG_CONFIG_PATH environment variable to the correct directory, as pip recognizes that it is installed; however, when trying to install pyOpenSSL, pip states that it cannot find file 'ffi.h'. I know both thatffi.h exists as well as its directory, so how do I go about closing this gap between ffi.h and pip? |
PIP install unable to find ffi.h even though it recognizes libffi | 38,077,173 | 24 | 90 | 93,372 | 0 | python,linux,pip | To add to mhawke's answer, usually the Debian/Ubuntu based systems are "-dev" rather than "-devel" for RPM based systems
So, for Ubuntu it will be apt-get install libffi libffi-dev
RHEL, CentOS, Fedora (up to v22) yum install libffi libffi-devel
Fedora 23+ dnf install libffi libffi-devel
OSX/MacOS (assuming homebrew is installed) brew install libffi | 0 | 1 | 0 | 0 | 2015-07-20T03:54:00.000 | 8 | 1 | false | 31,508,612 | 1 | 0 | 0 | 3 | I have installed libffi on my Linux server as well as correctly set the PKG_CONFIG_PATH environment variable to the correct directory, as pip recognizes that it is installed; however, when trying to install pyOpenSSL, pip states that it cannot find file 'ffi.h'. I know both thatffi.h exists as well as its directory, so how do I go about closing this gap between ffi.h and pip? |
gdb within emacs: python commands (py and pi) | 31,729,095 | 1 | 1 | 810 | 0 | python,emacs,gdb,gud | I am going to go out on a limb and say this is a bug in gud mode. The clue is the -interpreter-exec line in the error.
What happens here is that gud runs gdb in a special "MI" ("Machine Interface") mode. In this mode, commands and their responses are designed to be machine-, rather than human-, readable.
To let GUIs provide a console interface to users, MI provides the -interpreter-exec command, which evaluates a command using some other gdb "interpreter" (which doesn't mean what you may think and in particular has nothing to do with Python).
So, gud sends user input to gdb, I believe, with -interpreter-exec console .... But, in the case of a continuation line for a python command, this is the wrong thing to do.
I tried this out in Emacs and I was able to make it work for the python command when I spelled it out -- but py, pi, and python-interactive all failed. | 0 | 1 | 0 | 1 | 2015-07-20T10:58:00.000 | 2 | 0.099668 | false | 31,514,741 | 0 | 0 | 0 | 1 | I want to debug a c++ program using gdb. I use the pi and the py commands to evaluate python commands from within gdb, which works fine when I invoke gdb from the command line. However, when I invoke gdb from within emacs using M-x gdb and then gdb -i=mi file_name, the following errors occur:
the pi command correctly opens an interactive python shell, but any input to this shell yields errors like this:
File "stdin", line 1
-interpreter-exec console "2"
SyntaxError: invalid syntax
the py command works correctly for a single command (like py print 2+2), but not for multiple commands
I can get around those problems by starting gdb with gud-gdb, but then I don't have support for gdb-many-windows. Maybe the problem is caused by the prompt after typing pi, which is no longer (gdb) but >>> instead?
cx_Oracle, and Library paths | 31,525,508 | 0 | 1 | 841 | 0 | python,oracle | Well, that was pretty simple. I just had to add it to the .bashrc file in my root directory. | 0 | 1 | 0 | 1 | 2015-07-20T17:30:00.000 | 1 | 1.2 | true | 31,522,754 | 0 | 0 | 0 | 1 | Pretty new to all this so I apologize if I butcher my explanation. I am using python scripts on a server at work to pull data from our Oracle database. Problem is whenever I execute the script I get this error:
Traceback (most recent call last):
File "update_52w_forecast_from_oracle.py", line 3, in
import cx_Oracle
ImportError: libnnz11.so: cannot open shared object file: No such file or directory
But if I use:
export LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client64/lib
Before executing the script, it runs fine, but only for that session. If I log back in again I have to re-set the path. Is there anything I can do to make this permanent? I'm trying to use Cron as well to automate the script once a week. It was supposed to run automatically early Monday morning, but it didn't run.
EDIT: Just had to add the path to my .bashrc file in the root directory. |
Is there any way to get the full command line that's executed when using subprocess.call? | 31,528,729 | 2 | 0 | 56 | 0 | python,python-2.7 | If you're not using shell=True, there isn't really a "command line" involved. subprocess.Popen is just passing your argument list to the underlying execve() system call.
Similarly, there's no escaping, because there's no shell involved and hence nothing to interpret special characters and nothing that is going to attempt to tokenize your string.
There isn't a character limit to worry about because the arguments are never concatenated into a single command line. There may be limits on the maximum number of arguments and/or the length of individual arguments.
If you are using shell=True, you have to construct the command line yourself before passing it to subprocess. | 0 | 1 | 0 | 0 | 2015-07-20T23:50:00.000 | 1 | 0.379949 | false | 31,528,166 | 0 | 0 | 0 | 0 | I'm using subprocess.call where you just give it an array of arguments and it will build the command line and execute it.
First of all is there any escaping involved? (for example if I pass as argument a path to a file that has spaces in it, /path/my file.txt will this be escaped? "/path/my file.txt")
And is there any way to get this command line that's generated (after escaping and all) before being executed?
As I need to check if the generated command line is not longer than certain amount of characters (to make sure it will not give an error when it gets executed). |
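To illustrate the answer: with an argument list there is no escaping step, but you can still preview (and measure) a joined command line before running anything. A small sketch; the file path is just an example, and list2cmdline is an internal helper, so treat its availability as an assumption:
import subprocess
args = ["ls", "-l", "/path/my file.txt"]    # passed to execve as-is, no quoting needed
preview = subprocess.list2cmdline(args)     # Windows-style joined command line, for inspection only
print(preview)
print(len(preview))                         # rough length check before executing
subprocess.call(args)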
Is os.listdir() deterministic? | 31,535,279 | 0 | 13 | 3,004 | 0 | python | It will probably depend on file system internals. On a typical unix machine, I would expect the order of items in the return value from os.listdir to be in the order of the details in the directory's "dirent" data structure (which, again, depends on the specifics of the file system).
I would not expect a directory to have the same ordering over time, if files are added and deleted.
I would not expect two "directories with the same contents" on two different machines to have a consistent ordering, unless specific care was taken when copying from one to the other.
Depending on a variety of specifics, the ordering may change on a single machine, over time, without any explicit changes to the directory, as various file system compacting operations take place (although I don't think I've seen a file system that would actually do this, but it's definitely something that could be done).
In short, if you want any sort of ordering you can reason about, sort the results, somehow. Then you have the guarantee that the ordering will be whatever your sorting imposes. | 0 | 1 | 0 | 0 | 2015-07-21T08:58:00.000 | 3 | 0 | false | 31,534,583 | 1 | 0 | 0 | 1 | From Python's doc, os.listdir() returns
a list containing the names of the entries in the directory given by
path. The list is in arbitrary order.
What I'm wondering is, is this arbitrary order always the same/deterministic? (from one machine to another, or through time, provided the content of the folder is the same)
Edit: I am not trying to make it deterministic, nor do I want to use this. I was just wondering (for example, what does the order depend on?) |
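Following the answer's advice, sorting is the simplest way to impose an ordering you can reason about; a minimal example:
import os
entries = sorted(os.listdir('.'))   # lexicographic order, the same on every machine
for name in entries:
    print(name)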
Why do multiple processes slow down? | 31,543,489 | 0 | 1 | 1,550 | 0 | python,qt,io,hard-drive,child-process | There are no guarantees as to fairness of I/O scheduling. What you're describing seems rather simple: the I/O scheduler, whether intentionally or not, gives a boost to new processes. Since your disk is tapped out, the order in which the processes finish is not under your control. You're most likely wasting a lot of disk bandwidth on seeks, due to parallel access from multiple processes.
TL;DR: Your expectation is unfounded. When I/O, and specifically the virtual memory system, is saturated, anything can happen. And so it does. | 0 | 1 | 0 | 0 | 2015-07-21T10:44:00.000 | 2 | 0 | false | 31,536,863 | 1 | 0 | 0 | 1 | Not sure this is the best title for this question but here goes.
Through python/Qt I started multiple processes of an executable. Each process is writing a large file (~20GB) to disk in chunks. I am finding that the first process to start is always the last to finish and continues on much, much longer than the other processes (despite having the same amount of data to write).
Performance monitors show that the process is still using the expected amount of RAM (~1GB), but the disk activity from the process has slowed to a trickle.
Why would this happen? It is as though the first process started somehow gets its disk access 'blocked' by the other processes and then doesn't recover after the other processes have finished...
Would the OS (windows) be causing this? What can I do to alleviate this? |
Tornado gzip compressed response for a specific RequestHandler | 31,539,885 | 2 | 2 | 1,628 | 0 | python,tornado | In that handler's initialize() method, call self.transforms.append(tornado.web.GZipContentEncoding) | 0 | 1 | 0 | 0 | 2015-07-21T11:27:00.000 | 1 | 0.379949 | false | 31,537,752 | 0 | 0 | 0 | 1 | How can I serve compressed responses only for a single RequestHandler from my Tornado application? |
PTVS using os.system fails | 31,542,784 | 1 | 0 | 69 | 0 | python,visual-studio,azure,ptvs | After adding the PATH environment variable, all I needed to do was close Visual Studio and open it again. For anyone who struggled with the same issue, just close the programme and it might work! | 0 | 1 | 0 | 0 | 2015-07-21T11:30:00.000 | 1 | 1.2 | true | 31,537,841 | 1 | 0 | 0 | 1 | I am having an issue with Visual Studio.
I have everything set up in my project in the Python Environments including Platformio, which I would like to use.
When I do
os.system("platformio init") it fails and produces this error:
'platformio' is not recognized as an internal or external command, operable program or batch file.
I added the platformio folder in the python library Search Paths, but still no success.
I do not have python or platformio installed on the local machine, only in the PTVS.
The python program works fine without installing it on the local machine, so I would like to maintain it that way if possible.
Please anyone, help! |
Run python script with droneapi without terminal | 31,924,023 | 1 | 2 | 617 | 0 | python-2.7,dronekit-python,dronekit | I think Sony Nguyen is asking about running the vehicle_state.py outside the Mavproxy command prompt, just like running the .py file normally.
I'm also looking for a solution. | 0 | 1 | 0 | 0 | 2015-07-21T13:24:00.000 | 3 | 0.066568 | false | 31,540,347 | 0 | 0 | 0 | 0 | I managed to run the examples in the command prompt after running mavproxy.py and loading droneapi. But when I double click on my script, it throws "'local_connect' is not defined". It runs in the terminal as described above, but I cannot run it with just a double click. So my question is: is there any way to run a script using droneapi with only a double click?
using Windows 8.1
Thanks in advance |
Allow user other than root to restart supervisorctl process? | 31,541,908 | 0 | 6 | 3,601 | 0 | python,supervisord | Maybe you should try restarting your supervisord process using user stavros. | 0 | 1 | 0 | 0 | 2015-07-21T14:18:00.000 | 2 | 0 | false | 31,541,685 | 0 | 0 | 0 | 0 | I have supervisord run a program as user stavros, and I would like to give the same user permission to restart it using supervisorctl. Unfortunately, I can only do it with sudo, otherwise I get a permission denied error in socket.py. How can I give myself permission to restart supervisord processes?
Apache Kafka: Can I set the offset manually | 31,580,503 | 0 | 0 | 294 | 0 | python,twitter,apache-kafka | Don't see how that would be possible, but instead you can:
Use Kafka's API to obtain an offset that is earlier than a given time (getOffsetBefore). Note that the granularity depends on your storage file size IIRC and thus you can get an offset that is quite a bit earlier than the time you specified
Keep a timestamp in the message itself and use it in conjunction with above to skip messages
Keep an external index of time->offset yourself and use that | 0 | 1 | 0 | 0 | 2015-07-23T07:00:00.000 | 1 | 0 | false | 31,580,276 | 0 | 0 | 0 | 1 | So I'm using Apache Kafka as a message queue to relay a Twitter Stream to my consumers. If I want to go back, I want to have a value (offset) which I can send Kafka. So, for eg, if I want to go back one day, I have no idea what the offset would be for that.
Hence, can I set the offset manually? Maybe a linux/epoch timestamp? |
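A sketch of the answer's third suggestion, keeping your own time->offset index as you consume, so you can later jump back roughly one day; the JSON file used for persistence is an assumption:
import json, time
index = {}                                 # unix timestamp -> offset seen at that time
def record(offset):
    index[int(time.time())] = offset
def offset_before(target_time):
    older = [t for t in index if t <= target_time]
    return index[max(older)] if older else None
with open('offset_index.json', 'w') as f:  # persist the index between runs
    json.dump(index, f)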
How do I build the latest Python 2 for Windows? | 33,733,485 | 1 | 0 | 132 | 0 | python,python-2.7 | Since nobody answered, I'll post what I found here.
These instructions are for an 'offline' build machine, e.g. download/obtain everything you need prior to setting up the build environment. I don't connect my build machines to the internet. The instructions assume you downloaded the 2.7.10 PSF source release. This may have been made easier in git. I'm only showing the 32-bit build here, the 64-bit build needs some extra steps.
Pre-reqs:
Microsoft Windows 7 Professional with service pack 1 (64-bit)
Install Microsoft Visual Studio Team System 2008 development edition, service pack 1
ActivePython 2.7.8.10 32-bit. Note: Needs to be 32-bit to get access to msm.merge2.1 which is a 32-bit COM object.
put Nasm.exe 2.11.06 in path
Install ActiveState Perl 64-bit, including Perl v5.20.2
Set the environment variable HOST_PYTHON to c:\python27\python.exe
Set the environment variable PYTHON to python
For building documentation, install the following. If you are connected to the internet you can let pip download these as they are dependencies of Sphinx.
pip install alabaster-0.7.6-py2-none-any.whl
install MarkupSafe-0.23 (no wheel available) by the usual route of python setup.py install from the source directory
pip install Jinja2-2.8-py2.py3-none-any.whl
pip install Pygments-2.0.2-py2-none-any.whl
pip install pytz-2015.4-py2.py3-none-any.whl
Install Babel-2.0, as above no wheel or egg, so needs to be from source.
pip install --no-deps sphinx_rtd_theme-0.1.8-py2.py3-none-any.whl (due to circular dependency with Sphinx)
pip install Sphinx-1.3.1-py2.py3-none-any.whl
Go to tools/buildbot/build.bat and edit the file, change the 'Debug' build targets to 'Release'. Remove '_d' from the kill_python exe name.
Go to the 'Doc' directory. Type 'make.bat htmlhelp' to build the help.
Go to file Tools/buildbot/buildmsi.bat, and change the help workshop command line to point to what you created in the previous step, e.g.:
"%ProgramFiles%\HTML Help Workshop\hhc.exe" Doc\build\htmlhelp\python2710.hhp
Edit Tools/buildbot/external.bat, stop the build being a debug build by changing as follows:
if not exist tcltk\bin\tcl85g.dll (
@rem all and install need to be separate invocations, otherwise nmakehlp is not found on install
cd tcl-8.5.15.0\win
nmake -f makefile.vc INSTALLDIR=..\..\tcltk clean all
nmake -f makefile.vc INSTALLDIR=..\..\tcltk install
cd ..\..
)
if not exist tcltk\bin\tk85g.dll (
cd tk-8.5.15.0\win
nmake -f makefile.vc INSTALLDIR=..\..\tcltk TCLDIR=..\..\tcl-8.5.15.0 clean
nmake -f makefile.vc INSTALLDIR=..\..\tcltk TCLDIR=..\..\tcl-8.5.15.0 all
nmake -f makefile.vc INSTALLDIR=..\..\tcltk TCLDIR=..\..\tcl-8.5.15.0 install
cd ..\..
)
if not exist tcltk\lib\tix8.4.3\tix84g.dll (
cd tix-8.4.3.5\win
nmake -f python.mak DEBUG=0 MACHINE=IX86 TCL_DIR=..\..\tcl-8.5.15.0 TK_DIR=..\..\tk-8.5.15.0 INSTALL_DIR=..\..\tcltk clean
nmake -f python.mak DEBUG=0 MACHINE=IX86 TCL_DIR=..\..\tcl-8.5.15.0 TK_DIR=..\..\tk-8.5.15.0 INSTALL_DIR=..\..\tcltk all
nmake -f python.mak DEBUG=0 MACHINE=IX86 TCL_DIR=..\..\tcl-8.5.15.0 TK_DIR=..\..\tk-8.5.15.0 INSTALL_DIR=..\..\tcltk install
cd ..\..
)
In buildbot/external-common.bat, simply remove the clause building Nasm as we are already providing that as a binary.
I haven't documented the build of the wininst*.exe stubs from distutils, but the PSF ones are binary-identical to the ones in the ActiveState Python distribution 2.7.8.10, so you can just copy from there.
Finally, from the root directory run tools\buildbot\buildmsi.bat. This will build the 32-bit installer. | 0 | 1 | 0 | 0 | 2015-07-23T09:02:00.000 | 1 | 1.2 | true | 31,582,768 | 1 | 0 | 0 | 1 | I mean all of it, starting from all sources, and ending up with the .MSI file on the Python website. This includes building the distutils wininst*.exe files. I have found various READMEs that get me some of the way, but no comprehensive guide. |
Python socket with PACKET_MMAP | 31,597,835 | 1 | 1 | 178 | 0 | python,sockets,networking | So it looks like buffer or memoryview will do the trick. Although there are some discrepancies in the sites I found regarding whether python 2.7 supports this or not, so I will have to test it out to make sure | 0 | 1 | 0 | 0 | 2015-07-23T16:37:00.000 | 1 | 0.197375 | false | 31,593,267 | 0 | 0 | 0 | 1 | Is there a PACKET_MMAP or similar flag for python sockets? I know in C one can use a zero-copy/circular buffer with the previously mentioned flag to avoid having to copy buffers from kernel space to user space, but I cannot find anything similar in the python documentation.
Thanks for any input on docs or code to look into. |
Does os.path.sep affect the tarfile module? | 31,600,239 | -2 | 2 | 242 | 0 | python,windows,tarfile | A quick test tells me that a (forward) slash is always used.
In fact, the tar format stores the full path of each file as a single string, using slashes (try looking at a hex dump), and python just reads that full path without any modification. Likewise, at extraction time python hard-replaces slashes with the local separator (see TarFile._extract_member).
... which makes me think that there are surely some nonconformant implementations of tar for Windows that create tarfiles with backslashes as separators!? | 0 | 1 | 0 | 0 | 2015-07-24T00:08:00.000 | 1 | 1.2 | true | 31,600,127 | 1 | 0 | 0 | 1 | Is the path separator employed inside a Python tarfile.TarFile object a '/' regardless of platform, or is it a backslash on Windows?
I basically never touch Windows, but I would kind of like the code I'm writing to be compatible with it, if it can be. Unfortunately I have no Windows host on which to test. |
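A quick check mirroring the answer's test: member names reported by tarfile use forward slashes regardless of platform. A small sketch (it creates a throwaway directory and archive):
import os, tarfile
os.makedirs('demo/sub')
open('demo/sub/file.txt', 'w').close()
with tarfile.open('demo.tar', 'w') as tar:
    tar.add('demo')
with tarfile.open('demo.tar') as tar:
    print(tar.getnames())   # e.g. ['demo', 'demo/sub', 'demo/sub/file.txt'] - always '/'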
Compiling a unix make file for windows | 31,606,702 | 1 | 0 | 69 | 0 | python,c,unix,gcc | Answer to your first paragraph: Use MinGW for the compiler (google it, there is a -w64 version if you need that) and MSYS for a minimal environment including shell tools the Makefile could need. | 0 | 1 | 0 | 0 | 2015-07-24T09:17:00.000 | 1 | 0.197375 | false | 31,606,659 | 0 | 0 | 0 | 0 | I have a C program which includes a makefile that works fine on unix systems. I would like to compile the program for windows using this makefile; how can I go about doing that?
Additionally, I have python scripts that call this C program using ctypes. I don't imagine I will have too much of an issue getting ctypes working on windows, but I heard it's possible to include all the python and C code in one .exe for windows. Has anyone heard of that?
Google Analytics Management API - Insert method - Insufficient permissions HTTP 403 | 31,866,981 | 0 | 2 | 480 | 1 | api,python-2.7,google-analytics,insert,http-error | The problem was I was using a service account when I should have been using an installed application. I did not need a service account since I had access using my own credentials. That did the trick for me! | 0 | 1 | 0 | 0 | 2015-07-24T23:46:00.000 | 2 | 0 | false | 31,621,373 | 0 | 0 | 1 | 1 | I am trying to add users to my Google Analytics account through the API but the code yields this error:
googleapiclient.errors.HttpError: https://www.googleapis.com/analytics/v3/management/accounts/**accountID**/entityUserLinks?alt=json returned "Insufficient Permission">
I have Admin rights to this account - MANAGE USERS. I can add or delete users through the Google Analytics Interface but not through the API. I have also added the service account email to GA as a user. Scope is set to analytics.manage.users
This is the code snippet I am using in my add_user function which has the same code as that provided in the API documentation.
def add_user(service):
    try:
        service.management().accountUserLinks().insert(
            accountId='XXXXX',
            body={
                'permissions': {
                    'local': [
                        'EDIT',
                    ]
                },
                'userRef': {
                    'email': 'ABC.DEF@gmail.com'
                }
            }
        ).execute()
    except TypeError, error:
        # Handle errors in constructing a query.
        print 'There was an error in constructing your query : %s' % error
    return None
Any help will be appreciated. Thank you!! |
How to rollback a python application | 31,622,554 | 0 | 0 | 181 | 0 | python,google-app-engine,rollback | In the windows command prompt, reference your python executable:
eg:
[cmd]
cd C:\Program Files (x86)\Google\google_appengine (ie: [GAE dir])
C:\Python27\python.exe appcfg.py rollback [deploy dir] | 0 | 1 | 0 | 0 | 2015-07-25T02:27:00.000 | 1 | 1.2 | true | 31,622,256 | 0 | 0 | 0 | 1 | I am running on Windows 8 and I was recently uploading an application using the standard Google App Engine launcher but it froze mid way and when I closed it and reopened it and tried to upload again it would say a transaction is already in progress for this application and that I would need to rollback the application using appcfg.py.
I looked all over the internet and I understand what to execute, however I don't know how/where.
I tried doing it in the standard Windows command prompt, but it just opened the appcfg.py file for me. I tried doing it in the python console, but it said it was not a valid function. I also tried to run the application locally and access the interactive console, but it just said the same thing as the python console attempt.
What do I do? |
Collecting results from celery worker with asyncio | 43,289,761 | 2 | 2 | 2,413 | 0 | python,celery,python-asyncio | I implemented the on_finish function of the celery worker to publish a message to Redis;
then the main app uses aioredis to subscribe to the channel; once notified, the result is ready | 0 | 1 | 0 | 0 | 2015-07-26T11:30:00.000 | 2 | 0.197375 | false | 31,636,454 | 0 | 0 | 0 | 0 | I have a Python application which offloads a number of processing tasks to a set of celery workers. The main application then has to wait for results from these workers. As and when a result is available from a worker, the main application will process the results and will schedule more workers to be executed.
I would like the main application to run in a non-blocking fashion. As of now, I have a polling function to see whether results are available from any of the workers.
I am looking at the possibility of using asyncio to get notified about result availability so that I can avoid the polling. But I could not find any information on how to do this.
Any pointers on this will be highly appreciated.
PS: I know with gevent, I can avoid the polling. However, I am on python3.4 and hence would prefer to avoid gevent and use asyncio. |
Default values for PyCharm Terminal? | 43,356,885 | 1 | 2 | 1,300 | 0 | python,path,terminal,pycharm | I came across this error too in PhpStorm, to fix it simply navigate through to...
Preferences > Tools > Terminal
Under 'Application Settings' click [...] at the end of Shell path and open the .bash profile.
This should grey out the Shell path to '/bin/bash'
You can now launch Terminal. | 0 | 1 | 0 | 1 | 2015-07-27T02:54:00.000 | 2 | 0.099668 | false | 31,644,298 | 0 | 0 | 0 | 2 | I accidentally changed the "Shell path" specified in the Terminal setting for PyCharm and now I am getting this error:
java.io.IOException:Exec_tty error:Unkown reason
I replaced the default value with the string returned by echo $PATH which is:
/usr/local/cuda-7.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/local/bin
I've been trying to google what the default value is that goes here, but I have not been able to find it. Can someone help me resolve this?
Notes:
The specific setting is found in Settings > Tools > Terminal > Shell path |
Default values for PyCharm Terminal? | 31,661,642 | 1 | 2 | 1,300 | 0 | python,path,terminal,pycharm | The default value is the value of the $SHELL environment variable, which is normally /bin/bash. | 0 | 1 | 0 | 1 | 2015-07-27T02:54:00.000 | 2 | 1.2 | true | 31,644,298 | 0 | 0 | 0 | 2 | I accidentally changed the "Shell path" specified in the Terminal setting for PyCharm and now I am getting this error:
java.io.IOException:Exec_tty error:Unkown reason
I replaced the default value with the string returned by echo $PATH which is:
/usr/local/cuda-7.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/local/bin
I've been trying to google what the default value is that goes here, but I have not been able to find it. Can someone help me resolve this?
Notes:
The specific setting is found in Settings > Tools > Terminal > Shell path |
How can I ensure cron job runs only on one host at any time | 31,645,469 | -1 | 0 | 416 | 0 | python,database,cron,crontab,distributed | simple way:
- start the cron job before the needed time (for example, two minutes earlier)
- force a time synchronization (using ntp or ntpdate) (optional, paranoid mode)
- wait till the expected time, then run the job | 0 | 1 | 0 | 0 | 2015-07-27T05:11:00.000 | 1 | -0.197375 | false | 31,645,343 | 0 | 0 | 1 | 1 | I have a django management command run as a cron job, and it is set on multiple hosts to run at the same time. What is the best way to ensure that the cron job runs on only one host at any time? One approach is to use db locks, since the cron job updates a MySQL db, but I am sure there are better (Django or Pythonic) approaches to achieve what I am looking for.
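One common Pythonic variant of the db-lock idea is to use a shared cache as the lock, since cache.add() is atomic; a sketch assuming all hosts share a cache backend such as memcached or Redis (run_the_actual_job is a hypothetical placeholder for the real work):
from django.core.cache import cache
LOCK_ID = 'nightly-task-lock'
def handle():
    if not cache.add(LOCK_ID, 'locked', 60 * 60):   # only the first host gets the lock
        return
    try:
        run_the_actual_job()                        # hypothetical: the management command's real work
    finally:
        cache.delete(LOCK_ID)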
Using mpi4py (or any python module) without installing | 31,676,562 | 1 | 2 | 283 | 0 | python,python-2.7,numpy,mpi4py | Did you try pip install --user mpi4py?
However, I think the best solution would be to just talk to the people in charge of the cluster and see if they will install it. It seems pretty useless to have a cluster without mpi4py installed. | 0 | 1 | 0 | 0 | 2015-07-28T11:38:00.000 | 1 | 0.197375 | false | 31,675,214 | 0 | 1 | 0 | 1 | I have some parallel code I have written using the numpy and mpi4py modules. Till now I was running it on my laptop, but now I want to attack bigger problem sizes by using the computing clusters at my university. The trouble is that they don't have mpi4py installed. Is there any way to use the module by copying the necessary files to my home directory on the cluster?
I tried some ways to install it without root access but that didn't work out. So I am looking for a way to use the module by just copying it to the remote machine.
I access the cluster using ssh from terminal |
Python Script on Google App Engine, which scrapes only updates from a website | 31,717,519 | 1 | 1 | 65 | 0 | python,google-app-engine,web-scraping | Doesn't the website have RSS or API or something?
Anyway, you could store the list of scraped news titles (might not be unique though) / IDs / URLs as entity IDs in the datastore right after you send them to your email, and just before sending the email you would first check whether the news IDs exist in the datastore, simply not including the ones that do.
Or, depending on what structure the articles are being published in and what data is available (Do they have an incremental post ID? Do they have a date of when an article was posted?), you may simply need to remember the highest value from your previous scrape and only send yourself an email with the articles where that value is higher than the one previously saved. | 0 | 1 | 0 | 0 | 2015-07-30T06:42:00.000 | 1 | 0.197375 | false | 31,716,833 | 0 | 0 | 1 | 1 | I am hosting a Python script on Google App Engine which uses bs4 and mechanize to scrape the news section of a website; it runs every 2 hours and sends me an email with all the news.
The problem is, I want only the latest news to be sent as mail. As of now, it sends me all the news present every time.
I am storing all the news in a list, is there a way to send only the latest news, which has not been mailed to me, not the complete list every time? |
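A sketch of the deduplication idea from the answer: remember which items were already mailed, keyed here by a hash of the title. The plain set is a simplification; on App Engine you would keep these keys in the datastore as the answer suggests:
import hashlib
seen = set()   # in practice, load/store these keys in the datastore
def new_items(titles):
    fresh = []
    for title in titles:
        key = hashlib.sha1(title.encode('utf-8')).hexdigest()
        if key not in seen:
            seen.add(key)
            fresh.append(title)
    return fresh   # mail only these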
install HDF5 and pytables in ubuntu | 31,719,735 | 11 | 25 | 76,957 | 0 | python,ubuntu-14.04,hdf5,pytables | Try to install libhdf5-7 and python-tables via apt | 0 | 1 | 0 | 0 | 2015-07-30T09:02:00.000 | 4 | 1.2 | true | 31,719,451 | 1 | 0 | 0 | 1 | I am trying to install the tables package in Ubuntu 14.04 but it seems like it is complaining.
I am trying to install it using PyCharm and its package installer; however, it seems like it is complaining about the HDF5 package.
However, it seems like I cannot find any hdf5 package to install before tables.
Could anyone explain the procedure to follow? |
Handling a linux system shutdown operation "gracefully" | 31,732,143 | 0 | 1 | 214 | 0 | python,linux,signals,shutdown | When Linux is shutting down (and this is slightly dependent on what kind of init scripts you are using), it first sends SIGTERM to all processes to shut them down, and then I believe it will try SIGKILL to force them to close if they're not responding to SIGTERM.
Please note, however, that your script may not receive the SIGTERM - init may send this signal to the shell it's running in instead and it could kill python without actually passing the signal on to your script.
Hope this helps! | 0 | 1 | 0 | 1 | 2015-07-30T19:01:00.000 | 1 | 0 | false | 31,731,980 | 0 | 0 | 0 | 1 | I'm developing a python script that runs as a daemon in a linux environment. If and when I need to issue a shutdown/restart operation to the device, I want to do some cleanup and log data to a file to persist it through the shutdown.
I've looked around regarding Linux shutdown and I can't find anything detailing which signal, if any, is sent to applications at the time of shutdown/restart. I assumed SIGTERM, but my tests (which are not very good tests) seem to disagree with this.
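A minimal sketch of catching SIGTERM so a daemon can flush its data before shutdown proceeds (the log path is just an example):
import signal, sys
def on_sigterm(signum, frame):
    with open('/tmp/daemon_state.log', 'a') as f:   # example path for persisted state
        f.write('shutting down cleanly\n')
    sys.exit(0)
signal.signal(signal.SIGTERM, on_sigterm)
while True:
    signal.pause()   # idle main loop; real work would go here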
Python Process Terminated due to "Low Swap" When Writing To stdout for Data Science | 31,735,713 | 1 | 1 | 70 | 0 | python,memory,amazon-web-services,subprocess | the process gets terminated on my mac due to "Low Swap" which I believe refers to lack of memory
SWAP space is part of your Main Memory - RAM.
When a user reads a file, it is put into Main Memory (caches and RAM). When it's done, it is removed.
However, when a user writes to a file, the changes need to be recorded. One problem: what if you are writing to a different file every millisecond? The RAM and L-caches reach capacity, so the least recently used (LRU) files are put into SWAP space. And since SWAP is still part of Main Memory (not the hard drive), it is possible to overflow it and lose information, which can cause a crash.
Is it possible that I have some sort of memory leak in the script even though I'm not doing hardly anything?
Possibly
Is there any way that I can reduce the memory usage to run this successfully?
One way is to think of how you are managing the file(s). Reads will not hurt SWAP because the file can just be scrapped, without the need to save. You might want to explicitly save the file (closing and opening the file should work) after a certain amount of information has been processed or a certain amount of time has gone by. Thus, removing the file from SWAP space. | 0 | 1 | 0 | 0 | 2015-07-30T23:06:00.000 | 1 | 1.2 | true | 31,735,552 | 0 | 0 | 0 | 1 | I'm new to python so I apologize for any misconceptions.
I have a python file that needs to read/write to stdin/stdout many many times (hundreds of thousands) for a large data science project. I know this is not ideal, but I don't have a choice in this case.
After about an hour of running (close to halfway completed), the process gets terminated on my mac due to "Low Swap" which I believe refers to lack of memory. Apart from the read/write, I'm hardly doing any computing and am really just trying to get this to run successfully before going any farther.
My Question: Does writing to stdin/stdout a few hundred thousand times use up that much memory? The file basically needs to loop through some large lists (15k ints) and do it a few thousand times. I've got 500 gigs of hard drive space and 12 gigs of ram and am still getting the errors. I even spun up an EC2 instance on AWS and STILL had memory errors. Is it possible that I have some sort of memory leak in the script even though I'm not doing hardly anything? Is there any way that I can reduce the memory usage to run this successfully?
Appreciate any help. |
Killing a daemon process through cron job/runnit | 31,787,179 | 0 | 0 | 48 | 0 | python-2.7,cron | The best way out is to create this daemon as a child thread so it automatically gets killed when the parent process is killed | 0 | 1 | 0 | 1 | 2015-07-31T05:45:00.000 | 1 | 1.2 | true | 31,738,875 | 0 | 0 | 0 | 1 | I have a python file which starts 2 threads: thread 1 is a daemon process and thread 2 does other stuff. Now what I want is to check that if thread 2 is stopped, then thread 1 should also stop. I was suggested to do so by cron job/runnit. I am completely new to these, so can you please help me achieve the goal?
Thanks |
Will my database connections have problems? | 31,741,461 | 0 | 0 | 49 | 0 | python,sql,django,celery | The only time when you are going to run into issues while using db with celery is when you use the database as backend for celery because it will continuously poll the db for tasks. If you use a normal broker you should not have issues. | 0 | 1 | 0 | 0 | 2015-07-31T07:09:00.000 | 2 | 0 | false | 31,740,127 | 0 | 0 | 1 | 2 | In my django project, I am using celery to run a periodic task that will check a URL that responds with a json and updating my database with some elements from that json.
Since requesting from the URL is limited, the total process of updating the whole database with my task will take about 40 minutes and I will run the task every 2 hours.
If I check a view of my django project, which also requests information from the database while the task is asynchronously running in the background, will I run into any problems? |
Will my database connections have problems? | 31,740,391 | 0 | 0 | 49 | 0 | python,sql,django,celery | While requesting information from your database you are reading your database. And in your celery task you are writing data into your database. You can write only once at a time but read as many times as you want as there is no lock permission on database while reading. | 0 | 1 | 0 | 0 | 2015-07-31T07:09:00.000 | 2 | 0 | false | 31,740,127 | 0 | 0 | 1 | 2 | In my django project, I am using celery to run a periodic task that will check a URL that responds with a json and updating my database with some elements from that json.
Since requesting from the URL is limited, the total process of updating the whole database with my task will take about 40 minutes and I will run the task every 2 hours.
If I check a view of my django project, which also requests information from the database while the task is asynchronously running in the background, will I run into any problems? |
How to setup Pycharm and JDK on ubuntu | 31,741,486 | 3 | 0 | 3,954 | 0 | java,python,ubuntu | When you have downloaded a package from Oracle site, unpack it and copy its contents into for example /usr/lib/jvm/jdk1.8.0_51/.
Then, type following commands:
sudo update-alternatives --install "/usr/bin/java" "java" "/usr/lib/jvm/jdk1.8.0_51/bin/java" 1
sudo update-alternatives --install "/usr/bin/javac" "javac" "/usr/lib/jvm/jdk1.8.0_51/bin/javac" 1
sudo update-alternatives --install "/usr/bin/javaws" "javaws" "/usr/lib/jvm/jdk1.8.0_51/bin/javaws" 1
and in the end:
sudo update-alternatives --config java
and choose the number of your Oracle Java installation. | 0 | 1 | 0 | 0 | 2015-07-31T07:50:00.000 | 2 | 0.291313 | false | 31,740,878 | 1 | 0 | 0 | 1 | I am going to develop some functionality using python and I need to set up PyCharm, but it depends on some dependencies like open JDK of oracle.
How can I set up these two?
Python will not execute Java program: 'java' is not recognized | 31,745,847 | 0 | 1 | 1,290 | 0 | python,python-2.7,command-line,subprocess | give absolute path of java location
in my system path is C:\Program Files\Java\jdk1.8.0_45\bin\java.exe | 0 | 1 | 0 | 0 | 2015-07-31T12:01:00.000 | 2 | 0 | false | 31,745,699 | 0 | 0 | 1 | 2 | I am trying to get Python to call a Java program using a command that works when I enter it into the command line.
When I have Python try it with subprocess or os.system, it says:
'java' is not recognized as an internal or external command, operable
program or batch file.
From searching, I believe it is because when executing through Python, it will not be able to find java.exe like a normal command would. |
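A sketch of the 'absolute path' suggestion, reusing the path quoted in the answer above; adjust it to your own JDK install, and note that MyApp.jar is a hypothetical name:
import subprocess
java = r"C:\Program Files\Java\jdk1.8.0_45\bin\java.exe"
subprocess.call([java, "-version"])            # works even though plain 'java' is not on PATH
subprocess.call([java, "-jar", "MyApp.jar"])   # hypothetical jar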
Python will not execute Java program: 'java' is not recognized | 61,620,608 | 0 | 1 | 1,290 | 0 | python,python-2.7,command-line,subprocess | You have to set the PATH variable to point to the java location.
import os
os.environ["PATH"] += os.pathsep + os.pathsep.join([java_env])
java_env will be a string containing the directory to java.
(tested on python 3.7) | 0 | 1 | 0 | 0 | 2015-07-31T12:01:00.000 | 2 | 0 | false | 31,745,699 | 0 | 0 | 1 | 2 | I am trying to get Python to call a Java program using a command that works when I enter it into the command line.
When I have Python try it with subprocess or os.system, it says:
'java' is not recognized as an internal or external command, operable
program or batch file.
From searching, I believe it is because when executing through Python, it will not be able to find java.exe like a normal command would. |
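A more complete version of the snippet in the answer above, with the assumed java_env value filled in as an example directory:
import os, subprocess
java_env = r"C:\Program Files\Java\jdk1.8.0_45\bin"   # example: directory that contains java.exe
os.environ["PATH"] += os.pathsep + java_env
subprocess.call(["java", "-version"])   # 'java' is now resolvable for this process and its children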
Worker role and web role counterpart in GAE | 31,790,837 | 1 | 1 | 97 | 0 | python,google-app-engine,azure,web-applications | Yes, there is. Look at backend and frontend instances. Your question is too broad to go into more detail; in general the backend type of instance is used for long running tasks, but you could also do everything in the frontend instance. | 0 | 1 | 0 | 0 | 2015-08-03T14:34:00.000 | 2 | 0.099668 | false | 31,790,076 | 0 | 0 | 1 | 1 | I am currently working with MS Azure. There I have a worker role and a web role. In the worker role I start an infinite loop to process some data continuously. The web role performs the interaction with the client. There I use an MVC framework, which on the server side is written in C# and on the client side in Javascript.
Now I'm interested in GAE engine. I read a lot about the app engine. I want to build an application in Python. But I don't really understand the architecture. Is there a counterpart in the project structure like the worker and web role in Azure? |
Starting a python script at boot - Raspbian | 31,791,309 | 0 | 0 | 344 | 0 | python,linux,arm,raspberry-pi,init.d | Ah bah, let's just give a quick answer.
After creating a script in /etc/init.d, you need to add a soft-link to the directory /etc/rc2.d, such as sudo ln -s /etc/init.d/<your script> /etc/rc2.d/S99<your script>. Assuming, of course, that you run runlevel 2. You can check that with the command runlevel.
The S means the script is 'started', the number determines the order in which processes are started.
You will also want to remove the entry from rc2.d that starts the graphical environment. What command that is depends on how your pi is configured. | 0 | 1 | 0 | 1 | 2015-08-03T14:37:00.000 | 1 | 1.2 | true | 31,790,133 | 0 | 0 | 0 | 1 | I have a python script. This script is essentially my own desktop/UI. However, I would like to replace the default Raspbian (Raspberry Pi linux distro) desktop environment with my own version. How would I go about:
Disabling the default desktop and
Launching my python script (fullscreen) at startup?
This is on the Raspberry Pi running a modified version of debian linux.
(Edit: I tried making a startup script in the /etc/init.d directory and made it executable with chmod, but I still can't seem to get it to start up. The script contained the normal .sh stuff, but also contained the python command that opened the script in my designated directory.)
What does app configuration mean? | 31,796,794 | 1 | 1 | 3,561 | 0 | python-2.7,google-app-engine,web-applications,configuration,app.yaml | To "configure your app," generally speaking, is to specify, via some mechanism, parameters that can be used to direct the behavior of your app at runtime. Additionally, in the case of Google App Engine, these parameters can affect the behavior of the framework and services surrounding your app.
When you specify these parameters, and how you specify them, depends on the app and the framework, and sometimes also on your own philosophy of what needs to be parameterized. Readable data files in formats like YAML are a popular choice, particularly for web applications and services. In this case, the configuration will be read and obeyed when your application is deployed to Google App Engine, or launched locally via GoogleAppEngineLauncher.
Now, this might seem like a lot of bother to you. After all, the easiest way you have to change your app's behavior is to simply write code that implements the behavior you want! When you have configuration via files, it's generally more work to set up: something has to read the configuration file and twiddle the appropriate switches/variables in your application. (In the specific case of app.yaml, this is not something you have to worry about, but Google's engineers certainly do.) So what are some of the advantages of pulling out "configuration" into files like this?
Configuration files like YAML are relatively easy to edit. If you understand what the parameters are, then changing a value is a piece of cake! Doing the same thing in code may not be quite as obvious.
In some cases, the configuration parameters will affect things that happen before your app ever gets run – such as pulling out static content and deploying that to Google App Engine's front-end servers for better performance and lower cost. You couldn't direct that behavior from your app because your app is not running yet – it's still in the process of being deployed when the static content is handled.
Sometimes, you want your application to behave one way in one environment (testing) and another way in another environment (production). Or, you might want your application to behave some reasonably sensible way by default, but allow someone deploying your application to be able to change its behavior if the default isn't to their liking. Configuration files make this easier: to change the behavior, you can simply change the configuration file before you deploy/launch the application. | 0 | 1 | 0 | 0 | 2015-08-03T16:27:00.000 | 2 | 0.099668 | false | 31,792,302 | 0 | 0 | 1 | 1 | I am working on Google App Engine (GAE) which has a file called (app.yaml). As I am new to programming, I have been wondering, what does it mean to configure an app? |
Is the application code visible to others when it is run? | 31,794,311 | 1 | 0 | 75 | 0 | python,flask | No. The code won't be viewable. Server side code is not accessible unless you give someone access or post it somewhere public. | 0 | 1 | 0 | 0 | 2015-08-03T18:22:00.000 | 2 | 1.2 | true | 31,794,152 | 0 | 0 | 1 | 1 | I don't want other people to see my application code. When I host my application, will others be able to see the code that is running? |
How to install python smtplib module in ubuntu os | 35,091,800 | 7 | 13 | 58,280 | 0 | python,module,smtplib | I will tell you a probable reason why you might be getting an error like 'Error: no module smtplib'.
I had created a program named email.py.
Now, email is a module in python, and because of that it started giving errors for smtplib as well.
I then had to delete the email.pyc file that was created and rename email.py to mymail.py.
After that, there was no smtplib error.
Make sure your file name is not conflicting with a python module. Also check whether any *.pyc file was created inside the folder because of that. | 0 | 1 | 0 | 1 | 2015-08-03T20:27:00.000 | 4 | 1 | false | 31,796,174 | 0 | 0 | 0 | 1 | I tried to install the python module via pip, but it was not successful.
Can anyone help me install the smtplib python module on Ubuntu 12.10?
How Do I Turn Off Python Error Checking in vim? (vim terminal 7.3, OS X 10.11 Yosemite) | 31,800,107 | 2 | 1 | 421 | 0 | python,macos,vim,osx-yosemite | Vim doesn't check Python syntax out of the box, so a plugin is probably causing this issue.
Not sure why an OS upgrade would make a Vim plugin suddenly start being more zealous about things, of course, but your list of installed plugins (however you manage them) is probably the best place to start narrowing down your problem. | 0 | 1 | 0 | 1 | 2015-08-04T01:01:00.000 | 1 | 1.2 | true | 31,799,087 | 0 | 0 | 0 | 1 | Overview
After upgrading to 10.11 Yosemite, I discovered that vim (on the terminal) highlights a bunch of errors in my python scripts that are actually not errors.
e.g.
This line:
from django.conf.urls import patterns
gets called out as an [import-error] Unable to import 'django.conf.urls'.
This error is not true because I can open up a python shell from the command line and import the supposedly missing module. I'm also getting a bunch of other errors all the way through my python file too: [bad-continuation] Wrong continued indentation, [invalid-name] Invalid constant name, etc.
All of these errors are not true.
Question
Anyway, how do I turn off these python error checks?
vim Details
vim --version:
VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Nov 5 2014 21:00:28)
Compiled by root@apple.com
Normal version without GUI. Features included (+) or not (-):
-arabic +autocmd -balloon_eval -browse +builtin_terms +byte_offset +cindent
-clientserver -clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments
-conceal +cryptv +cscope +cursorbind +cursorshape +dialog_con +diff +digraphs
-dnd -ebcdic -emacs_tags +eval +ex_extra +extra_search -farsi +file_in_path
+find_in_path +float +folding -footer +fork() -gettext -hangul_input +iconv
+insert_expand +jumplist -keymap -langmap +libcall +linebreak +lispindent
+listcmds +localmap -lua +menu +mksession +modify_fname +mouse -mouseshape
-mouse_dec -mouse_gpm -mouse_jsbterm -mouse_netterm -mouse_sysmouse
+mouse_xterm +multi_byte +multi_lang -mzscheme +netbeans_intg -osfiletype
+path_extra -perl +persistent_undo +postscript +printer -profile +python/dyn
-python3 +quickfix +reltime -rightleft +ruby/dyn +scrollbind +signs
+smartindent -sniff +startuptime +statusline -sun_workshop +syntax +tag_binary
+tag_old_static -tag_any_white -tcl +terminfo +termresponse +textobjects +title
-toolbar +user_commands +vertsplit +virtualedit +visual +visualextra +viminfo
+vreplace +wildignore +wildmenu +windows +writebackup -X11 -xfontset -xim -xsmp
-xterm_clipboard -xterm_save
system vimrc file: "$VIM/vimrc"
user vimrc file: "$HOME/.vimrc"
user exrc file: "$HOME/.exrc"
fall-back for $VIM: "/usr/share/vim"
Compilation: gcc -c -I. -D_FORTIFY_SOURCE=0 -Iproto -DHAVE_CONFIG_H -arch i386 -arch x86_64 -g -Os -pipe
Linking: gcc -arch i386 -arch x86_64 -o vim -lncurses |
Share objects between celery tasks | 31,877,500 | 1 | 3 | 2,958 | 0 | python,celery,fileparsing | Using Memcached sounds like a much easier solution - a task is for processing, memcached is for storage - why use a task for storage?
Personally I'd recommend using Redis over memcached.
An alternative would be to try ZODB - it stores Python objects natively. If your application really suffers from serialization overhead maybe this would help. But I'd strongly recommend testing this with your real workload against JSON/memcached. | 0 | 1 | 0 | 0 | 2015-08-04T08:58:00.000 | 1 | 1.2 | true | 31,804,892 | 1 | 0 | 0 | 1 | I have got a program that handles about 500 000 files {Ai} and for each file, it will fetch a definition {Di} for the parsing.
For now, each file {Ai} is parsed by a dedicated celery task and each time the definition file {Di} is parsed again to generate an object. This object is used for the parsing of the file {Ai} (JSON representation).
I would like to store the definition file (generated object) {Di(object)} to make it available for whole task.
So I wonder what would be the best choice to manage it:
Memcache + Python-memcached,
A Long running task to "store" the object with set(add)/get interface.
For performance and memory usage, what would be the best choice ? |
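A sketch of the memcached route using python-memcached, which the question already mentions: parse each definition once, cache its JSON representation, and let every task read it back. The host/port, key prefix and parse_definition_file are assumptions:
import json
import memcache
mc = memcache.Client(['127.0.0.1:11211'])
def get_definition(name):
    cached = mc.get('definition:' + name)
    if cached is not None:
        return json.loads(cached)
    obj = parse_definition_file(name)              # hypothetical expensive parse
    mc.set('definition:' + name, json.dumps(obj))
    return obj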
Query total CPU usage of all instances of a process on Linux OS | 31,830,627 | 0 | 0 | 114 | 0 | python,c++,c,linux | Here is the only way to do that I can think of. It is a bit confusing, but if you follow the steps it is very simple:
If I want to select total cpu use of Google Chrome process:
$ps -e -o pcpu,comm | grep chrome | awk '{ print $1 }' | paste -sd+ |
bc -l | 0 | 1 | 0 | 1 | 2015-08-05T03:02:00.000 | 1 | 0 | false | 31,822,714 | 0 | 0 | 0 | 1 | I have a python server that forks itself once it receives a request. The python service has several C++ .so objects it can call into, as well as the python process itself.
My question is: from any one of these processes, I would like to be able to see how much CPU all instances of this server are currently using. So let's say I have foo.py; I want to see how much CPU all instances of foo.py are currently using. For example, if foo.py(1) is using 200% cpu, foo.py(2) is using 300%, and foo.py(3) is using 50%, I'd like to arrive at 550%.
The only way I can think of doing this myself is getting the PID of every process and scanning through the /proc filesystem. Is there a more general way available within C/Python/POSIX for such an operation?
Thank you! |
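Instead of walking /proc by hand, a sketch using the third-party psutil package sums the CPU percentage of every instance of a given script; psutil is an extra dependency not mentioned in the answer, and its method names have shifted slightly between versions, so treat this as an assumption:
import psutil
def total_cpu(name='foo.py'):
    procs = []
    for p in psutil.process_iter():
        try:
            if name in ' '.join(p.cmdline()):
                procs.append(p)
        except psutil.Error:
            pass
    for p in procs:
        p.cpu_percent(None)            # first call just primes the counter
    psutil.cpu_percent(interval=1.0)   # wait one sampling interval
    return sum(p.cpu_percent(None) for p in procs)
print(total_cpu())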
testing celery job that runs each night | 31,877,460 | 0 | 0 | 158 | 0 | python,testing,celery | To facilitate testing you should first run the task from ipython to verify that it does what it should.
Then to verify scheduling you should change the celerybeat schedule to run in the near future, and verify that it does in fact run.
Once you have verified functionality and schedule you can update the celerybeat schedule to midnight, and be at least some way confident that it will run like it should. | 0 | 1 | 0 | 1 | 2015-08-05T09:46:00.000 | 1 | 0 | false | 31,828,928 | 0 | 0 | 0 | 1 | I have a periodical celery job that is supposed to run every night at midnight. Of course I can just run the system and leave it overnight to see the result. But I can see that it's not going to be very efficient in terms of solving potential problems and energy.
In such situation, is there a trick to make the testing easier? |
Maximum Beaglebone Black UART baud? | 33,552,144 | 6 | 5 | 5,044 | 0 | python,pyserial,beagleboneblack,uart,baud-rate | The AM335x technical reference manual (TI document spruh73) gives the baud rate limits for the UART sub-system in the UART section (section 19.1.1, page 4208 in version spruh73l):
Baud rate from 300 bps up to 3.6864 Mbps
The UART modules each have a 48MHz clock to generate their timing. They can be configured in one of two modes: UART 16x and UART 13x, in which that clock is divided by 16 and 13, respectively. There is then a configured 16-bit divisor to generate the actual baud rate from that clock. So for 300 bps it would be UART 16x and a divisor of 10000, i.e. 48MHz / 16 / 10000 = 300 bps.
When you tell the omap-serial kernel driver (that's the driver used for UARTs on the BeagleBone) what baud rate you want, it calculates the mode and divisor that best approximate that rate. The actual rate you'll get is limited by the way it's generated - for example, if you asked for an arbitrary baud of 2998 bps, I suppose you'd actually get 2997.003 bps, because 48MHz / 16 / 1001 = 2997.003 is closer to 2998 than 48MHz / 16 / 1000 = 3000.
So the UART modules can certainly generate all the standard baud rates, as well as a large range of arbitrary ones (you'd have to actually do the math to see how close it can get). On Linux based systems, PySerial is just sending along the baud you tell it to the kernel driver through an ioctl call, so it won't limit you either.
Note: I just tested sending data from the BeagleBone Black at 200 bps and it worked fine, but it doesn't generate 110 bps (the next lower standard baud rate below 300 bps), so the listed limits are really the lowest and highest standard rates it can generate. | 0 | 1 | 0 | 1 | 2015-08-05T21:03:00.000 | 3 | 1.2 | true | 31,842,785 | 0 | 0 | 0 | 2 | I have been looking around for UART baud rates supported by the Beaglebone Black (BB). I can't find it in the BB system reference manual or the datasheet for the sitara processor itself. I am using pyserial and the Adafruit BBIO library to communicate over UART.
Does this support any value within reason or is it more standard (9600, 115200, etc.)?
Thanks for any help.
-UPDATE-
It is related to the baud rates supported by PySerial. This gives a list of potential baud rates, but not specific ones that will or will not work with specific hardware. |
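The divisor arithmetic from the accepted answer can be checked with a few lines of plain Python; this only reproduces the 48 MHz / 16 / divisor relationship, it does not talk to the hardware:
CLOCK = 48000000   # UART module clock in Hz
def actual_baud(requested, oversampling=16):
    divisor = int(round(CLOCK / float(oversampling) / requested))
    return CLOCK / float(oversampling) / divisor
print(actual_baud(300))    # 300.0
print(actual_baud(2998))   # ~2997.0, matching the answer's example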
Maximum Beaglebone Black UART baud? | 31,902,876 | 0 | 5 | 5,044 | 0 | python,pyserial,beagleboneblack,uart,baud-rate | The BBB reference manual does not contain any information on Baud Rate for UART but for serial communication I usually prefer using value of BAUDRATE = 115200, which works in most of the cases without any issues. | 0 | 1 | 0 | 1 | 2015-08-05T21:03:00.000 | 3 | 0 | false | 31,842,785 | 0 | 0 | 0 | 2 | I have been looking around for UART baud rates supported by the Beaglebone Black (BB). I can't find it in the BB system reference manual or the datasheet for the sitara processor itself. I am using pyserial and the Adafruit BBIO library to communicate over UART.
Does this support any value within reason or is it more standard (9600, 115200, etc.)?
Thanks for any help.
-UPDATE-
It is related to the baud rates supported by PySerial. This gives a list of potential baud rates, but not specific ones that will or will not work with specific hardware. |
Generating maximum wifi activity through 1 computer | 31,860,813 | 1 | 7 | 93 | 0 | python,linux,curl,wifi,bandwidth | Simply sending packets as fast as possible to a random destination (that is not localhost) should work.
You'll need to use udp (otherwise you need a connection acknowledge before you can send data).
cat /dev/urandom | pv | nc -u 1.1.1.1 9123
pv is optional (but nice).
You can also use /dev/zero, but there may be a risk of link-level compression.
Of course, make sure the router is not actually connected to the internet (you don't want to flood a server somewhere!), and that your computer has the router as the default route. | 0 | 1 | 0 | 0 | 2015-08-06T15:57:00.000 | 1 | 1.2 | true | 31,860,476 | 0 | 0 | 0 | 1 | I need to generate a very high level of wifi activity for a study to see if very close proximity to a transceiver can have a negative impact on development of bee colonies.
I have tried to write an application which spawns several web-socket server-client pairs to continuously transfer mid-sized files (this approach hit >100MB). However, we want to run this on a single computer connected to a wifi router, so the packets invariably end up getting routed via the loopback interface, not the WLAN.
Alternatively, I have tried using either simple ping floods or curling the router, but this is not producing nearly the maximum bandwidth the router is capable of.
Is there a quick fix on linux to force the traffic over the network? The computer we are using has both an ethernet and a wireless interface, and I found one thread online which suggested setting up iptables to force traffic between the two interfaces and avoid the loopback. |
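The same idea as the answer's nc one-liner, written in Python: blast UDP datagrams at an address on the router's network. The destination IP is a placeholder; make sure it is not routable to the internet:
import os, socket
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload = os.urandom(1400)        # roughly one MTU of random data
dest = ('192.168.1.123', 9123)    # placeholder host behind the router
while True:
    sock.sendto(payload, dest)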
Sharing a resource (file) across different python processes using HDFS | 31,934,576 | 2 | 5 | 132 | 0 | python,hdfs,race-condition,ioerror | (Setting aside that it sounds like HDFS might not be the right solution for your use case, I'll assume you can't switch to something else. If you can, take a look at Redis, or memcached.)
It seems like this is the kind of thing where you should have a single service that's responsible for computing/caching these results. That way all your processes will have to do is request that the resource be created if it's not already. If it's not already computed, the service will compute it; once it's been computed (or if it already was), either a signal saying the resource is available, or even just the resource itself, is returned to your process.
If for some reason you can't do that, you could try using HDFS for synchronization. For example, you could try creating the resource with a sentinel value inside which signals that process A is currently building this file. Meanwhile process A could be computing the value and writing it to a temporary resource; once it's finished, it could just move the temporary resource over the sentinel resource. It's clunky and hackish, and you should try to avoid it, but it's an option.
You say you want to avoid expensive recalculations, but if process B is waiting for process A to compute the resource, why can't process B (and C and D) be computing it as well for itself/themselves? If this is okay with you, then in the event that a resource doesn't already exist, you could just have each process start computing and writing to a temporary file, then move the file to the resource location. Hopefully moves are atomic, so one of them will cleanly win; it doesn't matter which if they're all identical. Once it's there, it'll be available in the future. This does involve the possibility of multiple processes sending the same data to the HDFS cluster at the same time, so it's not the most efficient, but how bad it is depends on your use case. You can lessen the inefficiency by, for example, checking after computation and before upload to the HDFS whether someone else has created the resource since you last looked; if so, there's no need to even create the temporary resource.
TLDR: You can do it with just HDFS, but it would be better to have a service that manages it for you, and it would probably be even better not to use HDFS for this (though you still would possibly want a service to handle it for you, even if you're using Redis or memcached; it depends, once again, on your particular use case). | 0 | 1 | 0 | 0 | 2015-08-06T16:05:00.000 | 1 | 1.2 | true | 31,860,630 | 1 | 0 | 0 | 1 | So I have some code that attempts to find a resource on HDFS...if it is not there it will calculate the contents of that file, then write it. And next time it goes to be accessed the reader can just look at the file. This is to prevent expensive recalculation of certain functions
However...I have several processes running at the same time on different machines on the same cluster. I SUSPECT that they are trying to access the same resource and I'm hitting a race condition that leads to a lot of errors where I either can't open a file or a file exists but can't be read.
Hopefully this timeline will demonstrate what I believe my issue to be
Process A goes to access resource X
Process A finds resource X exists and begins writing
Process B goes to access resource X
Process A finishes writing resource X
...and so on
Obviously I would want Process B to wait for Process A to be done with Resource X and simply read it when A is done.
Something like semaphores comes to mind, but I am unaware of how to use these across different Python processes on separate processors looking at the same HDFS location. Any help would be greatly appreciated.
UPDATE: To be clear, process A and process B will end up calculating the exact same output (i.e. the same filename, with the same contents, to the same location). Ideally, B shouldn't have to calculate it. B would wait for A to calculate it, then read the output once A is done. Essentially this whole process works like a "long-term cache" using HDFS, where a given function has an output signature. Any process that wants the output of a function will first determine the output signature (this is basically a hash of some function parameters, inputs, etc.). It will then check the HDFS to see if it is there. If it's not, it will calculate it and write it to the HDFS so that other processes can also read it.
How to load IPython shell with PySpark | 66,149,862 | 1 | 33 | 26,058 | 0 | python,apache-spark,ipython,pyspark | Tested with spark 3.0.1 and python 3.7.7 (with ipython/jupyter installed)
To start pyspark with IPython:
$ PYSPARK_DRIVER_PYTHON=ipython pyspark
To start pyspark with jupyter notebook:
$ PYSPARK_DRIVER_PYTHON=jupyter PYSPARK_DRIVER_PYTHON_OPTS=notebook pyspark | 0 | 1 | 0 | 0 | 2015-08-06T17:36:00.000 | 8 | 0.024995 | false | 31,862,293 | 0 | 0 | 0 | 1 | I want to load IPython shell (not IPython notebook) in which I can use PySpark through command line. Is that possible?
I have installed Spark-1.4.1. |
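A hedged alternative, if you would rather start a plain ipython session and pull Spark in afterwards: this assumes the findspark package is installed and SPARK_HOME is set, and the app name is a placeholder.

import findspark
findspark.init()                      # adds $SPARK_HOME/python (and py4j) to sys.path

from pyspark import SparkContext
sc = SparkContext(appName='ipython-pyspark')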
Celery worker stops consuming from a specific queue while it consumes from other queues | 32,602,968 | 1 | 0 | 1,570 | 0 | python,django,rabbitmq,celery | I found the problem in my code:
In one of my tasks I was opening a connection to parse using urllib3, and that connection was hanging.
After moving that portion out into an async task, things are working fine now. | 0 | 1 | 0 | 0 | 2015-08-06T19:12:00.000 | 1 | 1.2 | true | 31,863,996 | 0 | 0 | 1 | 1 | I am using rabbitmq as broker, and there is a strange behaviour that happens in my production environment only. Randomly, celery sometimes stops consuming messages from one queue while it keeps consuming from other queues.
This leads to a pileup of messages in the queue; if I restart celeryd, everything starts to work fine again.
"/var/logs/celeryd/worker" does not indicate any error. I am not even sure where to start looking, as I am new to Python/Django.
Any help will be greatly appreciated. |
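A hedged sketch of the fix the answer describes: isolate the blocking HTTP call in its own task and give urllib3 explicit timeouts so a dead peer raises an exception instead of hanging the worker. The task name, URL handling, and retry policy here are illustrative, not from the original code.

import urllib3
from celery import shared_task

http = urllib3.PoolManager()

@shared_task(bind=True, max_retries=3)
def fetch_remote(self, url):
    try:
        # Bounded I/O: connect/read timeouts mean a hung socket cannot stall the worker.
        response = http.request(
            'GET', url,
            timeout=urllib3.Timeout(connect=5.0, read=30.0),
            retries=False,
        )
    except urllib3.exceptions.HTTPError as exc:
        raise self.retry(exc=exc, countdown=10)
    return response.data.decode('utf-8', 'replace')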
flask application deployment: rabbitmq and celery | 31,885,764 | 1 | 0 | 603 | 0 | python,deployment | I don't see why you couldn't deploy on the same node (that's essentially what I do when I'm developing locally), but if you want to be able to rapidly scale you'll probably want them to be separate.
I haven't used rabbitmq in production with celery, but I use redis as the broker and it was easy for me to get redis as a service. The web app sends messages to the broker and worker nodes pick up the messages (and perhaps provide a result to the broker).
You can scale the web app, broker service (or the underlying node it's running on), and the number of worker nodes as appropriate. Separating the components allows you to scale them individually and I find that it's easier to maintain. | 0 | 1 | 0 | 0 | 2015-08-07T14:01:00.000 | 1 | 0.197375 | false | 31,879,606 | 0 | 0 | 1 | 1 | My web app is using celery for async job and rabbitmq for messaging, etc. The standard stuff. When it comes to deployment, are rabbitmq and celery normally deployed in the same node where the web app is running or separate? What are the differences? |
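A minimal sketch of that split, assuming a redis broker as in the answer (an amqp:// URL would slot in the same way for rabbitmq); the hostnames and the task are placeholders.

from flask import Flask
from celery import Celery

app = Flask(__name__)
# Only the broker/backend URLs change when the broker moves to its own node.
celery = Celery(app.import_name,
                broker='redis://broker-host:6379/0',
                backend='redis://broker-host:6379/1')

@celery.task
def add(x, y):
    return x + y

The web node queues work with add.delay(2, 2), and any number of worker nodes run a celery worker pointed at the same broker URL.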
How to run PyQt4 app with sudo privileges in Ubuntu and keep the normal user style | 42,756,312 | 0 | 0 | 1,223 | 0 | python,linux,qt,ubuntu,pyqt | This is a hacky solution.
Install qt4-qtconfig: sudo apt-get install qt4-qtconfig
Run sudo qtconfig or gksudo qtconfig.
Change GUI Style to GTK+.
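If you would rather not depend on root's qtconfig settings at all, here is a hedged PyQt4 sketch that forces the style in code; it assumes the GTK+ style plugin is available, and the bare QMainWindow stands in for your Qt Designer-generated window.

import sys
from PyQt4 import QtGui

app = QtGui.QApplication(sys.argv)
style = QtGui.QStyleFactory.create('GTK+')   # may return None if the style plugin is missing
if style is not None:
    app.setStyle(style)                      # explicit style, so root's Qt settings no longer matter
window = QtGui.QMainWindow()                 # stand-in for your real main window
window.show()
sys.exit(app.exec_())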
Edited. | 0 | 1 | 0 | 0 | 2015-08-08T13:11:00.000 | 2 | 0 | false | 31,893,477 | 0 | 0 | 0 | 1 | Ok the title explains it all. But just to clarify.
I have Ubuntu and programed a GUI app with Qt Designer 4 and PyQt4. The program works fine running python main.py in terminal.
Last week I made an update and now the program needs sudo privelages to start. So I type sudo python main.py.
But Oh my GODDDDDDD. What an ungly inteface came up. O.o
And I don't know how to get the realy nice normal-mode interface in my programm and all of my others programs i'll make. Is there any way to set a vaiable to python? Do I need to execute any command line code?
The program is deployed only in Linux machines.
P.S.
I searched a lot on the web and couldn't find a working solution.
Twisted unexpected connection lost | 32,285,162 | 0 | 2 | 860 | 0 | python,twisted | The only way to detect a cross-platform unexpected disconnection (unplug) is to implement an application-level ping message that pings clients at a regular interval. | 0 | 1 | 0 | 0 | 2015-08-09T11:10:00.000 | 2 | 0 | false | 31,903,574 | 0 | 0 | 0 | 0 | I wrote a TCP server using Python Twisted to send/receive binary data from clients.
When a client closes their application or calls the abortConnection method, I get the connectionLost event normally, but when the client disconnects unexpectedly I don't get the disconnect event, so I can't remove the disconnected client from the queue.
By unexpected disconnect I mean disabling the network adapter or losing the network connection somehow.
My question is, how can I handle this sort of unexpected connection loss?
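A hedged sketch of the application-level ping the answer suggests, using Twisted's LoopingCall; the PING byte, the 10-second interval, and the use of abortConnection are illustrative choices rather than the only way to do it.

from twisted.internet import protocol, task

PING = b'\x01'   # placeholder ping byte for your binary protocol

class PingingProtocol(protocol.Protocol):
    def connectionMade(self):
        self._alive = True
        self._pinger = task.LoopingCall(self._ping)
        self._pinger.start(10.0, now=False)       # ping every 10 seconds

    def _ping(self):
        if not self._alive:                        # nothing heard since the last ping
            self.transport.abortConnection()       # force connectionLost even on a dead link
            return
        self._alive = False
        self.transport.write(PING)

    def dataReceived(self, data):
        self._alive = True                         # any inbound traffic counts as life

    def connectionLost(self, reason):
        if self._pinger.running:
            self._pinger.stop()
        # remove this client from your queue here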
Running ApScheduler in Gunicorn Without Duplicating Per Worker | 31,929,832 | 0 | 7 | 1,182 | 0 | python,uwsgi,gunicorn,apscheduler | I'm not aware of any way to do this with either, at least not without some sort of RPC. That is, run APScheduler in a separate process and then connect to it from each worker. You may want to look up projects like RPyC and Execnet to do that. | 0 | 1 | 0 | 0 | 2015-08-10T02:22:00.000 | 1 | 0 | false | 31,910,812 | 0 | 0 | 1 | 1 | The title basically says it all. I have gunicorn running my app with 5 workers. I have a data structure that all the workers need access to that is being updated on a schedule by apscheduler. Currently apscheduler is being run once per worker, but I just want it run once, period. Is there a way to do this? I've tried using the --preload option, which lets me load the shared data structure just once, but it doesn't seem to let all the workers have access to it when it updates. I'm open to switching to uWSGI if that helps.
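A hedged sketch of that separate-process idea: keep APScheduler out of the gunicorn workers entirely, run it once on its own (for example under supervisor), and have it publish the structure somewhere every worker can read. The APScheduler 3.x API is assumed, and the rebuild/save helpers are placeholders for your own logic.

# scheduler_process.py -- run exactly once, outside gunicorn
from apscheduler.schedulers.blocking import BlockingScheduler

sched = BlockingScheduler()

@sched.scheduled_job('interval', minutes=5)
def refresh_shared_structure():
    data = rebuild_data()        # placeholder: recompute the shared structure
    save_to_shared_store(data)   # placeholder: e.g. write to redis or a DB the workers read

if __name__ == '__main__':
    sched.start()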
"make" builds wrong python version | 31,931,647 | 1 | 2 | 364 | 0 | linux,python-2.7,build,compilation,mod-wsgi | I'll document this here as the fix, also to hopefully get a comment from Graham as to why this might be needed;
Changing
make
to
LD_RUN_PATH=/usr/local/lib make
was the answer, but I had to use this when building both Python 2.7.10 and mod_wsgi. Without LD_RUN_PATH on the mod_wsgi build I still got the dreaded:
[warn] mod_wsgi: Compiled for Python/2.7.10.
[warn] mod_wsgi: Runtime using Python/2.7.3. | 0 | 1 | 0 | 0 | 2015-08-11T00:04:00.000 | 1 | 0.197375 | false | 31,931,087 | 1 | 0 | 0 | 1 | System : SMEServer 8.1 (CentOS 5.10) 64bit, system python is 2.4.3
There is an alt python at /usr/local/bin/python2.7 (2.7.3) which was built some time ago.
Goal : build python2.7.10, mod_wsgi, django. First step is python 2.7.10 to replace the (older and broken) 2.7.3
What happens:
When I build the latest 2.7 Python as shared, the wrong executable is built.
cd /tmp && rm -vrf Python-2.7.10 && tar -xzvf Python-2.7.10.tgz && cd Python-2.7.10 && ./configure && make && ./python -V
2.7.10 <- as expected
... but this won't work with mod_wsgi - we have to use --enable-shared.
cd /tmp && rm -vrf Python-2.7.10 && tar -xzvf Python-2.7.10.tgz && cd Python-2.7.10 && ./configure --enable-shared && make && ./python -V
2.7.3 <- Wrong version!
I'm deleting the entire build directory each time to isolate things and ensure I'm not polluting the folder with each attempt. Somehow the (years old) install of 2.7.3 is being 'found' by configure but only when '--enable-shared' is on.
cd /tmp && rm -vrf Python-2.7.10 && tar -xzvf Python-2.7.10.tgz && cd Python-2.7.10 && ./configure --prefix=/usr/local/ && make && ./python -V
2.7.10
cd /tmp && rm -vrf Python-2.7.10 && tar -xzvf Python-2.7.10.tgz && cd Python-2.7.10 && ./configure --enable-shared --prefix=/usr/local/ && make && ./python -V
2.7.3 <- ???
Where do I look to find out how make is finding the old version?
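A hedged, Linux-only way to see which libpython the freshly built binary actually loads at runtime (run it with the ./python you just built; the expectation in the comment assumes your 2.7.10 build):

# which_libpython.py -- run as: ./python which_libpython.py
import sys

print(sys.version)                      # expect 2.7.10 once linking is right
for line in open('/proc/self/maps'):
    if 'libpython' in line:
        print(line.split()[-1])         # path of the libpython the dynamic linker picked
        break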
linux switch between anaconda python 3.4 and 2.7 | 31,965,393 | 0 | 0 | 259 | 0 | python,python-2.7,python-3.4,anaconda | I think I can answer my own question. Python 2.7 seems to be the default. If I activate 3.x with
source activate py3k
I need to reboot to go back to 2.7, which, being the default, happens automatically.
If anyone knows a cleaner way, please let me know. | 0 | 1 | 0 | 0 | 2015-08-11T12:23:00.000 | 1 | 0 | false | 31,941,685 | 1 | 0 | 0 | 1 | I do most of my work in Python 2.7, but I've recently encountered some tutorials that require 3.4. Fine. I checked and Anaconda allows installation of both under Linux (Fedora 22 to be precise). However, now I seem to be stuck in 3.4. I followed the Anaconda directions, entering:
conda create -n py3k python=3 anaconda
source activate py3k
I would like to be able to choose between 2.7 and 3.4 each time I run Python. Any ideas? |
what is a robust way to execute long-running tasks/batches under Django? | 31,952,520 | 1 | 1 | 1,698 | 0 | python,django,batch-processing | I'm not sure how your celery configuration makes it unstable, but it sounds like celery is still the best fit for your problem. I'm using redis as the queue system, and in my experience it works better than rabbitmq. Maybe you can try it and see if it improves things.
Otherwise, just use cron as a driver to run periodic tasks. You can let it run your script periodically and update the database; your UI component will poll the database with no conflict. | 0 | 1 | 0 | 0 | 2015-08-11T21:31:00.000 | 1 | 1.2 | true | 31,952,327 | 0 | 0 | 1 | 1 | I have a Django app that is intended to be run on VirtualBox VMs on LANs. The basic user will be a savvy IT end-user, not a sysadmin.
Part of that app's job is to connect to external databases on the LAN, run some python batches against those databases and save the results in its local db. The user can then explore the systems using Django pages.
Run time for the batches isn't all that long, but it runs to minutes, potentially tens of minutes, not seconds. Runs are infrequent; I think you could go days without needing a refresh.
This is not celery's normal use case of long tasks which will eventually push the results back into the web UI via ajax and/or polling. It is more similar to a dev's occasional use of the django-admin commands, but this time intended for an end user.
The user should be able to initiate a run of one or several of those batches when they want in order to refresh the calculations of a given external database (the target db is a parameter to the batch).
Until the batches are done for a given db, the app really isn't useable. You can access its pages, but many functions won't be available.
It is very important, from a support point of view that the batches remain easily runnable at all times. Dropping down to the VMs SSH would probably require frequent handholding which wouldn't be good - it is best that you could launch them from the Django webpages.
What I currently have:
Each batch is in its own script.
I can run it on the command line (via if __name__ == "main":).
The batches are also hooked up as celery tasks and work fine that way.
Given the way I have written them, it would be relatively easy for me to allow running them from subprocess calls in Python. I haven't really looked into it, but I suppose I could make them into django-admin commands as well.
The batches already have their own rudimentary status checks. For example, they can look at the calculated data and tell whether they have been run and display that in Django pages without needing to look at celery task status backends.
The batches themselves are relatively robust and I can make them more so. This is about their launch mechanism.
What's not so great.
In my Mac dev environment I find the celery/celerycam/rabbitmq stack to be somewhat unstable. It seems as if sometimes rabbitmq's daemon balloons up in CPU/RAM use and then needs to be terminated. That mightily confuses the celery processes, and I find I have to kill -9 various tasks and relaunch them manually. Sometimes celery still works but celerycam doesn't, so there are no task updates. Some of these issues may be OS X-specific or may be due to the DEBUG flag being switched on for now, which celery warns about.
So then I need to run the batches on the command line, which is what I was trying to avoid, until the whole celery stack has been reset.
This might be acceptable on a normal website, with an admin watching over it. But I can't have that happen on a remote VM to which only the user has access.
Given that these are somewhat fire-and-forget batches, I am wondering if celery isn't overkill at this point.
Some options I have thought about:
writing a cleanup shell/Python script to restart rabbitmq/celery/celerycam and generally make it more robust, i.e. whatever is required to make celery & co. more stable. I've already used psutil to figure out whether the rabbit/celery processes are running and to display their status in Django.
Running the batches via subprocess instead and avoiding celery. What about django-admin commands here? Does that make a difference? Still needs to be run from the web pages.
an alternative task/process manager to celery, with less capability but also fewer moving parts?
not using subprocess but relying on Python's multiprocessing module? To be honest, I have no idea how that compares to launching via subprocess.
environment:
nginx, wsgi, ubuntu on virtualbox, chef to build VMs. |
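For the django-admin command route mentioned above, a hedged sketch (the app name, module path, and refresh_external_db helper are placeholders, and Django 1.8-style argument parsing is assumed). The point is that cron, a subprocess call from a view, and a support engineer over SSH all hit the same entry point.

# myapp/management/commands/run_batch.py -- hypothetical layout
from django.core.management.base import BaseCommand
from myapp.batches import refresh_external_db   # placeholder for the existing batch code

class Command(BaseCommand):
    help = 'Recalculate cached data for one external database'

    def add_arguments(self, parser):
        parser.add_argument('target_db')

    def handle(self, *args, **options):
        refresh_external_db(options['target_db'])
        self.stdout.write('Refreshed %s' % options['target_db'])

From the web UI it can then be launched with django.core.management.call_command('run_batch', 'some_db') or a subprocess call to manage.py run_batch some_db, and a crontab entry can drive the same command.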
why does elastic beanstalk not update? | 31,955,222 | 2 | 2 | 2,185 | 0 | python,amazon-web-services,amazon-elastic-beanstalk,pyramid | Are you committing your changes before deploying?
eb deploy will deploy the HEAD commit.
You can do eb deploy --staged to deploy staged changes. | 0 | 1 | 0 | 1 | 2015-08-12T02:14:00.000 | 1 | 0.379949 | false | 31,954,968 | 0 | 0 | 1 | 1 | I'm new to the world of AWS, and I just wrote and deployed a small Pyramid application. I ran into some problems getting set up, but after I got it working, everything seemed to be fine. However, now, my deployments don't seem to be making a difference in the environment (I changed the index.pt file that my root url routed to, and it does not register on my-app.elasticbeanstalk.com).
Is there some sort of delay to the deployments that I am unaware of, or is there a problem with how I'm deploying (eb deploy using the awsebcli package) that's causing these updates to my application to not show? |