Columns (name: type, range):
Title: string, lengths 15 to 150
A_Id: int64, 2.98k to 72.4M
Users Score: int64, -17 to 470
Q_Score: int64, 0 to 5.69k
ViewCount: int64, 18 to 4.06M
Database and SQL: int64, 0 to 1
Tags: string, lengths 6 to 105
Answer: string, lengths 11 to 6.38k
GUI and Desktop Applications: int64, 0 to 1
System Administration and DevOps: int64, 1 to 1
Networking and APIs: int64, 0 to 1
Other: int64, 0 to 1
CreationDate: string, lengths 23 to 23
AnswerCount: int64, 1 to 64
Score: float64, -1 to 1.2
is_accepted: bool, 2 classes
Q_Id: int64, 1.85k to 44.1M
Python Basics and Environment: int64, 0 to 1
Data Science and Machine Learning: int64, 0 to 1
Web Development: int64, 0 to 1
Available Count: int64, 1 to 17
Question: string, lengths 41 to 29k
Why are records about python command-line launches not saved in history?
28,583,843
1
0
45
0
python,bash,ubuntu,command-line,history
Bash usually saves all your commands in the history buffer except if you specifically mark them to be excluded. There is an environment variable HISTIGNORE which might be configured to ignore python invocations altogether, although this is somewhat unlikely; or you may be marking them for exclusion by typing a space before the command.
0
1
0
1
2015-02-18T12:23:00.000
1
0.197375
false
28,583,633
0
0
0
1
I quite often launch python scripts from the command line, like python somescript.py --with-arguments. Now I'm wondering why that is not saved in the output of the history command, and whether there is a way to see a history of it.
Error opening Ulipad for Python
37,771,773
0
0
336
0
python,python-2.7
I had the same problem and the solution was: uninstall Ulipad, then install it on a different disk, e.g. D:\Ulipad.
0
1
0
0
2015-02-18T15:52:00.000
2
0
false
28,587,843
0
0
0
2
I am familiar with R but new to Python. To use Python, I installed Python 2.7, set the environment variables and installed wxPython. Then, after installing Ulipad, I opened it but got this error message: The logfile 'C:\program file(x86)\Ulipad\Ulipad.exe.log' could not be opened: [Errno 13] Permission denied: 'C:\program file(x86)\Ulipad\Ulipad.exe.log'. Can you help me open Ulipad? Or is there any other good program like Ulipad? I am not good at programming and only familiar with R; Python's interface seems to be a little different from R's.
Error opening Ulipad for Python
29,441,423
0
0
336
0
python,python-2.7
This is happening because under newer Windows OSes (Vista, 7, 8), programs do not have write access to "C:\Program Files (x86)\" for security reasons. The easiest fix for your problem would be to uninstall the current installation and re-install it at a different location, e.g. C:\Ulipad. An alternative is to run Ulipad using the "Run as administrator" option, but that is not recommended.
0
1
0
0
2015-02-18T15:52:00.000
2
0
false
28,587,843
0
0
0
2
I am familiar with R but new to Python. To use Python, I installed Python 2.7, set the environment variables and installed wxPython. Then, after installing Ulipad, I opened it but got this error message: The logfile 'C:\program file(x86)\Ulipad\Ulipad.exe.log' could not be opened: [Errno 13] Permission denied: 'C:\program file(x86)\Ulipad\Ulipad.exe.log'. Can you help me open Ulipad? Or is there any other good program like Ulipad? I am not good at programming and only familiar with R; Python's interface seems to be a little different from R's.
How to auto-refresh a log.txt file
28,593,395
0
0
1,704
0
python,updates
The data isn't written to the latest.log file until the process writing it (probably the server) fills or flushes the buffer. There probably isn't any way to change that from within Python. The best bet is to see if you can configure the writing process to flush after each line.
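For the "view it live" part, a minimal sketch that follows the file and prints lines as they are flushed (the 0.5 s poll interval is an arbitrary choice):

    import sys
    import time

    log = open('latest.log', 'r')
    log.seek(0, 2)           # 2 = os.SEEK_END: start at the current end of the file
    while True:
        line = log.readline()
        if line:
            sys.stdout.write(line)
        else:
            time.sleep(0.5)  # no new data yet; poll again shortly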
0
1
0
0
2015-02-18T18:15:00.000
2
0
false
28,590,903
0
0
0
1
I am making a python program for a Minecraft server that automatically bids on items up to a certain price. In appdata/roaming/.minecraft/logs there is a chat log called "latest.log". This log is constantly being updated with what everyone on my server is saying over chat. If I open it and view the text, the log doesn't automatically update (obviously). How would I use a python script to print every line in my log and automatically update? I am on Windows 8.1 with Python 2.7.9
Vim python-mode plugin picks up system python and not anaconda
28,592,881
2
0
513
0
vim,python-mode
Not trivial. Python-mode uses the Python interpreter Vim is linked against; you'll have to recompile Vim and link it against Anaconda.
0
1
0
1
2015-02-18T19:51:00.000
1
1.2
true
28,592,624
1
0
0
1
When I use python-mode it uses my system (mac) python. I have anaconda installed and want Vim to autocomplete etc. with that version of python. As it stands now, python-mode will only autocomplete modules from the system python and not any other modules, e.g. pandas, that are installed in the anaconda distro. Thanks, Tobie
Using Python Shell with any text editor
28,593,838
5
1
240
0
python,sublimetext
You can use Ctrl-B to run your python file in Sublime. If you want to use a different interpreter, you can customise it under Tools -> Build System.
0
1
0
0
2015-02-18T20:53:00.000
3
1.2
true
28,593,711
1
0
0
2
I use Sublime Text and am using the terminal to run my code. I would prefer to use the Python Shell to run my code, as it has color and is not so hard to look at. Is there any easy way to do this other than saving then opening in IDLE?
Using Python Shell with any text editor
28,593,841
3
1
240
0
python,sublimetext
Stick with Sublime Text. It's a popular text editor with syntax highlighting for several different programming languages. Here's what you need to do: press Ctrl + Shift + P to bring up the command palette and enter "python". Choose the option that says something like "Set syntax to Python". Enter Python code, then Ctrl + Shift + B to build the project. The code will run below in another view (you will probably be able to move it to the side). This is the standard procedure for a python setup in Sublime Text, but you may need to install SublimeREPL for python in order to get user input. Just give it a Google search.
0
1
0
0
2015-02-18T20:53:00.000
3
0.197375
false
28,593,711
1
0
0
2
I use Sublime Text and am using the terminal to run my code. I would prefer to use the Python Shell to run my code, as it has color and is not so hard to look at. Is there any easy way to do this other than saving then opening in IDLE?
Django app - deploy using UWSGI or Phusion Passenger
28,599,998
2
1
760
0
python,django,deployment,passenger,uwsgi
Production performance is pretty much the same, so I wouldn't worry about that. uWSGI has some advanced built-in features like clustering and a cron API, while Phusion Passenger is more minimalist; on the other hand, Phusion Passenger provides friendlier tools for administration and inspection (e.g. passenger-status, passenger-memory-stats, passenger-config system-metrics).
0
1
0
0
2015-02-19T01:27:00.000
1
0.379949
false
28,597,205
0
0
1
1
Which way of deploying a Django app is better (or maybe the better question is what the pros and cons are): uWSGI or Phusion Passenger? In my particular case the most important advantage of Passenger is ease of use (on my hosting I only need to place a single file in the project directory and it's done), but what about performance, etc.? What do you think?
Opening Python program from Anaconda
28,635,473
0
0
407
0
python,anaconda
%run is a command that's run from inside of IPython. To use it, you should start ipython first. Or just run python program.py (if your program is named program.py).
0
1
0
0
2015-02-19T07:17:00.000
1
0
false
28,600,606
1
0
0
1
I have opened Anaconda and then maneuvered to the directory where a certain python program I want to run actually lies. I then tried the %run command, but the command does not seem to work! So how am I to run that program? Does anyone know the right command to use in the black Anaconda console command line to run a Python program that exists in the directory the command line has been taken to?
Django's "call_command" hangs the application
28,642,118
1
1
347
0
python,django,parallel-processing,celery,mongoengine
When you make synchronous calls to external systems, they tie up a thread in the application server; which application server you choose and how many concurrent threads/users you have will determine whether doing it that way works for you. Usually when you have long-running requests like that, it is a good idea to hand them to a background processing system such as celery, as you suggest.
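A hedged sketch of the celery route (the task and management-command names are hypothetical):

    from celery import shared_task
    from django.core.management import call_command

    @shared_task
    def collect_files():
        # The slow ssh/copy/parse work runs in a worker process,
        # so web requests are not blocked while it executes.
        call_command('collect_files')  # hypothetical command name

    # In the view, instead of calling call_command directly:
    #     collect_files.delay()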
0
1
0
0
2015-02-19T11:50:00.000
1
1.2
true
28,605,646
0
0
1
1
I'm working on a project that uses Django and mongoengine. When a user presses a button, a call_command (from django.core.management; it just calls a script, it seems to me) is triggered, which sshes to multiple servers in parallel, copies some files, parses them and stores them in the database. The problem is that while the above process is running after the button is pressed, if any other user tries to use the website, it doesn't load. Is this because of mongo's lock? This happens as soon as the button is pressed (when the connections to the other servers are still being made, before writing to the DB), so I was thinking it's not a mongo issue. So is it a Django issue, calling the command synchronously? Do I need to use Celery for this task?
Deleting Contents of a folder selectively with python
28,608,827
0
1
897
0
python,shutil
rmtree does not appear to have any kind of filtering mechanism that you could use; further, since part of its functionality is to remove the directory itself, and not just its contents, it wouldn't make sense to. If you could do something to the file so that rmtree's attempt to delete it fails, you can have rmtree ignore such errors, thus leaving your file but deleting the others. If you cannot, you could resort to os.walk to loop over the contents of your directory, and thus decide which items to remove for yourself.
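A sketch of the os.walk route, deleting everything except one file and pruning the directories that end up empty (the directory and file names are hypothetical):

    import os

    target_dir = 'path/to/folder'   # hypothetical directory to clean out
    KEEP = 'keep_me.txt'            # hypothetical name of the file to preserve

    # Walk bottom-up so emptied subdirectories can be removed afterwards.
    for root, dirs, files in os.walk(target_dir, topdown=False):
        for name in files:
            if name != KEEP:
                os.remove(os.path.join(root, name))
        for name in dirs:
            path = os.path.join(root, name)
            if not os.listdir(path):   # only remove directories that are now empty
                os.rmdir(path)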
0
1
0
0
2015-02-19T14:15:00.000
3
0
false
28,608,641
1
0
0
1
I would periodically like to delete the contents of a Windows directory, which includes files and subdirectories that contain more files. However, there is one specific file that I do not want to remove (it is the same file every time). I am using shutil.rmtree to delete the contents of a folder, but I am deleting the file I wish to keep also. How would I make an exception preventing the removal of the file I would like to keep, and is shutil the best method for this?
Syntax errors for keywords in pydev plugin for Eclipse
28,972,434
0
0
303
0
eclipse,python-3.x,pydev
It seems like Eclipse Luna does not provide support for PyDev when it's installed with Aptana. I was able to install Aptana without PyDev and do a separate install of Pydev on its own and this solved the problem.
0
1
0
1
2015-02-19T19:39:00.000
1
0
false
28,615,418
0
0
0
1
I'm using the PyDev plugin for Eclipse Luna for Java EE. The python code runs correctly, but errors are showing up for built-in keywords like print (Error: Undefined variable: print). I looked on stackoverflow for other answers, and the suggestions have all been to manually configure an interpreter. I changed my interpreter to point at C:/python34/python.exe, but this has not fixed the problem. I also made sure that I was using grammar version 3.0. Update: I think it might be a problem with Aptana instead of PyDev. I uninstalled Aptana and installed PyDev without any issues, but when I try to reinstall Aptana, I can only do it by uninstalling PyDev. I need a way to try a previous version of Aptana, or else a way to install Aptana and PyDev separately.
DOS Batch multicore affinity not working
28,848,968
1
0
632
0
python,batch-file,cmd,dos,affinity
This is more of an answer to a question that arose in comments, but I hope it might help; I have to add it as an answer only because it grew too large for the comment limits. There seems to be a misconception about two things here: what "processor affinity" actually means, and how the Windows scheduler actually works. What SetProcessAffinityMask(...) means is "which processors this process (i.e. all threads within the process) can run on," whereas SetThreadAffinityMask(...) is distinctly thread-specific. The Windows scheduler (at the most base level) makes absolutely no distinction between threads and processes - a "process" is simply a container that contains one or more threads. IOW (and over-simplified): there is no such thing as a process to the scheduler; "threads" are the schedulable things, and processes have nothing to do with this ("processes" are more life-cycle-management issues about open handles, resources, etc.). If you have a single-threaded process, it does not matter much what you set the "process" affinity mask to: that one thread will be scheduled by the scheduler (for whatever masked processors) according to 1) which processor it was last bound to - the ideal case, with less overhead; 2) whichever processor is next available for a given runnable thread of the same priority (more complicated than this, but that is the general idea); and 3) possibly transient issues about priority inversion, waitable objects, kernel APC events, etc. So to answer your question (much more long-windedly than expected): "But if I will use a multicore X like 15 or F or 0xF (meaning in my opinion all 4 cores) it will still run only on the first core". What I said earlier about the scheduler attempting to use the most-recently-used processor is important here: if you have an (essentially) single-threaded process, the scheduling algorithm goes for the most-optimistic approach: the previously-bound CPU for the switchback (likely cheaper for the CPU/main-memory cache, prior branch-prediction evaluation, etc.). This explains why you'll see an app (regardless of process-level affinity) with only one (again, caveats apply here) thread seemingly "stuck" to one CPU/core. So: what you are effectively doing with the "/affinity X" switch is 1) constraining the scheduler to only schedule your threads on a subset of CPU cores (i.e. not all), 2) limiting them to a subset of what the scheduler kernel considers "available for the next runnable thread switch-to", and 3) if they are not multithreaded apps (capable of taking advantage of that), "more cores" does not really help anything - you might just be bouncing that single thread of execution around to different cores (although the scheduler tries to minimize this, as described above). That is why your threads are "sticky" - you are telling the scheduler to make them so.
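The question is about a batch file, but if it helps to experiment from Python, the third-party psutil package (an assumption; it is not mentioned in the thread) can read and set the same per-process mask:

    import psutil

    p = psutil.Process()          # the current process
    print(p.cpu_affinity())       # e.g. [0, 1, 2, 3]
    p.cpu_affinity([0, 1, 2, 3])  # allow scheduling on all four cores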
0
1
0
0
2015-02-20T13:39:00.000
2
1.2
true
28,630,336
1
0
0
1
I have a batch file that launches a few executables (.exe) and python (.py) scripts to process some data. With start /affinity X mybatch.bat it works as it should only if X equals 0, 2, 4 or 8 (the individual cores). But if I use a multicore X like 15 or F or 0xF (meaning, in my opinion, all 4 cores) it will still run only on the first core. Does it have to do with the fact that the batch calls .exe files that maybe cannot be affinity-controlled this way? OS: Windows 7 64bit
Ubuntu and Ironpython: What paths to add to sys.path AND how to import fcntl module?
28,673,847
3
1
456
0
python-2.7,ubuntu,mono,ironpython,fcntl
As far as I can see, the fcntl module of cPython is a builtin module (implemented in C) - those modules need to be explicitly implemented for most alternative Python interpreters like IronPython (in contrast to the modules implemented in plain Python), as they cannot natively load Python C extensions. Additionally, it seems that there currently is no such fcntl implementation in IronPython. There is a Fcntl.cs in IronRuby, however, maybe this could be used as a base for implementing one in IronPython.
0
1
0
1
2015-02-21T16:41:00.000
1
1.2
true
28,648,230
0
0
0
1
I have the latest IronPython version built and running on Ubuntu 14.04 through Mono. Building IronPython and running it with Mono seems trivial, but I am not convinced I have the proper sys.path entries or permissions for IronPython to import modules, especially modules like fcntl. Running ensurepip runs a subprocess, which wants to import fcntl. There are numerous posts already out there, but mostly regarding Windows. As I understand it, fcntl is part of the unix python2.7 standard library. To start, the main problem seems to be that IronPython has no idea where this is, but I also suspect that since fcntl seems to be perl, or at least not pure python, there is more to the story. So my related sys.path questions are: In Ubuntu, where should I install IronPython (the IronLanguages folder)? Are there any permissions I need to set? What paths should I add to sys.path to get IronPython's standard library found? What paths should I add to sys.path to get Ubuntu's python 2.7 installed modules? What paths should I add to sys.path, or what methods should I use, to get fcntl to import properly in IronPython? Any clues on how to work around known issues installing pip through ensurepip using mono ipy.exe X:Frames ensurepip? Thanks!
Extract python script from exe generated with cx_Freeze
28,671,674
0
0
916
0
python,exe,cx-freeze
It seems like using Cython will make it impossible to get the script back.
0
1
0
0
2015-02-22T01:35:00.000
1
1.2
true
28,653,502
1
0
0
1
Is it possible to get the .py text file back from an .exe file generated with cx_Freeze? If yes, how can I prevent it when I generate the exe? I don't want anybody to see my python code. Of course anybody will have access to the bytecode, but it is much harder to disassemble.
Python & MapReduce: beyond basics -- how to do more tasks on one database
28,762,585
1
2
107
0
python,hadoop,mapreduce,hadoop-streaming
This question seems very generic to me. Chains of map-reduce jobs are the most common pattern for production-ready solutions. But as programmers, we should always try to use as few MR jobs as possible for the best performance (you have to be smart in selecting your key-value pairs for the jobs in order to do this), though of course it depends on the use case. Some people use different combinations of Hadoop Streaming, Pig, Hive, Java MR, etc. jobs to solve one business problem. With the help of a workflow management tool like Oozie, or bash scripts, you can set the dependencies between the jobs. And for exporting/importing data between an RDBMS and HDFS, you can use Sqoop. This is a very basic answer to your query; if you want further explanation on any point, let me know.
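As a sketch of keeping it to one job by choosing composite keys, here is a hadoop-streaming mapper that counts several columns at once (the tab-separated column positions are assumptions, not from the question):

    #!/usr/bin/env python
    import sys

    # Emit one composite key per dimension so a single MR job can
    # count categories, geographies and types simultaneously.
    for line in sys.stdin:
        fields = line.rstrip('\n').split('\t')
        category, geo, typ = fields[0], fields[1], fields[2]
        print('category:%s\t1' % category)
        print('geo:%s\t1' % geo)
        print('type:%s\t1' % typ)

The matching reducer then just sums the counts per composite key, exactly as in the single-column case.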
0
1
0
0
2015-02-23T07:18:00.000
1
1.2
true
28,668,641
0
0
0
1
I have a huge txt data store on which I want to gather some stats. Using Hadoop-streaming and Python I know how to implement a MapReduce job that gathers stats on a single column, e.g. counting how many records there are for each of 100 categories: I create a simple mapper.py and reducer.py and plug them into the hadoop-streaming command as -mapper and -reducer respectively. Now I am at a bit of a loss as to how to practically approach a more complex task: gathering various stats on various other columns in addition to the categories above (e.g. geographies, types, dates, etc.). All that data is in the same txt files. Do I chain the mapper/reducer tasks together? Do I pass initially long key-value pairs (with all data included) and "strip" them of interesting values one by one while processing? Or is this the wrong path? I need practical advice on how people "glue" various MapReduce tasks for a single data source from within Python.
EndpointNotFound: public endpoint for hpext:dns service in RegionOne region not found
28,739,765
0
1
961
0
python-2.7,openstack-neutron
I was able to solve this; it was my mistake. I had exported hpext:dns in the keystone_admin and .bashrc files. That value is very much specific to people using HP Cloud and logging into their geos.
0
1
0
0
2015-02-24T10:02:00.000
2
0
false
28,692,809
0
0
0
2
I have installed the designate client on the same box where the designate server is running with OpenStack Juno. After setting up the environment by issuing . .venv/bin/activate and the keystone variables by sourcing keystonerc_admin, when I try to run the designate --debug server-list command I get this error: EndpointNotFound: public endpoint for hpext:dns service in RegionOne region not found. Please help me out.
EndpointNotFound: public endpoint for hpext:dns service in RegionOne region not found
28,767,745
0
1
961
0
python-2.7,openstack-neutron
Yes, that value is from before designate was an incubated project, when it was running in HP Cloud. The standard 'dns' service should be used by anyone not using the HP Public Cloud service (it is the default in python-designateclient, so you shouldn't have to do anything).
0
1
0
0
2015-02-24T10:02:00.000
2
0
false
28,692,809
0
0
0
2
I have installed the designate client on the same box where the designate server is running with OpenStack Juno. After setting up the environment by issuing . .venv/bin/activate and the keystone variables by sourcing keystonerc_admin, when I try to run the designate --debug server-list command I get this error: EndpointNotFound: public endpoint for hpext:dns service in RegionOne region not found. Please help me out.
Elastic Load Balancing with Tornado
28,732,773
0
0
808
0
python,amazon-web-services,tornado,amazon-elb
It is possible. For example, our setup is ELB -> nginx -> tornado. nginx is used for app-specific proxying, caching and header magic, but it can be dropped from this chain or replaced with something else.
0
1
0
0
2015-02-25T01:22:00.000
1
0
false
28,709,535
0
0
0
1
I haven't been able to find any solid information online. I'm curious to know if it's possible (and how) to use the Elastic Load Balancing (ELB) service with Tornado. If it isn't, what's the best alternative for using AWS as a scalable option with Tornado?
Python bottle: iterate through folder in app's route or in template?
28,722,748
1
1
105
0
python,bottle
In general, a best practice is to do the work in the app, and do (only) presentation in the template. This keeps your so-called business logic as separate as possible from your rendering. Even if it wasn't a bad idea, I don't even know how you could walk through a directory of files from within a template. The subset of Python that's available to you in a template is pretty constrained. Hope that helps!
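A minimal sketch of that split, with the filesystem work in the route handler and only plain data handed to the template (the folder and template names are hypothetical):

    import os
    from bottle import Bottle, template

    app = Bottle()

    @app.route('/gallery')
    def gallery():
        media_dir = 'media/gallery1'          # hypothetical folder from the YAML file
        files = sorted(os.listdir(media_dir))
        # All the iteration happens here; the template only renders the list.
        return template('gallery', files=files)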
0
1
0
0
2015-02-25T08:17:00.000
1
0.197375
false
28,714,197
0
0
1
1
I'm beginning to work on a Python 3.4 app to serve a little website (mostly media galleries) with the bottle framework. I'm using bottle's simple template engine. I have a YAML file pointing to a folder which contains images and other YAML files (with metadata for videos). The app or the template should then grab all the files and treat them according to their type. I'm now at the point where I have to decide whether I should iterate through the folder within the app (in the function behind the @app.route decorator) or in the template. Is there a difference in performance / caching between these two approaches? Where should I place my iteration loops for the best performance and the most "pythonic" way?
Rounding up Dependencies for PyQt
30,196,277
1
0
191
0
python,qt,pyqt,cups
I got around the CUPS sandboxing by having the backend send the information to a listening server on localhost that then processed the job as I needed it. I made sure that the server listening would only accept connections from localhost. I never was able to get pyinstaller or cx_freeze to work with PyQt, but this workaround was a better alternative.
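A minimal sketch of the listening side (the port number and handler are hypothetical; binding to 127.0.0.1 is what restricts connections to localhost):

    import socket

    def handle_job(job):
        print('received %d bytes' % len(job))  # hypothetical: hand off to the PyQt app

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(('127.0.0.1', 9123))   # local-only address; remote hosts cannot connect
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        job = conn.recv(65536)      # the print-job data sent by the CUPS backend
        handle_job(job)
        conn.close()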
1
1
0
0
2015-02-25T15:42:00.000
1
1.2
true
28,723,232
0
0
0
1
I have been trying to get a program I wrote in PyQt to work when called from a CUPS backend on OS X. The problem is that CUPS sandboxing keeps the program from being able to access the PyQt python modules which I have brewed in /usr/local/Cellar. Is there any way to grab those files, as well as the Qt files in the Cellar, and put them all in one contained folder? It is simple for other modules, but PyQt depends on a lot itself. I tried using pyinstaller and cx_freeze, but with no luck. How can I round up all my application's dependencies into one location?
Installed python-mode; now I only see class names in my file
35,023,398
0
0
132
0
python,vim,python-mode
If you are not happy with folding, you can disable python-mode folding by keeping let g:pymode_folding = 0 in ~/.vimrc. What I usually do is enable folding and use the space bar to open folds. I also set set foldclose=all to automatically close folds again when the cursor leaves them.
0
1
0
1
2015-02-25T16:49:00.000
2
0
false
28,724,782
1
0
0
2
I installed python-mode for vim on my Mac OS X system. I decided to try one of the python motion commands: I hit [C, which I thought would go to the next class. But the screen also switched to show ONLY class names in gray highlighting. I've searched the python-mode documentation, and I can't see anything about this happening, and therefore no way to undo it. Well, I thought, I will just quit and reload, and everything will be fine. But no! When I come back into the file, it opens as I left it, with just the class names showing, highlighted in gray, and indications of line numbers. How do I get out of this "mode" or whatever I am stuck in?
Installed python-mode; now I only see class names in my file
28,724,868
1
0
132
0
python,vim,python-mode
It sounds like you've discovered the "folding" feature of Vim. Press zo to open one fold under the cursor. zO opens all folds under the cursor. zv opens just enough folds to see the cursor line. zR opens all folds. See :help folding for details.
0
1
0
1
2015-02-25T16:49:00.000
2
0.099668
false
28,724,782
1
0
0
2
I installed python-mode for vim on my Mac OS X system. I decided to try one of the python motion commands: I hit [C, which I thought would go to the next class. But the screen also switched to show ONLY class names in gray highlighting. I've searched the python-mode documentation, and I can't see anything about this happening, and therefore no way to undo it. Well, I thought, I will just quit and reload, and everything will be fine. But no! When I come back into the file, it opens as I left it, with just the class names showing, highlighted in gray, and indications of line numbers. How do I get out of this "mode" or whatever I am stuck in?
How do I get python to recognize a module from any directory?
28,770,249
1
2
1,420
0
python,installation,anaconda,packages,pydicom
You shouldn't copy the source to site-packages directly. Rather, use python setup.py install in the source directory, or use pip install .. Make sure your Python is indeed the one in /usr/local/anaconda, especially if you use sudo (which in general is not necessary and not recommended with Anaconda).
0
1
0
1
2015-02-26T20:06:00.000
1
1.2
true
28,751,764
1
0
0
1
I'm installing some additional packages into anaconda and I can't get them to work. One such package is pydicom, which I downloaded, unzipped, and moved to /usr/local/anaconda/lib/python2.7/site-package/pydicom. In the pydicom folder there is a subfolder called source which contains both ez_setup.py and setup.py. I ran sudo python setup.py install, which didn't spit out any errors, and then ran sudo python ez_setup.py install when I still couldn't get the module to open in ipython. Now I can successfully import dicom, but ONLY when my current directory is /usr/local/anaconda/lib/python2.7/site-package/pydicom/source. How do I get it so I can import it from any directory? I'm running CentOS and I put export PATH=/usr/local/anaconda/bin:$PATH export PATH=/usr/local/anaconda/lib/python2.7/:$PATH in my .bashrc file.
Can I have GCS private isolated buckets with a unique api key per bucket?
28,754,390
1
1
56
0
google-app-engine,google-cloud-storage,google-app-engine-python
Unfortunately you only have two good options here: (1) have a service which authenticates the individual app according to whatever scheme you like (some installation license, a random GUID assigned at creation time, whatever) and vends GCS signed URLs, which the end user can then use for a single operation, like uploading an object or listing a bucket's contents - the downside is that all requests must involve your service, and all resources would belong entirely to your application; or (2) abandon the "without asking the user to login" requirement and require a single Google login at install time.
0
1
1
0
2015-02-26T21:26:00.000
1
0.197375
false
28,753,061
0
0
0
1
I'd like to give to each of my customers access to their own bucket under my GCS enabled app. I also need to make sure that a user's bucket is safe from other users' actions. Last but not least, the customer will be a client application, so the whole process needs to be done transparently without asking the user to login. If I apply an ACL on each bucket, granting access only to the user I want, can I create an API key only for that bucket and hand that API key to the client app to perform GCS API calls?
How do I show a list of processes for the current user using python?
28,756,912
-2
3
4,574
0
python,process
popen works great because you can run things through grep, cut, etc., so you can tailor the info to exactly what you want.
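For example, a sketch that filters ps to the current user without needing grep or cut (the -u/-o options assume a Linux procps ps):

    import getpass
    import subprocess

    # List the current user's processes by shelling out to ps.
    user = getpass.getuser()
    output = subprocess.check_output(['ps', '-u', user, '-o', 'pid,cmd'])
    print(output.decode())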
0
1
0
0
2015-02-27T02:09:00.000
6
-0.066568
false
28,756,362
0
0
0
1
I know it has something to do with /proc but I'm not really familiar with it.
How to share memcache items across 2 apps locally using the google app engine sdk
28,804,617
2
1
63
0
python,google-app-engine
Actually two different App Engine apps cannot see the same items in memcache. Their memcache spaces are totally isolated from each other. However two different modules of the same app use the same memcache space and can read and write the same items. Modules act like sub-apps. Is that what you meant? It is also possible to have different versions of an app (or module) running at the same time (for example to do A/B testing), and these also use the same memcache space.
0
1
0
0
2015-03-01T13:04:00.000
1
1.2
true
28,793,857
0
0
1
1
I have 2 Google App Engine applications which share memcache items; one app writes the items and the other app reads them. This works in production. However, locally using the SDK, items written by one app are not available to the other. Is there a way to make this work?
Best way to stop a Python script even if there are Threads running in the script
28,799,878
2
0
238
0
python,multithreading
To me, this looks like a pristine application for the subprocess module, i.e. do not run the test scripts within the same python interpreter; rather, spawn a new process for each test script. Do you have any particular reason to run them in the same interpreter instead of spawning a new process? Having a sub-process isolates the scripts from each other, including imports and other global variables. If you use subprocess.Popen to start the sub-processes, you have a .terminate() method to kill the process if need be.
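A minimal sketch of that, with a polling timeout that also works on Python 2.7 (the script name and time budget are hypothetical):

    import subprocess
    import time

    # Run each test script in its own process and kill it if it does not
    # finish in time, no matter what threads it left hanging.
    proc = subprocess.Popen(['python', 'test_example.py'])  # hypothetical script
    deadline = time.time() + 600                            # ten-minute budget
    while proc.poll() is None and time.time() < deadline:
        time.sleep(1)
    if proc.poll() is None:     # still running: rogue threads, most likely
        proc.terminate()        # escalate to proc.kill() if it ignores this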
0
1
0
1
2015-03-01T22:02:00.000
2
1.2
true
28,799,663
1
0
0
1
I have a python program run_tests.py that executes test scripts (also written in python) one by one. Each test script may use threading. The problem is that when a test script unexpectedly crashes, it may not have a chance to tidy up all open threads (if any), hence the test script cannot actually complete due to the threads that are left hanging open. When this occurs, run_tests.py gets stuck because it is waiting for the test script to finish, but it never does. Of course, we can do our best to catch all exceptions and ensure that all threads are tidied up within each test script so that this scenario never occurs, and we can also set all threads to daemon threads, etc, but what I am looking for is a "catch-all" mechanism at the run_tests.py level which ensures that we do not get stuck indefinitely due to unfinished threads within a test script. We can implement guidelines for how threading is to be used in each test script, but at the end of the day, we don't have full control over how each test script is written. In short, what I need to do is to stop a test script in run_tests.py even when there are rogue threads open within the test script. One way is to execute the shell command killall -9 <test_script_name> or something similar, but this seems to be too forceful/abrupt. Is there a better way? Thanks for reading.
Multiple executables as a single file
28,828,064
0
0
202
0
python,bash,scripting,path,packing
Thanks for the answers! Of course this is not a good approach for publishing code; sorry if that was confusing. But it is a good approach if you are developing, say, a scientific idea and wish to obtain a proof-of-concept result fast, doing similar tasks several times while quickly swapping out parts of the algorithm. Note that many codes are sometimes available for parts of the task, and these sometimes need to be modified a bit (a few lines). I am a big believer in re-implementing everything, but first it is good to know whether it is worth doing! As a compromise: can I call a script externally that is wrapped in a tar or zip and is not compressed? Thanks again, J.
0
1
0
1
2015-03-02T15:44:00.000
2
0
false
28,813,775
0
0
0
1
For some routine work I have found that combining different scripting languages can be the fastest way to get things done. I have a main bash script which calls some awk, python and bash scripts, and even some compiled fortran executables. I can put all the files into a folder that is in the path, but that makes modification a bit slower. If I need a new copy with some modifications, I need to add another path to $PATH as well. Is there a way to merge these files into a single executable? For example: tar all the files together and explain somehow that the main script is main.sh? This way I could simply vi the file, modify, run, modify, run... but I could also move the file between folders and machines easily. Dependencies could be handled properly too (executing the tar could set PATH itself). I hope this dream exists! Thanks for the comments! Janos
only run python script if it is git committed
28,814,943
1
1
132
0
python,git
You could consider running your script from a separate checkout to where you do your development. That way you would need to commit, push locally, and pull in the 'deployment' location before you could run the updated script. You could probably automate those steps with a shell script or even a git commit hook.
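A lighter alternative to the separate-checkout setup (a different technique than the answer proposes) is to have the script refuse to start when its own tree is dirty; a sketch:

    import subprocess
    import sys

    # Refuse to run when the working tree differs from the last commit.
    status = subprocess.check_output(['git', 'status', '--porcelain'])
    if status.strip():
        sys.exit('Uncommitted changes found; commit before running.')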
0
1
0
1
2015-03-02T16:34:00.000
2
1.2
true
28,814,845
1
0
0
1
I have a slightly unusual situation: I have scripts that I make small changes to frequently and that take hours to execute. I save output logs, but more importantly I need to make sure that the code which produced a given log will not be lost. Committing changes before each run would work, but I'd like to enforce this automatically by preventing my code from running if git is not up to date. Is there a simple way to do this, or is running shell commands and scraping output my best bet?
GAE blobstore upload fails with CSRF token missing
28,857,208
0
0
228
0
python,google-app-engine,flask,blobstore,flask-wtforms
OK, so the real problem was that I was giving an absolute URL as the success path argument (i.e. the first argument) of blobstore.create_upload_url(), so the request notifying about the success caused a CSRF error when loading the root path (/). I changed it to a path relative to the root, and now just using @csrf.exempt as normal works fine.
0
1
0
0
2015-03-03T22:27:00.000
2
0
false
28,843,234
0
0
1
1
I'm running flask on app engine. I need to let users upload some files. For security reasons I have csrf = CsrfProtect(app) on the whole app, with specific url's exempted using the @csrf.exempt decorator in flask_wtf. (Better to implicitly deny than to implicitly allow.) Getting an upload url from blobstore with blobstore.create_upload_url works fine, but the upload itself fails with a 400; CSRF token missing or incorrect. This problem is on the development server. I have not tested it on the real server, since it is in production. How do I exempt the /_ah/ path so the uploads work?
Task queue: Allow only one task at a time per user
28,849,410
1
2
406
0
python,google-app-engine,task-queue
You can specify as many queues as you like in queue.yaml rather than just using the default push queue. If you feel that no more than, say, five users at once are likely to contest for simultaneous use of them then simply define five queues. Have a global counter that increases by one and wraps back to 1 when it exceeds five. Use it to assign which queue a given user gets to push his or her tasks to at the time of the request. With this method, when you have six or more users concurrently adding tasks, you are no worse off than you currently are (in fact, likely much better off). If you find the server overloading, turn down the default "rate: 5/s" to a lower value for some or all of the queues if you have to, but first try lowering the bucket size, because turning down the rate is going to slow things down when there are not multiple users. Personally, I would first try only turning down the four added queues and leave the first queue fast to solve this if you have performance issues that you can't resolve by tuning the bucket sizes.
0
1
0
0
2015-03-04T07:27:00.000
1
0.197375
false
28,848,740
0
0
1
1
In my application, I need to allow only one task at a time per user. I have seen that we can set max_concurrent_requests: 1 in queue.yaml, but this allows only one task at a time in a queue. When a user clicks a button, a task is initiated and it adds 50 tasks to the queue. If 2 users click the button at almost the same time, the total task count will be 100. If I give max_concurrent_requests: 1, it will run only one task from either of these users. How do I handle this situation?
Python Subprocess for Notepad
28,854,681
0
2
1,546
0
python,subprocess,popen,notepad
Found the exact solution from Alex K's comment. I used pywinauto to perform this task.
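For reference, a minimal pywinauto sketch (the UntitledNotepad window name assumes an English-language Notepad, and a reasonably recent pywinauto):

    from pywinauto.application import Application

    # Start Notepad and type into its edit control.
    app = Application().start('notepad.exe')
    app.UntitledNotepad.Edit.type_keys('Hello from python', with_spaces=True)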
0
1
0
0
2015-03-04T11:59:00.000
3
0
false
28,853,923
1
0
0
1
I am trying to open Notepad using popen and write something into it. I can't get my head around it. I can open Notepad using the command: notepadprocess=subprocess.Popen('notepad.exe') I am trying to work out how I can write anything into the text file using python. Any help is appreciated.
Bloomberg API Python 3.5.5 with C++ 3.8.1.1. on Mac OS X import blpapi referencing
29,039,670
7
3
3,384
0
python,c++,macos,installation,blpapi
There is a missing step in the Python SDK README file; it instructs you to set BLPAPI_ROOT in order to build the API wrapper, but this doesn't provide the information needed at runtime to be able to load it. If you unpacked the C/C++ SDK into '/home/foo/blpapi-sdk' (for example), you will need to set DYLD_LIBRARY_PATH to allow the runtime dynamic linker to locate the BLPAPI library. This can be done as so: $ export DYLD_LIBRARY_PATH=/home/foo/blpapi-sdk/Darwin
0
1
0
0
2015-03-05T09:32:00.000
1
1
false
28,874,356
0
0
0
1
I am trying to successfully install and run Bloomberg API Python 3.5.5, and I have also downloaded and unpacked the C++ library 3.8.1.1, both for Mac OS X. I'm running Mac OS X 10.10.2. I am using the Python native to Mac OS X, Python 2.7.6, and I had already installed, via Xcode, the command-line gcc compiler, GCC 4.2.1. I ran, on an administrator account, sudo python setup.py install. I had also changed the setup.py environment variable BLPAPI_ROOT to the directory for the C++ headers, blpapi_cpp_3.8.1.1. The setup was successful. I changed to another directory as suggested by the Python README file, to avoid 'Import Error: No module named _internals'. When I go into python and enter the command import blpapi, I obtain the following error: import blpapi Traceback (most recent call last): File "", line 1, in File "/Library/Python/2.7/site-packages/blpapi/__init__.py", line 5, in from .internals import CorrelationId File "/Library/Python/2.7/site-packages/blpapi/internals.py", line 50, in _internals = swig_import_helper() File "/Library/Python/2.7/site-packages/blpapi/internals.py", line 46, in swig_import_helper _mod = imp.load_module('_internals', fp, pathname, description) ImportError: dlopen(/Library/Python/2.7/site-packages/blpapi/_internals.so, 2): Library not loaded: libblpapi3_64.so Referenced from: /Library/Python/2.7/site-packages/blpapi/_internals.so Reason: image not found I checked the directory /Library/Python.../blpapi/ and there is no _internals.so, only *.py files. Is that the problem? I don't know how to proceed.
Is it possible to use Celery to run an already compiled (py2exe) python script
28,878,537
0
0
94
0
python,python-2.7,celery,py2exe
if __name__ == '__main__': app.start() should be added to the entry point of the script.
0
1
0
0
2015-03-05T12:14:00.000
1
0
false
28,877,646
0
0
0
1
Is it possible to use Celery to run an already compiled (py2exe) python script? If yes, how can I invoke it?
How to open process again in linux terminal?
28,891,431
0
1
1,139
0
python,linux,ssh,terminal
Things to try: nohup or screen. Both let the process keep running after your SSH session disconnects; screen additionally lets you reattach later and see the output again.
0
1
0
1
2015-03-06T02:13:00.000
2
0
false
28,891,305
0
0
0
1
From my home pc using putty, I ssh'ed into a remote server, and I ran a python program that takes hours to complete, and as it runs it prints stuff. Now after a while, my internet disconnected, and I had to close and re-open putty and ssh back in. If I type 'top' I can see the python program running in the background with its PID number. Is there a command I can use to basically re-open that process and see it printing its stuff again? Thanks
How to create multiple entities dynamically in google app engine using google data storage(python)
29,608,781
1
0
165
0
python,google-app-engine,google-cloud-datastore
It's not recommended to dynamically create new tables; you need to redesign your database relation structure. For example, in a user messaging app, instead of making a new table for every new message (which contains the message and user name), you should create a User table and a Messages table separately and implement a many-to-one relation between the two tables.
0
1
0
0
2015-03-06T12:29:00.000
1
1.2
true
28,898,827
0
0
1
1
I wish to implement this: there should be an entity A with a column 1 having values a, b, c... (the list dynamically grows with user input), and there should be another entity B for each of the values a, b, c... How should I approach this problem? Should I dynamically generate the other entities as the user creates more values [a, b, c, d...]? If yes, how? Is there any other way to implement this?
Change hostname in mininet host
29,895,418
1
2
1,568
0
python,ubuntu,networking,hostname,mininet
I don't think you can get different names by running "hostname" on each host. Only networking-related commands produce different results on different hosts, because the hosts run in separate network namespaces. So perhaps one way to get a per-host name is to run ifconfig and interpret a name from the interfaces' names.
0
1
0
0
2015-03-06T16:44:00.000
2
0.099668
false
28,903,499
0
0
0
2
I need to emulate a network with n hosts connected by a switch. The perfect tool for this seems to be mininet. The problem is that I need to run a python script on every host that makes use of the hostname. The script acts differently depending on the hostname, so this is very important for me :) But the hostname seems to be the same on every host! Example: h1 hostname outputs "simon-pc"; h2 hostname outputs "simon-pc". "simon-pc" is the hostname of my "real" underlying ubuntu system. I can't find a way to change the hostname on a host. Is this even possible? And if yes, how? If no, why not? I read about mininet using one common kernel for every host. Might this be the problem?
Change hostname in mininet host
43,475,651
1
2
1,568
0
python,ubuntu,networking,hostname,mininet
I finally figured out how to do it. First you run the "ifconfig" command from inside the program and store its output in a variable; second, you use regular expressions (re) to grab the text you need. I use it to grab the addresses of my hosts; you can do the same for the hostname. Code:

    import re
    import subprocess

    def getip():
        # Each mininet host has its own network namespace, so this
        # output differs per host even though the kernel is shared.
        ifconfig_output = subprocess.check_output('ifconfig').decode()
        ip = re.search(r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}', ifconfig_output)
        return ip.group(0)

The first line gets the output from the terminal (it arrives as bytes, so decode() turns it into a utf-8 string), the second grabs the IP using a regular expression, and the return extracts the matched text from the match object.
0
1
0
0
2015-03-06T16:44:00.000
2
0.099668
false
28,903,499
0
0
0
2
I need to emulate a network with n hosts connected by a switch. The perfect tool for this seems to be mininet. The problem is that I need to run a python script on every host that makes use of the hostname. The script acts differently depending on the hostname, so this is very important for me :) But the hostname seems to be the same on every host! Example: h1 hostname outputs "simon-pc"; h2 hostname outputs "simon-pc". "simon-pc" is the hostname of my "real" underlying ubuntu system. I can't find a way to change the hostname on a host. Is this even possible? And if yes, how? If no, why not? I read about mininet using one common kernel for every host. Might this be the problem?
Bloomberg API SDK not compatible with Anaconda Python
28,947,897
4
1
1,122
0
python,api,anaconda,blpapi
This is not true. The Anaconda Python and Python extension modules are built using Visual Studio (2008 for Python 2 and 2010 for Python 3, the same as the Python installers from python.org).
0
1
0
0
2015-03-06T17:15:00.000
1
0.664037
false
28,904,066
1
0
0
1
I spent hours yesterday trying to get blpapi up and running and finally gave in and emailed their support; this is the response: "Unfortunately our BLPAPI SDKs are not compatible with the Anaconda distribution of Python. That Python is built using GCC, and it is not capable of loading DLLs that were built using Microsoft Visual Studio; our DLLs were built with MSVS. This means you'll need to use the Python distribution from Python.org, which is also built with MSVS." I cannot download the normal Python (from Python.org) due to security constraints, but for some reason I can use Anaconda. Honestly it's preferable for me anyhow, because I don't want to mess with having to download the 15 different packages I need afterwards. Does anybody have any idea if it is even possible to work around this? It seems ridiculous that Bloomberg would force you to use the plain distribution, and then make you go download all the packages you want individually, by keeping this incompatible with GCC builds.
Create executable that uses admin rights with Pyinstaller
35,067,170
17
5
7,070
0
python,pyinstaller
PyInstaller 3.0 includes the --uac-admin option!
0
1
0
0
2015-03-07T23:58:00.000
1
1
false
28,921,545
1
0
0
1
I need to create an executable for windows 8.1 and lower, so I tried Py2Exe (no success) and then PyInstaller, and damn it, it worked. Now I need it to run as admin (every time, since it performs admin tasks). My actual compile command looks like this: python pyinstaller.py --onefile --noconsole --icon=C:\Python27\my_distrib\ico_name --name=app_name C:\Python27\my_distrib\script.py Is there an option to use UAC or something like that? (it's a mess to me) Also, every time I start it on my laptop with windows 8.1 (my desktop computer is windows 7) it says it is dangerous... Anything to make it a "trusted" exe? Thanks in advance
-bash: /usr/bin/yum: /usr/bin/python: bad interpreter: Permission denied
37,664,878
-1
5
45,325
0
python,linux,centos
If you get -bash: /usr/bin/yum: /usr/bin/python: bad interpreter: Permission denied, then first remove python: sudo rpm -e python. Second, check the installed python package: sudo rpm -q python. Then install the package: sudo yum install python*. I think this solves the problem.
0
1
0
1
2015-03-08T05:24:00.000
5
-0.039979
false
28,923,393
0
0
0
3
I am new to CentOS. I am trying to build an application on it, and for my application I need to install python 2.7, but the default one on the server was python 2.6. So I tried to upgrade the version and accidentally deleted the folder /usr/bin/python. After that I installed python 2.7 through make install, created the folder /usr/bin/python again and ran sudo ln -s /usr/bin/python2.7 /usr/bin/python. After this, when I try to run yum commands I get the error -bash: /usr/bin/yum: /usr/bin/python: bad interpreter: Permission denied. drwxrwxrwx 2 root root 4096 Mar 8 00:19 python is the permission shown for the directory /usr/bin/python.
-bash: /usr/bin/yum: /usr/bin/python: bad interpreter: Permission denied
40,200,244
3
5
45,325
0
python,linux,centos
yum doesn't work with python2.7. You should do the following: vim /usr/bin/yum and change #!/usr/bin/python to #!/usr/bin/python2.6. If your python2.6 was deleted, reinstall it and point the shebang in /usr/bin/yum at your python2.6 binary.
0
1
0
1
2015-03-08T05:24:00.000
5
0.119427
false
28,923,393
0
0
0
3
I am new to CentOS. I am trying to build an application on it, and for my application I need to install python 2.7, but the default one on the server was python 2.6. So I tried to upgrade the version and accidentally deleted the folder /usr/bin/python. After that I installed python 2.7 through make install, created the folder /usr/bin/python again and ran sudo ln -s /usr/bin/python2.7 /usr/bin/python. After this, when I try to run yum commands I get the error -bash: /usr/bin/yum: /usr/bin/python: bad interpreter: Permission denied. drwxrwxrwx 2 root root 4096 Mar 8 00:19 python is the permission shown for the directory /usr/bin/python.
-bash: /usr/bin/yum: /usr/bin/python: bad interpreter: Permission denied
62,297,081
0
5
45,325
0
python,linux,centos
The problem is the shebang at the head of the yum file (e.g. #!/usr/local/bin/python2.6): it must point at the python2.6 binary file itself, not a directory.
0
1
0
1
2015-03-08T05:24:00.000
5
0
false
28,923,393
0
0
0
3
I am new to CentOS. I am trying to build an application on it, and for my application I need to install python 2.7, but the default one on the server was python 2.6. So I tried to upgrade the version and accidentally deleted the folder /usr/bin/python. After that I installed python 2.7 through make install, created the folder /usr/bin/python again and ran sudo ln -s /usr/bin/python2.7 /usr/bin/python. After this, when I try to run yum commands I get the error -bash: /usr/bin/yum: /usr/bin/python: bad interpreter: Permission denied. drwxrwxrwx 2 root root 4096 Mar 8 00:19 python is the permission shown for the directory /usr/bin/python.
distutils setup script under linux - permission issue
28,951,788
1
0
36
0
python,linux,distutils,setup.py
The immediate solution is to invoke setup.py with --prefix=/the/path/you/want. A better approach would be to include the data as package_data. This way they will be installed along side your python package and you'll find it much easier to manage it (find paths etc).
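A minimal sketch of the package_data approach (all names here are hypothetical):

    from distutils.core import setup

    setup(
        name='myapp',
        version='0.1',
        packages=['myapp'],
        package_data={'myapp': ['data/app.db']},  # installed next to the package
    )

At runtime the file can then be located relative to the package, e.g. os.path.join(os.path.dirname(__file__), 'data', 'app.db'), so no hard-coded /usr path is needed.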
0
1
0
1
2015-03-09T20:56:00.000
1
0.197375
false
28,951,588
0
0
0
1
So I created a setup.py script for my python program with distutils, and I think it behaves a bit strangely. First off, it installs all data_files into /usr/local/my_directory by default, which is a bit weird since this isn't a really common place to store data, is it? I changed the path to /usr/share/my_directory/. But now I'm not able to write to the database inside that directory, and I can't set the required permissions from within setup.py either, since the actual database file has not been created when I run it. Is my approach wrong? Should I use another tool for distributing? Because at least for Linux, writing a simple setup sh script seems easier to me at the moment.
REPL error with Sublime Text 3
29,251,327
3
2
15,816
0
python,sublimetext3,sublimerepl
I had the same problem when I installed REPL for the first time. It may sound odd, but the way to solve the problem (at least, the trick that worked for me) is to restart Sublime Text 3 once. Update: as pointed out by Mark in the comments, apparently you may have to restart Sublime more than once to solve the problem.
0
1
0
0
2015-03-09T21:41:00.000
2
0.291313
false
28,952,282
1
0
0
1
I'm using REPL with sublime text 3 (latest version as of today) and I'm coding in python 3.4. As far as I understand the documentation on REPL, if I do Tools > SublimeREPL > Python > Python - RUN current file, then it should run the code I have typed in using REPL. However when I do this I get an error pop-up saying: FileNotFoundError(2, 'The system cannot find the file specified.', None, 2) I get this error whatever the code I typed in is (I tried print("Hello World") on its own and also big long programs I've made before). Can someone please help me with this and explain what the problem is? Thanks :)
twisted run local shell commands with pipeline
31,752,497
0
1
700
0
python,twisted,twisted.internet
Use getProcessOutput('/bin/sh', ('-c', cmd)), where cmd is your shell command string; the shell then handles the pipes. Try it :-)
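Putting it together, a runnable sketch (the awk field is an arbitrary example):

    import sys
    from twisted.internet import reactor
    from twisted.internet.utils import getProcessOutput

    # Run the whole pipeline inside /bin/sh and print its output.
    cmd = "ps aux | grep 'some keyword' | awk '{print $11}'"
    d = getProcessOutput('/bin/sh', ('-c', cmd))
    d.addCallback(lambda out: sys.stdout.write(out))
    d.addErrback(lambda err: sys.stderr.write(str(err)))
    d.addBoth(lambda _: reactor.stop())
    reactor.run()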
0
1
0
0
2015-03-10T06:16:00.000
1
1.2
true
28,957,258
0
0
0
1
In twisted, the getProcessOutput method can get the output of the ps shell command via getProcessOutput('ps', 'aux') and returns a deferred. My question is how to run a command like ps aux | grep 'some keyword' | awk '{...}' with getProcessOutput, for example getProcessOutput("ps aux | grep 'some keyword' | awk '{...}'"). Any suggestions would be appreciated.
celery beat schedule: run task instantly when start celery beat?
30,854,981
-1
12
5,063
0
python,celery,celerybeat
The best idea is to create an implementation in which the task schedules itself again after completing. Also create an entrance lock so the task cannot be executed multiple times at once, and trigger the execution once at startup. In that case you don't need a celery beat process, and the task is guaranteed to execute.
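A sketch of such a self-rescheduling task (the work function is a hypothetical stand-in, and the lock is omitted for brevity):

    from celery import shared_task

    def do_work():
        pass  # hypothetical: the actual periodic job body

    @shared_task
    def daily_task():
        do_work()
        daily_task.apply_async(countdown=24 * 60 * 60)  # schedule the next run

    # Trigger exactly once when the system starts:
    #     daily_task.delay()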
0
1
0
0
2015-03-10T10:39:00.000
3
-0.066568
false
28,961,517
0
0
1
1
If I create a celery beat schedule using timedelta(days=1), the first task will be carried out after 24 hours. To quote the celery beat documentation: Using a timedelta for the schedule means the task will be sent in 30 second intervals (the first task will be sent 30 seconds after celery beat starts, and then every 30 seconds after the last run). But the fact is that in a lot of situations it's actually important that the scheduler run the task at launch, and I didn't find an option that allows me to run the task immediately after celery starts. Am I not reading carefully, or is celery missing this feature?
How to run one command on multiple terminals?
28,966,597
2
3
1,086
0
python,shell,terminal,command,conemu
Apps+G groups input for all visible panes.
0
1
0
0
2015-03-10T13:36:00.000
1
1.2
true
28,965,230
0
0
0
1
I am using the ConEmu windows emulator and I would like to run one simple command on multiple terminals at the same time. Is there any way to do that?
Open SSH connection on exit Python
28,971,821
0
1
1,186
0
python,ssh
If you want the python script to exit, I think your best bet is to continue doing something similar to what you're doing: print the connection details as arguments for the ssh command and run python myscript.py | xargs ssh. As tdelaney pointed out, though, subprocess.call(['ssh', args]) will run ssh as a child of your python process, causing python to exit when the connection is closed.
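A sketch of the subprocess.call variant (the user and host values are hypothetical stand-ins for whatever your selection logic produces):

    import subprocess

    user = 'alice'                    # hypothetical credentials
    host = 'best-machine.example.com' # hypothetical pick of the best machine

    # Run ssh as a child; the script blocks for the session
    # and exits when the connection closes.
    subprocess.call(['ssh', '%s@%s' % (user, host)])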
0
1
1
1
2015-03-10T18:17:00.000
3
1.2
true
28,971,180
0
0
0
1
I am writing a little script which picks the best machine out of a few dozen to connect to. It gets a users name and password, and then picks the best machine and gets a hostname. Right now all the script does is print the hostname. What I want is for the script to find a good machine, and open an ssh connection to it with the users provided credentials. So my question is how do I get the script to open the connection when it exits, so that when the user runs the script, it ends with an open ssh connection. I am using sshpass.
How can I point pip to VCForPython27 in order to prevent the "Unable to find vcvarsall.bat" error
36,624,566
2
1
3,358
0
python,pip
Use the command prompt shortcut provided by installing the MSI; it launches the prompt with vcvarsall.bat activated for the targeted environment. Depending on your installation, you can find it in the Start Menu under All Programs -> Microsoft Visual C++ for Python -> then pick the command prompt for x64 or x86. Otherwise, press the Windows key and search for "Microsoft Visual C++ for Python".
0
1
0
0
2015-03-11T06:21:00.000
3
0.132549
false
28,979,898
1
0
0
1
I downloaded Microsoft Visual C++ Compiler for Python 2.7 and it installed to C:\Users\user\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\vcvarsall.bat. However, I am getting the "Unable to find vcvarsall.bat" error when attempting to install "MySQL-python". I added C:\Users\user\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0; to my Path. I am using python 2.7.8.
how to install python packages for brew installed pythons
29,003,811
10
8
12,370
0
python,homebrew
Use pip3. The "caveats" text you see when you run brew info python3 was printed for you after python3 was installed; that text is frequently helpful! It reads: You can install Python packages with pip3 install <package> They will install into the site-package directory /usr/local/lib/python3.4/site-packages
0
1
0
0
2015-03-11T15:24:00.000
1
1
false
28,990,639
1
0
0
1
I just finished installing the latest stable version of python via Homebrew. $ brew install python3 Everything works fine. I would like to install packages, for example PyMongo. I don't have pip. $ pip -bash: pip: command not found and there is no Homebrew formula for it: $ brew install PyMongo brew install PyMongo Error: No available formula for pymongo Searching formulae... Searching taps... Any idea what's the best way to install PyMongo on OS X when Python was installed via Homebrew? Thank you!
Close port on killing python the process with children
28,993,604
0
0
352
0
python,port,popen,kill
OK, I've got it: it seems like passing close_fds=True to Popen solves the issue.
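A small sketch of that fix: with close_fds=True the child does not inherit the parent's open file descriptors (including the listening socket), so killing the parent frees the port. The command is a placeholder.

```python
import subprocess

# The child won't hold a copy of the parent's listening socket open.
child = subprocess.Popen(['sleep', '3600'], close_fds=True)
```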
0
1
0
0
2015-03-11T17:34:00.000
1
0
false
28,993,486
0
0
0
1
When my python program is killed with -9, normally it also closes the port it's listening on. BUT when it has some child processes running, created with Popen (which I don't really need to kill when killing the parent), killing -9 the parent seems to leave the port in use. How can I force the port to close even if there are children?
Python igraph for windows 64bit
29,264,918
1
0
852
0
python,igraph,python-3.4
As suggested by @Tamas, you should download wheel packages from the link and use pip to install them.
0
1
0
0
2015-03-11T21:31:00.000
2
0.099668
false
28,997,764
1
0
0
1
I am looking for the python-igraph package for Windows 64-bit. I have installed python 3.4 and it seems that I cannot find a proper igraph installation package for it. I have crawled all the webpages and still could not find what I am looking for. Can anyone help me please? Thanks
How to run a shell script placed in different folder from python
29,032,085
0
2
1,696
0
python
You could use an absolute, as opposed to relative, file path to your script.
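A hedged sketch of the fix (the path below is illustrative): pass the command as separate list elements, since a single string like 'sudo ./run.sh' makes subprocess look for an executable literally named that.

```python
import subprocess

# Each argument is its own list element, and the script path is absolute.
subprocess.call(['sudo', '/home/user/order_fc_prioritizer/run.sh'])
```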
0
1
0
0
2015-03-13T12:17:00.000
3
0
false
29,032,051
0
0
0
1
I am using subprocess.call like below: subprocess.call(['sudo ./order_fc_prioritizer/run.sh']) But it's saying no such file or directory
How to point LLVM_CONFIG environment variable to the path for llvm-config
29,104,989
5
9
23,428
0
python,osx-mountain-lion,numba
Ok, I needed to install llvm first. My problem was that I was installing LLVMLITE not LLVM. So brew install llvm and then locating llvm-config in the Cellar directory solved my problem.
0
1
0
0
2015-03-13T20:26:00.000
2
0.462117
false
29,041,356
1
0
0
1
I am trying to install numba on OS X Mountain Lion. I tried the pip install route but it didn't work, so I downloaded the zip files from the GIT repositories. When trying to install numba I realized that I need LLVM first. I downloaded and unpacked llvm into the Downloads folder. The README instructions are: "If your LLVM is installed in a non-standard location, first point the LLVM_CONFIG environment variable to the path of the corresponding llvm-config executable."; a message compatible with the RuntimeError I get when running the python setup.py install command. My problem is that I don't understand what to do in order to make the LLVM_CONFIG environment variable point to the corresponding llvm-config executable. Any help? Thanks
Cron job on google cloud managed virtual machine
29,050,842
1
0
1,243
1
python,google-app-engine,cron,virtual-machine,google-compute-engine
The finest resolution of a cron job is 1 minute, so you cannot run a cron job once every 10 seconds. In your place, I'd run a Python script that starts a new thread every 10 seconds to do your MySQL work, accompanied by a cron job that runs every minute. If the cron job finds that the Python script is not running, it restarts it (i.e., the crontab line would look like * * * * * /command/to/restart/Python/script). Worst-case scenario you'd miss 5 runs of your MySQL worker threads (a 50 seconds' duration).
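A rough sketch of the long-running script described above; do_mysql_work is a stand-in for the API-querying and MySQL-writing code.

```python
import threading
import time


def do_mysql_work():
    pass  # query the web APIs and write the results to MySQL


# Spawn a worker thread every 10 seconds; a cron job restarts this
# script if it dies.
while True:
    threading.Thread(target=do_mysql_work).start()
    time.sleep(10)
```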
0
1
0
0
2015-03-14T01:06:00.000
2
1.2
true
29,044,322
0
0
1
1
I have a python script that queries some data from several web APIs and after some processing writes it to MySQL. This process must be repeated every 10 seconds. The data needs to be available to Google Compute instances that read MySQL and perform CPU-intensive work. For this workflow I thought about using GCloud SQL and running GAppEngine to query the data. NOTE: The python script does not run on GAE directly (imports pandas, scipy) but should run on a properly setup App Engine Managed VM. Finally the question: is it possible and would it be reasonable to schedule a cron job on a GApp Managed VM to run a command invoking my data collection script every 10 seconds? Any alternatives to this approach?
Is there another way, other than the "depends_on" list, for a Property to become a dependency of another Property?
31,955,940
0
2
154
0
python,enthought
I'm not sure this directly answers your question, but you may get what you're after by using the @cached_property decorator to reduce the number of times the property is computed. I think there may be elements of "push" and "pull" style computations with properties.
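A hedged Traits example of that idea: with @cached_property the getter only reruns when a listed dependency changes, not on every access. The class and names are illustrative.

```python
from traits.api import HasTraits, Float, Property, cached_property


class Circle(HasTraits):
    radius = Float(1.0)
    # Recomputed only when 'radius' changes, and otherwise served
    # from the cache.
    area = Property(depends_on='radius')

    @cached_property
    def _get_area(self):
        return 3.14159 * self.radius ** 2
```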
0
1
0
0
2015-03-16T01:00:00.000
1
0
false
29,068,146
0
0
0
1
In the Enthought Traits/UI system, is there another way, other than being included in another Property's depends_on list, that a Property can become a dependency of another Property? I have a HasTraits subclass, which has a property, chnl_h, which doesn't appear in any other Property's depends_on list, but is behaving as if it were a dependency of another Property. That is, it is recalculating its value, whenever one of its dependencies changes value, as opposed to only when its value is actually requested. Thanks! -db
Python 3 can't find homebrew pyqt installation
29,091,698
2
0
562
0
python,homebrew
brew reinstall pyqt --with-python3 will get you sorted!
1
1
0
0
2015-03-16T03:47:00.000
2
0.197375
false
29,069,364
1
0
0
1
I recently used homebrew to install pyqt (along with qt & sip), but get an import error whenever I try to import PyQt4 in Python 3 (which was also installed using homebrew). To confuse matters more, I am able to import PyQt4 on Python 2 via the terminal. I'm totally new to working with Python packages and, with that, totally confused. Any thoughts on how I might be able to undo what I did and reinstall so that I can access PyQt via the usr/local/python3 installation? Thanks in advance!
Spin up VM using Ansible without Vagrant
29,079,838
2
1
561
0
python,ansible,kvm,libvirt
Of course - if you have SSH access to it. Yes, you can run Ansible using its Python API or through a command-line call. About passing a YAML file - also yes.
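A minimal sketch of the command-line route from Python; the playbook (which would contain the virt module tasks) and inventory paths are placeholders.

```python
import subprocess

# Run a playbook that uses the virt module to define/start the VM;
# raises CalledProcessError if ansible-playbook exits non-zero.
subprocess.check_call(
    ['ansible-playbook', '-i', 'hosts.ini', 'spin_up_vm.yml'])
```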
0
1
0
0
2015-03-16T14:45:00.000
1
0.379949
false
29,079,698
1
0
0
1
I have a specific requirement where I can use only Ansible on my host machine without vagrant. Two questions associated with it: Is it possible to spin up a VM on the host machine with libvirt/KVM as hypervisor using ansible? I know there is a module called virt in ansible which is capable of doing this, but I couldn't find any real example of how to use it. I'd appreciate it if someone can point me to an example YAML through which I can spin up a VM. With Ansible, is it possible to run my playbook from python code? If I am not wrong there is a python API supported by Ansible. But is it possible to give a YAML file as input to this API which executes tasks from the YAML?
How do I fix my anaconda python distribution?
37,975,575
-1
9
14,137
0
python,anaconda
I had a similar problem - was able to use conda from an anaconda prompt (found in the anaconda folder) and install packages I needed
0
1
0
0
2015-03-16T23:51:00.000
3
-0.066568
false
29,088,972
1
0
0
2
All seemed to be working fine in my anaconda distribution on Mac. Then I tried to install the postgres library psycopg2 with conda install psycopg2. That threw an error. Something about permissions. But now nothing works. Now it can't even find the conda executable or start ipython. -bash: conda: command not found Should the conda executable be in ~/anaconda/bin? The directory is there but no conda executable. Anyone know what might have happened or how I can recover from this?
How do I fix my anaconda python distribution?
29,105,951
5
9
14,137
0
python,anaconda
You're going to have to reinstall Anaconda to fix this. Without conda, there's not much you can do to clean up the broken install.
0
1
0
0
2015-03-16T23:51:00.000
3
0.321513
false
29,088,972
1
0
0
2
All seemed to be working fine in my anaconda distribution on Mac. Then I tried to install the postgres library psycopg2 with conda install psycopg2. That threw an error. Something about permissions. But now nothing works. Now it can't even find the conda executable or start ipython. -bash: conda: command not found Should the conda executable be in ~/anaconda/bin? The directory is there but no conda executable. Anyone know what might have happened or how I can recover from this?
Python: What happens when main process is terminated.
39,339,970
0
1
780
0
python,python-multiprocessing
You can just run your program and see if there are Python processes alive after the main process has terminated. The correct way to terminate your program is to make sure all the subprocesses have terminated before the main process ends. (Try to use the Process.terminate() and Process.join() methods on all subprocesses before the main process terminates.)
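A short sketch of that shutdown order; the worker body is a placeholder for real work.

```python
import multiprocessing


def worker():
    while True:
        pass  # stand-in for the actual work loop


if __name__ == '__main__':
    procs = [multiprocessing.Process(target=worker) for _ in range(4)]
    for p in procs:
        p.start()
    # ... main work happens here ...
    # Terminate and reap every child before the parent exits, so no
    # orphaned processes (and their memory) are left behind.
    for p in procs:
        p.terminate()
        p.join()
```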
0
1
0
0
2015-03-17T01:16:00.000
2
0
false
29,089,753
1
0
0
1
I am working with the multiprocessing module on a Unix system. I have noticed memory leaks when I terminate one of my programs. I was thinking that this might be because the processes that were started in the main process kept running. Is this correct?
Is there a difference between RotatingFileHandler and logrotate.d + WatchedFileHandler for Python log rotation?
29,104,883
3
12
4,420
0
python,logrotate,log-rotation
RotatingFileHandler allows a log file to grow up to size N, and then immediately and automatically rotates to a new file. logrotate.d runs once per day usually. If you want to limit a log file's size, logrotate.d is not the most helpful because it only runs periodically.
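A minimal sketch of the size-based rotation RotatingFileHandler gives you (limits here are illustrative): rotate as soon as the file reaches ~1 MB, keeping five backups, with no dependence on any logrotate.d schedule.

```python
import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger('app')
logger.setLevel(logging.INFO)
# Rolls over to app.log.1 ... app.log.5 the moment app.log hits 1 MB.
handler = RotatingFileHandler('app.log',
                              maxBytes=1024 * 1024,
                              backupCount=5)
logger.addHandler(handler)
logger.info('logging with automatic size-based rotation')
```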
0
1
0
0
2015-03-17T16:34:00.000
2
0.291313
false
29,104,675
0
0
0
1
Python has its own RotatingFileHandler which is supposed to automatically rotate log files. As part of a linux application which would need to rotate its log file every couple of weeks/months, I am wondering if it is any different from having a config file in logrotate.d and using a WatchedFileHandler instead. Is there any difference in how they operate? Is one method safer, more efficient, or considered superior to the other?
Chromium build gclient runhooks error number 13
29,161,669
0
0
769
0
python,build,permissions,chromium
Actually the directory was not mounted with execution permission. So I remounted the directory with execution permission using mount -o exec /dev/sda5 /media/usrname and it worked fine.
0
1
1
0
2015-03-17T17:22:00.000
2
1.2
true
29,105,684
0
0
0
1
I am getting the following error while running gclient runhooks for building chromium. running '/usr/bin/python src/tools/clang/scripts/update.py --if-needed' in '/media/usrname/!!ChiLL out!!' Traceback (most recent call last): File "src/tools/clang/scripts/update.py", line 283, in sys.exit(main()) File "src/tools/clang/scripts/update.py", line 269, in main stderr=os.fdopen(os.dup(sys.stdin.fileno()))) File "/usr/lib/python2.7/subprocess.py", line 522, in call return Popen(*popenargs, **kwargs).wait() File "/usr/lib/python2.7/subprocess.py", line 710, in init errread, errwrite) File "/usr/lib/python2.7/subprocess.py", line 1327, in _execute_child raise child_exception OSError: [Errno 13] Permission denied Error: Command /usr/bin/python src/tools/clang/scripts/update.py --if-needed returned non-zero exit status 1 in /media/usrname/!!ChiLL out!! In order to get permission of the directory "/usr/bin/python src/tools/clang/scripts" I tried chown and chmod but it returned the same error.
Docker not run in debian wheezy
29,106,853
0
0
72
0
java,javascript,android,python,ios
Add #!/bin/bash or #!/usr/bin/env bash as the very first line of the script that you're executing.
0
1
0
0
2015-03-17T18:06:00.000
1
0
false
29,106,586
0
0
0
1
I installed Docker on Debian Wheezy 64-bit and when I try to run it these errors are displayed: /usr/local/bin/docker: line 1: --2015-03-17: command not found /usr/local/bin/docker: line 2: syntax error near unexpected token (' /usr/local/bin/docker: line 2:Resolving get.docker.io (get.docker.io)...162.242.195.84' How do I solve this problem? Thanks
Python packaging for hive/hadoop streaming
34,297,373
0
0
496
0
python,hadoop,mapreduce,hadoop-streaming
This may be done by packaging the dependencies and the reducer script in a zip, and adding this zip as a resource in Hive. Let's say the Python reducer script depends on package D1, which in turn depends on D2 (thus resolving OP's query on transitive dependencies), and both D1 and D2 are not installed on any machine in the cluster. Package D1, D2, and the Python reducer script (let's call it reducer.py) in, say, dep.zip Use this zip like in the following sample query: ADD ARCHIVE dep.zip; FROM (some_table) t1 INSERT OVERWRITE TABLE t2 REDUCE t1.col1, t1.col2 USING 'python dep.zip/dep/reducer.py' AS output; Notice the first and the last line. Hive unzips the archive and creates these directories. The dep directory will hold the script and dependencies.
0
1
0
0
2015-03-17T19:28:00.000
1
0
false
29,108,020
0
0
0
1
I have a hive query with custom mapper and reducer written in python. The mapper and reducer modules depend on some 3rd party modules/packages which are not installed on my cluster (installing them on the cluster is not an option). I realized this problem only after running the hive query when it failed saying that the xyz module was not found. How do I package the whole thing so that I have all the dependencies (including transitive dependencies) available in my streaming job? How do I use such a packaging and import modules in my mapper and reducer? The question is rather naive but I could not find an answer even after an hour of googling. Also, it's not just specific to hive but holds for hadoop streaming jobs in general when mapper/reducer is written in python.
Is there any benefit to using python2.7 multiprocessing to copy files
29,111,506
1
0
53
0
python,multiprocessing
I/O goes to the system cache in RAM before hitting a hard drive. For writes, you may find the copies are fast until you exhaust RAM and then slow down, and that multiple reads of the same data are fast. If you copy the same file to several places, there is an advantage to doing the copies of that file before moving to the next. I/O to a single hard drive (or group of hard drives joined with a RAID or volume manager) is mostly serial, except that the operating system and drive may reorder operations to read/write nearby tracks before seeking for tracks that are further away. There is some advantage to doing parallel copies because there are more opportunities to reorder, but since you are really writing from the system RAM cache sometime after your application writes, the benefits may be hard to measure. There is a greater benefit moving between drives. Those go mostly in parallel, although there is some contention for the buses (e.g., PCIe, SATA) that run the drives. If you have a lot of files to copy, multiprocessing is a reasonable way to go, but you may find that a subprocess call to the native copy utilities is faster.
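A hedged sketch of the multiprocessing route, for comparison; the source/destination pairs are illustrative and must exist, and any speedup depends heavily on the hardware involved.

```python
import shutil
from multiprocessing import Pool


def copy(pair):
    src, dst = pair
    shutil.copy(src, dst)  # one copy job per worker invocation


if __name__ == '__main__':
    jobs = [('a.bin', '/mnt/disk2/a.bin'),
            ('b.bin', '/mnt/disk2/b.bin')]
    # Four workers pull copy jobs off the list in parallel.
    Pool(4).map(copy, jobs)
```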
0
1
0
0
2015-03-17T23:09:00.000
1
0.197375
false
29,111,407
1
0
0
1
I would like to know if there is any benefit to using python2.7's multiprocessing module to asynchronously copy files from one folder to another. Is disk I/O always forced to be serial? Does this change if you are copying from one hard disk to a different hard disk? Does this change depending on operating system (Windows / Linux)? Perhaps it is possible to read in parallel, but not possible to write? This is all assuming that the files being moved/copied are different files going to different locations.
Do I need to use Tornado Futures with Motorengine?
29,156,089
0
0
113
0
python,mongodb,tornado
You need to understand how Tornado works asynchronously. Every time you yield a Future object, Tornado suspends the current coroutine and jumps to the next coroutine. Whether to do queries synchronously or asynchronously depends on the situation. If your query is fast enough, you can use a synchronous driver. Also, keep in mind that jumping between coroutines has a cost, too. If the query is not fast enough, you might consider making asynchronous calls.
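A rough sketch of that suspend-on-yield behaviour. This uses the motor driver purely for illustration (the question mentions motorengine, whose API differs); the database and query are placeholders.

```python
import motor
from tornado import gen, ioloop

db = motor.MotorClient().testdb  # hypothetical database


@gen.coroutine
def fetch_user():
    # Tornado suspends this coroutine at the yield and serves other
    # work until the query's Future resolves; nothing blocks.
    doc = yield db.users.find_one({'name': 'bob'})
    raise gen.Return(doc)


if __name__ == '__main__':
    print(ioloop.IOLoop.current().run_sync(fetch_user))
```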
0
1
0
0
2015-03-18T14:16:00.000
1
0
false
29,124,446
0
0
0
1
Basically, what is a Future in Tornado's approach? I've read on some stackoverflow threads that a tornado coroutine must return a Future, but if it returns a Future, how do my db queries work? Using Futures, will my Tornado app wait for the query to return anything like blocking I/O, or will it just dispatch the request and change the context until the query returns? And what about this Motorengine solution? Do I need to use Futures or just make the queries?
How to feed information to a Python daemon?
30,587,524
4
9
2,615
0
python,linux,queue,pipe,fifo
There are several options 1) If the daemon should accept messages from other systems, make the daemon an RPC server - Use xmlrpc/jsonrpc. 2) If it is all local, you can use either TCP sockets or Named PIPEs. 3) If there will be a huge set of clients connecting concurrently, you can use select.epoll.
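A minimal sketch of option 2 with a named pipe (FIFO); the pipe path and log file are illustrative. Note that opening the FIFO blocks until a writer connects, and the loop ends when the writer closes its end.

```python
import os

PATH = '/tmp/hello.fifo'  # illustrative pipe location
if not os.path.exists(PATH):
    os.mkfifo(PATH)

# Writers feed names with e.g.:  echo Bob > /tmp/hello.fifo
with open(PATH) as fifo, open('greetings.log', 'a') as log:
    for line in fifo:
        log.write('Hello %s.\n' % line.strip())
```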
0
1
0
0
2015-03-18T16:22:00.000
8
0.099668
false
29,127,341
0
0
0
2
I have a Python daemon running on a Linux system. I would like to feed information such as "Bob", "Alice", etc. and have the daemon print "Hello Bob." and "Hello Alice" to a file. This has to be asynchronous. The Python daemon has to wait for information and print it whenever it receives something. What would be the best way to achieve this? I was thinking about a named pipe or the Queue library but there could be better solutions.
How to feed information to a Python daemon?
30,565,140
0
9
2,615
0
python,linux,queue,pipe,fifo
Why not use signals? I am not a Python programmer but presumably you can register a signal handler within your daemon and then signal it from the terminal. Just use SIGUSR1 or SIGHUP or similar. This is the usual method you use to rotate logfiles or similar.
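A hedged sketch of the signal approach: the daemon re-reads an input file (names.txt, a placeholder that must exist) whenever it receives SIGUSR1, sent from the terminal with kill -USR1 <pid>.

```python
import signal
import time


def reload_names(signum, frame):
    # Runs whenever the process receives SIGUSR1.
    with open('names.txt') as f:
        for name in f:
            print('Hello %s.' % name.strip())


signal.signal(signal.SIGUSR1, reload_names)
while True:
    time.sleep(1)  # the daemon idles until it is signalled
```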
0
1
0
0
2015-03-18T16:22:00.000
8
0
false
29,127,341
0
0
0
2
I have a Python daemon running on a Linux system. I would like to feed information such as "Bob", "Alice", etc. and have the daemon print "Hello Bob." and "Hello Alice" to a file. This has to be asynchronous. The Python daemon has to wait for information and print it whenever it receives something. What would be the best way to achieve this? I was thinking about a named pipe or the Queue library but there could be better solutions.
simplest way to make two python scripts talk to each other?
29,135,637
1
1
1,235
0
python
If I were a beginner, I would have my remote script periodically check the value of the variable in a text file. When I needed to update the variable, I would just ssh to my remote machine and update the text file.
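A beginner-friendly sketch of that polling idea; value.txt is a placeholder file that you edit over ssh, and the 5-second interval is arbitrary.

```python
import time

while True:
    # Re-read the variable from the file each cycle.
    with open('value.txt') as f:
        value = f.read().strip()
    # ... use value in the rest of the script ...
    time.sleep(5)
```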
0
1
0
0
2015-03-18T22:12:00.000
2
0.099668
false
29,133,682
1
0
0
1
I have a python script running on a vps. Now I just want to change 1 variable in the running script using my desktop computer. What is the simplest way to do that for a beginner?
Why do people create virtualenv in a docker container?
53,656,409
21
27
11,721
0
python,docker,virtualenv
Here are my two cents, or rather comments on @gru's answer and some of the comments. Neither Docker nor virtual environments are virtual machines. Every line in your Dockerfile produces overhead, but it's true that at runtime virtual environments have zero impact. The idea of Docker containers is that you have one process which interacts with other (Docker) services in a client-server relationship. Running different apps in one container, or calling one app from another inside a container, works against that idea. More importantly, it adds complexity to your Docker setup, which you want to avoid. "Isolating" the python packages that the app sees (inside a virtual environment) from the packages installed in the image is only necessary if you need to guarantee a certain version for one or more packages. The system installed inside the container only serves as an environment for the one app that you are running. Adjust it to the requirements of your app; there is no need to leave it "untouched". So in conclusion: there is no good reason for using a virtual environment inside a container. Install whatever packages you need on the system. If you need control over the exact package versions, install them (container-wide) with pip or alike. If you think that you need to run different apps with different package versions inside a single container, take a step back and rethink your design. You are heading towards more complexity, more difficult maintenance and more headache. Split the work/services up into several containers.
0
1
0
0
2015-03-19T14:08:00.000
3
1
false
29,146,792
1
0
0
2
You can build a container with a Dockerfile in a few seconds. Then why do people need to install a virtual environment inside the docker container? It's like a "virtual machine" in a virtual machine?
Why do people create virtualenv in a docker container?
33,150,800
30
27
11,721
0
python,docker,virtualenv
I am working with virtualenvs in Docker and I think there are several reasons: you may want to isolate your app from system's python packages you may want to run a custom version of python but still keep the system's packages untouched you may need fine grain control on the packages installed for a specific app you may need to run multiple apps with different requirements I think these are all reasonably good reasons to add a little pip install virtualenv at the end of the installation! :)
0
1
0
0
2015-03-19T14:08:00.000
3
1.2
true
29,146,792
1
0
0
2
You can build a container with a Dockerfile in a few seconds. Then why do people need to install a virtual environment inside the docker container? It's like a "virtual machine" in a virtual machine?
MPI_Sendrecv with operation on recvbuf?
29,256,366
1
2
127
0
python,c,mpi,mpi4py
MPI_Recvreduce is what you're looking for. Unfortunately, it doesn't exist yet. It's something that the MPI Forum has been looking at adding to a future version of the standard, but hasn't yet been adopted and won't be in the upcoming MPI 3.1.
0
1
0
0
2015-03-19T23:50:00.000
1
1.2
true
29,157,039
0
1
0
1
I use the MPI_Sendrecv MPI function to communicate arrays of data between processes. I do this in Python using mpi4py, but I'm pretty sure my question is independent of the language used. What I really want is to add an array residing on another process to an existing local array. This should be done for all processes, so I use the MPI_Sendrecv function to send and receive the arrays in one go. I can then add the received array in the recvbuf to the local array and I'm done. It would be nice however if I could save the step of having a separate recvbuf array, and simply receiving the data directly into the local array without overwriting the existing data, but rather updating it using some operation (addition in my case). I guess what I'm looking for is a combined MPI_Sendrecv/MPI_Reduce function. Do some function like this exist in MPI?
Orientation error for images uploaded to GAE (GCS + get_serving_url)
29,222,762
-1
1
218
0
google-app-engine,google-cloud-storage,google-app-engine-python
I think it happens because when the get_serving_url service resizes the image, it always resizes from the longest side of the image, keeping the aspect ratio the same. If you have an image of 1600x2400, then the resized image is 106x160 to keep the aspect ratio the same. In your case one of the images is 306x408 (which is correct), as the image was resized from the height, and the other image is 360x270 (in which the orientation changed), where the image was resized from the width. I think in the latter the orientation is changed just to keep the aspect ratio the same.
0
1
0
0
2015-03-23T04:31:00.000
1
-0.197375
false
29,203,302
0
0
1
1
We are developing an image sharing service using GAE. Many users have reported since last week that "portrait images are oriented in landscape". We found out that from a specific time, the specification of images uploaded and distributed through GAE has changed. So the specs seem to have changed around 3/18 03:25 (UTC). The "orientation" field of the Exif data is not properly applied. We are using GAE/Python. We save images uploaded by the users to GoogleCloudStorage, then use the URL we get with get_serving_url to distribute them. Is this problem temporary? Also, is it possible to return to the specs before 3/18 03:22 (UTC)?
Where to store Dockerized python application configuration
29,226,897
1
1
58
0
python,configuration,docker
I guess that will very much depend. It might be useful to distinguish between two types of configuration: the one which define the way the container (application code contained) functions and the one which defines infrastructure (db credentials, collaborators endpoints, etc.). The functional configuration would more naturally be a part of the image, as often you would like to minimize the variation in the behavior of the resulting containers. The infrastructure configuration on the other hand has to be specified at the run time for a particular instance (container). The more docker way is to use environmental variables, but at the end it can be anything that suits your needs.
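A small sketch of the environment-variable style for the infrastructure configuration; the variable names and default are illustrative, set at run time with e.g. docker run -e DB_HOST=db.internal ...

```python
import os

# Fall back to a sensible default when the variable isn't set ...
DB_HOST = os.environ.get('DB_HOST', 'localhost')
# ... or fail fast on a value that must be provided per instance.
DB_PASSWORD = os.environ['DB_PASSWORD']
```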
0
1
0
0
2015-03-23T21:27:00.000
1
1.2
true
29,220,747
1
0
0
1
Much like a data volume, the configuration for a python app should persist across changes in the app container. A file in a separate data container? A database in a separate data container? I realize there are multiple ways to store the configuration information. But what patterns are being used in today's Dockerized web apps?
How to install dependencies in OpenShift?
29,236,634
0
0
461
0
python,openshift
You can not use the "yum" command to install packages on OpenShift. What specific issue are you having? I am sure that at least some of those packages are already installed in OpenShift online already (such as wget). Have you tried running your project to see what specific errors you get about what is missing?
0
1
0
1
2015-03-23T22:44:00.000
2
0
false
29,221,836
0
0
0
1
Hello, I want to install these dependencies in OpenShift for my app: yum -y install wget gcc zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel gdbm-devel libffi-devel libxslt libxslt-devel libxml2 libxml2-devel openldap-devel libjpeg-turbo-devel openjpeg-devel libtiff-devel libyaml-devel python-virtualenv git libpng12 libXext xorg-x11-font-utils But I don't know how. Is it through rhc? If so, how?
Impossible to set python.exe to *.py scripts on Win7
36,078,931
3
3
1,995
0
python,windows,python-2.7
Here is another check to make, which helped me figure out what was going on. I switched from the 32bit Anaconda to the 64bit version. I deinstalled, downloaded then reinstalled, but several things didn't get cleaned up properly (quick launch stuff, and some registry keys). The problem on my side was that the default installation path changed, from C:\Anaconda to C:\Anaconda2. I first tried the assoc and ftype tricks, everything was fine there. However, the HKEY_CLASSES_ROOT\Applications\python.exe\shell\open\command registry key was pointing to the old Anaconda path. As soon as I fixed this, python.exe showed up when I tried associating with "Open with" and everything went back to normal. I also added the %* at the end in the registry key.
0
1
0
1
2015-03-24T09:42:00.000
5
0.119427
false
29,229,273
0
0
0
2
I've installed Python 2.7 (64-bit) on my PC with Win7 (64-bit) without problems, but I'm not able to run *.py scripts via the DOS shell without declaring the full python path. Let me explain better: If I type D:\ myscript.py it doesn't work; the script is opened with WordPad. If I type D:\ C:\Python27 myscript.py it works and runs correctly. I tried to change the default application for *.py files via the Win7 GUI (control panel etc.) but without success. Python is not present in the list of available software, and in any case even with the manual setting I'm not able to associate python.exe with *.py files. I've checked my environment variables but I've not found a problem (the python path is declared in Path = C:\Python27\;C:\Python27\Scripts). I've also tried to modify HKEY_CLASSES_ROOT->Applications->python.exe->shell->open->command: old registry value "C:\Python27\python.exe" "%1" new registry value "C:\Python27\python.exe" "%1" %* without success. Any suggestion? Thanks
Impossible to set python.exe to *.py scripts on Win7
56,543,680
0
3
1,995
0
python,windows,python-2.7
@slv 's answer is good and helped me a bit with solving this problem. Anyhow, since I had previous installations of Python before this error occured for me, I might have to add something to this. One of the main problems hereby was that the directory of my python-installation changed. So, I opened regedit.exe and followed these to steps: I searched the entire registry for .py, .pyw, .pyx and .pyc (hopefully I did not forget to mention any here). Then, I radically deleted all occurrences I could find. I searched the entire registry for my old python-installation-path (e.g. C:\Users\Desktop\Anaconda3). Then I replaced this path with my new installation path (e.g. C:\Users\Desktop\Miniconda3). Thereby, I also came across and replaced HKEY_CLASSES_ROOT\Applications\python.exe\shell\open\command which @slv mentioned. Afterwards, it was possible again to connect a .py-file from the Open with...-menu with my python.exe.
0
1
0
1
2015-03-24T09:42:00.000
5
0
false
29,229,273
0
0
0
2
I've installed Python 2.7 (64-bit) on my PC with Win7 (64-bit) without problems, but I'm not able to run *.py scripts via the DOS shell without declaring the full python path. Let me explain better: If I type D:\ myscript.py it doesn't work; the script is opened with WordPad. If I type D:\ C:\Python27 myscript.py it works and runs correctly. I tried to change the default application for *.py files via the Win7 GUI (control panel etc.) but without success. Python is not present in the list of available software, and in any case even with the manual setting I'm not able to associate python.exe with *.py files. I've checked my environment variables but I've not found a problem (the python path is declared in Path = C:\Python27\;C:\Python27\Scripts). I've also tried to modify HKEY_CLASSES_ROOT->Applications->python.exe->shell->open->command: old registry value "C:\Python27\python.exe" "%1" new registry value "C:\Python27\python.exe" "%1" %* without success. Any suggestion? Thanks
How to write a stress test script for asynchronous processes in python
29,233,194
0
0
988
0
python,django,web-applications,stress-testing,djcelery
I think there is no need to write your own stress testing script. I have used www.blitz.io for stress testing. It is set up in minutes, easy to use and it makes beautiful graphs. It has a 14 day trial so you just can test the heck out of your system for 14 days for free. This should be enough to find all your bottlenecks.
0
1
0
0
2015-03-24T10:36:00.000
1
0
false
29,230,333
0
0
1
1
I have a web application running on django, wherein an end user can enter a URL to process. All the processing tasks are offloaded to a celery queue which sends notification to the user when the task is completed. I need to stress test this app with the goals. to determine breaking points or safe usage limits to confirm intended specifications are being met to determine modes of failure (how exactly a system fails) to test stable operation of a part or system outside standard usage How do I go about writing my script in Python, given the fact that I also need to take the offloaded celery tasks into account as well.
Create Kivy package for Windows from OSX
29,239,289
1
1
327
0
python,linux,windows,macos,kivy
Your best bet will be to setup a VM for Windows and a VM for Linux, then create the packages for each OS within those VMs. You might be able to use pyinstaller with Wine to make the Windows package directly on OSX - I have read that this can be done on Linux, so in theory it could work on OSX. But you will probably get better results using a VM where you can also test the package and make sure it installs and runs properly.
0
1
0
0
2015-03-24T17:21:00.000
1
1.2
true
29,239,033
1
0
0
1
I've packaged a Kivy program that I made on OSX, but I also want to be able to distribute it for Windows and Linux. Is there currently a way of creating a Windows or Linux package from OSX? Or is there an ETA on when Buildozer will be able to create Windows packages?
How to create a virtualenv with Python3 while my system is OsX 10 with Python2
29,287,001
2
0
62
0
python,macos,virtualenvwrapper
You can use the -p arg like so -- virtualenv -p $(which python3)
0
1
0
0
2015-03-24T18:08:00.000
1
1.2
true
29,239,877
1
0
0
1
My machine is OS X 10.10.2 and I have Python 2 installed. Is it possible to create a virtualenv using the virtualenvwrapper command mkvirtualenv to run on Python 3? I am reluctant to install Python 3 on my system, as the last time I did that, python on the whole stopped working. Not sure why; I am new. Maybe I screwed up. I am looking for the command to run.
Google App Engine Faceted Search in production has to be enabled / activated?
29,302,041
1
1
223
0
python,google-app-engine,faceted-search
A facet value cannot be an empty string. You can work around it by not including facets with empty values, or by having a special value for your empty facets. The local implementation of faceted search (python) currently accepts empty facets; that is a bug and will be fixed.
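A hedged sketch of the workaround, adapted from the code in the question: only attach the facet when the value is non-empty. The wrapper function and names are illustrative.

```python
from google.appengine.api import search


def build_document(doc_id, fields, prop_value):
    # Empty facet values are rejected by the production service, so
    # skip the facet entirely when there is nothing to record.
    facets = []
    if prop_value:
        facets.append(search.AtomFacet(name='propName', value=prop_value))
    return search.Document(doc_id=doc_id, fields=fields, facets=facets)
```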
0
1
0
0
2015-03-24T20:01:00.000
2
1.2
true
29,241,797
0
0
1
1
I've just created locally on my machine a perfectly running faceted search service using Google App Engine Faceted Search, written in Python. As soon as I deploy to our production server, it throws an error during the index creation, specifically when the code tries to execute index.put(docs), where docs is an array of [max 100] search.Document. The error is: "PutError: one or more put document operations failed: Value is empty" I then tried to step back to the previous version of my service, which was working like a charm until then. I removed all the new search.TextField additions and I removed the facets=[search.AtomFacet(...)] from the search.Document constructor keywords. It started working again. Then, a baby step forward again: I added all the fields I needed, but still no facets=[] in the constructor. It worked. As soon as I added facets=[search.AtomFacet(name='propName', value=doc.propName if doc.propName else '')] again, the error appeared again. Meanwhile, locally on my machine, it works perfectly. Is there any setting / configuration we need to enable on the production server to have this feature? Thank you
why is Java Processbuilder 4000 times slower at running commands then Python Subprocess.check_output
29,289,687
0
2
1,183
0
java,python,performance,subprocess,processbuilder
It seems like Python wasn't actually spawning a subprocess, which is why it was faster. I am sorry for the confusion. Thank you.
0
1
0
1
2015-03-24T22:03:00.000
1
0
false
29,243,748
0
0
1
1
I was trying to write a wrapper for a third party C tool using Java's ProcessBuilder. I need to run this process builder millions of times. But I found something weird about the speed. I already have a wrapper for this third party C tool in python. In python, the wrapper uses subprocess.check_output. So, I ran the java wrapper 10000 times with the same command. Also, I ran the python wrapper 10000 times with the same command. With python, my 10000 tests ran in about 0.01 second. With the java ProcessBuilder, it ran in 40 seconds. Can someone explain why I am getting such a large difference in speed between the two languages? You can try this experiment with a simple command like "time".
Installing Python devel on centos
29,283,004
2
0
3,455
0
python,linux,oracle,centos,cx-oracle
You have a newer version of python installed than the corresponding source package you're trying to install. You have python 2.6.6-37 installed but the latest available source package from your repos (that you can successfully connect to) is 2.6.6-36. But it looks like the python you have installed came from your "updates" repo, http://192.168.210.26/centos/6/updates/i386/repodata/repomd.xml which isn't working at the moment. If that repo also had the corresponding python-devel-2.6.6-37 package, and it worked (didn't throw a PYCURL error), you'd be fine; yum would find that and use it. So your first step should be fixing your LAN repo / mirror.
0
1
0
1
2015-03-26T15:40:00.000
1
0.379949
false
29,282,771
0
0
0
1
bash-4.1# yum install python-devel Loaded plugins: fastestmirror, rhnplugin This system is receiving updates from RHN Classic or RHN Satellite. Loading mirror speeds from cached hostfile * rpmforge: mirror.smartmedia.net.id * webtatic-el5: uk.repo.webtatic.com http://192.168.210.26/centos/6/updates/i386/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found" Trying other mirror. Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package python-devel.x86_64 0:2.6.6-36.el6 will be installed --> Processing Dependency: python(x86-64) = 2.6.6-36.el6 for package: python-devel-2.6.6-36.el6.x86_64 --> Finished Dependency Resolution Error: Package: python-devel-2.6.6-36.el6.x86_64 (centos64-x86_64) Requires: python(x86-64) = 2.6.6-36.el6 Installed: python-2.6.6-37.el6_4.x86_64 (@centos64-updates-x86_64) python(x86-64) = 2.6.6-37.el6_4 Available: python-2.6.6-36.el6.x86_64 (centos64-x86_64) python(x86-64) = 2.6.6-36.el6 You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest Can somebody help me with above error , I am getting. Just to let everybody know I am trying to install cx_Oracle on my CentOS system (CentOS release 6.4) and I got this error:- error: command 'gcc' failed with exit status 1 So, I searched and found out to install python-devel and to do that I am getting the above error.
How to install in python 3.4 - .whl files
33,483,629
8
5
22,006
0
python,numpy,module
The easiest solution is to unzip the .whl file using 7-Zip. Then in the unzipped directory you will find the module, which you can copy and paste into the directory C:/Python34/Lib/site-packages/ (or wherever else you have installed Python).
0
1
0
0
2015-03-26T16:48:00.000
2
1
false
29,284,282
1
0
0
2
I recently tried to re-install numpy for python 3.4, since I got a new computer, and am struggling. I am on windows 8.1, and from what I remember I previously used a .exe file that did everything for me. However, this time I was given a .whl file (apparently this is a "Wheel" file), which I cannot figure out how to install. Other posts have explained that I have to use PIP, however the explanations of how to install these files that I have been able to find are dreadful. The command "python install pip" or "pip install numpy" or all the other various commands I have seen only return an error that "python is not recognized as an internal or external command, operable program or batch file", or "pip is not recognised as an internal....", etc. I have also tried "python3.4", "python.exe" and many others, since it does not like python. The file name of the numpy file that I downloaded is "numpy-1.9.2+mkl-cp34-none-win_amd64.whl". So can anybody give me a detailed tutorial of how to use these, as by the looks of things all modules are using these now. Also, why did people stop using .exe files to install these? It was so much easier!
How to install in python 3.4 - .whl files
29,286,276
6
5
22,006
0
python,numpy,module
Python 3.4 comes with PIP already included in the package, so you should be able to start using PIP immediately after installing Python 3.4. Commands like pip install <packagename> only work if the path to PIP is included in your path environment variable. If it's not, and you'd rather not edit your environment variables, you need to provide the full path. The default location for PIP in Python 3.4 is in C:\Python34\Scripts\pip3.4.exe. If that file exists there (it should), enter the command C:\Python34\Scripts\pip3.4.exe install <numpy_whl_path>, where <numpy_whl_path> is the full path to your numpy .whl file. For example: C:\Python34\Scripts\pip3.4.exe install C:\Users\mwinfield\Downloads\numpy‑1.9.2+mkl‑cp34‑none‑win_amd64.whl.
0
1
0
0
2015-03-26T16:48:00.000
2
1.2
true
29,284,282
1
0
0
2
I recently tried to re-install numpy for python 3.4, since I got a new computer, and am struggling. I am on windows 8.1, and from what I remember I previously used a .exe file that did everything for me. However, this time I was given a .whl file (apparently this is a "Wheel" file), which I cannot figure out how to install. Other posts have explained that I have to use PIP, however the explanations of how to install these files that I have been able to find are dreadful. The command "python install pip" or "pip install numpy" or all the other various commands I have seen only return an error that "python is not recognized as an internal or external command, operable program or batch file", or "pip is not recognised as an internal....", etc. I have also tried "python3.4", "python.exe" and many others, since it does not like python. The file name of the numpy file that I downloaded is "numpy-1.9.2+mkl-cp34-none-win_amd64.whl". So can anybody give me a detailed tutorial of how to use these, as by the looks of things all modules are using these now. Also, why did people stop using .exe files to install these? It was so much easier!
Use pdb to debug into subprocess?
29,286,660
0
3
3,428
0
python,debugging,subprocess,pdb
You will have to step through the code if you have pdb. If you have the source files, set a breakpoint at the line of interest and pdb will stop there automatically. This is what we do in .NET; hopefully it works for python too.
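One common workaround (a sketch, not the only way): put an explicit breakpoint inside the child script itself, then run the parent with subprocess.call rather than check_output so the child keeps the terminal for pdb's prompt. The file name and function are illustrative.

```python
# child_script.py -- the script launched via subprocess
import pdb


def main():
    pdb.set_trace()  # execution pauses here; pdb talks to the terminal
    print('child work continues after debugging')


if __name__ == '__main__':
    main()
```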
0
1
0
0
2015-03-26T18:45:00.000
2
0
false
29,286,485
1
0
0
1
I have some python code with many calls to subprocess (for example, subprocess.check_call()). It apparently can't debug into the subprocess. Is there any way (e.g. adding code) to make it do that, or must I use a different debugger?
How to use custom authentication with the login: required attribute in app.yaml ( Google app engine, python )
29,311,398
2
0
648
0
python,google-app-engine,authentication,yaml
Essentially, you have the following alternatives: either give up on static file / dir serving directly from App Engine infrastructure (transparently to your application), or give up on using your custom user class for authentication. I suspect you'll pick the first alternative, serving all files from your app (at least, all files that must be kept secret from all but authorized users) -- that "just" costs more resources (and possibly slightly increases latency for users), but lets you implement whatever functionality you require. The advantage of serving static files/dirs directly with the static_files: &c directives in app.yaml is that your app does not actually get involved -- App Engine's infrastructure does it all for you, which saves you resources and possibly makes things faster for users (better caching/CDN-like delivery). But if your app does not actually get involved, then how could any code you wrote for custom auth possibly be running?! That would be a logical contradiction... If you're reluctant to serve static files from your app specifically because they're very large, then you can get the speed fully back (and then some), and some resource savings back too, by serving the URL from your app, but then, after authentication, going right on to Google Cloud Storage for it to actually do the serving. More generally, a mix of files you don't actually need to keep secret (place those in static_dir &c app.yaml directives), ones that are large enough to warrant serving from Cloud Storage, and ones your app can best serve directly, can let you optimize along all fronts -- while keeping full control of your custom auth wherever it matters!
0
1
0
0
2015-03-27T15:29:00.000
1
1.2
true
29,304,395
0
0
1
1
On Google app engine I use a custom user class with methods. ( Not the class and functions provided by webapp2 ) However, I still need to block users from accessing certain static directory url's with html pages behind them. The current solution I have is that the user authentication happens after the user visits the page, but they still see the entire page loaded for a moment. This looks bad and is not very secure. How can I use a custom authentication option with the login : required attribute in the YAML file? So that users are immediately redirected ( before landing on the page ) when they are not logged in.
Openshift custom env vars not available in Python
29,305,330
1
0
37
0
python,openshift,django-1.7
You probably just need to stop & start (not restart) your application via the rhc command line so that your python environment can pick them up.
0
1
0
0
2015-03-27T16:14:00.000
1
1.2
true
29,305,308
0
0
1
1
I'm trying to get a Python 2.7, Django 1.7 web gear up and running. I have hot_deploy activated. However, after setting my required env vars (via rhc), and I see them set in the gear ('env | grep MY_VAR' is OK), when running the WSGI script the vars are NOT SET. os.environ['MY_VAR'] yields KeyError. Is this somehow related to hot_deploy?
Build a buffer in file for stream of TCP packets
29,309,592
0
0
261
0
python,tcp,proxy,buffer
I recommend you join the two scripts into one and just use memory. If you can't join the scripts for some reason, create a unix-domain socket to pass the raw, binary data directly from one to the other. These fifos have limited size, so you'll still have to do in-memory buffering on one side or another, probably the B side. If the data is too big for memory, you can write it out to a temporary file and re-read it when it's time to pass it on. It'll be easiest if the same script both writes and reads the file, as then you won't have to guess when to write, advance to a new file, or deal with separate readers and writers.
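A minimal sketch of the listening side of such a unix-domain socket; the socket path is illustrative, and the other script would connect and write its raw bytes to the same path.

```python
import os
import socket

PATH = '/tmp/proxy.sock'  # illustrative socket location
if os.path.exists(PATH):
    os.unlink(PATH)  # remove a stale socket from a previous run

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(PATH)
server.listen(1)
conn, _ = server.accept()
chunk = conn.recv(4096)  # raw bytes from script A; buffer per the schedule
```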
0
1
1
0
2015-03-27T17:45:00.000
1
0
false
29,306,962
0
0
0
1
I am trying to build a proxy to buffer some packets according to some schedules. I have two TCP connections, one from host A to the proxy and the other one from the proxy to host B. The proxy forwards the packets between A and B. The proxy will buffer the packets according to the scheduled instructions. At certain time, it will buffer the packets. After the buffering period is over, it will forward the packets in the buffer and also do its normal forwarding work. I am using python. Which module would be the best in this situation? I tried pickle but it is difficult to remove and append elements in the file. Any suggestions? Thanks!
Can I upload a binary wheel to a local devpi on Linux?
29,356,362
0
3
2,736
0
python,python-wheel,devpi
I wound up using twine for the upload. The devpi interfacing script ("devpi") is interesting, but I don't think we want it installed on all the boxes I'd need it on. Thanks.
0
1
0
0
2015-03-27T21:59:00.000
2
1.2
true
29,310,849
0
0
0
1
Is it possible to use "pip wheel" to upload a binary wheel on Linux, to a local devpi server? Or do I need to get to a setup.py and do an upload from there? It seems a shame to build the wheel without need of a setup.py (it's taken care of behind the scenes), only to need a setup.py to upload the result. This makes me wonder: C extensions PyPI currently only allows uploading platform-specific wheels for Windows and Mac OS X. It is still useful to create wheels for these platforms, as it avoids the need for your users to compile the package when installing. I'm doing "pip -v -v -v wheel numpy" (for example), and I have a pip.conf and .pypirc (both pointing at our local devpi). Thanks!
DataStax OpsCenter 5.1.0 fails to start due to python 'ImportError'
29,364,698
1
0
334
0
python,cassandra,datastax,opscenter
The problem was due to missing symbolic links between the bundled python libraries. In particular, in /lib/py-debian/2.7/amd64/twisted the symbolic links to the contents of the py-unpure directory for the files _version.py, plugin.py, __init__.py and copyright.py were missing. Originally, I used gradle's copy from tarTree to extract the archive, which resulted in the missing symbolic links. Using tar -xzf instead resolves the issue and opscenter starts up as expected.
0
1
0
0
2015-03-29T13:01:00.000
1
1.2
true
29,329,425
1
0
0
1
Trying to start a tarball installation of OpsCenter 5.1.0 on Ubuntu 14.04 64-bit by running ./opscenter in /opt/opscenter-5.1.0/bin fails with the following error: Traceback (most recent call last): File "./bin/twistd", line 28, in <module> from twisted.scripts.twistd import run ImportError: cannot import name run My version of python is 2.7.6: $ python --version Python 2.7.6 And trying to import twisted results in: $ python -c "import twisted; print twisted" Traceback (most recent call last): File "<string>", line 1, in <module> ImportError: No module named twisted The value of PYTHONPATH from opscenter looks as follows: PYTHONPATH: ./src:/usr/lib/python2.7/site-packages:./src/lib/python2.7/site-packages:./lib/python2.7/site-packages:./lib/py:./lib/py-debian/2.7/amd64:: What is going wrong here and can someone suggest a workaround that is worth trying to a Python newbie?
How to remove anaconda from windows completely?
50,202,529
3
102
368,077
0
python,windows,anaconda
It looks like some files and some registry keys are still left behind. You can run the Revo cleaner tool to remove those entries as well. Do a reboot and install again; it should work now. I also faced this issue, and by completely cleaning I got rid of it.
0
1
0
0
2015-03-30T03:25:00.000
14
0.042831
false
29,337,928
1
0
0
13
I installed Anaconda a while ago but recently decided to uninstall it and just install basic python 2.7. I removed Anaconda and deleted all the directories and installed python 2.7. But when I go to install PyGTK for Windows it says it will install it to the c:/users/.../Anaconda directory - this doesn't even exist. I want to install it to the c:/python-2.7 directory. Why does it think Anaconda is still installed? And how can I change this?
How to remove anaconda from windows completely?
45,360,960
18
102
368,077
0
python,windows,anaconda
If a clean re-install/uninstall did not work, this is because the Anaconda install is still listed in the registry. Start -> Run -> Regedit Navigate to HKEY_CURRENT_USER -> Software -> Python You may see 2 subfolders, Anaconda and PythonCore. Expand both and check the "Install Location" in the Install folder, it will be listed on the right. Delete either or both Anaconda and PythonCore folders, or the entire Python folder and the Registry path to install your Python Package to Anaconda will be gone.
0
1
0
0
2015-03-30T03:25:00.000
14
1
false
29,337,928
1
0
0
13
I installed Anaconda a while ago but recently decided to uninstall it and just install basic python 2.7. I removed Anaconda and deleted all the directories and installed python 2.7. But when I go to install PyGTK for Windows it says it will install it to the c:/users/.../Anaconda directory - this doesn't even exist. I want to install it to the c:/python-2.7 directory. Why does it think Anaconda is still installed? And how can I change this?
How to remove anaconda from windows completely?
61,464,989
3
102
368,077
0
python,windows,anaconda
For Windows: In the Control Panel, choose Add or Remove Programs or Uninstall a program, and then select Python 3.6 (Anaconda) or your version of Python. Use Windows Explorer to delete the envs and pkgs folders prior to running the uninstaller in the root of your installation.
0
1
0
0
2015-03-30T03:25:00.000
14
0.042831
false
29,337,928
1
0
0
13
I installed Anaconda a while ago but recently decided to uninstall it and just install basic python 2.7. I removed Anaconda and deleted all the directories and installed python 2.7. But when I go to install PyGTK for Windows it says it will install it to the c:/users/.../Anaconda directory - this doesn't even exist. I want to install it to the c:/python-2.7 directory. Why does it think Anaconda is still installed? And how can I change this?
How to remove anaconda from windows completely?
48,980,402
7
102
368,077
0
python,windows,anaconda
To use Uninstall-Anaconda.exe in C:\Users\username\Anaconda3 is a good way.
0
1
0
0
2015-03-30T03:25:00.000
14
1
false
29,337,928
1
0
0
13
I installed Anaconda a while ago but recently decided to uninstall it and just install basic python 2.7. I removed Anaconda and deleted all the directories and installed python 2.7. But when I go to install PyGTK for Windows it says it will install it to the c:/users/.../Anaconda directory - this doesn't even exist. I want to install it to the c:/python-2.7 directory. Why does it think Anaconda is still installed? And how can I change this?
How to remove anaconda from windows completely?
58,889,740
1
102
368,077
0
python,windows,anaconda
Uninstall Anaconda from the Control Panel. Delete related folders, cache data and configurations from Users/user. Delete the AppData folder from the hidden list. To remove the Start Menu entry, go to C:/ProgramsData/Microsoft/Windows/ and delete the Anaconda folder, or search for Anaconda in the Start Menu and right-click on Anaconda Prompt -> Show in Folder. This will clean almost every Anaconda file on your system.
0
1
0
0
2015-03-30T03:25:00.000
14
0.014285
false
29,337,928
1
0
0
13
I installed Anaconda a while ago but recently decided to uninstall it and just install basic python 2.7. I removed Anaconda and deleted all the directories and installed python 2.7. But when I go to install PyGTK for Windows it says it will install it to the c:/users/.../Anaconda directory - this doesn't even exist. I want to install it to the c:/python-2.7 directory. Why does it think Anaconda is still installed? And how can I change this?
How to remove anaconda from windows completely?
63,536,711
3
102
368,077
0
python,windows,anaconda
There is a start item folder on the C:\ drive. Remove your anaconda3 folder there and you are good to go. In my case I found it here: "C:\Users\pravu\AppData\Roaming\Microsoft\Windows\Start Menu\Programs"
0
1
0
0
2015-03-30T03:25:00.000
14
0.042831
false
29,337,928
1
0
0
13
I installed Anaconda a while ago but recently decided to uninstall it and just install basic python 2.7. I removed Anaconda and deleted all the directories and installed python 2.7. But when I go to install PyGTK for Windows it says it will install it to the c:/users/.../Anaconda directory - this doesn't even exist. I want to install it to the c:/python-2.7 directory. Why does it think Anaconda is still installed? And how can I change this?