Title
stringlengths 15
150
| A_Id
int64 2.98k
72.4M
| Users Score
int64 -17
470
| Q_Score
int64 0
5.69k
| ViewCount
int64 18
4.06M
| Database and SQL
int64 0
1
| Tags
stringlengths 6
105
| Answer
stringlengths 11
6.38k
| GUI and Desktop Applications
int64 0
1
| System Administration and DevOps
int64 1
1
| Networking and APIs
int64 0
1
| Other
int64 0
1
| CreationDate
stringlengths 23
23
| AnswerCount
int64 1
64
| Score
float64 -1
1.2
| is_accepted
bool 2
classes | Q_Id
int64 1.85k
44.1M
| Python Basics and Environment
int64 0
1
| Data Science and Machine Learning
int64 0
1
| Web Development
int64 0
1
| Available Count
int64 1
17
| Question
stringlengths 41
29k
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
How to remove anaconda from windows completely?
| 56,087,895 | 3 | 102 | 368,077 | 0 |
python,windows,anaconda
|
On my machine (Win10), the uninstaller was located at C:\ProgramData\Anaconda3\Uninstall-Anaconda3.exe.
| 0 | 1 | 0 | 0 |
2015-03-30T03:25:00.000
| 14 | 0.042831 | false | 29,337,928 | 1 | 0 | 0 | 13 |
I installed Anaconda a while ago but recently decided to uninstall it and just install basic python 2.7.
I removed Anaconda and deleted all the directories and installed python 2.7.
But when I go to install PyGTK for Windows it says it will install it to the c:/users/.../Anaconda directory - this doesn't even exist. I want to install it to the c:/python-2.7 directory. Why does it think Anaconda is still installed? And how can I change this?
|
How to remove anaconda from windows completely?
| 54,453,907 | 1 | 102 | 368,077 | 0 |
python,windows,anaconda
|
Go to C:\Users\username\Anaconda3 and search for Uninstall-Anaconda3.exe which will remove all the components of Anaconda.
| 0 | 1 | 0 | 0 |
2015-03-30T03:25:00.000
| 14 | 0.014285 | false | 29,337,928 | 1 | 0 | 0 | 13 |
I installed Anaconda a while ago but recently decided to uninstall it and just install basic python 2.7.
I removed Anaconda and deleted all the directories and installed python 2.7.
But when I go to install PyGTK for Windows it says it will install it to the c:/users/.../Anaconda directory - this doesn't even exist. I want to install it to the c:/python-2.7 directory. Why does it think Anaconda is still installed? And how can I change this?
|
How to remove anaconda from windows completely?
| 37,898,464 | 19 | 102 | 368,077 | 0 |
python,windows,anaconda
|
On my computer there wasn't an uninstaller in the Start Menu either. But it worked through Control Panel > Programs > Uninstall a Program, selecting Python (Anaconda 64-bit) in the menu.
(Note that I'm using Win10.)
| 0 | 1 | 0 | 0 |
2015-03-30T03:25:00.000
| 14 | 1 | false | 29,337,928 | 1 | 0 | 0 | 13 |
I installed Anaconda a while ago but recently decided to uninstall it and just install basic python 2.7.
I removed Anaconda and deleted all the directories and installed python 2.7.
But when I go to install PyGTK for Windows it says it will install it to the c:/users/.../Anaconda directory - this doesn't even exist. I want to install it to the c:/python-2.7 directory. Why does it think Anaconda is still installed? And how can I change this?
|
How to remove anaconda from windows completely?
| 39,490,516 | 184 | 102 | 368,077 | 0 |
python,windows,anaconda
|
In the folder where you installed Anaconda (example: C:\Users\username\Anaconda3) there should be an executable called Uninstall-Anaconda.exe. Double-click this file to start uninstalling Anaconda.
That should do the trick as well.
| 0 | 1 | 0 | 0 |
2015-03-30T03:25:00.000
| 14 | 1 | false | 29,337,928 | 1 | 0 | 0 | 13 |
I installed Anaconda a while ago but recently decided to uninstall it and just install basic python 2.7.
I removed Anaconda and deleted all the directories and installed python 2.7.
But when I go to install PyGTK for Windows it says it will install it to the c:/users/.../Anaconda directory - this doesn't even exist. I want to install it to the c:/python-2.7 directory. Why does it think Anaconda is still installed? And how can I change this?
|
How to remove anaconda from windows completely?
| 29,377,301 | 8 | 102 | 368,077 | 0 |
python,windows,anaconda
|
Anaconda comes with an uninstaller, which should have been installed in the Start menu.
| 0 | 1 | 0 | 0 |
2015-03-30T03:25:00.000
| 14 | 1 | false | 29,337,928 | 1 | 0 | 0 | 13 |
I installed Anaconda a while ago but recently decided to uninstall it and just install basic python 2.7.
I removed Anaconda and deleted all the directories and installed python 2.7.
But when I go to install PyGTK for Windows it says it will install it to the c:/users/.../Anaconda directory - this doesn't even exist. I want to install it to the c:/python-2.7 directory. Why does it think Anaconda is still installed? And how can I change this?
|
How to remove anaconda from windows completely?
| 29,377,433 | 15 | 102 | 368,077 | 0 |
python,windows,anaconda
|
Since I didn't have the uninstaller listed, the solution turned out to be to reinstall Anaconda and then uninstall it.
| 0 | 1 | 0 | 0 |
2015-03-30T03:25:00.000
| 14 | 1.2 | true | 29,337,928 | 1 | 0 | 0 | 13 |
I installed Anaconda a while ago but recently decided to uninstall it and just install basic python 2.7.
I removed Anaconda and deleted all the directories and installed python 2.7.
But when I go to install PyGTK for Windows it says it will install it to the c:/users/.../Anaconda directory - this doesn't even exist. I want to install it to the c:/python-2.7 directory. Why does it think Anaconda is still installed? And how can I change this?
|
How to remove anaconda from windows completely?
| 56,794,491 | 4 | 102 | 368,077 | 0 |
python,windows,anaconda
|
Method 1:
To uninstall Anaconda3, go to the Anaconda3 folder, where you will find an executable called Uninstall-Anaconda3.exe; double-click on it. This should uninstall your application.
Sometimes shortcuts for the Anaconda command prompt, Jupyter Notebook, Spyder, etc. remain, so delete those files too.
Method 2 (Windows 8):
Go to Control Panel > Programs > Uninstall a Program and then select Anaconda3 (Python 3.1, 64-bit) in the menu.
| 0 | 1 | 0 | 0 |
2015-03-30T03:25:00.000
| 14 | 0.057081 | false | 29,337,928 | 1 | 0 | 0 | 13 |
I installed Anaconda a while ago but recently decided to uninstall it and just install basic python 2.7.
I removed Anaconda and deleted all the directories and installed python 2.7.
But when I go to install PyGTK for Windows it says it will install it to the c:/users/.../Anaconda directory - this doesn't even exist. I want to install it to the c:/python-2.7 directory. Why does it think Anaconda is still installed? And how can I change this?
|
Installed python 2.7.9 instead of 3.4.3 and can't run pip install in cmd anymore
| 29,346,753 | 0 | 0 | 517 | 0 |
python,python-2.7,cmd
|
Pip is not in C:\Python27.
It's in C:\Python27\Scripts.
Check that folder to make sure there is a pip.exe; if there is, then pip installed fine. Then make sure that C:\Python27\Scripts is in your PATH.
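A quick way to verify the fix from Python itself (a sketch; the path below is the default install location assumed above):

```python
import os

def dir_on_path(directory):
    """Return True if `directory` appears in the PATH environment variable."""
    entries = os.environ.get("PATH", "").split(os.pathsep)
    # Normalize so trailing slashes (and case, on Windows) don't cause misses.
    target = os.path.normcase(os.path.normpath(directory))
    return any(os.path.normcase(os.path.normpath(e)) == target for e in entries)

print(dir_on_path(r"C:\Python27\Scripts"))
```

If this prints False, the Scripts directory still needs to be added to PATH (and the shell reopened).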
| 0 | 1 | 0 | 0 |
2015-03-30T12:42:00.000
| 1 | 1.2 | true | 29,346,545 | 1 | 0 | 0 | 1 |
I installed Python 2.7.9 instead of 3.4.3 because the xlutils module doesn't work on 3.4.3. Since installing Python 2.7.9 I just can't install the related modules using pip install, even though I added ;C:\Python27 to the system path.
What am I missing here? It keeps giving me this error:
'pip' is not recognized as an internal or external command
|
change date/time using python in linux
| 37,574,931 | 1 | 1 | 6,022 | 0 |
python,linux,date,time
|
You used a lowercase -s where it is meant to be an uppercase -S (sudo's flag to read the password from stdin); that is why it isn't working.
| 0 | 1 | 0 | 0 |
2015-03-30T13:51:00.000
| 2 | 0.099668 | false | 29,347,981 | 0 | 0 | 0 | 1 |
I tried using date command to change the system time in debian linux:
os.system("echo passwd | "sudo date -s \"Thu Aug 9 21:31:26 UTC 2012\")
and I set the python file permission to 777 and also chown as root. But it does not work and says date: cannot set date: Operation not permitted. Any Ideas?
Thanks
|
Python 3.5 Port Binding Error
| 29,353,947 | 0 | 2 | 1,571 | 0 |
python
|
Assumptions:
You're running on Windows and installed the basic Python for Windows (which includes IDLE), and probably stuck with the defaults (so you should have Python in c:\python35).
Related assumption: Windows doesn't have an out-of-the-box loopback interface, and trying to enable one is going to be more painful than working around it.
Recommendations:
You might want to try to sneakernet (i.e. download on a friend's computer to a USB stick and then copy over) PyCharm or another IDE, but I don't know whether it would have the same problem.
If that doesn't work, you should be able to (as an interim step while you wait for a new computer):
1. Use IDLE (with the -n flag) to edit your Python program(s) [because trying to edit Python programs in WordPad is cruel]; save your program as myprogram.py
2. Open up cmd (DOS prompt), cd to your save directory, and run c:\python35\python.exe myprogram.py
3. Repeat 1 and 2 as needed
| 0 | 1 | 0 | 0 |
2015-03-30T18:29:00.000
| 2 | 0 | false | 29,353,557 | 0 | 0 | 0 | 2 |
I need some help. I'm trying to run Python IDLE on my computer but I get the following error:
IDLE can't bind to a TCP/IP port, which is necessary to communicate with its Python execution server. This might be because no networking is installed on this computer. Run IDLE with the -n command line switch to start without a subprocess and refer Help/IDLE Help 'Running without a subprocess' for further details.
My networking card fried a couple of days ago, so I think this might be the problem, I don't know. I also tried referring to IDLE Help but I couldn't understand a thing.
Is there any way I can get past this problem? I need this program operational for my programming class, at least until my new computer arrives.
|
Python 3.5 Port Binding Error
| 58,263,324 | 0 | 2 | 1,571 | 0 |
python
|
The -n flag only applies to that one launch: after closing IDLE, you have to start it with the -n flag again each time you want to open it.
| 0 | 1 | 0 | 0 |
2015-03-30T18:29:00.000
| 2 | 0 | false | 29,353,557 | 0 | 0 | 0 | 2 |
I need some help. I'm trying to run Python IDLE on my computer but I get the following error:
IDLE can't bind to a TCP/IP port, which is necessary to communicate with its Python execution server. This might be because no networking is installed on this computer. Run IDLE with the -n command line switch to start without a subprocess and refer Help/IDLE Help 'Running without a subprocess' for further details.
My networking card fried a couple of days ago, so I think this might be the problem, I don't know. I also tried referring to IDLE Help but I couldn't understand a thing.
Is there any way I can get past this problem? I need this program operational for my programming class, at least until my new computer arrives.
|
Gsutil - How can I check if a file exists in a GCS bucket (a sub-directory) using Gsutil
| 59,030,743 | 0 | 18 | 24,723 | 0 |
python,google-cloud-storage,gsutil
|
If for whatever reason you want to do something depending on the result of that listing (for example, load a BigQuery table if there are Parquet files in a directory):
gsutil -q stat gs://dir/*.parquet; if [ $? == 0 ]; then bq load ... ; fi
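The same exit-code check can be done from Python with subprocess. A sketch, assuming gsutil is on PATH (the bucket URL below is just a placeholder):

```python
import subprocess

def object_exists(gcs_url):
    """Return True when `gsutil -q stat` exits 0, i.e. the object exists."""
    result = subprocess.run(["gsutil", "-q", "stat", gcs_url])
    return result.returncode == 0

# Example with a placeholder URL:
# if object_exists("gs://main-bucket/sub-directory-bucket/object1.gz"):
#     ...  # e.g. kick off the bq load
```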
| 0 | 1 | 0 | 0 |
2015-03-30T22:24:00.000
| 6 | 0 | false | 29,357,420 | 0 | 0 | 0 | 1 |
I have a GCS bucket containing some files in the path
gs://main-bucket/sub-directory-bucket/object1.gz
I would like to programmatically check if the sub-directory bucket contains one specific file. I would like to do this using gsutil.
How could this be done?
|
Root access in Python3
| 29,383,623 | 0 | 0 | 518 | 0 |
python-3.x,root
|
Don't you just need to launch the program with sudo (running as root is not recommended)? I am not sure you can run only part of the code as root.
Alternatively, split the program into a privileged part and an unprivileged part, with some messaging scheme between them.
| 0 | 1 | 0 | 0 |
2015-04-01T04:49:00.000
| 1 | 0 | false | 29,383,061 | 0 | 0 | 0 | 1 |
I want to write a simple program to manage the screen brightness on my laptop, running Python3 under Ubuntu Linux.
To directly change the screen brightness levels, I can deal with a single file in the folder /sys/class/backlight/acpi_video0, called brightness.
(the maximum brightness is another text file called max_brightness, so it's easy to find)
The problem is, however, that I want to grant my program partial access to root permissions, just enough to modify the files in that folder (though, I'd like it to be flexible enough to choose any folder in /sys/class/backlight/, in case it's not named acpi_video0), but not actually run as root, as that may cause problems as it tries to access GTK for a graphical interface.
How do I grant a Python3 program partial root permissions?
|
Using 3rd party packages on remote machine without download/install rights
| 29,398,092 | 1 | 0 | 33 | 0 |
python
|
It is basically useless if you don't have execute permission on the remote machine. You need to contact your administrator to obtain execute permission.
As for SCPing files to the remote server, you may still be able to copy your files, but you may not be able to execute them.
| 0 | 1 | 0 | 0 |
2015-04-01T18:11:00.000
| 2 | 0.099668 | false | 29,397,839 | 0 | 0 | 0 | 1 |
I am SSHed into a remote machine and I do not have rights to download python packages but I want to use 3rd party applications for my project. I found cx_freeze but I'm not sure if that is what I need.
What I want to achieve is to be able to run different parts of my project (with mains everywhere) with command line arguments on the remote machine. My project will be filled with a few 3rd party Python packages. Not sure how to get around this, as I cannot pip install and am not a sudoer. I can SCP files to the remote machine.
|
Python delete lines of text line #1 till regex
| 29,452,916 | 0 | 2 | 1,429 | 0 |
python,regex
|
Set a flag to false.
Iterate over each line.
For each line:
1) If the flag is currently set, print the line.
2) When the line matches your pattern, set the flag.
(Checking the flag before matching means the matching line itself is skipped, just like sed's inclusive delete.)
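The flag approach can be sketched in Python like this (mimicking sed's inclusive 1,/PATTERN/d, so the matching line itself is also dropped; the helper name is my own):

```python
import re

def delete_until(lines, pattern):
    """Drop every line up to and including the first match of `pattern`,
    like sed '1,/PATTERN/d'."""
    out = []
    found = False
    for line in lines:
        if found:
            out.append(line)          # after the match: keep everything
        elif re.search(pattern, line):
            found = True              # the matching line itself is dropped
    return out

print(delete_until(["one", "two", "COMMANDS", "three"], r"COMMANDS"))
# → ['three']
```

For a file, iterate over the open file object the same way and write the result back out.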
| 0 | 1 | 0 | 0 |
2015-04-05T00:28:00.000
| 3 | 0 | false | 29,452,879 | 1 | 0 | 0 | 1 |
I have an issue that I can't seem to find a solution within python.
From command line I can do this by:
sed '1,/COMMANDS/d' /var/tmp/newFile
This delete everything from line #1 till regex "COMMANDS". Simple
But I can't do the same with Python that I can find.
The re.sub and multiline doesn't seem to work.
So I have a question how can I do this in a pythonic way? I really rather not run sed from within python unless I have to.
|
Command window popping up when running a Python executable?
| 29,453,760 | 1 | 0 | 2,357 | 0 |
python,exe,executable,py2exe,command-window
|
If all your program does is print something and you run it by double-clicking the executable, then it simply closes the console when it finishes running. If you want the window to stay open, run your program from the command line. You can also create a batch file that runs your program and then pauses the console, so that you at least get a "press any key" before the console closes.
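One sketch of the "press any key" idea, done inside the script itself rather than in a batch file (`pause_before_exit` is a hypothetical helper name, not a library function):

```python
import sys

def pause_before_exit(prompt="Press Enter to exit..."):
    """Hold the console window open, but only when attached to an
    interactive console, so piped/automated runs don't hang."""
    if sys.stdin is not None and sys.stdin.isatty():
        input(prompt)

print("your program's output")
pause_before_exit()  # returns immediately when stdin is not a terminal
```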
| 0 | 1 | 0 | 0 |
2015-04-05T03:14:00.000
| 1 | 1.2 | true | 29,453,737 | 1 | 0 | 0 | 1 |
I recently made an executable of a Python program with py2exe, and when I ran the executable, a command window appeared for a split second and then disappeared. My program itself never actually ran at all. Everything is still inside the dist folder, so I'm not sure what's actually wrong. Is there a solution for this?
|
Can I use a linux-based ORM from multiple languages?
| 29,479,399 | 0 | 0 | 124 | 0 |
python,ruby,database,oop,orm
|
This question doesn't really make sense. Presumably LINQ, like any .NET library, can be used in any language that runs in the CLR: C#, VB, IronPython, IronRuby, etc.
The most common cross-language runtime that works on Linux is the Java VM, and you can use Java libraries - including ORMs like JDO - in any language that uses that VM: Java, Scala, Clojure, Jython, JRuby, etc.
| 0 | 1 | 0 | 0 |
2015-04-06T20:26:00.000
| 2 | 0 | false | 29,479,112 | 1 | 0 | 0 | 2 |
The predominant ORMs that run in a linux-based environment seem to be written around a specific language.
Microsoft LINQ, however, supports access from a number of languages. Can I do this in linux-land (i.e. non-LINQ-land, non-JVM-land), for example between native versions of Python and Ruby?
|
Can I use a linux-based ORM from multiple languages?
| 29,481,003 | 1 | 0 | 124 | 0 |
python,ruby,database,oop,orm
|
It seems that the only way to do this is to use languages which share a common VM, such as .NET CLR (and LINQ) or the Java JVM (Hibernate, Eclipse Link, etc).
So for the various languages running in their native implementation, the answer is no.
| 0 | 1 | 0 | 0 |
2015-04-06T20:26:00.000
| 2 | 1.2 | true | 29,479,112 | 1 | 0 | 0 | 2 |
The predominant ORMs that run in a linux-based environment seem to be written around a specific language.
Microsoft LINQ, however, supports access from a number of languages. Can I do this in linux-land (i.e. non-LINQ-land, non-JVM-land), for example between native versions of Python and Ruby?
|
PIP Python Installation weird error
| 29,558,245 | 0 | 0 | 57 | 0 |
python,pip
|
You can try this.
sudo mv /var/lib/dpkg/info /var/lib/dpkg/info.bak
sudo mkdir /var/lib/dpkg/info
sudo apt-get update
Hope it can help you (and others).
| 0 | 1 | 0 | 0 |
2015-04-07T19:05:00.000
| 1 | 0 | false | 29,499,285 | 1 | 0 | 0 | 1 |
I am trying to install PIP for Python3, but no matter what I try at some point I always end up with:
E: Sub-process /usr/bin/dpkg returned an error code (1)
E: Failed to process build dependencies.
I tried with:
python get-pip.py from the official PIP page.
sudo apt-get install python3-pip
sudo apt-get build-dep python3.4
I have the version of pip for Python 2.7; that's why I ran the last command. Can someone help me out, please?
|
easy_install not working on OS X
| 29,546,275 | 0 | 0 | 314 | 0 |
python-2.7,easy-install
|
Have you recently upgraded your OS? Sometimes the X-Code Command Line Tools need to be re-installed after an OS upgrade.
| 0 | 1 | 0 | 0 |
2015-04-09T18:30:00.000
| 1 | 0 | false | 29,546,225 | 0 | 0 | 0 | 1 |
I seem to have screwed up my Python install on my Mac (running OS X 10.10.3); I can run python but not easy_install. Running easy_install just gives me
sudo: easy_install: command not found
However, sudo easy_install-3.4 pip doesn't give me any error but when I then try to use pip using pip install gevent I get
-bash: /usr/local/bin/pip: No such file or directory
If I use pip3.4 install geventI get a long set of errors ending with
Cleaning up...
Command /Library/Frameworks/Python.framework/Versions/3.4/bin/python3.4 -c "import setuptools, tokenize;__file__='/private/var/folders/sb/bk7v6n4x30s6c_w_p3jf7mrh0000gn/T/pip_build_Oskar/gevent/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/sb/bk7v6n4x30s6c_w_p3jf7mrh0000gn/T/pip-q7w99lz8-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /private/var/folders/sb/bk7v6n4x30s6c_w_p3jf7mrh0000gn/T/pip_build_Oskar/gevent
Storing debug log for failure in /var/folders/sb/bk7v6n4x30s6c_w_p3jf7mrh0000gn/T/tmpoowjltmj
How can I restore my Python setup?
|
Get IO Wait time as % in python
| 29,548,863 | 1 | 1 | 1,680 | 0 |
python,psutil,iowait
|
%wa is giving you the iowait of the CPU, and if you are using times = psutil.cpu_times() or times = psutil.cpu_times_percent() then it is available as times.iowait on the returned value (assuming you are on a Linux system).
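If you'd rather see where the number comes from, on Linux psutil reads /proc/stat; here is a minimal hand-rolled sketch (field order per the proc(5) man page, helper name my own):

```python
def cpu_iowait_fraction(stat_line=None):
    """Parse the aggregate 'cpu' line of /proc/stat and return iowait as a
    fraction of all jiffies accumulated since boot."""
    if stat_line is None:
        with open("/proc/stat") as f:
            stat_line = f.readline()
    fields = [int(x) for x in stat_line.split()[1:]]
    # Field order: user nice system idle iowait irq softirq steal ...
    return fields[4] / sum(fields)

# With a synthetic line: 50 iowait jiffies out of 1000 total
print(cpu_iowait_fraction("cpu 100 0 50 800 50 0 0"))
# → 0.05
```

Multiply by 100 for something comparable to top's %wa averaged since boot; for an instantaneous value, sample the counters twice and diff them.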
| 0 | 1 | 0 | 1 |
2015-04-09T20:51:00.000
| 1 | 1.2 | true | 29,548,735 | 0 | 0 | 0 | 1 |
I am writing a python script to get some basic system stats. I am using psutil for most of it and it is working fine except for one thing that I need.
I'd like to log the average cpu wait time at the moment.
From top's output, it would be in the CPU section under %wa.
I can't seem to find how to get that in psutil, does anyone know how to get it? I am about to go down a road I really don't want to go on....
That entire CPU row is rather nice, since it totals to 100 and it is easy to log and plot.
Thanks in advance.
|
Which will give the best performance Hive or Pig or Python Mapreduce with text file and oracle table as source?
| 29,991,069 | 3 | 2 | 2,382 | 0 |
python,hadoop,mapreduce,hive,apache-pig
|
Python MapReduce, or anything using the Hadoop Streaming interface, will most likely be slower. That is due to the overhead of passing data through stdin and stdout and the implementation of the streaming API consumer (in your case Python). Python UDFs in Hive and Pig do the same thing.
You might not want to compress the data flow into ORC on the Python side. You'd be subjected to using Python's ORC libraries, which I am not sure are available. It would be easier if you let Python return your serialized object and let the Hadoop reduce steps compress and store it as ORC (Python as a UDF for computation).
Yes. Pig and Python have a somewhat nice programmatic interface wherein you can write Python scripts to dynamically generate Pig logic and submit it in parallel. Look up "Embedding Pig Latin in Python". It's robust enough to define Python UDFs and let Pig do the overall abstraction and job optimization. Pig does lazy evaluation, so in cases of multiple joins or multiple transformations it can demonstrate pretty good performance in optimizing the complete pipeline.
You say HDP 2.1. Have you had a look at Spark? If performance is important to you, then given your dataset sizes, which don't look huge, you can expect many-times-faster overall pipeline execution than with Hadoop's native MR engine.
| 0 | 1 | 0 | 0 |
2015-04-10T03:30:00.000
| 1 | 0.53705 | false | 29,552,853 | 0 | 1 | 0 | 1 |
I have the below requirements and confused about which one to choose for high performance. I am not java developer. I am comfort with Hive, Pig and Python.
I am using HDP 2.1 with the Tez engine. Data sources are text files (80 GB) and an Oracle table (15 GB). Both are structured data. I heard Hive suits structured data, and also that the Python MapReduce streaming approach has higher performance than Hive & Pig. Please clarify.
I am using Hive and the reasons are:
need to join those two sources based on one column.
using ORC format table to store the join results since the data size is huge
The text file name will be used to generate one output column, which is done with the virtual column input__file__name.
After join need to do some arithmetic operations on each row and doing that via python UDF
Now the complete execution time, from copying data into HDFS to the final result, is 2.30 hrs on a 4-node cluster using Hive and a Python UDF.
My questions are:
1) I heard Java Mapreduce always faster. Will that be true with Python Map reduce streaming concept too?
2) Can I achieve all the above functions in Python like join, retrieval of text file name, compressed data flow like ORC since the volume is high?
3) Will Pig join would be better than Hive? If yes can we get input text file name in Pig to generate output column?
Thanks in advance.
|
Python | Python.exe- Entry point not found
| 35,697,811 | 1 | 5 | 12,725 | 0 |
python-3.x,system,kivy
|
This happened to me because an old zlib1.dll was being loaded from somewhere in my PATH. I copied a new version to system32 and it solved the problem.
| 1 | 1 | 0 | 0 |
2015-04-10T17:03:00.000
| 4 | 0.049958 | false | 29,566,947 | 0 | 0 | 0 | 2 |
I installed (extracted) Kivy (Kivy-1.9.0-py3.4-win32-x86.exe) on my PC (Win7 32-bit). Now whenever I try to run a file using kivy-3.4.bat I get an error message in a window...
python.exe- Entry point not found
The procedure entry point inflateReset2 could not be located in the dynamic link library zlib1.dll.
Once click on the "Ok" button I see
[Critical ] [app] unable to get a window, abort
on CMD.
I think this is a problem related to my system and Python more than Kivy. Can anyone tell me what is the problem and how to solve it?
This is amazing!! Even in StackOverFlow no one could give me any solution!!
|
Python | Python.exe- Entry point not found
| 34,912,630 | 4 | 5 | 12,725 | 0 |
python-3.x,system,kivy
|
Yes, I know this post is a bit old, but maybe someone else is searching for this.
I got the same error. Really, I just tried to start the Python script via MS PowerShell instead of CMD (I only meant to use PowerShell the one time), and it worked, at least for me.
So, if you encounter this error, try using PowerShell :)
| 1 | 1 | 0 | 0 |
2015-04-10T17:03:00.000
| 4 | 0.197375 | false | 29,566,947 | 0 | 0 | 0 | 2 |
I installed (extracted) Kivy (Kivy-1.9.0-py3.4-win32-x86.exe) on my PC (Win7 32-bit). Now whenever I try to run a file using kivy-3.4.bat I get an error message in a window...
python.exe- Entry point not found
The procedure entry point inflateReset2 could not be located in the dynamic link library zlib1.dll.
Once click on the "Ok" button I see
[Critical ] [app] unable to get a window, abort
on CMD.
I think this is a problem related to my system and Python more than Kivy. Can anyone tell me what is the problem and how to solve it?
This is amazing!! Even in StackOverFlow no one could give me any solution!!
|
Is it possible to run java command line app from python in AWS EC2?
| 29,572,952 | 0 | 1 | 211 | 0 |
java,python,amazon-web-services,amazon-ec2
|
Since you're just making a command line call to the Java app, the path of least resistance would just be to make that call from another server using ssh. You can easily adapt the command you've been using with subprocess.call to use ssh -- more or less, subprocess.call(['ssh', '{user}@{server}', command]) (although have fun figuring out the quotation marks). As an aside on those lines, I usually find using '-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' stabilizes scripted SSH calls in my environment.
The more involved thing will be setting up the environments to properly run the components you need. You'll need to set up ssh configs so that your django app can ssh over and set up -- probably with private key verification. Then, you'll need to make sure that your EC2 security groups are set up to allow the ssh access to your java server on port 22, where sshd listens by default.
None of this is that hairy, but all the same, it might be stabler to just wrap your Java service in a HTTP server that your Django app can hit. Anyway, hope this is helpful.
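The ssh call mentioned above can be assembled like this (a sketch; the user, host, and Java command line are placeholders, not values from the question):

```python
import subprocess

def build_ssh_command(user, host, remote_cmd):
    """Build the argv list for a non-interactive, scripted SSH call,
    using the host-key options suggested above."""
    return [
        "ssh",
        "-o", "StrictHostKeyChecking=no",
        "-o", "UserKnownHostsFile=/dev/null",
        f"{user}@{host}",
        remote_cmd,
    ]

cmd = build_ssh_command("ubuntu", "java-host.internal", "java -jar /opt/app/model.jar")
# subprocess.call(cmd)  # uncomment on a machine with key-based SSH configured
print(cmd[0], cmd[-1])
```

Passing the command as a list avoids most of the quoting headaches, since nothing is interpreted by a local shell.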
| 0 | 1 | 0 | 0 |
2015-04-11T00:22:00.000
| 1 | 0 | false | 29,572,608 | 0 | 0 | 1 | 1 |
I am working on some machine learning for chemical modelling in python. I need to run a java app (from command line through python subprocess.call) and a python webserver. Is this possible on AWS EC2?
I currently have this setup running on my mac but I am curious on how to set it up on aws.
Thanks in advance!
|
Pycharm (Ubuntu) program not opening
| 29,581,633 | 0 | 0 | 3,197 | 0 |
python,linux,pycharm,ubuntu-14.04
|
Hi Pedro. On Ubuntu you need to install software using the terminal: even if you downloaded the files, you need to install them with a command. Unlike Windows, on Linux downloaded software often has to be installed manually using commands like sudo apt-get install <package>.
For further help you can consult http://linuxg.net/how-to-install-pycharm-3-4-on-ubuntu-14-04-linux-mint-17-pinguy-os-14-04-and-other-ubuntu-14-04-derivatives/
| 0 | 1 | 0 | 0 |
2015-04-11T18:19:00.000
| 2 | 1.2 | true | 29,581,436 | 1 | 0 | 0 | 1 |
I am running Ubuntu 14.04/GNOME 3.8.4 on 15 MBP Duel Boot.
I am new to Linux and Python(Pycharm ide)
I have downloaded the PyCharm "files" from the Software Center
but cannot run the program. The icon comes up in the sidebar, but when I click it nothing happens. I have tried "./" and only the code appears in gedit. Please help; Oracle Java is already installed.
|
How do you make a Python executable file on Mac?
| 29,584,289 | 1 | 8 | 17,211 | 0 |
python,macos,exe
|
You can run python scripts through OS X Terminal. You just have to write a python script with an editor, open your Terminal and enter python path_to_my_script/my_script.py
| 0 | 1 | 0 | 0 |
2015-04-11T23:33:00.000
| 2 | 0.099668 | false | 29,584,270 | 0 | 0 | 0 | 1 |
My google searching has failed me. I'd like to know how to make an executable Python file on OS X, that is, how to go about making an .exe file that can launch a Python script with a double click (not from the shell). For that matter I'd assume the solution for this would be similar between different scripting languages, is this the case?
|
I don't know how to update my Python version to 3.4?
| 37,517,408 | 0 | 0 | 113 | 0 |
macos,python-3.x
|
I have found that making the 'python' alias replace the default version of Python that the system comes with is a bad idea.
When you install a new version of Python (3.4, for instance),
these two new commands are installed, specifically for the version you installed:
pip3.4
python3.4
If you're using an IDE that wants you to indicate which Python version you are using, the IDE will let you navigate to it in the Library folder.
pip will still be for Python 2.7 after you install some other Python version, as I think that's the version OS X comes installed with.
| 0 | 1 | 0 | 0 |
2015-04-12T01:10:00.000
| 3 | 0 | false | 29,584,840 | 1 | 0 | 0 | 2 |
I'm on OSX, and I installed IDLE for Python 3.4. However, in Terminal my python -V and pip --version are both Python 2.7.
How do I fix this? I really have no idea how any of this works, so please bear with my lack of knowledge.
|
I don't know how to update my Python version to 3.4?
| 29,584,868 | 0 | 0 | 113 | 0 |
macos,python-3.x
|
Try python3 or python3.4. It should print out the right version if correctly installed.
Python 3.4 already has pip with it. You can use python3 -m pip to access pip. Or python3 -m ensurepip to make sure that it's correctly installed.
| 0 | 1 | 0 | 0 |
2015-04-12T01:10:00.000
| 3 | 0 | false | 29,584,840 | 1 | 0 | 0 | 2 |
I'm on OSX, and I installed IDLE for Python 3.4. However, in Terminal my python -V and pip --version are both Python 2.7.
How do I fix this? I really have no idea how any of this works, so please bear with my lack of knowledge.
|
Webapp2 redirect 404 error
| 29,591,301 | 1 | 2 | 237 | 0 |
python,google-app-engine,redirect,webapp2
|
Redirect takes a URL. You probably want to self.redirect("/") but without knowing your URL mappings, that's just a guess.
| 0 | 1 | 0 | 0 |
2015-04-12T15:34:00.000
| 1 | 0.197375 | false | 29,591,189 | 0 | 0 | 1 | 1 |
I'm having trouble with the redirect function. When I call it with self.redirect("/index.html") the server goes to http://localhost:10080/index.html and shows a 404 page not found error.
Log:
HTTP/1.1" 304 -
INFO 2015-04-12 12:32:39,029 module.py:737] default: "POST /subscribe HTTP/1.1" 302 -
INFO 2015-04-12 12:32:39,046 module.py:737] default: "GET /index.html HTTP/1.1" 404 154
INFO 2015-04-12 12:32:39,223 module.py:737] default: "GET /favicon.ico HTTP/1.1" 304 -
INFO 2015-04-12 12:32:39,296 module.py:737] default: "GET /favicon.ico HTTP/1.1" 304 -
|
install python pygame on ubuntu 14.10
| 30,444,254 | 1 | 0 | 386 | 0 |
python-2.7,pygame,ubuntu-14.10
|
If you open up the terminal
type in
sudo apt-get install python-pygame
It should download and install the dependencies required for pygame.
| 1 | 1 | 0 | 0 |
2015-04-12T18:46:00.000
| 1 | 0.197375 | false | 29,593,223 | 1 | 0 | 0 | 1 |
I want to install the pygame library on Ubuntu 14.10. I am using Python 2.7.x. At this time I found dependency problems with python-numpy, because it uses a previous version of gcc than the one 14.10 uses; that's why I am stuck. My question is: is there any way to install pygame on 14.10?
best
|
kombu producer and celery consumer
| 50,602,825 | 0 | 1 | 673 | 0 |
python,rabbitmq,celery,kombu
|
I understand that to communicate with RabbitMQ, you require any library that abides by the AMQP specification.
Kombu is one such library; it can bind to a RabbitMQ exchange, listen, and process messages by spawning numerous consumers.
Celery is nothing but an asynchronous task generator, which has numerous add-ons like in-memory processing, the capacity to write to a DB/Redis cache, performing complex operations, and so on.
That said, you can use Kombu to read and write messages to/from RabbitMQ, and use Celery workers to process the messages.
| 0 | 1 | 0 | 0 |
2015-04-13T16:53:00.000
| 1 | 0 | false | 29,610,806 | 0 | 0 | 0 | 1 |
Is it possible for a kombu producer to queue a message on rabbitmq to be processed by celery workers? It seems the celery workers do not understand the message put by the kombu producer.
|
how to execute shell script in the same process in python
| 29,621,377 | 1 | 0 | 1,004 | 0 |
python,shell
|
What if you make a 'master' shell script that would execute all the others in sequence? This way you'll only have to create a single sub-process yourself, and the individual scripts will share the same environment.
If, however, you would like to interleave script executions with Python code, then you would probably have to make each of the scripts echo its environment to stdout before exiting, parse that, and then pass it into the next script (subprocess.Popen() accepts the env parameter, which is a map.)
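A minimal sketch of that second idea, assuming a POSIX system: capture a script's environment from its output, parse it, and pass it to the next subprocess via the env parameter. The shell one-liners here are stand-ins for the hypothetical scripts.

```python
# Run "script one" (a stand-in one-liner) and capture the environment it
# ends up with by having it print `env` before exiting.
import subprocess

out = subprocess.check_output(
    ["sh", "-c", "export MY_VAR=hello; env"]
)

# Parse KEY=VALUE lines into a dict (assumes no multi-line values,
# which holds for a minimal sh environment).
env = dict(
    line.split("=", 1)
    for line in out.decode().splitlines()
    if "=" in line
)

# Pass the captured environment on to "script two" via env=.
out2 = subprocess.check_output(["sh", "-c", "echo $MY_VAR"], env=env)
print(out2.decode().strip())  # hello
```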
| 0 | 1 | 0 | 0 |
2015-04-14T07:08:00.000
| 2 | 1.2 | true | 29,621,193 | 1 | 0 | 0 | 2 |
I need to execute several shell scripts with python, some scripts would export environment parameters, so I need to execute them in the same process, otherwise, other scripts can't see the new environment parameters
in one word, I want to let the shell script change the environment of the python process
so I should not use subprocess, any idea how to realize it?
|
how to execute shell script in the same process in python
| 29,621,636 | 1 | 0 | 1,004 | 0 |
python,shell
|
No, you cannot run more than one program (bash, python) in the same process at the same time.
But you can run them in sequence using exec in bash or one of the exec commands in python, like os.execve. Several things survive the "exec boundary", one of which is the environment block. So in each bash script you exec the next, and finally exec your python.
You might also consider using an IPC mechanism like a named pipe to pass data between processes.
I respectfully suggest that you look at your design again. Why are you mixing bash and python? Is it just to reuse code? Even if you manage this, you will end up with a real mess. It is generally easier to stick with one language.
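The exec chain can be sketched in one line: the shell exports a variable and then replaces itself with python via exec, so the variable survives the exec boundary (DEMO_VAR is a made-up name):

```shell
# The exported variable crosses the exec boundary into the python process.
sh -c 'export DEMO_VAR=from_shell; exec python3 -c "import os; print(os.environ[\"DEMO_VAR\"])"'
# prints: from_shell
```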
| 0 | 1 | 0 | 0 |
2015-04-14T07:08:00.000
| 2 | 0.099668 | false | 29,621,193 | 1 | 0 | 0 | 2 |
I need to execute several shell scripts with python, some scripts would export environment parameters, so I need to execute them in the same process, otherwise, other scripts can't see the new environment parameters
in one word, I want to let the shell script change the environment of the python process
so I should not use subprocess, any idea how to realize it?
|
python3 not linked with idle
| 29,668,903 | 2 | 0 | 267 | 0 |
python-3.x,pandas,pip,python-idle
|
I asked some questions in a comment. However, starting Idle with python3 -m idlelib will start Idle with whatever python is started by python3. Since you say that python3 starts 3.4.3, the above should run Idle 3.4.3.
| 0 | 1 | 0 | 0 |
2015-04-14T08:16:00.000
| 1 | 1.2 | true | 29,622,322 | 1 | 0 | 0 | 1 |
I have python3.4.0 installed on my system.
Today, I want to install python3.4.3, so I download the source code and install it.
However, my idle is still python3.4.0
while when I type python3 in terminal, it shows python3.4.3.
I also have pandas installed in my old version; it can still be used in my IDLE (linked with 3.4.0) but not with python3.4.3.
My question is how I can just stick to python3.4.3 and make everything run in it.
|
Maximum Whoosh Index size?
| 29,650,121 | 0 | 1 | 677 | 0 |
python,indexing,whoosh
|
Is it possible that Whoosh overflows the RAM of your computer while loading the 27GB file?
| 0 | 1 | 0 | 1 |
2015-04-15T11:35:00.000
| 1 | 0 | false | 29,649,147 | 0 | 0 | 0 | 1 |
I'm using a 32-bit Ubuntu machine. I'm trying to create a Whoosh index of a 27GB file. But my system crashes after the index size reaches 3GB. Is there any size constraint on Whoosh index size? If not, then what could be the problem?
|
How to cleanly pass command-line parameters when test-running my Python script?
| 29,657,355 | 1 | 0 | 1,305 | 0 |
workflow,argparse,python-idle
|
With some editors you can define the 'execute' command,
For example with Geany, for Python files, F5 is python2.7 %f. That could be modified to something like python2.7 %f dummy parameters. But I use an attached terminal window and its line history more than F5-like commands.
I'm an Ipython user, so don't remember much about the IDLE configuration. In Ipython I usually use the %run magic, which is more like invoking the script from a shell than from an IDE. Ipython also has a better previous line history than the shell.
For larger scripts I like to put the guts of the code (classes, functions) in one file, and test code in the if __name__ block. The user interface is in another file that imports this core module.
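The dummy-parameter fallback the question describes can be sketched with argparse; dummy.txt is a made-up placeholder used only when no real arguments were given (e.g. a bare F5 run):

```python
# Required positional argument normally; fall back to a hypothetical dummy
# value when the argv list is empty.
import argparse

def parse(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument("input_file")                 # required parameter
    return parser.parse_args(argv or ["dummy.txt"])   # dummy.txt is made up

print(parse([]).input_file)            # dummy.txt
print(parse(["real.txt"]).input_file)  # real.txt
```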
| 0 | 1 | 0 | 0 |
2015-04-15T17:02:00.000
| 3 | 0.066568 | false | 29,656,381 | 1 | 0 | 0 | 2 |
So I'm writing Python with IDLE and my usual workflow is to have two windows (an editor and a console), and run my script with F5 to quickly test it.
My script has some non-optional command line parameters, and I have two conflicting desires:
If someone launches my script without passing any parameters, I'd like him to get an error telling him to do so (argparse does this well)
When I hit F5 in IDLE, I'd like to have my script run with dummy parameters (and I don't want to have more keystrokes than F5 to have this work, and I don't want to have a piece of code I have to remember to remove when I'm not debugging any more)
So far my solution has been that if I get no parameters, I look for a params.debug file (that's not under source control), and if so, take that as default params, but it's a bit ugly... so would there be a cleaner, more "standard" solution to this? Do other IDEs offer easier ways of doing this?
Other solutions I can think of: environment variables, having a separate "launcher" script that's the one taking the "official" parameters.
(I'm likely to try out another IDE anyway)
|
How to cleanly pass command-line parameters when test-running my Python script?
| 39,773,753 | 1 | 0 | 1,305 | 0 |
workflow,argparse,python-idle
|
Thanks for your question. I also searched for a way to do this. I found that Spyder, which is the IDE I use, has an option under Run/Configure to enter the command-line parameters for running the program. You can even configure different ones for the different editors you have open.
| 0 | 1 | 0 | 0 |
2015-04-15T17:02:00.000
| 3 | 0.066568 | false | 29,656,381 | 1 | 0 | 0 | 2 |
So I'm writing Python with IDLE and my usual workflow is to have two windows (an editor and a console), and run my script with F5 to quickly test it.
My script has some non-optional command line parameters, and I have two conflicting desires:
If someone launches my script without passing any parameters, I'd like him to get an error telling him to do so (argparse does this well)
When I hit F5 in IDLE, I'd like to have my script run with dummy parameters (and I don't want to have more keystrokes than F5 to have this work, and I don't want to have a piece of code I have to remember to remove when I'm not debugging any more)
So far my solution has been that if I get no parameters, I look for a params.debug file (that's not under source control), and if so, take that as default params, but it's a bit ugly... so would there be a cleaner, more "standard" solution to this? Do other IDEs offer easier ways of doing this?
Other solutions I can think of: environment variables, having a separate "launcher" script that's the one taking the "official" parameters.
(I'm likely to try out another IDE anyway)
|
How/where to store temp files and logs for a cloud app?
| 29,656,524 | -1 | 11 | 976 | 1 |
python,mysql,redis,cloud,storage
|
Store your logs in MySQL. Just make a table like this:
time     | source     | action
---------|------------|-------------
unixtime | somemodule | error/event
Your temporary storage should be enough for temporary files :)
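A runnable sketch of that table layout, with sqlite3 standing in for MySQL (the time/source/action columns follow the answer; the sample row values are illustrative):

```python
# In-memory log table: time (unix timestamp), source (module), action (event).
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE log (time INTEGER, source TEXT, action TEXT)")
db.execute(
    "INSERT INTO log VALUES (?, ?, ?)",
    (int(time.time()), "somemodule", "error/event"),
)
rows = db.execute("SELECT source, action FROM log").fetchall()
print(rows)  # [('somemodule', 'error/event')]
```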
| 0 | 1 | 0 | 0 |
2015-04-15T17:04:00.000
| 3 | -0.066568 | false | 29,656,422 | 0 | 0 | 0 | 1 |
I am working on a Python/MySQL cloud app with a fairly complex architecture. Operating this system (currently) generates temporary files (plain text, YAML) and log files and I had intended to store them on the filesystem.
However, our prospective cloud operator only provides a temporary, non-persistent filesystem to apps. This means that the initial approach with storing the temporary and log files won't work.
There must be a standard approach to solving this problem which I am not aware of. I don't want to use object storage like S3 because it would extend the current stack and add complexity. But I have the possibility to install an additional, dedicated app (if there is anything made for this purpose) on a different server with the same provider. The only limitation is that it would have to be in PHP, Python, MySQL.
The generic question: What is the standard approach to storing files when no persistent filesystem is available?
And for my specific case: Is there any solution using Python and/or MySQL which is simple and quick to implement? Is this a usecase for Redis?
|
Questions on Twisted Protocols and loopingCall
| 29,664,919 | 0 | 1 | 174 | 0 |
python,twisted
|
Things get garbage collected when there are no more references to them, so I can't say when objects in your program will be collected.
However, I can tell you about the references kept from Twisted.
A Protocol connected to a Transport will have a reference that goes globals→reactor→transport→protocol. When the transport is closed, the reference from the reactor to the transport is broken. The reactor only references the transport to deliver events to it, and since a disconnected transport has no events to deliver, the reactor can drop it. The protocol therefore no longer is referenced by the reactor. At that point, if no other globals or active stack variables reference it, it will be collected.
A LoopingCall is referenced by globals→reactor→DelayedCall (the object returned by callLater)→LoopingCall.__call__ bound method→LoopingCall. If the LoopingCall's f attribute (the callable it's calling) still references your Protocol, then yes, your Protocol object will continue to live in memory. But, since it no longer has a useful transport, there's not that much you can do with it.
| 0 | 1 | 0 | 0 |
2015-04-15T20:00:00.000
| 1 | 0 | false | 29,659,755 | 0 | 0 | 0 | 1 |
Suppose I have a python twisted app with the standard Factory and Protocol subclasses. My Protocol subclass connectionMade() method launches a loopingCall that runs (say) every 5 minutes. I have two questions:
Suppose the connection gets lost. Yes, I know that this will result in the connectionLost() method being called. But what happens to the protocol object itself? When does it stop existing? Does it get garbage collected right away?
What happens to the loopingCall in that protocol? If I don't explicitly cancel it, does that mean it keeps running forever and prevents the protocol from getting garbage collected?
|
Python imports in Hadoop
| 29,679,539 | 0 | 0 | 77 | 0 |
java,python,hadoop,import
|
Ok, I figured this one out by myself.
The problem the python process runs into is that the HDFS uses symlinks. Python, on the other hand, does not accept symlinks as valid files, so it will not import from them even when they are in the same directory.
Instead of adding each file to the Distributed Cache, you can add the directory to the cache. Then any calls to the directory go via the symlink, but calls to files resolve within the actual directory, allowing the python process to import libraries.
| 0 | 1 | 0 | 0 |
2015-04-15T20:40:00.000
| 1 | 0 | false | 29,660,487 | 0 | 0 | 0 | 1 |
Ok, I am writing a java based Hadoop MR task. Part of the task is calling an outside python script as a new Process, passing it information and reading it back the resut. I have done this a few times before without problems when not working with hadoop.
I can also call a single python script as a new process in hadoop when it does not import anything, or only things that are on the nodes' python install.
the current python script calls an import on another script which is usually just sitting in the same directory and that works fine when not running on hadoop.
In hadoop I have added both files to the distributed cache so I do not understand why the script could not import the other one.
|
Where to put a virtualenv directory on Mac OS X?
| 29,683,947 | 1 | 0 | 3,129 | 0 |
python,macos,bash,virtualenv
|
You can have as many as you like and put them where it's convenient. A common arrangement is to have a dedicated environment for each project; then if each project is in ~/projects/<project> you could have a virtualenv directory in each project's respective root directory. So ~/projects/foo/.env for the virtualenv for project foo, ~/projects/bar/.env for the one for bar, etc. (The use of .env is just a convention; again, you can name them any way you like.)
| 0 | 1 | 0 | 0 |
2015-04-16T18:56:00.000
| 2 | 0.099668 | false | 29,683,652 | 1 | 0 | 0 | 1 |
I am transferring a Python based development system from PC to Mac. I need to create a virtualenv / directory to store this system. Where is a good place to put the directory (somewhere easily accessible from a terminal window)? I am not so savvy on the Mac as the PC, although I could probably write a bash script to change directory and activate the virtualenv. I am running OS X Mountain Lion (v10.8) and I'm the only user on the system.
|
Shell: Prompt user to enter a directory path
| 29,691,432 | 1 | 1 | 1,873 | 0 |
python,linux,bash,shell,unix
|
You might be looking for the readlink command. You can use readlink -m "some/path" to convert a path to the canonical path format. It's not quite path.join but it does provide similar functionality.
Edit: As someone pointed out to me, readlink is actually more like os.path.realpath. It is also a GNU extension and not available on all *nix systems. I will leave my answer here in case it still helps in some way.
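For instance (GNU coreutils; with -m the path components need not exist):

```shell
# readlink -m canonicalizes a path, collapsing "." and ".." — closer to
# Python's os.path.realpath than to os.path.join.
readlink -m "/nonexistent-demo/x/../y"   # -> /nonexistent-demo/y
```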
| 0 | 1 | 0 | 0 |
2015-04-17T05:50:00.000
| 2 | 0.099668 | false | 29,691,344 | 0 | 0 | 0 | 1 |
I'm looking for a Bash equivalent of Python's os.path.join. I'm trying to prompt the user for a directory path, which then will be used (with the help of path join equivalent) to do other stuff.
|
Whats the best way to implement python TCP client?
| 29,695,958 | 1 | 0 | 78 | 0 |
python,multithreading,tcpclient
|
Calling for a best way or code examples is rather off topic, but this is too long to be a comment.
There are three general ways to build those terminal-emulator-like applications:
multiple processes - the way the good old Unix cu worked, with a fork
multiple threads - a variant of the above using lightweight threads instead of processes
using the select system call with multiplexed I/O.
Generally, the first two methods are considered more straightforward to code, with one thread (or process) processing the upward communication while the other processes the downward one. The third, while being trickier to code, is generally considered more efficient.
As Python supports multithreading, multiprocessing and the select call, you can choose any method, with a slight preference for multithreading over multiprocessing because threads are lighter than processes and I cannot see a reason to use processes here.
The following is just my opinion:
Unless you are writing a model for rewriting later in a lower-level language, I assume that performance is not the key issue, and my advice would be to use threads here.
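A hedged sketch of the two-thread approach: a background thread prints what the server sends while the main thread forwards input. A local echo server stands in for the real (unknown) server so the example is self-contained.

```python
# One thread reads from the socket; the main thread writes to it.
import socket
import threading

def reader(sock, received):
    # Background thread: consume lines from the server until it disconnects.
    for line in sock.makefile():
        received.append(line.strip())

def echo_server(srv):
    conn, _ = srv.accept()
    conn.sendall(conn.recv(1024))   # echo one message back, then hang up
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
srv.listen(1)
threading.Thread(target=echo_server, args=(srv,)).start()

sock = socket.create_connection(srv.getsockname())
received = []
t = threading.Thread(target=reader, args=(sock, received))
t.start()
sock.sendall(b"hello\n")            # in the real client this comes from stdin
t.join(2)                           # wait for the reader thread to finish
sock.close()
srv.close()
print(received)                     # ['hello']
```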
| 0 | 1 | 1 | 0 |
2015-04-17T08:41:00.000
| 1 | 0.197375 | false | 29,694,344 | 0 | 0 | 0 | 1 |
I need to write python script which performs several tasks:
read commands from console and send to server over tcp/ip
receive server response, process and make output to console.
What is the best way to create such a script? Do I have to create separate thread to listen to server response, while interacting with user in main thread? Are there any good examples?
|
python copying directory and reading text files Remotely
| 29,705,179 | 0 | 1 | 693 | 0 |
python,windows,file,wmi,remote-access
|
I've done some work with WMI before (though not from Python) and I would not try to use it for a project like this. As you said WMI tends to be obscure and my experience says such things are hard to support long-term.
I would either work at the Windows API level, or possibly design a service that performs the desired actions and access this service as needed. Of course, you will need to install this service on each machine you need to control. Both approaches have merit. The WinAPI approach pretty much guarantees you don't invent any new security holes and is simpler initially. The service approach should make the application faster and require less network traffic. I am sure you can think of others easily.
You still have to have the necessary permissions, network ports, etc. regardless of the approach. E.g., WMI is usually blocked by firewalls and you still run as some NT process.
Sorry, not really an answer as such -- meant as a long comment.
ADDED
Re: API programming — though you have no Windows API experience, I expect you will find it familiar for tasks such as you describe; i.e., reading and writing files and scanning directories are nothing unique to Windows. You only need to learn about the parts of the API that interest you.
Once you create the appropriate security contexts and start your client process, there is nothing service-oriented involved; i.e., you can simply open and close files, etc., ignoring the fact that the files are remote, other than the server name being included in the UNC name of the file/folder location.
| 0 | 1 | 0 | 1 |
2015-04-17T16:30:00.000
| 2 | 0 | false | 29,704,766 | 0 | 0 | 0 | 1 |
I'm about to start working on a project where a Python script is able to remote into a Windows server and read a bunch of text files in a certain directory. I was planning on using a module called WMI, as that is the only way I have been able to successfully access a Windows server remotely using Python, but upon further research I'm not sure I am going to be using this module.
The only problem is that, these text files are constantly updating about every 2 seconds and I'm afraid that the script will crash if it comes into an MutEx error where it tries to open the file while it is being rewritten. The only thing I can think of is creating a new directory, copying all the files (via script) into this directory in the state that they are in and reading them from there; and just constantly overwriting these ones with the new ones once it finishes checking all of the old ones. Unfortunately I don't know how to execute this correctly, or efficiently.
How can I go about doing this? Which python module would be best for this execution?
|
Which Python Environment Does Pydev Use?
| 29,908,351 | 0 | 0 | 45 | 0 |
python,eclipse,pydev
|
It'll be either the default interpreter (the top one at Preferences > PyDev > Interpreters) or the one that's configured for the project (if you select the project > Alt+Enter > PyDev - Interpreter/Grammar, you can leave it as 'default' or select a different one).
After you've done a launch, you can go to run > run configurations and change it for the created launch.
| 0 | 1 | 0 | 0 |
2015-04-18T11:14:00.000
| 3 | 0 | false | 29,716,502 | 1 | 0 | 0 | 1 |
When I press CTRL+F9 to run tests in Pydev, which interpreter does it use?
For example, I have Python 2.7 and Python 3.3 installed.
When I run scripts in the console by pressing CTRL+Shift+Enter, a window pops up that lets me choose interpreter. But this does not happen when running tests.
|
How to disable creation of $VIRTUAL_ENV/bin/python for python3 virtualenvs?
| 56,749,854 | 0 | 0 | 160 | 0 |
python-3.x,virtualenv
|
The purpose of the virtual env is to allow the use of the required version as the default. That's why you see the symbolic link created.
If you want both (i.e. no symbolic link python -> /usr/bin/python3), then do not create a virtual env
| 0 | 1 | 0 | 0 |
2015-04-19T09:34:00.000
| 1 | 0 | false | 29,728,131 | 1 | 0 | 0 | 1 |
I develop Python3 on GNU/Linux, but my system has two interpreters installed:
python2 -> /usr/bin/python
python3 -> /usr/bin/python3
(AFAIK: This is normal on a Linux box, other installed applications frequently depend upon different versions of Python: either 2 or 3.)
When I create new virtual environments for Python3, I notice the $VIRTUAL_ENV/bin folder has (at least) two python binaries:
$VIRTUAL_ENV/bin/python3 -> copied from /usr/bin/python3
$VIRTUAL_ENV/bin/python-> symlink to $VIRTUAL_ENV/bin/python3
I don't want the python symlink, as it hides my Python2 interpreter in /usr/bin/python.
Is there a way to disable the creation of symlink python in the new Python3 virtual environment?
(For the time being, I run virtualenv, then manually delete $VIRTUAL_ENV/bin/python myself.)
|
Multiple versions of Python in Ubuntu
| 29,730,474 | 1 | 1 | 14,313 | 0 |
python,python-2.7,ubuntu,ubuntu-14.04
|
Version 2.x and 3.x happily live together - that is no problem.
But the versions in /usr/bin and /usr/local/bin will give you problems:
The home-compiled version always installs in /usr/local/bin unless you specify the prefix at compile time. System-installed versions normally install in /usr/bin. If you call python3, you will only execute the first one found - probably /usr/local/bin/python3. Test this with which python3
The real problem is that you now have two python3.x/site-packages directories (one in /usr/lib or /usr/lib64, and the other in /usr/local/lib[64]), and installing new modules will update only one of them (unless you install them twice).
I'd suggest that you uninstall the self-compiled one (3.4.0), using make uninstall in the source directory.
To be clear: I believe there is no problem in having a 2.7 in /usr and 3.x in /usr/local.
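A quick way to check which binary wins on your PATH and whether two copies exist (these paths are typical locations, not guaranteed):

```shell
# The first match on PATH is what a bare `python3` runs.
command -v python3
# See whether both a system and a locally built copy exist (either may be absent).
ls -l /usr/bin/python3 /usr/local/bin/python3 2>/dev/null || true
```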
| 0 | 1 | 0 | 0 |
2015-04-19T13:13:00.000
| 2 | 0.099668 | false | 29,730,330 | 1 | 0 | 0 | 2 |
In Ubuntu, I used to have (two hours ago) three versions of python :
2.7 and 3.4.0 installed by default in 'usr/bin'
3.4.3 that I built manually from the official source-code, which I found was in 'usr/local/bin'
(that means, at a certain point I was able to run the three versions of python at the same time)
But now, the 3.4.0 version has become a 3.4.3, now i have a 2.7 and two 3.4.3 (one in '/usr/bin' and the other in '/usr/local/bin')
This happened while i was experimenting with PIP. So I'm not able to retrace what I actually did.
My questions are :
Why building the 3.4.3 didn't upgrade the existing 3.4.0, but
instead it made a new installation in '/usr/local/bin' ?
What do you think actually happened that upgraded the 3.4.0 to a 3.4.3 ?
Is it 'okay' to have two installations of the same version (3.4.3) of python in my system ?
|
Multiple versions of Python in Ubuntu
| 29,730,405 | 0 | 1 | 14,313 | 0 |
python,python-2.7,ubuntu,ubuntu-14.04
|
The version 2.7 and 3.4 are your distribution official pythons. To upgrade their versions, Ubuntu should release new packages for them.
When you install a new python by yourself it goes to /usr/local/bin.
I don't recommend having two similar pythons on your system; it will probably be difficult to know which of the site-packages directories a package is installed into. You would have to be careful with pip too.
I suggest you remove the pythons installed with apt-get and keep yours in /usr/local/bin.
| 0 | 1 | 0 | 0 |
2015-04-19T13:13:00.000
| 2 | 0 | false | 29,730,330 | 1 | 0 | 0 | 2 |
In Ubuntu, I used to have (two hours ago) three versions of python :
2.7 and 3.4.0 installed by default in 'usr/bin'
3.4.3 that I built manually from the official source-code, which I found was in 'usr/local/bin'
(that means, at a certain point I was able to run the three versions of python at the same time)
But now, the 3.4.0 version has become a 3.4.3, now i have a 2.7 and two 3.4.3 (one in '/usr/bin' and the other in '/usr/local/bin')
This happened while i was experimenting with PIP. So I'm not able to retrace what I actually did.
My questions are :
Why building the 3.4.3 didn't upgrade the existing 3.4.0, but
instead it made a new installation in '/usr/local/bin' ?
What do you think actually happened that upgraded the 3.4.0 to a 3.4.3 ?
Is it 'okay' to have two installations of the same version (3.4.3) of python in my system ?
|
svn 0.3.33 module : Windows Error: [Error 2] : The system cannot find the file specified
| 29,740,532 | 0 | 0 | 1,099 | 0 |
python,svn
|
You need to recheck the files on your laptop and make sure:
The files actually exist / are the correct files.
They are in the directory you are trying to access them from.
The file names are spelled correctly.
If the system could not find the file specified, it means something is missing/misplaced, or the file you are trying to access is not spelled the same as the one you actually want. It's most likely a simple error that could be found by just looking over everything and making sure it's all the same as your desktop.
| 0 | 1 | 0 | 0 |
2015-04-20T05:55:00.000
| 2 | 0 | false | 29,740,212 | 0 | 0 | 0 | 2 |
I have checked out a folder from SVN to my desktop. Actually I need to extract the information like SVN revision no, URL and status for the specific file in the local working copy. Here is the line of code which I am using to extract those info.
file = svn.local.LocalClient(filePath[i][j])
fileInfo = file.info()
This works perfectly fine in my desktop. But the same thing when I tried to do it in my laptop it throws the following error
Traceback (most recent call last):
File "", line 1, in
File "C:\Python27\lib\site-packages\svn-0.3.22-py2.7.egg\svn\common.py", line 134, in export
self.run_command('export', [self.url_or_path, path])
File "C:\Python27\lib\site-packages\svn-0.3.22-py2.7.egg\svn\common.py", line 29, in run_command
stderr=subprocess.PIPE)
File "C:\Python27\lib\subprocess.py", line 711, in __init__
errread, errwrite)
File "C:\Python27\lib\subprocess.py", line 948, in _execute_child
startupinfo)
WindowsError: [Error 2] The system cannot find the file specified
Can anyone pls help me what is wrong? I have installed all the packages which I installed in my desktop. But don't know what is the problem exactly.
Thanks
|
svn 0.3.33 module : Windows Error: [Error 2] : The system cannot find the file specified
| 29,762,407 | 0 | 0 | 1,099 | 0 |
python,svn
|
I got the solution for this. Actually the problem was with the subversion command-line client version. The client version needs to be higher than the TortoiseSVN server version, whereas mine was a lower version, which created this problem. Now it works fine :)
| 0 | 1 | 0 | 0 |
2015-04-20T05:55:00.000
| 2 | 0 | false | 29,740,212 | 0 | 0 | 0 | 2 |
I have checked out a folder from SVN to my desktop. Actually I need to extract the information like SVN revision no, URL and status for the specific file in the local working copy. Here is the line of code which I am using to extract those info.
file = svn.local.LocalClient(filePath[i][j])
fileInfo = file.info()
This works perfectly fine in my desktop. But the same thing when I tried to do it in my laptop it throws the following error
Traceback (most recent call last):
File "", line 1, in
File "C:\Python27\lib\site-packages\svn-0.3.22-py2.7.egg\svn\common.py", line 134, in export
self.run_command('export', [self.url_or_path, path])
File "C:\Python27\lib\site-packages\svn-0.3.22-py2.7.egg\svn\common.py", line 29, in run_command
stderr=subprocess.PIPE)
File "C:\Python27\lib\subprocess.py", line 711, in __init__
errread, errwrite)
File "C:\Python27\lib\subprocess.py", line 948, in _execute_child
startupinfo)
WindowsError: [Error 2] The system cannot find the file specified
Can anyone pls help me what is wrong? I have installed all the packages which I installed in my desktop. But don't know what is the problem exactly.
Thanks
|
Python change Windows Path (refresh Shell)
| 29,747,013 | 2 | 1 | 2,683 | 0 |
python,windows,variables
|
Each process has its own environment. When a process starts another process, the new process gets a (eventually modified) copy of its parent environment.
The rule is :
a process can modify its own environment - this modifications will be inherited by child processes started later
a process can modify (at start time) the environment of its child processes
a process can never modify its parent's environment (*)
So when you start a Python script from a cmd.exe :
the script can change its own environment and those changes will be visible by all subsequent commands of the script and all its children
the script cannot change the environment for its parent cmd.exe nor for subsequent commands of that cmd.exe
If you need to execute other batch commands after changing the environment, you will have to start a new cmd.exe for the python script and have this new shell execute the other commands, or directly execute a .bat file (both via the subprocess module).
setx is a completely different thing: it updates the default environment that is given to processes started from Windows Explorer (including cmd.exe). That environment is stored permanently in the Windows registry, and every change to it is broadcast to all active processes ... that monitor it. Any Windows GUI application can process it (and Explorer does - that's how every Explorer window knows immediately what the current default environment is), but console applications normally do not.
(*) well, it used to be possible for .com executables in the old MS-DOS system and was even documented. It should be possible on recent Windows systems through a WriteProcessMemory API call but is absolutely undocumented (thanks to eryksun for noticing)
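The first rule above can be demonstrated in a few lines: a process's change to its own environment is inherited by a child it starts later, but never reaches its parent. DEMO_VAR is a made-up name.

```python
# Modify our own environment, then show that a child process inherits it.
import os
import subprocess
import sys

os.environ["DEMO_VAR"] = "hello"
out = subprocess.check_output(
    [sys.executable, "-c", "import os; print(os.environ['DEMO_VAR'])"]
)
print(out.decode().strip())  # hello — but the shell that launched us never sees it
```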
| 0 | 1 | 0 | 0 |
2015-04-20T10:09:00.000
| 2 | 0.197375 | false | 29,744,940 | 1 | 0 | 0 | 2 |
i have a Python script here and it is called from the Windows CMD.
It is executing some commands and also changing the Windows environment variables. Now after i changed them with the command "setx". I have to restart another Shell so the new variables are loaded into it.
Is it possible that the main shell from which i called my script can update the variables itself ?
Or is it possible to start another shell with the new variables and the script will continue in the new opened shell ?
Thanks
|
Python change Windows Path (refresh Shell)
| 29,746,276 | 1 | 1 | 2,683 | 0 |
python,windows,variables
|
You can't change the value of an environment variable.
Allow me to clarify: environment variables represent the variables set on the environment of a process when that process starts.
From the point-of-view of the new process, its environment is unchanging. Changing a variable on the environment (the process' parent) will not change the value of the environment variable seen by the process. Changing a variable on the process will not make it's environment see the change.
So, what can you change?
Variables set on your process. This is achieved in python by changing os.environ, or using set on the shell. Any changes will be seen by your process and any children you make (os.system, subprocess, most commands on the shell).
Variables set by the system (what SetX does). These changes will be seen by any new process launched directly by the system (Explorer, in Windows) after you change them.
| 0 | 1 | 0 | 0 |
2015-04-20T10:09:00.000
| 2 | 1.2 | true | 29,744,940 | 1 | 0 | 0 | 2 |
i have a Python script here and it is called from the Windows CMD.
It is executing some commands and also changing the Windows environment variables. Now after i changed them with the command "setx". I have to restart another Shell so the new variables are loaded into it.
Is it possible that the main shell from which i called my script can update the variables itself ?
Or is it possible to start another shell with the new variables and the script will continue in the new opened shell ?
Thanks
|
Python: Open and read remote text files on Windows
| 29,755,645 | 1 | 1 | 2,454 | 0 |
python,windows,text-files,remote-access,readfile
|
You can use PowerShell for this.
First, open PowerShell with admin privileges.
Enter this command
Enable-PSRemoting -Force
Enter this command as well, on both computers, so they trust each other.
Set-Item wsman:\localhost\client\trustedhosts *
Then restart the WinRM service on both PCs with this command.
Restart-Service WinRM
Test it with this command
Test-WsMan computername
To execute a remote command:
Invoke-Command -ComputerName COMPUTER -ScriptBlock { COMMAND }
-credential USERNAME
To start a remote session:
Enter-PSSession -ComputerName COMPUTER -Credential USER
| 0 | 1 | 0 | 1 |
2015-04-20T18:09:00.000
| 1 | 0.197375 | false | 29,755,274 | 0 | 0 | 0 | 1 |
Im trying to find a module that will allow me to run a script locally that will:
1. Open a text file on a remote Windows Machine
2. Read the lines of the text file
3. Store the lines in a variable and be able to process the data.
This is absolutely no problem on a Linux machine via SSH, but I have no clue what module to use for a remote Windows machine. I can connect no problem and run commands on a remote Windows machine via WMI,but WMI does not have a way to read/write to files. Are there any modules out there that I can install to achieve this process?
|
Easy install is not recognized
| 29,761,891 | 0 | 0 | 913 | 0 |
python-2.7,easy-install
|
I figured it out: I needed to cd to c:\python27\scripts, then use the pip install tmx command. Nothing I read anywhere suggested I had to run cmd from the directory that pip was in.
| 0 | 1 | 0 | 0 |
2015-04-21T00:54:00.000
| 2 | 1.2 | true | 29,761,006 | 1 | 0 | 0 | 2 |
I installed easy install it is in my scripts folder. I set my path variable. When I type python in cmd it works, but no matter what I try if I type easy_install it says it is not recognized. I am trying to install pip and then pytmx. is there an easier way to install pytmx? or can someone please walk me through this so I can get this working.
new variable PY_HOME value C:\Python27
path variable %PY_HOME%;%PY_HOME%\Lib;%PY_HOME%\DLLs;%PY_HOME%\Lib\lib-tk;C:\Python27\scripts
python version 2.7.8
windows 7 professional
Update uninstalled all versions of python reinstalled version 2.7.9
now pip is not a recognized command python is still recognized and give me a version number. I still cannot install pytmx.
|
Easy install is not recognized
| 29,761,326 | 0 | 0 | 913 | 0 |
python-2.7,easy-install
|
If you install Python 2.7.9, pip is included.
Then just pip install pytmx
| 0 | 1 | 0 | 0 |
2015-04-21T00:54:00.000
| 2 | 0 | false | 29,761,006 | 1 | 0 | 0 | 2 |
I installed easy install it is in my scripts folder. I set my path variable. When I type python in cmd it works, but no matter what I try if I type easy_install it says it is not recognized. I am trying to install pip and then pytmx. is there an easier way to install pytmx? or can someone please walk me through this so I can get this working.
new variable PY_HOME value C:\Python27
path variable %PY_HOME%;%PY_HOME%\Lib;%PY_HOME%\DLLs;%PY_HOME%\Lib\lib-tk;C:\Python27\scripts
python version 2.7.8
windows 7 professional
Update uninstalled all versions of python reinstalled version 2.7.9
now pip is not a recognized command python is still recognized and give me a version number. I still cannot install pytmx.
|
How to uninstall and/or manage multiple versions of python in OS X 10.10.3
| 29,841,327 | 3 | 4 | 9,789 | 0 |
python,spyder,osx-yosemite
|
(Spyder dev here) There is no simple way to do what you ask for, at least for the Python version that comes with Spyder.
I imagine you downloaded and installed our DMG package. That package comes with its own Python version as part of the application (along with several important scientific packages), so it can't be removed because that would imply to remove Spyder itself :-)
I don't know how you installed IDL(E?), so I can't advise you on how to remove it.
| 0 | 1 | 0 | 0 |
2015-04-21T02:17:00.000
| 1 | 0.53705 | false | 29,761,728 | 1 | 0 | 0 | 1 |
I have installed the Python IDE Spyder. For me it's a great development environment.
Some how in this process I have managed to install three versions of Python on my system.These can be located as following:
Version 2.7.6 from the OS X Terminal;
Version 2.7.8 from the Spyder Console; and
Version 2.7.9rc1 from an IDL window.
The problem I have is (I think) that the multiple versions are preventing Spyder from working correctly.
So how do I confirm that 2.7.6 is the latest version supported by Apple and is there a simple way ('silver bullet') to remove other versions from my system.
I hope this is the correct forum for this question. If not I would appreciate suggestions where I could go for help.
I want to keep my life simple and to develop python software in the Spyder IDE. I am not an OS X guru and I really don't want to get into a heavy duty command line action. To that end I just want to delete/uninstall the 'unofficial versions' of Python. Surely there must be an easy way to do this - perhaps 'pip uninstall Python-2.7.9rc1' or some such. The problem is that I am hesitant to try this due to the fear that it will crash my system.
Help on this would be greatly appreciated.
|
How can I check whether the script started with cron job is completed?
| 29,769,403 | 0 | 0 | 45 | 0 |
python,cron,cron-task
|
You can use a lock file to indicate that the first script is still running.
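A hedged sketch of the lock-file approach, using an atomic O_CREAT | O_EXCL open so two overlapping cron runs cannot both acquire the lock (the file name is just an example):

```python
import os
import sys
import tempfile

LOCK_PATH = os.path.join(tempfile.gettempdir(), "first_script.lock")  # example

try:
    # O_CREAT | O_EXCL fails atomically if the lock file already exists.
    fd = os.open(LOCK_PATH, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
except FileExistsError:
    sys.exit("previous run still in progress, exiting")

try:
    pass  # ... the actual long-running work goes here ...
finally:
    os.close(fd)
    os.remove(LOCK_PATH)
```

The second script can then simply skip (or delay) its run while the lock file exists.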
| 0 | 1 | 0 | 1 |
2015-04-21T09:43:00.000
| 2 | 0 | false | 29,768,499 | 0 | 0 | 0 | 1 |
I start a script and I want to start second one immediately after the first one is completed successfully?
The problem here is that this script can take 10min or 10hours according to specific cases and I do not want to fix the start of the second script.
Also, I am using python to develop the script, so if you can provide me a solution with python control on the cron it will be OK.
Thank you,
|
Can't install python selenium on Ubuntu 14.10/15.04(pre-release)
| 29,772,531 | 0 | 0 | 361 | 0 |
python,ubuntu,selenium
|
Solved upgrading six
pip install --upgrade six
| 0 | 1 | 1 | 0 |
2015-04-21T12:41:00.000
| 1 | 1.2 | true | 29,772,530 | 0 | 0 | 0 | 1 |
I'm trying to install selenium library for python on my Ubuntu machine using pip installer.
I receive the following error:
pip install selenium
Exception: Traceback (most recent call last): File
"/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 122, in
main
status = self.run(options, args) File "/usr/lib/python2.7/dist-packages/pip/commands/install.py", line 304,
in run
requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle) File
"/usr/lib/python2.7/dist-packages/pip/req.py", line 1230, in
prepare_files
req_to_install.run_egg_info() File "/usr/lib/python2.7/dist-packages/pip/req.py", line 293, in
run_egg_info
logger.notify('Running setup.py (path:%s) egg_info for package %s' % (self.setup_py, self.name)) File
"/usr/lib/python2.7/dist-packages/pip/req.py", line 285, in setup_py
if six.PY2 and isinstance(setup_py, six.text_type): AttributeError: 'module' object has no attribute 'PY2'
I am currently using Python 2.7.9
python --version
Python 2.7.9
|
How to open a shortcut with windows command line and parameters
| 29,775,325 | 0 | 1 | 4,374 | 0 |
python,windows,command-line
|
Change the shortcut target to "cmd filename" (i.e. add "cmd" before the target)
| 0 | 1 | 0 | 0 |
2015-04-21T14:30:00.000
| 2 | 0 | false | 29,775,174 | 1 | 0 | 0 | 1 |
I made a shortcut of a python script on my desktop.
Now I want to open this with the command line and parameters.
If I open the properties of the shortcut, I can add the parameters, but I can't force to be opened with the command line.
The default program for these files is notepad++, but if I change it to "command line" and double click it, then just the command line opens with the respective path given in the shortcut, but not executing the file.
What do I need to do?
|
How to attach to PyCharm debugger when executing python script from bash?
| 37,883,146 | 1 | 4 | 1,147 | 0 |
python,bash,debugging,pycharm,pdb
|
You can attach the debugger to a python process launched from terminal:
Use Menu Tools --> Attach to process then select python process to debug.
If you want to debug a file installed in site-packages you may need to open the file from its original location.
You can to pause the program manually from debugger and inspect the suspended Thread to find your source file.
| 0 | 1 | 0 | 0 |
2015-04-22T00:24:00.000
| 1 | 0.197375 | false | 29,785,534 | 1 | 0 | 0 | 1 |
I know how to set-up run configurations to pass parameters to a specific python script. There are several entry points, I don't want a run configuration for each one do I? What I want to do instead is launch a python script from a command line shell script and be able to attach the PyCharm debugger to the python script that is executed and have it stop at break points. I've tried to use a pre-launch condition of a utility python script that will sleep for 10 seconds so I can attempt to "attach to process" of the python script. That didn't work. I tried to import pdb and settrace to see if that would stop it for attaching to the process, but that looks to be command line debugging specific only. Any clues would be appreciated.
Thanks!
|
Why does esky create 2 executables?
| 30,921,894 | 1 | 1 | 54 | 0 |
python,esky
|
Esky uses a bootstrapping mechanism that keeps the app safe in the face of failed or partial updates.
The top level executable is the one you should be running, it does all the business of managing what version to run. Once it has decided what to run it will open up the exe for the correct version.
| 0 | 1 | 0 | 0 |
2015-04-22T10:13:00.000
| 1 | 0.197375 | false | 29,794,354 | 0 | 0 | 0 | 1 |
esky 0.9.8 creates 2 executables of my application.
There is an inner executable that weights less then the outer executable.
I would like to know if esky is supposed to create 2 executables and if there are any drawbacks or advantages in creating 2 executables.
I would also like to know which executable should I be calling when I want to run my application.
|
Installing to virtual env doesn't work
| 29,906,180 | 0 | 0 | 768 | 0 |
python,pip,virtualenv,virtualenvwrapper
|
It seems that by re-aliasing pip, starting over, and using --force-reinstall to install everything again (including virtualenv), it worked.
| 0 | 1 | 0 | 0 |
2015-04-22T16:44:00.000
| 1 | 1.2 | true | 29,803,949 | 1 | 0 | 0 | 1 |
I install to virtualenv, yet it still runs the versions from C:\Python27\site-packages or C:\Python34\site-packages. When I try to install with pip in my venv I get already installed and the location is the global site-packages.
Any idea why that could be??
Also my virtualenv wrapper commands work but when i do workon X it doesn't activate the venv.
OS is win7. But the problem occurred on powershell and git bash.
Thanks
|
Will using memcache reduce my instance memory?
| 29,806,800 | 3 | 0 | 130 | 1 |
python-2.7,google-app-engine,memcached
|
Moving objects to and from Memcache will have no impact on your memory unless you destroy these objects in your Java code or empty collections.
A bigger problem is that memcache entities are limited to 1MB, and memcache is not guaranteed. The first of these limitations means that you cannot push very large objects into Memcache.
The second limitation means that you cannot easily replace, for example, a HashMap with memcache - it's impossible to tell if getValue() returns null because an object is not present or because it was bumped out of memcache. So you will have to make an extra call each time to the datastore to see if an object is really not present.
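That ambiguity can be worked around with a sentinel value; this Python sketch uses a plain dict as a stand-in for memcache, and cached_get/loader are hypothetical names:

```python
_MISSING = object()  # sentinel: distinguishes "not cached" from a cached None

def cached_get(cache, key, loader):
    """Return cache[key], falling back to loader() on a miss or eviction."""
    value = cache.get(key, _MISSING)
    if value is _MISSING:
        value = loader(key)   # e.g. a datastore fetch; may legitimately be None
        cache[key] = value
    return value

cache = {}
print(cached_get(cache, "k", lambda k: None))  # None, fetched via loader
print(cached_get(cache, "k", lambda k: None))  # None, served from the cache
```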
| 0 | 1 | 0 | 0 |
2015-04-22T18:48:00.000
| 1 | 1.2 | true | 29,806,384 | 0 | 0 | 1 | 1 |
I'm currently running into soft memory errors on my Google App Engine app because of high memory usage. A number of large objects are driving memory usage sky high.
I thought perhaps if I set and recalled them from memcache maybe that might reduce overall memory usage. Reading through the docs this doesn't seem to be the case, and that the benefit of memcache is to reduce HRD queries.
Does memcache impact overall memory positively or negatively?
Edit: I know I can upgrade the instance class to F2 but I'm trying to see if I can remain on the least expensive while reducing memory.
|
Possible to Kill Processes using WMI Python
| 29,809,058 | 1 | 2 | 2,587 | 0 |
python,windows,process,wmi,remote-access
|
I figured out the answer in case anybody else runs into a similar issue; you actually don't even need WMI — it can be run directly from the command prompt.
If you are in the same network you can issue a command via the command prompt with the following format:
taskkill /s [Computer name or IP] /u [USER or DOMAIN\USER] /p Password /im [image name of the process to kill, e.g. notepad.exe] (use /pid instead if you have a numeric process ID)
This will take a few moments but will eventually kill the process that is running.
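Invoked from Python, the taskkill call might be wrapped like this sketch (all argument values are placeholders; note that taskkill's /im switch matches by image name, while /pid takes a numeric process ID):

```python
import subprocess

def build_taskkill_cmd(host, user, password, image):
    # /im kills by image name (e.g. notepad.exe); use /pid with a numeric PID.
    return ["taskkill", "/s", host, "/u", user, "/p", password, "/im", image]

def remote_kill(host, user, password, image):
    """Run the remote taskkill and return the CompletedProcess."""
    return subprocess.run(build_taskkill_cmd(host, user, password, image),
                          capture_output=True, text=True)
```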
| 0 | 1 | 0 | 0 |
2015-04-22T19:45:00.000
| 3 | 1.2 | true | 29,807,381 | 1 | 0 | 0 | 1 |
I know it is possible to create a process on a remote windows machine using the WMI module but I wish to know if the same can be said for ending a process. I havent been able to find a thread or any documentation for this so please if you can help me It would be greatly appreciated.
|
Do I need mysql-client for PHP/Python etc interactions?
| 29,814,988 | 0 | 1 | 35 | 0 |
php,python,mysql,linux,webserver
|
If you expect code running on the production server to connect to a mysql db, then yes.
| 0 | 1 | 0 | 0 |
2015-04-23T02:07:00.000
| 1 | 0 | false | 29,812,355 | 0 | 0 | 0 | 1 |
I am setting up a production server for the first time and would like to make sure I only have what I need for security prepress.
By "interactions" since I'm also new to programming, I think I mean "API calls".
Do I need mysql-client on a Linux (Debian) server to be able to 'talk' to mysql with any programming language? As I think there isn't any point installing the client on the production server if I can remotely send commands from mysql-client on my Mac.
|
Recommended way to build cross platform desktop application using linux development machine
| 29,815,298 | 1 | 2 | 1,133 | 0 |
python,linux,windows
|
What you are looking for is a GUI toolkit with bindings to Python. Tkinter is the de facto standard for Python GUIs and is cross-platform. Qt is also a popular choice, but its license is more restrictive than Tkinter's; on the other hand, it will make it easier to transition into C++ programming with Qt down the road if that is something you may want to do. The choice is up to you.
| 0 | 1 | 0 | 0 |
2015-04-23T04:57:00.000
| 2 | 0.099668 | false | 29,814,020 | 0 | 0 | 1 | 1 |
I am pretty familiar with building web based apps using python django on my linux machine. But when I decided to try my hand at building desktop applications that can run on windows/linux I didn't know where to begin.
I know I can surely build windows desktop application on windows machine. But I am pretty comfortable with linux and don't want to get out of that comfort zone. Can anyone guide me as to what tools can I begin with to develop a simple windows desktop application. I would target windows 7 for starters.
Any guidance is hugely appreciated.
|
I can't install eyeD3 0.7.5 into Python in windows
| 29,851,171 | 0 | 2 | 641 | 0 |
python,python-2.7,pip,anaconda,eyed3
|
The problem is that this file is only written for Python 2 but you are using Python 3. You should use Anaconda (vs. Anaconda3), or create a Python 2 environment with conda with conda create -n py2 anaconda python=2 and activate it with activate py2.
| 0 | 1 | 0 | 0 |
2015-04-23T14:19:00.000
| 1 | 0 | false | 29,826,237 | 1 | 0 | 0 | 1 |
could you help me with that. I can't manage to install this plugin. I tried:
1) install it through pip
2) through setup.py in win console
3) through anaconda3 but still no.
4) I searched about it in web and here, but insructions are made to older versions.
5) and also through the installation page of eyeD3.
Could you guide me how should I do this? Maybe I'm doing something wrong. For first: should I use for this Python 2.7.9 or can it be Anaconda3
|
plain Password is logged via my python script
| 29,938,376 | 0 | 2 | 446 | 0 |
python,linux,passwords
|
I figured out the best way is to disable it via the sudoers configuration:
Cmnd_Alias SCRIPT =
Defaults!SCRIPT !syslog
The above lines in the sudoers file should prevent the logging to syslog.
| 0 | 1 | 0 | 1 |
2015-04-24T09:28:00.000
| 4 | 1.2 | true | 29,843,710 | 0 | 0 | 0 | 1 |
I have a sample python script: sample.py. The script takes the argument as the user name and password to connect to some remote server. When I run the script sample.py --username --password , the password is being logged in linux messages files. I understand this is a linux behavior, but wondering if we can do anything within my script to avoid this logging. One way I can think is to provide password in an interactive way. Any other suggestions?
|
App engine Datastore cache or memcache
| 29,846,381 | 1 | 0 | 290 | 0 |
python,google-app-engine
|
The 'Datastore caching' you refer to is implemented using memcache under the hood anyway, so you won't gain from additional explicit caching of these entities.
Python's ndb API and Java's Objectify both provide memcache-based automated caching for exactly this scenario. Of course, you can still use memcache independently for additional application caching.
| 0 | 1 | 0 | 0 |
2015-04-24T10:56:00.000
| 1 | 1.2 | true | 29,845,685 | 0 | 0 | 1 | 1 |
I have read that the datastore read request in the App engine get cached and subsequent reads performed on the same entity are fast.
So if i read an entity from datastore, are there any tangible benefits of storing the entity in the memcache explicitly for later fetches? Or would the datastore caching serve with sufficient efficiency?
|
GAE: how to get current server IP?
| 29,846,170 | 0 | 0 | 157 | 0 |
python,google-app-engine
|
I'm not entirely clear on what you're looking for, but you can retrieve that type of information from the WSGI environment variables. The method of retrieving them varies with WSGI servers, and the number of variables made available to your application depends on the web server configuration.
That being said, getting client IP address is a common task and there is likely a method on the request object of the web framework you are using.
What framework are you using?
| 0 | 1 | 0 | 0 |
2015-04-24T11:08:00.000
| 1 | 0 | false | 29,845,940 | 0 | 0 | 1 | 1 |
I'm hosting my app on Google App Engine. Is there any posibility to get server IP of my app for current request?
More info:
GAE has a specific IP addressing. All http requests go to my 3-level domain, and IP of this domain isn't fixed, it changes rather frequently and can be different on different computers at the same moment. Can I somehow find out, what IP address client is requesting now?
Thank you!
|
Nginx/Uwsgi log showing duplicate requests
| 29,899,664 | 0 | 1 | 602 | 0 |
python,flask,uwsgi
|
Sorry false alarm. This was my Devops incorrectly pinging my actual application route for heartbeat. Sorry for the confusion.
| 0 | 1 | 0 | 0 |
2015-04-24T14:49:00.000
| 1 | 1.2 | true | 29,850,613 | 0 | 0 | 1 | 1 |
I'm running a flask application using nginx and uwsgi and I noticed when I tail the logs for uwsgi it looks like its just constantly polling my app when I'm doing nothing. It also seems like it's cycling through the cores on my machine with each request so I see this in the logs.
[pid: 27182|app: 0|req: 557/784] {26 vars in 254 bytes} [09:33:38 2015] GET / => generated 1337 bytes in 11 msecs ( 200) 3 headers in 238 bytes (1 switches on core 0)
[pid: 27182|app: 0|req: 558/785] {26 vars in 254 bytes} [09:33:42 2015] GET / => generated 1337 bytes in 11 msecs ( 200) 3 headers in 238 bytes (1 switches on core 1)
[pid: 27182|app: 0|req: 559/786] {26 vars in 254 bytes} [09:33:43 2015] GET / => generated 1337 bytes in 11 msecs ( 200) 3 headers in 238 bytes (1 switches on core 2)
[pid: 27182|app: 0|req: 560/787] {26 vars in 254 bytes} [09:33:47 2015] GET / => generated 1337 bytes in 11 msecs ( 200) 3 headers in 238 bytes (1 switches on core 3)
Nginx shows something similar. It's just constantly issuing a request to my app.
It's only doing this when nginx is on. If I stop nginx the polling stops. My app is up and working but I don't know why this is happening. Is this normal behavior for nginx/uwsgi when using the uwsgi protocol?
EDIT Im also using uwsgi in emperor mode
|
Passing dictionary values in python as command line inputs to c++ code
| 29,858,319 | 0 | 0 | 211 | 0 |
python,c++
|
You can't do that unless you relax one of the restrictions.
Relax the python dict requirement: The command line has a well defined text arguments interface, which can easily handle all the info. You can pass the json filename, the str representation of the dict, or pass name-value pairs as command line arguments.
Relax the system call requirement: Rather than building an executable from the c++ code, you can build a python c++ extension. The c++ code can export functions that take a python dict.
Relax the c++ requirement: Obviously you could code it in python.
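A sketch of the first two relaxations under the os.system constraint (./mydemo and the parameter names are placeholders):

```python
import json
import shlex

params = {"width": 640, "height": 480}  # e.g. parsed from the JSON file

# Option 1a: name=value pairs, safely quoted for the shell.
cmd = "./mydemo " + " ".join(
    "%s=%s" % (k, shlex.quote(str(v))) for k, v in sorted(params.items())
)

# Option 1b: hand the whole dict over as one JSON argument the C++ side parses.
cmd_json = "./mydemo " + shlex.quote(json.dumps(params, sort_keys=True))

# os.system(cmd) would then run it
print(cmd)  # ./mydemo height=480 width=640
```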
| 0 | 1 | 0 | 0 |
2015-04-24T21:13:00.000
| 1 | 1.2 | true | 29,857,593 | 1 | 0 | 0 | 1 |
I have a wrapper file that is reading a JSON file into a dictionary.
I am using the os.system("command") command to run a C++ code in this python file.
The C++ code takes command line inputs which are key values in the parsed dictionary.
How can i pass a python variable as a command line input for a C++ code using the os.system("command") instruction?
|
Webapp2 Redirect Method
| 29,866,631 | 2 | 1 | 177 | 0 |
python,google-app-engine,webapp2
|
Use a 307 redirect. A 307 will not change the method of the redirect.
Wikipedia: 307 temporary redirect (provides a new URL for the browser to resubmit a GET or POST request)
| 0 | 1 | 0 | 0 |
2015-04-25T14:09:00.000
| 1 | 1.2 | true | 29,866,234 | 0 | 0 | 1 | 1 |
I am trying to redirect a POST request from an Google App Engine Python Handler to another URL. The Problem is that it seems the method is changed to GET. Is there any way to set the POST method when redirecting?
|
Python/Eclipse/wxPython -- CallAfter undefined variable? callAfter is as well -- confused
| 29,873,332 | 0 | 0 | 94 | 0 |
python,eclipse,wxpython,pydev
|
I figured it out on my own. I deleted the wx and wxPython forced builtins and then loaded wx as an external library. Everything worked fine after that.
| 0 | 1 | 0 | 0 |
2015-04-25T23:40:00.000
| 1 | 1.2 | true | 29,871,970 | 0 | 0 | 0 | 1 |
I'm using Eclipse Luna and the latest pydev with it. I have wxpython 3.0 installed. First, I could import wx and I tried in the console to print version, perfect, but then I do import wx.lib.pubsub -- it says unresolved. I try other variations, no dice, so I have to go into the properties of my project and add wx manually, then it worked.
Second, now all my CallAfter calls are underlined red, undefined variable from import. I know callAfter used to be it, so I tried that too, it tries to autocomplete to it -- but then underlines it. I know in 3.0, CallAfter is capitalized. Even if it wasn't, Eclipse tries to autocomplete to an old version and then says it's still bad.
I've never seen that before, I'm confused. Does anyone know what I'm doign incorrectly?
EDIT: Even weirder -- I use the console inside pydev eclipse, it autocompletes to normal CallAfter and doesn't throw any errors.
|
I use os.system to run executable file, but it need a .so file, it can't find library
| 29,888,758 | 0 | 2 | 926 | 0 |
python,ubuntu-12.04,dynamic-library
|
If I remember correctly, executing export ... via os.system only sets that shell variable inside the short-lived shell spawned for that one call; each os.system call gets a fresh shell, so the variable is gone in the following os.system calls. You should set LD_LIBRARY_PATH in the shell before executing the Python script, or pass a modified environment to the child process.
Btw. also avoid setting relative paths…
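Alternatively, the environment can be modified from inside Python and passed only to the child process — a hedged sketch (./mydemo is the executable from the question):

```python
import os
import subprocess

def run_with_local_libs(cmd):
    """Run cmd with the current directory prepended to LD_LIBRARY_PATH."""
    env = os.environ.copy()
    env["LD_LIBRARY_PATH"] = (
        os.getcwd() + os.pathsep + env.get("LD_LIBRARY_PATH", "")
    )
    # Only the child sees the change; the parent environment is untouched.
    return subprocess.call(cmd, env=env)

# run_with_local_libs(["./mydemo"])
```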
| 0 | 1 | 0 | 1 |
2015-04-27T06:32:00.000
| 3 | 0 | false | 29,888,716 | 0 | 0 | 0 | 1 |
When I call a executable in python using os.system("./mydemo") in Ubuntu, it can't find the .so file (libmsc.so) needed for mydemo. I used os.system("export LD_LIBRARY_PATH=pwd:$LD_LIBRARY_PATH;"), but it still can't find libmsc.so.
The libmsc.so is in the current directory. and shouldn't be global.
|
Simple way for message passing in distributed system
| 29,904,422 | 0 | 2 | 1,745 | 0 |
python,message-queue,messaging,distributed,distributed-system
|
I would recommend RabbitMQ or Redis (RabbitMQ preferred because it is a very mature technology and insanely reliable). ZMQ is an option if you want a single-hop messaging system instead of a brokered one such as RabbitMQ, but ZMQ is harder to use than RabbitMQ. It also depends on how you want to utilize the message passing: for task dispatch you can use Celery; if you need slightly more low-level access, use Kombu with the librabbitmq transport.
| 0 | 1 | 1 | 1 |
2015-04-27T17:15:00.000
| 3 | 0 | false | 29,902,069 | 0 | 0 | 0 | 1 |
I am implementing a small distributed system (in Python) with nodes behind firewalls. What is the easiest way to pass messages between the nodes under the following restrictions:
I don't want to open any ports or punch holes in the firewall
Also, I don't want to export/forward any internal ports outside my network
Time delay less than, say 5 minutes, is acceptable, but closer to real time would be nice, if possible.
1+2 → I need to use a third party, accessible by all my nodes. From this follows, that I probably also want to use encryption
Solutions considered:
Email - by setting up separate or a shared free email accounts (e.g. Gmail) which each client connects to using IMAP/SMTP
Google docs - using a shared online spreadsheet (e.g. Google docs) and some python library for accessing/changing cells using a polling mechanism
XMPP using connections to a third party server
IRC
Renting a cheap 5$ VPS and setting up a Zero-MQ publish-subscribe node (or any other protocol) forwarded over SSH and having all nodes connect to it
Are there any other public (free) accessible message queues available (or platforms that can be misused as a message queue)?
I am aware of the solution of setting up my own message broker (RabbitMQ, Mosquito) etc and make it accessible to my nodes somehow (ssh-forwardning to a third host etc). But my questions is primarily about any solution that doesn't require me to do that, i.e. any solutions that utilizes already available/accessible third party infrastructure. (i.e. are there any public message brokers I can use?)
|
Cannot upload to an app ID though I have the correct permissions
| 29,906,509 | 0 | 0 | 30 | 0 |
python,google-app-engine,console,cloud
|
Most likely, you are logged into Gmail under a different account. Go to Gmail, and click Sign Out. Then go to the developer console. It should ask you to log in or select from several accounts.
| 0 | 1 | 0 | 0 |
2015-04-27T19:01:00.000
| 1 | 0 | false | 29,903,889 | 0 | 0 | 1 | 1 |
(I've been using appengine since 2009 and haven't needed support until now.) I've been added to a new project from the cloud console. When I try to upload the app, AppEngine launcher says "This application does not exist". Furthermore, in Cloud console, nothing appears under the appengine heading. At the same time, however, the old appengine.appspot.com DOES have the application listed. Any help?
|
Upgrade Python 3.2 to Python 3.4 on linux
| 29,975,297 | 2 | 1 | 5,897 | 0 |
python,python-3.x,debian,apt
|
I'm going to answer my own question, since I have found a solution to my problem. I had previously run apt-get upgrade on my system after setting my debian release to jessie. This did not replace python 3.2 though. What did replace it was running apt-get dist-upgrade; after that apt-get autoremove removed python 3.2. I doubt that this could be a problem, since I hadn't installed any external libraries.
| 0 | 1 | 0 | 1 |
2015-04-27T20:24:00.000
| 2 | 0.197375 | false | 29,905,262 | 1 | 0 | 0 | 1 |
I have Python 3.2 installed by default on my Raspbian Linux, but I want Python 3.4 (time.perf_counter, yield from, etc.). Installing Python 3.4 via apt-get is no problem, but when i type python3 in my shell I still get Python 3.2 (since /usr/bin/python3 still links to it). Should I change the Symlink, or is there a better was to do this?
|
Avoid duplicate entries in Datastore
| 29,918,649 | 2 | 1 | 456 | 0 |
python-2.7,google-app-engine,google-cloud-datastore,webapp2
|
Hash the entities and use the hash value as the key for your Entity.
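One hedged way to do that in Python: derive a deterministic key from the entity's fields, so inserting identical data twice overwrites a single record instead of creating a duplicate (the field names below are made up):

```python
import hashlib
import json

def entity_key(data):
    """Deterministic key from a dict of entity fields (excluding any auto id)."""
    canonical = json.dumps(data, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

a = entity_key({"name": "alice", "city": "Oslo"})
b = entity_key({"city": "Oslo", "name": "alice"})  # same data, other order
assert a == b
```

Using that value as the entity's key name (e.g. as the id when constructing the model instance) makes a second put() of the same data replace the first entry rather than add a new one.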
| 0 | 1 | 0 | 0 |
2015-04-28T09:34:00.000
| 1 | 0.379949 | false | 29,915,632 | 0 | 0 | 1 | 1 |
I am trying to send the data to google app engine in python using Webapp2, But when I check the entries in the data in console I found duplicate entries which means except Id everything is same.I want to avoid those duplicate entries.Please suggest me if there is anyway to find the duplicate values to avoid.Thanks in advance.
|
GAE module: "Request attempted to contact a stopped backend."
| 29,928,760 | 0 | 2 | 607 | 0 |
python,google-app-engine
|
Fixed by shutting down all instances (on all modules/versions just to be safe).
| 0 | 1 | 0 | 0 |
2015-04-28T19:42:00.000
| 1 | 1.2 | true | 29,928,477 | 0 | 0 | 1 | 1 |
I am currently experiencing an issue in my GAE app with sending requests to non-default modules. Every request throws an error in the logs saying:
Request attempted to contact a stopped backend.
When I try to access the module directly through the browser, I get:
The requested URL / was not found on this server.
I attempted to stop and start the "backend" modules a few times to no avail. I also tried changing the default version for the module to a previous working version, but the requests from my front-end are still hitting the "new", non-default version. When I try to access a previous version of the module through the browser, it does work however.
One final symptom: I am able to upload my non-default modules fine, but cannot upload my default front-end module. The process continually says "Checking if deployment succeeded...Will check again in 60 seconds.", even after rolling back the update.
I Googled the error from the logs and found almost literally nothing. Anyone have any idea what's going on here, or how to fix it?
|
Supervisor and directory option
| 30,126,084 | 0 | 2 | 498 | 0 |
python,celery
|
The directory option in supervisord is the path supervisord chdirs to before executing the program — i.e. the working directory of the managed process.
Example:
directory="/home/celery/pictures/myapp"
| 0 | 1 | 0 | 0 |
2015-04-28T21:11:00.000
| 1 | 0 | false | 29,930,029 | 0 | 0 | 0 | 1 |
I was configuring supervisor daemon to be able to start/stop Celery.
It did not work. After debuging back and forth I realized that the problem was that it did not change the working directory to the one mentioned in the directory option in supervisord.conf under program:celery configuration.
Hopefully there is a workdir in Celery but I am curious - what is the purpose of the directory option then?
|
How to Install Private Python Package as Part of Build
| 29,936,384 | 11 | 13 | 7,187 | 0 |
python,docker,pip,git-submodules
|
If you use github with a private repo you will have to create an SSH deploy key and add the private key to your app folder for builds.
pip install git+git://github.com/myuser/foo.git@v123
Alternatively, you can mount a pip-cache folder from host into container and do pip install from that folder. You'd have to keep the python packages in the cache dir with your app.
pip install --no-index --find-links=/my/pip-cache/
you can populate this pip-cache with the following command:
pre pip 9.0.1:
pip install --download pip-cache/ package1 package2
pip 9.0.1+ (thx for comment @James Hiew):
pip download --dest pip-cache/ package1 package2
| 0 | 1 | 0 | 0 |
2015-04-29T04:35:00.000
| 5 | 1 | false | 29,934,451 | 1 | 0 | 0 | 1 |
I have a fairly large private python package I just finished creating. I'd like to install it as part of my build process for an app in a Docker container (though this isn't so important). The package source is quite large, so ideally I'd avoid downloading/keeping the whole source.
Right now, I've been just passing around the package source along with my app, but this is unwieldy and hopefully temporary. What's a better way? git submodule/subtree? I'm pretty new to this.
|
What is the relationship between virtualenv and pyenv?
| 46,344,026 | 33 | 209 | 58,878 | 0 |
python,virtualenv,virtualenvwrapper,pyenv
|
Short version:
virtualenv allows you to create local (per-directory), independent python installations by cloning from existing ones
pyenv allows you to install (build from source) different versions of Python alongside each other; you can then clone them with virtualenv or use pyenv to select which one to run at any given time
Longer version:
Virtualenv allows you to create a custom Python installation e.g. in a subdirectory of your project. This is done by cloning from an existing Python installation somewhere on your system (some files are copied, some are reused/shared to save space). Each of your projects can thus have their own python (or even several) under their respective virtualenv. It is perfectly fine for some/all virtualenvs to even have the same version of python (e.g. 3.8.5) without conflict - they live separately and don't know about each other. If you want to use any of those pythons from shell, you have to activate it (by running a script which will temporarily modify your PATH to ensure that that virtualenv's bin/ directory comes first). From that point, calling python (or pip etc.) will invoke that virtualenv's version until you deactivate it (which restores the PATH). It is also possible to call into a virtualenv Python using its absolute path - this can be useful e.g. when invoking Python from a script.
Pyenv operates on a wider scale than virtualenv. It is used to install (build from source) arbitrary versions of Python (it holds a register of available versions). By default, they're all installed alongside each other under ~/.pyenv, so they're "more global" than virtualenv. Then, it allows you to configure which version of Python to run when you use the python command (without virtualenv). This can be done at a global level or, separately, per directory (by placing a .python-version file in a directory). It's done by prepending pyenv's shim python script to your PATH (permanently, unlike in virtualenv) which then decides which "real" python to invoke. You can even configure pyenv to call into one of your virtualenv pythons (by using the pyenv-virtualenv plugin). You can also duplicate Python versions (by giving them different names) and let them diverge.
Using pyenv can be a convenient way of installing Python for subsequent virtualenv use.
| 0 | 1 | 0 | 0 |
2015-04-29T17:13:00.000
| 2 | 1 | false | 29,950,300 | 1 | 0 | 0 | 1 |
I recently learned how to use virtualenv and virtualenvwrapper in my workflow, but I've seen pyenv mentioned in a few guides and can't seem to get an understanding of what pyenv is and how it is different from/similar to virtualenv. Is pyenv a better/newer replacement for virtualenv or a complementary tool? If the latter, what does it do differently, and how do the two (and virtualenvwrapper, if applicable) work together?
|
Celery, find the task caller from the task?
| 29,971,218 | 1 | 0 | 249 | 0 |
python,celery
|
Short answer is no and that is by design.
Long answer is yes, you can always send in unneeded information to the worker whose sole purpose is to identify the caller and the caller's state.
| 0 | 1 | 0 | 0 |
2015-04-29T19:35:00.000
| 1 | 1.2 | true | 29,952,975 | 0 | 0 | 0 | 1 |
Is it possible to look up what code called (delay(), apply_async(), apply(), etc.) a task from within the task's code? Strings would be fine. Ideally, I would like to get the caller's stack trace.
|
Will installing Anaconda3 change Mac OS X default Python version to 3.4?
| 29,989,510 | 2 | 2 | 2,423 | 0 |
python,python-2.7,python-3.x,anaconda
|
No it won't; you can have multiple Python installs. As long as you don't remove your system Python or manually change the default, you will be fine.
| 0 | 1 | 0 | 0 |
2015-05-01T14:13:00.000
| 2 | 0.197375 | false | 29,988,504 | 1 | 0 | 0 | 1 |
I'm planning to install Anaconda3 for Python 3.4. Since by default Mac OS X uses Python 2, will installing Anaconda3 change the default Python version for the system? I don't want that to happen, since Python 3 can break backwards compatibility. If it does change the default Python version, how can I avoid that?
|
icu4c + Mapnik - I want to switch icu4c from 55.1 to 54.1 to get Mapnik to work
| 30,045,778 | 2 | 0 | 824 | 0 |
python-2.7,osx-mavericks,homebrew,icu,mapnik
|
This was Homebrew's fault and should be fixed after brew update && brew upgrade mapnik; sorry!
| 0 | 1 | 0 | 0 |
2015-05-01T14:37:00.000
| 2 | 0.197375 | false | 29,988,923 | 0 | 0 | 0 | 1 |
What is the best way to downgrade icu4c from 55.1 to 54.1 on Mac OS X Mavericks.
I tried brew switch icu4c 54.1 and failed.
Reason to switch back to 54.1
I am trying to setup and use Mapnik.
I was able to install Mapnik from homebrew - brew install mapnik
But, I get the following error when I try to import mapnik in python
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/mapnik/__init__.py", line 69, in <module>
from _mapnik import *
ImportError: dlopen(/usr/local/lib/python2.7/site-packages/mapnik/_mapnik.so, 2): Library not loaded: /usr/local/opt/icu4c/lib/libicuuc.54.dylib
Referenced from: /usr/local/Cellar/mapnik/2.2.0_5/lib/libmapnik.dylib
Reason: image not found
Python version on my Mac - Python 2.7.5 (default, Mar 9 2014, 22:15:05)
Is switching icu4c back to 54.1 the way to go?
Or, Am I missing something?
Thanks for the help in advance.
|
install numpy for python 3.4 in ubuntu
| 30,000,588 | 3 | 2 | 5,372 | 0 |
python,ubuntu,python-3.x,numpy
|
You're close: to install a package for Python 3.4 you have to run sudo pip3 install -U numpy (note the pip3).
It might be the case that you still have to install pip3 first (not sure if it is bundled). For that, run sudo apt-get install python3-pip. If you have a recent Ubuntu (I believe starting from 14.10), then you already have Python 3.4 when you first boot up, as well as pip3 pre-installed.
You can also install through Ubuntu's package manager, but if you want an OS-independent way, you can just use pip3.
| 0 | 1 | 0 | 0 |
2015-05-02T08:51:00.000
| 2 | 1.2 | true | 30,000,294 | 1 | 0 | 0 | 1 |
I have python 2.7, and I installed beside it python 3.4, but python 3.4 has not numpy package. When I use sudo pip install -U numpy, it install it in python2.7 location. How can I install numpy for python 3.4 in a machine that has already python 2.7?
|
Python script fallback to second server
| 30,035,305 | 1 | 1 | 104 | 0 |
python,pushbullet
|
Could you shut down the script on your VPS, copy the cache files over to the Pi, and run the script there? Then do the reverse when you want to move it back to the VPS.
You could possibly run the script on both systems, but then you'd need to synchronize between them, which sounds like a lot of unnecessary work. For instance, you could run a third server that you check to see whether you've sent something yet, but you would need to be able to lock items on it so you don't have a race condition between your two scripts.
| 0 | 1 | 0 | 1 |
2015-05-03T02:24:00.000
| 1 | 0.197375 | false | 30,009,595 | 0 | 0 | 1 | 1 |
I have a Python script that manages Pushbullet channels for Nexus Android device factory images. It runs on my VPS (cron job that runs every 10 minutes), but my provider has warned that there may be intermittent downtime over the next several days. The VPS is running Ubuntu Server 15.04.
I have a Raspberry Pi that's always on, and I can easily modify the script so that it works independently on both the VPS and the Pi. I would like the primary functionality to exist on the VPS, but I want to fall back to the Pi if the VPS goes down. What would be the best way to facilitate this handoff between the two systems (in both directions)? The Pi is running Raspbian Wheezy.
Additionally, the script uses urlwatch to actually watch the requisite page for updates. It keeps a cache file on the local system for each URL. If the Pi takes over and determines a change is made, it will notify the Pushbullet channel(s) as it should. When the VPS comes back up and takes over, it will have the old cache files and will notify the channel(s) again, which I want to avoid.
So: How can I properly run the script on whichever system happens to be up at the moment (preferring the VPS), and how can I manage the urlwatch caches between the two systems?
|
Try to run Python 3.4.3 but still shows Python 2.7.9
| 30,020,231 | 0 | 1 | 1,024 | 0 |
python,python-2.7,python-3.x
|
Probably the links are not working.
To solve this, back up your current python link:
cp /usr/bin/python ~/Desktop
Then remove the old soft link and create a new one pointing to the Python 3.4.3 installation (the python.org installer places it under /Library/Frameworks):
rm -f /usr/bin/python
ln -s /Library/Frameworks/Python.framework/Versions/3.4/bin/python3.4 /usr/bin/python
Note that repointing /usr/bin/python can break OS tools that expect Python 2; simply invoking python3 directly is usually the safer fix.
| 0 | 1 | 0 | 0 |
2015-05-03T22:16:00.000
| 1 | 0 | false | 30,020,044 | 1 | 0 | 0 | 1 |
I installed Python 3.4.3 and then installed Python 2.7.9 on a Mac Air. If I run python on the command line, it shows Python 2.7.9. I removed Python 2.7.9, but it still shows Python 2.7.9. What is the problem? Thanks.
|
Celery worker: periodic local background task
| 30,047,898 | 0 | 1 | 541 | 0 |
python,scheduled-tasks,celery,background-process
|
One possible method I thought of, though not ideal, is to patch the celery.worker.heartbeat Heart() class.
Since we already use heartbeats, the class allows for a simple modification to its start() method (add another self.timer.call_repeatedly() entry), or an additional self.eventer.on_enabled.add() entry in __init__ which references a new method that also uses self.timer.call_repeatedly() to perform a periodic task.
| 0 | 1 | 0 | 0 |
2015-05-04T18:15:00.000
| 2 | 0 | false | 30,037,065 | 0 | 0 | 0 | 1 |
Is there any Celery functionality or preferred way of executing periodic background tasks locally when using a single worker? Sort of like a background thread, but scheduled and handled by Celery?
celery.beat doesn't seem suitable as it appears to be simply tied to a consumer (so could run on any server) - that's the type of scheduling I was after, but just a task that is always run locally on each server running this worker (the task does some cleanup and stats relating to the main task the worker handles).
I may be going about this the wrong way, but I'm confined to implementing this within a celery worker daemon.
|
Python shell in linux
| 30,049,734 | 0 | 2 | 4,968 | 0 |
python,linux,python-idle
|
Install the "idle" package. Many distros break the core python installation into several pieces, and idle is usually one that is packaged separately.
| 0 | 1 | 0 | 0 |
2015-05-05T09:50:00.000
| 3 | 0 | false | 30,049,608 | 1 | 0 | 0 | 2 |
Is there a way to get the Python IDLE shell, like the one seen on Windows? Sorry if this topic has been raised before; I did not find it.
Using Linux mint 17.1 rebecca, Cinnamon 32 bit
|
Python shell in linux
| 30,049,783 | 0 | 2 | 4,968 | 0 |
python,linux,python-idle
|
If you really want IDLE, just run python -m idlelib.idle (on Python 3, python3 -m idlelib works as well)
| 0 | 1 | 0 | 0 |
2015-05-05T09:50:00.000
| 3 | 0 | false | 30,049,608 | 1 | 0 | 0 | 2 |
Is there a way to get the Python IDLE shell, like the one seen on Windows? Sorry if this topic has been raised before; I did not find it.
Using Linux mint 17.1 rebecca, Cinnamon 32 bit
|
IIS access to a remote server in same domain
| 30,057,339 | 0 | 0 | 78 | 0 |
php,python,iis
|
Since you provide no example code or describe what you are doing... there are a few things to consider.
Anything running in the context of a webpage in IIS runs in a different context than a logged-in user.
The first part of that is simply that file-system-level permissions may differ for the IIS user account. The proper way to handle that is by granting the necessary permissions at the filesystem level to the IIS user. Do not change the IIS user if you do not understand the ramifications of doing so.
The next part is that certain operations cannot be done in the context of the IIS user account (regardless of account permissions), because there are certain things that only a logged-in user with access to the console/desktop can do.
Certain operations called from IIS are purposely blocked (shell.execute) regardless of permissions, account used, etc. This occurs in IIS versions from Windows Server 2008 onward and is done for security.
| 0 | 1 | 0 | 1 |
2015-05-05T15:11:00.000
| 2 | 0 | false | 30,056,836 | 0 | 0 | 0 | 2 |
I have a server A where some logs are saved, and another server B with a web server (IIS) on it.
I can access serverA from Windows Explorer with zero problems, but when I want to access it from serverB with some PHP code, it doesn't work.
I made a python script that accesses the file from serverA on serverB. It works if I run that script from CMD, but when I run that script from PHP code it doesn't work anymore.
I run the IIS server as a domain account that has access on serverA.
I tried running it as LocalService, NetworkService, System, and LocalUser, but with no success.
That script is a simple open command, so the problem is not with Python.
|
IIS access to a remote server in same domain
| 30,117,742 | 0 | 0 | 78 | 0 |
php,python,iis
|
Resolved.
Uninstall IIS and use XAMPP.
No problem found till now, everything works okay.
So use XAMPP/WAMP!
| 0 | 1 | 0 | 1 |
2015-05-05T15:11:00.000
| 2 | 1.2 | true | 30,056,836 | 0 | 0 | 0 | 2 |
I have a server A where some logs are saved, and another server B with a web server (IIS) on it.
I can access serverA from Windows Explorer with zero problems, but when I want to access it from serverB with some PHP code, it doesn't work.
I made a python script that accesses the file from serverA on serverB. It works if I run that script from CMD, but when I run that script from PHP code it doesn't work anymore.
I run the IIS server as a domain account that has access on serverA.
I tried running it as LocalService, NetworkService, System, and LocalUser, but with no success.
That script is a simple open command, so the problem is not with Python.
|
OPENSHIFT Cron job just stop working
| 30,107,482 | 0 | 0 | 567 | 0 |
python,django,openshift
|
This issue should be fixed now. Please open a request at help.openshift.com if you continue to have issues with it.
| 0 | 1 | 0 | 1 |
2015-05-06T11:20:00.000
| 2 | 0 | false | 30,075,234 | 0 | 0 | 0 | 1 |
I have an hourly cron job which has been running on an OpenShift free gear for almost a year with no problems. But for the past 2 days, the cron job has stopped running automatically. I have been googling around and still cannot find what went wrong. Here is what I have checked/done to date:
the service I use to keep the site alive is still up and running as normal, so it is not a case of the gear being idle.
force-restarted the app. The cron job still did not start automatically as it used to.
made trivial changes to the cron script file and pushed to OpenShift. Still not fixed.
log files look ok
Mon May 4 13:01:07 EDT 2015: START hourly cron run
Mon May 4 13:01:29 EDT 2015: END hourly cron run - status=0
Any advice or pointers as to why it just stopped working when there has been no change to the app? Thank you.
|
What should the Python start command look like in Bluemix?
| 30,096,077 | 3 | 1 | 2,739 | 0 |
python-3.x,ibm-cloud
|
You can define the start command in a file called Procfile. Create the Procfile in the root of your app code that you push to Bluemix. The contents of the Procfile should look like this:
web: python3 appname.py
where appname.py is the name of your Python script to run
| 0 | 1 | 0 | 0 |
2015-05-06T23:27:00.000
| 3 | 1.2 | true | 30,089,374 | 0 | 0 | 0 | 1 |
I am trying to push a Python 3 app to Bluemix, but get the error message "missing start command". I have tried adding -c "python appname.py" as Python usually uses on Windows and -c "python3 appname.py" as on Linux, but neither works for me. Can anyone give me the right start command to use?
|
is there any possible way to run a python script on boot in windows operating system?
| 30,090,978 | 1 | 0 | 111 | 0 |
python,windows
|
You don't need to create a py2exe executable for this; you can simply run the Python executable itself (assuming it's installed, of course), passing the name of your script as an argument.
And one way to do that is to use the task scheduler, which can create tasks to be run at boot time, under any user account you have access to.
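A sketch of the Task Scheduler route, run from an elevated command prompt; the task name and paths are assumptions you would replace with your own:

```
REM Register a task that starts the script at boot, running as SYSTEM
schtasks /Create /TN "MyPythonScript" /SC ONSTART /RU SYSTEM ^
    /TR "C:\Python27\python.exe C:\scripts\myscript.py"
```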
| 0 | 1 | 0 | 0 |
2015-05-07T02:30:00.000
| 1 | 1.2 | true | 30,090,942 | 1 | 0 | 0 | 1 |
I want to run a Python script that should always start when Windows boots.
I believe I can create a Windows executable from the Python script using py2exe, but how do I make it a startup service that is triggered at boot?
Is there any way?
|
about close a file in Python
| 30,092,316 | 3 | 9 | 2,274 | 0 |
python,file
|
There are two good reasons.
If your program crashes or is unexpectedly terminated, then output files may be corrupted.
It's good practice to close what you open.
| 0 | 1 | 0 | 0 |
2015-05-07T04:55:00.000
| 5 | 0.119427 | false | 30,092,249 | 1 | 0 | 0 | 1 |
I know it is a good habit to use close to close a file that is no longer used in Python. I have tried opening a large number of files without closing them (in the same Python process), and did not see any exceptions or errors. I have tried both Mac and Linux. So I am just wondering if Python is smart enough to manage file handles, closing/reusing them automatically, so that we do not need to care about closing files?
thanks in advance,
Lin
|
Python Default Confusion
| 31,082,418 | 0 | 0 | 51 | 0 |
python,python-2.7,amazon-ec2,config,python-2.6
|
Creating an alias in your ~/.bashrc is a good approach.
It sounds like you have not run source ~/.bashrc after editing it. Make sure to run this command.
Also keep in mind that when you run sudo python your_script.py it will not use your alias (because you are running as root, not as the ec2-user).
Make sure not to change your default Python; it could break several programs in your Linux distribution (again, using an alias in your ~/.bashrc is good).
| 0 | 1 | 0 | 1 |
2015-05-11T00:29:00.000
| 1 | 0 | false | 30,158,068 | 0 | 0 | 0 | 1 |
Upon running python in any dir in my amazon EC2 instance, I get the following printout on first line: Python 2.6.9 (unknown, todays_date). Upon going to /usr/bin and running python27, I get this printout on first line: Python 2.7.9 (default, todays_date).
This is a problem because the code that I have only works with Python 2.6.9, and it seems as though my default is Python 2.7.9. I have tried the following things to set default to 2.6:
1) Editing ~/.bashrc and creating an alias for python to point to 2.6
2) Editing ~/.bashrc and exporting the python path
3) Hopelessly scrolling through the /etc folder looking for any kind of file that can reset the default python
What the hell is going on?!?! This might be EC2 specific, but I think my main problem is that upon running /usr/bin/python27 I see that it is default on that first line.
Even upon running python -V, I get Python 2.6. And upon running which python I get /usr/bin/python, but that is not the default that the EC2 instance runs when it attempts to execute my code. I know this because the EC2 prints out Python/2.7.9 in the error log before showing my errors.
|