Title | A_Id | Users Score | Q_Score | ViewCount | Database and SQL | Tags | Answer | GUI and Desktop Applications | System Administration and DevOps | Networking and APIs | Other | CreationDate | AnswerCount | Score | is_accepted | Q_Id | Python Basics and Environment | Data Science and Machine Learning | Web Development | Available Count | Question
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Keep Django runserver alive when SSH is closed | 32,011,468 | 5 | 5 | 7,057 | 0 | python,django,ssh,server | Since runserver is only intended for development, not production, there is no built-in way to do this.
You will need to use a tool like tmux/screen or nohup to keep the process alive, even if the spawning terminal closes. | 0 | 1 | 0 | 0 | 2015-08-14T13:32:00.000 | 4 | 1.2 | true | 32,011,375 | 0 | 0 | 1 | 1 | I haven't yet been able to get Apache working with my Django app, so until I do get it working, I'm using runserver on my Linux server in order to demo the app. The problem is that whenever I close the SSH connection to the server, runserver stops running. How can I keep runserver running when I say put my laptop to sleep, or lose internet connectivity?
P.S. I'm aware that runserver isn't intended for production. |
How does TCP packet numbering make it easier for retransmission? | 32,023,059 | 3 | 1 | 295 | 0 | python,networking,tcp | It makes it easier because in essence each byte is numbered this way, letting you generate new sequence ids without having to worry about what higher sequence ids have already been used.
Let's say that transmission of the first 8,224 bytes succeeded, but the next 2 packets need to be resent. Moreover, those last 2 packets are not of optimal size: they are perhaps 2,048 bytes long, and 1,024 bytes would be a better packet size (perhaps a route was switched, or for some other reason).
If the packets were numbered sequentially, you could not break up those two packets, because later packets that were already received use the next numbers in the series. These two packets might be numbered 10 and 11, and you cannot break them up and use 12 and 13 as well, because those numbers are already taken in this series of packets.
But if you used 8224 and 10272 instead, now you can break up the packets and send sequence numbers 8224, 9248, 10272 and 11296 without breaking the order of the whole sequence. | 0 | 1 | 0 | 0 | 2015-08-15T08:29:00.000 | 1 | 1.2 | true | 32,022,971 | 0 | 0 | 0 | 1 | In a reputable book about Python network programming (I'm not mentioning the title, so the question isn't taken as an advertisement for the book), the author, explaining TCP, wrote:
Instead of using sequential integers (1, 2, 3...) to sequence packets, TCP uses a counter that
counts the number of bytes transmitted. A 1,024-byte packet with a sequence number of 7,200,
for example, would be followed by a packet with a sequence number of 8,224. This means that
a busy network stack does not have to remember how it broke up a data stream into packets.
If asked for a retransmission, it can break up the stream into new packets some other way
(which might let it fit more data into a packet if more bytes are now waiting for transmission),
and the receiver can still put the packets back together.
How does that numbering pattern (counting the bytes transmitted rather than adding 1 for every packet) make it easier for the sender to retransmit a packet? |
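A minimal Python 3 sketch of the byte-offset numbering the answer describes; the chunk sizes and starting sequence number are just the figures from the example:

```python
def packetize(stream, start_seq, chunk_size):
    """Split a byte stream into (sequence_number, payload) pairs.

    The sequence number is the byte offset of each packet's first byte,
    so any chunk size yields a numbering consistent with the rest of
    the stream.
    """
    packets = []
    offset = 0
    while offset < len(stream):
        payload = stream[offset:offset + chunk_size]
        packets.append((start_seq + offset, payload))
        offset += len(payload)
    return packets

data = bytes(4096)  # the two 2,048-byte packets that need resending

# Original split: two 2,048-byte packets starting at sequence 8224.
print([seq for seq, _ in packetize(data, 8224, 2048)])  # [8224, 10272]

# Retransmission split into 1,024-byte packets: same bytes, new
# boundaries, and the numbers still slot into the overall sequence.
print([seq for seq, _ in packetize(data, 8224, 1024)])  # [8224, 9248, 10272, 11296]
```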
How to send `eof` signal, over a socket, to a command running in remote shell? | 32,023,977 | 7 | 6 | 4,705 | 0 | linux,sockets,python-3.x,signals | eof is not a signal but is implemented by the tty driver as a read of length 0 when you type ctrl-d.
If your remote is not running in a tty then you cannot generate an eof, as you cannot send a packet that is read as length 0. However, if you run a program like script /dev/null as the first command in your remote shell, this will envelop your shell inside a pseudo-tty, and you will be able to send a real ctrl-d character (hex 0x04), which the pty will convert to eof and so end a cat, for example. Send a stty -a to the remote to check that eof is set in your pty.
stty -a on my terminal says lnext = ^V (literal-next char) so I can type ctrl-vctrl-d to input a real hex 0x04 char.
I chose script as I know it effectively interposes a pseudo-tty in the communication, and does nothing to the data stream. This is not its original purpose (see its man page), but that doesn't matter. | 0 | 1 | 1 | 0 | 2015-08-15T09:56:00.000 | 2 | 1 | false | 32,023,586 | 0 | 0 | 0 | 1 | How to send eof signal, over a socket, to a command running in remote shell?
I've programmed in Python, using sockets, a remote shell application, where I send commands to be executed on another PC.
Everything works fine (for most commands), except a command like cat > file is causing me problems.
Normally, I would terminate the command above with CTRL + D (eof signal), but pressing CTRL + D in my client doesn't send the signal to the remote shell. Therefore I have no means of terminating the command and I'm stuck.
Anyone have suggestions ? |
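A hedged sketch of what the answer implies on the client side, assuming the remote end is a bare shell bound to a socket; the host, port, and commands are placeholders. After wrapping the shell in script /dev/null, a literal 0x04 byte is enough to end a cat:

```python
import socket

# Connect to the remote shell application (address/port are placeholders).
s = socket.create_connection(("remote-host", 4444))

s.sendall(b"script /dev/null\n")  # wrap the remote shell in a pseudo-tty
s.sendall(b"cat > file\n")        # the command we want to terminate later
s.sendall(b"some data\n")
s.sendall(b"\x04")                # literal Ctrl-D; the pty turns it into EOF
s.close()
```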
How to trigger Python script on Raspberry Pi from Node-Red | 54,633,976 | 1 | 8 | 28,176 | 0 | python,raspberry-pi,gpio,iot,node-red | I hope you have installed Node-RED along with Python.
If not, install it using the following command in either PowerShell or CMD:
npm install -g node-red-contrib-python3-function
After starting Node-RED, you can find the pythonshell node in Node-RED's node panel.
Drag and drop it, then double-click it to open the "node properties" panel.
Enter the python.exe path in Name and your Python file in Py File, then click Done.
Have a msg.payload node connected to it and Deploy.
Click on the PythonShell node's input, and your Python program will be executed and its result displayed in the output. | 0 | 1 | 0 | 1 | 2015-08-17T19:07:00.000 | 4 | 0.049958 | false | 32,057,882 | 0 | 0 | 0 | 3 | I'm using Node-Red, hosted on a Raspberry Pi for an IoT project.
How do I trigger a Python script that is on the raspi from Node-Red? I want to run a script that updates the text on an Adafruit LCD shield which is sitting on the Pi
Should I be looking to expose the Python script as a web service somehow?
I'm using a Raspberry Pi B+ |
How to trigger Python script on Raspberry Pi from Node-Red | 71,484,228 | 0 | 8 | 28,176 | 0 | python,raspberry-pi,gpio,iot,node-red | I had a similar challenge with a Raspberry pi 4.
I solved it by using an exec node. In the command field, enter the path of the Python script as follows.
sudo python3 /home/pi/my_script.py
Change the script path to yours. Use the inject node to run the script and the debug node to view your output.
Ensure you grant superuser permission using sudo and you have python3 installed. | 0 | 1 | 0 | 1 | 2015-08-17T19:07:00.000 | 4 | 0 | false | 32,057,882 | 0 | 0 | 0 | 3 | I'm using Node-Red, hosted on a Raspberry Pi for an IoT project.
How do I trigger a Python script that is on the raspi from Node-Red? I want to run a script that updates the text on an Adafruit LCD shield which is sitting on the Pi
Should I be looking to expose the Python script as a web service somehow?
I'm using a Raspberry Pi B+ |
How to trigger Python script on Raspberry Pi from Node-Red | 32,058,198 | 9 | 8 | 28,176 | 0 | python,raspberry-pi,gpio,iot,node-red | Node-RED supplies an exec node as part of it's core set, which can be used to call external commands, this could be call your python script.
More details of how to use it can be found in the info sidebar when a copy is dragged onto the canvas.
Or you could wrap the script as a web service or just a simple TCP socket, both of which have nodes that can be used to drive them. | 0 | 1 | 0 | 1 | 2015-08-17T19:07:00.000 | 4 | 1.2 | true | 32,057,882 | 0 | 0 | 0 | 3 | I'm using Node-Red, hosted on a Raspberry Pi for an IoT project.
How do I trigger a Python script that is on the raspi from Node-Red? I want to run a script that updates the text on an Adafruit LCD shield which is sitting on the Pi
Should I be looking to expose the Python script as a web service somehow?
I'm using a Raspberry Pi B+ |
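For the TCP-socket option mentioned in the last answer above, a minimal Python 3 sketch of a listener on the Pi that Node-RED's tcp out node could drive; the port and the update_lcd helper are assumptions, not part of any Node-RED API:

```python
import socketserver

def update_lcd(text):
    # placeholder: replace with the real Adafruit LCD shield call
    print("would display:", text)

class LCDHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Each line sent by Node-RED becomes the LCD text.
        text = self.rfile.readline().decode().strip()
        update_lcd(text)

if __name__ == "__main__":
    # Point Node-RED's "tcp out" node at this host and port.
    server = socketserver.TCPServer(("0.0.0.0", 9001), LCDHandler)
    server.serve_forever()
```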
Can Tornado route differently based on Content-Type header? | 32,065,441 | 1 | 1 | 103 | 0 | python,tornado | No. Tornado's routing only considers the hostname and path. You'll have to route this path to a single RequestHandler and then inspect the Content-Type inside that handler. | 0 | 1 | 0 | 0 | 2015-08-17T20:34:00.000 | 1 | 0.197375 | false | 32,059,241 | 0 | 0 | 0 | 1 | For the very same REST route, e.g. /message, can I configure routing to execute different handlers based on the value of the Content-Type header? |
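A minimal sketch of the single-handler dispatch the answer describes; the handler and route names are illustrative:

```python
import tornado.web

class MessageHandler(tornado.web.RequestHandler):
    def post(self):
        # Tornado's URL routing never sees headers, so branch on
        # Content-Type inside the one handler mapped to /message.
        content_type = self.request.headers.get("Content-Type", "")
        if content_type.startswith("application/json"):
            self.write({"handled": "json"})
        elif content_type.startswith("text/plain"):
            self.write("handled plain text")
        else:
            self.set_status(415)  # Unsupported Media Type

app = tornado.web.Application([(r"/message", MessageHandler)])
```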
Efficient way of launching python script from exe | 32,062,059 | 0 | 0 | 46 | 0 | python,python-2.7,executable,popen | If you're simply trying to execute a python script externally, then just use popen() on your script, as you said. | 0 | 1 | 0 | 1 | 2015-08-18T01:03:00.000 | 1 | 0 | false | 32,062,008 | 0 | 0 | 0 | 1 | I know there's a lot of questions asking about the opposite, but is there a particularly good way to launch a Python script from an executable? The executable itself was originally written in Python and compiled using py2exe so I was thinking of using popen() and passing python myscript.py but not sure if that's the most efficient.
The particular script being launched would be Python 2.7 with the Python ArcGIS interpreter. |
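A minimal sketch of the popen() approach; the interpreter and script paths are placeholders (with a py2exe-frozen caller, naming the ArcGIS interpreter explicitly avoids depending on the frozen executable's environment):

```python
import subprocess

# Launch the script under a specific interpreter (paths are placeholders).
proc = subprocess.Popen(
    [r"C:\Python27\ArcGIS10.2\python.exe", r"C:\scripts\myscript.py"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)

out, err = proc.communicate()  # wait for completion and collect output
print(proc.returncode, out, err)
```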
is it better to run tornado on nginx rather than running it by its own? | 32,088,517 | 0 | 0 | 120 | 0 | python,nginx,tornado | Tornado can be run on its own, but there are three major advantages to using nginx as a proxy in front of it:
Nginx is much more efficient at serving static files.
A proxy makes it easier to do rolling restarts of backend processes to perform upgrades without downtime.
Nginx can be configured to more efficiently reject abusive requests (DDoS, etc). | 0 | 1 | 0 | 0 | 2015-08-18T14:38:00.000 | 1 | 1.2 | true | 32,075,475 | 0 | 0 | 0 | 1 | Tornado is a powerful web server and web framework written in Python; it can run on its own (stand-alone) or be run behind another web server, especially nginx.
Is there any performance benefit or other advantage to running Tornado behind Nginx? |
Running Shell script from within Python issue | 32,085,685 | 1 | 0 | 76 | 0 | python,linux,bash,shell | Please ensure you have added:
#!/bin/bash
as the first line and also make sure that the file script.sh has executable permission.
chmod u+x script.sh
then try specifying the complete path:
subprocess.call("/complete/path/script.sh", shell=True) | 0 | 1 | 0 | 0 | 2015-08-19T02:16:00.000 | 2 | 0.099668 | false | 32,085,479 | 0 | 0 | 0 | 1 | So, I am trying to run a Shell script from Python and I double checked that the location of the script.sh is all correct (because when I run it from sublime, the script.sh opens). What I have to call script.sh is:
subprocess.call("script.sh", shell=True)
When I run that, the function returns 0. However, the script is supposed to create a file in my folder and write into it, which it is not doing. It does work when I run the script from cygwin command prompt.
What could be wrong? |
Connecting to web app running on localhost on an Amazon EC2 from another computer | 32,092,809 | 2 | 6 | 14,127 | 0 | python,amazon-ec2,flask,web,localhost | You cannot connect to localhost on a remote machine without a proxy. If you want to test it you will need to change the binding to the public IP address or 0.0.0.0.
You will then have to lock down access to your own IP address through the security settings in AWS. | 0 | 1 | 0 | 0 | 2015-08-19T08:29:00.000 | 2 | 0.197375 | false | 32,090,306 | 0 | 0 | 1 | 1 | Currently I am working on a web app, and I am running my server on an Amazon EC2 instance. I am testing my web app (which uses Flask) by running the server on localhost:5000 as usual. However, I don't have access to a GUI on the instance, so I can't see my app and test it in a browser as I normally would. I have a Mac OS X computer, so my question is: how can I see the localhost of the Amazon EC2 instance from my Mac's browser?
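A minimal sketch of the binding change the answer suggests; remember to open the port (5000 here) only to your own IP in the EC2 security group:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "reachable from outside"

if __name__ == "__main__":
    # Bind to all interfaces instead of 127.0.0.1 so the instance's
    # public IP can reach the development server.
    app.run(host="0.0.0.0", port=5000)
```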
IS reading from buffer quicker than reading from a file in python | 32,107,551 | 0 | 0 | 96 | 0 | python-2.7 | Unless you are significantly compressing before download, and decompressing the image after download, the problem is your 115,200 baud transfer rate, not the speed of reading from a file.
At the standard 8-N-1 line encoding, each byte requires 10 bits to transfer, so you will be transferring 11,520 bytes per second.
In 10 minutes, you will transfer 11,520 * 60 * 10 = 6,912,000 bytes. At 3 bytes per pixel (for R, G, and B), this is 2,304,000 pixels, which happens to be the number of pixels in a 1920 by 1200 image.
The answer is to (a) increase the baud rate; and/or (b) compress your image (using something simple to decompress on the FPGA, like RLE, if it is amenable to that sort of compression). | 0 | 1 | 0 | 1 | 2015-08-19T15:38:00.000 | 1 | 0 | false | 32,100,003 | 0 | 0 | 0 | 1 | I have an FPGA board and I wrote VHDL code that can get images (in binary) from the serial port and save them in the SDRAM on my board. The FPGA then displays the images on a monitor via a VGA cable. My problem is that filling the SDRAM takes too long (about 10 minutes at a 115200 baud rate).
On my computer I wrote Python code to send the image (in binary) to the FPGA via the serial port. My code reads a binary file saved on my hard disk and sends it to the FPGA.
My question is: if I use a buffer to hold my images instead of a binary file, will I get a better result? If so, can you help me do that, please? If not, can you suggest a solution, please?
Thanks in advance. |
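For the RLE idea in the answer above, a minimal Python 3 sketch of an encoder run before the serial transfer; the file name is a placeholder, and whether it helps depends entirely on how compressible the image is:

```python
def rle_encode(data):
    """Encode bytes as (count, value) pairs, count capped at 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out.append(run)      # run length
        out.append(data[i])  # repeated byte value
        i += run
    return bytes(out)

with open("image.bin", "rb") as f:  # placeholder file name
    raw = f.read()

encoded = rle_encode(raw)
print(len(raw), "->", len(encoded), "bytes")
# 'encoded' would then be written to the serial port instead of 'raw',
# with a matching RLE decoder implemented on the FPGA side.
```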
Debugging Python when both PyDev and CDT natures in same Eclipse project | 32,167,307 | 0 | 0 | 702 | 0 | python,c++,eclipse,pydev | After 'googling' around the internet, here is what appears to be working for my particular situation:
Create a C/C++ project (empty makefile project). This produces the following 3 files in my top-level local SVN check-out directory:
.settings
.cproject
.project
Note: I keep my Eclipse workspace separate from my Eclipse project.
Create a separate Python project that is outside of the local SVN check-out directory.
Note: This Eclipse Python project is in my Eclipse workspace.
This creates the following 2 files:
.pydevproject
.project
Copy the .pydevproject to the directory containing the .settings, .cproject, and .project files.
Copy the Python 'nature' elements from the Python .project file to the CDT .project file.
Restart Eclipse if it had been running while editing the dot (.) files.
Finally, get into the 'C/C++ Perspective'. In the 'Project Explorer' window, pull down the 'View Menu'.
Select 'Customize View...'.
Select the 'Content' tab.
Uncheck the 'PyDev Navigator Content' option. | 0 | 1 | 0 | 1 | 2015-08-19T16:16:00.000 | 1 | 0 | false | 32,100,787 | 0 | 0 | 0 | 1 | Eclipse 4.5 (Mars) / Windows 7
I have an Eclipse C/C++ Makefile project that has both Python and C/C++ code. The source code is checked-out from an SVN repository. The build environment is via a MSYS shell using a project specific configuration script to create all Makefiles in the top/sub-directories and 'make', 'make install' to build.
My .project file has both the PyDev and CDT natures configured.
I can switch between the PyDev and C/C++ perspectives and browse code including right-clicking on a symbol and 'open declaration'.
The 'Debug' perspective appears to be specific to the C/C++ perspective.
Do you have experience with configuring an Eclipse project that allows you to debug both Python and C/C++ code? |
Python is not saving .pyc files in filesystem | 32,104,793 | 2 | 0 | 377 | 0 | python,linux | There are a number of places where this enabled-by-default behavior could be turned off.
PYTHONDONTWRITEBYTECODE could be set in the environment
sys.dont_write_bytecode could be set through an out-of-band mechanism (i.e., site-local initialization files or a patched interpreter build).
File permissions could fail to permit it. This need not be obvious! Anything from filesystem mount flags to SELinux tags could have this result. I'd suggest using strace or a similar tool (as available for your platform) to determine whether any attempts to create these files exist.
On an embedded system, it makes much more sense to make this an explicit step rather than runtime behavior: This ensures that performance is consistent (rather than having some runs take longer than others to execute). Use py_compile or compileall to explicitly run ahead-of-time. | 0 | 1 | 0 | 1 | 2015-08-19T19:41:00.000 | 1 | 1.2 | true | 32,104,282 | 1 | 0 | 0 | 1 | I have a python application running in an embedded Linux system. I have realized that the python interpreter is not saving the compiled .pyc files in the filesystem for the imported modules by default.
How can I enable the interpreter to save them? The file system permissions are correct. |
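If the explicit ahead-of-time step from the answer is the route taken, a minimal sketch, run once at build or deploy time; the directory is a placeholder:

```python
import compileall

# Byte-compile every module under the application tree; the .pyc files
# land next to the sources on Python 2 (or in __pycache__ on Python 3).
compileall.compile_dir("/opt/myapp", force=True)
```

The command-line equivalent is python -m compileall /opt/myapp.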
Equivalent of matlab "ans" and running shell commands | 32,135,546 | 0 | 2 | 478 | 0 | python,matlab,shell | You probably want to use the IPython shell (now part of the jupyter project). In the IPython shell you can also run system commands using !, although many basic commands (like ls or cd) work without even needing the !. Unlike in MATLAB, you don't need to pass it as a string (although you can). So !ls works fine in IPython, while in MATLAB you would need to do !'ls'. Further, you can assign the results to a variable in IPython, which you can't do in MATLAB. So a = !ls works in IPython but not in MATLAB. Further, if you use !!, the result is returned in a form easily usable in Python. So !!ls returns a list of file names.
IPython still uses the _ notation for getting the previous result (except, as with Python, None is counted as "no result" and thus is not recorded). You can also get the second-to-last result with __ and the third-to-last with ___. Further, IPython puts a number next to each line in the command prompt. To get the result of a particular line, just do _n where n is the number. So to get the result of the 3rd command, which has the number 3 next to it, just do _3. This still doesn't work if the result is None, though.
It has a ton of features. You can get the previous input (as a string) with _i (and so on, following the same pattern as with the outputs). You can time code with %timeit and %%timeit. You can jump into the debugger after encountering an error. | 0 | 1 | 0 | 0 | 2015-08-20T02:07:00.000 | 3 | 0 | false | 32,108,471 | 1 | 0 | 0 | 1 | These days, I'm transitioning from Matlab to Python after using Matlab/Octave for more than ten years. I have two quick questions:
In the Python interactive mode, is there anything corresponding to Matlab's ans?
How can I run shell commands in the Python interactive mode? Of course, I can use os.system(), but in Matlab we may run shell commands just by placing ! before the actual command. Is there anything similar in Python? |
Python: runtime shebang problems | 32,125,245 | 1 | 0 | 244 | 0 | python,shebang | I accepted John Schmitt's answer because it led me to the solution. However, I am posting what I actually did, because it might be useful for other Hadoopy users.
What I actually did was:
args['cmdenvs'] = ['export VIRTUAL_ENV=/n/2.7.9/ourvenv','export PYTHONPATH=/n/2.7.9/ourvenv', 'export PATH=/n/2.7.9/ourvenv/bin:$PATH']
and passed args into Hadoopy's launch function. In the executable .py files, I put the generic #!/usr/bin/env python shebang. | 0 | 1 | 0 | 0 | 2015-08-20T16:40:00.000 | 2 | 0.099668 | false | 32,123,775 | 1 | 0 | 0 | 1 | Here is the problem I am trying to solve. I don't have a specific question in the title because I don't even know what I need.
We have an ancient Hadoop computing cluster with a very old version of Python installed. What we have done is installed a new version (2.7.9) to a local directory (that we have perms on) visible to the entire cluster, and have a virtualenv with the packages we need. Let's call this path /n/2.7.9/venv/
We are using Hadoopy to distribute Python jobs on the cluster. Hadoopy distributes the python code (the mappers and reducers) to the cluster, which are assumed to be executable and come with a shebang, but it doesn't do anything like activate a virtualenv.
If I hardcode the shebang in the .py files to /n/2.7.9/venv/, everything works. But I want to put the .py files in a library; these files should have some generic shebang like #!/usr/bin/env python. But I tried this and it does not work, because at runtime the virtualenv is not "activated" by the script and therefore it bombs with import errors.
So if anyone has any ideas on how to solve this problem I would be grateful. Essentially I want #!/usr/bin/env python to resolve to /n/2.7.9/venv/ without /n/2.7.9/venv/ being active, or some other solution where I cannot hardcode the shebang.
Currently I am solving this problem by having a run function in the library, and putting a wrapper around this function in the main code (that calls the library) with the hardcoded shebang in it. This is less offensive because the hardcoded shebang makes sense in the main code, but it is still messy because I have to have an executable wrapper file around every function I want to run from the library. |
How do you use Fabric to copy files between remote machines? | 35,467,047 | 0 | 1 | 223 | 0 | python,rsync,fabric | The best way would be running the script from remotemachine1, if you can.
What is the best way to do it? Doing a get on remotemachine1 and then a put to all the remaining machines isn't ideal, as the file is huge and I need to be able to send the Fabric command from my laptop. The remote machines are all on the same network. Or should I do a run('rsync /file_on_remotemachine1 RemoteMachine2:/targetpath/')?
Is there a better way to do this in Fabric ? |
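A hedged sketch of the rsync fan-out with Fabric 1.x; the host names and paths are placeholders, and it assumes remotemachine1 can SSH to the other machines:

```python
from fabric.api import env, run

env.hosts = ["remotemachine1"]  # the machine that already has the file
TARGETS = ["remotemachine%d" % i for i in range(2, 11)]

def distribute():
    # rsync runs on remotemachine1 itself, pushing over the internal
    # network, so the huge file never passes through the laptop.
    for target in TARGETS:
        run("rsync -az /path/to/bigfile %s:/targetpath/" % target)
```

Invoked from the laptop with fab distribute; only the commands travel over the slow link.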
Scapy installation fails due to invalid token | 34,408,487 | 4 | 32 | 16,594 | 0 | python,terminal,installation,scapy | Change os.chmod(fname,0755) to os.chmod(fname,0o755) and re-run | 0 | 1 | 0 | 0 | 2015-08-21T10:56:00.000 | 5 | 0.158649 | false | 32,138,575 | 1 | 0 | 0 | 1 | I have recently taken up learning networks, and I want to install scapy.
I have downloaded the latest version (2.2.0), and have two versions of python on my computer- 2.6.1 and 3.3.2. My OS is windows 7 64 bit.
After extracting scapy and navigating to the correct folder in the terminal, I was instructed to run "python setup.py install". I get the following error-
File "setup.py", line 35
os.chmod(fname,0755)
................................^
......................invalid
token
(dots for alignment)
How do I solve this problem? |
How to run the python file in remote machine directory? | 32,139,340 | 1 | 0 | 85 | 0 | python,python-2.7,python-3.x | Unless you have done something to specifically allow this, such as SSH into machine B first, you cannot do this.
That's a basic safety consideration. If any host A could execute any script on host B, it would be extremely easy to run malicious code on other machines. | 0 | 1 | 0 | 1 | 2015-08-21T11:28:00.000 | 2 | 0.099668 | false | 32,139,162 | 1 | 0 | 0 | 1 | Details:
I have an xxx.py file on machine B.
I am trying to execute that xxx.py file from machine A using a Python script. |
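If SSH access to machine B has been specifically allowed (for example with key-based login), a minimal sketch of running the remote file from a Python script on machine A; the user, host, and path are placeholders:

```python
import subprocess

# Run xxx.py on machine B over SSH and capture its output.
output = subprocess.check_output(
    ["ssh", "user@machineB", "python /home/user/xxx.py"])
print(output)
```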
I installed Python 3.4.3 over 3.4.2 on Windows 7... and now I cannot uninstall Python | 32,146,376 | 3 | 4 | 331 | 0 | python,windows | I used MicrosoftFixit.ProgramInstallUninstall and I was able to remove Python34 and then it reinstalled without any problems. | 0 | 1 | 0 | 0 | 2015-08-21T16:28:00.000 | 2 | 0.291313 | false | 32,145,217 | 1 | 0 | 0 | 2 | I installed Python 3.4.3 over 3.4.2 on Windows 7 and got problems with IDLE not starting.
When I use the Windows uninstaller via the control panel I get the message:
"There is a problem with this Windows Installer package a program required for this install to complete could not be run. Contact your support personnel or package vendor."
If I try to remove Python via the msi file then I get the same message.
There is no Python34 directory on my machine. I noticed that there is an entry in the registry HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\3.4\Modules. I didn't want to mess with my registry, but can I safely delete this entry? Is there any more to delete? |
I installed Python 3.4.3 over 3.4.2 on Windows 7... and now I cannot uninstall Python | 32,147,159 | 1 | 4 | 331 | 0 | python,windows | Had a similar problem. This is what I did:
Restart computer (kill any running processes of Python)
Delete the main Python folder under C drive.
Using CCleaner (or a similar application), use the Tools -> Uninstall feature to remove Python (if it is still there after deleting the folder)
Then go to the Registry window in CCleaner and clean the registry. Python should now be completely gone from your computer. | 0 | 1 | 0 | 0 | 2015-08-21T16:28:00.000 | 2 | 0.099668 | false | 32,145,217 | 1 | 0 | 0 | 2 | I installed Python 3.4.3 over 3.4.2 on Windows 7 and got problems with IDLE not starting.
When I use the Windows uninstaller via the control panel I get the message:
"There is a problem with this Windows Installer package a program required for this install to complete could not be run. Contact your support personnel or package vendor."
If I try to remove Python via the msi file then I get the same message.
There is no Python34 directory on my machine. I noticed that there is an entry in the registry HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\3.4\Modules. I didn't want to mess with my registry, but can I safely delete this entry? Is there any more to delete? |
Trying to install python modules on cmd with windows 10 - Access is denied | 32,182,571 | 0 | 0 | 1,078 | 0 | cmd,pip,python-3.4,windows-10 | Are you running the command line as administrator? | 0 | 1 | 0 | 0 | 2015-08-24T12:35:00.000 | 1 | 0 | false | 32,182,492 | 1 | 0 | 0 | 1 | I have been having trouble installing pip modules. I have Python 3.4 and Windows 10. When I type python pip install [package] into cmd, the computer comes up with an error saying "This app can't run on your pc" and cmd returns "Access is denied."
Would this be a Windows 10 incompatibility, or is there something I'm missing/doing wrong?
Thanks in advance for the help. |
Apache Spark RDD transformations with 2 elements as input | 32,192,487 | 1 | 0 | 237 | 0 | python,tcp,apache-spark,pyspark,rdd | It seems like what you're looking for might be best done with something like reduceByKey, where you can remove the duplicates as you go for each sequence (assuming that the resulting amount of data for each sequence isn't too large; in your example it seems pretty small). Sorting the results can be done with the standard sortBy operator.
Saving the data out to HDFS is indeed done in parallel on the workers; forwarding the data to the Spark client app would create a bottleneck and sort of defeat the purpose (although if you do want to bring the data back locally, you can use collect, provided that the data is pretty small). | 0 | 1 | 0 | 0 | 2015-08-24T14:19:00.000 | 1 | 0.197375 | false | 32,184,638 | 0 | 1 | 0 | 1 | I am trying to 'follow-tcp-stream' in a Hadoop sequence file that is structured as follows:
i. Time stamp as key
ii. Raw Ethernet frame as value
The file contains a single TCP session, and because the record is very long, the TCP sequence-id overflows (which means that the seq-id is not necessarily unique, and the data cannot be sorted by seq-id alone because it would get scrambled).
I use Apache Spark/Python/Scapy.
To create the TCP-stream I intended to:
1.) Filter out any non TCP-with-data frames
2.) Sort the RDD by TCP-sequence-ID (within each overflow cycle)
3.) Remove any duplicates of sequence-ID (within each overflow cycle)
4.) Map each element to TCP data
5.) Store the resulting RDD as testFile within HDFS
Illustration of operation on RDD:
input: [(time:100, seq:1), (time:101, seq:21), (time:102, seq:11), (time:103, seq:21), ... , (time:1234, seq=1000), (time:1235, seq:2), (time:1236, seq:30), (time:1237, seq:18)]
output:[(seq:1, time:100), (seq:11, time:102), (seq:21, time:101), ... ,(seq=1000, time:1234), (seq:2, time:1235), (seq:18, time:1237), (seq:30, time:1236)]
Steps 1 and 4 are obvious. The ways I came up with for solving 2 and 3 required comparison between adjacent elements within the RDD, with the option to return any number of new elements (not necessarily 2), without triggering an action of course, so the code will run in parallel. Is there any way to do this? I went over the RDD class methods a few times and nothing came up.
Another issue is the storage of the RDD (step 5). Is it done in parallel? Does each node store its part of the RDD to a different Hadoop block? Or is the data first forwarded to the Spark client app, which then stores it? |
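A hedged PySpark sketch of steps 1-5 from the question, using the reduceByKey/sortBy approach from the answer; raw_rdd and the Scapy-based to_record parser are placeholders, and the overflow-cycle counter is whatever logic detects sequence-ID wraparound:

```python
def to_record(frame):
    # placeholder: Scapy-based parsing returning
    # (cycle, seq_id, time, payload) for TCP data frames, else None
    return None

stream = (raw_rdd
          .map(to_record)
          .filter(lambda r: r is not None)              # step 1: data frames only
          .map(lambda r: ((r[0], r[1]), (r[2], r[3])))  # key on (cycle, seq_id)
          .reduceByKey(lambda a, b: a)                  # step 3: drop duplicate seq-ids
          .sortByKey()                                  # step 2: order by cycle, then seq
          .map(lambda kv: kv[1][1]))                    # step 4: keep the TCP data

stream.saveAsTextFile("hdfs:///output/tcp_stream")      # step 5, written by the workers
```

On the storage question: saveAsTextFile executes on the workers, each writing its own partition directly to HDFS; nothing is funneled through the driver.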
Ipython notebook - fails to open | 50,180,440 | 0 | 2 | 2,936 | 0 | ipython-notebook | I just got the same problem when I upgraded my python2.7 to python3 using homebrew yesterday. I tried the googled suggestions, but none really solved the problem. Then I checked the first line of my pip, pip3, ipython, ipython2, ipython3 and jupyter scripts, and found that the problem is actually that the first lines of jupyter and ipython2 still point to the old python2.7 path "/usr/local/opt/python/bin/python2.7", which does not exist anymore. So I just changed the first line to "#!/usr/local/opt/python/bin/python3.6" for jupyter and the problem was solved. | 0 | 1 | 0 | 0 | 2015-08-24T16:38:00.000 | 2 | 0 | false | 32,187,398 | 1 | 0 | 0 | 1 | I have tried to open ipython notebook without luck and don't know why.
When I type the command "ipython notebook", the output I receive is:
-bash: /usr/local/bin/ipython: /usr/local/opt/python/bin/python2.7: bad interpreter: No such file or directory
Any help? |
What happened when celery task code was changed before prefetched task executed? | 32,202,757 | 0 | 0 | 181 | 0 | python,rabbitmq,celery | No, you must reload the workers. | 0 | 1 | 0 | 0 | 2015-08-25T05:15:00.000 | 1 | 1.2 | true | 32,195,993 | 0 | 0 | 0 | 1 | Does Celery detect changes to task code, even if a task was already prefetched with the old code? |
Connecting to a known external ip and internal ip without port forwarding | 32,205,167 | 1 | 1 | 918 | 0 | python,python-2.7,sockets,networking,tcp | Basically, it isn't (and shouldn't be) possible for you to connect to your friend's private IP through his firewall. That's the point of firewalls :-o
Two solutions: the simplest is a port forwarding rule on his firewall; the second is, as you suggest, an external server that both clients connect to. | 0 | 1 | 1 | 0 | 2015-08-25T13:02:00.000 | 2 | 0.099668 | false | 32,204,773 | 0 | 0 | 0 | 2 | I know my friend's external IP (from whatsmyip) and internal IP (e.g 192.168.1.x) and he knows mine. How do I establish a TCP connection with him?
Is it possible to do it without any port forwarding? Or do I require a server with an external IP to transfer messages between me and him? |
Connecting to a known external ip and internal ip without port forwarding | 32,220,457 | 3 | 1 | 918 | 0 | python,python-2.7,sockets,networking,tcp | You cannot do that because of NAT(Network Address Translation). The public ip you see by whatsmyip.com is the public ip of your router. Since different machines can connect to the same router all of them will have the same public ip( that of the router). However each of them have an individual private ip assigned by the router. Each outgoing connection from the private network has to be distinguished hence the router converts the connection(private ip, port) to a (different port) and adds it to the NAT table.
So if you really want a working connection, you would have to determine both the internal and external ports for both ends and do the port forwarding in the router. It's a bit tricky, hence techniques like TCP hole punching are used. | 0 | 1 | 1 | 0 | 2015-08-25T13:02:00.000 | 2 | 0.291313 | false | 32,204,773 | 0 | 0 | 0 | 2 | I know my friend's external IP (from whatsmyip) and internal IP (e.g 192.168.1.x) and he knows mine. How do I establish a TCP connection with him?
Is it possible to do it without any port forwarding? Or do I require a server with an external IP to transfer messages between me and him? |
Twisted + Django as a daemon process plus Django + Apache | 32,235,411 | 1 | 0 | 143 | 1 | python,django,sqlite,twisted,daemon | No there is nothing inherently wrong with that approach. We currently use a similar approach for a lot of our work. | 0 | 1 | 0 | 0 | 2015-08-25T20:49:00.000 | 2 | 0.099668 | false | 32,213,796 | 0 | 0 | 1 | 1 | I'm working on a distributed system where one process is controlling a hardware piece and I want it to be running as a service. My app is Django + Twisted based, so Twisted maintains the main loop and I access the database (SQLite) through Django, the entry point being a Django Management Command.
On the other hand, for user interface, I am writing a web application on the same Django project on the same database (also using Crossbar as websockets and WAMP server). This is a second Django process accessing the same database.
I'm looking for some validation here. Is anything fundamentally wrong with this approach? I'm particularly worried about database issues (two different processes accessing it via the Django ORM). |
Python Scripts on Windows 10 | 37,099,911 | 0 | 3 | 7,001 | 0 | windows-10,python-3.5 | In Python 3 the print statement is replaced by the print() function, so you can use print(os.getcwd()) instead of print os.getcwd(). | 0 | 1 | 0 | 0 | 2015-08-26T14:57:00.000 | 2 | 0 | false | 32,230,048 | 0 | 0 | 0 | 1 | I am a new python user. I need to run scripts written by (remote) coworkers.
My first install of Python is 3.5.0.rc1. It was installed on a Windows 10 machine using the python webinstaller.
On installation, I told the installer to add all Python components, and to add Python to the PATH. I authorized python for all users.
I can load and access Python through the command line. It will respond to basic instructions (>>> 1+1 2).
However, I do not get the expected response from some basic commands (eg, >>>import os followed by >>>print os.getcwd() results in a syntax error rather than in a print of the directory containing the python executable).
Further, I cannot get python to execute scripts (eg. >>>python test.py). This results in a syntax error, which seems to point to various places in the script file name. I have tried a quick search of previous questions on Stack Overflow, and can't seem to find discussion of what seems to be a failure at such a basic level.
Perhaps I have not loaded all the necessary python modules, or is there something else that I'm missing? |
mac following brew install python warning thrown unstable state | 32,239,497 | 0 | 0 | 67 | 0 | python,macos,homebrew | Homebrew's Python build will only attempt to recognize brewed or system Tcl/Tk. To build against Homebrew's Tcl/Tk (and install it first if necessary), install Python with brew install python3 --with-tcl-tk. | 0 | 1 | 0 | 0 | 2015-08-27T00:37:00.000 | 1 | 0 | false | 32,238,882 | 0 | 0 | 0 | 1 | Pundits warn against installing python in a mac usr/bin/Frameworks area.
Python self-installers write to Framework by default.
Pundits advise using a brew install of Python to avoid the above.
Brew install python, however, results in an unstable state:
Idle reports tclsh mismatch.
Pundits advise active state installer of correct tclsh. These are high-level python cognoscenti, and real pundits, lilies amidst the thorns.
Active-state installs to Frameworks (can you imagine?).
The said installer allows no other installation directory.
Brew installed python fails to see the active-state tclsh.
However, if one of you admonitory pundits could help me with a logical, non-idiomatic description of a process that will associate the appropriate "tclsh" in usr/bin with python3 in usr/local/bin, I would be ecstatic. |
arbitrary gql filters and sorts without huge index.yaml | 32,281,354 | 0 | 0 | 57 | 0 | python,google-app-engine,google-cloud-datastore,gql | It seems like Google Cloud SQL would do what I need, but since I'm trying not to spend any money on this project and Cloud SQL doesn't have a free unlimited tier, I've resorted to querying by my filter and then sorting the results myself. | 0 | 1 | 0 | 0 | 2015-08-27T00:40:00.000 | 1 | 1.2 | true | 32,238,896 | 0 | 0 | 0 | 1 | I've written a tiny app on Google App Engine that lets users upload files which have about 10 or so string and numeric fields associated with them. I store the files and these associated fields in an ndb model. I then allow users to filter and sort through these files, using arbitrary fields for sorting and arbitrary fields or collections of fields for filtering. However, whenever I run a sort/filter combination on my app that I didn't run on the dev_appserver before uploading, I get a NeedIndexError along with a suggested index, which seems to be unique for every combination of sort and filter fields. I tried running through every combination of sort/filter field on the appserver, generating a large index.yaml file, but at some point the app stopped loading altogether (I wasn't monitoring whether this was a gradual slowdown or a sudden breaking).
My questions are as follows. Is this typical behavior for the GAE datastore, and if not what parts of my code would be relevant for troubleshooting this? If this is typical behavior, is there an alternative to the datastore on GAE that would let me do what I want? |
Tornado websocket pings | 32,245,768 | 0 | 0 | 1,215 | 0 | python,websocket,tornado | The on_close event can only be triggered when the connection is closed.
You can send a ping and wait for an on_pong event.
Timeouts are typically hard to detect, since you won't even get a message that the socket is closed. | 0 | 1 | 1 | 0 | 2015-08-27T09:13:00.000 | 1 | 0 | false | 32,245,227 | 0 | 0 | 1 | 1 | I'm running a Python Tornado server with a WebSocket handler.
We've noticed that if we abruptly disconnect a client (unplugging a cable, for example), the server has no indication that the connection was broken. No on_close event is raised.
Is there a workaround?
I've read there's an option to send a ping, but I didn't see anyone use it in the examples online, and I'm not sure how to use it or whether it will address this issue. |
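A hedged sketch of the ping/on_pong idea on top of Tornado's WebSocketHandler; the interval and timeout values are arbitrary:

```python
import time
import tornado.ioloop
import tornado.websocket

PING_INTERVAL = 15  # seconds between pings
PONG_TIMEOUT = 30   # close if no pong arrives within this window

class LiveHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        self.last_pong = time.time()
        # Periodically ping the client and look for stale connections.
        self.pinger = tornado.ioloop.PeriodicCallback(
            self.check_alive, PING_INTERVAL * 1000)
        self.pinger.start()

    def check_alive(self):
        if time.time() - self.last_pong > PONG_TIMEOUT:
            self.close()  # this does raise on_close on our side
        else:
            self.ping(b"keepalive")

    def on_pong(self, data):
        self.last_pong = time.time()

    def on_close(self):
        self.pinger.stop()
```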
Can't find a way to deal with Google Drive API 403 Rate Limit Exceeded | 32,260,966 | 0 | 1 | 884 | 0 | google-api,google-drive-api,google-api-python-client | I believe that this is a limit that Google sets to stop people spamming the service and tying it up. It doesn't have anything to do with your app itself, but is set on the Google server side. If the Google server receives over a particular number of requests within a certain time, this is the error you get. There is nothing you can do in your app to overcome this. You can talk to Google about it, and paying for Google licenses etc. can usually allow you much higher limits before being restricted. | 0 | 1 | 0 | 0 | 2015-08-27T23:13:00.000 | 1 | 0 | false | 32,260,884 | 0 | 0 | 0 | 1 | I have a huge amount of users and files in a Google Drive domain. +100k users, +10M of files. I need to fetch all the permissions for these files every month.
Each user has files owned by themselves, and files shared by other domain users and/or external users (users that don't belong to the domain). Most of the files are owned by domain users. There are more than 7 million unique files owned by domain users.
My app is a backend app, which runs with a token granted by the domain admin user.
I think that doing batch requests is the best way to do this, so I configured my app to 1000 requests per user in the Google Developer Console.
I tried the following cases:
1000 requests per batch, up to 1000 per user -> lots of user rate limits
1000 requests per batch, up to 100 per user -> lots of rate limit errors
100 requests per batch, up to 100 per user -> lots of rate limit errors
100 requests per batch, up to 50 per user -> lots of rate limits errors
100 requests per batch, up to 10 per user -> not errors anymore
I'm using the quotaUser parameter to uniquely identify each user in batch requests.
I checked my app to confirm that each batch was not sent to Google outside its time window. I also checked that each batch had no more than the configured limit of file IDs to fetch. Everything was right.
I also wait for each batch to finish before sending the next one.
Every time I see a 403 Rate Limit Exceeded, I do an exponential backoff. Sometimes I have to retry after 9 steps, which is 2**9 seconds waiting.
So, I can't see the point of Google Drive API limits. I'm sure my app is doing everything right, but I can't increase the limits to fetch more permissions per second. |
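For reference, a minimal sketch of the exponential backoff loop described in the question; the exception type and retry count are illustrative:

```python
import random
import time

class RateLimitError(Exception):
    """Placeholder for whatever detects a 403 Rate Limit Exceeded."""

def with_backoff(request_fn, max_retries=9):
    for attempt in range(max_retries):
        try:
            return request_fn()
        except RateLimitError:
            # Wait 2**attempt seconds plus a little jitter, then retry.
            time.sleep(2 ** attempt + random.random())
    raise RuntimeError("giving up after %d retries" % max_retries)
```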
How to print barcode in Centos 6 using python | 32,270,859 | 2 | 1 | 281 | 0 | python-2.7,centos6,zebra-printers,barcode-printing | If you are printing to a network printer, open a TCP connection to port 9100. If you are printing to a USB printer, look up a USB library for Python.
Once you have a connection, send a print string formatted in ZPL. Look on the Zebra site for the ZPL manual; there are examples in there on how to print a barcode.
Normal Linux drivers will print graphics and text but do not have a barcode font. | 0 | 1 | 0 | 0 | 2015-08-28T10:28:00.000 | 1 | 0.379949 | false | 32,268,889 | 0 | 0 | 0 | 1 | I want to print barcodes on my Zebra desktop label printer on CentOS 6.5, but I did not find any Python drivers for that, nor any script that I can use in my project.
Does anyone know how to print a barcode on a Zebra printer? |
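A hedged sketch of the network-printer route from the answer: open TCP port 9100 and send a small ZPL job. The printer IP is a placeholder, and the ZPL is a plausible Code 128 example; check it against the ZPL manual for your model.

```python
import socket

zpl = (b"^XA\n"
       b"^FO50,50^BY2\n"
       b"^BCN,100,Y,N,N\n"   # Code 128, height 100, with interpretation line
       b"^FD123456789^FS\n"  # the barcode data
       b"^XZ")

# Zebra network printers listen on raw TCP port 9100.
with socket.create_connection(("192.168.1.50", 9100)) as s:
    s.sendall(zpl)
```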
running command line program from npyscreen select option | 32,323,128 | 0 | 0 | 903 | 0 | python | npyscreen has a CallSubShell function which allows you to execute a command-line program. CallSubShell actually switches from curses mode to normal mode, executes the command using os.system, and then switches back to curses mode at the end of the command's execution.
Note: I was not able to make standard input work properly during the command execution. Also, you may want to clear the screen before calling CallSubShell. | 0 | 1 | 0 | 0 | 2015-08-31T07:22:00.000 | 2 | 1.2 | true | 32,305,936 | 1 | 0 | 0 | 2 | I have an npyscreen program which has a set of options, and I also have another normal Python command-line program which interacts with the user by asking yes/no question(s) like a wizard.
I want to integrate the normal Python command-line program into the npyscreen program, so that when the user selects an option I can run this normal Python program. I do not want to reimplement the whole program inside npyscreen.
Is there any way to do this?
I found one function, "npyscreen.CallSubShell", but didn't find any example code or much help in the documentation about this function.
Thanks in advance for any help.
/Shan |
running command line program from npyscreen select option | 32,691,384 | 0 | 0 | 903 | 0 | python | Thanks for the solution, Shan. This works for me. Also, as you said, uncommenting curses.endwin() works for scripts that are interactive. | 0 | 1 | 0 | 0 | 2015-08-31T07:22:00.000 | 2 | 0 | false | 32,305,936 | 1 | 0 | 0 | 2 | I have an npyscreen program which has a set of options, and I also have another normal Python command-line program which interacts with the user by asking yes/no question(s) like a wizard.
I want to integrate the normal Python command-line program into the npyscreen program, so that when the user selects an option I can run this normal Python program. I do not want to reimplement the whole program inside npyscreen.
Is there any way to do this?
I found one function, "npyscreen.CallSubShell", but didn't find any example code or much help in the documentation about this function.
Thanks in advance for any help.
/Shan |
WebSockets connection refused | 32,324,682 | 0 | 0 | 2,943 | 0 | python,django,nginx,websocket | Just needed to change the port... Maybe this will help somebody. | 0 | 1 | 0 | 0 | 2015-08-31T12:37:00.000 | 2 | 0 | false | 32,311,470 | 0 | 0 | 1 | 2 | I have a Django app with a real-time chat using Tornado, Redis and WebSockets. The project is running on an Ubuntu server. On my local server everything works fine, but it doesn't work at all on the production server. I get an error:
WebSocket connection to 'ws://mysite.com:8888/dialogs/' failed: Error in connection establishment: net::ERR_CONNECTION_REFUSED
privatemessages.js:232
close dialog ws
I have tried changing the nginx configuration and settings.py, and tried opening port 8888, but still no result. |
WebSockets connection refused | 32,313,073 | 0 | 0 | 2,943 | 0 | python,django,nginx,websocket | It seems you are using WebSockets as a separate service, so try adding the CORS header in your nginx config: add_header Access-Control-Allow-Origin *; | 0 | 1 | 0 | 0 | 2015-08-31T12:37:00.000 | 2 | 0 | false | 32,311,470 | 0 | 0 | 1 | 2 | I have a Django app with a real-time chat using Tornado, Redis and WebSockets. The project is running on an Ubuntu server. On my local server everything works fine, but it doesn't work at all on the production server. I get an error:
WebSocket connection to 'ws://mysite.com:8888/dialogs/' failed: Error in connection establishment: net::ERR_CONNECTION_REFUSED
privatemessages.js:232
close dialog ws
I have tried changing the nginx configuration and settings.py, and tried opening port 8888, but still no result. |
Distutils can't find gcc from mingwpy in WinPython | 32,594,079 | 1 | 1 | 596 | 0 | python,mingw,distutils | Ok, figured it out. If you run the python.exe included with WinPython, it doesn't set the environment variables and so won't find gcc. If you run the special WinPython.exe, it will set the variables and everything works fine. | 0 | 1 | 0 | 0 | 2015-08-31T14:37:00.000 | 1 | 1.2 | true | 32,313,826 | 1 | 0 | 0 | 1 | I'm trying out WinPython as an option to recommend to users who need to run my Python software. Crucially, distutils needs to work with MinGW.
WinPython includes mingwpy and provides a gcc.exe in the Python scripts directory. When checking os.environ I can see that this directory is added to the (temporary) path environment variable.
Unfortunately, distutils still can't find gcc. Does anyone know if there is a way to make distutils find the included gcc file without making changes to the system? |
Is it possible to Bulk Insert using Google Cloud Datastore | 33,367,328 | 7 | 6 | 3,726 | 1 | python,mysql,google-cloud-datastore | There is no "bulk-loading" feature for Cloud Datastore that I know of today, so if you're expecting something like "upload a file with all your data and it'll appear in Datastore", I don't think you'll find anything.
You could always write a quick script using a local queue that parallelizes the work.
The basic gist would be:
Queuing script pulls data out of your MySQL instance and puts it on a queue.
(Many) Workers pull from this queue, and try to write the item to Datastore.
On failure, push the item back on the queue.
Datastore is massively parallelizable, so if you can write a script that will send off thousands of writes per second, it should work just fine. Further, your big bottleneck here will be network IO (after you send a request, you have to wait a bit to get a response), so lots of threads should get a pretty good overall write rate. However, it'll be up to you to make sure you split the work up appropriately among those threads.
Now, that said, you should investigate whether Cloud Datastore is the right fit for your data and durability/availability needs. If you're taking 120m rows and loading it into Cloud Datastore for key-value style querying (aka, you have a key and an unindexed value property which is just JSON data), then this might make sense, but loading your data will cost you ~$70 in this case (120m * $0.06/100k).
If you have properties (which will be indexed by default), this cost goes up substantially.
The cost of operations is $0.06 per 100k, but a single "write" may contain several "operations". For example, let's assume you have 120m rows in a table that has 5 columns (which equates to one Kind with 5 properties).
A single "new entity write" is equivalent to:
+ 2 (1 x 2 write ops fixed cost per new entity)
+ 10 (5 x 2 write ops per indexed property)
= 12 "operations" per entity.
So your actual cost to load this data is:
120m entities * 12 ops/entity * ($0.06/100k ops) = $864.00 | 0 | 1 | 0 | 0 | 2015-08-31T16:47:00.000 | 3 | 1.2 | true | 32,316,088 | 0 | 0 | 1 | 1 | We are migrating some data from our production database and would like to archive most of this data in the Cloud Datastore.
Eventually we would move all our data there, however initially focusing on the archived data as a test.
Our language of choice is Python, and have been able to transfer data from mysql to the datastore row by row.
We have approximately 120 million rows to transfer, and a one-row-at-a-time method will take a very long time.
Has anyone found some documentation or examples on how to bulk insert data into cloud datastore using python?
Any comments or suggestions are appreciated; thank you in advance. |
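A minimal Python 3 sketch of the queue/worker pattern from the answer, with the Datastore write left as a placeholder (write_entity would wrap whichever client library is used):

```python
import queue
import threading

work = queue.Queue()

def write_entity(row):
    # placeholder: convert the MySQL row and write it to Cloud Datastore
    pass

def worker():
    while True:
        row = work.get()
        try:
            write_entity(row)
        except Exception:
            work.put(row)  # on failure, push the item back on the queue
        finally:
            work.task_done()

# Network IO is the bottleneck, so many threads pay off.
for _ in range(50):
    threading.Thread(target=worker, daemon=True).start()

# Producer side (placeholder): for row in mysql_cursor: work.put(row)
# then work.join() to wait for the queue to drain.
```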
getting "SyntaxError" when installing twilio in the Windows command line interface | 40,198,608 | 1 | 2 | 3,665 | 0 | python-2.7 | go to the command prompt
it will say the account and all that jazz.
type cd ..
then hit enter
it will say C:\Users>
type cd .. again
then it will say C:>
type cd python27 (or the name of your python folder)
it will say C:\Python27>
type cd scripts
it will say C:\Python27\Scripts>
type easy_install twilio
then wait for it to run the processes, and then you will have twilio installed for Python. | 0 | 1 | 0 | 0 | 2015-09-02T00:53:00.000 | 3 | 0.066568 | false | 32,343,072 | 0 | 0 | 0 | 2 | I'm new here and also a new Python learner. I was trying to install the twilio package through the Windows command-line interface, but I got a syntax error (please see below). I know there are related posts; however, I was still unable to make it work after trying those solutions. Perhaps I need to set the path in the command line, but I really have no idea how to do that... (I can see the easy_install and pip files in the Scripts folder under Python.) Can anyone please help? Thanks in advance!
Microsoft Windows [Version 6.3.9600]
(c) 2013 Microsoft Corporation. All rights reserved.
C:\WINDOWS\system32>python
Python 2.7.10 (default, May 23 2015, 09:44:00) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> easy_install twilio
  File "<stdin>", line 1
    easy_install twilio
                      ^
SyntaxError: invalid syntax |
getting "SyntaxError" when installing twilio in the Windows command line interface | 40,895,682 | 3 | 2 | 3,665 | 0 | python-2.7 | You should not type python first, then it becomes python command line.
Open a new command prompt and directly type:
easy_install twilio | 0 | 1 | 0 | 0 | 2015-09-02T00:53:00.000 | 3 | 0.197375 | false | 32,343,072 | 0 | 0 | 0 | 2 | I'm new here and also a new Python learner. I was trying to install the twilio package through the Windows command-line interface, but I got a syntax error (please see below). I know there are related posts; however, I was still unable to make it work after trying those solutions. Perhaps I need to set the path in the command line, but I really have no idea how to do that... (I can see the easy_install and pip files in the Scripts folder under Python.) Can anyone please help? Thanks in advance!
Microsoft Windows [Version 6.3.9600]
(c) 2013 Microsoft Corporation. All rights reserved.
C:\WINDOWS\system32>python
Python 2.7.10 (default, May 23 2015, 09:44:00) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> easy_install twilio
  File "<stdin>", line 1
    easy_install twilio
                      ^
SyntaxError: invalid syntax |
Does Jupyter data path support unicode? | 32,355,092 | 1 | 2 | 503 | 0 | python,unicode,ipython,jupyter | Well, setting the IPYTHONDIR variable to another location, rather than the default one (which includes unicode characters), solved the problem. It is not an elegant solution, in fact, but it works. | 0 | 1 | 0 | 0 | 2015-09-02T11:03:00.000 | 1 | 0.197375 | false | 32,351,417 | 1 | 0 | 0 | 1 | The Jupyter data path on my laptop includes unicode characters because my name has specific letters (Ö and ü) which are not available in plain Latin. I tried to change the data path by changing the JUPYTER_PATH variable, but according to the documentation it must include the %APPDATA% variable, and unfortunately %APPDATA% also includes the same letters. Is there any way to solve this problem?
Error during mediaproxy installation | 32,355,833 | 0 | 0 | 118 | 0 | python,gcc,proxy,compilation,media | Could be a dependency issue. Give this a shot:
sudo apt-get install build-essential autoconf libtool pkg-config python-opengl python-imaging python-pyrex python-pyside.qtopengl idle-python2.7 qt4-dev-tools qt4-designer libqtgui4 libqtcore4 libqt4-xml libqt4-test libqt4-script libqt4-network libqt4-dbus python-qt4 python-qt4-gl libgle3 python-dev | 0 | 1 | 0 | 1 | 2015-09-02T14:18:00.000 | 1 | 0 | false | 32,355,681 | 0 | 0 | 0 | 1 | I am installing mediaproxy on my Debian server. Please review the error pasted below. I have also tried installing the dependencies, but this error still occurs. I need help with this.
root@server:/usr/local/src/mediaproxy-2.5.2# ./setup.py build
running build
running build_py
running build_ext
building 'mediaproxy.interfaces.system._conntrack' extension
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -DMODULE_VERSION=2.5.2 -I/usr/include/python2.7 -c mediaproxy/interfaces/system/_conntrack.c -o build/temp.linux-x86_64-2.7/mediaproxy/interfaces/system/_conntrack.o
mediaproxy/interfaces/system/_conntrack.c:12:29: fatal error: libiptc/libiptc.h: No such file or directory
 #include <libiptc/libiptc.h>
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
Thanks. Faisal |
How to run a batch file without launching a "command window"? | 32,371,292 | 0 | 0 | 1,362 | 0 | python,batch-file,window,command | Run your script with pythonw.exe instead of python.exe and it won't show the DOS shell. | 0 | 1 | 0 | 0 | 2015-09-03T08:52:00.000 | 1 | 1.2 | true | 32,371,099 | 0 | 0 | 0 | 1 | I don't want to open a command window when I am running the application.
I pointed the shortcut to the .bat file while creating the .exe file; the application is Python based.
The code in the .bat file is like this:
@python\python.exe -m demo.demo %*
where demo is my application name (.bat file name) |
Does Installer Need an Internet Connection? | 32,380,336 | 1 | 1 | 979 | 0 | python-2.7,installation | If you mean the Python installer for Windows, yes, it's enough: the installer doesn't need an internet connection. But if you want to install additional modules through pip, you will need an internet connection. | 0 | 1 | 0 | 0 | 2015-09-03T15:23:00.000 | 1 | 1.2 | true | 32,379,716 | 1 | 0 | 0 | 1 | I have a Windows machine that I want to install Python (2.7) on.
That machine is not connected to the internet and never will be.
Hence the question: if I download the thing that the python site calls the installer and copy it to that machine, will that be enough to install Python? Or does the installer need internet access, like so many "installers" these days?
(Yes, I could just try it. Got a very slow connection...)
If anyone happens to know the answer to the same question regarding wxPython, that would be great.
Thanks. |
GAE/python site is no longer handling requests prefaced with 'www.' | 32,404,416 | 0 | 0 | 26 | 0 | python,google-app-engine,google-cloud-datastore | Here's how I remedied this:
I went to console.developers.google.com > project > hockeybias-hrd > appengine > settings > domains > add
In 'step 2' on that page I put the 'www' for the subdomain in the textbox which enabled the Add button.
I clicked on the 'Add' button and the issue was solved.
I will note that this was the second time I have been 'head-faked' by google's use of greyed-out text to mean something other than 'disabled'... 'www' was the default value in the subdomain textbox - BUT it was greyed-out AND the 'Add' button was disabled right next to it. So, I did not initially think I could enter a value there. | 0 | 1 | 0 | 0 | 2015-09-03T16:16:00.000 | 1 | 0 | false | 32,380,760 | 0 | 0 | 1 | 1 | If a user enters 'hockeybias.com' into his/her browser as a URL to get to my hockey news aggregation site, the default page comes up correctly. It has in the past and does so today.
However, as of this summer, if someone uses ‘www.hockeybias.com’ the user will get the following error message:
Error: Not Found
The requested URL / was not found on this server.
This is a relatively new issue for me as ‘www.hockeybias.com’ worked fine in the past.
The issue seems to have come up after I migrated from the ‘Master/Slave Datastore’ version of GAE to the ‘High Replication Datastore’ (HRD) version of GAE earlier this summer.
The issue occurred while the site used python2.5; I migrated the site to python2.7 this morning and am still having the issue.
Import Error: No module name libstdcxx | 48,399,591 | 0 | 15 | 14,223 | 0 | python,c++,c,linux | If you used sudo to start the gdb, make sure you have the PATH correct.
Try this: sudo PATH=$PATH gdb ... | 0 | 1 | 0 | 0 | 2015-09-04T04:16:00.000 | 4 | 0 | false | 32,389,977 | 0 | 0 | 0 | 2 | When I use gdb to debug my C++ program with a segmentation fault, I get this error in gdb:
Traceback (most recent call last):
File "/usr/share/gdb/auto-load/usr/lib/x86_64-linux- gnu/libstdc++.so.6.0.19-gdb.py", line 63, in
from libstdcxx.v6.printers import register_libstdcxx_printers
ImportError: No module named 'libstdcxx'
I am using GDB 7.7.1 and g++ version 4.8.4. I have googled around but haven't found answers. Can anyone solve this error? Thank you very much.
Import Error: No module name libstdcxx | 33,897,420 | 21 | 15 | 14,223 | 0 | python,c++,c,linux | This is a bug in /usr/lib/debug/usr/lib/$triple/libstdc++.so.6.0.18-gdb.py;
When you start gdb, please enter:
python sys.path.append("/usr/share/gcc-4.8/python"); | 0 | 1 | 0 | 0 | 2015-09-04T04:16:00.000 | 4 | 1 | false | 32,389,977 | 0 | 0 | 0 | 2 | When I use gdb to debug my C++ program with a segmentation fault, I get this error in gdb:
Traceback (most recent call last):
File "/usr/share/gdb/auto-load/usr/lib/x86_64-linux- gnu/libstdc++.so.6.0.19-gdb.py", line 63, in
from libstdcxx.v6.printers import register_libstdcxx_printers
ImportError: No module named 'libstdcxx'
I am using GDB 7.7.1 and g++ version 4.8.4. I have googled around but haven't found answers. Can anyone solve this error? Thank you very much.
"sudo" operations from python daemon | 32,468,089 | 0 | 2 | 299 | 0 | python,linux | In this case I have a flask back end that needed to do something privileged. I broke it up into two back ends - one unprivileged and another small privileged piece rather than use sudo.
It is also possible to run sudo in a pty but I decided against this approach as it does indeed have a security flaw. | 0 | 1 | 0 | 0 | 2015-09-04T18:26:00.000 | 1 | 0 | false | 32,404,408 | 0 | 0 | 0 | 1 | I am writing a python administrative daemon on linux that needs to start/stop other services. Following the principle of least privilege, I want to run this normally with regular user privileges but when it needs to start/stop other services, I want it to become root. Essentially I want to do what sudo would do from the command line. I cannot directly exec sudo from the daemon because it has no tty. I want to avoid running the daemon as root when it does not need to run as root. Is there any way to do this from python without needing to use sudo?
Thank you in advance.
Ranga. |
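For illustration, here is a minimal sketch of the split-backend approach described in the answer above. The socket path and service whitelist are hypothetical, and real code would need authentication and error handling:

    # privileged_helper.py - run this small piece as root (e.g. from an init
    # script); keep the main daemon unprivileged.
    import os
    import socket
    import subprocess

    SOCK = "/run/admin-helper.sock"
    ALLOWED_ACTIONS = ("start", "stop", "restart")
    ALLOWED_SERVICES = ("nginx", "postgresql")

    if os.path.exists(SOCK):
        os.unlink(SOCK)
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(SOCK)
    os.chmod(SOCK, 0o660)  # restrict which users may connect
    server.listen(1)
    while True:
        conn, _ = server.accept()
        words = conn.recv(256).decode("ascii", "ignore").split()
        # validate strictly before doing anything privileged
        if len(words) == 2 and words[0] in ALLOWED_ACTIONS and words[1] in ALLOWED_SERVICES:
            subprocess.call(["service", words[1], words[0]])
        conn.close()

The unprivileged daemon then connects to the socket and sends a command such as restart nginx; only the whitelisted operations ever run as root.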
Using SCM to synchronize PyDev eclipse projects between different computer | 32,408,606 | 0 | 0 | 86 | 0 | python,eclipse,version-control,synchronization,pydev | I use mercurial. I picked it because it seemed easier. But it is only easiER.
There is a Mercurial Eclipse plugin.
Save a copy of your workspace, and maybe your eclipse folder too, before trying it :) | 0 | 1 | 0 | 1 | 2015-09-04T21:21:00.000 | 3 | 1.2 | true | 32,406,765 | 0 | 0 | 1 | 2 | I use eclipse to write Python code using PyDev. So far I have been using Dropbox to synchronize my workspace.
However, this is far from ideal. I would like to use github (or another SCM platform) to upload my code so I can work with it from different places.
However, I have found many of the tutorials kind of daunting... maybe because they are geared toward projects shared between many programmers.
Would anyone please share with me their experience on how to do this? Or any basic tutorial to do this effectively?
Thanks |
Using SCM to synchronize PyDev eclipse projects between different computer | 32,466,408 | 0 | 0 | 86 | 0 | python,eclipse,version-control,synchronization,pydev | I use bitbucket coupled with mercurial. That is, my repository is on bitbucket and I pull and push to it from mercurial within eclipse.
For my backup I have an independent Carbonite process that backs up all hard disk files over the net. But I imagine there is a clever, free, programmatic way to do so, if one knew how to write the appropriate scripts.
Glad the first suggestion was helpful. You are wise to bite the bullet and get this in place now. ;) | 0 | 1 | 0 | 1 | 2015-09-04T21:21:00.000 | 3 | 0 | false | 32,406,765 | 0 | 0 | 1 | 2 | I use eclipse to write Python code using PyDev. So far I have been using Dropbox to synchronize my workspace.
However, this is far from ideal. I would like to use github (or another SCM platform) to upload my code so I can work with it from different places.
However, I have found many of the tutorials kind of daunting... maybe because they are geared toward projects shared between many programmers.
Would anyone please share with me their experience on how to do this? Or any basic tutorial to do this effectively?
Thanks |
Get PID of process blocking a COM PORT | 32,430,151 | 0 | 0 | 367 | 0 | c#,python,windows,serial-port | This question has been asked numerous times on SO and many other forums for the last 10 years or so. The generally accepted answer is to use sysinternals to find the process using the particular file handle. Remember, a serial port is really just a file as far as the win32 api is concerned.
So, a few options for you:
Use sysinternals to find the offending application. I don't think this approach will work via Python, but you might hack something with .NET.
Use the NtQuerySystemInformation in a getHandles function. Take a look at the structures and figure out which fields are useful for identifying the offending process.
os.system("taskkill blah blah blah") against known serial port using apps. More on this idea at the end.
The 2nd idea sounds fun, however I just don't think the juice is worth the squeeze in this case. A relatively small number of processes actually use serial ports these days and if you are working in a specific problem domain, you are well aware of what the applications are called.
I would just run taskkill (via os.system) against any applications that I know 1) can be safely closed and 2) might actually have a port open. With this approach you'll save the headache of enumerating file handles and get back to focusing on what your application should really be doing. | 0 | 1 | 0 | 1 | 2015-09-06T00:53:00.000 | 1 | 0 | false | 32,419,015 | 0 | 0 | 0 | 1 | How do I go about getting the process id of a process blocking a certain COM port on Windows 7 and/or later?
I would like to get the PID programmatically, if possible using Python or C#, but the language is not really important; I just want to understand the procedure.
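As a sketch of the taskkill idea from the answer above (the image names are hypothetical examples of serial-port-using applications):

    import subprocess

    # applications known to hold COM ports open in this problem domain
    for image in ("putty.exe", "hyperterm.exe"):
        # /IM selects processes by image name, /F forces termination
        subprocess.call(["taskkill", "/IM", image, "/F"])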
Homebrew installation of OpenCV 3.0 not linking to Python | 37,454,424 | 7 | 4 | 7,461 | 0 | python,macos,opencv,homebrew,opencv3.0 | It's weird that there is no concise instruction for installing OpenCV 3 with Python3. So, here I make it clear step-by-step:
Install Homebrew Python 3.5: brew install python3
Tap homebrew/science: brew tap homebrew/science
Install any Python3 packages using pip3. This will create the site-packages folder for Python3
For example:
pip3 install numpy
Then install OpenCV3: brew install opencv3 --with-python3
Now you can find the site-packages folder created by the pip3 step above. Just run the following command to link OpenCV3 to Python3:
echo /usr/local/opt/opencv3/lib/python3.5/site-packages >> /usr/local/lib/python3.5/site-packages/opencv3.pth
You may have to change the above command correspondingly to your installed Homebrew Python version (e.g. 3.4). | 0 | 1 | 0 | 0 | 2015-09-06T06:50:00.000 | 2 | 1 | false | 32,420,853 | 0 | 0 | 0 | 2 | When I install OpenCV 3.0 with Homebrew, it gives me the following directions to link it to Python 2.7:
If you need Python to find bindings for this keg-only formula, run:
echo /usr/local/opt/opencv3/lib/python2.7/site-packages >>
/usr/local/lib/python2.7/site-packages/opencv3.pth
While I can find the python2.7 site packages in opencv3, no python34 site packages were generated. Does anyone know how I can link my OpenCV 3.0 install to Python 3? |
Homebrew installation of OpenCV 3.0 not linking to Python | 32,510,430 | 4 | 4 | 7,461 | 0 | python,macos,opencv,homebrew,opencv3.0 | You need to install opencv like brew install opencv3 --with-python3. You can see a list of options for a package by running brew info opencv3. | 0 | 1 | 0 | 0 | 2015-09-06T06:50:00.000 | 2 | 0.379949 | false | 32,420,853 | 0 | 0 | 0 | 2 | When I install OpenCV 3.0 with Homebrew, it gives me the following directions to link it to Python 2.7:
If you need Python to find bindings for this keg-only formula, run:
echo /usr/local/opt/opencv3/lib/python2.7/site-packages >>
/usr/local/lib/python2.7/site-packages/opencv3.pth
While I can find the python2.7 site packages in opencv3, no python34 site packages were generated. Does anyone know how I can link my OpenCV 3.0 install to Python 3? |
Pelican stopped generating the site | 32,427,155 | 0 | 0 | 55 | 0 | python,blogs,pelican | The problem was with the PAGE_PATHS value in the settings file.
It turned out that it cannot be set to [""].
I changed it to pages. | 0 | 1 | 0 | 0 | 2015-09-06T18:49:00.000 | 1 | 1.2 | true | 32,427,064 | 0 | 0 | 0 | 1 | Everything was working fine.
But now when I do pelican content, nothing happens. Literally. Command Line is just stuck.
What could be the reason? |
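For reference, the fix from the answer above would look like this in pelicanconf.py (with a pages directory under the content path):

    PAGE_PATHS = ["pages"]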
Python and Java in internal command | 41,688,142 | 0 | 0 | 29 | 0 | java,python-2.7,python-3.4 | You can create a new variable name, for example MY_PYTHON=C:\Python34. Then you need to add the variable name into the system variable PATH, such as:
PATH = ...;%MY_PYTHON%
PATH is a Windows system default variable. | 0 | 1 | 0 | 1 | 2015-09-10T08:56:00.000 | 2 | 0 | false | 32,497,329 | 0 | 0 | 1 | 1 | I have installed Java and am using it from the command line, with variable name PATH and variable value C:\Program Files\Java\jdk1.8.0_60\bin. Now I want to add Python to the command line. What variable name do I give so that it works? I tried with Name: PTH and Value: C:\Python34; it's not working.
Update message in Kafka topic | 32,510,798 | 19 | 11 | 7,086 | 0 | python,apache-kafka,kafka-consumer-api,kafka-python | Kafka is a distributed immutable commit log. That said, there is no possibility to update a message in a topic. Once it is there, all you can do is consume it, update it, and produce it to another (or the same) topic again. | 0 | 1 | 1 | 0 | 2015-09-10T17:40:00.000 | 1 | 1 | false | 32,508,415 | 0 | 0 | 0 | 1 | I am using Kafka from Python.
Is there any provision for a producer to update a message in a Kafka queue and append it to the top of the queue again?
According to the Kafka spec, it doesn't seem feasible.
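To illustrate the consume-update-reproduce pattern from the answer above, a minimal sketch with the kafka-python client (topic names, broker address, and the transformation are hypothetical):

    from kafka import KafkaConsumer, KafkaProducer

    consumer = KafkaConsumer("source-topic", bootstrap_servers="localhost:9092")
    producer = KafkaProducer(bootstrap_servers="localhost:9092")
    for message in consumer:
        updated = message.value.upper()  # stand-in for the real update
        producer.send("updated-topic", updated)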
Google App Engine Server to Server OAuth Python | 32,524,341 | 0 | 0 | 163 | 0 | python,google-app-engine,gmail,google-oauth,service-accounts | A service account isn't you; it's its own user. Even if you could access Gmail with a service account (which I doubt), you would only be accessing the service account's Gmail account (which I don't think it has) and not your own.
To my knowledge, the only way to access the Gmail API is with OAuth2.
Service accounts can be used to access some of the Google APIs, for example Google Drive. The service account has its own Google Drive account; files will be uploaded to its Drive account. I can give it permission to upload to my Google Drive account by adding it as a user on a folder in Google Drive.
You can't give another user permission to read your Gmail account, so again the only way to access the Gmail API will be to use OAuth2. | 0 | 1 | 1 | 0 | 2015-09-11T13:04:00.000 | 2 | 0 | false | 32,524,226 | 0 | 0 | 1 | 1 | I can't find a solution to authorize server-to-server authentication using Google SDK + Python + Mac OS X + Gmail API.
I would like to test Gmail API integration in my local environment before publishing my application on GAE, but so far I have had no results using the samples I found in the Gmail API or OAuth API documentation. During all tests I received the same error, "403-Insufficient Permission", when my application was using a GCP service account; but if I convert the application to use a user account, everything works fine.
Is there a difference between "brew install" and "pip install"? | 32,530,618 | 4 | 48 | 38,886 | 0 | python,macos,pip,homebrew | Homebrew is a package manager, similar to apt on ubuntu or yum on some other linux distros. Pip is also a package manager, but is specific to python packages. Homebrew can be used to install a variety of things such as databases like MySQL and mongodb or webservers like apache or nginx. | 0 | 1 | 0 | 0 | 2015-09-11T19:06:00.000 | 3 | 0.26052 | false | 32,530,506 | 1 | 0 | 0 | 1 | I want to install pillow on my Mac. I have python 2.7 and python 3.4, both installed with Homebrew. I tried brew install pillow and it worked fine, but only for python 2.7. I haven't been able to find a way to install it for python 3. I tried brew install pillow3 but no luck. I've found a post on SO that says to first install pip3 with Homebrew and then use pip3 install pillow. As it happens, I have already installed pip3.
I've never understood the difference, if any, between installing a python package with pip and installing it with Homebrew. Can you explain it to me? Also, is it preferable to install with Homebrew if a formula is available? If installing with Homebrew is indeed preferable, do you know how to install pillow for python 3 with Homebrew?
The first answers indicate that I haven't made myself plain. If I had installed pillow with pip install pillow instead of brew install pillow would the installation on my system be any different? Why would Homebrew make a formula that does something that pip already does? Would it check for additional prerequisites or something? Why is there a formula for pillow with python2, but not as far as I can tell for pillow with python3? |
Using Terrestrial Time in PyEphem | 32,557,689 | 1 | 1 | 110 | 0 | python,astronomy,pyephem | Alas, I am not aware of any settings in the libastro library that PyEphem is based on that would allow the use of alternative time scales. | 0 | 1 | 0 | 0 | 2015-09-12T07:56:00.000 | 1 | 0.197375 | false | 32,536,548 | 1 | 0 | 0 | 1 | Is there a way to make PyEphem give times in Dynamical Time (Terrestrial Time), without using delta_t() every time?
According to the documentation, PyEphem uses Ephemeris Time. So isn't there a way to just 'switch off' the conversion to UTC?
pip2.7 cassandra-driver installation on centos 6.6 fails with recursion depth issue | 32,554,637 | 0 | 0 | 480 | 0 | python-2.7,cassandra,centos6 | Changing the Python installation to SCL fixed the problem. I uninstalled Python 2.7 by cleaning /usr/local of all Python 2.7 related things in bin and lib, then reinstalled python27 using the following sequence:
yum install centos-release-SCL
yum install python27
scl enable python27 bash
Installed pip using "easy_install-2.7 pip"
Now I can install the cassandra driver... | 0 | 1 | 0 | 0 | 2015-09-13T00:53:00.000 | 1 | 0 | false | 32,545,277 | 0 | 0 | 0 | 1 | I am trying to install using pip2.7 install cassandra-driver and it fails with a long stack trace. The error is RuntimeError: maximum recursion depth exceeded while calling a Python object. I can install a number of things, like scikit etc., just fine. Is there something special needed? Here is the tail of the stack trace.
File "/usr/local/lib/python2.7/site-packages/distribute-0.6.28-py2.7.egg/pkg_resources.py", line 837, in obtain
return installer(requirement)
File "/usr/local/lib/python2.7/site-packages/distribute-0.6.28-py2.7.egg/setuptools/dist.py", line 272, in fetch_build_egg
dist = self.__class__({'script_args':['easy_install']})
File "/usr/local/lib/python2.7/site-packages/distribute-0.6.28-py2.7.egg/setuptools/dist.py", line 225, in __init__
_Distribution.__init__(self,attrs)
File "/usr/local/lib/python2.7/distutils/dist.py", line 287, in __init__
self.finalize_options()
File "/usr/local/lib/python2.7/site-packages/distribute-0.6.28-py2.7.egg/setuptools/dist.py", line 257, in finalize_options
ep.require(installer=self.fetch_build_egg)
File "/usr/local/lib/python2.7/site-packages/distribute-0.6.28-py2.7.egg/pkg_resources.py", line 2029, in require
working_set.resolve(self.dist.requires(self.extras),env,installer))
File "/usr/local/lib/python2.7/site-packages/distribute-0.6.28-py2.7.egg/pkg_resources.py", line 579, in resolve
env = Environment(self.entries)
File "/usr/local/lib/python2.7/site-packages/distribute-0.6.28-py2.7.egg/pkg_resources.py", line 748, in __init__
self.scan(search_path)
File "/usr/local/lib/python2.7/site-packages/distribute-0.6.28-py2.7.egg/pkg_resources.py", line 777, in scan
for dist in find_distributions(item):
File "/usr/local/lib/python2.7/site-packages/distribute-0.6.28-py2.7.egg/pkg_resources.py", line 1757, in find_on_path
path_item,entry,metadata,precedence=DEVELOP_DIST
File "/usr/local/lib/python2.7/site-packages/distribute-0.6.28-py2.7.egg/pkg_resources.py", line 2151, in from_location
py_version=py_version, platform=platform, **kw
File "/usr/local/lib/python2.7/site-packages/distribute-0.6.28-py2.7.egg/pkg_resources.py", line 2128, in __init__
self.project_name = safe_name(project_name or 'Unknown')
File "/usr/local/lib/python2.7/site-packages/distribute-0.6.28-py2.7.egg/pkg_resources.py", line 1139, in safe_name
return re.sub('[^A-Za-z0-9.]+', '-', name)
File "/usr/local/lib/python2.7/re.py", line 155, in sub
return _compile(pattern, flags).sub(repl, string, count)
File "/usr/local/lib/python2.7/re.py", line 235, in _compile
cachekey = (type(key[0]),) + key
RuntimeError: maximum recursion depth exceeded while calling a Python object |
ndk-build installs libraries even if no change. Can this be changed? | 52,265,683 | 0 | 3 | 100 | 0 | android,python,android-ndk | The OP probably doesn't need this any more, but I had the exact same problem, trying to set up a Makefile to build a project, so maybe this will be helpful to someone else in the future as well.
ndk-build is a wrapper around GNU make that invokes a bunch of Makefiles in the build/core directory of the NDK; so, while it's not universally applicable*, for your personal project you can modify those Makefiles to do whatever you want. I found a clean-installed-binaries target that a couple of build/install targets depended on; removing those dependencies fixed the issue with perpetual installs.
In cases where that clean target is necessary, you can invoke it manually with:
ndk-build clean-installed-binaries.
*Given the time to come up with a clean opt-in solution, you can submit a patch to the NDK project, and if accepted it will eventually become universally applicable. | 0 | 1 | 0 | 0 | 2015-09-14T13:12:00.000 | 1 | 0 | false | 32,565,759 | 1 | 0 | 0 | 1 | I'm using the Native Development Kit (NDK) in a project of mine, and I'm trying to automate the whole app build procedure with Python.
Whenever ndk-build is called, it copies the prebuilt shared libraries to libs/<abi>/, even if there are no changes in them or they already exist there. This causes a problem when I call ant later on, as it detects changed files (the library timestamps are newer) and so rebuilds the apk without any need.
Is there a way to change the build behaviour so that it checks for existing libraries in the libs/<abi>/ folder: if they need updating or some are missing, it calls ndk-build, and otherwise it just proceeds to the next build step?
I've tried using filecmp in Python, but as the timestamps are different between the prebuilt shared libraries and the installed ones, it doesn't work. |
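One detail worth noting on the filecmp point: filecmp.cmp(src, dst, shallow=False) compares file contents rather than os.stat() signatures, so differing timestamps alone will not make it report a mismatch; that may be enough to decide in Python whether ndk-build needs to run at all.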
How can I reference libraries for ApacheSpark using IPython Notebook only? | 32,574,697 | 1 | 2 | 157 | 0 | python,apache-spark,ipython,ibm-cloud,jupyter | You cannot add 3rd party libraries at this point in the beta. This will most certainly be coming later in the beta as it's a popular requirement ;-) | 0 | 1 | 0 | 0 | 2015-09-14T21:09:00.000 | 1 | 1.2 | true | 32,573,995 | 0 | 1 | 0 | 1 | I'm currently playing around with the Apache Spark Service in IBM Bluemix. There is a quick start composite application (Boilerplate) consisting of the Spark Service itself, an OpenStack Swift service for the data and an IPython/Jupyter Notebook.
I want to add some 3rd party libraries to the system and I'm wondering how this could be achieved. Using a Python import statement doesn't really help, since the libraries are then expected to be located on the SparkWorker nodes.
Is there a way of loading Python libraries in Spark from an external source during job runtime (e.g. a Swift or FTP source)?
thanks a lot! |
how to install python-devel in Mac OS? | 32,578,175 | 24 | 27 | 59,901 | 0 | macos,python-2.7 | If you install Python using brew, the relevant headers are already installed for you.
In other words, you don't need python-devel. | 0 | 1 | 0 | 0 | 2015-09-15T05:06:00.000 | 1 | 1 | false | 32,578,106 | 0 | 0 | 0 | 1 | brew and port do not provide python-devel.
How can I install it on Mac OS?
Is there an equivalent in Mac OS? |
Is there a difference using Python3 in IDLE3 or in Ubuntu 14.04 terminal? | 32,637,094 | 1 | 0 | 247 | 0 | python-3.x,ubuntu-14.04,python-idle | I have not used the Ubuntu terminal, but I will assume that it is a typical terminal program. If you type python3, it starts python3, which prints, in the same window, something like Python 3.4.3 ... and then a prompt >>>. You interact with python3 via the terminal program.
If you type idle3, it runs a python gui program (Idle) with python3. That program prints, in a separate window, something like Python 3.4.3 ... and then a prompt >>>. You interact with python3 via this python program. In either case, any code you enter is executed by python3. For nearly all code you might enter, such as anything in the tutorial, the printed response will be the same.
The difference in terms of interaction is that in the terminal, if it is typical, you enter and recall (with up arrow?) lines of code, whereas in Idle, you enter and recall (with Alt-p) statements, which may comprise multiple lines. Also, Idle syntax colors your code, whereas your terminal may not.
A bigger difference is that Idle is not just a Python terminal or shell, but is an integrated development environment that includes an editor that works with the shell. You can run code from the editor with F5. If there is an error traceback in the shell, you can right click on an error line and go to the line with the error. | 0 | 1 | 0 | 0 | 2015-09-17T11:35:00.000 | 1 | 1.2 | true | 32,629,367 | 1 | 0 | 0 | 1 | Is there a difference using IDLE3 or the Ubuntu 14.04 terminal for Python3 interpretation? If so, what are the differences?
Python can't open file | 32,660,141 | 1 | 2 | 4,074 | 0 | php,python,apache,permission-denied | You need read permission to run the python script. | 0 | 1 | 0 | 1 | 2015-09-18T19:58:00.000 | 2 | 0.099668 | false | 32,660,088 | 0 | 0 | 0 | 1 | I have a PHP script that is supposed to execute a python script as user "apache" but is returning the error:
/transform/anaconda/bin/python: can't open file '/transform/python_code/edit_doc_with_new_demo_info.py': [Errno 13] Permission denied
Permissions for edit_doc_with_new_demo_info.py are ---xrwx--x. 1 apache PosixUsers 4077 Sep 18 12:14 edit_doc_with_new_demo_info.py. The line that is calling this python script is:
shell_exec('/transform/anaconda/bin/python /transform/python_code/edit_doc_with_new_demo_info.py ' . escapeshellarg($json_python_data) .' >> /transform/edit_subject_python.log 2>&1')
If apache is the owner of the python file and the owner has execute permission, how can it be unable to open the file?
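A sketch of the fix implied by the answer above: the interpreter must read the script's source, so execute permission alone is not enough; give the owner read permission too, e.g. chmod u+r /transform/python_code/edit_doc_with_new_demo_info.py.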
efficient directory tree walking in Python | 32,675,226 | 0 | 1 | 242 | 0 | python-2.7,python-3.4,scandir | You want scandir(), which has been added to the standard library for 3.5. It is available for 2.7 and 3.4 from the Python Package Index. (You should be able to use pip or easy_install to retrieve it.) | 0 | 1 | 0 | 0 | 2015-09-20T02:39:00.000 | 1 | 1.2 | true | 32,675,202 | 0 | 0 | 0 | 1 | I have to process a large number of wide directory trees that are only a few levels tall, with the leaf (and only the leaf) directories containing thousands of files (over NFS). When I use os.walk() there seems to be a very long delay at the leaf nodes, as os.walk() is generating a list of all files in the directory. Is there a solution that will give me one file at a time (as it walks the filesystem) instead of pre-generating the entire list?
I'm interested in both Python 2.7 and Python 3.4 solutions.
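A minimal sketch of the scandir approach from the answer above, yielding one path at a time instead of building per-directory lists (on 2.7/3.4 install the backport with pip install scandir; on 3.5+ the same API is os.scandir):

    from scandir import scandir

    def iter_files(root):
        stack = [root]
        while stack:
            for entry in scandir(stack.pop()):
                if entry.is_dir(follow_symlinks=False):
                    stack.append(entry.path)
                else:
                    yield entry.path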
What's the correct way to do a rotating log with python twisted? | 33,092,530 | 0 | 2 | 106 | 0 | python,python-2.7,logging,twisted | Log files are pretty much inherently blocking things. Eventually Twisted may integrate a non-blocking log consumer with an intelligent buffering policy, but that day is not here yet. Good luck! | 0 | 1 | 0 | 0 | 2015-09-20T23:45:00.000 | 1 | 1.2 | true | 32,685,466 | 0 | 0 | 0 | 1 | I'm using Twisted 14.0.2 and I was wondering what's the correct way of doing a rotating log file without blocking I/O? |
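For reference, the usual rotating log in Twisted - blocking, as the answer notes - is twisted.python.logfile.LogFile; a minimal sketch with hypothetical path and size:

    from twisted.python import log
    from twisted.python.logfile import LogFile

    log_file = LogFile("myapp.log", "/var/log/myapp", rotateLength=10000000)
    log.startLogging(log_file)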
Limit a python file to only be run by a bash script | 32,698,394 | 8 | 3 | 54 | 0 | python,bash | There is no meaningful way to do this.
UNIX process architecture does not work this way. You cannot control the execution of a script by its parent process.
Instead we should discuss why you want to do something like this; you are probably approaching it in the wrong way, and there are good options to address the actual underlying problem. | 0 | 1 | 0 | 1 | 2015-09-21T14:51:00.000 | 2 | 1 | false | 32,698,320 | 0 | 0 | 0 | 1 | I am looking for a way to limit how a python file can be called. Basically, I only want it to be executable when I call it from a bash script; if run directly, either from a terminal or any other way, I do not want it to be able to run. I am not sure if there is a way to do this or not, but I figured I would give it a shot.
Wake Windows PC from sleep in Python 2.7 | 34,420,302 | 2 | 2 | 2,597 | 0 | python,sleep,wakeup | I was unable to accomplish this using just python. However in the Windows SDK they provide a tool called pwrtest that will allow you to do timed sleep cycles. I am able to call this with python and then my script continues when pwrtest wakes the PC up from sleep. | 0 | 1 | 0 | 0 | 2015-09-21T22:39:00.000 | 2 | 1.2 | true | 32,705,626 | 1 | 0 | 0 | 1 | I have a script that will put the system to sleep in the middle of it. Is there any way to make that script wake the system up and then continue running?
I have read many round-about ways of doing so via Wake on LAN or using Task Scheduler. I am looking for something that would wake it up after a set period of time or after a specific piece of my script is finished. I will need this to work for Windows 7, 8.1, and 10.
Anyone know of a way to wake from sleep while still running a script? |
how to access python from command line using py instead of python | 32,742,619 | 20 | 6 | 29,688 | 0 | python,windows,python-2.7,cmd | The py command comes with Python 3.x and allows you to choose among multiple Python interpreters. For example, if you have both Python 3.4 and 2.7 installed, py -2 will start Python 2.7 and py -3 will start Python 3.4. If you just use py, it will start the one that was defined as the default.
So the official way would be to install Python 3.x, declare Python 2.7 as the default, and the py command will do its job.
But if you just want py to be an alias of python, doskey py=python.exe as proposed by @Nizil and @ergonaut will be much simpler... Or copying python.exe to py.exe in Python27 folder if you do not want to be bothered by the limitations of doskey. | 0 | 1 | 0 | 0 | 2015-09-23T14:18:00.000 | 2 | 1.2 | true | 32,742,093 | 0 | 0 | 0 | 1 | I have a very weird request. An executable I have has a system call to a python script which goes like py file1.py
On my system, though, py is reported as an unrecognized internal or external command; python file1.py works, however.
Is there some way I can get my Windows command prompt to recognize that py and python refer to the same thing?
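One caveat on the doskey route mentioned in the answer: for arguments to be forwarded, the macro needs $*, i.e. doskey py=python.exe $*, so that py file1.py actually passes file1.py to the interpreter.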
How to find more information about the file descriptor? | 32,759,245 | 1 | 1 | 2,426 | 0 | python,c,file-descriptor | Look earlier in the strace output for when the file descriptor was returned from open() (or perhaps socket()); there you'll see the additional arguments used in the call. | 0 | 1 | 0 | 0 | 2015-09-24T10:30:00.000 | 2 | 0.099668 | false | 32,759,078 | 0 | 0 | 0 | 1 | I am trying to debug a process that hangs; the strace output for the process id has this last line:
recvfrom(9, <detached ...>
From this, what I understand is that the process is waiting on the socket.
But I don't know which socket this is or what kind. How can I discover more about it? Will file descriptor 9 give me more information? How can I use this file descriptor to learn more about what it is waiting for?
It's a Python process, running on Linux.
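Beyond re-reading the strace output, two standard Linux-side checks (an addition beyond the answer above): ls -l /proc/<pid>/fd/9 shows what descriptor 9 points at (e.g. socket:[12345]), and lsof -p <pid> maps that socket inode to its addresses and ports.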
actual script file name when installed as shell command with setup.py | 32,759,202 | -1 | 0 | 31 | 0 | python | Simply pass the full path you have now to os.path.basename to get only the filename part. | 0 | 1 | 0 | 0 | 2015-09-24T10:32:00.000 | 1 | -0.197375 | false | 32,759,106 | 1 | 0 | 0 | 1 | I'd like to find out the currently running script's file name. Usually via __file__ or __main__.__file__ or even with sys.argv[0].
But when installed as shell command with setup.py and entry_points / console_scripts, /usr/local/bin/... is returned instead of the actual file name.
My next guess was the inspect module, like inspect.stack()[1].filename. Unfortunately this was inconsistent and did not work in all cases.
Any suggestions please? |
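For completeness, the answer's suggestion as a sketch:

    import os
    import sys

    print(os.path.basename(sys.argv[0]))

This yields the command name of the installed console script rather than its full /usr/local/bin path.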
How to sync a salt execution module with non-python dependencies | 32,769,491 | 2 | 0 | 284 | 0 | python,salt-stack | Only python extensions are supported, so your best bet is to do the following:
1) Deploy your non-Python components via a file.managed / file.recurse state.
2) Ensure your custom execution module has a __virtual__() function checking for the existence of the non-Python dependencies, and returning False if they are not present. This will keep the module from being loaded and used unless the deps are present.
3) Sync your custom modules using saltutil.sync_modules. This function will also re-invoke the loader to update the available execution modules on the minion, so if you already had your custom module sync'ed and later deployed the non-Python dependencies, saltutil.sync_modules would re-load the custom modules and, provided your __virtual__() function returned either True or the desired module name, your execution module would then be available for use. | 0 | 1 | 0 | 1 | 2015-09-24T13:29:00.000 | 1 | 1.2 | true | 32,762,675 | 0 | 0 | 0 | 1 | I am currently transforming a perl / bash tool into a salt module and I am wondering how I should sync the non-python parts of this module to my minions.
I want to run salt agent-less, and ideally the dependencies would be synced automatically alongside the module itself once it's called via salt-ssh.
But it seems that only python scripts get synced. Any thoughts for a nice and clean solution?
Copying the necessary files from the salt fileserver during module execution seems somehow wrong to me.
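A minimal sketch of the __virtual__() gate from step 2 of the answer above (the dependency path and module name are hypothetical):

    # _modules/mytool.py - custom execution module
    import os

    def __virtual__():
        # non-Python dependency deployed via a file.managed state
        if os.path.exists("/opt/mytool/mytool.sh"):
            return "mytool"
        return False

    def run(args):
        # delegate to the non-Python component
        return __salt__["cmd.run"]("/opt/mytool/mytool.sh " + args)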
Fastest way to check if an image sequence string actually exists on disk | 32,875,932 | 1 | 0 | 1,146 | 0 | python,image,file,sequence,exists | You should probably not loop for candidates using os.path.isfile(), but use glob.glob() or os.listdir() and check the returned lists for matching your file patterns, i.e. prefer memory operations over disk accesses. | 0 | 1 | 0 | 1 | 2015-09-24T23:45:00.000 | 2 | 0.099668 | false | 32,772,672 | 0 | 0 | 0 | 2 | I have a potentially big list of image sequences from nuke. The format of the string can be:
/path/to/single_file.ext
/path/to/img_seq.###[.suffix].ext
/path/to/img_seq.%0id[.suffix].ext, i being an integer value, the values between [] being optional.
The question is: given this string, which can represent a sequence or a still image, check whether at least one image on disk corresponds to it, in the fastest way possible.
There is already some code that checks if these files exist, but it's quite slow.
First it checks if the folder exists; if not, it returns False.
Then it checks if the file exists with os.path.isfile; if it does, it returns True.
Then it checks whether any % or # is found in the path; if there is none and os.path.isfile fails, it returns False.
All this is quite fast.
But then it uses some internal library, which performs a bit faster than pyseq, to try to find an image sequence, and does a few more operations depending on whether start_frame == end_frame or not.
But it still takes a large amount of time to analyze whether something is an image sequence, especially on some sections of the network and for big image sequences.
For example, for a 2500-image sequence, the analysis takes between 1 and 3 seconds.
If I take a very naive approach, and just check if a frame exists by replacing #### with %04d, looping over 10000 values and breaking if found, it takes less than .02 seconds to check with os.path.isfile(f), especially if the first frame is between 1-3000.
Of course I cannot guarantee what the start frame will be, and that approach is not perfect, but in practice many of the sequences do begin between 1-3000, and I could return True if found and fallback to the sequence approach if nothing is found (it would still be quicker for most of the cases)
I'm not sure what the best approach is for this. I already made it multithreaded when searching for many image sequences, so it's faster than before, but I'm sure there is room for improvement.
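A minimal sketch of the glob-based check suggested in the answer above; it assumes a contiguous #### padding token and ignores the %0id form:

    import glob

    def sequence_exists(pattern):
        # "/path/to/img_seq.####.ext" -> "/path/to/img_seq.[0-9][0-9][0-9][0-9].ext"
        if "#" in pattern:
            n = pattern.count("#")
            pattern = pattern.replace("#" * n, "[0-9]" * n)
        # one directory listing replaces thousands of isfile() calls
        return bool(glob.glob(pattern))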
Fastest way to check if an image sequence string actually exists on disk | 32,930,114 | 0 | 0 | 1,146 | 0 | python,image,file,sequence,exists | If there are potentially so many files that you're worried about wasting memory for a dictionary that holds them all, you could just store a single key for each img_seq.###[.suffix].ext pattern, removing the sequence number as you scan the directory. Then a single lookup will suffice. The values in the dictionary could either be "dummy" booleans because the existence of the key is the only thing you care about, or counters in case you ever want to know how many files you have for a certain sequence. | 0 | 1 | 0 | 1 | 2015-09-24T23:45:00.000 | 2 | 0 | false | 32,772,672 | 0 | 0 | 0 | 2 | I have a potentially big list of image sequences from nuke. The format of the string can be:
/path/to/single_file.ext
/path/to/img_seq.###[.suffix].ext
/path/to/img_seq.%0id[.suffix].ext, i being an integer value, the values between [] being optional.
The question is: given this string, which can represent a sequence or a still image, check whether at least one image on disk corresponds to it, in the fastest way possible.
There is already some code that checks if these files exist, but it's quite slow.
First it checks if the folder exists; if not, it returns False.
Then it checks if the file exists with os.path.isfile; if it does, it returns True.
Then it checks whether any % or # is found in the path; if there is none and os.path.isfile fails, it returns False.
All this is quite fast.
But then it uses some internal library, which performs a bit faster than pyseq, to try to find an image sequence, and does a few more operations depending on whether start_frame == end_frame or not.
But it still takes a large amount of time to analyze whether something is an image sequence, especially on some sections of the network and for big image sequences.
For example, for a 2500-image sequence, the analysis takes between 1 and 3 seconds.
If I take a very naive approach, and just check if a frame exists by replacing #### with %04d, looping over 10000 values and breaking if found, it takes less than .02 seconds to check with os.path.isfile(f), especially if the first frame is between 1-3000.
Of course I cannot guarantee what the start frame will be, and that approach is not perfect, but in practice many of the sequences do begin between 1-3000, and I could return True if found and fallback to the sequence approach if nothing is found (it would still be quicker for most of the cases)
I'm not sure what the best approach is for this. I already made it multithreaded when searching for many image sequences, so it's faster than before, but I'm sure there is room for improvement.
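A rough sketch of the single-key-per-pattern idea from the answer above; it assumes the first run of digits in a name is the frame number:

    import os
    import re

    def scan_sequences(directory):
        counts = {}
        for name in os.listdir(directory):
            # img_seq.0001.exr -> img_seq.####.exr
            key = re.sub(r"\d+", lambda m: "#" * len(m.group(0)), name, count=1)
            counts[key] = counts.get(key, 0) + 1
        return counts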
How to change python's "full name" (something like `cpython-34m-x86_64-linux-gnu`)? | 32,778,352 | 0 | 1 | 348 | 0 | python,ubuntu,python-import,python-c-extension,python-install | I reinstalled Python 3.4 via Ubuntu package system, and suddenly everything worked fine. I still have no clue how Ubuntu customize its own Python, since Python's configure command has no related option at all. Anyway, it works, so why bother :P. Finally, thank you for helping me with this problem. | 0 | 1 | 0 | 0 | 2015-09-25T07:53:00.000 | 3 | 0 | false | 32,777,369 | 1 | 0 | 0 | 2 | I installed Python 3.5 from source and broke a number of modules on Python 3.4, which unfortunately, was an essential part of Ubuntu. I've been trying to fix the system, now I'm almost there, with (hopefully) the last problem: My Python 3.4 only recognize C modules with name *.cpython-34m.so, while all packages from Ubuntu repository are named *.cpython-34m-x86_64-linux-gnu.so. It seems that the cpython-34m stuff is the full name of Python, so I need to change it in accord with Ubuntu's expectation. How can I achieve this? |
How to change python's "full name" (something like `cpython-34m-x86_64-linux-gnu`)? | 32,778,449 | 1 | 1 | 348 | 0 | python,ubuntu,python-import,python-c-extension,python-install | What you are trying makes no sense. The name cannot be changed, for a good reason. The reason the names are different is to prevent incompatible versions from mixing up each other. You can compile a different version with different options and then the name will be different, too. | 0 | 1 | 0 | 0 | 2015-09-25T07:53:00.000 | 3 | 1.2 | true | 32,777,369 | 1 | 0 | 0 | 2 | I installed Python 3.5 from source and broke a number of modules on Python 3.4, which unfortunately, was an essential part of Ubuntu. I've been trying to fix the system, now I'm almost there, with (hopefully) the last problem: My Python 3.4 only recognize C modules with name *.cpython-34m.so, while all packages from Ubuntu repository are named *.cpython-34m-x86_64-linux-gnu.so. It seems that the cpython-34m stuff is the full name of Python, so I need to change it in accord with Ubuntu's expectation. How can I achieve this? |
python virtualenv set up from mac - use it in linux | 32,803,088 | 1 | 1 | 57 | 0 | python,macos,ubuntu,virtualenv | It's not possible, because virtualenv uses absolute paths to set up the environment.
Also it's kind of the reverse of what virtualenv is created for. | 0 | 1 | 0 | 0 | 2015-09-26T23:27:00.000 | 1 | 1.2 | true | 32,803,057 | 1 | 0 | 0 | 1 | Is it possible to set up a virtualenv on a Dropbox folder from a Mac and activate that from Ubuntu that also has access to that Dropbox folder?
I seem to be able to call source env/bin/activate and it activates the environment, but when I call which python, it gives me /usr/bin/python instead of the one in the virtual environment
Before I do any more troubleshooting or add more details: is this possible at all and am I just doing something wrong, or is this not possible?
GoogleScraper Installation error - setuptools must be installed | 32,820,352 | 0 | 0 | 589 | 0 | python,web-scraping,pip,virtualenv | After upgrading the python3.4 package in Ubuntu 14.04 I get the same error.
A quick solution is to delete and re-create the virtualenv. | 0 | 1 | 0 | 0 | 2015-09-27T03:53:00.000 | 2 | 0 | false | 32,804,410 | 1 | 0 | 0 | 2 | I have virtualenv-13.1.2 set up with python 3.4 (global python is python-2.7) in ubuntu 14.04. When I try to install GoogleScraper using the command pip install GoogleScraper it gives an error:
setuptools must be installed to install from a source distribution
If I do pip install setuptools
Requirement already satisfied (use --upgrade to upgrade): setuptools in ./env/lib/python3.4/site-packages
If I do pip install setuptools --upgrade
Requirement already up-to-date: setuptools in ./env/lib/python3.4/site-packages
How can I successfully install GoogleScraper? |
GoogleScraper Installation error - setuptools must be installed | 32,836,035 | 1 | 0 | 589 | 0 | python,web-scraping,pip,virtualenv | I was missing the python3-dev tools. I did sudo apt-get install python3-dev and it worked like a charm. | 0 | 1 | 0 | 0 | 2015-09-27T03:53:00.000 | 2 | 1.2 | true | 32,804,410 | 1 | 0 | 0 | 2 | I have virtualenv-13.1.2 set up with python 3.4 (global python is python-2.7) in ubuntu 14.04. When I try to install GoogleScraper using the command pip install GoogleScraper it gives an error:
setuptools must be installed to install from a source distribution
If I do pip install setuptools
Requirement already satisfied (use --upgrade to upgrade): setuptools in ./env/lib/python3.4/site-packages
If I do pip install setuptools --upgrade
Requirement already up-to-date: setuptools in ./env/lib/python3.4/site-packages
How can I successfully install GoogleScraper? |
python mechanize retrieving files larger than 1GB | 32,806,729 | 0 | 0 | 304 | 0 | python-2.7,mechanize-python | It sounds like you are trying to download the file into memory but you don't have enough. Try using the retrieve method with a file name to stream the downloaded file to disc. | 0 | 1 | 0 | 1 | 2015-09-27T09:01:00.000 | 2 | 0 | false | 32,806,238 | 0 | 0 | 0 | 2 | I am trying to download some files via mechanize. Files smaller than 1GB are downloaded without causing any trouble. However, if a file is bigger than 1GB the script runs out of memory:
The mechanize_response.py script throws out of memory at the following line
self.__cache.write(self.wrapped.read())
__cache is a cStringIO.StringIO; it seems that it cannot handle more than 1GB.
How to download files larger than 1GB?
Thanks |
python mechanize retrieving files larger than 1GB | 32,808,075 | 0 | 0 | 304 | 0 | python-2.7,mechanize-python | I finally figured out a workaround.
Instead of using browser.retrieve or browser.open, I used mechanize.urlopen, which returned a urllib2 handler. This allowed me to download files larger than 1GB.
I am still interested in figuring out how to make retrieve work for files larger than 1GB. | 0 | 1 | 0 | 1 | 2015-09-27T09:01:00.000 | 2 | 0 | false | 32,806,238 | 0 | 0 | 0 | 2 | I am trying to download some files via mechanize. Files smaller than 1GB are downloaded without causing any trouble. However, if a file is bigger than 1GB the script runs out of memory:
The mechanize_response.py script throws out of memory at the following line
self.__cache.write(self.wrapped.read())
__cache is a cStringIO.StringIO; it seems that it cannot handle more than 1GB.
How to download files larger than 1GB?
Thanks |
Is it ok to install both Python 2.7 and 3.5? | 32,811,789 | 7 | 16 | 50,311 | 0 | python,python-3.x,python-2.7,osx-yosemite | As long as you keep your installation folders organized, you should have no issues having both on your computer, besides one thing. The path environment variable for python will determine which version is used by default, so I would say stick to one version, or make sure to make your programs as backwards compatible as possible. I have run into this issue on Windows, since I installed Python 3.4 before 2.7, and therefore to run older code, I have to manually select the python executable. In terms of libraries, I believe that for each python version, the libraries are completely separate, so you should be good there. | 0 | 1 | 0 | 0 | 2015-09-27T19:02:00.000 | 7 | 1 | false | 32,811,713 | 1 | 0 | 0 | 6 | Supposedly Python 2.7 is included native to OSX 10.8 and above (if I remember correctly), but I recently installed Python 3.5 to use for projects while I work through UDacity. Lo and behold, the UDacity courses seem to use 2.7 - wups! So instead of trying to uninstall 3.5 (this procedure seemed to scary for neophytes such as myself), I simply installed 2.7 in addition to the recently installed 3.5 and just run the 2.7 IDLE and Shell. Is this ok, or will I run into problems down the road? |
Is it ok to install both Python 2.7 and 3.5? | 32,823,074 | 17 | 16 | 50,311 | 0 | python,python-3.x,python-2.7,osx-yosemite | I have installed two versions, 2.7, 3.4 and I do not have any problem by now. 3.4 I am using for my work project in eclipse environment, 2.7 for udacity course, like You ;). | 0 | 1 | 0 | 0 | 2015-09-27T19:02:00.000 | 7 | 1.2 | true | 32,811,713 | 1 | 0 | 0 | 6 | Supposedly Python 2.7 is included native to OSX 10.8 and above (if I remember correctly), but I recently installed Python 3.5 to use for projects while I work through UDacity. Lo and behold, the UDacity courses seem to use 2.7 - wups! So instead of trying to uninstall 3.5 (this procedure seemed to scary for neophytes such as myself), I simply installed 2.7 in addition to the recently installed 3.5 and just run the 2.7 IDLE and Shell. Is this ok, or will I run into problems down the road? |
Is it ok to install both Python 2.7 and 3.5? | 38,431,312 | 1 | 16 | 50,311 | 0 | python,python-3.x,python-2.7,osx-yosemite | Im not sure about OSX, but with windows 10 my environment variables for 2.7 were overwritten with the 3.5 path. Not a tough fix, but a little confusing, since it was months later when I needed 2.7 again. | 0 | 1 | 0 | 0 | 2015-09-27T19:02:00.000 | 7 | 0.028564 | false | 32,811,713 | 1 | 0 | 0 | 6 | Supposedly Python 2.7 is included native to OSX 10.8 and above (if I remember correctly), but I recently installed Python 3.5 to use for projects while I work through UDacity. Lo and behold, the UDacity courses seem to use 2.7 - wups! So instead of trying to uninstall 3.5 (this procedure seemed to scary for neophytes such as myself), I simply installed 2.7 in addition to the recently installed 3.5 and just run the 2.7 IDLE and Shell. Is this ok, or will I run into problems down the road? |
Is it ok to install both Python 2.7 and 3.5? | 38,438,902 | 3 | 16 | 50,311 | 0 | python,python-3.x,python-2.7,osx-yosemite | As others have said, if the installation directory is different it should be no problem at all.
One thing that'll make your life easier for switching between the two is to use an IDE such as PyCharm, you just have to change a drop down to switch between the two versions. | 0 | 1 | 0 | 0 | 2015-09-27T19:02:00.000 | 7 | 0.085505 | false | 32,811,713 | 1 | 0 | 0 | 6 | Supposedly Python 2.7 is included native to OSX 10.8 and above (if I remember correctly), but I recently installed Python 3.5 to use for projects while I work through UDacity. Lo and behold, the UDacity courses seem to use 2.7 - wups! So instead of trying to uninstall 3.5 (this procedure seemed to scary for neophytes such as myself), I simply installed 2.7 in addition to the recently installed 3.5 and just run the 2.7 IDLE and Shell. Is this ok, or will I run into problems down the road? |
Is it ok to install both Python 2.7 and 3.5? | 34,581,673 | 2 | 16 | 50,311 | 0 | python,python-3.x,python-2.7,osx-yosemite | It should be fine. It's actually pretty common to have multiple Python environments. It helps to prevent dependency conflicts between your projects. That is what is happening when you are using tools like pyenv and virtualenv.
Using tools like pyenv and virtualenv may also help you with the path problems that others mentioned. They have commands to set up the path so that their version of pip, python, etc are used. | 0 | 1 | 0 | 0 | 2015-09-27T19:02:00.000 | 7 | 0.057081 | false | 32,811,713 | 1 | 0 | 0 | 6 | Supposedly Python 2.7 is included native to OSX 10.8 and above (if I remember correctly), but I recently installed Python 3.5 to use for projects while I work through UDacity. Lo and behold, the UDacity courses seem to use 2.7 - wups! So instead of trying to uninstall 3.5 (this procedure seemed to scary for neophytes such as myself), I simply installed 2.7 in addition to the recently installed 3.5 and just run the 2.7 IDLE and Shell. Is this ok, or will I run into problems down the road? |
Is it ok to install both Python 2.7 and 3.5? | 34,580,951 | 0 | 16 | 50,311 | 0 | python,python-3.x,python-2.7,osx-yosemite | I have the same problem and it is not necessary to uninstall one version of Python. Please take care not to mix them up when you search for them in the start menu. You can make a desktop shortcut saying 2.6 and 3.5. | 0 | 1 | 0 | 0 | 2015-09-27T19:02:00.000 | 7 | 0 | false | 32,811,713 | 1 | 0 | 0 | 6 | Supposedly Python 2.7 is included native to OSX 10.8 and above (if I remember correctly), but I recently installed Python 3.5 to use for projects while I work through UDacity. Lo and behold, the UDacity courses seem to use 2.7 - wups! So instead of trying to uninstall 3.5 (this procedure seemed to scary for neophytes such as myself), I simply installed 2.7 in addition to the recently installed 3.5 and just run the 2.7 IDLE and Shell. Is this ok, or will I run into problems down the road?
Is it possible to output to and monitor streams other than stdin, stdout & stderr? (python) | 32,818,127 | 0 | 6 | 437 | 0 | python,linux,macos,terminal,stdout | File write operations are buffered by default, so the file isn't effectively written until either the buffer is full, the file is closed, or you explicitly call flush() on the file.
But anyway: don't use direct file access if you want to log to a file; use either a logging.StreamHandler with an opened file as stream or, better, a logging.FileHandler. Both will take care of flushing the file. | 0 | 1 | 0 | 0 | 2015-09-28T06:59:00.000 | 2 | 0 | false | 32,817,302 | 0 | 0 | 0 | 1 | This is a python question, but also a linux/BSD question.
I have a python script with two threads, one downloading data from the web and the other sending data to a device over a serial port. Both of these threads print a lot of status information to stdout using python's logging module.
What I would like is to have two terminal windows open, side by side, and have each terminal window show the output from one thread, rather than have the messages from both interleaved in a single window.
Are there file descriptors other than stdin, stdout & stderr to write to and connect to other terminal windows? Perhaps this wish is better fulfilled with a GUI?
I'm not sure how to get started with this.
edit: I've tried writing status messages to two different files instead of printing them to stdout, and then monitoring these two files with tail -f in other terminal windows, but this doesn't work for live monitoring because the files aren't written to until you call close() on them. |
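A minimal sketch of the logging.FileHandler approach from the answer above (file names are arbitrary examples); FileHandler flushes after every record, so tail -f shows messages live:

    import logging

    def make_logger(name, path):
        logger = logging.getLogger(name)
        handler = logging.FileHandler(path)  # flushes after each emit()
        handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
        return logger

    download_log = make_logger("download", "download.log")
    serial_log = make_logger("serial", "serial.log")
    download_log.info("fetched a chunk")  # watch with: tail -f download.log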
Can't uninstall Python on Windows (3.4.2) | 32,821,417 | 1 | 3 | 8,420 | 0 | python,windows,uninstallation | Did you try to reinstall the version you want to delete and then uninstall it afterwards? | 0 | 1 | 0 | 0 | 2015-09-28T10:21:00.000 | 2 | 1.2 | true | 32,820,673 | 1 | 0 | 0 | 1 | I accidentally downloaded Python 3.4.2 a while back but I actually needed Python 2.7, so I deleted the 3.4.2 files and downloaded 2.7 instead. Now I need Python 3, so I tried to download it, but I noticed that in the control panel in the Uninstall Programs section it tells me that the 3.4.2 from back then is still on my PC.
Every time I try to uninstall/change/repair it or download a newer version, I can't, and it tells me:
A program required to complete the installation can not be found...
I cannot find any remaining files connected to any sort of Python on my PC. My operating system is Windows 10. Does someone know how to solve this?
Dbus & Bluez programming language | 34,717,559 | 0 | 4 | 1,918 | 0 | python,c,bluetooth-lowenergy,dbus,bluez | Something more to consider:
With the latest BlueZ (e.g. 5.36+), BLE should work fine and has been very stable for me - and remember to add "experimental" when building it and "-E" as a service parameter to get manufacturerData (and other experimental features).
Using the C API, I think your code must be GPL (not 100% sure though). The D-Bus interface allows you to make closed source code (if it's for a company). | 0 | 1 | 0 | 1 | 2015-09-28T13:57:00.000 | 2 | 0 | false | 32,824,889 | 0 | 0 | 0 | 2 | For a project I am doing I have to connect my Linux PC to a Bluetooth LE device. The application I design will be deployed on an ARM embedded system when it is complete.
Searching for documentation online hints that the preferred programming language for these kinds of applications is Python. All the Bluez /test examples are written in Python and there are quite a few sources of information regarding creating BLE applications in Python. Not so much in C.
My superior and I had an argument about whether I should use Python or C. One of his arguments was that there was unacceptable overhead when using Python for setting up Bluetooth LE connections and that Bluetooth LE had to be very timely in order to function properly. My argument was that the overhead would not matter as much, since there were no time constraints regarding Bluetooth LE connections; the application will find devices, connect to a specific one and read a few attributes, which it saves to a file.
My question is: is there any reason to prefer the low-level C approach over using a high-level Python implementation for a basic application that reads GATT services and their characteristics? What would the implications be for an embedded device?
Dbus & Bluez programming language | 32,861,048 | 3 | 4 | 1,918 | 0 | python,c,bluetooth-lowenergy,dbus,bluez | This is quite an open question as there are so many things to consider when making this decision. So the best "answer" might rather be an attempt to narrow down the discussion:
Based on the question, I'm making the assumption that the system you are targeting has D-Bus and Python available with all needed dependencies.
I'd try to narrow down the discussion by first deciding on what BlueZ API to use. If you are planning on using the D-Bus API rather than the libbluetooth C library API, then there is already some overhead introduced by that and I don't believe Python in itself would be the major factor. That should of course be measured/evaluated to know for sure, but ruling out Python while still using D-Bus might be a premature optimization without much impact in practice.
If the C library API is to be used in order to avoid D-Bus overhead then I think you should go with C for the client throughout.
If the "timely manner" factor is very important I believe you will eventually need to have ways to measure performance anyway. Then perhaps a proof of concept of both design options might be the best way to really decide.
If the timing constraints turn out to be a moot question in practice, other aspects should weigh in more, e.g. ease of development (documentation and examples available), testability, and so on. | 0 | 1 | 0 | 1 | 2015-09-28T13:57:00.000 | 2 | 1.2 | true | 32,824,889 | 0 | 0 | 0 | 2 | For a project I am doing I have to connect my Linux PC to a Bluetooth LE device. The application I design will be deployed on an ARM embedded system when it is complete.
Searching for documentation online hints that the preferred programming language for these kinds of applications is Python. All the Bluez /test examples are written in Python and there are quite a few sources of information regarding creating BLE applications in Python. Not so much in C.
My superior and I had an argument about whether I should use Python or C. One of his arguments was that there was unacceptable overhead when using Python for setting up Bluetooth LE connections and that Bluetooth LE had to be very timely in order to function properly. My argument was that the overhead would not matter as much, since there were no time constraints regarding Bluetooth LE connections; the application will find devices, connect to a specific one and read a few attributes, which it saves to a file.
My question is: is there any reason to prefer the low-level C approach over using a high-level Python implementation for a basic application that reads GATT services and their characteristics? What would the implications be for an embedded device?
Python ffmpeg on Windows | 32,859,122 | 0 | 1 | 1,646 | 0 | python,ffmpeg | Running python ***.py in CMD is OK. You shouldn't run the program in PyCharm; perhaps the environment is different. | 0 | 1 | 0 | 0 | 2015-09-28T15:07:00.000 | 1 | 1.2 | true | 32,826,262 | 0 | 0 | 0 | 1 | I had installed ffmpeg on Win7 64-bit. When I use
os.system("ffmpeg -i rtsp://218.204.223.237:554/live/1/66251FC11353191F/e7ooqwcfbqjoo80j.sdp -c copy dump.mp4") in my program from PyCharm, it runs, but I can't play the resulting dump.mp4. When I run the same command in CMD or the Python command line, I get dump.mp4 successfully. Why does this happen, and how can I solve it? I have only been using Python for a short while.
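If the answer's diagnosis is right and PyCharm's environment simply lacks ffmpeg on its PATH, one workaround is to call the binary by absolute path so the result no longer depends on the IDE (a sketch; the install location used here is an assumption):

    import subprocess

    # Hypothetical install path; adjust to wherever ffmpeg.exe actually lives
    FFMPEG = r"C:\ffmpeg\bin\ffmpeg.exe"
    subprocess.call([FFMPEG,
                     "-i", "rtsp://218.204.223.237:554/live/1/66251FC11353191F/e7ooqwcfbqjoo80j.sdp",
                     "-c", "copy", "dump.mp4"])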
How to load balance celery tasks across several servers? | 58,830,493 | 3 | 6 | 4,182 | 0 | python,rabbitmq,celery | The best option is to use celery.send_task from the producing server and deploy the workers onto n instances. The workers can then be run, as @ealeon mentioned, with celery -A proj worker -l info -Ofair.
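A minimal sketch of the producing side (the app name, broker URL and task name here are assumptions, not taken from the question):

    from celery import Celery

    # The producer only needs a Celery app pointed at the shared broker
    app = Celery("proj", broker="amqp://guest@rabbitmq-host//")

    # send_task dispatches by task *name*, so the producer never has to
    # import the task code; only the workers need it on disk
    app.send_task("proj.tasks.process_item", args=[42])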
This way, load will be distributed across all servers without the codebase having to be present on the producing server. | 0 | 1 | 0 | 0 | 2015-09-28T20:18:00.000 | 2 | 0.291313 | false | 32,831,111 | 0 | 0 | 1 | 1 | I'm running Celery on multiple servers, each with a concurrency of 2 or more, and I want to load-balance tasks so that the server with the lowest CPU usage processes them.
For example, let's say I have 2 servers (A and B), each with a concurrency of 2. If I have 2 tasks in the queue, I want A to process one task and B to process the other. But currently it's possible that the first process on A will execute one task and the second process on A will execute the second task, while B sits idle.
Is there a simple way, by means of Celery extensions or configuration, to route tasks to the server with the lowest CPU usage?
How can I decide which python I will open in mac terminal? | 32,854,492 | 0 | 1 | 71 | 0 | python,macos,terminal | Also, if you just need to know which installation of Python the system is using, type which python at the terminal. | 0 | 1 | 0 | 0 | 2015-09-29T21:34:00.000 | 3 | 0 | false | 32,854,150 | 1 | 0 | 0 | 1 | I have two Pythons on my Mac: one is the original and the other was downloaded from the website. When I open Python in the terminal, how can I tell which one I'm opening? Thanks for the help.
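To illustrate the answer above, a sample session might look like this (the paths shown are assumptions; yours depend on PATH and what is installed):

    $ which python
    /usr/local/bin/python
    $ python -c "import sys; print(sys.executable)"
    /usr/local/bin/python

The first command reports the first python found on PATH; the second asks the interpreter itself which binary is running.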
Google app engine, full text search for empty (None) field | 57,089,611 | 0 | 1 | 428 | 0 | python,google-app-engine,full-text-search | Have you tried:
NOT logo_url: Null | 0 | 1 | 0 | 0 | 2015-10-01T08:28:00.000 | 2 | 0 | false | 32,882,856 | 0 | 0 | 1 | 1 | I'd like to use Google AppEngine full text search to search for items in an index that have their logo set to None
I tried:
"NOT logo_url:''"
Is there any way to write such a query, or do I have to add another property such as has_logo?
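For reference, the query suggested in the answer above would be issued from Python roughly like this (a sketch: the index name is an assumption, and whether Null actually matches documents whose field was never set depends on how they were indexed):

    from google.appengine.api import search

    index = search.Index(name="items")
    # Try the answer's suggestion: documents whose logo_url is not set
    results = index.search("NOT logo_url: Null")
    for doc in results:
        print(doc.doc_id)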
Keep a python script running on google VM | 32,886,058 | 2 | 2 | 854 | 0 | python,google-cloud-platform | Since you can open an SSH session, you can install any number of terminal multiplexers such as tmux, screen or byobu.
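For instance, with tmux (session and script names here are assumptions):

    # start a named session, launch the script, then detach with Ctrl-b d
    tmux new -s myscript
    python myscript.py

    # reattach later to check on it
    tmux attach -t myscript

Anything started inside the session keeps running after the SSH connection closes.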
If you can't install things on your VM, invoking the script every minute via a cron job could also solve the issue. | 0 | 1 | 0 | 1 | 2015-10-01T11:01:00.000 | 1 | 1.2 | true | 32,885,938 | 1 | 0 | 0 | 1 | I set up a VM with Google and want it to run a Python script persistently. If I exit the SSH session, the script stops. Is there a simple way to keep it running after I log out?
Where is Python interpreter on Mac? | 32,893,798 | 4 | 1 | 6,930 | 0 | python,macos,virtualenv | Most likely, /usr/local/opt/python3 is a symlink actually pointing to /usr/local/Cellar/python3/3.5.0/bin/python3. ls -l /usr/local/opt/python3 will show what it's pointing to.
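For example (output trimmed; the version segment will match whatever Homebrew installed):

    $ ls -l /usr/local/opt/python3
    lrwxr-xr-x ... /usr/local/opt/python3 -> ../Cellar/python3/3.5.0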
To my knowledge, OSX doesn't have anything installed natively in /usr/local/opt/ without Homebrew.
Also, OSX doesn't come with python3. | 0 | 1 | 0 | 0 | 2015-10-01T17:43:00.000 | 1 | 1.2 | true | 32,893,657 | 1 | 0 | 0 | 1 | I installed Python 3.5 and virtualenv using Homebrew. The python3 symlink in /usr/local/bin points to /usr/local/Cellar/python3/3.5.0/bin/python3, which means that when we execute a .py script with the command python3, the interpreter in that location is used.
But when I view the contents of virtualenv in /usr/local/bin with cat virtualenv, the shebang is #!/usr/local/opt/python3/bin/python3.5, which means that when we execute virtualenv, the interpreter in /usr/local/opt is used.
Why is there a difference in the Python interpreter being used? Which one should be used?