Title (stringlengths 15 to 150) | A_Id (int64, 2.98k to 72.4M) | Users Score (int64, -17 to 470) | Q_Score (int64, 0 to 5.69k) | ViewCount (int64, 18 to 4.06M) | Database and SQL (int64, 0 to 1) | Tags (stringlengths 6 to 105) | Answer (stringlengths 11 to 6.38k) | GUI and Desktop Applications (int64, 0 to 1) | System Administration and DevOps (int64, 1 to 1) | Networking and APIs (int64, 0 to 1) | Other (int64, 0 to 1) | CreationDate (stringlengths 23 to 23) | AnswerCount (int64, 1 to 64) | Score (float64, -1 to 1.2) | is_accepted (bool, 2 classes) | Q_Id (int64, 1.85k to 44.1M) | Python Basics and Environment (int64, 0 to 1) | Data Science and Machine Learning (int64, 0 to 1) | Web Development (int64, 0 to 1) | Available Count (int64, 1 to 17) | Question (stringlengths 41 to 29k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Should I dockerize different servers running on a single machine? | 33,002,100 | 0 | 0 | 27 | 0 | python,docker,webserver | Yes, Docker prefers the "one process per container" approach. I would not see this as overkill, quite the contrary - in your case it might rather soon be beneficial to have the instances of different users better isolated: fewer security risks, easier to maintain - say you need a new version of everything for a new version of your app, but would like to keep some of the users on an old version due to a blocker. | 0 | 1 | 0 | 0 | 2015-10-02T18:00:00.000 | 1 | 1.2 | true | 32,913,119 | 0 | 0 | 1 | 1 | I want to make a simple service where each user has his own (simple and light) webserver.
I want to use an AWS instance to do this.
I understand that I can do that by starting Python's SimpleHTTPserver (Proof of concept) multiple times on different ports, and that the number of servers I can have depends on the resources.
My question is:
Is it better practice, or overkill, to Dockerize each user with his own server? |
GAE/P: Storing list of keys to guarantee getting up to date data | 32,941,257 | 1 | 0 | 60 | 1 | python,google-app-engine,google-cloud-datastore,eventual-consistency | If, like you say in the comments, your lists change rarely and you can't use ancestors (I assume because of write frequency in the rest of your system), your proposed solution would work fine. You can do as many get_multi calls, as frequently as you wish; the datastore can handle it.
Since you mentioned you can handle having that keys list updated as needed, that would be a good way to do it.
You can stream-read a big file (say from cloud storage with one row per line) and use datastore async reads to finish very quickly or use google cloud dataflow to do the reading and processing/consolidating.
dataflow can also be used to instantly generate that keys list file in cloud storage. | 0 | 1 | 0 | 0 | 2015-10-02T20:35:00.000 | 2 | 1.2 | true | 32,915,462 | 0 | 0 | 1 | 1 | In my Google App Engine App, I have a large number of entities representing people. At certain times, I want to process these entities, and it is really important that I have the most up to date data. There are far too many to put them in the same entity group or do a cross-group transaction.
As a solution, I am considering storing a list of keys in Google Cloud Storage. I actually use the person's email address as the key name so I can store a list of email addresses in a text file.
When I want to process all of the entities, I can do the following:
Read the file from Google Cloud Storage
Iterate over the file in batches (say 100)
Use ndb.get_multi() to get the entities (this will always give the most recent data)
Process the entities
Repeat with next batch until done
Are there any problems with this process or is there a better way to do it? |
Unicode Byte Order Mark (BOM) as a python constant? | 32,922,798 | 1 | 2 | 670 | 0 | python,python-3.x,unicode,python-unicode | There isn't one. The bytes constants in codecs are what you should be using.
This is because you should never see a BOM in decoded text (i.e., you shouldn't encounter a string that actually encodes the code point U+FEFF). Rather, the BOM exists as a byte pattern at the start of a stream, and when you decode some bytes with a BOM, the U+FEFF isn't included in the output string. Similarly, the encoding process should handle adding any necessary BOM to the output bytes---it shouldn't be in the input string.
The only time a BOM matters is when either converting into or converting from bytes. | 0 | 1 | 0 | 0 | 2015-10-03T11:56:00.000 | 2 | 0.099668 | false | 32,922,265 | 0 | 0 | 0 | 1 | It's not a real problem in practice, since I can just write BOM = "\uFEFF"; but it bugs me that I have to hard-code a magic constant for such a basic thing. [Edit: And it's error prone! I had accidentally written the BOM as \uFFFE in this question, and nobody noticed. It even led to an incorrect proposed solution.] Surely python defines it in a handy form somewhere?
Searching turned up a series of constants in the codecs module: codecs.BOM, codecs.BOM_UTF8, and so on. But these are bytes objects, not strings. Where is the real BOM?
This is for python 3, but I would be interested in the Python 2 situation for completeness. |
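A small illustration of the point made in this answer (Python 3): the byte-level constants live in codecs, and a BOM-aware codec consumes U+FEFF during decoding, so the string constant is only needed if you insist on writing it yourself.

```python
import codecs

print(codecs.BOM_UTF8)                      # b'\xef\xbb\xbf'
data = codecs.BOM_UTF8 + "hello".encode("utf-8")
print(data.decode("utf-8-sig"))             # 'hello' -- the BOM is not in the string
print("\uFEFF" == codecs.BOM_UTF16_LE.decode("utf-16-le"))  # True
```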
while spawning the fab file it asks me the login password for 'ubuntu' | 33,025,966 | 0 | 0 | 219 | 0 | python,django,amazon-web-services | Looks like there's an issue with your "ec2 key pairs". Make sure you have the correct key and that the permissions of that key are 400.
To know if the key is working try to manually connect to the instance with
ssh -i ~/.ssh/<your-key> ubuntu@<your-host> | 0 | 1 | 0 | 1 | 2015-10-05T12:25:00.000 | 1 | 0 | false | 32,948,568 | 0 | 0 | 1 | 1 | I typed the default password of my AMI, i.e. 'ubuntu', but it didn't work. I even tried with my ssh key. I've browsed enough and nothing has worked yet. Can anybody please help me out?
[] Executing task 'spawn'
Started...
Creating instance
EC2Connection:ec2.us-west-2.amazonaws.com
Instance state: pending
Instance state: pending
Instance state: pending
Instance state: pending
Instance state: running
Public dns: ec2-52-89-191-143.us-west-2.compute.amazonaws.com
Waiting 60 seconds for server to boot...
[ec2-52-89-191-143.us-west-2.compute.amazonaws.com] run: whoami
[ec2-52-89-191-143.us-west-2.compute.amazonaws.com] Login password for 'ubuntu': |
Using AWS to execute on-demand ETL | 44,104,221 | 0 | 0 | 613 | 0 | python,amazon-web-services,etl,emr,amazon-emr | Now you can put your script on AWS Lambda for ETL. It supports scheduler and Trigger on other AWS components. It is on-demand and will charge you only when the Lambda function got executed. | 0 | 1 | 0 | 0 | 2015-10-05T17:44:00.000 | 4 | 0 | false | 32,954,687 | 0 | 0 | 0 | 1 | I want to execute a on-demand ETL job, using AWS architecture.
This ETL process is going to run daily, and I don't want to pay for a EC2 instance all the time. This ETL job can be written in python, for example.
I know that in EMR, I can build my cluster on-demand and execute a hadoop job.
What is the best architecture to run a simple on-demand ETL job? |
How to change python version in windows git bash? | 59,281,928 | 4 | 8 | 11,192 | 0 | python,git-bash | Follow these steps:
Open Git bash, cd ~
Depending on your favorite editor (touch, code or vim, in my case), type code .bashrc
Add the line alias python='winpty c:/Python27/python.exe' to the open .bashrc
Save and Close.
Try python --version on git bash again.
Hopefully it works for you. | 0 | 1 | 0 | 0 | 2015-10-06T09:09:00.000 | 2 | 0.379949 | false | 32,965,980 | 1 | 0 | 0 | 1 | I've installed python 3.5 and python 2.7 on windows, and I've added the path for python 2.7 to the PATH variable. When I type 'python --version' in windows cmd, it prints 2.7. But when I type 'python --version' in git bash, it prints 3.5.
How to change python version in windows git bash to 2.7? |
Invoke gdb from python script | 32,979,189 | 0 | 0 | 886 | 0 | python,gdb | I was able to figure it out. What I understood is: | 0 | 1 | 0 | 1 | 2015-10-06T19:36:00.000 | 2 | 0 | false | 32,978,233 | 0 | 0 | 0 | 1 | Where should my python files be stored so that I can run them using gdb? I have a custom gdb located at /usr/local/myproject/bin. I start my gdb session by calling ./arm-none-eabi-gdb from the above location.
GDB embeds the Python interpreter so it can use Python as an extension language.
You can't just import gdb from /usr/bin/python like it's an ordinary Python library because GDB isn't structured as a library.
What you can do is source MY-SCRIPT.py from within gdb (equivalent to running gdb -x MY-SCRIPT.py). | 0 | 1 | 0 | 1 | 2015-10-06T19:36:00.000 | 2 | 0 | false | 32,978,233 | 0 | 0 | 0 | 1 | where should my python files be stored so that I can run that using gdb. I have custom gdb located at /usr/local/myproject/bin. I start my gdb session by calling ./arm-none-eabi-gdb from the above location.
I don't know how this gdb and python are integrated into each other.
Can anyone help? |
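A minimal sketch of the "source it from within gdb" approach described in this answer; the breakpoint location and program name are made up.

```python
# my_script.py -- run as: gdb -x my_script.py ./my_program
import gdb   # only importable inside gdb's embedded Python interpreter

gdb.execute("break main")        # set a breakpoint on main (example location)
gdb.execute("run")               # start the program; stops at the breakpoint
frame = gdb.selected_frame()
print(frame.name())              # prints the name of the stopped frame, e.g. 'main'
```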
How to install python correctly on Linux? | 33,025,633 | 0 | 0 | 156 | 0 | python,linux,directory,installation | It depends on your distro. Most distros have it in their repositories, and to install it you just need to use your package manager. For example, on Ubuntu it's: sudo apt-get install python and on Fedora it's: su -c 'dnf install python' | 0 | 1 | 0 | 0 | 2015-10-08T20:37:00.000 | 1 | 0 | false | 33,025,522 | 1 | 0 | 0 | 1 | How can I install python correctly?
When I install it manually it is in the
/usr/local/bin
directory, but that causes many problems; for example, I am not able to install modules.
I want to install it into
/usr/bin |
Sudden performance drop going from 1024 to 1025 bytes | 33,051,195 | 1 | 2 | 68 | 0 | python,django,django-rest-framework | This turned out to be related to libcurl's default "Expect: 100-continue" header. | 0 | 1 | 0 | 0 | 2015-10-09T02:34:00.000 | 1 | 0.197375 | false | 33,028,985 | 0 | 0 | 1 | 1 | I am running a dev server using runserver. It exposes a json POST route. Consistently I'm able to reproduce the following performance artifact - if request payload is <= 1024 bytes it runs in 30ms, but if it is even 1025 bytes it takes more than 1000ms.
I've profiled and the profile points to rest_framework/parsers.py JSONParser.parse() -> django/http/request HTTPRequest.read() -> django/core/handlers/wsgi.py LimitedStream.read() -> python2.7/socket.py _fileobject.read()
Not sure if there is some buffer issue. I'm using Python 2.7 on Mac os x 10.10. |
How to stop console from exiting | 33,034,977 | 0 | 0 | 67 | 0 | python | If it's not a multithreaded program, then just let the program do whatever it needs and then: raw_input("Press Enter to stop the charade.\n")
Maybe it's not exactly what you're looking for but on the other hand you should not rely on a predefined sleep time. | 0 | 1 | 0 | 0 | 2015-10-09T09:29:00.000 | 3 | 0 | false | 33,034,687 | 1 | 0 | 0 | 1 | I want the program to wait like 5 seconds before exiting the console after finishing whatever it does, so the user can read the "Good Bye" message, how can one do this? |
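If the goal really is a fixed ~5 second pause rather than waiting for a key press, a plain sleep at the end of the script also works:

```python
import time

print("Good Bye")
time.sleep(5)   # keep the console visible for about 5 seconds before exiting
```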
Debugger is not detaching from Winpdb | 33,562,769 | 0 | 0 | 96 | 0 | python,c++,linux,winpdb | Finally, I fixed the issue by using the latest version of PythonQt. | 0 | 1 | 0 | 0 | 2015-10-09T12:01:00.000 | 1 | 1.2 | true | 33,037,770 | 0 | 0 | 0 | 1 | I am using PythonQt to execute a python script (because I need to call C++ methods from the python script).
My winpdb version is 1.4.6 and my machine is CentOS 6.5.
Now I want to enable debugging in the python script.
I have added rpdb2.start_embedded_debugger('test') inside the script and called the PythonQt.EvalFile() function; now the script is waiting for the debugger.
I have opened the winpdb UI from the terminal and attached to the debugger. I am able to do "Next", "Step into" etc. and all local variables are visible correctly.
But when I try to detach the debugger it does not detach. The status shows "DETACHING", nothing happens, and I cannot even close winpdb. The only way to exit is to kill winpdb.
If I run the same script file from the terminal it works properly (python ) and detaches as expected.
By looking at the logs I found that if I run from the terminal the debug channel is encrypted, but when running from PythonQt the debug channel is NOT encrypted; I am not sure if this has any relation to the detaching issue.
By looking further into the rpdb2.py code I found that Winpdb hangs on the line self.getSession().getProxy().request_go(fdetach) of request_go(self, fdetach = False): in rpdb2.py.
Port 51000 is still in the ESTABLISHED state.
Please advise me on this. |
PyZmq ensure connect() after bind() | 33,075,923 | 0 | 0 | 129 | 0 | python,ipc,zeromq,pyzmq | put some sleep before connecting. so bind will run first, and connect will continue after waiting for sometime | 0 | 1 | 0 | 0 | 2015-10-12T07:17:00.000 | 1 | 0 | false | 33,075,289 | 0 | 0 | 0 | 1 | Trying to establish some communication between two python processes , I've come to use pyzmq. Since the communication is simple enough I 'm using the Zmq.PAIR messaging pattern with a tcp socket. Basically one process binds on an address and the other one connects to the same address . However both operations happen at startup , and since I cannot control the order in which the processes start , I am often encountering the case in which 'connect()' is called before 'bind()' which leads to failing in establishing communication.
Is there a way to know a socket is not yet ready to be connected to?
What are the strategies to employ in such situations in order to obtain a safe connection? |
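A sketch of the "sleep before connecting" suggestion for the connecting side; the address is an example. Note that ZeroMQ also queues and retries connects internally, which is why connect-before-bind usually recovers on its own.

```python
import time
import zmq

ADDR = "tcp://127.0.0.1:5556"   # example address shared by both processes
ctx = zmq.Context()
sock = ctx.socket(zmq.PAIR)
time.sleep(1.0)                 # give the binding process a head start
sock.connect(ADDR)
sock.send(b"hello")
```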
Control Libreoffice Impress from Python | 33,251,685 | 0 | 1 | 3,178 | 0 | python,libreoffice | Finally, I found a way to solve this using Python, in an elegant and easy way. Instead of libraries or APIs, I'm using a socket to connect to Impress and control it.
At the end of the post you can read the full text that explains how to control Impress this way. It is easy, and amazing.
You send a message using Python to Impress (which is listening on some port), it receives the message and does things based on your request.
You must enable this "remote control" feature in the app. I solved my problem using this.
Thanks for your replies!.
LibreOffice Impress Remote Protocol Specification
Communication is over a UTF-8 encoded character stream.
(Using RTL_TEXTENCODING_UTF8 in the LibreOffice portion.)
TCP
More TCP-specific details on setup and initial handshake to be
written, but the actual message protocol is the same as for Bluetooth.
Message Format
A message consists of one or more lines. The first line is the message description,
further lines can add any necessary data. An empty line concludes the message.
I.e. "MESSAGE\n\n" or "MESSAGE\nDATA\nDATA2...\n\n"
You must keep reading a message until an empty line (i.e. double
new-line) is reached to allow for future protocol extension.
Initialisation
Once connected the server sends "LO_SERVER_SERVER_PAIRED".
(I.e. "LO_SERVER_SERVER_PAIRED\n\n" is sent over the stream.)
Subsequently the server will send either slideshow_started if a slideshow is running,
or slideshow_finished if no slideshow is running. (See below for details.)
The current server implementation then proceeds to send all slide notes and previews
to the client. (This should be changed to prevent memory issues, and a preview
request mechanism implemented.)
Commands (Client to Server)
The client should not assume that the state of the server has changed when a
command has been sent. All changes will be signalled back to the client.
(This is to allow for cases such as multiple clients requesting different changes, etc.)
Any lines in [square brackets] are optional, and should be omitted if not needed.
transition_next
transition_previous
goto_slide
slide_number
presentation_start
presentation_stop
presentation_resume // Resumes after a presentation_blank_screen.
presentation_blank_screen
[Colour String] // Colour the screen will show (default: black). Not
// implemented, and format hasn't yet been defined.
As of gsoc2013, these commands are extended to the existing protocol, since server-end are tolerant with unknown commands, these extensions doesn't break backward compatibility
pointer_started // create a red dot on screen at initial position (x,y)
initial_x // This should be called when user first touch the screen
initial_y // note that x, y are in percentage (from 0.0 to 1.0) with respect to the slideshow size
pointer_dismissed // This dismiss the pointer red dot on screen, should be called when user stop touching screen
pointer_coordination // This update pointer's position to current (x,y)
current_x // note that x, y are in percentage (from 0.0 to 1.0) with respect to the slideshow size
current_y // unless screenupdater's performance is significantly improved, we should consider limit the update frequency on the
// remote-end
Status/Data (Server to Client)
slideshow_finished // (Also transmitted if no slideshow running when started.)
slideshow_started // (Also transmitted if a slideshow is running on startup.)
numberOfSlides
currentSlideNumber
slide_notes
slideNumber
[Notes] // The notes are an html document, and may also include \n newlines,
// i.e. the client should keep reading until a blank line is reached.
slide_updated // Slide on server has changed
currentSlideNumber
slide_preview // Supplies a preview image for a slide.
slideNumber
image // A Base 64 Encoded png image.
As of gsoc2013, these commands are extended to the existing protocol, since remote-end also ignore all unknown commands (which is the case of gsoc2012 android implementation), backward compatibility is kept.
slideshow_info // once paired, the server-end will send back the title of the current presentation
Title | 0 | 1 | 0 | 0 | 2015-10-13T00:46:00.000 | 3 | 1.2 | true | 33,092,424 | 0 | 0 | 1 | 1 | Im writing an application oriented to speakers and conferences. Im writing it with Python and focused on Linux.
I would like to know if it's possible to control LibreOffice Impress with Python, under Linux, in some way.
I want to start an instance of LibreOffice Impress with some .odp file loaded, from my Python app. Then, I would like to be able to receive from the odp some info like: previous, current and next slide. Or somehow generate the images of the slides on the go.
Finally, I want to control LibreOffice in real time. This is: move through the slides using direction keys; right and left.
The idea is to use python alone, but I don't mind using external libraries or frameworks.
Thanks a lot. |
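A minimal sketch of driving Impress over the message format documented above ("COMMAND\n[DATA...]\n\n", UTF-8). The host/port and the assumption that the remote-control feature is enabled and already paired are mine, not part of the quoted spec.

```python
import socket

def send_command(sock, *lines):
    # Each message is one command line plus optional data lines, ended by a blank line.
    sock.sendall(("\n".join(lines) + "\n\n").encode("utf-8"))

s = socket.create_connection(("127.0.0.1", 1599))   # port is an assumption
send_command(s, "presentation_start")
send_command(s, "transition_next")
send_command(s, "goto_slide", "3")
```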
Send a wake on lan packet from a docker container | 67,018,294 | 3 | 12 | 5,187 | 0 | python,docker,uwsgi,wake-on-lan | It seems that UDP broadcast from docker isn't being routed properly (possibly only broadcasted in the container itself, not on the host).
You can't send UDP WoL messages directly, as the device you're trying to control is 'offline' it doesn't show up in your router's ARP table and thus the direct message can't be delivered.
You may try setting (CLI) --network host or (compose) network_mode: host.
If you feel this may compromise security (since your container's/host network are more directly 'connected') or otherwise interferes with your container; you may create/use a separated 'WoL' container. | 0 | 1 | 0 | 0 | 2015-10-13T11:33:00.000 | 1 | 0.53705 | false | 33,101,603 | 0 | 0 | 0 | 1 | I have a docker container running a python uwsgi app. The app sends a wake on lan broadcast packet to wake a pc in the local network.
It works fine without the use of docker (normal uwsgi app directly on the server), but with docker it won't work.
I exposed port 9/udp and bound it to port 9 of the host system.
What am I missing here? Or, in other words, how can I send a wake on lan command from a docker container to the outside network? |
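For completeness, a sketch of what the Wake-on-LAN send itself amounts to, independent of the Docker networking fix above: a UDP broadcast of 6 x 0xFF followed by the target MAC repeated 16 times (Python 3; the MAC address below is a placeholder).

```python
import socket

def wake(mac="00:11:22:33:44:55", broadcast="255.255.255.255", port=9):
    # Magic packet: 6 bytes of 0xFF, then the 6-byte MAC repeated 16 times.
    payload = bytes.fromhex("FF" * 6 + mac.replace(":", "") * 16)
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(payload, (broadcast, port))
    s.close()
```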
Finding if a set of lines exists in a large text file | 33,118,839 | 1 | 1 | 35 | 0 | python,linux,windows,bash,shell | If repetition doesn't matter, then this command will do it:
sort <(sort file1 | uniq) <(sort file2 | uniq) | uniq -d | 0 | 1 | 0 | 0 | 2015-10-14T06:58:00.000 | 1 | 1.2 | true | 33,118,610 | 1 | 0 | 0 | 1 | How do i find out the intersection of 2 files in windows?
TextFile A: 100GB
TextFile B: 10MB
All i can think of is using python
I would read the lines in textfile B into memory in python and compare with each line in text file A.
I was wondering if there is any way to do it via the command prompt in linux/windows. |
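A Python take on the same idea, matching the plan in the question (load the small file into a set, stream the large one); the file names are illustrative.

```python
# Read the 10 MB file into a set, then stream the 100 GB file line by line.
with open("TextFileB.txt") as small:
    wanted = set(line.rstrip("\n") for line in small)

with open("TextFileA.txt") as big, open("intersection.txt", "w") as out:
    for line in big:
        if line.rstrip("\n") in wanted:
            out.write(line)
```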
Python code to open with another program? | 33,137,802 | 0 | 0 | 61 | 0 | python | Lots of options. Which you should choose depends on lots of things. Sorry for being so vague, but your question does not give any details besides the name Chrome.
How ever you start Chrome, e. g. by a button or menu of your window manager, maybe you can tweak that button or menu to start both programs. This is probably the simplest solution, but of course it bears the risk that a different event starts Chrome (e. g. clicking a link in a mail program). This would then pass-by your Program.
You can write a daemon process which you start via login-scripts at login time. That daemon would try to figure out when you start Chrome (e. g. by polling the process table) and react by starting your accompanying program.
You can configure Chrome so that it starts your program upon starting time. How to do this is a whole new question in itself with lots of answers; options include at least writing a plugin for Chrome which does what you want.
You can have your program up and running all the time (started via login scripts, i. e. too early) but stay invisible and passive until Chrome starts up. This is basically the same as option 2, just that your program is itself that daemon.
In case you tell us more, we probably can give narrower and thus more fitting answers. | 0 | 1 | 0 | 0 | 2015-10-14T23:48:00.000 | 1 | 0 | false | 33,137,569 | 0 | 0 | 0 | 1 | I have a program I want to open when I open Chrome. Instead of opening them separately, I want it to automatically start when Chrome starts. How would I write code to have the program attach itself to Chrome? I don't want the program to start on startup, just when Chrome starts. I know I can right click on the Chrome icon on my desktop and change the properties to open both programs but I want to know how to do the same thing with code. |
python: APScheduler in WSGI app | 33,232,473 | 3 | 5 | 1,030 | 0 | python,mod-wsgi,wsgi,apscheduler | You're right -- the scheduler won't start until the first request comes in.
Therefore running a scheduler in a WSGI worker is not a good idea. A better idea would be to run the scheduler in a separate process and connect to the scheduler when necessary via some RPC mechanism like RPyC or Execnet. | 0 | 1 | 0 | 0 | 2015-10-15T10:01:00.000 | 1 | 0.53705 | false | 33,145,523 | 0 | 0 | 1 | 1 | I would like to run APScheduler which is a part of WSGI (via Apache's modwsgi with 3 workers) webapp. I am new in WSGI world thus I would appreciate if you could resolve my doubts:
If APScheduler is part of the webapp, does it become alive only after the first request (the first one after starting/restarting Apache) that is handled by at least one worker? Starting/restarting Apache won't start it - at least one request is needed.
What about concurrent requests - would every worker run the same set of APScheduler's tasks, or will there be only one set shared between all workers?
Would the once-started process (webapp run via a worker) stay alive (so APScheduler's tasks will execute), or could it terminate after some idle time (as a consequence - APScheduler's tasks won't execute)?
Thank you! |
unable save file in a shared folder on a remote ubuntu machine using notepad++ | 33,161,926 | 0 | 0 | 59 | 0 | python,ubuntu | Try to kill the process which is opening the python file
or you can reboot your windows
It may resolve the problem | 0 | 1 | 0 | 0 | 2015-10-16T02:45:00.000 | 1 | 0 | false | 33,161,725 | 1 | 0 | 0 | 1 | I use a shared folder to share files in linux with a windows machine. However, after i have edited a python file using notepad++ in windows, changed the owner of the file to the user of ubuntu, and run the file, i wanted to edit the file again, when i save the file, I get the following error.
'Please check whether if this file is opened in another program'
Is it related to the Linux daemon? If both machines are Windows, I can edit it as administrator, but I am new to Linux and do not know how to deal with it.
Thanks for your help!! |
Multi-threading on Google app engine | 33,184,298 | 0 | 0 | 921 | 0 | multithreading,python-2.7,sockets,google-app-engine,apple-push-notifications | Found the issue. I was calling start_background_thread with the argument set to function() (i.e. calling it). When I fixed it to pass function itself, it worked as expected. | 0 | 1 | 0 | 0 | 2015-10-17T07:18:00.000 | 1 | 0 | false | 33,183,963 | 0 | 0 | 1 | 1 | Does Google App engine support multithreading? I am seeing conflicting reports on the web.
Basically, I am working on a module that communicates with Apple Push Notification server (APNS). It uses a worker thread that constantly looks for new messages that need to be pushed to client devices in a pull queue.
After it gets a message and sends it to APNS, I want it to start another thread that checks if APNS sends any error response in return. Basically, look for incoming bytes on same socket (it can only be error response per APNS spec). If nothing comes in 60s since the last message send time in the main thread, the error response thread terminates.
While this "error response handler" thread is waiting for bytes from APNS (using a select on the socket), I want the main thread to continue looking at the pull queue and send any packets that come immediately.
I tried starting a background thread to do the function of the error response handler. But this does not work as expected since the threads are executed serially. I.e control returns to the main thread only after the error response thread has finished its 60s wait. So, if there are messages that arrive during this time they sit in the queue.
I tried threadsafe: false in .yaml
My questions are:
Is there any way to have true multithreading in GAE. I.e the main thread and the error response thread execute in parallel in the above example instead of forcing one to wait while other executes?
If no, is there any solution to the problem I am trying to solve? I tried kick-starting a new instance to handle the error response (basically use a push queue), but I don't know how to pass the socket object the error response thread will have to listen to as a parameter to this push queue task (since dill is not supported to serialize socket objects) |
Command line programs in a web (python) | 33,184,244 | 1 | 1 | 179 | 0 | python-2.7,web-applications,command-line,pycharm | Based on what you've described, there are many ways to approach this:
Create a terminal emulator on your webpage.
If you want nicer UI, you can set up any web framework and have your command line programs in the backend, while exposing frontend interfaces to allow users to input parameters and see the results.
If you're just trying to surface the functionalities of your programs, you can wrap them as services which application can call and use (including your web terminal/app) | 0 | 1 | 0 | 0 | 2015-10-17T07:40:00.000 | 1 | 1.2 | true | 33,184,127 | 0 | 0 | 1 | 1 | I am new to programming and web apps so i am not even sure if this question is obvious or not.
Sometimes I find command line programs more time-efficient and easier to use (even for the users). So is there a way to publish my command line programs, with their command line interface, to the web as a web app using CGI or WSGI?
For example, if I make a program to calculate all the formulas in math, can I publish it to the web in its command line form? I am using python in pycharm. I have tried using cgi but couldn't do much because cgi has more to do with forms being sent to a data server for information comparison and storage.
-Thanks |
Can we force run the jobs in APScheduler job store? | 43,736,075 | 0 | 4 | 1,592 | 0 | python,apscheduler | Another approach: you can write logic of your job in separate function. So, you will be able to call this function in your scheduled job as well as somewhere else. I guess that this is a more explicit way to do what you want. | 0 | 1 | 0 | 1 | 2015-10-19T10:14:00.000 | 3 | 0 | false | 33,211,867 | 0 | 0 | 0 | 1 | Using APScheduler version 3.0.3. Services in my application internally use APScheduler to schedule & run jobs. Also I did create a wrapper class around the actual APScheduler(just a façade, helps in unit tests). For unit testing these services, I can mock the this wrapper class. But I have a situation where I would really like the APScheduler to run the job (during test). Is there any way by which one can force run the job? |
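A sketch of the "keep the job body in a plain function" suggestion, so the same code can be scheduled and also invoked directly (for example from a test); the names are illustrative and the API shown is APScheduler 3.x.

```python
from apscheduler.schedulers.background import BackgroundScheduler

def do_work():
    print("doing the actual work")

scheduler = BackgroundScheduler()
scheduler.add_job(do_work, "interval", minutes=5)   # normal scheduled execution
scheduler.start()

do_work()   # "force run": just call the same function whenever you need it
```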
Google App Engine Python Cron Job | 33,220,726 | 0 | 0 | 34 | 0 | python,google-app-engine,cron | Unfortunately, you cannot combine the weekday option with the interval.
You could add a switch in the request handler of your cron job that will just exit if the current weekday is not Saturday, while your cron job is scheduled "every 2 minutes from 01:00 to 03:00". But that means your handler will be called 300 times per week doing nothing, and only doing something the other 60 times.
Alternatively, you could combine an "every saturday 01:00" cron-job (as dispatcher) that will create 60 push tasks (as worker) with countdown or ETA, spread between 01:00 and 03:00. However, I don't think the execution time is guaranteed. | 0 | 1 | 0 | 0 | 2015-10-19T17:18:00.000 | 1 | 0 | false | 33,220,279 | 0 | 0 | 0 | 1 | I wanted to run my cron job as 'schedule: every saturday every 2 minutes from 01:00 to 3:00', and it won't allow this format. Is it possible to set a cron job to target another cron job? Or is my schedule possible, just not in the correct format? |
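A sketch of the "switch in the request handler" idea from the answer: schedule the cron job "every 2 minutes from 01:00 to 03:00" and bail out unless it is Saturday. The handler name and run_checkin() are placeholders.

```python
import datetime
import webapp2

class CheckinHandler(webapp2.RequestHandler):
    def get(self):
        if datetime.datetime.utcnow().weekday() != 5:   # Monday == 0, Saturday == 5
            return                                      # wrong day: do nothing
        run_checkin()                                   # placeholder for the real work
```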
Handling a literal space in a filename | 33,241,435 | 6 | 3 | 4,377 | 0 | python,linux,file,space | You don't need to (and shouldn't) escape the space in the file name. When you are working with a command line shell, you need to escape the space because that's how the shell tokenizes the command and its arguments. Python, however, is expecting a file name, so if the file name has a space, you just include the space. | 0 | 1 | 0 | 0 | 2015-10-20T15:52:00.000 | 1 | 1.2 | true | 33,241,211 | 0 | 0 | 0 | 1 | I have problem with os.access(filename, os.R_OK) when file is an absolute path on a Linux system with space in the filename. I have tried many ways of quoting the space, from "'" + filename + "'" to filename.replace(' ', '\\ ') but it doesn't work.
How can I escape the filename so my shell knows how to access it? In terminal I would address it as '/home/abc/LC\ 1.a' |
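The answer's point in two lines: pass the literal name, no shell-style escaping needed.

```python
import os

print(os.access('/home/abc/LC 1.a', os.R_OK))   # True if the file is readable
```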
Python Script not Running - Has to be something simple | 33,246,962 | 0 | 0 | 51 | 0 | python,python-2.7,fedora-21 | Path to python was different than other user. User was pointing to canopy. | 0 | 1 | 0 | 1 | 2015-10-20T20:52:00.000 | 2 | 0 | false | 33,246,572 | 0 | 0 | 0 | 1 | OS: Fedora 21
Python: 2.7.6
I run a python script as root or using sudo it runs fine. If I run it as just the user I get the following:
Traceback (most recent call last):
File "/home/user/dev_ad_list.py", line 12, in
import ldap
ImportError: No module named ldap
selinux=disabled -- What other security is preventing a user from running a python script that imports ldap |
Cannot open .py file in Google Virtual Machine SSH Terminal | 33,264,941 | 1 | 1 | 72 | 0 | python,ssh,google-compute-engine | X-Windows (X11 nowadays) is a client-server architecture. You can forward connections to your x server with a -X (uppercase) option to ssh (ie $ ssh -X username@server.com). This should work if everything is installed correctly on the server (apt-get usually does a good job of this, but I don't have a lot of experience with kwrite).
EDIT
from the ssh man page
X11 forwarding should be enabled with caution. Users with the ability to bypass file permissions on the remote host (for the user's X authorization
database) can access the local X11 display through the forwarded connection. An attacker may then be able to perform activities such as keystroke monitoring.
For this reason, X11 forwarding is subjected to X11 SECURITY extension restrictions by default. Please refer to the ssh -Y option and the
ForwardX11Trusted directive in ssh_config(5) for more information.
and the relevant -Y
-Y Enables trusted X11 forwarding. Trusted X11 forwardings are not subjected to the X11 SECURITY extension controls. | 0 | 1 | 0 | 1 | 2015-10-21T16:05:00.000 | 1 | 0.197375 | false | 33,264,119 | 0 | 0 | 0 | 1 | I cannot open .py file through Google VM SSH Console. Kwrite and sudo apt-get install xvfb are installed.
My command:
kwrite test.py
I get the following error:
kwrite: Cannot connect to X server.
Do I need to change the command/install additional software?
Thanks |
Methods to schedule a task prior to runtime | 33,283,524 | 2 | 3 | 50 | 0 | python,windows,cron,crontab,job-scheduling | cron is best for jobs that you want to repeat periodically. For one-time jobs, use at or batch. | 0 | 1 | 0 | 1 | 2015-10-22T12:13:00.000 | 1 | 1.2 | true | 33,280,783 | 0 | 0 | 0 | 1 | What are the best methods to set a .py file to run at one specific time in the future? Ideally, its like to do everything within a single script.
Details: I often travel for business so I built a program to automatically check me in to my flights 24 hours prior to takeoff so I can board earlier. I currently am editing my script to input my confirmation number and then setting up cron jobs to run said script at the specified time. Is there a better way to do this?
Options I know of:
• current method
• put code in the script to delay until x time. Run the script immediately after booking the flight and it would stay open until the specified time, then check me in and close. This would prevent me from shutting down my computer, though, and my machine is prone to overheating.
Ideal method: input my confirmation number & flight date, run the script, have it set up whatever cron automatically, be done with it. I want to make sure whatever method I use doesn't include keeping a script open and running in the background. |
How to enable the lazy-apps in uwsgi to use fork () in the code? | 58,931,038 | 1 | 3 | 2,374 | 0 | python,django-views,fork,uwsgi | use lazy-apps = true instead of 1 | 0 | 1 | 0 | 1 | 2015-10-22T21:17:00.000 | 1 | 0.197375 | false | 33,290,927 | 0 | 0 | 1 | 1 | I use Debian + Nginx + Django + UWSGI.
One of my functions uses fork() in the file view.py (the fork works well), and immediately after it I have written return render (request, ...
After the fork() the page loads for a long time and after that the browser prints the error "Web page not available". On the other hand the error doesn't occur if I reload the page during loading (because I don't launch the fork() again).
In the uWSGI documentation it says:
uWSGI tries to (ab)use the Copy On Write semantics of the fork() call whenever possible. By default it will fork after having loaded your applications to share as much of their memory as possible. If this behavior is undesirable for some reason, use the lazy-apps option. This will instruct uWSGI to load the applications after each worker’s fork(). Beware as there is an older options named lazy that is way more invasive and highly discouraged (it is still here only for backward compatibility)
I do not understand everything, and I wrote the uWSGI configuration option lazy-apps: 1 in my uwsgi.yaml.
It does not help - am I doing something wrong?
What do I do with this problem?
P.S. Other options besides fork() do not fit my case...
PP.S. Sorry, I used google translate .. |
Python and Labview | 33,306,025 | 1 | 2 | 2,470 | 0 | python,labview | Why not use the System Exec.vi in Connectivity->Libraries and Executables menu?
You can execute the script and get the output. | 0 | 1 | 0 | 1 | 2015-10-23T12:51:00.000 | 4 | 0.049958 | false | 33,302,773 | 0 | 0 | 0 | 2 | I need to call a Python script from Labview, someone know which is the best method to do that?
I've tried Labpython, but it is not supported on newest versions of Labview and I'm not able to use it on Labview-2014.
Definitively, I'm looking for advice about python integration. I know these two solutions:
1) Labpython: a good solution, but it is obsolete
2) Execute the python script with the shell_execute block in Labview. I think that it isn't the best solution because it is very hard to get the output of the python script |
Python and Labview | 55,798,097 | 0 | 2 | 2,470 | 0 | python,labview | You can save Python script as a large string constant(or load from text file) within the LabVIEW vi so that it can be manipulated within LabVIEW and then save that to a text file then execute the Python script using command line in LabVIEW. Python yourscript enter | 0 | 1 | 0 | 1 | 2015-10-23T12:51:00.000 | 4 | 0 | false | 33,302,773 | 0 | 0 | 0 | 2 | I need to call a Python script from Labview, someone know which is the best method to do that?
I've tried Labpython, but it is not supported on newest versions of Labview and I'm not able to use it on Labview-2014.
Definitively, I'm looking for advice about python integration. I know these two solutions:
1) Labpython: a good solution, but it is obsolete
2) Execute the python script with the shell_execute block in Labview. I think that it isn't the best solution because it is very hard to get the output of the python script |
Send a DOS command in a virtual machines cmd through python script | 33,345,750 | 0 | 1 | 615 | 0 | python,testing,virtual-machine,squish | You can install an ssh server on the Windows machine and then use the paramiko module to communicate with it or you can also use wmi command to remotely execute command on Windows system. | 0 | 1 | 0 | 0 | 2015-10-26T12:19:00.000 | 1 | 0 | false | 33,345,609 | 0 | 0 | 0 | 1 | I'll get right into the point.
The problem is:
my localmachine is a Windows OS
I launched a Windows Virtual Machine (through VirtualBox) that awaits some python commands
on my localhost I have a python script that I execute and
after the VM is started, I want the script to open inside the VM, a cmd.exe process
after cmd.exe opens up, the python script should send to that cmd.exe, inside the VM, the delete command "del c:\folder_name"
I searched various issues on StackOverflow that suggested using subprocess.call or subprocess.Popen, but unfortunately none of them worked in my case, because I'm sure that all of those solutions were meant to work on localhost and not inside a virtual machine, as I want.
Any suggestions? Thank you.
PS: I'm trying to do this without installing other packages in host/guest.
UPDATE:
Isn't there any solution that will allow me to do this without installing something on the VM?! |
Escape characters in Google AppEngine full text search | 33,351,187 | 0 | 0 | 92 | 0 | python,google-app-engine | It might help to include the code in question, but try putting a \ before the +, that's what can escape things within quotes in python, so it might work here. E.g.: C\+ | 0 | 1 | 0 | 0 | 2015-10-26T16:30:00.000 | 1 | 0 | false | 33,350,869 | 0 | 0 | 1 | 1 | I'm using full text search and I'd like to search for items that have a property with value 'C+'
is there a way I can escape the '+' Char so that this search would work? |
Architectual pattern for CLI tool | 33,353,621 | 0 | 0 | 355 | 0 | python,design-patterns,command-line-interface,restful-architecture,n-tier-architecture | Since your app is not very complex, I see 2 layers here:
ServerClient: it provides an API for remote calls and hides any details. It knows how to access the HTTP server, provide auth, deal with errors etc. It has methods like do_something_good() which anyone may call without caring whether it is a remote method or not.
CommandLine: it uses optparse (or argparse) to implement CLI, it may support history etc. This layer uses ServerClient to access remote service.
Both layers do not know anything about each other (only protocol like list of known methods). It will allow you to use somethign instead of HTTP Rest and CLI will still work. Or you may change CLI with batch files and HTTP should work. | 0 | 1 | 0 | 0 | 2015-10-26T18:50:00.000 | 1 | 1.2 | true | 33,353,398 | 0 | 0 | 1 | 1 | I am going to write some HTTP (REST) client in Python. This will be a Command Line Interface tool with no gui. I won't use any business logic objects, no database, just using an API to communicate with the server (using Curl). Would you recommend me some architectual patterns for doing that, except for Model View Controller?
Note: I am not asking for design patterns like Command or Strategy. I just want to know how to segregate and decouple abstraction layers.
I think using MVC is pointless given the fact that there is no business logic - please correct me if I'm wrong. Please give me your suggestions!
Do you know any examples of CLI projects (in any language, not necessarily in Python) that are well maintained and with clean code?
Cheers |
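A bare-bones sketch of the two layers described in the answer; the base URL and endpoint are invented, and requests stands in here for whatever HTTP/curl wrapper you use.

```python
import requests

class ServerClient(object):
    """Hides HTTP access, auth and error handling behind plain methods."""
    def __init__(self, base_url):
        self.base_url = base_url

    def do_something_good(self):
        return requests.get(self.base_url + "/something-good").json()

class CommandLine(object):
    """Parses input, delegates to ServerClient, prints results."""
    def __init__(self, client):
        self.client = client

    def run(self):
        print(self.client.do_something_good())

if __name__ == "__main__":
    CommandLine(ServerClient("http://example.com/api")).run()
```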
How do I fix the 'BlobService' is not defined' error | 42,435,442 | 0 | 0 | 3,863 | 0 | python,azure | BlobService is function you are trying to call, but it is not defined anywhere. It should be defined when you call from azure.storage import *. It is probably not being called, due to a difference in package versions.
Calling from azure.storage.blob import * should work, as it is now invoked correctly. | 0 | 1 | 0 | 0 | 2015-10-26T19:25:00.000 | 4 | 0 | false | 33,353,968 | 0 | 0 | 0 | 3 | I've installed the azure SDK for Python (pip install azure).
I've copied the Python code on the MS Azure Machine Learning Batch patch for the ML web-service into an Anaconda Notebook.
I've replaced all the place holders in the script with actual values as noted in the scripts comments.
When I run the script I get the error: "NameError: global name 'BlobService' is not defined" at the script line "blob_service = BlobService(account_name=storage_account_name, account_key=storage_account_key)".
Since the "from azure.storage import *" line at the beginning of the script does not generate an error I'm unclear as to what the problem is and don't know how to fix it. Can anyone point me to what I should correct? |
How do I fix the 'BlobService' is not defined' error | 33,354,366 | 0 | 0 | 3,863 | 0 | python,azure | It's been a long time since I did any Python, but BlobStorage is in the azure.storage.blob namespace I believe.
So I don't think your from azure.storage import * is pulling it in.
If you've got a code sample in a book which shows otherwise it may just be out of date. | 0 | 1 | 0 | 0 | 2015-10-26T19:25:00.000 | 4 | 0 | false | 33,353,968 | 0 | 0 | 0 | 3 | I've installed the azure SDK for Python (pip install azure).
I've copied the Python code on the MS Azure Machine Learning Batch patch for the ML web-service into an Anaconda Notebook.
I've replaced all the place holders in the script with actual values as noted in the scripts comments.
When I run the script I get the error: "NameError: global name 'BlobService' is not defined" at the script line "blob_service = BlobService(account_name=storage_account_name, account_key=storage_account_key)".
Since the "from azure.storage import *" line at the beginning of the script does not generate an error I'm unclear as to what the problem is and don't know how to fix it. Can anyone point me to what I should correct? |
How do I fix the 'BlobService' is not defined' error | 33,355,053 | 1 | 0 | 3,863 | 0 | python,azure | James, I figured it out. I just changed from azure.storage import * to azure.storage.blob import * and it seems to be working. | 0 | 1 | 0 | 0 | 2015-10-26T19:25:00.000 | 4 | 0.049958 | false | 33,353,968 | 0 | 0 | 0 | 3 | I've installed the azure SDK for Python (pip install azure).
I've copied the Python code on the MS Azure Machine Learning Batch patch for the ML web-service into an Anaconda Notebook.
I've replaced all the place holders in the script with actual values as noted in the scripts comments.
When I run the script I get the error: "NameError: global name 'BlobService' is not defined" at the script line "blob_service = BlobService(account_name=storage_account_name, account_key=storage_account_key)".
Since the "from azure.storage import *" line at the beginning of the script does not generate an error I'm unclear as to what the problem is and don't know how to fix it. Can anyone point me to what I should correct? |
Can Flask use the async feature of Tornado Server? | 33,369,914 | 4 | 3 | 1,151 | 0 | python,flask,tornado,python-asyncio | No. It is possible to run Flask on Tornado's WSGIContainer, but since Flask is limited by the WSGI interface it will be unable to take advantage of Tornado's asynchronous features. gunicorn or uwsgi is generally a much better choice than Tornado's WSGIContainer unless you have a specific need to run a Flask application in the same process as native Tornado RequestHandlers. | 0 | 1 | 0 | 0 | 2015-10-27T12:58:00.000 | 1 | 0.664037 | false | 33,368,621 | 1 | 0 | 1 | 1 | We have a project use Flask+Gunicorn(sync). This works well for a long time, however, recently i came across to know that Asyncio(Python3.5) support async io in standard library.
However, even before asyncio, there were both Twisted and Tornado async servers. So I wonder whether Flask can use the async feature of Tornado, since Gunicorn supports a tornado worker class. |
Tornado log rotation for each day | 33,393,197 | 2 | 1 | 826 | 0 | python-2.7,logging,tornado,log-rotation | Tornado's logging just uses the python logging module directly; it's not a separate system. Tornado defines some command-line flags to configure logging in simple ways, but if you want anything else you can do it directly with the logging module. A timed rotation mode is being added in Tornado 4.3 (--log-rotate-mode=time), and until then you can use logging.handlers.TimedRotatingFileHandler. | 0 | 1 | 0 | 0 | 2015-10-28T06:47:00.000 | 1 | 1.2 | true | 33,384,611 | 0 | 0 | 0 | 1 | I am trying to log the requests to my tornado server to separate file, and I want to make a log rotation for each day. I want to use the tornado.log function and not the python logging.
I have defined the log path in my main class and it is logging properly. I want to know if I can do log rotation.
Does tornado's log allow us to log things based on type, like log4j?
Thanks |
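A sketch of daily rotation via the stdlib handler mentioned in the answer, since Tornado logs through the standard logging module; the file name is an example.

```python
import logging
from logging.handlers import TimedRotatingFileHandler

# Rotate at midnight, keep the last 7 days of request logs.
handler = TimedRotatingFileHandler("access.log", when="midnight", backupCount=7)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logging.getLogger("tornado.access").addHandler(handler)
```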
App Engine Returning Error 500 on Post Requests from | 33,405,768 | 0 | 0 | 150 | 0 | jquery,python,google-app-engine | If you're not seeing anything in your server logs about the error, that suggests to me that you might have a configuration error in one of your .yaml files. Are GET requests working? Are you sure that you are sending your POST requests to an endpoint that is handled by your application? Check for typos in your JavaScript and application Route definitions, and check for a catch-all request handler (e.g. /*) that might be receiving the request and failing to respond.
Sharing the contents of your app.yaml, your server-side URL routes, and a snippet of your JavaScript would really help us to help you. | 0 | 1 | 0 | 0 | 2015-10-28T19:05:00.000 | 1 | 0 | false | 33,399,526 | 0 | 0 | 1 | 1 | I am getting error 500 on every second POST request made from a browser (chrome and firefox) irrespective of whether it is a Jquery Post or Form submissions, app engine is alternating between error 500, and successful post. The error 500 are not appearing anyway in the logs.
I have tested this with over 5 different post handlers, the errors are only occurring on production not on the Local SDK server.
Note that the requests are perfectly successful when made from a python script using the requests module. |
How to change build path on PyDev | 33,405,546 | 3 | 1 | 1,264 | 0 | python,eclipse,pydev | First check whether python3.5 is auto-configured in eclipse.
Go to Window>Preferences
On the Preferences window you will find the PyDev configuration on the left pane.
PyDev>Interpreters>Python Interpreter
If python3.5 is not listed you can either add it using "Quick Auto-Config" or, if you want to add it manually, click "New", then give the interpreter a name (e.g. Py3.5) and browse to the path of the python executable (in your case inside /Library/Frameworks/Python.framework/)
Once you have configured your interpreter in PyDev then you can change the interpreter of your project.
Right click on your project>Properties
On the left pan click PyDev-Interpreter.In that select the name of the PythonInterpreter(Py3.5) which you previously configured and you can also select the grammar version. | 0 | 1 | 0 | 0 | 2015-10-29T03:21:00.000 | 1 | 1.2 | true | 33,405,411 | 1 | 0 | 0 | 1 | In eclipse, I'm used to configuring the buildpath for versions of java installed on my computer.
I recently added Python 3.5 to my computer and want to use it in place of the default 2.7 that Macs automatically include.
How can I configure my build path on PyDev, if there is such a concept to begin with, for the plugin? I've found that Python 3.5 is located at /Library/Frameworks/Python.framework/; how can I now change PyDev to use it? |
Specify Python version in Microsoft Azure WebJob? | 33,427,176 | 3 | 3 | 1,707 | 0 | python,python-2.7,python-3.x,azure,azure-webjobs | Also if you wanna run different python versions in the same site, you can always drop a run.cmd that calls the right version of python for you. They are installed in D:\Python34 and D:\Python27 | 0 | 1 | 0 | 0 | 2015-10-29T15:32:00.000 | 3 | 1.2 | true | 33,418,463 | 1 | 0 | 0 | 1 | How can I select which Python version to use for a WebJob on Microsoft Azure?
When I do print(sys.version) I get 2.7.8 (default, Jun 30 2014, 16:03:49) [MSC v.1500 32 bit (Intel)]
Where can I specify another version? I would like to use Python 3 for some jobs.
I have tried adding runtime.txt reading python-3.4 to the root path, but it had no effect. |
Rundeck :: Execute a python script | 33,427,554 | 1 | 0 | 10,632 | 0 | python,rundeck | okay, so I changed the step type to a command rather than script file and it worked.
I guess my understanding of what a script file is was off. | 0 | 1 | 0 | 1 | 2015-10-30T00:34:00.000 | 2 | 0.099668 | false | 33,427,081 | 0 | 0 | 0 | 1 | I am new to Rundeck, so I apologize if I ask a question that probably has an obvious answer I'm overlooking.
I've installed Rundeck on my Windows PC. I've got a couple of Python scripts that I want to execute via Rundeck.
The scripts run fine when I execute them manually.
I created a job in Rundeck, created a single step (script file option) to test the python script.
The job failed after six seconds. When I checked the log, it was because it was executing it line by line rather than letting python run it as an entire script.
How do I fix this? |
App Engine: Few big scripts or many small ones? | 33,456,580 | 4 | 1 | 419 | 0 | python,google-app-engine,google-cloud-datastore | There are two important considerations here.
The number of roundtrip calls from the client to the server.
One call to update a user profile will execute much faster than 5 calls to update different parts of user profile as you save on roundtrip time between the client and the server and between the server and the datastore.
Write costs.
If you update 5 properties in a user profile and save it, and then update 5 other properties and save it, etc., your writing costs will be much higher because every update incurs writing costs, including updates on all indexed properties - even those you did not change.
Instead of creating a huge user profile with 50 properties, it may be better to keep properties that rarely change (name, gender, date of birth, etc.) in one entity, and separate other properties into a different entity or entities. This way you can reduce your writing costs, but also reduce the payload (no need to move all 50 properties back and forth unless they are needed), and simplify your application logic (i.e. if a user only updates an address, there is no need to update the entire user profile). | 0 | 1 | 0 | 0 | 2015-10-31T15:44:00.000 | 3 | 0.26052 | false | 33,453,441 | 0 | 0 | 1 | 3 | I am working on a website that I want to host on App Engine. My App Engine scripts are written in Python. Now let's say you could register on my website and have a user profile. Now, the user Profile is kind of extensive and has more than 50 different ndb properties (just for the sake of an example).
If the user wants to edit his records (which he can to some extent) he may do so through my website, which sends a request to the app engine backend.
The way the profile is sectioned, often about 5 to 10 properties fall into a small subsection or container of the page. On the server side I would have multiple small scripts to handle editing small sections of the whole Profile. For example one script would update "address information", another one would update "interests" and an "about me" text. This way I end up with 5 scripts or so. The advantage is that each script is easy to maintain and only does one specific thing. But I don't know if something like this is smart performance-wise. Because if I maintain this habit for the rest of the page I'll probably end up with about 100 or more different .py scripts and a very big app.yaml, and I have no idea how efficiently they are cached on the google servers.
So tl;dr:
Are many small backend scripts to perform small tasks on my App Engine backend a good idea or should I use few scripts which can handle a whole array of different tasks? |
App Engine: Few big scripts or many small ones? | 33,457,227 | 3 | 1 | 419 | 0 | python,google-app-engine,google-cloud-datastore | A single big script would have to be loaded every time an instance for your app starts, possibly hurting the instance start time, the response time of every request starting an instance and the memory footprint of the instance. But it can handle any request immediately, no additional code needs to be loaded.
Multiple smaller scripts can be lazy-loaded, on demand, after your app is started, offering advantages maybe appealing to some apps:
the main app/module script can be kept small, which keeps the instance startup time short
the app's memory footprint can be kept smaller, handler code in lazy-loaded files is not loaded until there are requests for such handlers - interesting for rarely used handlers
the extra delay in response time for a request which requires loading the handler code is smaller as only one smaller script needs to be loaded.
Of course, the disadvantage is that some requests will have longer than usual latencies due to loading of the handler scripts: in the worst case the number of affected requests is the number of scripts per every instance lifetime.
Updating a user profile is not something done very often - I'd consider it a rarely used piece of functionality - thus placing its handlers in a separate file looks appealing. Splitting it into one handler per file I find maybe a bit extreme. It's really up to you; you know your app and your style better.
From the GAE (caching) infra perspective - the file quota is 10000 files, I wouldn't worry too much with just ~100 files. | 0 | 1 | 0 | 0 | 2015-10-31T15:44:00.000 | 3 | 1.2 | true | 33,453,441 | 0 | 0 | 1 | 3 | I am working on a website that I want to host on App Engine. My App Engine scripts are written in Python. Now let's say you could register on my website and have a user profile. Now, the user Profile is kind of extensive and has more than 50 different ndb properties (just for the sake of an example).
If the user wants to edit his records (which he can to some extent) he may do so through my website, which sends a request to the app engine backend.
The way the profile is sectioned, often about 5 to 10 properties fall into a small subsection or container of the page. On the server side I would have multiple small scripts to handle editing small sections of the whole Profile. For example one script would update "address information", another one would update "interests" and an "about me" text. This way I end up with 5 scripts or so. The advantage is that each script is easy to maintain and only does one specific thing. But I don't know if something like this is smart performance-wise. Because if I maintain this habit for the rest of the page I'll probably end up with about 100 or more different .py scripts and a very big app.yaml, and I have no idea how efficiently they are cached on the google servers.
So tl;dr:
Are many small backend scripts to perform small tasks on my App Engine backend a good idea or should I use few scripts which can handle a whole array of different tasks? |
App Engine: Few big scripts or many small ones? | 33,858,532 | 0 | 1 | 419 | 0 | python,google-app-engine,google-cloud-datastore | Adding to Dan Cornilescu’s answer, writing/saving an instance to the database re-writes to the whole instance (i.e. all its attributes) to the database. If you’re gonna use put() multiple times, you’re gonna re-write the who instance multiple times. Which, aside from being a heavy task to perform, will cost you more money. | 0 | 1 | 0 | 0 | 2015-10-31T15:44:00.000 | 3 | 0 | false | 33,453,441 | 0 | 0 | 1 | 3 | I am working on a website that I want to host on App Engine. My App Engine scripts are written in Python. Now let's say you could register on my website and have a user profile. Now, the user Profile is kind of extensive and has more than 50 different ndb properties (just for the sake of an example).
If the user wants to edit his records (which he can, to some extent), he may do so through my website, which sends a request to the App Engine backend.
The way the profile is sectioned, often about 5 to 10 properties fall into a small subsection or container of the page. On the server side I would have multiple small scripts to handle editing small sections of the whole Profile. For example, one script would update "address information", another one would update "interests" and an "about me" text. This way I end up with 5 scripts or so. The advantage is that each script is easy to maintain and only does one specific thing. But I don't know if something like this is smart performance-wise, because if I maintain this habit for the rest of the page I'll probably end up with about 100 or more different .py scripts and a very big app.yaml, and I have no idea how efficiently they are cached on the Google servers.
So tl;dr:
Are many small backend scripts to perform small tasks on my App Engine backend a good idea or should I use few scripts which can handle a whole array of different tasks? |
How to execute Python file | 33,471,765 | -2 | 1 | 176 | 0 | python,linux,django,filesystems | Save your python code file somewhere, using "Save" or "Save as" in your editor. Lets call it 'first.py' in some folder, like "pyscripts" that you make on your Desktop. Open a prompt (a Windows 'cmd' shell that is a text interface into the computer): start > run > "cmd". | 0 | 1 | 0 | 0 | 2015-11-02T06:12:00.000 | 3 | -0.132549 | false | 33,471,710 | 0 | 0 | 1 | 1 | I am learning Python and DJango and I am relatively nub with Linux. When I create DJango project I have manage.py file which I can execute like ./manage.py runserver. However when I create some Python program by hand it looks like that my Linux trying to execute it using Bash, not Python. So i need to write python foo.py instead ./foo.py. Attributes of both files manage.py and foo.py are the same (-rwx--x---). So my Q is: where is difference and how I can execute python program without specifying python? Links to any documentations are very appreciate. Thanks. |
Windows missing Python.h | 46,394,599 | 14 | 13 | 20,029 | 0 | python,windows,theano | If you are using Visual Studio in Windows, right-click on your project in the Solution Explorer and navigate as follows: Properties -> C/C++ -> General -> Additional Include Directories -> Add C:/Anaconda/include/ (or wherever your Anaconda install is located) | 1 | 1 | 0 | 0 | 2015-11-02T08:54:00.000 | 1 | 1 | false | 33,473,848 | 1 | 0 | 0 | 1 | I'm trying to run a sample Theano code that uses GPU on windows.
My python (with python-dev and Theano and all required libraries) was installed from Anaconda.
This is the error I run into:
Cannot open include file: 'Python.h': No such file or directory
My Python.h is actually in c://Anaconda/include/
I'm guessing that I should add that directory to some environmental variable, but I don't know which. |
Session bus initialization | 33,510,521 | 0 | 0 | 194 | 0 | python,linux,dbus | Just a guess: python client might be able to use X11 to discover session bus address (in addition to using DBUS_SESSION_BUS_ADDRESS environment variable). It is stored in _DBUS_SESSION_BUS_ADDRESS property of _DBUS_SESSION_BUS_SELECTION_[hostname]_[uuid] selection owner window (uuid is content of /var/lib/dbus/machine-id ) | 0 | 1 | 0 | 0 | 2015-11-02T14:39:00.000 | 1 | 0 | false | 33,480,139 | 0 | 0 | 0 | 1 | I'm trying to use D-Bus to control another application. When using Python bindings, it is possible to use D-Bus just with dbus.SessionBus().
However, other application require to first set up the environment variables DBUS_SESSION_BUS_ADDRESS and DBUS_SESSION_BUS_PID, otherwise they report that the name "was not provided by any .service files".
My question is: why is it necessary for some applications to set up the environment variables? Is there a standard procedure to initialize the session bus in some situations?
Is it possible to add the module path to the python environment variable in linux with out root access? | 33,493,376 | 1 | 0 | 192 | 0 | python,linux,svn | Like setting an environment variable in bash, if you close the session it will disappear.
So just use sys.path.append; it will add the path at runtime. | 0 | 1 | 0 | 1 | 2015-11-03T07:02:00.000 | 1 | 0.197375 | false | 33,493,178 | 0 | 0 | 0 | 1 | I am trying to utilise the mailer.py script to send mails after an SVN commit. In mailer.py the svn module is used. I think the svn module is present in /opt/CollabNet_Subversion/lib-146/svn-python/svn and I tried to append it to the sys path using sys.path.append. It gets appended once, and when I do sys.path I can see the appended path, but after that the path is removed and I am getting an import error: No Module named SVN.
Am I missing something? |
Async execution of commands in Python | 33,503,827 | 1 | 0 | 302 | 0 | python,python-3.x,asynchronous,scalability,popen | You could use os.listdir or os.walk instead of ls, and the re module instead of grep.
Wrap everything up in a function, and use e.g. the map method from a multiprocessing.Pool object to run several of those in parallel. This is a pattern that works very well.
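A minimal sketch of that pattern (the directories and regex below are placeholders standing in for the ls | grep pipeline):
import os
import re
from multiprocessing import Pool

def find_matches(directory):
    # pure-Python stand-in for "ls <directory> | grep '\.log$'"
    pattern = re.compile(r"\.log$")
    return [name for name in os.listdir(directory) if pattern.search(name)]

if __name__ == "__main__":
    dirs = ["/var/log", "/tmp"]  # placeholder inputs
    pool = Pool(processes=4)     # run several directory scans in parallel
    results = pool.map(find_matches, dirs)
    pool.close()
    pool.join()
    print(results)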
In Python3 you can also use Executors from concurrent.futures in a similar way. | 0 | 1 | 0 | 0 | 2015-11-03T15:37:00.000 | 1 | 0.197375 | false | 33,503,134 | 0 | 0 | 0 | 1 | Requirement - I want to execute a command that uses ls, grep, head etc using pipes (|). I am searching for some pattern and extracting some info which is part of the query my http server supports.
The final output should not be too big, so I'm assuming stdout should be good to use (I read about deadlock issues somewhere).
Currently, I use popen from subprocess module but I have my doubts over it.
how many simultaneous popen calls can be fired.
does the result immediately come in stdout? (for now it looks the case but how to ensure it if the commands take long time)
how to ensure that everything is async - keeping close to single thread model?
I am new to Python and links to videos/articles are also appreciated. Any other way than popen is also fine. |
py2app worked with Homebrew, now it builds w/ module missing | 34,974,981 | 0 | 0 | 361 | 0 | macos,python-2.7,homebrew,py2app | You say you've "installed too much via Homebrew" and need "Apple's Python to find everything"
After installing Python modules into Homebrew's site-packages, you can make them importable from outside.
First make a directory here (assuming 2.7):
mkdir -p ~/Library/Python/2.7/lib/python/site-packages
Then put a path file in it:
echo 'import site; site.addsitedir("'$(brew --prefix)'/lib/python2.7/site-packages")' >> ~/Library/Python/2.7/lib/python/site-packages/homebrew.pth | 0 | 1 | 0 | 0 | 2015-11-04T00:57:00.000 | 1 | 0 | false | 33,511,936 | 0 | 0 | 0 | 1 | I have a new MacBook w/ Yosemite. In an attempt to get OSC, Zeroconf, PySide and Kivy working, I installed too much via Homebrew. I've successfully (?) undone most of the damage, I think, and have installed all the Python modules so that Apple's Python finds everything... from the terminal window.
However, now my code runs from the console, correctly importing a custom pythonosc module installed with "sudo python setup.py install", but when I package it with py2app it can no longer find pythonosc. (It found it previously with Python et al installed a la Homebrew.) |
How to get jupyter notebook kernel state? | 33,672,094 | 2 | 5 | 5,351 | 0 | ipython,ipython-notebook,jupyter | There is not, this state is not stored anywhere, in part because it changes rapidly, and in part because there shouldn't be many, if any, actions that should be taken differently based on its value. It is only published via messages on the IOPub channel, which you can connect to via zeromq or websocket. If you want to know the busy/idle state of a kernel:
connect to kernel (zmq or websocket)
initial state is busy
send a kernel_info request
monitor status IOPub messages for busy/idle changes
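A rough sketch of those steps using the jupyter_client package (an assumption on my part; the connection file path is a placeholder for your kernel's):
from jupyter_client import BlockingKernelClient

kc = BlockingKernelClient()
kc.load_connection_file("/path/to/kernel-1234.json")  # placeholder path
kc.start_channels()
kc.kernel_info()  # poke the kernel so it publishes fresh status messages
while True:
    msg = kc.get_iopub_msg(timeout=10)
    if msg["msg_type"] == "status":
        state = msg["content"]["execution_state"]  # 'busy' or 'idle'
        print(state)
        if state == "idle":
            break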
If the kernel is idle, it will handle the kernel_info request promptly and you will get a status:idle message. | 0 | 1 | 0 | 0 | 2015-11-04T01:45:00.000 | 1 | 0.379949 | false | 33,512,329 | 1 | 0 | 0 | 1 | I want to be able to detect from outside a notebook server if the kernel is busy or actively running some cell.
Is there some way for me to print this state as a command line call or have it returned as the response to a http request. |
Process.stdout.readline() doesn't output correctly | 33,530,241 | 0 | 1 | 784 | 0 | python | readline returns a single line of text including the trailing new line until the stream is closed on the other end. Once all data has been read, it starts returning empty strings to let the caller know that the stream is closed and there will never be new data. Generally, while loops should break when an empty string is returned.
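A minimal sketch of such a loop (the command and file name are placeholders):
import subprocess

proc = subprocess.Popen(["cat", "somefile.txt"], stdout=subprocess.PIPE,
                        universal_newlines=True)
while True:
    line_read = proc.stdout.readline()
    if line_read == "":  # empty string means the other end closed the stream
        break
    print(line_read.rstrip("\n"))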
You should also call proc.wait() at the end so that system information about the dead process is cleaned up. | 0 | 1 | 0 | 0 | 2015-11-04T17:39:00.000 | 1 | 0 | false | 33,528,486 | 1 | 0 | 0 | 1 | Im using Python 3.5, my code is as follows:
Given a_sentence, the program hangs during the while loop because line_read is "", so it never increments nl_c and therefore never exits the loop. I'm relatively new to using subprocesses, so I'm not sure where the problem is - whether the input isn't being read in correctly or it's the output. tl;dr: the output from the subprocess is "" when it should be an arbitrary string.
Can someone point me in the right direction in getting the line_read = proc.stdout.readline() to be the line inputted above? |
Logging in an asynchronous Tornado (python) server | 33,535,454 | 4 | 4 | 1,242 | 0 | python,linux,multithreading,asynchronous,tornado | For "normal" logging (a few lines per request), I've always found logging directly to a file to be good enough. That may not be true if you're logging all the traffic to the server. The one time I've needed to do something like that I just captured the traffic externally with tcpdump instead of modifying my server.
If you want to capture it in the process, start by just writing to a file from the main thread. As always, measure things in your own environment before taking drastic action (IOLoop.set_blocking_log_threshold is useful for determining if your logging is a problem).
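If measuring does show those direct writes blocking the IOLoop, one low-effort version of the queue idea mentioned next is the standard library's QueueHandler/QueueListener (a sketch, assuming Python 3.2+; the file and logger names are placeholders):
import logging
import queue
from logging.handlers import QueueHandler, QueueListener

log_queue = queue.Queue(-1)  # unbounded queue
listener = QueueListener(log_queue, logging.FileHandler("traffic.log"))
listener.start()  # a background thread now does the actual disk I/O

app_log = logging.getLogger("myapp.traffic")
app_log.addHandler(QueueHandler(log_queue))  # the IOLoop thread only enqueues records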
If writing from the main thread blocks for too long, you can either write to a queue that is processed by another thread, or write asynchronously to a pipe or socket to another process (syslog?). | 0 | 1 | 0 | 0 | 2015-11-04T19:45:00.000 | 3 | 1.2 | true | 33,530,673 | 0 | 0 | 0 | 2 | I am working on an application in which I may potentially need to log the entire traffic reaching the server. This feature may be turned on or off, or may be used when exceptions are caught.
In any case, I am concerned about the blocking nature of disk I/O operations and their impact on the performance of the server. The business logic that is applied when a request is handled (mostly POST http requests) is asynchronous, in that every network or db call is executed asynchronously.
On the other hand, I am concerned about the delay to the thread while it is waiting for the disk IO operation to complete. The logged messages can be a few bytes to a few KBs, but in some cases a few MBs. There is no real need for the thread to pause while data is written to disk: the http request can definitely complete at that point, and there is no reason for the ioloop thread not to work on another task while data is written to disk.
So my questions are:
am I over-worried about this issue? is logging to standard output
and later redirecting it to a file "good enough"?
what is the common approach, or the one you found most practical for logging in tornado-based applications? even for simple logging and not the (extreme) case I outlined above?
is this basically an ideal case for queuing the logging messages and consuming them from a dedicated thread?
Say I do offload the logging to a different thread (like Homer Simpson's "Can't Someone Else Do It?"): if the thread that performs the disk logging is waiting for the disk io operation to complete, does the linux kernel take that as an opportunity for a context switch?
Any comments or suggestion are much appreciated,
Erez |
Logging in an asynchronous Tornado (python) server | 60,500,844 | 0 | 4 | 1,242 | 0 | python,linux,multithreading,asynchronous,tornado | " write asynchronously to a pipe or socket to another process
(syslog?"
How can it be? log_request is a normal function - not a coroutine - and all default python logging handlers are not driven by the asyncio event loop, so they are not truly asynchronous. This is imho one of the factors that make Tornado less performant than, e.g., aiohttp. Writing to memory or using udp is fast, but it is not async anyway. | 0 | 1 | 0 | 0 | 2015-11-04T19:45:00.000 | 3 | 0 | false | 33,530,673 | 0 | 0 | 0 | 2 | I am working on an application in which I may potentially need to log the entire traffic reaching the server. This feature may be turned on or off, or may be used when exceptions are caught.
In any case, I am concerned about the blocking nature of disk I/O operations and their impact on the performance of the server. The business logic that is applied when a request is handled (mostly POST http requests) is asynchronous, in that every network or db call is executed asynchronously.
On the other hand, I am concerned about the delay to the thread while it is waiting for the disk IO operation to complete. The logged messages can be a few bytes to a few KBs, but in some cases a few MBs. There is no real need for the thread to pause while data is written to disk: the http request can definitely complete at that point, and there is no reason for the ioloop thread not to work on another task while data is written to disk.
So my questions are:
am I over-worried about this issue? is logging to standard output
and later redirecting it to a file "good enough"?
what is the common approach, or the one you found most practical for logging in tornado-based applications? even for simple logging and not the (extreme) case I outlined above?
is this basically an ideal case for queuing the logging messages and consuming them from a dedicated thread?
Say I do offload the logging to a different thread (like Homer Simpson's "Can't Someone Else Do It?"): if the thread that performs the disk logging is waiting for the disk io operation to complete, does the linux kernel take that as an opportunity for a context switch?
Any comments or suggestion are much appreciated,
Erez |
Crashing MR-3020 | 36,415,642 | 0 | 0 | 61 | 0 | python,linux | It could be related to many things I had to fix myself: check that the external power supply of the router is stable, and note that the usb drive can drain more current than the port can handle. A simple fix is to add an externally powered usb hub, or to keep the same port but add capacitors (maybe 1000uF) in parallel to the power line right at the usb port where the drive is. | 0 | 1 | 0 | 1 | 2015-11-05T17:25:00.000 | 1 | 0 | false | 33,550,976 | 0 | 0 | 0 | 1 | I've got several MR-3020's that I have flashed with OpenWRT and mounted a 16GB ext4 USB drive on it. Upon boot, a daemon shell script is started which does two things:
1) It constantly looks to see if my main program is running and if not starts up the python script
2) It compares the last heartbeat timestamp generated by my main program and, if it is more than 10 minutes in the past, kills the python process. #1 is then supposed to restart it.
Once running, my main script goes into monitor mode and collects packet information. It periodically stops sniffing, connects to the internet and uploads the data to my server, saves the heartbeat timestamp and then goes back into monitor mode.
This will run for a couple hours, days, or even a few weeks but always seems to die at some point. I've been having this issue for nearly 6 months (not exclusively) I've run out of ideas. I've got files for error, info and debug level logging on pretty much every line in the python script. The amount of memory used by the python process seems to hold steady. All network calls are encapsulated in try/catch statements. The daemon writes to logread. Even with all that logging, I can't seem to track down what the issue might be. There doesn't seem to be any endless loops entered into, none of the errors (usually HTTP request when not connected to internet yet) are ever the final log record - the device just seems to freeze up randomly.
Any advice on how to further track this down? |
tornado websocket get multi message when on_message called | 33,562,632 | 1 | 0 | 247 | 0 | python,websocket,tornado | Maybe you should delimit the messages you send so it is easy to split them up - in this case you could add a \n, obviously the delimiter mustn't happen within the message. Another way would be to prefix each message with its length in also a clearly-delimited way, then the receiver reads the length then that number of bytes and parses it. | 0 | 1 | 1 | 0 | 2015-11-06T08:32:00.000 | 2 | 0.099668 | false | 33,562,499 | 0 | 0 | 0 | 1 | I use tornado websocket send/recv message, the client send json message, and server recv message and json parse, but why the server get message which is mutil json message, such as {"a":"v"}{"a":"c"}, how to process this message |
How do crossover.io, WAMP, twisted (+ klein), and django/flask/bottle interact? | 34,815,287 | 0 | 0 | 263 | 0 | python,django,twisted,wamp-protocol,crossbar | With a Web app using WAMP, you have two separate mechanisms: Serving the Web assets and the Web app then communicating with the backend (or other WAMP components).
You can use Django, Flask or any other web framework for serving the assets - or the static Web server integrated into Crossbar.io.
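For the backend side, a minimal sketch of such a WAMP component (assuming the Autobahn|Python library on Twisted and a local Crossbar.io router at its default WebSocket endpoint; the URIs are placeholders):
from autobahn.twisted.wamp import ApplicationSession, ApplicationRunner
from twisted.internet.defer import inlineCallbacks

class Backend(ApplicationSession):
    @inlineCallbacks
    def onJoin(self, details):
        def add(a, b):
            return a + b
        yield self.register(add, u"com.example.add")          # RPC endpoint
        self.publish(u"com.example.heartbeat", "backend up")  # Pub/Sub event

if __name__ == "__main__":
    runner = ApplicationRunner(u"ws://127.0.0.1:8080/ws", u"realm1")
    runner.run(Backend)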
The JavaScript you deliver as part of the assets then connects to Crossbar.io (or another WAMP router), as do the backend or other components. This is then used to e.g. send data to display to the Web frontend or to transmit user input. | 0 | 1 | 0 | 0 | 2015-11-06T23:37:00.000 | 1 | 0 | false | 33,577,252 | 0 | 0 | 1 | 1 | As I understand it (please do correct misunderstandings, obviously), the mentioned projects/technologies are as follows:-
Crossover.io - A router for WAMP. Cross-language.
WAMP - An async message passing protocol, supporting (among other things) Pub/Sub and RPC. Cross-language.
twisted - An asynchronous loop, primarily used for networking (low-level). Python specific. As far as I can tell, current crossover.io implementation in python is built on top of twisted.
klein - Built on top of twisted, emulating flask but asynchronously (and without the plugins which make flask easier to use). Python specific.
django/flask/bottle - Various stacks/solutions for serving web content. All are synchronous because they implement the WSGI. Python specific.
How do they interact? I can see, for example, how twisted could be used for network connections between various python apps, and WAMP between apps of any language (crossover.io being an option for routing).
For networking though, some form of HTTP/browser based connection is normally needed, and that's where in Python django and alternatives have historically been used. Yet I can't seem to find much in terms of interaction between them and crossover/twisted.
To be clear, there's things like crochet (and klein), but none of these seem to solve what I would assume to be a basic problem, that of saying 'I'd like to have a reactive user interface to some underlying python code'. Or another basic problem of 'I'd like to have my python code update a webpage as it's currently being viewed'.
Traditionally I guess its handled with AJAX and similar on the webpage served by django et. al., but that seems much less scalable on limited hardware than an asynchronous approach (which is totally doable in python because of twisted and tornado et. al.).
Summary
Is there a 'natural' interaction between underlying components like WAMP/twisted and django/flask/bottle? If so, how does it work. |
Is Tensorflow compatible with a Windows workflow? | 33,623,888 | 4 | 61 | 22,555 | 0 | python,windows,tensorflow | Another way to run it on Windows is to install for example Vmware (a free version if you are not using it commercially), install Ubuntu Linux into that and then install TensorFlow using the Linux instructions. That is what I have been doing, it works well. | 0 | 1 | 0 | 0 | 2015-11-09T18:51:00.000 | 7 | 0.113791 | false | 33,616,094 | 0 | 0 | 0 | 1 | I haven't seen anything about Windows compatibility -- is this on the way or currently available somewhere if I put forth some effort? (I have a Mac and an Ubuntu box but the Windows machine is the one with the discrete graphics card that I currently use with theano). |
Using Drone-Kit to connect to Live Quad Copter | 37,731,570 | 0 | 1 | 1,420 | 0 | python,dronekit-python,dronekit | I was having the same issue yesterday and fixed it by installing the latest build from github. I'm on Windows 10, but in this case it should be irrelevant. | 0 | 1 | 0 | 0 | 2015-11-10T20:21:00.000 | 2 | 0 | false | 33,638,868 | 0 | 0 | 0 | 1 | I am trying to set up a connection to a live quad copter using the Drone-Kit api from the python command line. (I am using Python 2.7. I am also using OS X Yosemite 10.10.5)
from dronekit import connect
vehicle = connect('/dev/cu.usbserial-DJ00DA30', wait_ready=True)
I get a message:
Link timeout, no heartbeat in last 5 seconds
In another 30 seconds, the command aborts. I know this is the correct device to use (cu.usbserial-DJ00DA30) because I am able to connect with it to the drone using APM Planner 2.0.
Any help please |
Not able to link dll using cmd.exe | 33,650,903 | 0 | 0 | 49 | 0 | python,dll | The error has been resolved. The problem was that the third party dll should be present in the folder where iron python executable is present.
For example, if ipy.exe is present in the folder C:\Program Files(x86)\Iron Python2.7, then the third-party dll has to be present there.
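An alternative that may avoid copying the dll next to ipy.exe (a sketch - the dll name is hypothetical, and the folder is the C:/xyz location from the question) is to extend the search path from the script itself:
import sys
sys.path.append(r"C:\xyz")  # folder that contains the third-party dll
import clr
clr.AddReferenceToFile("ThirdParty.dll")  # hypothetical dll name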
The reason it took me so long to figure this out is that with the previous version of the dll provided by the third party it was not necessary to copy the dll to the path C:\Program Files(x86)\Iron Python2.7. But for the new version I have to do that. Strange it is!! | 0 | 1 | 0 | 0 | 2015-11-11T10:52:00.000 | 1 | 1.2 | true | 33,649,021 | 1 | 0 | 0 | 1 | I have been stuck on this error for the last week. I have a dll from a third party which needs to be linked for my system to behave properly.
I have put the dll into the folder from where i am running the command prompt.
For e.g. My python script is in C:/xyz and also my dll is in the same folder. When i am running the python script (using iron python) from cmd.exe it says that the dll is not found.
I am able to run the same python script using iron python environment from visual studio and it is running fine.
What could possibly go wrong? |
How to know if a python script is running with admin permissions in windows? | 33,656,984 | 0 | 3 | 97 | 0 | python | If you install the pywin32 package, the function win32com.shell.shell.IsUserAnAdmin() can be used to see if you are a member of the administrators group. | 0 | 1 | 0 | 1 | 2015-11-11T14:41:00.000 | 2 | 0 | false | 33,652,909 | 0 | 0 | 0 | 1 | I want to check if a python script is running with admin permissions on windows, without using the ctypes module. It is important for me not to use ctypes for some reasons.
I have looked with no luck.
Thanks. |
TAB completion does not work in Jupyter Notebook but fine in iPython terminal | 67,752,486 | 1 | 69 | 91,871 | 0 | ipython-notebook,readline,jupyter,tab-completion,ubuntu-15.10 | I had the same issue under my conda virtual env in windows pc and downgrading the jedi to 0.17.2 version resolved the issue for me.
conda install jedi==0.17.2 | 0 | 1 | 0 | 0 | 2015-11-12T05:32:00.000 | 19 | 0.010526 | false | 33,665,039 | 1 | 0 | 0 | 9 | TAB completion works fine in iPython terminal, but not in Firefox browser.
So far I had tried but failed,
1). run a command $ sudo easy_install readline,
then the .egg file was written to /usr/local/lib/python2.7/dist-packages/readline-6.2.4.1-py2.7-linux-x86_64.egg,
but TAB completion still doesn't work in Jupyter Notebook.
2). also tried to locate the ipython_notebook_config.py or ipython_config.py, but failed.
I use Python 3.5 and iPython 4.0.0. and both are installed in Ubuntu 15.10 /usr/share/anaconda3/bin/ipython.
Any help would be appreciated! |
TAB completion does not work in Jupyter Notebook but fine in iPython terminal | 40,527,071 | 4 | 69 | 91,871 | 0 | ipython-notebook,readline,jupyter,tab-completion,ubuntu-15.10 | In my case, after running pip install pyreadline, I needed to re-execute all the lines in Jupyter before the completion worked. I kept wondering why it worked for IPython but not Jupyter. | 0 | 1 | 0 | 0 | 2015-11-12T05:32:00.000 | 19 | 0.04208 | false | 33,665,039 | 1 | 0 | 0 | 9 | TAB completion works fine in iPython terminal, but not in Firefox browser.
So far I had tried but failed,
1). run a command $ sudo easy_install readline,
then the .egg file was written to /usr/local/lib/python2.7/dist-packages/readline-6.2.4.1-py2.7-linux-x86_64.egg,
but TAB completion still doesn't work in Jupyter Notebook.
2). also tried to locate the ipython_notebook_config.py or ipython_config.py, but failed.
I use Python 3.5 and iPython 4.0.0. and both are installed in Ubuntu 15.10 /usr/share/anaconda3/bin/ipython.
Any help would be appreciated! |
TAB completion does not work in Jupyter Notebook but fine in iPython terminal | 61,810,343 | 0 | 69 | 91,871 | 0 | ipython-notebook,readline,jupyter,tab-completion,ubuntu-15.10 | The best fix I've found for this issue was to create a new Environment. If you are using Anaconda simply create a new environment to fix the issue. Sure you have to reinstall some libraries but its all worth it. | 0 | 1 | 0 | 0 | 2015-11-12T05:32:00.000 | 19 | 0 | false | 33,665,039 | 1 | 0 | 0 | 9 | TAB completion works fine in iPython terminal, but not in Firefox browser.
So far I had tried but failed,
1). run a command $ sudo easy_install readline,
then the .egg file was written to /usr/local/lib/python2.7/dist-packages/readline-6.2.4.1-py2.7-linux-x86_64.egg,
but TAB completion still doesn't work in Jupyter Notebook.
2). also tried to locate the ipython_notebook_config.py or ipython_config.py, but failed.
I use Python 3.5 and iPython 4.0.0. and both are installed in Ubuntu 15.10 /usr/share/anaconda3/bin/ipython.
Any help would be appreciated! |
TAB completion does not work in Jupyter Notebook but fine in iPython terminal | 63,544,019 | 0 | 69 | 91,871 | 0 | ipython-notebook,readline,jupyter,tab-completion,ubuntu-15.10 | I had the same issue when I was using miniconda,
I switched to anaconda and that seems to have solved the issue.
PS. I had tried everything I could find on the net but nothing resolved it except for switching to anaconda. | 0 | 1 | 0 | 0 | 2015-11-12T05:32:00.000 | 19 | 0 | false | 33,665,039 | 1 | 0 | 0 | 9 | TAB completion works fine in iPython terminal, but not in Firefox browser.
So far I had tried but failed,
1). run a command $ sudo easy_install readline,
then the .egg file was written to /usr/local/lib/python2.7/dist-packages/readline-6.2.4.1-py2.7-linux-x86_64.egg,
but TAB completion still doesn't work in Jupyter Notebook.
2). also tried to locate the ipython_notebook_config.py or ipython_config.py, but failed.
I use Python 3.5 and iPython 4.0.0. and both are installed in Ubuntu 15.10 /usr/share/anaconda3/bin/ipython.
Any help would be appreciated! |
TAB completion does not work in Jupyter Notebook but fine in iPython terminal | 55,069,872 | 11 | 69 | 91,871 | 0 | ipython-notebook,readline,jupyter,tab-completion,ubuntu-15.10 | you can add
%config IPCompleter.greedy=True
in the first box of your Jupyter Notebook. | 0 | 1 | 0 | 0 | 2015-11-12T05:32:00.000 | 19 | 1 | false | 33,665,039 | 1 | 0 | 0 | 9 | TAB completion works fine in iPython terminal, but not in Firefox browser.
So far I had tried but failed,
1). run a command $ sudo easy_install readline,
then the .egg file was written to /usr/local/lib/python2.7/dist-packages/readline-6.2.4.1-py2.7-linux-x86_64.egg,
but TAB completion still doesn't work in Jupyter Notebook.
2). also tried to locate the ipython_notebook_config.py or ipython_config.py, but failed.
I use Python 3.5 and iPython 4.0.0. and both are installed in Ubuntu 15.10 /usr/share/anaconda3/bin/ipython.
Any help would be appreciated! |
TAB completion does not work in Jupyter Notebook but fine in iPython terminal | 65,898,020 | 0 | 69 | 91,871 | 0 | ipython-notebook,readline,jupyter,tab-completion,ubuntu-15.10 | As the question was asked five years ago, the answer was likely different back then... but I want to add my two cents if anyone googles today: The answer by users Sagnik and more above worked for me.
One thing to add is that if running anaconda, you can do what I did: simply
start the anaconda-navigator software,
locate the jedi package in my environment,
click the little checkbox on the right of jedi
under "Mark for specific version installation", choose 0.17.2
After restarting the kernel everything worked :) | 0 | 1 | 0 | 0 | 2015-11-12T05:32:00.000 | 19 | 0 | false | 33,665,039 | 1 | 0 | 0 | 9 | TAB completion works fine in iPython terminal, but not in Firefox browser.
So far I had tried but failed,
1). run a command $ sudo easy_install readline,
then the .egg file was written to /usr/local/lib/python2.7/dist-packages/readline-6.2.4.1-py2.7-linux-x86_64.egg,
but TAB completion still doesn't work in Jupyter Notebook.
2). also tried to locate the ipython_notebook_config.py or ipython_config.py, but failed.
I use Python 3.5 and iPython 4.0.0. and both are installed in Ubuntu 15.10 /usr/share/anaconda3/bin/ipython.
Any help would be appreciated! |
TAB completion does not work in Jupyter Notebook but fine in iPython terminal | 63,314,060 | 0 | 69 | 91,871 | 0 | ipython-notebook,readline,jupyter,tab-completion,ubuntu-15.10 | Creating a new env variable helped me to solve this problem.
Use environments.txt content in .conda as path. | 0 | 1 | 0 | 0 | 2015-11-12T05:32:00.000 | 19 | 0 | false | 33,665,039 | 1 | 0 | 0 | 9 | TAB completion works fine in iPython terminal, but not in Firefox browser.
So far I had tried but failed,
1). run a command $ sudo easy_install readline,
then the .egg file was written to /usr/local/lib/python2.7/dist-packages/readline-6.2.4.1-py2.7-linux-x86_64.egg,
but TAB completion still doesn't work in Jupyter Notebook.
2). also tried to locate the ipython_notebook_config.py or ipython_config.py, but failed.
I use Python 3.5 and iPython 4.0.0. and both are installed in Ubuntu 15.10 /usr/share/anaconda3/bin/ipython.
Any help would be appreciated! |
TAB completion does not work in Jupyter Notebook but fine in iPython terminal | 65,775,964 | 6 | 69 | 91,871 | 0 | ipython-notebook,readline,jupyter,tab-completion,ubuntu-15.10 | I had a similar issue and unfortunately cannot comment on a post, so am adding an easy solution that worked for me here. I use conda and conda list showed I was running jedi-0.18.0. I used the command conda install jedi==0.17.2. This quickly fixed the problem for my conda environment.
Additional note: I usually use jupyter-lab, and was not seeing the error messages generated. By switching to jupyter notebook, I saw the following error:
[IPKernelApp] ERROR | Exception in message handler: Traceback (most
recent call last): File
"D:\apps\miniconda\envs\pydata-book\lib\site-packages\ipykernel\kernelbase.py",
line 265, in dispatch_shell
yield gen.maybe_future(handler(stream, idents, msg)) File "D:\apps\miniconda\envs\pydata-book\lib\site-packages\tornado\gen.py",
line 762, in run
value = future.result() File "D:\apps\miniconda\envs\pydata-book\lib\site-packages\tornado\gen.py",
line 234, in wrapper
yielded = ctx_run(next, result) File "D:\apps\miniconda\envs\pydata-book\lib\site-packages\ipykernel\kernelbase.py",
line 580, in complete_request
matches = yield gen.maybe_future(self.do_complete(code, cursor_pos)) File
"D:\apps\miniconda\envs\pydata-book\lib\site-packages\ipykernel\ipkernel.py",
line 356, in do_complete
return self._experimental_do_complete(code, cursor_pos) File "D:\apps\miniconda\envs\pydata-book\lib\site-packages\ipykernel\ipkernel.py",
line 381, in _experimental_do_complete
completions = list(_rectify_completions(code, raw_completions)) File
"D:\apps\miniconda\envs\pydata-book\lib\site-packages\IPython\core\completer.py",
line 484, in rectify_completions
completions = list(completions) File "D:\apps\miniconda\envs\pydata-book\lib\site-packages\IPython\core\completer.py",
line 1815, in completions
for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000): File "D:\apps\miniconda\envs\pydata-book\lib\site-packages\IPython\core\completer.py",
line 1858, in _completions
matched_text, matches, matches_origin, jedi_matches = self._complete( File
"D:\apps\miniconda\envs\pydata-book\lib\site-packages\IPython\core\completer.py",
line 2026, in _complete
completions = self._jedi_matches( File "D:\apps\miniconda\envs\pydata-book\lib\site-packages\IPython\core\completer.py",
line 1369, in jedi_matches
interpreter = jedi.Interpreter( File "D:\apps\miniconda\envs\pydata-book\lib\site-packages\jedi\api_init.py",
line 725, in init
super().init(code, environment=environment, TypeError: init() got an unexpected keyword argument 'column'
I highlighted a couple of the jedi messages, but this all reinforced it was a problem related to the version of jedi installed. | 0 | 1 | 0 | 0 | 2015-11-12T05:32:00.000 | 19 | 1 | false | 33,665,039 | 1 | 0 | 0 | 9 | TAB completion works fine in iPython terminal, but not in Firefox browser.
So far I had tried but failed,
1). run a command $ sudo easy_install readline,
then the .egg file was written to /usr/local/lib/python2.7/dist-packages/readline-6.2.4.1-py2.7-linux-x86_64.egg,
but TAB completion still doesn't work in Jupyter Notebook.
2). also tried to locate the ipython_notebook_config.py or ipython_config.py, but failed.
I use Python 3.5 and iPython 4.0.0. and both are installed in Ubuntu 15.10 /usr/share/anaconda3/bin/ipython.
Any help would be appreciated! |
TAB completion does not work in Jupyter Notebook but fine in iPython terminal | 65,650,287 | 3 | 69 | 91,871 | 0 | ipython-notebook,readline,jupyter,tab-completion,ubuntu-15.10 | The answer from Sagnik above (Dec 20, 2020) works for me on Windows 10.
pip3 install jedi==0.17.2
[Sorry I'm posting this as an answer instead of comment. I have no permission to comment yet. ] | 0 | 1 | 0 | 0 | 2015-11-12T05:32:00.000 | 19 | 0.031568 | false | 33,665,039 | 1 | 0 | 0 | 9 | TAB completion works fine in iPython terminal, but not in Firefox browser.
So far I had tried but failed,
1). run a command $ sudo easy_install readline,
then the .egg file was written to /usr/local/lib/python2.7/dist-packages/readline-6.2.4.1-py2.7-linux-x86_64.egg,
but TAB completion still doesn't work in Jupyter Notebook.
2). also tried to locate the ipython_notebook_config.py or ipython_config.py, but failed.
I use Python 3.5 and iPython 4.0.0. and both are installed in Ubuntu 15.10 /usr/share/anaconda3/bin/ipython.
Any help would be appreciated! |
Long-Running processes and hosting providers? | 33,695,205 | 0 | 0 | 45 | 0 | python,scripting,hosting,long-running-processes | I tweeted to my hosting corp and they said my long-running python data analysis script is probably okay as long as it doesn't over-use resources.
I let it rip - just a single process churning away, generating a sub-megabyte data output file, but alas, they killed the process for me during the night, with a note that the CPU usage was too much.
Just an FYI in case you have a bit of 'big data' analysis to do. I suppose I could chop it up and run it sporadically, but that would just be hiding the CPU usage. So I'll find an old machine to churn, albeit much more slowly, for me. :/
I suppose this is a task better suited to a dedicated hosting enviro? Big data apps not suited to low-cost shared/virtual hosting services? | 0 | 1 | 0 | 0 | 2015-11-12T15:12:00.000 | 1 | 0 | false | 33,674,736 | 0 | 0 | 0 | 1 | I have a python data analysis script that runs for many hours, and while it was running on my desktop, with fans blazing I realized I could just run it on a hosting account remotely in bkgnd and let it rip.
But I'm wondering - is this generally frowned upon by hosting providers? Are they assuming that all my CPU/memory usage is bursty-usage from my Apache2 instance and a flat-out process running for 12hrs will get killed by their sysop?
Or do they assume I'm paying for usage, so knock yourself out? My script and its data is self-contained and is using no network or database resources.
Any experience with that? |
How to install cryptography for python3 in Mac OS X? | 41,799,420 | 10 | 1 | 2,064 | 0 | python,macos,pip | While trying to install scrapy, I needed to install the cryptography package on Mac OS X El Capitan. As explained in the cryptography installation docs:
env LDFLAGS="-L$(brew --prefix openssl)/lib" CFLAGS="-I$(brew --prefix openssl)/include" pip install cryptography | 0 | 1 | 0 | 0 | 2015-11-13T08:52:00.000 | 1 | 1 | false | 33,688,875 | 1 | 0 | 0 | 1 | When executing pip3 install cryptography, pip3 gives an error:
fatal error: 'openssl/aes.h' file not found
#include <openssl/aes.h>
1 error generated.
error: command '/usr/bin/clang' failed with exit status 1
I checked with brew info openssl and got the answer:
Generally there are no consequences of this for you. If you build your
own software and it requires this formula, you'll need to add to your
build variables:
LDFLAGS: -L/usr/local/opt/openssl/lib
CPPFLAGS: -I/usr/local/opt/openssl/include
The problem now is: how can I tell pip to add these paths to the corresponding build variables when it uses clang to compile the cpp file?
Is the symbolic link python important? | 33,706,723 | 0 | 1 | 392 | 0 | python,python-3.x,raspberry-pi,raspbian | Yes, there are many applications and scripts that are written for python 2, and they usually come pre-installed in your linux distribution. Those applications expect the python binary to be version 2, and they will most likely break if you force them to run on python 3. | 0 | 1 | 0 | 1 | 2015-11-14T08:31:00.000 | 3 | 0 | false | 33,706,579 | 1 | 0 | 0 | 1 | I am on a Raspberry Pi, and by default the following symbolic links were created in /usr/bin:
/usr/bin/python -> /usr/bin/python2.7
/usr/bin/python2 -> /usr/bin/python2.7
/usr/bin/python3 -> /usr/bin/python3.2
Most of my work is done in Python 3, so I decided to recreate /usr/bin/python to point to /usr/bin/python3.2 instead. Does this have any negative consequences when I install packages or run pip? Are there utilities that depend on the alias python in the search path and end up doing the wrong things? |
How to install pycharm 5.0 in ubuntu | 33,708,804 | 0 | 2 | 1,483 | 0 | python,ubuntu,pycharm | Delete the old PyCharm directory and replace it with the new one. Now run pycharm.sh from the terminal to start PyCharm. Once opened, go to Tools > Create desktop entry.
Once this is done, close the current instance, and the new icon should appear in the launcher. | 0 | 1 | 0 | 0 | 2015-11-14T10:51:00.000 | 2 | 0 | false | 33,707,614 | 1 | 0 | 0 | 1 | Months ago, I installed PyCharm 4.5 in Ubuntu (by running /bin/pycharm.sh), and it worked well.
Now I found that version 5.0 has been released. I downloaded the .tar.gz file and unzipped it. Then I wanted to install it in the same way.
But the problem is, although it runs well, in the launcher the PyCharm icon becomes a big "?". Also, in the terminal, it gives a warning:
log4j:warn no appenders could be found for logger (io.netty.util.internal.logging.internalloggerfactory). log4j:warn please initialize the log4j system properly.
What does that mean? And is it the right way to install PyCharm?
Cannot uninstall python 2.7.10 from windows 10 | 40,187,053 | 1 | 1 | 1,385 | 0 | python,windows,python-2.7,windows-10 | I had the same problem. I used Advanced System Optimizer to clean the registry, repaired Python, then uninstalled it, and it worked for me. | 0 | 1 | 0 | 0 | 2015-11-14T20:01:00.000 | 2 | 0.099668 | false | 33,712,729 | 1 | 0 | 0 | 2 | For some reason I messed up my install in python a while ago and I recently tried to repair the install but I am getting an error saying: "The specified account already exists." I then decided to rerun the install package and instead of repairing it decided to delete python so I clicked uninstall and got the error message saying: "There is a problem with this Windows Installer package. A program required for this install to complete could not be run. Contact your support personnel or package vendor." The only package I installed (if it is a package) was VPython and for some reason that does not open whenever I try opening it so I assumed I messed up the download for that also. I decided to go ahead and delete everything in my C directory that had the keyword Python including the Python27 folder but it still gave me the same error.
Cannot uninstall python 2.7.10 from windows 10 | 57,189,000 | 0 | 1 | 1,385 | 0 | python,windows,python-2.7,windows-10 | I can confirm that this works. Use Ccleaner to fix the registry, then use installer to "Repair" 2.7.10 the installation, then use installer to "Remove" the installation. | 0 | 1 | 0 | 0 | 2015-11-14T20:01:00.000 | 2 | 0 | false | 33,712,729 | 1 | 0 | 0 | 2 | For some reason I messed up my install in python a while ago and I recently tried to repair the install but I am getting an error saying: "The specified account already exists." I then decided to rerun the install package and instead of repairing it decided to delete python so I clicked uninstall and got the error message saying: "There is a problem with this Windows Installer package. A program required for this install to complete could not be run. Contact your support personnel or package vendor." The only package I installed (if it is a package) was VPython and for some reason that does not open whenever I try opening it so I assumed I messed up the download for that also. I decided to go ahead and delete everything in my C directory that had the keyword Python including the Python27 folder but it still gave me the same error. |
Linux error: sh: qsub: command not found | 33,727,291 | 2 | 1 | 3,904 | 0 | python,linux,runtime-error | You specified --sge which is used to schedule jobs on Sun Grid Engine.
Since you want to run on your local machine instead of SGE, you should remove this flag. | 0 | 1 | 0 | 0 | 2015-11-16T01:01:00.000 | 1 | 1.2 | true | 33,727,053 | 0 | 0 | 0 | 1 | I am running shellfish.py in my local machine. Can someone please explain me why I am getting this error: sh: qsub: command not found |
Swig not found when installing pocketsphinx Python | 67,705,323 | 4 | 4 | 5,686 | 0 | python-2.7,windows-7,swig,pocketsphinx | You can use pipwin to install it without any issues.
Install pipwin [Run as Administrator, if any issues]
pip install pipwin
Install pocketsphinx using pipwin
pipwin install pocketsphinx
Note: Works on Windows-10(win32-py3.8) [Tested] | 0 | 1 | 0 | 0 | 2015-11-16T21:57:00.000 | 2 | 0.379949 | false | 33,745,389 | 1 | 0 | 0 | 1 | I would like to convert grapheme to phoneme. And I want to pip install pocketsphinx to do that. One of its dependency is swig, so I downloaded and placed it in a directory and go to the environment path variable and add the path that leads to swig.exe. When I cmd and type 'swig --help' it seems to be working.
But when I go 'pip install pocketsphinx, it says 'error: command 'swig.exe failed: No such file or directory'. |
How can I set limit to the duration of a job with the APScheduler? | 33,770,050 | 7 | 5 | 1,679 | 0 | python,apscheduler | APScheduler does not have a way to set the maximum run time of a job. This is mostly due to the fact that the underlying concurrent.futures package that is used for the PoolExecutors do not support such a feature. A subprocess could be killed but lacking the proper API, APScheduler would have to get a specialized executor to support this, not to mention an addition to the job API that allowed for timeouts. This is something to be considered for the next major version.
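One user-level workaround worth noting (a sketch only, not an APScheduler feature) is to move the real work into a child process, since a process - unlike the threads discussed next - can be terminated after a deadline:
import time
from multiprocessing import Process

def long_job():
    time.sleep(120)  # stand-in for the real work

def job_with_timeout(max_seconds=60):
    p = Process(target=long_job)
    p.start()
    p.join(max_seconds)
    if p.is_alive():  # past the deadline: kill the child process
        p.terminate()
        p.join()
You would then schedule job_with_timeout with APScheduler as usual.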
The question is, what do you want to do with the thread that is still running the job? Since threads cannot be forcibly terminated, the only option would be to let it run its course, but then it will still keep the thread busy. | 0 | 1 | 0 | 0 | 2015-11-17T08:40:00.000 | 1 | 1 | false | 33,752,419 | 0 | 0 | 1 | 1 | I set the scheduler with "max_instances=10", so there can be 10 jobs running concurrently. Sometimes some jobs block and just hang there. When more than 10 jobs were blocked, I got the exception "skipped: maximum number of running instances reached (10)".
Does APScheduler have a way to set a maximum duration for a job, so that if the job runs beyond the max time it will be terminated?
If it doesn't have such a way, what should I do?
"Illegal instruction: 4" when trying to start Python with virtualenv in OS X | 40,489,765 | 1 | 6 | 5,705 | 0 | macos,python-2.7,virtualenv | I've had this problem a number of times now. While I can't say for certain what the actual issue is, I believe it basically means that some file(s) in the virtualenv installment of Python have become corrupted.
I keep my virtual environment in a synced Dropbox folder, so that may be a large contributor to the issue.
Restoring the virtual environment from a back-up archive worked for me. Or simply reinstall an identical virtual environment.
First, try activating the faulty environment by cd <path/to/old_env> and source /bin/activate.
If it's successfully activated, cd to an accessible location on the drive and run pip freeze > requirements.txt to export a list of currently installed Python modules.
Delete the old environment.
Install a new virtual environment of the latest version of Python 2 that you have on the computer, via virtualenv <path/new_env>
Or, if you want to use a specific Python version, first make sure you have you have it on your drive, and then do virtualenv -p <path>. Assuming that you have downloaded the Python version with Homebrew, e.g.: virtualenv -p /usr/local/bin/python2.6 <path/new_env>
Activate the virtual environment via cd <path/new_env> and then do source /bin/activate.
Assuming that you kept a list of modules to reinstall by previously doing pip freeze > requirements.txt, cd to the folder where the text file is located and do pip install -r requirements.txt.
Otherwise, reinstall the modules with pip manually. | 0 | 1 | 0 | 0 | 2015-11-17T10:31:00.000 | 2 | 1.2 | true | 33,754,660 | 1 | 0 | 0 | 2 | I've been using Python 2.7.10 in a virtualenv environment for a couple of months.
Yesterday, activating the environment went fine, but today suddently I get this cryptic error when trying to start Python from Terminal:
Illegal instruction: 4
I have made no changes to my environment (AFAIK), so I'm having a difficult time trying to come to terms with what this error is and what caused it.
Python works fine outside of this virtualenv environment. When running via /usr/local/bin it presents no problem. |
"Illegal instruction: 4" when trying to start Python with virtualenv in OS X | 49,254,513 | 1 | 6 | 5,705 | 0 | macos,python-2.7,virtualenv | I had same problem and found solution by uninstalling psycopg2 and installing older version. As I understood my comp was not supporting some commands in new version | 0 | 1 | 0 | 0 | 2015-11-17T10:31:00.000 | 2 | 0.099668 | false | 33,754,660 | 1 | 0 | 0 | 2 | I've been using Python 2.7.10 in a virtualenv environment for a couple of months.
Yesterday, activating the environment went fine, but today suddently I get this cryptic error when trying to start Python from Terminal:
Illegal instruction: 4
I have made no changes to my environment (AFAIK), so I'm having a difficult time trying to come to terms with what this error is and what caused it.
Python works fine outside of this virtualenv environment. When running via /usr/local/bin it presents no problem. |
get sublime text 3 to close certain windows on quit | 36,759,584 | 0 | 0 | 167 | 0 | python,sublimetext3,sublimetext,sublime-text-plugin | I don't know what you mean by "specific windows" - sublime windows? sublime views? Other application windows?
You can detect window close with EventListener. There is no direct pre-quitting event, but you can use view's on_close function and check if there is any widnows in sublime.windows().
def on_close(self, view):
    if not sublime.windows():
        self.close_specific_windows()
Be aware that this function will be called for each opened view (file) in sublime. | 0 | 1 | 0 | 0 | 2015-11-18T10:01:00.000 | 1 | 0 | false | 33,776,940 | 0 | 0 | 0 | 1 | Is there any way to write a script that will tell sublime to close specific windows on quit?
I've tried setting a window's remember_open_files setting to false, and I've tried using python's atexit library to run the close window command. So far no luck |
How to prevent python wheel from expanding shebang? | 33,808,977 | 2 | 6 | 713 | 0 | python,shebang,python-wheel | I finally narrowed it down and found the problem.
Here the exact steps to reproduce the problem and the solution.
Use a valid shebang in a script thats added in setup.py. In my case #!/usr/bin/env python
Create a virtualenv with virtualenv -p /usr/bin/python2 env and activate with source env/bin/activate.
Install the package with python setup.py install to the virtualenv.
Build the wheel with python setup.py bdist_wheel.
The problem is installing the package to the virtualenv in step 3. If this is not done the shebang is not expanded. | 0 | 1 | 0 | 0 | 2015-11-18T23:45:00.000 | 3 | 1.2 | true | 33,792,696 | 1 | 0 | 0 | 2 | If I build a package with python setup.py bdist_wheel, the resulting package expands the shebangs in the scripts listed in setup.py via setup(scripts=["script/path"]) to use the absolute path to my python executable #!/home/f483/dev/storj/storjnode/env/bin/python.
This is obviously a problem as anyone using the wheel will not have that setup. It does not seem to make a difference what kind of shebang I am using. |
How to prevent python wheel from expanding shebang? | 33,792,857 | 0 | 6 | 713 | 0 | python,shebang,python-wheel | Using the generic shebang #!python seems to solve this problem.
Edit: This is incorrect! | 0 | 1 | 0 | 0 | 2015-11-18T23:45:00.000 | 3 | 0 | false | 33,792,696 | 1 | 0 | 0 | 2 | If I build a package with python setup.py bdist_wheel, the resulting package expands the shebangs in the scripts listed in setup.py via setup(scripts=["script/path"]) to use the absolute path to my python executable #!/home/f483/dev/storj/storjnode/env/bin/python.
This is obviously a problem as anyone using the wheel will not have that setup. It does not seem to make a difference what kind of shebang I am using. |
Celery restart loss scheduled tasks | 52,539,351 | 3 | 15 | 3,035 | 0 | python,django,redis,celery | You have to use RabbitMQ instead of Redis.
RabbitMQ is feature-complete, stable, durable and easy to install. It’s an excellent choice for a production environment.
Redis is also feature-complete, but is more susceptible to data loss in the event of abrupt termination or power failures.
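A minimal sketch of pointing Celery at RabbitMQ instead of Redis (the credentials and vhost below are RabbitMQ's defaults and may differ in your setup):
# in your Celery/Django settings (Celery 3.x style setting name)
BROKER_URL = 'amqp://guest:guest@localhost:5672//'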
With RabbitMQ, your problem of losing messages on restart should be gone. | 0 | 1 | 0 | 0 | 2015-11-19T10:59:00.000 | 1 | 0.53705 | false | 33,801,985 | 0 | 0 | 1 | 1 | I use Celery to schedule the sending of emails in the future. I put the task in celery with apply_async() and an ETA set sometime in the future.
When I look in flower I see that all tasks scheduled for the future has status RECEIVED.
If I restart celery, all tasks are gone. Why are they gone?
I use redis as a broker.
EDIT1
In documentation I found:
If a task is not acknowledged within the Visibility Timeout the task will be redelivered to another worker and executed.
This causes problems with ETA/countdown/retry tasks where the time to execute exceeds the visibility timeout; in fact if that happens it will be executed again, and again in a loop.
So you have to increase the visibility timeout to match the time of the longest ETA you are planning to use.
Note that Celery will redeliver messages at worker shutdown, so having a long visibility timeout will only delay the redelivery of ‘lost’ tasks in the event of a power failure or forcefully terminated workers.
Periodic tasks will not be affected by the visibility timeout, as this is a concept separate from ETA/countdown.
You can increase this timeout by configuring a transport option with the same name:
BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 43200}
The value must be an int describing the number of seconds.
But the ETA of my tasks can be measured in months or years.
EDIT 2
This is what I get when I type:
$ celery -A app inspect scheduled
{u'priority': 6, u'eta': u'2015-11-22T11:53:00-08:00', u'request': {u'args': u'(16426,)', u'time_start': None, u'name': u'core.tasks.action_due', u'delivery_info': {u'priority': 0, u'redelivered': None, u'routing_key': u'celery', u'exchange': u'celery'}, u'hostname': u'celery@app.myplanmap.com', u'acknowledged': False, u'kwargs': u'{}', u'id': u'8ac59984-f8d0-47ae-ac9e-c4e3ea9c4ac6', u'worker_pid': None}}
If you look closely, task wasn't acknowledged yet, so it should stay in redis after celery restart, right? |
Add to Python Script from PowerShell Terminal? | 33,813,074 | 0 | 1 | 81 | 0 | python,powershell | Use Add-Content ex1.py 'print "Hello"'
Use python.exe -c "<cmd>" to execute a single python command. | 0 | 1 | 0 | 0 | 2015-11-19T19:44:00.000 | 1 | 0 | false | 33,812,995 | 1 | 0 | 0 | 1 | I'm learning Python from "Learn Python the Hard Way" by Zed A. Shaw, and can't figure out:
I'm working in Powershell. How do I add a line of text to my python script (ex1.py) from the PowerShell terminal? I've tried (starting in PowerShell): Add-Content ex1.py "print "Hello"" and other variations, but I get the message:
Add-Content : A positional parameter cannot be found that accepts argument 'Hello'.
How do I run just one of the seven lines of text ex1.py currently has? Without doing some extra bash script? Again, I'm working from the Windows PowerShell terminal, so I don't think bash applies there. |
No module named 'lxml' Windows 8.1 | 33,818,809 | 4 | 6 | 10,453 | 0 | python,windows,lxml | Go to the regular command prompt and try pip install lxml. If that doesn't work, remove and reinstall python. You'll get a list of check marks during installation, make sure you check pip and try pip install lxml again afterwards.
pip stands for pip installs packages, and it can install some useful python packages for you. | 0 | 1 | 1 | 0 | 2015-11-20T04:01:00.000 | 3 | 0.26052 | false | 33,818,770 | 0 | 0 | 0 | 1 | Everyone's code online refers to sudo apt-get #whatever# but windows doesn't have that feature. I heard of something called Powershell but I opened it and have no idea what it is.
I just want to get a simple environment going and lxml so I could scrape from websites. |
Google App Engine File Processing | 33,830,880 | 0 | 0 | 62 | 0 | python,file,google-app-engine | Your best bet could be to upload to the Blobstore or Cloud Storage, then use the Task Queue, which has no time limits, to process the file. | 0 | 1 | 0 | 0 | 2015-11-20T15:46:00.000 | 1 | 0 | false | 33,830,715 | 0 | 0 | 1 | 1 | I am trying to create a process that will upload a file to GAE to interpret its contents (most are PDFs, so we would use something like PDF Miner), and then store it in Google Cloud Storage.
To my understanding, the problem is that file uploads are limited to both 60 seconds for it to execute, as well as a size limit of I think 10MB. Does anyone have any ideas of how to address this issue? |
AWS worker daemon locks multiple messages even before the first message is processed | 33,846,596 | 1 | 1 | 179 | 0 | python,amazon-web-services,flask,amazon-sqs,worker | Set the HTTP Connection setting under Worker Configuration to 1. This should prevent each server from receiving more than 1 message at a time.
You might want to look into changing your autoscaling configuration to monitor your SQS queue depth or some other SQS metric instead of worker CPU utilization. | 0 | 1 | 0 | 1 | 2015-11-21T17:29:00.000 | 1 | 1.2 | true | 33,846,425 | 0 | 0 | 1 | 1 | I have deployed a python-flask web app on the worker tier of AWS. I send some data into the associated SQS queue and the daemon forwards the request data in a POST request to my web app. The web app takes anywhere between 5 mins to 6 hours to process the request depending upon the size of posted data. I have also configured the worker app into an auto scaling group to scale based on CPU utilization metrics. When I send 2 messages to the queue in quick succession, both messages start showing up as in-flight. I was hoping that the daemon will forward the first message to the web app and then wait for it to be processed before pulling the second message out. In the meantime, auto scaling will spin up another instance (which it is but since the second message is also in-flight, it is not able to pull that message) and the new instance will pull and process the second message. Is there a way of achieving this? |
Sharing install files between virtualenv instances | 33,852,065 | 0 | 1 | 39 | 0 | python,pip,virtualenv | The whole point of virtualenv is to isolate and compartmentalize dependencies. What you are describing directly contradicts its use case. You could go into each individual project and modify the environment variables, but that's a hackish solution. | 0 | 1 | 0 | 0 | 2015-11-22T05:47:00.000 | 1 | 0 | false | 33,852,048 | 1 | 0 | 1 | 1 | I have 2-3 dozen Python projects on my local hard drive, and each one has its own virtualenv. The problem is that this adds up to a lot of space, and there are a lot of duplicated files, since most of my projects have similar dependencies.
Is there a way to configure virtualenv or pip to install packages into a common directory, with each package namespaced by the package version and Python version the same way Wheels are?
For example:
~/.cache/pip/common-install/django_celery-3.1.16-py2-none-any/django_celery/
~/.cache/pip/common-install/django_celery-3.1.17-py2-none-any/django_celery/
Then any virtualenv that needs django-celery can just symlink to the version it needs? |
Where is the Anaconda launcher in Windows 8? | 41,717,467 | 1 | 1 | 10,129 | 0 | python,anaconda | The previous answer suggesting upgrading to Anaconda 4.0+ is probably sensible. However, if that is not a desirable option, the steps below will let you use the Anaconda Launcher on previous versions.
Anaconda is installed under 'C:\Users\%USERNAME%\Anaconda'.
The Anaconda Launcher can be opened by clicking on the Start menu, typing Run (or hitting Windows+r), entering C:\Users\%USERNAME%\Anaconda\Scripts\launcher.bat and clicking OK. Alternatively, you can navigate to 'C:\Users\%USERNAME%\Anaconda\Scripts' in a command prompt and enter launcher.bat.
You stated in a comment on another answer that you were "actually looking to open spyder". You can do this with Windows+r and C:\Users\%USERNAME%\Anaconda\Scripts\spyder.exe, or by navigating to 'C:\Users\%USERNAME%\Anaconda\Scripts' in a command prompt and typing python spyder-script.py.
If you're only ever after spyder, a taskbar shortcut with a pretty icon is always nice. To do this, go to 'C:\Users\%USERNAME%\Anaconda\Scripts' in an Explorer window and drag spyder.exe to the taskbar; then, if you right-click this and go to Properties and then Change Icon..., you can add the icon from 'C:\Users\%USERNAME%\Anaconda\Scripts\spyder.ico'.
Hope this helps. | 0 | 1 | 0 | 0 | 2015-11-23T01:25:00.000 | 4 | 0.049958 | false | 33,862,418 | 1 | 0 | 0 | 1 | I am a complete Python newbie here who is just making the switch from MATLAB. I installed Anaconda 2.4 with Python 2.7 on a 64-bit Windows 8.1 system. But I cannot even start the program, as I cannot find any Anaconda launcher either on the Start menu or the desktop. Any help please?
passenger stop kills orphan process | 33,891,874 | 0 | 0 | 361 | 0 | python,ruby-on-rails,linux | I have solved my problem by
restarting my app instead of restarting Passenger.
The restart-app command: passenger-config restart-app [path of my app] | 0 | 1 | 0 | 0 | 2015-11-23T07:02:00.000 | 1 | 0 | false | 33,865,344 | 0 | 0 | 1 | 1 | My app uses Rails and Python.
In Rails I create a new thread and run a shell command which executes a Python script.
This Python script (the parent process) exits quickly, but before it exits it forks a child process, and the child becomes an orphan process after the parent exits.
Situation 1:
If I start the app with Rails: rails s -d
After the Python parent process has exited and the Python child process is still running, I run:
kill $(cat ./tmp/pids/server.pid)
Then the child process is fine and is not killed. This is what I want.
Situation 2:
If I start the app with Passenger:
passenger start -e production -d
After the Python parent process has exited and the Python child process is still running, I run:
passenger stop;
Then the child process is killed.
So I want to know: in situation 2, how can I keep the orphaned child process from being killed? Has anyone experienced this or know how to solve it?
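As background, a generic sketch of the double-fork/setsid daemonization pattern in Python, which puts the child in its own session and process group so it is not signalled along with the app's process group; whether this is enough to survive passenger stop in this particular setup is an assumption, not something confirmed in the thread:

    import os
    import sys

    def daemonize():
        # First fork: let the original parent exit immediately.
        if os.fork() > 0:
            sys.exit(0)
        # New session: detach from the parent's session and process group.
        os.setsid()
        # Second fork: make sure the process can never re-acquire a terminal.
        if os.fork() > 0:
            sys.exit(0)
        os.chdir('/')

    if __name__ == '__main__':
        daemonize()
        # ... long-running child work continues here, now fully detached ...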
Pip Wheel Package Installation Fail | 33,873,874 | 0 | 0 | 1,089 | 0 | python,azure,pip,python-wheel | Have you tried uninstalling and reinstalling?
I tried pip wheel azure-mgmt and that installed version 0.20.1 for me.
The directory for mine is /Users/me/wheelhouse, so you could look there. I found that path in the initial log of the build. | 0 | 1 | 0 | 0 | 2015-11-23T09:52:00.000 | 3 | 0 | false | 33,867,992 | 1 | 0 | 0 | 1 | I try to run pip wheel azure-mgmt=0.20.1, but whenever I run it I get the following pip wheel error, which is very clear:
error: [Error 183] Cannot create a file when that file already exists: 'build\\bdist.win32\\wheel\\azure_mgmt-0.20.0.data\\..'
So my question is: where or how can I find that path? I want to delete the existing file. I have searched my local computer and searched Google for the default path, but still haven't found a solution.
Also, is it possible to tell pip wheel to output a full log? As you can see, the full error path is not displayed. I'm using virtualenv.
GDB and Python: How to disable Y/N requirement about running python as root | 33,877,733 | 1 | 0 | 723 | 0 | python,gdb | Run gdb with the --batch command line option. This will disable all confirmation requests. You can also run the command "set confirm off" | 0 | 1 | 0 | 0 | 2015-11-23T18:02:00.000 | 1 | 0.197375 | false | 33,877,549 | 0 | 0 | 0 | 1 | I'm running GDB with a bash (.sh) script that needs sudo/superuser access, and it works well, but there is a problem: every time I run gdb with that script, before gdb loads the executable it asks about running Python as superuser. I want to remove this requirement/question.
I want to remove this:
WARNING: Phyton has been executed as super user! It is recommended to
run as a normal user. Continue? (y/N)
I'm using gdb 7.9 on Ubuntu Server 12.x, which I compiled myself.
PS: On another Ubuntu server (version 15), gdb (version 7.9) does not ask this question with the same script and access.
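To illustrate the answer's suggestion, a small sketch of driving gdb non-interactively from Python; the binary name and commands file are placeholders, while --batch, -ex and -x are standard gdb options:

    import subprocess

    # --batch suppresses interactive confirmations and exits when done;
    # -ex runs a single gdb command, -x runs a file of gdb commands.
    subprocess.check_call([
        'gdb', '--batch',
        '-ex', 'set confirm off',
        '-x', 'commands.gdb',
        './myprogram',
    ])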
create a hyperlink that executes a bash/python script on the user machine | 33,879,045 | 0 | 0 | 41 | 0 | python,html,bash,jinja2 | No. That is not possible, nor desirable, due to the security implications. | 0 | 1 | 0 | 1 | 2015-11-23T19:05:00.000 | 1 | 0 | false | 33,878,648 | 0 | 0 | 0 | 1 | Is it possible to create a hyperlink/button that calls a bash/python script on the user's local machine? I did search the topic, but there is a lot of discussion about opening a port to a server (even a local port); I don't want to open a port, I want to execute everything locally. Is this even possible?
Thanks |
Cannot run a Python file from cmd line | 33,889,708 | 0 | 1 | 162 | 0 | python,python-2.7 | I assume you are running the script with the command python file_name.py.
You can prevent the cmd window from closing by getting a character from the user.
Use the raw_input() function to get a character (which could simply be Enter). | 0 | 1 | 0 | 0 | 2015-11-24T09:08:00.000 | 3 | 0 | false | 33,889,476 | 1 | 0 | 0 | 2 | I have installed Python and written a program in Notepad++.
Now when I try to type the Python file name in the Run window, all that I see is a black window opening for a second and then closing.
I can't run the file at all; how can I run this file?
I should also mention that I tried being in the same directory as the Python file, but with no success.
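A tiny sketch of the pattern this answer describes, with a placeholder body standing in for the asker's actual program (Python 2, matching the question's tags):

    # ex_keep_open.py -- the real program's work goes here
    print "Hello"

    # Keep the console window open until the user presses Enter, so the
    # output can be read when the script is launched from the Run window.
    raw_input("Press Enter to exit...")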
Cannot run a Python file from cmd line | 33,890,132 | 0 | 1 | 162 | 0 | python,python-2.7 | It sounds like you are entering your script name directly into the Windows Run prompt (possibly Windows XP?). This will launch Python in a black command prompt window and run your script. As soon as the script finishes, the command prompt window will automatically close.
You have a number of alternatives:
First manually start a command prompt by just typing cmd in the Run window. From here you can change to the directory you want and run your Python script.
Create a Windows shortcut on the desktop. Right-click on the desktop and select New > Shortcut. Here you can enter your script name as python -i script.py, and a name for the shortcut. After finishing, right-click on your new shortcut on the desktop and select Properties; you can now specify the folder you want to run the script from. When the script completes, the Python shell will remain open until you exit it.
As you are using Notepad++, you could consider installing the Notepad++ NppExec plugin which would let you run your script inside Notepad++. The output would then be displayed in a console output window inside Notepad++.
As mentioned, you can add something to your script to stop it from completing (and automatically closing the window): adding raw_input() as the last line of your script will cause the window to stay open until Enter is pressed.
Now when I try to type the Python file name in the Run window, all that I see is a black window opening for a second and then closing.
I can't run the file at all; how can I run this file?
I should also mention that I tried being in the same directory as the Python file, but with no success.
How do you stop a python SimpleHTTPServer in Terminal? | 33,910,508 | 7 | 2 | 10,625 | 0 | python,simplehttpserver | CTRL + C is usually the right way to kill the process and leave your terminal open. | 0 | 1 | 0 | 1 | 2015-11-25T07:11:00.000 | 3 | 1.2 | true | 33,910,489 | 0 | 0 | 0 | 2 | I've started a SimpleHTTPServer via the command python -m SimpleHTTPServer 9001.
I'd like to stop it without having to force quit Terminal. What are the keystrokes required to stop it?
How do you stop a python SimpleHTTPServer in Terminal? | 33,910,517 | 3 | 2 | 10,625 | 0 | python,simplehttpserver | Use CTRL+C.
(This is filler text because an answer must be at least 30 characters.) | 0 | 1 | 0 | 1 | 2015-11-25T07:11:00.000 | 3 | 0.197375 | false | 33,910,489 | 0 | 0 | 0 | 2 | I've started a SimpleHTTPServer via the command python -m SimpleHTTPServer 9001.
I'd like to stop it without having to force quit Terminal. What are the keystrokes required to stop it?
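For context, a hedged sketch of running the same Python 2 server programmatically; Ctrl+C raises KeyboardInterrupt, which is the clean way to stop it without closing the Terminal (the port mirrors the question):

    import SimpleHTTPServer
    import SocketServer

    PORT = 9001
    httpd = SocketServer.TCPServer(("", PORT), SimpleHTTPServer.SimpleHTTPRequestHandler)
    try:
        print "Serving on port", PORT
        httpd.serve_forever()
    except KeyboardInterrupt:
        # Ctrl+C lands here; close the listening socket and return to the shell.
        httpd.server_close()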
Cross platform movie creation in python | 34,064,048 | 1 | 1 | 35 | 0 | python,movie | OpenCV can solve this cross-platform and has Python bindings. | 0 | 1 | 0 | 0 | 2015-11-26T10:14:00.000 | 1 | 1.2 | true | 33,935,892 | 0 | 0 | 0 | 1 | I am developing a small application that generates a stochastic animation, and I want the option to save the animation as a movie. An obvious solution on Linux would be to save the images and subprocess a call to ffmpeg or the like, but the program should preferably run on Windows as well, without any external dependencies or installations needed (I pack the program with PyInstaller for Windows). Is there a solution for this, or will I have to depend on different external applications depending on the platform?
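A minimal sketch of writing frames to a movie with OpenCV's Python bindings, assuming an OpenCV 3-style cv2.VideoWriter_fourcc (older 2.x builds spell it cv2.cv.CV_FOURCC); codec availability still varies by platform, so the XVID choice is an assumption worth testing:

    import numpy as np
    import cv2

    width, height, fps = 640, 480, 25
    fourcc = cv2.VideoWriter_fourcc(*'XVID')
    writer = cv2.VideoWriter('animation.avi', fourcc, fps, (width, height))

    for i in range(100):
        # Placeholder frame: random noise standing in for the real animation.
        frame = (np.random.rand(height, width, 3) * 255).astype(np.uint8)
        writer.write(frame)

    writer.release()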