Dataset columns (name: dtype, observed range or string length):
Q_Id: int64, 2.93k to 49.7M
CreationDate: string, length 23 to 23
Users Score: int64, -10 to 437
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
DISCREPANCY: int64, 0 to 1
Tags: string, length 6 to 90
ERRORS: int64, 0 to 1
A_Id: int64, 2.98k to 72.5M
API_CHANGE: int64, 0 to 1
AnswerCount: int64, 1 to 42
REVIEW: int64, 0 to 1
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, length 15 to 5.1k
Available Count: int64, 1 to 17
Q_Score: int64, 0 to 3.67k
Data Science and Machine Learning: int64, 0 to 1
DOCUMENTATION: int64, 0 to 1
Question: string, length 25 to 6.53k
Title: string, length 11 to 148
CONCEPTUAL: int64, 0 to 1
Score: float64, -1 to 1.2
API_USAGE: int64, 1 to 1
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 15 to 3.72M
24,135,896
2014-06-10T08:06:00.000
0
1
0
0
0
python,selenium-webdriver,selenium-firefoxdriver
0
25,492,157
0
1
0
false
0
0
Selenium is not the right tool for performance testing. JMeter is a great tool for this, and with it you would be able to see the response for each request.
1
3
0
0
Is it possible to calculate the performance testing through selenium with python? If it is possible, how should I do it?
Is it possible to calculate the performance testing through selenium with python?
1
0
1
0
1
304
24,159,713
2014-06-11T09:47:00.000
-1
0
1
0
0
python,python-2.7,dictionary,set,python-internals
0
24,159,900
0
2
0
false
0
0
These two use the same data structure (a hash table) in the backend. For example, in a set you cannot store duplicate values, but in a dict the same value can be stored multiple times under different keys, and you can get set-like behaviour from a dict by ignoring the values.
1
4
0
0
Can anybody tell me how the internal implementation of set and dict is different in python? Do they use the same data structure in the background? ++ In theory, one can use dict to achieve set functionality.
difference between python set and dict "internally"
0
-0.099668
1
0
0
1,691
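The set/dict answer above can be illustrated with a short snippet; using dict.fromkeys to emulate a set is just one way to make the point.

```python
# Both set and dict are hash-table based: keys/elements must be unique and
# hashable, while dict *values* may repeat freely.
d = {"a": 1, "b": 1, "c": 2}          # duplicate values are fine
s = {"a", "b", "a"}                   # duplicate elements collapse

print(len(d))                         # 3 (three distinct keys)
print(s)                              # {'a', 'b'}

# Emulating a set with a dict by ignoring the values:
pseudo_set = dict.fromkeys(["a", "b", "a"])
print("a" in pseudo_set)              # True: same O(1) membership test as a set
print(set(pseudo_set) == s)           # True
```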
24,169,539
2014-06-11T17:59:00.000
1
1
0
1
0
python,database,web-applications,ipc
0
24,171,852
0
2
1
false
0
0
Python has had, since its early stages, a very comfortable PyZMQ binding for ZeroMQ. MATLAB can have the same: a direct ZeroMQ at work for your many-to-many communications. Let me move to a slightly broader view, with a few KEY PRINCIPAL POINTS that are not so common in other software-engineering "products" & "stacks" we meet today around us: [1] ZeroMQ is first of all a very powerful concept rather than a piece of code or a DIY kit. [2] ZeroMQ's biggest plus for any professional-grade project sits in using the genuine Scalable Formal Communication Patterns end-to-end, not in the ability to code pieces or to "trick/mod" the published internals. [3] The ZeroMQ team has done a terrific job that saves users from re-inventing wheels ("inside") and lets them stay on the most productive side by re-using the knowledge (elaborated, polished & tested by the ZeroMQ gurus, supporters & team members) from behind the ZMQ abstraction horizon. Having said these few principles, my recommendation would be to spend some time on the concepts in the published book from Pieter Hintjens on ZeroMQ (also available as a PDF). That is a worthwhile place to start from, to get the bigger picture. Then it would be a question of literally a few SLOCs to put these, some of the world's most powerful communication patterns, to work (and believe me, this sounds bold only at first sight, as there are not many real alternatives to compare ZeroMQ with ... well, ZeroMQ co-architect Martin Sustrik's nanomsg is one case, to mention at least one, if you need to go even higher in speed / lower in latency, but the above key principal points hold & remain the same even there ...). Having used a ZeroMQ-orchestrated Python & MQL4 & AI/ML system in a FOREX high-speed trading infrastructure is just a small example, where microseconds matter and nanoseconds make a difference in the queue ... Presented in the hope that your interest in the ZeroMQ library will only grow, and that you will benefit as much as many other users of this brilliant piece of art have, from whichever of the PUB/SUB, PAIR/PAIR, REQ/REP formal patterns best matches the communication needs of your MATLAB / Python / * heterogeneous multi-party / multi-host project.
1
0
0
0
At a high level, what I need to do is have a python script that does a few things based on the commands it receives from various applications. At this stage, it's not clear what the application may be. It could be another python program, a MATLAB application, or a LAMP configuration. The commands will be sent rarely, something like a few times every hour. The problem is - What is the best way for my python script to receive these commands, and indicate to these applications that it has received them? Right now, what I'm trying to do is have a simple .txt file. The application(s) will write commands to the file. The python script will read it, do its thing, and remove the command from the file. I didn't like this approach for 2 reasons- 1) What happens if the file is being written/read by python and a new command is sent by an application? 2) This is a complicated approach which does not lead to anything robust and significant.
How do I communicate and share data between python and other applications?
0
0.099668
1
0
0
2,813
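The ZeroMQ answer above stays at the conceptual level; below is a minimal, hedged REQ/REP sketch of the kind of inter-process command channel the question asks about. The port number, message format, and the choice of REQ/REP (rather than PUB/SUB or PAIR) are illustrative assumptions, not part of the original answer.

```python
# receiver.py -- the Python script that waits for commands (requires pyzmq)
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.REP)            # REP: the replying half of a REQ/REP pair
sock.bind("tcp://*:5555")             # 5555 is an arbitrary example port

while True:
    command = sock.recv_string()      # blocks until some client sends a command
    # ... do whatever the command asks for here ...
    sock.send_string("ack:" + command)
```

```python
# sender.py -- any client (another Python program, or MATLAB/LAMP via their ZeroMQ bindings)
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.REQ)
sock.connect("tcp://localhost:5555")
sock.send_string("reload-config")
print(sock.recv_string())             # -> "ack:reload-config"
```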
24,194,217
2014-06-12T21:28:00.000
0
0
0
1
1
python,google-app-engine,file-transfer,google-compute-engine
0
24,194,583
0
4
0
false
1
0
The most straightforward approach seems to be: a user submits a form on the App Engine instance; the App Engine instance makes a POST call to a handler on the GCE instance with the new data; the GCE instance updates its own file and processes it.
2
0
0
0
I am working on a project that involves using an Google App Engine (GAE) server to control one or more Google Compute Engine (GCE) instances. The rest of the project is working well, but I am having a problem with one specific aspect: file management. I want my GAE to edit a file on my GCE instance, and after days of research I have come up blank on how to do that. The most straightforward example of this is: Step 1) User enters text into a GAE form. Step 2) User clicks a button to indicate they would like to "submit" the text to GCE Step 3) GAE replaces the contents of a particular (hard-coded path) text file on the GCE with the user's new content. Step 4) (bonus step) GCE notices that the file has changed (either by detecting a change or by way of GAE alerting it when the new content is pushed) and runs a script to process the new file. I understand that this is easy to do using SCP or other terminal commands. I have already done that, and that works fine. What I need is a way for GAE to send that content directly, without my intervention. I have full access to all instances of GAE and GCE involved in this project, and can set up whatever code is needed on either of the platforms. Thank you in advance for your help!
Using Google App Engine to update files on Google Compute Engine
1
0
1
0
0
273
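A hedged sketch of the flow described in the answer above (form submit on App Engine, POST to a handler on the GCE instance, GCE writes and processes the file). The hostname, path, file location, and the use of requests/Flask are assumptions for illustration only.

```python
# On the App Engine side -- a sketch; the standard requests library is assumed
# (older GAE runtimes may require urlfetch instead). The GCE hostname is made up.
import requests

def push_to_gce(user_text):
    resp = requests.post("http://my-gce-instance.example.com:8080/update",
                         data={"content": user_text}, timeout=10)
    resp.raise_for_status()
```

```python
# On the GCE side -- a minimal Flask handler that overwrites the hard-coded file
# and then processes it; Flask is an assumption, any web framework would do.
from flask import Flask, request

app = Flask(__name__)

@app.route("/update", methods=["POST"])
def update():
    with open("/srv/data/user_content.txt", "w") as f:
        f.write(request.form["content"])
    # ... kick off whatever processing the file change should trigger ...
    return "ok"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```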
24,194,217
2014-06-12T21:28:00.000
0
0
0
1
1
python,google-app-engine,file-transfer,google-compute-engine
0
24,215,374
0
4
0
false
1
0
You can set an action URL in your form to point to the GCE instance (it can be load-balanced if you have more than one). Then all data will be uploaded directly to the GCE instance, and you don't have to worry about transferring data from your App Engine instance to GCE instance.
2
0
0
0
I am working on a project that involves using an Google App Engine (GAE) server to control one or more Google Compute Engine (GCE) instances. The rest of the project is working well, but I am having a problem with one specific aspect: file management. I want my GAE to edit a file on my GCE instance, and after days of research I have come up blank on how to do that. The most straightforward example of this is: Step 1) User enters text into a GAE form. Step 2) User clicks a button to indicate they would like to "submit" the text to GCE Step 3) GAE replaces the contents of a particular (hard-coded path) text file on the GCE with the user's new content. Step 4) (bonus step) GCE notices that the file has changed (either by detecting a change or by way of GAE alerting it when the new content is pushed) and runs a script to process the new file. I understand that this is easy to do using SCP or other terminal commands. I have already done that, and that works fine. What I need is a way for GAE to send that content directly, without my intervention. I have full access to all instances of GAE and GCE involved in this project, and can set up whatever code is needed on either of the platforms. Thank you in advance for your help!
Using Google App Engine to update files on Google Compute Engine
1
0
1
0
0
273
24,218,114
2014-06-14T08:34:00.000
0
0
0
0
1
python,sockets,tcp,timer
0
24,218,375
0
1
0
false
0
0
How about using a token that is switched whenever the client connects? Put it in a while loop, and if the token is ever the same un-switched value twice in a row, kill the loop and stop listen().
1
0
0
0
I'll explain my problem in more detail. I've coded a simple Python server that listens for web client connections. The server is running, but I must add a function and I don't know how to resolve this: I have to set up a timer, and if a client doesn't connect every N seconds, I have to log it. I already looked into setting up a timeout, but in the socket library the timeout doesn't do what I want. I tried to set up a timer with timestamps and compare values, but the socket.listen() call doesn't stop until a client connects, and I want to stop listening if the time is exceeded.
Python set up a timer for client connection
0
0
1
0
1
458
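The answer above sketches a token-switching idea; a simpler alternative (not what the answer describes, just a common pattern) is to put a timeout on the listening socket itself, so accept() raises socket.timeout when no client shows up within N seconds. The port and interval below are arbitrary.

```python
# Alternative sketch: give the listening socket a timeout so accept() stops
# blocking after N seconds and the absence of a client can be logged.
import logging
import socket

N = 30                                   # seconds to wait for a client
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("", 8000))                  # port 8000 is an arbitrary example
server.listen(5)
server.settimeout(N)

while True:
    try:
        conn, addr = server.accept()     # blocks for at most N seconds
    except socket.timeout:
        logging.warning("no client connected within %s seconds", N)
        continue                         # or break, depending on the requirement
    # ... handle the connected client here ...
    conn.close()
```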
24,224,539
2014-06-14T21:38:00.000
1
0
0
1
0
python,ios,tornado
0
24,232,201
0
1
0
true
1
0
You can send your response with either self.write() or self.finish() (the main difference is that with write() you can assemble your response in several pieces, while finish() can only be called once. You also have to call finish() once if you're using asynchronous functions that are not coroutines, but in most cases it is done automatically). As for what to send, it doesn't really matter if it's a non-browser application that only looks at the status code, but I generally send an empty json dictionary for cases like this so there is well-defined space for future expansion.
1
1
0
0
So far I have a pretty basic server (I haven't built in any security features yet, like cookie authentication). What I've got so far is an iOS app where you enter a username and password and those arguments are plugged into a URL and passed to a server. The server checks to see if the username is in the database and then sends a confirmation to the app. Pretty basic but what I can't figure out is what the confirmation should look like? The server is a Python Tornado server with a MySQL dbms.. What I'm unsure of is what Tornado should/can send in response? Do I use self.write or self.response or self.render? I don't think it's self.render because I'm not rendering an HTML file, I'm just sending the native iOS app a confirmation response which, once received by the app, will prompt it to load the next View Controller. After a lot of googling I can't seem to find the answer (probably because I don't know how to word the question correctly). I'm new to servers so I appreciate your patience.
What's the proper Tornado response for a log in success?
0
1.2
1
0
0
246
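A minimal sketch of the suggestion above: reply from the handler with self.write() and use the status code plus an empty JSON dictionary as the confirmation. The URL, handler name, and check_credentials() helper are made up for illustration.

```python
import tornado.ioloop
import tornado.web

class LoginHandler(tornado.web.RequestHandler):
    def get(self):
        username = self.get_argument("username")
        password = self.get_argument("password")
        if check_credentials(username, password):   # hypothetical helper
            self.write({})                           # Tornado serializes dicts to JSON
        else:
            self.set_status(401)
            self.write({"error": "invalid credentials"})

def check_credentials(username, password):
    return False  # placeholder; the real check would query MySQL

app = tornado.web.Application([(r"/login", LoginHandler)])

if __name__ == "__main__":
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()
```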
24,237,335
2014-06-16T05:54:00.000
1
0
1
0
0
c#,python,.net,share,shared-objects
0
24,281,523
0
2
0
false
0
1
Since Python runs as another process, there is no way for Python to access a C# object directly, because of process isolation. Some way of marshalling and un-marshalling data has to be included to communicate between the processes. There are many ways to communicate between processes: shared memory, files, TCP, and so on.
2
3
0
0
I have created a windows app which runs a python script. I'm able to capture the output of the script in textbox. Now i need to pass a shared object to python script as an argument from my app. what type of shared object should i create so that python script can accept it and run it or in simple words how do i create shared object which can be used by python script. thanks
shared object in C# to be used in python script
0
0.099668
1
0
0
912
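Since the C# app already launches the script and captures its output, one concrete way to apply the marshalling advice above is to serialize the object to JSON and pipe it through stdin/stdout; this particular choice (and the field names) is an assumption, not something the answer prescribes.

```python
# Python side of a simple stdin/stdout JSON exchange -- a sketch; the field
# names are made up. The C# side would serialize the object with a JSON
# library and write it to the launched process's standard input.
import json
import sys

payload = json.load(sys.stdin)          # e.g. {"name": "...", "values": [...]}
result = {"count": len(payload.get("values", []))}
json.dump(result, sys.stdout)           # the C# app reads this from stdout
```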
24,237,335
2014-06-16T05:54:00.000
1
0
1
0
0
c#,python,.net,share,shared-objects
0
32,651,105
0
2
0
false
0
1
Section 5.4, "Extending Embedded Python", of the Python documentation will help you access the application object. In that case both the application and Python run in a single process.
2
3
0
0
I have created a windows app which runs a python script. I'm able to capture the output of the script in textbox. Now i need to pass a shared object to python script as an argument from my app. what type of shared object should i create so that python script can accept it and run it or in simple words how do i create shared object which can be used by python script. thanks
shared object in C# to be used in python script
0
0.099668
1
0
0
912
24,249,796
2014-06-16T18:20:00.000
0
0
1
0
0
python,exception,exception-handling
1
24,249,840
0
4
0
false
0
0
how does the caller of something know if that something would throw an exception or not? By reading the documentation for that something.
2
0
0
0
In the Java world, we know that the exceptions are classified into checked vs runtime and whenever something throws a checked exception, the caller of that something will be forced to handle that exception, one way or another. Thus the caller would be well aware of the fact that there is an exception and be prepared/coded to handle that. But coming to Python, given there is no concept of checked exceptions (I hope that is correct), how does the caller of something know if that something would throw an exception or not? Given this "lack of knowledge that an exception could be thrown", how does the caller ever know that it could have handled an exception until it is too late?
In python how does the caller of something know if that something would throw an exception or not?
0
0
1
0
0
75
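A small illustration of the point above: in Python, raised exceptions are documented rather than declared, and the caller decides what to catch.

```python
# Exceptions are documented in the docstring, not declared in the signature.
def parse_age(text):
    """Convert text to an age in years.

    Raises:
        ValueError: if text is not a valid integer.
    """
    return int(text)

try:
    age = parse_age("forty")
except ValueError as exc:            # the caller chooses to handle it
    print("bad input:", exc)
```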
24,249,796
2014-06-16T18:20:00.000
0
0
1
0
0
python,exception,exception-handling
1
24,249,818
0
4
0
false
0
0
As far as I know Python (six years), there isn't anything similar to Java's throws keyword in Python.
2
0
0
0
In the Java world, we know that the exceptions are classified into checked vs runtime and whenever something throws a checked exception, the caller of that something will be forced to handle that exception, one way or another. Thus the caller would be well aware of the fact that there is an exception and be prepared/coded to handle that. But coming to Python, given there is no concept of checked exceptions (I hope that is correct), how does the caller of something know if that something would throw an exception or not? Given this "lack of knowledge that an exception could be thrown", how does the caller ever know that it could have handled an exception until it is too late?
In python how does the caller of something know if that something would throw an exception or not?
0
0
1
0
0
75
24,250,152
2014-06-16T18:44:00.000
2
0
1
0
0
python,metadata,categories,pelican
0
24,419,741
0
1
0
false
0
0
You might try adding a key/value dictionary to your Pelican settings file that contains your category metadata, and then access that information from within your theme's index.html template.
1
4
0
0
I want to organize a blog into multiple categories. For each category index page I want to add some metadata like title, description, meta etc. specific to the category. Pelican uses folders to split posts into categories but how do I define the metadata for each category? Is it possible to use a metadata file to put into each category folder?
pelican blog define metadata for category index page
1
0.379949
1
0
0
590
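A hedged sketch of the settings-file dictionary the answer suggests. CATEGORY_META is a made-up name (Pelican has no built-in setting by that name), and whether the template can see it depends on your Pelican version and theme, so verify before relying on it.

```python
# pelicanconf.py -- illustrative only; CATEGORY_META is a custom, made-up setting.
CATEGORY_META = {
    "python": {
        "title": "Python articles",
        "description": "Posts about Python tooling and libraries.",
    },
    "devops": {
        "title": "DevOps notes",
        "description": "Deployment, monitoring and automation.",
    },
}

# In the theme's category/index template (Jinja2), something along these lines:
#   {% set meta = CATEGORY_META.get(category.slug, {}) %}
#   <meta name="description" content="{{ meta.description }}">
```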
24,271,006
2014-06-17T18:36:00.000
0
1
0
0
0
python,django,caching,django-rest-framework
0
24,293,403
0
1
0
false
1
0
One technique is to key the URLs on the content of the media they refer to. For example, if you're hosting images then use the SHA hash of the image file in the URL: /images/<sha>. You can then set far-future cache expiry headers on those URLs. If the image changes then you also update the URL referring to it, and a request is made for an image that is no longer cached. You can use this technique for regular database models as well as images and other media, so long as you recompute the hash of the object whenever any of its fields change.
1
1
0
0
Using Python's Django (w/ Rest framework) to create an application similar to Twitter or Instagram. What's the best way to deal with caching content (JSON data, images, etc.) considering the constantly changing nature of a social network? How to still show updated state a user creates a new post, likes/comments on a post, or deletes a post while still caching the content for speedy performance? If the cache is to be flushed/recreated each time a user takes an action, then it's not worth having a cache because the frequency of updates will be too rapid to make the cache useful. What are some techniques of dealing with this problem. Please feel free to share your approach and some wisdom you learned while implementing your solution. Any suggestions would be greatly appreciated. :)
Handling Cache with Constant Change of Social Network
0
0
1
0
0
83
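A small sketch of the content-addressed URL idea from the answer above; the URL prefix, hash choice, and header values are illustrative assumptions.

```python
import hashlib

def image_url(image_bytes):
    # Key the URL on the content: same bytes -> same URL, new bytes -> new URL.
    sha = hashlib.sha1(image_bytes).hexdigest()
    return "/images/%s" % sha          # e.g. /images/5baa61e4c9...

with open("avatar.png", "rb") as f:
    url = image_url(f.read())

# Serve the file at that URL with far-future cache headers, e.g.
#   Cache-Control: public, max-age=31536000
# When the image changes, its hash (and therefore its URL) changes too,
# so clients fetch the new version instead of a stale cached one.
print(url)
```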
24,279,321
2014-06-18T07:19:00.000
1
0
1
0
0
python,virtualenv
0
24,279,522
0
1
0
true
0
0
Delete or rename the file /virtualenv_root/lib/python3.4/no-global-site-packages.txt OR Add a symlink between /virtualenv_root/lib/python3.4/site-packages/ and /path/to/desired/site-packages/ Here virtualenv_root is the name of your virtual environment.
1
0
0
0
How can a virtualenv be modified after it is created so as to achieve the same effect as creating it with virtualenv --system-site-packages? In other words, how to enable accessing any systemwide installed packages in a virtualenv which was originally created with that access disabled?
How to modify virtualenv to achieve the same effect as --system-site-packages?
0
1.2
1
0
0
76
24,284,390
2014-06-18T11:30:00.000
3
0
0
0
0
python,numpy,scipy,sympy
0
24,289,392
0
2
0
false
0
0
Although it would be nice if there were an existing routine for calculating the spherical Hankel functions (like there is for the ordinary Hankel functions), they are just a (complex) linear combination of the spherical Bessel functions of the first and second kind so can be easily calculated from existing routines. Since the Hankel functions are complex and depending on your application of them, it can be advantageous to rewrite your expression in terms of the Bessel functions of the first and second kind, ie entirely real quantities, particularly if your final result is real.
1
3
1
0
I know that there is no builtin sph_hankel1 in SciPy, so I want to know how to implement it the right way. Additional: just show me one correct implementation of sph_hankel1, using either SciPy or SymPy.
How can i implement spherical hankel function of the first kind by scipy/numpy or sympy?
0
0.291313
1
0
0
997
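A sketch of the linear combination the answer describes, assuming SciPy >= 0.18 (which provides scipy.special.spherical_jn and spherical_yn; older releases expose sph_jn/sph_yn instead).

```python
# Spherical Hankel function of the first kind as j_n(z) + i*y_n(z).
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def sph_hankel1(n, z):
    """h1_n(z) = j_n(z) + 1j * y_n(z) for real z."""
    return spherical_jn(n, z) + 1j * spherical_yn(n, z)

z = np.linspace(0.1, 10, 5)
print(sph_hankel1(0, z))   # for n = 0 this should match exp(1j*z) / (1j*z)
```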
24,333,323
2014-06-20T18:15:00.000
0
0
1
1
0
python,macos,pyinstaller
0
53,956,162
0
1
0
false
0
1
While you create the application, don't add the --windowed and --noconsole options.
1
6
0
0
I'm packaging a GUI app for MacOS with Pyinstaller, using --windowed flag. Is it possible to package it so that it would show a console in addition to the GUI? When I tried to set console=True, the GUI part fails. In other words, when I start the App from the terminal by typing "open My.App/Contents/MacOS/myapp", then I do get both GUI and console. I'd like to get similar behaviour by just double-clicking on the App without starting the terminal. Is there a way to do it?
How to package a Mac OS app with Pyinstaller that shows both a console and a GUI?
1
0
1
0
0
425
24,333,423
2014-06-20T18:22:00.000
2
1
0
0
0
python,amqp,pika
0
41,400,921
0
1
0
false
1
0
I would like to write the answer down because this question shows up before the documentation on Google. def amqmessage(ch, method, properties, body): channel.basic_consume(amqmessage, queue=queue_name, no_ack=True) channel.start_consuming() The routing key can be found with method.routing_key.
1
5
0
1
New to RabbitMQ and I am trying to determine a way in which to retrieve the routing key information of an AMQP message. Has anyone really tried this before? I am not finding a lot of documentation that explicitly states how to query AMQP using pika (python). This is what I am trying to do: basically I have a Consumer class, for example: channel.exchange_declare(exchange='test', type='topic') channel.queue_declare(queue='topic_queue',auto_delete=True) channel.queue_bind(queue='topic_queue', exchange='test', routing_key = '#') I set up a queue and I bind to an exchange and all the routing_keys (or binding keys I suppose) being passed through that exchange. I also have a function: def amqmessage(ch, method, properties, body): channel.basic_consume(amqmessage, queue=queue_name, no_ack=True) channel.start_consuming() I think that the routing_key should be "method.routing_key" from the amqmessage function but I am not certain how to get it to work correctly.
Retrieving AMQP routing key information using pika
0
0.379949
1
0
0
3,298
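A sketch confirming the accepted answer: the routing key of each delivered message is available on the callback's method argument. The code follows the older pika API used in the question (type=, no_ack=, positional callback); newer pika renamed these to exchange_type=, auto_ack=, and on_message_callback.

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="test", type="topic")
channel.queue_declare(queue="topic_queue", auto_delete=True)
channel.queue_bind(queue="topic_queue", exchange="test", routing_key="#")

def amqmessage(ch, method, properties, body):
    # `method` is a Basic.Deliver frame carrying delivery metadata.
    print("routing key:", method.routing_key)
    print("body:", body)

channel.basic_consume(amqmessage, queue="topic_queue", no_ack=True)
channel.start_consuming()
```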
24,336,655
2014-06-20T22:38:00.000
1
0
0
0
0
python-3.x,python-c-api,python-c-extension
0
26,024,351
0
1
0
false
0
1
The only way to do this is to create a new object with PyBufferProcs* PyTypeObject.tp_as_buffer. I checked cpython source code thoroughly, as of 3.4.1, there is no out-of-box (so to speak) solution.
1
7
0
0
It seems to me the buffer protocol is more for exposing Python buffer to C. I couldn't find a way to create a bytes object using existing buffer without copying in C. Basically what I want is to implement something similar to PyBytes_FromStringAndSize() but without copying, and with a callback to free the buffer when the object is released. I don't know how big the buffer is before I receive the buffer returned from a C API. So creating bytes object in Python first and later fill it in is not an option. I also looked into memoryview, PyMemoryView_FromMemory() doesn't copy but there is no way to pass a callback to free my buffer. And I'm not suse Python lib (e.g. Psycopg) can use memoryview object or not. Do I have to create my own object to achieve these 2 requirements? Any other shortcut? If I have to, how can I make sure this object works same as bytes so I can pass it to Python lib safely? Thanks.
python c-api: create bytes using existing buffer without copying
0
0.197375
1
0
0
444
24,349,335
2014-06-22T08:07:00.000
0
0
0
0
0
python,eclipse,web-applications,flask,pydev
0
24,350,506
0
3
0
false
1
0
I've had a very similar thing happen to me. I was using CherryPy rather than Flask, but my solution might still work for you. Oftentimes browsers save webpages locally so that they don't have to re-download them every time the website is visited. This is called caching, and although it's very useful for the average web user, it can be a real pain to app developers. If you're frequently generating new versions of the application, it's possible that your browser is displaying an old version of the app that it has cached instead of the most up to date version. I recommend clearing that cache every time you restart your application, or disabling the cache altogether.
1
3
0
0
I'm working on a simple Flask web application. I use Eclipse/Pydev. When I'm working on the app, I have to restart this app very often because of code changes. And that's the problem. When I run the app, I can see the frame on my localhost, which is good. But when I want to close this app, just click on the red square which should stop applications in Eclipse, sometimes (often), the old version of application keeps running so I can't test the new version. In this case the only thing which helps is to force close every process in Windows Task Manager. Will you give me any advice how to manage this problem? Thank you in advance. EDIT: This maybe helps: Many times, I have to run the app twice. Otherwise I can't connect.
Python/Flask: Application is running after closing
0
0
1
0
0
3,981
24,367,485
2014-06-23T13:40:00.000
0
0
0
0
0
java,python,amazon-web-services,amazon-ec2
0
24,373,021
0
1
0
true
1
0
First-time: Create a Postgres db - depending on size (small or large), you might want RDS or Redshift. Connect to Amazon Server - EC2. Download code to server - upload your programs to an S3 bucket. Once a month: Download large data file to server - move the data to S3; if using Redshift, data can be loaded directly from S3 to Redshift. Run code (written in Python) to load the database with data. Run code (written in Java) to create a Lucene search index file from data in the database - you might want to look into EMR for this. Continuously: Run Java code in a servlet container; this will use the Lucene search index file but DOES NOT require access to the database - if you have a Java WAR file, you can host it using Elastic Beanstalk. In order to connect to your database, you must make sure the security group allows for this connection, and for an EC2 instance you must make sure port 22 is open to your IP to connect to it. It sounds like the security group for RDS isn't opening up port 5432.
1
0
0
0
I'm used to having a remote server I can use via ssh, but I am looking at using Amazon Web Services for a new project to give me better performance and resilience at reduced cost, and I'm struggling to understand how to use it. This is what I want to do: First-time: Create a Postgres db. Connect to Amazon server. Download code to server. Once a month: Download large data file to server. Run code (written in Python) to load the database with data. Run code (written in Java) to create a Lucene search index file from data in the database. Continuously: Run Java code in a servlet container; this will use the Lucene search index file but DOES NOT require access to the database. Note: Technically I could do the database population locally; the trouble is the resultant Lucene index file is about 5GB and I don't have a good enough Internet connection to upload a file of that size to Amazon. All that I have managed to do so far is create a Postgres database, but I don't understand how to connect to it or get an ssh/telnet connection to my server (I requested a Direct Connect but this seems to be a different service). Update so far FYI: I created a Postgres database using RDS. I created an Ubuntu Linux installation using EC2. I connected to the Linux installation using ssh. I installed the required software (using apt-get). I downloaded the data file to my Linux installation. I think, according to the installation, I should be able to connect to my Postgres db from my EC2 instance and even from my local machine; however, in both cases it just times out. Update 2: Probably security related, but I cannot for the life of me understand what I'm meant to do with security groups and why they don't make the EC2 instance able to talk to my database by default. I've checked that both RDS and EC2 have the same VPC id, and both are in the same availability zone. Postgres is using port 5432 (not 3306) but I haven't been able to access it yet. So, taking my working EC2 instance as the starting point, should I create a new security group before creating a database, and if so what values do I need to put into it so I can access the db with psql from within my EC2 ssh session - that's all that is holding me up for now and all I need to do. Update 3: At last I have access to my database. My database had three security groups (I think the other two were created when I created a new EC2 instance); I removed two of them, and in the remaining one, on the inbound tab, I set the rule to All Traffic, Ports 0-65535, Protocol All, IP address 0.0.0.0/0 (the outbound tab already had the same rule) and it worked! I realize this is not the most secure setup, but at least it's progress. I assume that to only allow access from my EC2 instance I can change the IP address of the inbound rule, but I don't know how to calculate the CIDR for the IP address? My new problem is that, having successfully downloaded my data file to my EC2 instance, I am unable to unzip it because I don't have enough disk space. I assume I have to use S3. I've created a bucket, but how do I make it visible as disk space from my EC2 instance so I can move my data file to it, unzip the data file into it, and run my code against the unzipped data file to load the database? (Note the data file is in an XML format and has to be processed with custom code to get it into the database; it cannot just be loaded directly into the database using some generic tool.) Update 4: S3 is the wrong solution for me; instead I can use EBS, which is basically disk storage accessible not as a service but by clicking Volumes in the EC2 Console. Ensure you create the volume in the same availability zone as the instance; there may be more than one in each location - for example, my EC2 instance was created in eu-west-1a but the first time I created a volume it was in eu-west-1b and therefore could not be used. Then attach the volume to the instance. But I cannot see the volume from the Linux command line; it seems there is something else required. Update 5: Okay, I have to format the disk and mount it in Linux for it to work. I now have my code for uploading the data to the database working, but it is running incredibly slowly, much slower than the cheap local server I have at home. I'm guessing that because the data is being loaded one record at a time, the bottleneck is not the micro database but my micro instance; it looks like I need to redo this with a more expensive instance. Update 6: Updated to a large compute instance; still very slow. I'm now thinking the issue is the network latency between the server and the database; perhaps I need to install a Postgres server directly onto my instance to cut that part out.
How do i get started with Amazon Web Services for this scenario?
1
1.2
1
1
0
186
24,371,646
2014-06-23T17:17:00.000
1
0
0
0
0
django,python-2.7,django-forms,django-templates,django-views
1
24,379,552
0
2
0
true
1
0
First of all, we have to determine whether it is a non-field error or a field error. Where have you raised ValidationError in the ModelForm you have defined? If it is raised in the form's def clean(), then it will be present in non_field_errors and can be accessed via form.non_field_errors in the template. If it is raised in def clean_<field_name>(), then it will be a field error and can be accessed via form.errors or form.<field_name>.errors in the template. Please decide for yourself where you want to raise it. Note: a ModelForm can work with FormView, but ideally there are CreateView and UpdateView for that.
1
0
0
0
I'm using FormView with ModelForm to process a registration form. In case of duplication of email i'm raising ValidationError. But this error message is not available on registration template as non_field_errors. When i tried to find what is the form.errors in form_invalid method in RegistrationView, its showing the expected the errors, but somehow its not getting passed to template.
How to get non_field_errors on template when using FormView and ModelForm
0
1.2
1
0
0
1,090
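A hedged sketch of the two places the answer distinguishes; the Registration model, field names, and duplicate-email check are assumptions made up for illustration.

```python
# forms.py -- illustrative only
from django import forms
from django.core.exceptions import ValidationError
from .models import Registration      # hypothetical model

class RegistrationForm(forms.ModelForm):
    class Meta:
        model = Registration
        fields = ["email", "name"]

    def clean_email(self):
        # Raised here -> a field error, shown by {{ form.email.errors }}
        email = self.cleaned_data["email"]
        if Registration.objects.filter(email=email).exists():
            raise ValidationError("This email is already registered.")
        return email

    def clean(self):
        # Raised here -> a non-field error, shown by {{ form.non_field_errors }}
        return super(RegistrationForm, self).clean()
```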
24,376,961
2014-06-24T01:18:00.000
0
1
1
0
0
python,import,pythonpath
0
24,377,167
0
1
0
true
0
0
I haven't had occasion to ever use a .pth file. I prefer a two-pronged approach: Use a shebang which runs env python, so it uses the first python on your path, i.e.: #!/usr/bin/env python Use virtualenv to keep different environments separate and to group the necessary libraries for any given program/program set together. This has the added benefit that the requirements file (from pip freeze output) can be stored in source control, and the environment can be recreated easily anywhere, such as for use with Jenkins tests, et al. In the virtualenv case the Python interpreter can be explicitly invoked from the virtualenv's bin directory. For local modules in this case, a local PyPI server can be used to centralize custom modules, and they can also be included in the requirements file (via the --extra-index-url option of pip). Edit in response to comment from OP: I have not used SublimeREPL before; however, based on the scenario you have described, I think the overall simplest approach might be to simply symlink the directories into your site-packages (or dist-packages, as the case may be) directory. It's not an ideal scenario for a production server, but for your purposes, on a client box, I think it would be fine. If you don't want to have to use the folder name, i.e. import ch1.foo, you'll need to symlink the contents of those directories so you can simply import foo. If you're OK with using the directory name, i.e. import ch1.foo, then you should only need to symlink the top-level code directory.
1
0
0
0
I am new to python and trying to add a project folder to the PYTHONPATH. I created a .pth file and add my root path of the file in my site-packages folder. However, when I trying to import the .py files in this folder, only those located under the root folder (for example '/sample') can be imported, but those subfolders under the /sample folder were not able to be imported (for example /sample/01). So my question is what file and how to change it to make my whole folder including all its subfolders can be importable. In the worst case I can think of is to write down all the folders name in the .pth file in site-packages. But I just believe that Python will provide a more efficient way to achieve that.
How to use PYTHONPATH to import the whole folder in Python
0
1.2
1
0
0
287
24,380,269
2014-06-24T06:59:00.000
3
0
0
0
0
python,mysql,django,django-cms
0
24,380,525
0
2
0
false
1
0
This is an error message you get if MySQLdb isn't installed on your computer. The easiest way to install it would be by entering pip install MySQL-python into your command line.
1
1
0
0
I can't connect to MySQL and I can't run "python manage.py syncdb" against it. How do I connect to MySQL in Django and django-cms without any error?
Getting “Error loading MySQLdb module: No module named MySQLdb” in django-cms
0
0.291313
1
1
0
9,227
24,396,591
2014-06-24T21:25:00.000
3
0
0
0
0
python,sql,django
0
24,396,885
0
2
0
true
1
0
To do this, I would recommend breaking down each individual relationship. Your relationships seem to be: authoring and following. For authoring, the details are: each Question is authored by one User; each User can author many Questions. As such, this is a one-to-many relationship between the two. The best way to model it is a foreign key from the Question to the User, since there can only be one author. For following, the details are: each Question can have many following Users; each User can be following many Questions. As such, this is a many-to-many relationship. The many-to-many field in Django is a perfect candidate for this. Django will let you use the field through another model, but in this case that is not needed, as you have no other information associated with the fact that a user is following a question (e.g. a personal score/ranking). With both of these relationships, Django will create the lists of related items for you, so you do not have to worry about that. You will need to define a related_name for at least one of these fields, as there are multiple relations from Question to User, and by default Django would name the reverse accessor for both of them question_set, which would clash.
1
2
0
0
I'm new to Django and I'm trying to create a simple app! I basically want to create something like StackOverflow, I have many User and many Question. I don't know how I should define the relationship between these two Models. My Requirements: I want each Question to have a single author User, and a list of User that followed the Question. I want each User to have a list of posted Question and a list of followed Question. I'm pretty much lost, and I don't know how to define my relationship. Is this a many-to-many relationship? If so, how do I have like 2 lists of Question in my User model (Posted/Followed)? Please help!
Define models in Django
0
1.2
1
0
0
85
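A hedged sketch of the two relationships described in the answer; the field names and related_name values are illustrative choices (on newer Django versions the ForeignKey also requires an explicit on_delete).

```python
# models.py -- illustrative only
from django.conf import settings
from django.db import models

class Question(models.Model):
    title = models.CharField(max_length=255)
    # One author per question: a one-to-many relation via ForeignKey.
    author = models.ForeignKey(settings.AUTH_USER_MODEL,
                               on_delete=models.CASCADE,
                               related_name="posted_questions")
    # Many followers per question, many followed questions per user.
    followers = models.ManyToManyField(settings.AUTH_USER_MODEL,
                                       related_name="followed_questions",
                                       blank=True)

# Usage from the User side:
#   user.posted_questions.all()     -> questions the user authored
#   user.followed_questions.all()   -> questions the user follows
#   question.followers.add(user)    -> user starts following the question
```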
24,400,012
2014-06-25T04:05:00.000
0
0
0
0
1
python,numpy,matplotlib,scikit-learn
1
24,424,870
0
1
0
false
0
0
If X is a sparse matrix, you probably need X = X.todense() in order to get access to the data in the correct format. You probably want to check X.shape before doing this though, as if X is very large (but very sparse) it may consume a lot of memory when "densified".
1
0
1
0
I'm using scikit to perform text classification and I'm trying to understand where the points lie with respect to my hyperplane to decide how to proceed. But I can't seem to plot the data that comes from the CountVectorizer() function. I used the following function: pl.scatter(X[:, 0], X[:, 1]) and it gives me the error: ValueError: setting an array element with a sequence. Any idea how to fix this?`
How to plot text documents in a scatter map?
1
0
1
0
0
99
24,416,140
2014-06-25T18:39:00.000
0
0
0
0
0
python,sqlite,pandas
0
24,419,432
0
1
0
false
0
0
I have found the issue -- I am using SQLite Manager (Firefox Plugin) as a SQLite client. For whatever reason, SQLite Manager displays the tweet IDs incorrectly even though they are properly stored (i.e. when I query, I get the desired values). Very strange I must say. I downloaded a different SQLite client to view the data and it displays properly.
1
0
0
0
I am using pandas to organize and manipulate data I am getting from the twitter API. The 'id' key returns a very long integer (int64) that pandas has no problem handling (i.e. 481496718320496643). However, when I send to SQL: df.to_sql('Tweets', conn, flavor='sqlite', if_exists='append', index=False) I now have tweet id: 481496718320496640 or something close to that number. I converted the tweet id to str but Pandas SQLite Driver / SQLite still messes with the number. The data type in the SQLite database is [tweet_id] INTEGER. What is going on and how do I prevent this from happening?
Long integer values in pandas dataframe change when sent to SQLite database using to_sql
0
0
1
1
0
160
24,417,793
2014-06-25T20:18:00.000
1
0
0
0
0
java,python,multithreading,sockets,udp
0
24,419,007
0
1
1
true
0
0
"I assume that the majority of gameplay data (e.g. fine player movements) will need to be sent via UDP connections. I'm unfamiliar with UDP connections so I really don't know where to begin designing the server." UDP can be lower latency, but sometimes, it is far more important that packets aren't dropped in a game. If it makes any difference to you, World of Warcraft uses TCP. If you chose to use UDP, you would have to implement something to handle dropped packets. Otherwise, what happens if a player uses an important ability (Such as a spell interrupt or a heal) and the packet gets dropped? You COULD use both UDP and TCP to communicate different things, but that adds a lot of complexity. WoW uses only a single port for all gameplay traffic, plus a UDP port for the in-game voice chat that nobody actually uses. "How should the server be threaded? One thread per client connection that retains session info, and then a separate thread(s) to control autonomous world changes (NPCs moving, etc.)?" One thread per client connection can end up with a lot of threads, but would be a necessity if you use synchronous sockets. I'm not really sure of the best answer for this. "How should relatively large packets be transmitted? (e.g. ~25 nearby players and all of their gameplay data, usernames, etc.) TCP or UDP?" This is what makes MMORPG servers so CPU and bandwidth intense. Every action has to be relayed to potentially dozens of players, possibly hundreds if it scales that much. This is more of a scaling issue than a TCP vs UDP issue. To be honest, I wouldn't worry much about it unless your game catches on and it actually becomes an issue. "Lastly - is it safe for the gameplay server to interface with the login server via HTTP requests, how do I verify (from the login server's perspective) the gameplay server's identity - simple password, encryption?" You could easily use SSL. "Lastly - if this is relevant - I have not begun development on the client - not sure what my goals for the game itself are yet, I just want the servers to be scalable (up to ~150 players, beyond that I expect and understand that major rewrite will probably be necessary) and able to support a fair amount of players and open-world style content. (no server-taxing physics or anything like that necessary)" I wouldn't use Python for your server. It is horrendously slow and won't scale well. It's fine for web servers and applications where latency isn't too much of an issue, but for a real-time game server handling 100+ players, I'd imagine it would fall apart. Java will work, but even THAT will run into scaling issues before a natively coded server does. I'd use Java to rapidly prototype the game and get it working, then consider a rewrite in C/C++ to speed it up later. Also, something to consider regarding Python...if you haven't read about the Global Interpreter Lock, I'd make sure to do that. Because of the GIL, Python can be very ineffective at multithreading unless you're making calls to native libraries. You can get around it with multiprocessing, but then you have to deal with the overhead of communication between processes.
1
0
0
0
I'm working on an online multiplayer game. I already developed the login servers and database for any persistent storage; both are written in Python and will be hosted with Google's App Engine. (For now.) I'm relatively comfortable with two languages - Java and Python. I'd like to write the actual gameplay server in one of those languages, and I'd like for the latency of the client to gameplay-server connection to be as low as possible, so I assume that the majority of gameplay data (e.g. fine player movements) will need to be sent via UDP connections. I'm unfamiliar with UDP connections so I really don't know where to begin designing the server. How should the server be threaded? One thread per client connection that retains session info, and then a separate thread(s) to control autonomous world changes (NPCs moving, etc.)? How should relatively large packets be transmitted? (e.g. ~25 nearby players and all of their gameplay data, usernames, etc.) TCP or UDP? Lastly - is it safe for the gameplay server to interface with the login server via HTTP requests, how do I verify (from the login server's perspective) the gameplay server's identity - simple password, encryption? Didn't want to ask this kind of question because I know they're usually flagged as unproductive - which language would be better for me (as someone inexperienced with socketing) to write a sufficiently efficient server - assume equal experience with both? Lastly - if this is relevant - I have not begun development on the client - not sure what my goals for the game itself are yet, I just want the servers to be scalable (up to ~150 players, beyond that I expect and understand that major rewrite will probably be necessary) and able to support a fair amount of players and open-world style content. (no server-taxing physics or anything like that necessary)
Structuring a server for an online multiplayer game
0
1.2
1
0
1
1,168
24,446,884
2014-06-27T08:03:00.000
1
1
1
0
0
python,eclipse,syntax-highlighting
0
24,447,038
0
1
0
true
0
0
Assuming you use the PyDev plug-in you can access the color settings in the Window/Preferences/PyDev/Editor menu.
1
0
0
0
I am using Eclipse Indigo for python coding. When I comment something, I want the color of the comment to be blue how can I achieve? Thanks
Changing python syntax coloring in eclipse
0
1.2
1
0
0
442
24,454,538
2014-06-27T14:39:00.000
5
1
0
0
0
python,outlook,win32com
0
24,454,678
0
1
0
true
0
0
If you configured a separate POP3/SMTP account, set the MailItem.SendUsingAccount property to an account from the Namespace.Accounts collection. If you are sending on behalf of an Exchange user, set the MailItem.SentOnBehalfOfName property
1
3
0
0
I am trying to automate emails using python. Unfortunately, the network administrators at my work have blocked SMTP relay, so I cannot use that approach to send the emails (they are addressed externally). I am therefore using win32com to automatically send these emails via outlook. This is working fine except for one thing. I want to choose the "FROM" field within my python code, but I simply cannot figure out how to do this. Any insight would be greatly appreciated.
Choosing "From" field using python win32com outlook
0
1.2
1
0
1
2,282
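A hedged win32com sketch of the two properties the answer names; the addresses are made up, and assigning SendUsingAccount through COM can be finicky with some Outlook/pywin32 combinations, so treat this as a starting point rather than a guaranteed recipe.

```python
import win32com.client

outlook = win32com.client.Dispatch("Outlook.Application")
mail = outlook.CreateItem(0)                      # 0 = olMailItem

# Option 1: Exchange "send on behalf of" (needs delegate rights on the mailbox)
mail.SentOnBehalfOfName = "shared.mailbox@example.com"

# Option 2: send from one of the accounts configured in the Outlook profile
for account in outlook.Session.Accounts:
    if account.SmtpAddress == "second.account@example.com":
        mail.SendUsingAccount = account           # may need tweaking on some setups
        break

mail.To = "recipient@example.com"
mail.Subject = "Automated report"
mail.Body = "Sent from Python via Outlook."
mail.Send()
```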
24,491,143
2014-06-30T13:22:00.000
0
0
0
0
0
python,algorithm,minimax
0
24,496,933
0
3
0
false
0
0
The simplest way to select your move is to choose the move that has the maximum number of winning positions stemming from it. For each node in your search tree (game state), I would keep a record of the possible win states that can be created from that game state.
3
0
0
0
So far I have successfully been able to use the Minimax algorithm in Python and apply it to a tic-tac-toe game. I can have my algorithm run through the whole search treee, and return a value. However, I am confused as to how to take this value, and transform it into a move? How am I supposed to know which move to make? Thanks.
How to Get Move From Minimax Algorithm in Tic-Tac-Toe?
0
0
1
0
0
767
24,491,143
2014-06-30T13:22:00.000
0
0
0
0
0
python,algorithm,minimax
0
24,491,647
0
3
0
false
0
0
In using the MM algorithm, you must have had a way to generate the possible successor boards; each of those was the result of a move. As has been suggested, you can modify your algorithm to include tracking of the move that was used to generate a board (for example, adding it to the definition of a board, or using a structure that has the board and the move); or, you could have a special case for the top level of the algorithm, since that is the only one in which the particular move is important. For example, if your function currently returns just the computed value of the board it was passed, it could instead return a dict (or tuple, which isn't as clear) with both the value and the first move used to obtain that value, and then modify your code to use whichever bit is needed.
3
0
0
0
So far I have successfully been able to use the Minimax algorithm in Python and apply it to a tic-tac-toe game. I can have my algorithm run through the whole search treee, and return a value. However, I am confused as to how to take this value, and transform it into a move? How am I supposed to know which move to make? Thanks.
How to Get Move From Minimax Algorithm in Tic-Tac-Toe?
0
0
1
0
0
767
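A sketch of the "return the move together with the value" idea from the answer above, written in negamax style for tic-tac-toe; the board representation (a flat list of 9 cells) and helper function are assumptions for illustration.

```python
# Minimax (negamax form) that returns (score, move) instead of just the score.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from `player`'s perspective: +1 win, -1 loss, 0 draw."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None                      # draw
    best_score, best_move = -2, None
    opponent = "O" if player == "X" else "X"
    for move in moves:
        board[move] = player
        score, _ = minimax(board, opponent)
        board[move] = None
        score = -score                      # the opponent's best is our worst
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move

score, move = minimax([None] * 9, "X")
print("best opening move for X:", move, "expected outcome:", score)  # 0 = draw
```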
24,491,143
2014-06-30T13:22:00.000
0
0
0
0
0
python,algorithm,minimax
0
24,491,495
0
3
0
false
0
0
Conceptualize the minimax algorithm like a graph, where every vertex is a possible configuration of the board, and every edge from a vertex to its neighbor is a transition/move from one board configuration to the next. You need to look at the heuristic value of each board state neighboring your current state, then choose the state with the best heuristic value, then update your screen to show that board state. If you are doing animations/transitions between board states, then you would have to look at the edge and figure out which piece is different between the two states, and animate that piece accordingly.
3
0
0
0
So far I have successfully been able to use the Minimax algorithm in Python and apply it to a tic-tac-toe game. I can have my algorithm run through the whole search treee, and return a value. However, I am confused as to how to take this value, and transform it into a move? How am I supposed to know which move to make? Thanks.
How to Get Move From Minimax Algorithm in Tic-Tac-Toe?
0
0
1
0
0
767
24,497,239
2014-06-30T19:10:00.000
-1
0
0
0
0
python,ajax,django
0
32,511,257
0
2
0
false
1
0
Just think of the Web as a platform for building easy-to-use, distributed, loosely coupled systems, with no guarantee about the availability of resources, as the 404 status code suggests. Creating tightly coupled solutions such as your idea goes against web principles and the usage of REST. xhr.abort() is client-side programming; it is completely different from the server side, and it is bad practice to tie client-side technology to server-side internal behaviour. Not only is this a waste of resources, but there is also no guarantee about the processing status of the request on the web server, and it may lead to data inconsistency. If your request generates no server-side side effects for which the client can be held responsible, it is better just to ignore the abort, since these kinds of requests do not change server state and the response is usually cached for better performance. If your request could cause changes in server state or data, then for the sake of data consistency you can check via an API whether the changes have taken effect, and if they have, try to roll them back using another API.
2
11
0
0
I initiate a request client-side, then I change my mind and call xhr.abort(). How does Django react to this? Does it terminate the thread somehow? If not, how do I get Django to stop wasting time trying to respond to the aborted request? How do I handle it gracefully?
How do I terminate a long-running Django request if the XHR gets an abort()?
0
-0.099668
1
0
0
1,318
24,497,239
2014-06-30T19:10:00.000
1
0
0
0
0
python,ajax,django
0
52,607,897
0
2
0
false
1
0
Due to how HTTP works, and because you usually have a frontend in front of your Django gunicorn app processes (or uWSGI, etc.), your cancelled HTTP request is buffered by nginx. The gunicorn workers don't get a signal; they just finish processing and then write whatever output there is to the HTTP socket. If that socket has been closed, this produces an error (which is caught as a closed connection, and the worker moves on). This makes it easy to DoS a server if you can find a way to spawn many of these requests. But to answer your question: it depends on the backend; with gunicorn, the request keeps running until it finishes or hits the timeout.
2
11
0
0
I initiate a request client-side, then I change my mind and call xhr.abort(). How does Django react to this? Does it terminate the thread somehow? If not, how do I get Django to stop wasting time trying to respond to the aborted request? How do I handle it gracefully?
How do I terminate a long-running Django request if the XHR gets an abort()?
0
0.099668
1
0
0
1,318
24,520,176
2014-07-01T22:25:00.000
0
0
0
0
0
python,database,amazon-s3,amazon-dynamodb,boto
0
24,642,053
0
2
0
false
0
0
From what you described, I think you just need to create one table with a hash key. The hash key should be the object id, and you will have columns such as "date", "image pointer", "text pointer", etc. DynamoDB is schema-less, so you don't need to create the columns explicitly. When you call getItem, the server will return you a dictionary with the column names as keys and their values. Being schema-less also means you can create new columns dynamically. Assuming you already have a row in the table with only the "date" column and now you want to add the "image pointer" column, you just need to call UpdateItem and give it the hash key and the image-pointer key-value pair.
2
0
0
0
I'm attempting to store information from a decompiled file in Dynamo. I have all of the files stored in s3 however I would like to change some of that. I have an object id with properties such as a date, etc which I know how to create a table of in dynamo. My issue is that each object also contains images, text files, and the original file. I would like to have the key for s3 for the original file in the properties of the file: Ex: FileX, date, originalfileLoc, etc, images pointer, text pointer. I looked online but I'm confused how to do the nesting. Does anyone know of any good examples? Is there another way? I assume I create an images and a text table. Each with the id and all of the file's s3 keys. Any example code of how to create the link itself? I'm using python boto btw to do this.
How to correctly nest tables in DynamoDb
0
0
1
1
0
477
24,520,176
2014-07-01T22:25:00.000
0
0
0
0
0
python,database,amazon-s3,amazon-dynamodb,boto
0
24,536,208
0
2
0
true
0
0
If you stay within DynamoDB's limit of 64KB per item, you can have one item (row) per file. DynamoDB has a String type (for the file name, date, etc.) and also a StringSet (SS) for lists of attributes (for the text files and images). From what you write, I assume you will only save pointers (keys) to the binary data in S3. You could also save binary data and binary sets in DynamoDB, but I believe you would hit the limit AND have an expensive solution in terms of throughput.
2
0
0
0
I'm attempting to store information from a decompiled file in Dynamo. I have all of the files stored in s3 however I would like to change some of that. I have an object id with properties such as a date, etc which I know how to create a table of in dynamo. My issue is that each object also contains images, text files, and the original file. I would like to have the key for s3 for the original file in the properties of the file: Ex: FileX, date, originalfileLoc, etc, images pointer, text pointer. I looked online but I'm confused how to do the nesting. Does anyone know of any good examples? Is there another way? I assume I create an images and a text table. Each with the id and all of the file's s3 keys. Any example code of how to create the link itself? I'm using python boto btw to do this.
How to correctly nest tables in DynamoDb
0
1.2
1
1
0
477
24,535,601
2014-07-02T15:48:00.000
0
0
1
0
0
python,debugging,python-idle
0
28,057,981
0
2
0
false
0
0
The pyshell.py file opens during debugging when the function under review is found in Python's library - for example print() or input(). If you want to bypass this file/process, click Over and the debugger will step over this excursion into the function in Python's library.
2
0
0
0
I have recently started to learn Python 3 and have run into an issue while trying to learn how to debug using IDLE. I have created a basic program following a tutorial, which then explains how to use the debugger. However, I keep running into an issue while stepping through the code, which the tutorial does not explain (I have followed the instructions perfectly) nor does hours of searching on the internet. Basically if I step while already inside a function, usually following print() the debugger steps into pyshell.py, specifically, PyShell.py:1285: write() if i step out of pyshell, the debugger will simple step back in as soon as I try to move on, if this is repeated the step, go, etc buttons will grey out. Any help will be greatly appreciated. Thanks.
Python 3 Debugging issue
0
0
1
0
0
243
24,535,601
2014-07-02T15:48:00.000
0
0
1
0
0
python,debugging,python-idle
0
28,735,903
0
2
0
false
0
0
In Python 3.4, I had the same problem. My tutorial is from Invent with Python by Al Sweigart, chapter 7. New file editor windows such as pyshell.py and random.py open when built-in functions are called, such as input(), print(), random.randint(), etc. Then the STEP button starts stepping through the file it opened. If you click OVER, you will have to click it several times, but if you click OUT, pyshell.py will close immediately and you'll be back in the original file you were trying to debug. Also, I encountered problems confusing this one (the grayed-out buttons you mentioned) if I forgot to click in the shell and give input when the program asked. I tried Wing IDE and it didn't run the program correctly, although the program has no bugs. So I googled the problem, and there was no indication that IDLE is broken or useless. Therefore, I kept trying until the OUT button in the IDLE debugger solved the problem.
2
0
0
0
I have recently started to learn Python 3 and have run into an issue while trying to learn how to debug using IDLE. I have created a basic program following a tutorial, which then explains how to use the debugger. However, I keep running into an issue while stepping through the code, which the tutorial does not explain (I have followed the instructions perfectly) nor does hours of searching on the internet. Basically if I step while already inside a function, usually following print() the debugger steps into pyshell.py, specifically, PyShell.py:1285: write() if i step out of pyshell, the debugger will simple step back in as soon as I try to move on, if this is repeated the step, go, etc buttons will grey out. Any help will be greatly appreciated. Thanks.
Python 3 Debugging issue
0
0
1
0
0
243
24,536,552
2014-07-02T16:36:00.000
5
0
0
0
0
python,opencv,image-processing,dwt
0
45,240,779
0
2
1
false
0
0
Navaneeth's answer is correct, but with two corrections: 1) OpenCV reads and saves images as BGR, not RGB, so you should use cv2.COLOR_BGR2GRAY to be exact. 2) The maximum level in _multilevel.py is 7, not 10, so you should call: w2d("test1.png", 'db1', 7)
1
7
1
0
I need to do an image processing in python. i want to use wavelet transform as the filterbank. Can anyone suggest me which one library should i use? I had pywavelet installed, but i don't know how to combine it with opencv. If i use wavedec2 command, it raise ValueError("Expected 2D input data.") Can anyone help me?
How to Combine pyWavelet and openCV for image processing?
0
0.462117
1
0
0
11,865
24,548,398
2014-07-03T08:15:00.000
7
0
1
0
0
python,intellij-idea,pycharm,intellij-plugin
0
45,478,266
0
4
0
false
0
0
Ubuntu 16.04 defines Ctrl + Alt + Left as a workspace-switch shortcut, so it does nothing in PyCharm. You have to either disable the Ubuntu shortcut (Dash > Keyboard Shortcuts > Navigation) or redefine the PyCharm shortcut to something else. Linux distro desktop devs: please make all desktop system-wide shortcuts contain the Super key.
1
37
0
0
While browsing code in PyCharm (Community Edition), how do I go back to the previously browsed section? I am looking for Eclipse-style back-button functionality in PyCharm.
How to go back in PyCharm while browsing code like we have a back button in eclipse?
0
1
1
0
0
17,353
24,552,964
2014-07-03T11:51:00.000
0
0
0
0
0
python,django
0
24,554,222
0
1
0
true
1
0
First you need to create a page in the admin console. Then add the placeholder in your template as the tutorial describes: {% get_page "news" as news_page %} {% for new in news_page.get_children %} <li> {{ new.publication_date }} {% show_content new body %} </li> {% endfor %}
1
0
0
0
I have installed django-page-cms successfully, I think. Like other CMSes, it is meant for creating new pages. But I already have HTML pages in my project. How do I integrate with those? They want me to put a placeholder in the HTML page, like {% load pages_tags %}, but I think this will bring in the content from the page already created in admin. Can anyone tell me how to integrate with my existing pages?
integrating with existing html page django-page-cms
0
1.2
1
0
0
64
24,563,782
2014-07-03T21:48:00.000
1
0
0
0
0
python,sublimerepl
0
24,564,624
0
1
0
true
0
0
You could try downloading and installing SublimeREPL using Package Control on a computer with an internet connection, and then in Sublime Text go to Preferences > Browse Packages…, where you should find a folder named SublimeREPL. Copy that folder to the same directory on the other computer. That should work.
1
0
0
0
I'm trying to install SublimeREPL on an offline computer (it has secure data and so can't be Internet-connected). Any ideas for how to do so? I can copy any installation files to a USB drive, but haven't found any--everywhere I've seen insists on using the Package Manager (which requires connection to function properly)
Installing SublimeREPL offline
0
1.2
1
0
0
1,408
24,598,160
2014-07-06T16:54:00.000
3
0
0
0
1
python,opencv,ubuntu,uninstallation
0
24,598,296
0
1
0
true
0
0
The procedure depends on whether or not you built OpenCV from source with CMake, or snatched it from a repository. From repository sudo apt-get purge libopencv* will cleanly remove all traces. Substitute libopencv* as appropriate in case you were using an unofficial ppa. From source If you still have the files generated by CMake (the directory from where you executed sudo make install), cd there and sudo make uninstall. Otherwise, you can either rebuild them with the exact same configuration and use the above command, or recall your CMAKE_INSTALL_PREFIX (/usr/local by default), and remove everything with opencv in its name within that directory tree.
1
5
1
0
I'm using OpenCV on Ubuntu 14.04, but some of the functions that I require, particularly in the cv2 library (cv2.drawMatches, cv2.drawMatchesKnn), do not work in 2.4.9. How do I uninstall 2.4.9 and install 3.0.0 from their git? I know the procedure for installing 3.0.0, but how do I make sure that 2.4.9 gets completely removed from disk?
Uninstall opencv 2.4.9 and install 3.0.0
0
1.2
1
0
0
18,476
24,622,714
2014-07-08T02:16:00.000
11
0
0
0
0
python,django
0
24,644,610
0
5
0
false
1
0
A Django app doesn't really map to a page; rather, it maps to a function. Apps would be for things like a "polls" app or a "news" app. Each app should have one main model with maybe a couple of supporting ones. For example, a news app could have a model for articles, with supporting models like authors and media. If you wanted to display multiple apps, you would need an integration app. One way to do this is to have a "project" app next to your polls and news apps. The project app is for your specific website - it is the logic that is specific to this application. It would have your main urls.py, your base template(s), things like that. If you need information from multiple apps on one page, you have to have a view that returns info from multiple apps. Say, for example, that you have a view that returns the info for a news article, and one that returns info for a poll. You could have a view in your project app that calls those two view functions and sticks the returned data into a different template that has spots for both of them. In this specific example, you could also have your polls app set up so that its returned info can be embedded - and then embed the info into a news article. In this case you wouldn't really have to link the apps together at all as part of your development; it could be done as needed on the content creation end.
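A minimal sketch of that "project app view that pulls from several apps" idea; the app names, model names and fields (news.Article, polls.Poll, publication_date, created) are hypothetical and only for illustration:

    # views.py in the hypothetical "project" app
    from django.shortcuts import render
    from news.models import Article   # assumed model from a "news" app
    from polls.models import Poll     # assumed model from a "polls" app

    def home(request):
        # Gather data from several apps and render it all with one template.
        context = {
            "articles": Article.objects.order_by("-publication_date")[:5],
            "poll": Poll.objects.latest("created"),
        }
        return render(request, "project/home.html", context)

The template project/home.html then has one block for the articles and one for the poll, so a single URL shows content from both apps.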
1
15
0
0
I've looked all over the net and found no answer. I'm new to Django. I've done the official tutorial and read many more but unfortunately all of them focus on creating only one application. Since it's not common to have a page as a single app, I would like to ask some Django guru to explain how I can have multiple apps on a webpage. Say I go to mysite.com and I see a poll app displaying a poll, gallery app displaying some pics, news app displaying latest news etc, all accessed via one url. I know I do the displaying in template but obviously need to have access to data. Do I create the view to return multiple views? Any advice, links and examples much appreciated.
Django - Multiple apps on one webpage?
0
1
1
0
0
11,743
24,639,577
2014-07-08T18:52:00.000
2
0
1
0
0
wxpython,pygame
0
24,644,067
0
1
0
false
0
1
The PyEmbeddedImage class has a GetData method (or Data property) that can be used to fetch the raw data of the embedded image, in PNG format.
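A small sketch of feeding that raw PNG data to pygame; the module name my_generated_module and the object MyImage stand in for whatever img2py produced, and loading from a file-like object with a name hint is one way pygame can be given in-memory image data:

    import io
    import pygame
    from my_generated_module import MyImage   # hypothetical img2py output (requires wx)

    pygame.init()
    screen = pygame.display.set_mode((640, 480))

    # GetData() returns the embedded image's raw PNG bytes,
    # which pygame can load from a file-like object.
    surface = pygame.image.load(io.BytesIO(MyImage.GetData()), "image.png")
    screen.blit(surface, (0, 0))
    pygame.display.flip()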
1
1
0
0
I have used img2py to convert an image into a .py file, but how do I use that converted file in pygame? Is there any specific code for it?
How to use/decompress the file made by img2py
0
0.379949
1
0
0
748
24,663,772
2014-07-09T21:03:00.000
0
0
1
1
0
python,input,command-line,command,command-prompt
1
24,663,846
0
2
0
false
0
0
If you pasted the code here that would help, but the answer you are most likely looking for is command-line arguments. If I were to guess, in the command line the input would look something like: python name_of_script.py "c:\thefilepath\totheinputfile" {enter}, with {enter} being the actual key pressed on the keyboard and not typed in as the word. Hopefully this starts you on the right answer :)
2
0
0
0
I'm new to Python and I'm attempting to run a script provided to me that requires the name of a text file as input. I changed my path to include the Python directory, and my input in the command line - "python name_of_script.py" - is seemingly working. However, I'm getting the error: "the following arguments are required: --input". This makes sense, as I need this other text file for the program to run, but I don't know how to supply it on the command line, as I'm never prompted to enter any input. I tried just adding it to the end of my command prompt line, but to no avail. Does anybody know how this could be achieved? Thanks tons
Python script requires input in command line
0
0
1
0
0
352
24,663,772
2014-07-09T21:03:00.000
0
0
1
1
0
python,input,command-line,command,command-prompt
1
24,665,527
0
2
0
false
0
0
Without reading your code, I'm guessing that "I tried just adding it to the end of my command prompt line, but to no avail" means that you need to make your code aware of the command-line argument. Unless you do some fancy command-line processing, for which you would import optparse or argparse, try: import sys # then do something with sys.argv[-1] (i.e., the last argument)
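Since the error message mentions a required --input argument, the script is probably already using argparse. A minimal sketch of how such a script expects to be invoked (the script and file names are just placeholders):

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("--input", required=True, help="path to the input text file")
    args = parser.parse_args()

    print("Reading from", args.input)

It would then be run as: python name_of_script.py --input c:\thefilepath\totheinputfile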
2
0
0
0
I'm new to Python and I'm attempting to run a script provided to me that requires the name of a text file as input. I changed my path to include the Python directory, and my input in the command line - "python name_of_script.py" - is seemingly working. However, I'm getting the error: "the following arguments are required: --input". This makes sense, as I need this other text file for the program to run, but I don't know how to supply it on the command line, as I'm never prompted to enter any input. I tried just adding it to the end of my command prompt line, but to no avail. Does anybody know how this could be achieved? Thanks tons
Python script requires input in command line
0
0
1
0
0
352
24,684,316
2014-07-10T19:00:00.000
1
1
0
0
0
python,pdf,export,latex,pdflatex
0
24,684,691
0
3
0
false
1
0
Generate a LaTeX file.tex with a Python script (use raw strings so that backslash sequences such as \b in \begin are not interpreted as escapes): f = open("file.tex", 'w') f.write(r'\documentclass[12pt]{article}' + '\n') f.write(r'\usepackage{multicol}' + '\n') f.write('\n' + r'\begin{document}' + '\n\n') ... f.write(r'\end{document}') f.close() Then run pdflatex on the LaTeX file from the Python script as a subprocess: subprocess.call(['pdflatex', 'file.tex']) As an alternative to 1., you can generate a LaTeX template and just substitute the variable parts using Python regular expressions and string substitutions.
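A self-contained sketch of the whole round trip (write the .tex file, then call pdflatex); it assumes pdflatex is on the PATH and that a plot was previously saved as graph.png:

    import subprocess

    lines = [
        r"\documentclass[12pt]{article}",
        r"\usepackage{graphicx}",
        r"\begin{document}",
        r"\section*{Function: $\cos(\theta)$}",
        r"\includegraphics[width=\textwidth]{graph.png}",  # previously saved plot
        r"\end{document}",
    ]

    with open("report.tex", "w") as f:
        f.write("\n".join(lines))

    # Run pdflatex non-interactively; report.pdf appears in the current directory.
    subprocess.call(["pdflatex", "-interaction=nonstopmode", "report.tex"])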
1
0
0
0
I have a GUI program in Python which calculates graphs of certain functions. These functions are mathematical like say, cos(theta) etc. At present I save the graphs of these functions and compile them to PDF in Latex and write down the equation manually in Latex. But now I wish to simplify this process by creating a template in Latex that arranges, The Function Name, Graph, Equation and Table and complies them to a single PDF format with just a click. Can this be done? And how do I do it? Thank you.
Python Export Program to PDF using Latex format
0
0.066568
1
0
0
2,206
24,684,821
2014-07-10T19:32:00.000
0
0
0
0
0
python,django,rest,permissions,django-rest-framework
0
27,932,256
0
1
0
false
1
0
One of the arguments to has_permission is view, which has an attribute .action, which is one of the five "LCRUD" actions ("list"/"create"/"retrieve"/"update"/"destroy"). So I think you could use that to check, in has_permission, whether the action being performed is a list or a retrieve, and deny or allow it accordingly.
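A sketch of a permission class along those lines; the is_staff check is just an illustrative rule, not something prescribed by the framework:

    from rest_framework.permissions import BasePermission

    class RetrieveButNotList(BasePermission):
        def has_permission(self, request, view):
            # Deny the list action to ordinary users; everything else falls
            # through to has_object_permission for per-object checks.
            if getattr(view, "action", None) == "list":
                return request.user.is_staff
            return True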
1
1
0
0
I'm using Django Rest Framework and I'm having some trouble with permissions. I know how to use has_permission and has_object_permission, but I have a number of cases where someone needs to be able to access retrieve but not list--e.g., a user has access to their own profile, but not to the full list of them. The problem is, has_permission is always called before has_object_permission, so has_object_permission can only be more restrictive, not less. So far, the only way I've been able to do this is to have more permissive permissions and then override list() directly in the ViewSet and include the permission check in the logic, but I'd like to be able to store all of this logic in a Permissions class rather than in each individual viewset. Is there any way to do this? Right now I feel like I'm going to have to write a ViewSet metaclass to automatically apply permissions as I want to each viewset method, which isn't really something I want to do.
Django Rest Framework--deny access to list but not to retrieve
0
0
1
0
0
302
24,715,230
2014-07-12T16:54:00.000
1
0
0
0
0
python,scikit-learn,random-forest,one-hot-encoding
0
66,810,359
0
5
0
false
0
0
Maybe you can use 1~4 to replace these four colors; that is, put the number rather than the color name in that column. Then the column with numbers can be used in the models.
2
71
1
0
Say I have a categorical feature, color, which takes the values ['red', 'blue', 'green', 'orange'], and I want to use it to predict something in a random forest. If I one-hot encode it (i.e. I change it to four dummy variables), how do I tell sklearn that the four dummy variables are really one variable? Specifically, when sklearn is randomly selecting features to use at different nodes, it should either include the red, blue, green and orange dummies together, or it shouldn't include any of them. I've heard that there's no way to do this, but I'd imagine there must be a way to deal with categorical variables without arbitrarily coding them as numbers or something like that.
Can sklearn random forest directly handle categorical features?
0
0.039979
1
0
0
72,780
24,715,230
2014-07-12T16:54:00.000
16
0
0
0
0
python,scikit-learn,random-forest,one-hot-encoding
0
35,471,754
0
5
0
false
0
0
You have to make the categorical variable into a series of dummy variables. Yes, I know it's annoying and seems unnecessary, but that is how sklearn works. If you are using pandas, use pd.get_dummies; it works really well.
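For example, with a hypothetical DataFrame column named color:

    import pandas as pd

    df = pd.DataFrame({"color": ["red", "blue", "green", "orange", "red"]})

    # One column per category: color_red, color_blue, color_green, color_orange
    dummies = pd.get_dummies(df["color"], prefix="color")
    df = pd.concat([df.drop("color", axis=1), dummies], axis=1)
    print(df.head())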
2
71
1
0
Say I have a categorical feature, color, which takes the values ['red', 'blue', 'green', 'orange'], and I want to use it to predict something in a random forest. If I one-hot encode it (i.e. I change it to four dummy variables), how do I tell sklearn that the four dummy variables are really one variable? Specifically, when sklearn is randomly selecting features to use at different nodes, it should either include the red, blue, green and orange dummies together, or it shouldn't include any of them. I've heard that there's no way to do this, but I'd imagine there must be a way to deal with categorical variables without arbitrarily coding them as numbers or something like that.
Can sklearn random forest directly handle categorical features?
0
1
1
0
0
72,780
24,718,697
2014-07-13T01:08:00.000
1
0
0
0
0
python,apache-spark,pyspark
0
24,736,966
0
6
0
false
0
0
Personally I think just using a filter to get rid of this stuff is the easiest way. But per your comment I have another approach. Glom the RDD so each partition is an array (I'm assuming you have 1 file per partition, and each file has the offending row on top) and then just skip the first element (this is with the Scala API). data.glom().map(x => for (elem <- x.drop(1)) {/*do stuff*/}) // x is an array, so just skip the 0th index Keep in mind one of the big features of RDDs is that they are immutable, so naturally removing a row is a tricky thing to do. UPDATE: Better solution. rdd.mapPartitions(x => for (elem <- x.drop(1)) {/*do stuff*/}) Same as the glom but doesn't have the overhead of putting everything into an array, since x is an iterator in this case
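Two PySpark sketches of the same idea, assuming rdd is an existing RDD of text lines; which one fits depends on whether the header appears once globally or once per input file:

    from itertools import islice

    # Header appears only once, as the very first row of the whole RDD:
    # zipWithIndex gives a global row number, so drop row 0 only.
    no_header = (rdd.zipWithIndex()
                    .filter(lambda pair: pair[1] > 0)
                    .map(lambda pair: pair[0]))

    # Every input file (partition) starts with its own header row instead:
    # skip the first element of each partition, mirroring the Scala mapPartitions trick.
    no_headers = rdd.mapPartitionsWithIndex(lambda i, it: islice(it, 1, None))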
1
28
1
0
how do you drop rows from an RDD in PySpark? Particularly the first row, since that tends to contain column names in my datasets. From perusing the API, I can't seem to find an easy way to do this. Of course I could do this via Bash / HDFS, but I just want to know if this can be done from within PySpark.
PySpark Drop Rows
0
0.033321
1
0
0
49,327
24,729,427
2014-07-14T04:13:00.000
0
0
0
1
0
python,google-app-engine
1
24,765,014
0
1
0
false
1
0
I think I have found the answer to my own question. I have a small app I wrote to back up my stuff to Google Drive; this app appears to have an error in it that does not stop it from running but does cause it to create a file called C:\Usera\myname\Google. Therefore GAE cannot create a directory called C:\Usera\myname/Google nor a file called C:\Usera\myname/Google\google_appengine_launcher.ini. I deleted the file Google, made a directory called Google, ran GAE, saved preferences, and everything is working.
1
1
0
0
Just installed Google App Engine and am getting "could not save" errors. Specifically, if I go into Preferences I get: Could not save into preference file C:\Usera\myname/Google\google_appengine_launcher.ini: No such file or directory. So somehow I have a weird path, and I would like to know where and how to change this. I have searched but found nothing, and I have done a repair reinstall of GAE. I can find nothing in the registry for google_appengine_launcher.ini. I first saw the error when I created my first application, called hellowd: Parent Directory: C:\Users\myname\workspace Runtime 2.7 (PATH has this path) Port 8080 Admin port 8080 click Create Error: Could not save into project file C:\Users\myname/Google\google_appengine_launcher.ini: No such file or directory. Thanks
could not save preference file google-apps-engine
0
0
1
0
0
53
24,735,926
2014-07-14T11:49:00.000
0
0
1
0
0
python
0
24,736,026
0
4
0
false
0
0
You can split your string into a list using list1 = s.split() and then check whether each element is an integer or not.
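A small sketch of that loop, re-prompting until every part of the input is an integer:

    while True:
        parts = input("Enter numbers separated by spaces: ").split()  # raw_input() on Python 2
        try:
            numbers = [int(p) for p in parts]
            if numbers:          # also reject an empty line
                break
        except ValueError:
            pass
        print("Please enter integers only, e.g. 1 2 3")

    print(numbers)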
1
0
0
0
For example, if my input was "1 2 3", how do I check that each part is an integer and not anything else, and if there is something else, be able to input the string again so it's correct? Otherwise it won't move on.
In Python, If I split a string up, how do i check if each part of it is an integer
0
0
1
0
0
917
24,736,316
2014-07-14T12:13:00.000
0
0
1
0
0
python,installation,pip,package
0
24,736,486
0
7
0
false
0
0
pip freeze gives you all the installed packages. Assuming you know the folder: time.ctime(os.path.getctime(file)) should give you the creation time of a file, i.e. the date when the package was installed or updated.
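A rough sketch combining the two ideas: list what sits in site-packages together with each entry's creation time. Note that site.getsitepackages() is not available inside some virtualenvs, and the exact meaning of the timestamp varies per platform and filesystem:

    import os
    import site
    import time

    for sp in site.getsitepackages():
        if not os.path.isdir(sp):
            continue
        for name in sorted(os.listdir(sp)):
            path = os.path.join(sp, name)
            print(name, time.ctime(os.path.getctime(path)))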
1
41
0
0
I know how to see installed Python packages using pip, just use pip freeze. But is there any way to see the date and time when package is installed or updated with pip?
See when packages were installed / updated using pip
1
0
1
0
0
36,060
24,737,909
2014-07-14T13:43:00.000
-2
1
1
0
0
python
0
24,739,463
0
3
0
false
0
0
Don't. Python is not C++, and using patterns that worked there is silly in Python. In particular, Python is not a "Bondage and Domination" language, so phrases like "thereby strictly controlling creation" don't apply. "If you didn't want to instantiate a UsefulClass then why did you?" - me. If you can't trust yourself or your colleagues to read and follow the code's internal documentation, you're screwed regardless of the implementation language.
1
0
0
0
C++ programmer here. In Python, how do you make sure that a particular class (e.g. UsefulClass) can only be created through its related factory class (e.g. FactoryClass)? But, at the same time the public methods of UsefulClass are callable directly? In C++ this can be easily achieved by making the relevant methods of UsefulClass public, and by making its default constructor (and any other constructors) private. The related FactoryClass (which can be a "friend" of the UsefulClass) can return instances of UsefulClass and thereby strictly controlling creation, while allowing the user to directly call the public methods of UsefulClass. Thanks.
Only creating object through factory class in Python - factory class related
0
-0.132549
1
0
0
136
24,769,117
2014-07-15T22:18:00.000
160
0
1
0
1
python,intellij-idea
0
24,769,264
0
5
0
false
0
0
With the Python plugin installed: Navigate to File > Project Structure. Under the Project menu for Project SDK, select "New" and Select "Python SDK", then select "Local". Provided you have a Python SDK installed, the flow should be natural from there - navigate to the location your Python installation lives.
3
115
0
0
There is a tutorial in the IDEA docs on how to add a Python interpreter in PyCharm, which involves accessing the "Project Interpreter" page. Even after installing the Python plugin, I don't see that setting anywhere. Am I missing something obvious?
How do I configure a Python interpreter in IntelliJ IDEA with the PyCharm plugin?
0
1
1
0
0
74,300
24,769,117
2014-07-15T22:18:00.000
2
0
1
0
1
python,intellij-idea
0
44,602,198
0
5
0
false
0
0
Follow these steps: Open Settings (Ctrl + Alt + S). Click on Plugins. Find Browse Repositories and click it. Search for "python". Select Python SDK or PyCharm. Restart the IDE. Go to Project Structure. Select the Python SDK in Projects, or create a new project with the Python SDK.
3
115
0
0
There is a tutorial in the IDEA docs on how to add a Python interpreter in PyCharm, which involves accessing the "Project Interpreter" page. Even after installing the Python plugin, I don't see that setting anywhere. Am I missing something obvious?
How do I configure a Python interpreter in IntelliJ IDEA with the PyCharm plugin?
0
0.07983
1
0
0
74,300
24,769,117
2014-07-15T22:18:00.000
3
0
1
0
1
python,intellij-idea
0
56,439,498
0
5
0
false
0
0
If you have multiple modules in your project, with different languages, you can set the interpreter in the following way: File -> Project Structure... Select Modules in the list on the left Select the Python module in the list of modules On the right-hand side, either choose an existing Python SDK from the dropdown list, or click on the New... button to create either a virtualenv, or create a new Python SDK from a Python installation on your system.
3
115
0
0
There is a tutorial in the IDEA docs on how to add a Python interpreter in PyCharm, which involves accessing the "Project Interpreter" page. Even after installing the Python plugin, I don't see that setting anywhere. Am I missing something obvious?
How do I configure a Python interpreter in IntelliJ IDEA with the PyCharm plugin?
0
0.119427
1
0
0
74,300
24,769,574
2014-07-15T23:02:00.000
1
1
0
0
0
python,cloud
0
24,769,619
0
2
0
false
1
0
Here are two approaches to this problem, both of which require shell access to the cloud server. Write the program to handle the scheduling itself. For example, sleep and wake up every few milliseconds to perform the necessary checks. You would then transfer this file to the server using a tool like scp, log in, and start it in the background using something like python myscript.py &. Alternatively, write the program to do a single run only, and use the scheduling tool cron to start it up every minute of the day.
2
1
0
0
I'm fairly competent with Python but I've never 'uploaded code' to a server before and have it run automatically. I'm working on a project that would require some code to be running 24/7. At certain points of the day, if a criteria is met, a process is started. For example: a database may contain records of what time each user wants to receive a daily newsletter (for some subjective reason) - the code would at the right time of day send the newsletter to the correct person. But of course, all of this is running out on a Cloud server. Any help would be appreciated - even correcting my entire formulation of the problem! If you know how to do this in any other language - please reply with your solutions! Thanks!
Uploading code to server and run automatically
0
0.099668
1
0
0
262
24,769,574
2014-07-15T23:02:00.000
1
1
0
0
0
python,cloud
0
24,983,741
0
2
0
false
1
0
Took a few days but I finally found a way to work this out. The most practical way to get this working is to use a VPS that runs the script. The confusing part of my code was that each user would activate the script at a different time for themselves. To handle this, say at midnight, the VPS runs the Python script (using scheduled tasking or something similar). The script then pulls the times from a database and processes the code at the times outlined. Thanks for your time anyways!
2
1
0
0
I'm fairly competent with Python but I've never 'uploaded code' to a server before and have it run automatically. I'm working on a project that would require some code to be running 24/7. At certain points of the day, if a criteria is met, a process is started. For example: a database may contain records of what time each user wants to receive a daily newsletter (for some subjective reason) - the code would at the right time of day send the newsletter to the correct person. But of course, all of this is running out on a Cloud server. Any help would be appreciated - even correcting my entire formulation of the problem! If you know how to do this in any other language - please reply with your solutions! Thanks!
Uploading code to server and run automatically
0
0.099668
1
0
0
262
24,785,121
2014-07-16T15:45:00.000
0
0
1
0
0
python,pyqt,updating,configparser
0
24,871,124
0
2
0
false
0
1
Updating the state of your application may not be a trivial thing if you are somewhere in the middle. Just an example: Your app = A car You launch your app = You start your car You set in the preferences the variable type_tyre to Winter Your running app still has type_tyre equal to Summer You attempt to change tyres while driving on the highway Crash And this, while changing the tyres before starting the car might be a trivial and safe thing to do. So you have to write a routine that adjusts the state of your application according to the change in preferences, but this is in general different from initializing the app and depends on the current state of the app. But otherwise just write such a routine and call it.
1
0
0
0
I have terrible doubts regarding the Python config file approach. I am creating a program with a GUI (PyQt). The program loads some settings from a .cfg file using the configparser module, and the user can edit these settings from the GUI with the user preferences widget. When the preferences widget is closed the .cfg file is saved, but I don't know how to update the rest of the program using the updated settings values. I will try to explain with an example: I launch the program. It creates a ConfigParser() named config and reads settings.cfg. The program retrieves the value of the option 'clock_speed' (let's say it is 5) from config and sets clkSpd = 5. I click on Edit -> Preferences and change the clock speed via a QSpinBox to 8. I close the Preferences widget, settings.cfg is saved, and the value of the option 'clock_speed' is now 8. BUT in its module, clkSpd is still 5. I know I can just load the settings, edit them, save them and reload all settings each time I close the Preferences window. It's simple but not very beautiful. Is there a classic and efficient approach for config files in read/write mode? Thanks in advance.
Python - update configparser between modules
0
0
1
0
0
699
24,793,636
2014-07-17T01:51:00.000
1
0
0
0
0
python,graph,matplotlib,geometry,intersection
0
24,797,554
0
1
0
false
0
0
Maybe you should try something more analytical? It should not be very difficult: Find the circle pairs whose distance is less than the sum of their radii; they intersect. Calculate the intersection angles by simple trigonometry. Draw a polygon (path) by using a suitably small delta angle in both cases (half of the polygon comes from one circle, the other half from the other circle). Collect the paths into a PathCollection. None of the steps should be very long or difficult.
1
0
1
0
I'm working on a problem that involves creating a graph which shows the areas of intersection of three or more circles (each circle is the same size). I have many sets of circles, each set containing at least three circles. I need to graph the area common to the interior of each and every circle in the set, if it even exists. If there is no area where all the circles within the set intersect, I have nothing to graph. So the final product is a graph with little "slivers" of intersecting circles all over. I already have a solution for this written in Python with matplotlib, but it doesn't perform very well. This wasn't an issue before, but now I need to apply it to a larger data set so I need a better solution. My current approach is basically a test-and-check brute force method: I check individual points within an area to see if they are in that common intersection (by checking distance from the point to the center of each circle). If the point meets that criteria, I graph it and move on. Otherwise, I just don't graph it and move on. So it works, but it takes forever. Just to clarify, I don't scan through every point in the entire plane for each set of circles. First, I narrow my "search" area to a rectangle tightly bounded around the first two (arbitrarily chosen) circles in the set, and then test-and-check each point in there. I was thinking it would be nice if there were a way for me to graph each circle in a set (say there 5 circles in the set), each with an alpha value of 0.1. Then, I could go back through and only keep the areas with an alpha value of 0.5, because that's the area where all 5 circles intersect, which is all I want. I can't figure out how to implement this using matplotlib, or using anything else, for that matter, without resorting to the same brute force test-and-check strategy. I'm also familiar with Java and C++, if anyone has a good idea involving those languages. Thank you!
How to find and graph the intersection of 3+ circles with Matplotlib
0
0.197375
1
0
0
1,241
24,796,339
2014-07-17T06:29:00.000
1
0
1
0
0
python,class
0
24,797,103
0
2
0
false
0
0
A program often has to maintain state and share resources between functions (command-line options, DB connection, etc). When that's the case, a class is usually a better solution (with respect to readability, testability and overall maintainability) than having to pass the whole context to every function or (worse) using global state.
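A tiny illustration of that point: one object carries the shared context instead of every function taking the same extra parameters. The database schema and option here are made up for the example:

    import sqlite3

    class App(object):
        def __init__(self, db_path, verbose=False):
            self.conn = sqlite3.connect(db_path)   # shared resource
            self.verbose = verbose                 # shared option
            self.conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT)")

        def add_user(self, name):
            if self.verbose:
                print("adding", name)
            self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
            self.conn.commit()

        def count_users(self):
            return self.conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]

    app = App(":memory:", verbose=True)
    app.add_user("alice")
    print(app.count_users())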
1
3
0
0
Sometimes, when looking at Python code examples, I'll come across one where the whole program is contained within its own class, and almost every function of the program is actually a method of that class apart from a 'main' function. Because it's a fairly new concept to me, I can't easily find an example even though I've seen it before, so I hope someone understands what I am referring to. I know how classes can be used outside of the rest of a program's functions, but what is the advantage of using them in this way compared with having functions on their own? Also, can/should a separate module with no function calls be structured using a class in this way?
What is the benefit of having a whole program contained in a class?
0
0.099668
1
0
0
227
24,805,002
2014-07-17T13:31:00.000
0
0
1
0
0
python,probability,bayesian,pymc,mcmc
0
24,833,321
0
1
0
false
0
0
I recommend following the PyMC user's guide. It explicitly shows you how to specify your model (including priors). With MCMC, you end up getting marginals of all posterior values, so you don't need to know how to marginalize over priors. The Dirichlet is often used as a prior to multinomial probabilities in Bayesian models. The values of the Dirichlet parameters can be used to encode prior information, typically in terms of a notional number of prior events corresponding to each element of the multinomial. For example, a Dirichlet with a vector of ones as the parameters is just a generalization of a Beta(1,1) prior to multinomial quantities.
1
0
0
1
I am going through the tutorial about the Markov chain Monte Carlo process with the pymc library. I am also a newbie with pymc and am trying to establish my own MCMC process. I have faced a couple of questions that I couldn't find proper answers to in the pymc tutorial: First, how can we define priors with pymc and then marginalise over the priors in the chain process? My second question is about the Dirichlet distribution: how is this distribution related to the prior information in MCMC, and how should it be defined?
Defining priors and marginalizing over priors in pymc
0
0
1
0
0
513
24,806,675
2014-07-17T14:45:00.000
1
0
0
1
0
python,google-app-engine,email
0
24,814,266
0
1
0
true
1
0
You can set up a CRON job to run every few minutes and process your email queue. It will require an endpoint where you can send a POST request, but you can use a secret token (like just any random guid) to verify the request is legitimate before you send the email.
1
0
0
0
I was wondering how I would go about emailing user emails stored in a python datastore. Should I create a sort of maintenance page where I can log in as an administrator and then send an email or is there a way for me to execute a python script without needing a handler pointing to a separate webpage so I don't have to worry about the page being discovered and exploited.
Google App Engine send batch email
1
1.2
1
0
0
88
24,806,713
2014-07-17T14:46:00.000
3
0
0
0
0
python,tkinter
0
24,814,914
0
2
0
true
0
1
No, there is no way to change the defaults. You can easily write your own grid function to automatically configure the weight of each column. You could do this by subclassing Frame, for instance.
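A sketch of such a subclass: a small helper method grids a child and gives its column a weight of 1 in one step, so the per-frame columnconfigure calls disappear. The class and method names are just illustrative:

    import tkinter as tk   # "Tkinter" on Python 2

    class StretchyFrame(tk.Frame):
        def grid_stretchy(self, widget, row, column, **kwargs):
            """Grid a child widget and make its column expandable."""
            widget.grid(row=row, column=column, sticky="ew", **kwargs)
            self.columnconfigure(column, weight=1)

    root = tk.Tk()
    frame = StretchyFrame(root)
    frame.pack(fill="both", expand=True)
    frame.grid_stretchy(tk.Label(frame, text="left"), row=0, column=0)
    frame.grid_stretchy(tk.Label(frame, text="right"), row=0, column=1)
    root.mainloop()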
1
0
0
0
I have an application made up of Frames, Frames in Frames and Labels in Frames. There is quite a lot of them and I am looking for a way to modify some of the default values. I am particularly interested in modifying .columnconfigure() since I call .columnconfigure(0, weight=1) on each of the columns, in each frame. This does not help with the code cleanness. Is there a way to set this behavior (ability to expand) globally?
how to change default values for .columnconfigure() in tkinter?
0
1.2
1
0
0
207
24,814,724
2014-07-17T22:43:00.000
2
0
1
1
0
python,python-3.x,path
0
24,814,742
0
2
0
false
0
0
If you want to make a file specifically open with a version, you can start the file with #! python3.x, the x being the version you want. If you want to be able to right-click and edit with that version, you'll need to do some tweaking in the registry.
1
1
0
0
Running Windows 7. 2.7, 3.3 and 3.4 installed. I just installed Python 3.3 for a recent project. In the command prompt, python launches 3.4, and py launches 3.3. I can access 3.3 using the 3.3 version of IDLE, but how can I access it via the command prompt? Is there a shortcut like py that I can use? Do I need to define this on my own like an alias? Or is the best route to somehow change the path to temporarily make 3.3 the default? Just downloaded virtualenv, maybe that might be part of the solution.
Python: how to access 3.3 if 3.4 is the default?
0
0.197375
1
0
0
238
24,816,237
2014-07-18T02:02:00.000
1
0
1
0
0
python,ipython,ipython-notebook
0
68,669,366
0
6
0
false
0
0
If I am not wrong, you mean you just need to clear the output part of a cell, which could be a valid output or some error which you don't want to look at anymore. If yes, just go to the top ribbon and select Cell > Current Outputs > Clear.
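For the programmatic case in the question (overwriting one line of output from a loop), the IPython display machinery can also be called from code, for example:

    import time
    from IPython.display import clear_output

    for i in range(10):
        clear_output(wait=True)      # replace the previous output of this cell
        print("latest reading:", i)
        time.sleep(0.5)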
1
192
0
0
In an IPython notebook, I have a while loop that listens to a serial port and prints the received data in real time. What I want to achieve is to only show the latest received data (i.e. only one line showing the most recent data, with no scrolling in the cell output area). What I need (I think) is to clear the old cell output when I receive new data, and then print the new data. I am wondering how I can clear the old output programmatically?
ipython notebook clear cell output in code
0
0.033321
1
0
0
263,058
24,830,313
2014-07-18T17:02:00.000
2
0
0
0
0
android,python,kivy
0
24,830,380
0
1
0
true
0
1
On mobile, your app should automatically fill the phone screen. You don't need to worry about it. On desktop, you can use the --size=WxH option to test a specific screen size, or use the screen module (-m screen:nexus7 for example - run kivy with -m screen to see the available options). No. All mouse/touchscreen interactions are considered touches in Kivy. So using on_touch_down/on_touch_move/on_touch_up will work regardless of the input device. The only difference is that with touchscreen you could have multi-touch - but if you write your app assuming single-touch it will work the same on both mobile and desktop.
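A small sketch of both points: forcing a window size on the desktop for testing, and handling presses via on_touch_down, which fires for mouse clicks and finger touches alike:

    from kivy.app import App
    from kivy.core.window import Window
    from kivy.uix.widget import Widget

    Window.size = (480, 800)   # desktop-only testing size; the phone screen is used as-is

    class TouchArea(Widget):
        def on_touch_down(self, touch):
            print("pressed at", touch.pos)   # same event for click and touch
            return True

    class DemoApp(App):
        def build(self):
            return TouchArea()

    if __name__ == "__main__":
        DemoApp().run()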
1
1
0
0
I have two questions that I can not answer to myself: How can I change the size of my window, if I do not know the exact size of the phone screen? I.e. my aim is to fit all screen sizes. Is there any difference between clicking with mouse and touching with fingers in the code? If I write code for clicking, will it work with touch?
Kivy: how to change window size properties and the difference between click and touch
0
1.2
1
0
0
399
24,836,398
2014-07-19T03:20:00.000
1
0
1
0
0
eclipse,ipython,pydev
0
25,066,436
0
1
0
false
0
0
There's currently no option to do that in the UI. You can do that in a hackish way by manually opening: plugins\org.python.pydev\pysrc\pydev_ipython_console.py in your Eclipse installation and uncommenting the 'raise ImportError()' in the top of the file :) Now, I'm a bit curious on why you'd prefer the PyDev version instead of the IPython version in this case...
1
2
0
0
The most recent releases of the PyDev IDE for Eclipse come with IPython 'embedded' in the interactive console. I'm just wondering if there is a way to disable this option and let PyDev use a regular Python interactive console without uninstalling IPython. I know that if IPython is not installed PyDev will use a regular Python interactive console, but I think there must be a way of doing it without getting rid of IPython. If somebody knows how to do this, please advise. Thanks.
Is it possible to disable IPython from the Eclipse/PyDev console?
1
0.197375
1
0
0
566
24,875,008
2014-07-21T21:21:00.000
0
0
0
0
1
python,neural-network,pmml
0
26,639,275
0
1
0
false
0
0
Finally I found my own solution. I wrote my own PMML parser and scorer. PMML is very much the same as XML, so it's easy to build and retrieve fields accordingly. If anyone needs more information please comment below. Thanks, Raghu.
1
1
1
0
I have a model (neural network) in Python which I want to convert into a PMML file. I have tried the following: 1.) py2pmml -> not able to find the source code for this. 2.) in R -> PMML in R works fine, but my model is in Python (I can't run the data in R to generate the same model there). Does not work for my dataset. 3.) Now I am trying to use Augustus to make the PMML file, but Augustus has examples of using an already built PMML file, not of how to make one. I am not able to find proper examples on how to use Augustus in Python to customize the model. Any suggestion will be good. Thanks in advance. GGR
Produce a PMML file for the Nnet model in python
0
0
1
0
0
382
24,875,955
2014-07-21T22:33:00.000
0
1
0
0
0
python,raspberry-pi,gpio
0
24,920,035
0
3
0
false
0
0
Make sure your script runs fine from the command line first. Also, if you are dealing with the GPIO pins, make sure you are running your script with the proper permissions. I know when I access the GPIO pins on my pi, I need to use root/sudo to access them.
1
1
0
0
I have been looking for a few weeks now at how to make a .py file start on startup. I have had no luck with any of the methods; does anyone have any ideas? The file is reasonably small and will need GPIO input from a PIR movement sensor.
Autostart on raspberry pi
0
0
1
0
0
1,105
24,890,579
2014-07-22T14:47:00.000
0
1
0
0
0
python,session,selenium,webdriver,python-unittest
0
24,891,062
0
2
0
false
0
0
The simplest way to achieve this is not to use the Setup() and TearDown() methods, or more specifically not to create a new instance of the WebDriver object at that start or each test case, and not to use the Quit() method at the end of each test case. In your first test case create a new instance of the WebDriver object and use this object for all of your test cases. At the end of your last test case use the Quit() method to close the browser.
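One conventional way to express this with unittest is setUpClass/tearDownClass, so the browser is opened once for the whole TestCase and closed after the last test; a minimal sketch (the URLs are placeholders):

    import unittest
    from selenium import webdriver

    class SiteTests(unittest.TestCase):
        @classmethod
        def setUpClass(cls):
            cls.driver = webdriver.Firefox()   # opened once for the whole class

        @classmethod
        def tearDownClass(cls):
            cls.driver.quit()                  # closed after the last test

        def test_home_page(self):
            self.driver.get("http://example.com")
            self.assertIn("Example", self.driver.title)

        def test_second_page(self):
            self.driver.get("http://example.com/")
            self.assertTrue(self.driver.current_url.startswith("http"))

    if __name__ == "__main__":
        unittest.main()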
1
0
0
0
I want to establish one session at the start of the suite. That session should stay alive for a long time, across multiple test cases, and should end at the very last one. The session should be implemented with Selenium WebDriver using the unittest framework in Python. Please, can anyone suggest any methods or how to implement it?
how to establish a session thourghout the selenium webdriver suite by using python in firefox
0
0
1
0
1
739
24,890,739
2014-07-22T14:54:00.000
1
0
0
0
0
python,url,client-server,client,webclient
0
24,893,137
0
2
0
false
1
0
The "action" part of a form is an url, and If you don't specify the scheme://host:port part of the URL, the client will resolve it has the current page one. IOW: just put the path part of your script's URL and you'll be fine. FWIW hardcoding the scheme://host:port of your URLs is an antipattern, as you just found out.
2
0
0
0
I used to create web apps on the same computer, but if the server and the client are not on the same computer, how can we access the web page? I mean, for example, I have an HTML form and an "ok" button: if the server and the client are on the same computer, in action=" " we put localhost/file.py, but if the server and the client are not on the same computer, how do we do this? Because the client can't have localhost in his web browser (URL).
make a Client-Server application
0
0.099668
1
0
1
136
24,890,739
2014-07-22T14:54:00.000
0
0
0
0
0
python,url,client-server,client,webclient
0
24,907,187
0
2
0
true
1
0
Your script is supposed to be run as a CGI script by a web server, which sets environment variables like REMOTE_ADDR, REQUEST_METHOD ... You are running the script by yourself, and these environment variables are not available. That's why you get the KeyError.
2
0
0
0
I used to create web apps on the same computer, but if the server and the client are not on the same computer, how can we access the web page? I mean, for example, I have an HTML form and an "ok" button: if the server and the client are on the same computer, in action=" " we put localhost/file.py, but if the server and the client are not on the same computer, how do we do this? Because the client can't have localhost in his web browser (URL).
make a Client-Server application
0
1.2
1
0
1
136
24,908,188
2014-07-23T10:36:00.000
0
0
0
0
1
python,git,version-control,branching-and-merging
0
24,913,304
1
1
0
true
0
0
IMHO you should probably commit on your master branch, then rebase your upgrade branch on it; it will make more sense in your repository history. If those commits work in both environments, you should use a different branch based on the master one, so you can work on the newer version of Python, then merge it into master, then rebase your upgrade branch.
1
0
0
0
I am working on a python app that uses python 2.4, postgres 8.2 and old versions of pygresql, xlrd, etc. Because of this it is quite a pain to use, and has to be used in a windows xp VM. There are other problems such as the version of xlrd doesn't support .xlsx files, but the new version of xlrd doesn't work with python 2.4, understandably. I recently made a branch called 'upgrade' where I started to try to get it working with up to date versions of the libraries (python 2.7), for instance unicode handling has changed a bit so required some changes here and there. Most of the work I'm doing on the app should work in both environments, but it's nicer to work on the upgrade branch because it doesn't need to run in a vm. So my question is how can I make commits using the upgrade branch but then apply them to the master branch so they will still apply to the old version of the software the client is using? I realise I can cherry pick commits off the upgrade branch onto master but it seems a bit wrong, having commits in both branches. I'm thinking maybe I should be rebasing the upgrade branch so it is always branching off head after the most recent commits, but then that would mean committing to the master branch which means working in a VM. Hope this makes some kind of sense, I'll try and do some diagrams if not.
Managing a different python version as a branch in git
0
1.2
1
1
0
77
24,922,174
2014-07-23T22:34:00.000
5
0
1
1
0
python,c++,linux
0
24,922,209
0
2
1
false
0
0
It is not possible to ignore SIGKILL or handle it in any way. From man sigaction: The sa_mask field specified in act is not allowed to block SIGKILL or SIGSTOP. Any attempt to do so will be silently ignored.
1
6
0
0
I am trying to figure out how to get a process to ignore SIGKILL. The way I understand it, this isn't normally possible. My idea is to get a process into the 'D' state permanently. I want to do this for testing purposes (the corner case isn't really reproducible). I'm not sure this is possible programatically (I don't want to go damage hardware). I'm working in C++ and Python, but any language should be fine. I have root access. I don't have any code to show because I don't know how to get started with this, or if it's even possible. Could I possibly set up a bad NFS and try reading from it? Apologies in advance if this is a duplicate question; I didn't find anyone else trying to induce the D state. Many thanks.
How to ignore SIGKILL or force a process into 'D' sleep state?
1
0.462117
1
0
0
2,985
24,935,230
2014-07-24T13:33:00.000
0
0
0
0
0
python,numpy,nlopt,abaqus
0
25,251,918
0
1
0
false
0
0
I have similar problems. As an (annoying) workaround I usually write out the important data to text files using the regular Python. Afterwards, using a bash script, I start a second Python (a different version) to further analyse the data (matplotlib etc.).
1
2
1
0
I want to run an external library of python called NLopt within Abaqus through python. The issue is that the NLopt I found is compiled against the latest release of Numpy, i.e. 1.9, whereas Abaqus 6.13-2 is compiled against Numpy 1.4. I tried to replace the Numpy folder under the site-packages under the Abaqus installation folder with the respective one of version 1.9 that I created externally through an installation of Numpy 1.9 over Python 2.6 (version that Abaqus uses). Abaqus couldn't even start so I guess that such approach is incorrect. Are there any suggestions on how to overcome such issue? Thanks guys
How to overcome version incompatibility with Abaqus and Numpy (Python's library)?
0
0
1
0
0
318
24,944,626
2014-07-24T21:51:00.000
2
1
1
0
1
python,python-3.x
0
24,944,710
0
2
0
false
0
0
They do two different things; compare bytes(1234) with struct.pack("!H", 1234). The first just provides a bytes object with 1,234 null bytes; the second provides a two-byte string with the (big-endian) value of the integer. (Edit: struck out the irrelevant Python 2 definition of bytes(1234), which would have been a 4-byte string representation of the number.)
2
3
0
0
I'm just curious here, but I have been using bytes() to convert things to bytes ever since I learned Python. It wasn't until recently that I saw struct.pack(). I didn't bother learning how to use it because I thought it did essentially the same thing as bytes(). But it appears many people prefer to use struct.pack(). Why? What are the advantages of one over the other?
Python-bytes() vs struct.pack()
0
0.197375
1
0
0
1,956
24,944,626
2014-07-24T21:51:00.000
3
1
1
0
1
python,python-3.x
0
24,944,749
0
2
0
true
0
0
bytes() does literally what the name implies: Return a new "bytes" object, which is an immutable sequence of integers in the range 0 <= x < 256. struct.pack() does something very different: This module performs conversions between Python values and C structs represented as Python strings. While for some inputs these might be equivalent, they are not at all the same operation. struct.pack() is essentially producing a byte string that represents a POD C struct in memory. It's useful for serializing/deserializing data.
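A quick interpreter session (Python 3) makes the difference concrete:

    import struct

    print(len(bytes(1234)))                    # 1234 -- a bytes object of 1234 zero bytes
    print(struct.pack("!H", 1234))             # b'\x04\xd2' -- the 16-bit big-endian value
    print(struct.unpack("!H", b"\x04\xd2"))    # (1234,) -- and back again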
2
3
0
0
I'm just curious here, but I have been using bytes() to convert things to bytes ever since I learned Python. It wasn't until recently that I saw struct.pack(). I didn't bother learning how to use it because I thought it did essentially the same thing as bytes(). But it appears many people prefer to use struct.pack(). Why? What are the advantages of one over the other?
Python-bytes() vs struct.pack()
0
1.2
1
0
0
1,956
24,966,984
2014-07-26T02:41:00.000
3
0
0
0
0
python,scikit-learn
0
25,012,484
0
2
0
false
0
0
There is no way around finding out which possible values your categorical features can take, which probably implies that you have to go through your data fully once in order to obtain a list of unique values of your categorical variables. After that it is a matter of transforming your categorical variables to integer values and setting the n_values= kwarg in OneHotEncoder to an array corresponding to the number of different values each variable can take.
1
2
1
0
I have a large dataset which I plan to do logistic regression on. It has lots of categorical variables, each having thousands of features which I am planning to use one hot encoding on. I will need to deal with the data in small batches. My question is how to make sure that one hot encoding sees all the features of each categorical variable during the first run?
One-hot encoding of large dataset with scikit-learn
0
0.291313
1
0
0
3,004
24,973,568
2014-07-26T17:19:00.000
0
0
1
0
0
python,django,eclipse,interpreter
1
24,973,729
0
1
0
false
1
0
On the menu bar go to Window -> Preferences -> PyDev -> Interpreters -> Python. Remove the interpreter and click on Quick Auto-Config. That should do the trick. Make sure Django is installed first.
1
1
0
0
I'm an eclipse noob. After adding PyDev to eclipse, I try to create a "PyDev Django Project", but and I get the "Django not found" error. I heard that you have to remove the python interpreter from eclipse, then add it again. But I don't know how to do that. Can someone show me how to remove/add the python interpreter in eclipse? It is greatly appreciated. Brent.
How do I remove/add the python interpreter from eclipse?
0
0
1
0
0
1,345
24,979,640
2014-07-27T09:28:00.000
1
1
0
0
0
python,processing,paperjs,codea,pythonista
0
38,641,689
0
1
0
false
0
0
The ui module actually includes a lot of vector drawing functions, inside a ui.ImageContext. ui.ImageContext is a thin wrapper around part of one of the Objective-C APIs (maybe CALayer?) The drawing methods are designed to operate inside the draw method of a custom view class, but you can present these things in other contexts using a UIImageContext, from which you can get a static image.
1
4
0
0
I've taken to creative coding on my iPad and iPhone using Codea, Procoding, and Pythonista. I really love the paper.js Javascript library, and I'm wondering how I might have the functionality that I find in paper.js when writing in Python. Specifically, I'd love to have the vector math and path manipulation that paper.js affords. Things like finding the intersection of two paths or binding events to paths (on click, mouse move, etc). There's an ImagePath module provided by Pythonista that does some path stuff but it's not as robust as paper.js (it seems). Any ideas?
A paperjs-equivalent for python (specifically, Pythonista for iOS)?
0
0.197375
1
0
0
901
24,980,103
2014-07-27T10:39:00.000
0
1
0
0
0
python,google-app-engine,twitter,webapp2
0
25,003,619
0
1
0
false
1
0
Are you talking about being logged in to Twitter.com or your app? If you have received oAuth access tokens by authenticating an app, then logging out of twitter.com won't 'log you out' of any apps, the tokens will remain valid until the user revokes the access.
1
0
0
0
I have managed to use oauth authentication and add a Sign in with Twitter functionality to a Google App Engine web app. How should I verify, during the site navigation, if the user is still logged in Twitter?
Sign in with Twitter: how to verify the current user is still logged in
0
0
1
0
0
36
24,981,184
2014-07-27T12:56:00.000
20
0
0
0
0
python,flask,filenames
0
24,982,070
0
2
0
true
1
0
Found the answer. request.files['upload'].filename gives the file name and extension in flask
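For completeness, a small sketch of splitting that filename into name and extension inside a Flask view; the route and field names follow the question:

    import os
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/upload", methods=["POST"])
    def upload():
        f = request.files["upload"]
        name, ext = os.path.splitext(f.filename)   # e.g. ("picture", ".jpg")
        return "name=%s ext=%s" % (name, ext)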
1
6
0
0
Let's say that I have <input name="upload" type="file"> and I am uploading picture.jpg. The question is: how can I get the file name + extension? In other words, what is the correct expression - request.files.filename or request.upload.filename?
get input file name and file extention using flask
0
1.2
1
0
0
6,570
24,993,048
2014-07-28T10:16:00.000
2
0
0
0
0
javascript,python,angularjs,client-server
0
24,993,506
0
2
0
true
1
0
It doesn't have much to do with Python, really. Your javascript code is executed on the client's brower, and all it can do is issuing HTTP requests (synchronous or asynchronous). At this point which webserver / technology / language is used to handle the HTTP request is totally irrelevant. So, from the client javascript code POV, you are not "calling a Python function", you are sending an HTTP request and handling the HTTP response. If your web host doesn't let you run django (or any wsgi-compliant script) then you'll probably have to either use plain CGI (warning: very primitive techno) or migrate to PHP (no comment). Or find another hosting that doesn't live in the past ;)
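A minimal sketch of the plain-CGI route mentioned above: the JavaScript GET/POST hits a URL that the web server maps to this script, and whatever the script writes to stdout becomes the HTTP response. The field name "name" is hypothetical, and the style matches a 2014-era shared host:

    #!/usr/bin/env python
    import cgi
    import json
    import sys

    form = cgi.FieldStorage()
    name = form.getfirst("name", "")   # e.g. posted from the Angular controller

    sys.stdout.write("Content-Type: application/json\r\n\r\n")
    sys.stdout.write(json.dumps({"created": True, "employee": name}))

The AngularJS side then simply does an $http.post to that script's URL; there is no way to call def CreateEmployee(...) directly, only to route a request to code that calls it.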
1
0
0
0
This question has been asked numerous times, I know, and I'm sorry if out of ignorance I didn't get the answers. I have a hosting plan which doesn't allow me to install Django, which was really nice for calling a REST API easily with the routing settings. What I want is to be able to call a Python function from JavaScript code doing a GET/POST (I'm using AngularJS, but it would be the same making an AJAX GET/POST). Let's say I have a js controller 'Employee' and a view 'CreateEmployee'. From my JavaScript view, I can call my CreateEmployee() on the js controller; now my question is, how can I call a specific function (let's say) def CreateEmployee(params...) in my .py file? All I found is making a GET/POST on my .py file, but I didn't find how to invoke a specific function. I probably don't get the Python and client/server communication paradigm; I've been coding in ASP.NET WebForms for a long time, and since I can't use frameworks like Django I'm stuck. Thanks
Call python FUNCTION from javascript
0
1.2
1
0
0
4,050
24,997,946
2014-07-28T14:52:00.000
0
0
0
0
1
mysql,python-2.7,unicode,sqlalchemy,cherrypy
0
25,016,312
0
1
0
false
0
0
SQLAlchemy provides Unicode or UnicodeText for your purposes. Also don't forget about u'text'
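For example, in a declarative model (the table and column names are made up):

    from sqlalchemy import Column, Integer, Unicode, UnicodeText
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Comment(Base):
        __tablename__ = "comments"
        id = Column(Integer, primary_key=True)
        title = Column(Unicode(200))    # short unicode string
        body = Column(UnicodeText)      # unbounded unicode text

    # values are then passed around as unicode objects, e.g. Comment(title=u"héllo")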
1
0
0
0
I am using CherryPy along with sqlalchemy-mysql as the backend. I would like to know the ways of dealing with Unicode strings in a CherryPy web application. One brute-force way would be to convert all strings coming in as parameters into Unicode (and then encode them to UTF-8) before storing them in the database. But I was wondering if there is any standard way of handling Unicode characters in a web application. I tried CherryPy's tools.encode but it doesn't seem to work for me (maybe I haven't understood it properly yet). Or maybe there are standard Python libraries to handle Unicode which I could just import and use. What ways should I look for?
how to handle UNICODE characters in cherrypy-sqlalchemy-mysql application?
1
0
1
1
0
153
25,001,824
2014-07-28T18:27:00.000
1
0
0
1
0
python,linux,bash,gdb,named-pipes
0
25,047,902
0
1
0
true
0
0
My recommendation is not to do this. Instead there are two more supportable ways to go: Write your code in Python directly in gdb. Gdb has been extensible in Python for several years now. Use the gdb MI ("Machine Interface") approach. There are libraries available to parse this already (not sure if there is one in Python but I assume so). This is better than parsing gdb's command-line output because some pains are taken to avoid gratuitous breakage -- this is the preferred way for programs to interact with gdb.
1
0
0
0
Here's a general example of what I need to do: For example, I would initiate a back trace by sending the command "bt" to GDB from the program. Then I would search for a word such as "pardrivr" and get the line number associated with it by using regular expressions. Then I would input "f [line_number_of_pardriver]" into GDB. This process would be repeated until the correct information is eventually extracted. I want to use named pipes in bash or python to accomplish this. Could someone please provide a simple example of how to do this?
Use named pipes to send input to program based on output
0
1.2
1
0
0
289
25,003,108
2014-07-28T19:45:00.000
0
0
0
0
1
python,ajax,django
0
25,031,586
0
1
0
true
1
0
Turns out it was a "feature" of the client-side AJAX package we were using (flow.js) and we just had to increase chunkSize.
1
3
0
0
I'm making a Django app in which a user can upload a file (an image) using AJAX. While developing locally, I saw that PIL, which I used to process the image after upload, had a bug. After investigating I found out it's because PIL is getting the file data cut off. It's only getting the first 1MB of the file, which is why it's failing. (The file is around 3MB.) Why could this be, and how can I solve it? My immediate suspicion is that runserver, which I use locally, caps AJAX uploads for some reason. But I can't be sure. And if it does, I don't know how to make it stop. Can anyone help?
Django cutting off uploaded file
0
1.2
1
0
0
86
25,005,943
2014-07-28T23:28:00.000
0
0
1
0
0
python,windows
0
25,006,282
0
3
0
false
0
0
Insert input() on the last line. It will make the program wait for input, and while no input is given the window will stay open. When you press any key and then Enter, it will close.
1
1
0
0
I am new to all programming and I just started to get interested in learning how to program. So to do so I started with what most people consider the easiest language: Python. The problem I am having right now though is that if I say to Python print("Hello!"), save it in a file, and then run it, a black window opens up and closes right away. I just do not understand why it is doing this.
Window closes immediately after running program
1
0
1
0
0
7,183
25,012,108
2014-07-29T09:30:00.000
0
0
1
0
0
python,character,word-count
0
25,012,260
0
4
0
false
0
0
If all the words written in Latin letters are in English, you could use regular expressions.
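A rough sketch under that assumption: count Latin-letter words against all whitespace-separated tokens. This treats every Latin-letter word as English, which is only an approximation, and scripts without spaces (like the Chinese sentence) each count as a single token:

    # -*- coding: utf-8 -*-
    import re

    def english_ratio(paragraph):
        tokens = paragraph.split()
        if not tokens:
            return 0.0
        latin = [t for t in tokens if re.match(r"^[A-Za-z]+[.,!?]*$", t)]
        return 100.0 * len(latin) / len(tokens)

    text = u"This is paragraph in English. 这是在英国段。 Это пункт на английском языке."
    print(english_ratio(text))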
1
1
0
0
Let's say that I have a paragraph with different languages in it, like: This is paragraph in English. 这是在英国段。Это пункт на английском языке. این بند در زبان انگلیسی است. I would like to calculate what percentage (%) of this paragraph consists of English words. So I would like to ask how to do that in Python.
How to calculate percentage of english words in a paragraph using Python
0
0
1
0
0
3,245
25,027,339
2014-07-30T01:11:00.000
0
0
0
0
0
python,html,extract
0
25,027,442
0
4
0
false
1
0
BeautifulSoup can be used to parse the HTML document and extract anything you want; it is not designed for downloading. You can find the elements you want by their class and id.
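A small sketch of that division of labour, with requests doing the downloading and BeautifulSoup doing the parsing; the URL, tag, class, and id below are placeholders rather than anything from the question:

import requests
from bs4 import BeautifulSoup

html = requests.get("http://example.com").text            # download the page source
soup = BeautifulSoup(html, "html.parser")                 # parse it

for div in soup.find_all("div", class_="article"):        # hypothetical class name
    print(div.get_text(strip=True))

print(soup.find(id="main"))                               # hypothetical id

Keep in mind that Chrome's Inspect Element shows the DOM after JavaScript has run, whereas requests only fetches the raw source; JavaScript-generated content needs a browser-driving tool such as Selenium.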
1
6
0
0
I'd like to get the data from inspect element using Python. I'm able to download the source code using BeautifulSoup but now I need the text from inspect element of a webpage. I'd truly appreciate if you could advise me how to do it. Edit: By inspect element I mean, in google chrome, right click gives us an option called inspect element which has code related to each element of that particular page. I'd like to extract that code/ just its text strings.
How to get data from inspect element of a webpage using Python
0
0
1
0
1
34,323
25,057,937
2014-07-31T11:38:00.000
1
0
0
0
1
wxpython,wxwidgets
0
25,059,718
0
1
0
true
0
1
The widget probably doesn't support text alignment. If you want complete control over how it displays its contents, then you should probably switch to a custom drawn control, such as ComboCtrl.
1
0
0
0
I have a combobox in wxPython but I can't figure out how to align the text it contains to the right. I have tried wx.ComboBox(self, choices=["1","2","3"], style=wx.TEXT_ALIGNMENT_RIGHT), but that didn't work.
How do I right align the text in a wx.ComboBox?
0
1.2
1
0
0
835
25,093,943
2014-08-02T10:17:00.000
1
1
0
0
0
python
0
25,106,484
0
1
0
true
0
0
When you're working on a library you can use python setup.py develop instead of install. This will install the package into your local environment and keep it updated as you develop. To be clear, if you use develop you don't have to run it again when you change your source files.
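As a lighter-weight alternative (my suggestion, not part of the answer above), the test script can simply put the working copy ahead of the installed package on sys.path; the relative path below is inferred from the folder layout given in the question:

import os
import sys

# make the local checkout win over any installed copy of the package
src = os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "slimit", "src"))
sys.path.insert(0, src)

import slimit
print(slimit.__file__)   # should now point into develop/slimit/src/slimit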
1
0
0
0
I have installed the Python package slimit and I have cloned the source code from GitHub. I am making changes to this package in my local folders which I want to test (often), but I don't want to always run python setup.py install. My folder structure is: ../develop/slimit/src/slimit (contains the package files) and ../develop/test/test.py. I'm using Eclipse + PyDev + Python 2.7 on Linux. Should I run Eclipse with "sudo rights"? Even better, is there a way to import the local development package into my testing script?
how to test changes to an installed package
1
1.2
1
0
0
227
25,106,897
2014-08-03T16:52:00.000
-1
0
0
0
0
python,openpyxl
0
47,234,993
0
2
0
false
0
0
As A-Palgy said, you have to add worksheet.sheet_view.rightToLeft = True, but first you have to enable that feature in the views.py file at this path: C:\python36\Lib\site-packages\openpyxl\worksheet\views.py, editing the line rightToLeft = Bool(allow_none=True) to rightToLeft = Bool(allow_none=False).
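For reference, a minimal sketch of the attribute in use; setting it directly is normally sufficient in recent openpyxl versions, so treat the views.py edit above as a fallback to try only if this fails (that is my assumption, not something confirmed here):

from openpyxl import Workbook

wb = Workbook()
ws = wb.active
ws.sheet_view.rightToLeft = True    # display this sheet right-to-left when opened in Excel
ws["A1"] = "data"
wb.save("rtl_example.xlsx")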
1
4
0
0
I was wondering how to adjust the display direction (left-to-right / right-to-left) with openpyxl, or if it's even possible. I haven't really found anything in the documentation; maybe I'm blind. Thanks in advance :)
Python Openpyxl display left to right
0
-0.099668
1
0
0
930
25,110,635
2014-08-04T01:23:00.000
1
1
0
1
1
python,linux,unix,cron,raspberry-pi
0
25,110,706
0
2
0
true
0
0
It looks like you may have a stray . in there that would likely cause an error in the command chain. Try this: cd /usr/local/sbin/cronjobs && virtualenv/secret_ciphers/bin/activate && cd csgostatsbot && python3 CSGO_STATS_BOT_TASK.py && deactivate Assuming that the virtualenv directory is in the cronjobs directory. Also, you may want to skip the activate/deactivate and simply run the python3 interpreter right out of the virtualenv, i.e. /usr/local/sbin/cronjobs/virtualenv/secret_ciphers/bin/python3 /usr/local/sbin/cronjobs/csgostatsbot/CSGO_STATS_BOT_TASK.py Edit in response to comments from OP: The activate call is what activates the virtualenv. Not sure what the . would do aside from cause shell command parsing issues. Both examples involve the use of the virtualenv. You don't need to explicitly call activate. As long as you invoke the interpreter out of the virtualenv's directory, you're using the virtualenv. activate is essentially a convenience script that tweaks your PATH so python3 and the other bin files refer to the virtualenv's directory instead of the system install. 2nd Edit in response to additional comment from OP: You should redirect stderr, i.e.: /usr/local/sbin/cronjobs/virtualenv/secret_ciphers/bin/python3 /usr/local/sbin/cronjobs/csgostatsbot/CSGO_STATS_BOT_TASK.py > /tmp/botlog.log 2>&1 And see if that yields any additional info. Also, 5 asterisks in cron will run the script every minute, 24/7/365. Is that really what you want? 3rd Edit in response to additional comment from OP: If you want it to always be running, I'm not sure you really want to use cron. Even with 5 asterisks, cron only launches the script once per minute, so it is not truly always running. If it takes longer than a minute to run, you could get multiple copies running (which may or may not be an issue, depending on your code), and if it runs really quickly, say in a couple of seconds, the rest of the minute passes before it runs again. It sounds like you want the script to essentially be a daemon. That is, run the main script in a while True loop and launch it once. Then you can quit it via <ctrl>+c; otherwise it just runs perpetually.
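A bare-bones sketch of that daemon idea; run_bot_once is a hypothetical stand-in for whatever CSGO_STATS_BOT_TASK.py actually does, and the 60-second pause is an arbitrary choice:

import time

def run_bot_once():
    # hypothetical placeholder for the bot's real work
    pass

while True:              # keep going until interrupted
    run_bot_once()
    time.sleep(60)       # pause between iterations instead of relying on cron

Launch it once (manually or from a startup mechanism of your choice) and stop it with Ctrl+C, as described above.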
1
0
0
0
I am trying to run a cron script in Python 3, so I had to set up a virtual environment (if there is an easier way, please let me know), and in order to run the script I need to be in the script's parent folder, as it writes to text files there. Here is the long string of commands I have come up with; it works in the console but does not work in cron (or I can't find the output). I can't type the five asterisks here without them turning into bullet points, but I do have them in the crontab. cd usr/local/sbin/cronjobs && . virtualenv/secret_ciphers/bin/activate && cd csgostatsbot && python3 CSGO_STATS_BOT_TASK.py && deactivate
how to write a multi-command cronjob on a raspberry pi or any other unix system
0
1.2
1
0
0
822
25,120,159
2014-08-04T13:44:00.000
2
0
1
0
0
python,sockets,thread-safety
0
25,122,539
0
1
0
true
0
0
The socket API is thread safe (at least on Linux and Windows) to the extent that the system won't crash and the data will all be transferred. It's just that data sent from different threads may be interleaved, and there is no guarantee which thread will receive what. But the minimum unit of transfer is 1 byte, so if you have a protocol where messages are only 1 byte and interleaving doesn't make a difference, ... send away!
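A small self-contained sketch of the one-byte case using a local socket pair (socket.socketpair is available on Unix-like systems and on recent Python versions on Windows); the byte values are arbitrary:

import socket
import threading

writer, reader = socket.socketpair()     # a connected pair of sockets for local demonstration

def send_one(byte):
    writer.send(byte)                    # each thread writes exactly one byte

threads = [threading.Thread(target=send_one, args=(ch,)) for ch in (b"A", b"B", b"C", b"D")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(reader.recv(4))                    # all four bytes arrive, but their order is not guaranteed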
1
0
0
0
From what I have found about the thread safety of sockets, they are not thread safe. But what about each thread accessing a socket to write or read only one byte at a time (1 byte meaning 1 character)? Is that also unsafe? I am coding in Python.
is socket thread safe in writing or reading a 1byte?
0
1.2
1
0
0
356
25,125,532
2014-08-04T18:49:00.000
1
0
1
0
1
python,installation,iexpress
0
31,952,116
0
1
0
false
0
0
iexpress doesn't let you include folders, but you can include a batch file, which may create folders and copy files to the respective folder. To run a batch file, specify cmd /c IncludedBatchFile.bat under Install Program in the iexpress wizard.
1
0
0
0
OK, so I've used iexpress a few times without a problem. I created a nice little program for my buddies and me, and I'm now in the process of creating an installation package for it. I like iexpress because it makes things easy and has the license agreement window and whatnot. So the program is made. Using Windows and iexpress I attempt to make the installer; the problem is that there is one folder containing an item I need, and it has to be in that folder when the installed program runs. Problem: I can select files but not folders for the list of items to include in the installer. Question: how do I include the folder in the install package so there don't need to be additional steps during installation? I have thought about zipping it, but there isn't a way (that I know of) to add an extract command after the initial install extract. I figure installers are to programs what instruction booklets are to Ikea furniture, so I figured this would be the best place for help. Thanks very much.
iexpress assistance for my program
0
0.197375
1
0
0
535
25,138,508
2014-08-05T12:11:00.000
0
0
0
0
0
python,image,3d,transform
0
25,142,706
0
2
0
false
0
0
Firstly, all lines in 3d correspond to an equation; secondly, all lines in 3d that lie on a particular plane for part of their length correspond to equations that belong to a set of linear equations sharing certain features, which you would need to determine. The first thing you should do is identify the four corners of the supposed plane - they will have x, y or z values more extreme than the other points. Then check that the lines between the corners have equations in the set - three points in 3d always define a plane, four points may not. Then you should 'plot' the points of two parallel sides using the appropriate linear equations. All the other points in the supposed plane will be 'on' lines (whose equations are also in the set) that run perpendicular between the two parallel sides. The two end points of a perpendicular line on the sides will define each equation. The crucial thing to remember when determining whether a point is 'on' a line is that it may not be, even if the supposed plane was inputted as a plane. This is because the x, y and z values generated by an equation will be rounded so as to correspond to 'real' points as defined by the resolution the graphics program allows. Therefore you must allow for a (very small) discrepancy between where a point 'should be' and where it actually is - this may be just one pixel (or whatever unit of resolution is being used). To look at it another way - a point may be on a perpendicular between two sides but not on a perpendicular between the other two solely because of a rounding error in one of the two equations. If you want to test for a 'bumpy' plane, for whatever reason, just increase the allowed discrepancy. If you post a carefully worded question about the set of equations for the lines in a plane on math.stackexchange.com, someone may know more about it.
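Since the question mentions that the data is already in a numpy array, here is a more numerical shortcut offered as a hedged alternative to the equation-based procedure above (it is not the method described in this answer): fit a plane by least squares and use the residual thickness as the flatness test. The tolerance value is an arbitrary assumption in whatever units the voxel coordinates use:

import numpy as np

def is_flat(points, tol=1.0):
    """Return True if the (N, 3) array of surface points lies on a plane to within tol."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # the smallest singular value measures the cloud's extent perpendicular to the best-fit plane
    singular_values = np.linalg.svd(centered, full_matrices=False)[1]
    return singular_values[-1] <= tol

# hypothetical usage with made-up voxel coordinates
sample = np.array([[0, 0, 0.0], [10, 0, 0.1], [0, 10, -0.2], [10, 10, 0.05]])
print(is_flat(sample, tol=0.5))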
1
0
1
0
I used micro-CT (it generates a kind of 3D image object) to evaluate my samples, which were shaped like a cone. However, the main surface, which should be flat, cannot always be placed parallel to the surface of the image stacks. To perform the transform, I first have to find a way to identify the flat surface. So I learnt Python to read the image data into a numpy array, and then realized that I have no clue how to approach the problem mathematically. Any idea, suggestion, or even package recommendation would be much appreciated.
How to check a random 3d object surface if is flat in python
0
0
1
0
0
435