Web Development (int64: 0 to 1) | Data Science and Machine Learning (int64: 0 to 1) | Question (string: 28 to 6.1k chars) | is_accepted (bool: 2 classes) | Q_Id (int64: 337 to 51.9M) | Score (float64: -1 to 1.2) | Other (int64: 0 to 1) | Database and SQL (int64: 0 to 1) | Users Score (int64: -8 to 412) | Answer (string: 14 to 7k chars) | Python Basics and Environment (int64: 0 to 1) | ViewCount (int64: 13 to 1.34M) | System Administration and DevOps (int64: 0 to 1) | Q_Score (int64: 0 to 1.53k) | CreationDate (string: 23 chars) | Tags (string: 6 to 90 chars) | Title (string: 15 to 149 chars) | Networking and APIs (int64: 1 to 1) | Available Count (int64: 1 to 12) | AnswerCount (int64: 1 to 28) | A_Id (int64: 635 to 72.5M) | GUI and Desktop Applications (int64: 0 to 1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 0 | I want to get the HTML content of a web page but most of the content is generated by javascript.
Is it possible to get this generated HTML (with python if possible)? | false | 9,618,776 | 0.197375 | 0 | 0 | 3 | The only way I know of to do this from your server is to run the page in an actual browser engine that will parse the HTML, build the normal DOM environment, run the javascript in the page and then reach into that DOM engine and get the innerHTML from the body tag.
This could be done by firing up Chrome with the appropriate URL from Python and then using a Chrome plugin to fetch the dynamically generated HTML after the page was done initializing itself and communicate back to your Python. | 0 | 2,524 | 0 | 3 | 2012-03-08T13:59:00.000 | javascript,python,html | How can I get the HTML generated with javascript? | 1 | 2 | 3 | 9,618,891 | 0 |
1 | 0 | I want to get the HTML content of a web page but most of the content is generated by javascript.
Is it possible to get this generated HTML (with python if possible)? | false | 9,618,776 | 0 | 0 | 0 | 0 | If most of the content is generated by Javascript then the Javascript may be doing ajax calls to retrieve the content. You may be able to call those server side scripts from your Python app.
Do check that it doesn't violate the website's terms though and get permission. | 0 | 2,524 | 0 | 3 | 2012-03-08T13:59:00.000 | javascript,python,html | How can I get the HTML generated with javascript? | 1 | 2 | 3 | 9,618,929 | 0 |
0 | 0 | So I have tried to find an answer but must not be searching correctly, or what I'm trying to do is the wrong way to go about it.
So I have a simple python script that creates a chess board and pieces in a command line environment. You can input commands to move the pieces. So one of my co-workers thought it would be cool to play each other over the network. I agreed and tried by creating a text file to read and write to on the network share. Then we would both run the script that reads that file. The problem I ran into is I pretty much DoS-attacked that file share since it kept trying to check that file on the network share for an update.
I am still new to python and haven't ever written code that travels the internet, or even a simple local network. So my question is how should I go about properly allowing 2 people to access this data at the same time without stealing all the network resources?
Oh, also I'm using version 2.6 because that's what everyone else uses and they refuse to change to the new syntax | false | 9,629,040 | 0 | 0 | 0 | 0 | First off, without knowing how many times you are checking the file with the moves, it is difficult to know why the file-share is getting DoS-ed. Most networks and network shares these days can handle that level of traffic - they are all gigabit Ethernet, so unless you are transferring large chunks of data each time, you should be ok. If you are transferring the whole file each time, then I'd suggest that you look at optimizing that.
That said, coming to your second question on how this is handled at a network level, to be honest, you are already doing it in a certain way - you are accessing a file on a network share and modifying it. The only optimization required is to be able to do it efficiently. Even over the network operations in a concurrent world do the same. In that case, it will be using fast in-memory database storing various changes / using a high-scale RDBMS / in the case of fast-serving web-servers better async I/O.
In the current case, since there are two users playing the game, I suggest that you work on a way to transmit only the difference in the moves each time over the network. So, instead of modifying the file over the network share, you can send the moves over to a server component and it synchronizing the changes locally to the file. Of course, this means you will need to create a server component that would do something like this
user1's moves <--> server <--> user2's moves . Server will modify the moves file.
Once you start doing this, you get into the realm of server programming / preventing race conditions etc. It will be a good learning experience. | 0 | 347 | 0 | 0 | 2012-03-09T04:41:00.000 | python | Python over a network share | 1 | 1 | 2 | 9,629,110 | 0 |
1 | 0 | I'm a newcomer, and if the question is too easy I apologize for that.
Assume I want to dev a classical online judge system, obviously the core part is
get users' code to a file
compile it on server
run it on server (with some sandbox things to prevent damage)
the program exit itself, then check the answer.
or get the signal of program collapsing.
I wonder if it's possible to do all the things using Node.js, how to do the sandbox things. Is there any example for compile-sandbox-run-abort-check thing?
additional:
is it more convenient to develop such a system using Python?
thanks in advance. | false | 9,636,294 | 0 | 0 | 0 | 0 | To accomplish the sandbox, it would be fairly easy to do this by simply running your code inside of a closure that reassigns all of the worrisome calls to NaN
for instance, if the code executes inside a closure where eval=NaN | 0 | 1,228 | 0 | 4 | 2012-03-09T15:06:00.000 | javascript,python,node.js,compiler-construction | Is that possible to develop a ACM ONLINE JUDGE system using NODE.JS(or PYTHON)? | 1 | 1 | 2 | 9,636,911 | 0 |
1 | 0 | I want to take results from a web page, sent from dom as json through ajax, then send this data to a python script, run it, then return the new results back as json. I was told a php script running gearman would be a good bet, but I'm still not sure how that would work. | false | 9,642,259 | 0 | 1 | 0 | 0 | Put your Python script in your CGI directory and use the cgi and json modules in your script to read AJAX from post/get params. Of course you can do a system call from PHP to run a Python script, but I can't think of a good reason why you would. | 0 | 703 | 0 | 0 | 2012-03-09T23:18:00.000 | php,javascript,python,gearman | How can I run a python script through a webserver and return results to javascript? | 1 | 1 | 2 | 9,642,671 | 0 |
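A minimal sketch of the CGI approach the answer suggests; the script name, the "data" form field, and the processing step are all made up for illustration:

```python
#!/usr/bin/env python
# process.py -- a hypothetical script dropped in the web server's CGI directory
import cgi
import json

form = cgi.FieldStorage()                            # reads GET/POST parameters
payload = json.loads(form.getvalue("data", "{}"))    # "data" is an assumed field name

result = {"echo": payload}                           # replace with real processing

print "Content-Type: application/json"               # CGI response headers (Python 2 print)
print
print json.dumps(result)                             # JSON handed back to the JavaScript caller
```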
0 | 0 | I am working on a Python script that will extract data from Database and prepare data for a front graph amCharts tool. I am returning data via an AJAX call made from the front end side in JSON / XML format.
Is it safe to return data in JSON / XML format to the browser? In case my tool goes live, wouldn't the JSON / XML be viewable on the client side before reading? Is it a safe way for data to be transmitted to the client side?
What are the other ways the Python script can print out data such that client side can show data in graphs etc in secured way? Also is my approach correct - is there any other way to represent data as Graphs?
Please advise | true | 9,643,548 | 1.2 | 0 | 0 | 1 | You should use HTTPS on your server, so that the data is encrypted as it leaves the server.
Ultimately, though, the browser will decrypt the data. If at anytime you don't even want the target end-user to see the data, then you have to follow this rule: don't send it. | 0 | 347 | 0 | 0 | 2012-03-10T02:53:00.000 | python | JSON XML Graph Data | 1 | 1 | 1 | 9,643,693 | 0 |
0 | 0 | I've written an IRC bot using Twisted and now I've gotten to the point where I want to be able to dynamically reload functionality.
In my main program, I do from bots.google import GoogleBot and I've looked at how to use reload to reload modules, but I still can't figure out how to do dynamic re-importing of classes.
So, given a Python class, how do I dynamically reload the class definition? | false | 9,645,388 | 0 | 0 | 0 | 0 | Better yet subprocess the plugins, then hypervise the subprocess, when the files change reload the plugins process.
Edit: cleaned up. | 1 | 6,092 | 0 | 4 | 2012-03-10T09:33:00.000 | python,reflection,twisted | Dynamically reload a class definition in Python | 1 | 1 | 7 | 9,645,892 | 0 |
1 | 0 | I'd like to know if there is a way to get information from my banking website with Python. I'd like to retrieve my card history and display it, and possibly save it into a text document each month.
I have found the URLs etc. to log in and get the information from the website, which works from a browser, but I have been using urllib2 to "open" the webpages from Python and I have a feeling it's not working because of some cookie or session things.
I can get any information I want from a website that does not require a login with urllib2, and then save the actual HTML and go through it later, but I can't on my bank's website.
Any help would be appreciated | false | 9,647,381 | 0.53705 | 0 | 0 | 3 | This is a part of Web-Scraping :
Web-scraping is a standard task that can serve various needs.
Scraping data out of secure-website means https
Handling https is not a problem with mechanize and BeautifulSoup
Although urllib2 with HTTPCookieJar also works fine
If managing the cookies is the problem, then I would recommend mechanize
Considering the case of your BANK-Site :
I would recommend not to play with your account.
If you must then, its not as easy as any normal secure/non-secure site.
These sites are designed to with-stand such scripts.
Problems that you would face with this:
BANK sites will surely have Captcha that is almost impossible to by-pass with a script unless you employ a lot of rocket science and effort.
Other problem that you will definitely face is javascript, standard scripting solutions are focused to manage cookies, HTML parsing, etc. For processing javascript on links you will have to process js in your python script. That again needs a lot of effort.
Then, AJAX that again comes from javascript fetches data from server after page-load.
So, it will require you to take a lot of effort to do this task.
Also, if you try doing this you risk blocking access to your account, since banking sites are quick to block account access after 3-4 unsuccessful attempts at login or captcha, etc.
So, think before you do. | 0 | 2,748 | 0 | 2 | 2012-03-10T14:53:00.000 | python,post,cookies,get,urllib2 | Python get data from secured website | 1 | 1 | 1 | 9,647,462 | 0 |
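For the plain cookie/session part of the problem (not the captcha or JavaScript issues mentioned above), a rough mechanize sketch might look like this; the URL and form field names are invented for illustration:

```python
import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)                 # ignore robots.txt for this private script
br.open("https://bank.example.com/login")   # hypothetical login page

br.select_form(nr=0)                        # assumes the login form is the first form on the page
br["username"] = "myuser"                   # actual field names depend on the page's HTML
br["password"] = "mypassword"
response = br.submit()                      # session cookies are kept automatically

html = response.read()                      # authenticated page, ready for parsing
```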
0 | 0 | I wonder if TDD could help my programming. However, I cannot use it simply as most of my functions take large network objects (many nodes and links) and do operations on them. Or I even read SQL tables.
Most of the time it's not really the logic that breaks (i.e. not semantic bugs), but rather some function calls after refactoring :)
Do you think I can use TDD with such kind of data? What do you suggest for that? (mock frameworks etc?)
Would I somehow take real data, process it with a function, validate the output, save input/output states to some kind of mock object, and then write a test on it? I mean just in case I cannot provide hand made input data.
I haven't started TDD yet, so references are welcome :) | false | 9,656,064 | 0.291313 | 0 | 0 | 3 | You've pretty much got it. Database testing is done by starting with a clean, up-to-date schema and adding a small amount of known, fixed data into the database. You can then do operations on this controlled environment, knowing what results you expect to see.
Working with network objects is a bit more complex, but it normally involves stubbing them (i.e. removing the inner functionality entirely) or mocking them so that a fixed set of known data is returned.
There is always a way to test your code. If it's proving difficult, it's normally the code design that needs some rethinking.
I don't know any Python specific TDD resources, but a great resource on TDD in general is "Test Driven Development: A Practical Guide" by Coad. It uses Java as the language, but the principles are the same. | 1 | 420 | 0 | 2 | 2012-03-11T15:04:00.000 | python,tdd | TDD with large data in Python | 1 | 1 | 2 | 9,656,098 | 0 |
0 | 0 | I've just created a web chat server with Tornado over Python. The communication mechanism is to use long-polling and I/O events.
I want to benchmark this web chat server at large scale, meaning I want to test this chat server (Tornado based) to see how many chatters it can withstand.
Because I'm using cookies to identify sessions, presently I can only test with maximum 5 (IE, Firefox, Chrome, Safari, Opera) sessions per computer (cookie path has no use coz everything goes thru' the same web page), but in my office we only have limited number of computers.
I want to test this Tornado app at the extreme, hopefully it can withstand few thousand concurrent users like Tornado is advertising, but having no clue how to do this! | true | 9,665,913 | 1.2 | 1 | 0 | 1 | I would run the server in a mode where you let the client tell which client they are. i.e. change the code so it can be run this way as required. This is less secure, but makes testing easier. In production, don't use this option. This will give you a realistic test from a small number of client machines. | 0 | 510 | 0 | 2 | 2012-03-12T11:06:00.000 | python,performance,chat,tornado,long-polling | How to benchmark web chat performance? | 1 | 1 | 1 | 9,668,433 | 0 |
1 | 0 | This might be a simple question, but does anyone know how to disable a button after clicking it in OpenERP?
Please help!!!!!
Thanks for all your help.... | false | 9,666,425 | 0 | 0 | 0 | 0 | If we talking about web interface then you could disable it by javascript. | 0 | 2,267 | 0 | 4 | 2012-03-12T11:46:00.000 | python,openerp | How to disable a button after clicking it in OpenERP | 1 | 1 | 3 | 9,667,268 | 0 |
0 | 0 | I'm trying to create a 'Download Manager' for Linux that lets me download one single file using multiple threads. This is what I'm trying to do :
Divide the file to be downloaded into different parts by specifying an offset
Download the different parts into a temporary location
Merge them into a single file.
Steps 2 and 3 are solvable, and it is at Step #1 that I'm stuck. How do I specify an offset while downloading a file?
Using something along the lines of open("/path/to/file", "wb").write(urllib2.urlopen(url).read()) does not let me specify a starting point to read from. Is there any alternative to this? | false | 9,701,682 | 0.197375 | 0 | 0 | 3 | first, the http server should return Content-Length header. this is usually means the file is a static file, if it is a dynamic file, such as a result of php or jsp, you can not do such split.
then, you can use http Range header when request, this header tell the server which part of file should return. see python doc for how set and parse http head.
to do this, if the part size is 100k, you first request with Range: 0-1000000 100k will get first part, and in its conent-length in response tell your the size of file, then start some thread with different Range, it will work | 1 | 3,774 | 0 | 6 | 2012-03-14T12:08:00.000 | python,download,urllib2,fedora | Download A Single File Using Multiple Threads | 1 | 1 | 3 | 9,702,345 | 0 |
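A minimal urllib2 sketch of fetching one part of the file with a Range header (the URL, offsets, and part filename are arbitrary); each downloader thread would use a different byte range:

```python
import urllib2

url = "http://example.com/big.iso"          # hypothetical file
start, end = 0, 102399                      # first 100 KiB; other threads request other ranges

req = urllib2.Request(url)
req.add_header("Range", "bytes=%d-%d" % (start, end))
resp = urllib2.urlopen(req)                 # server answers 206 Partial Content

with open("part_000.tmp", "wb") as out:     # temporary part, merged with the others later
    out.write(resp.read())
```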
0 | 0 | I've read that socketserver is easier to use, but for someone who is just learning about sockets, which would be quicker and more beginner-friendly, socket or socketserver? For a very basic client/server setup using stream sockets. (Python) | true | 9,710,914 | 1.2 | 0 | 0 | 0 | socket is the low-level interface which SocketServer (as well as other networking code) is based off of. I'd start out learning it, whether you plan to use it directly or not, just so that you know what you're working with.
Also, SocketServer is of no use if you're writing client code. :) | 0 | 72 | 0 | 0 | 2012-03-14T21:58:00.000 | python,sockets,socketserver | Which is better to use for the server of a basic server/client socket implementation, "socket" or "socketserver"? | 1 | 1 | 1 | 9,711,006 | 0 |
0 | 0 | Is it possible to upload a video on youtube with a remote URL (not from the local machine). I am using Youtube API and python gdata tools for this.
I don't have the videos on the server where the script will run, and I want to upload them directly to youtube from a remote URL, instead of downloading them first... Do you know if this is possible? | true | 9,711,561 | 1.2 | 0 | 0 | 3 | Short answer: Not possible.
Long answer: Videos are just data files. So the question becomes: is it possible for a program on Computer A to tell Server B to send a file to Server C using standard internet communication? YouTube only accepts POST requests for uploading videos, so Server B would need to send this request. And you can't tell Server B to do this with just a URL. | 0 | 1,998 | 0 | 1 | 2012-03-14T22:56:00.000 | python,youtube,gdata | Upload video to youtube via URL with python gdata | 1 | 1 | 1 | 9,714,978 | 0 |
0 | 0 | I have a problem I've been dealing with lately. My application asks its users to upload videos, to be shared with a private community. They are teaching videos, which are not always optimized for web quality to start with. The problem is, many of the videos are huge, way over the 50 megs I've seen in another question. In one case, a video was over a gig, and the only solution I had was to take the client's video from box.net, upload it to the video server via FTP, then associate it with the client's account by updating the database manually. Obviously, we don't want to deal with videos this way, we need it to all be handled automatically.
I've considered using either the box.net or dropbox API to facilitate large uploads, but would rather not go that way if I don't have to. We're using PHP for the main logic of the site, though I'm comfortable with many other languages, especially Python, but including Java, C++, or Perl. If I have to dedicate a whole server or server instance to handling the uploads, I will.
I'd rather do the client-side using native browser JavaScript, instead of Flash or other proprietary tech.
What is the final answer to uploading huge files though the web, by handling the server response in PHP or any other language? | true | 9,712,898 | 1.2 | 1 | 0 | 2 | It is possible to raise the limits in Apache and PHP to handle files of this size. The basic HTTP upload mechanism does not offer progressive information, however, so I would usually consider this acceptable only for LAN-type connections.
The normal alternative is to locate a Flash or Javascript uploader widget. These have the bonus that they can display progressive information and will integrate well with a PHP-based website. | 0 | 200 | 0 | 4 | 2012-03-15T01:44:00.000 | java,php,javascript,c++,python | Uploading huge files with PHP or any other language? | 1 | 1 | 2 | 9,712,935 | 0 |
1 | 0 | I am doing a car rental software, in which a front end back end are there, where the back end will do the accounting part. I have to send some data like customer name, amount, currency etc. to account engine to prepare the ledgers. I am confused whether to use json or soap for information exchange between front and back ends. ur suggestions are precious. thank u.. | true | 9,714,877 | 1.2 | 0 | 0 | 5 | Use JSON for data serialization. It's clean, simple, compact, widely supported, and understands data types. Use SOAP only if you like pain. It is a bloated sack of cruft built upon another bloated sack of cruft. | 0 | 181 | 0 | 1 | 2012-03-15T06:21:00.000 | php,python,json,zend-framework,serialization | Either json or Soap to exchange data in my project? | 1 | 3 | 3 | 9,716,132 | 0 |
1 | 0 | I am doing a car rental software, in which a front end back end are there, where the back end will do the accounting part. I have to send some data like customer name, amount, currency etc. to account engine to prepare the ledgers. I am confused whether to use json or soap for information exchange between front and back ends. ur suggestions are precious. thank u.. | false | 9,714,877 | 0.197375 | 0 | 0 | 3 | Use JSON.
My argument is that JSON maps directly to and from native data types in common scripting languages.
If you use Python, then None <-> null, True <-> true, False <-> false, int/float <-> Number, str/unicode <-> String, list <-> Array and dict <-> Object. You feel right at home with JSON.
If you use PHP, there should be similar mappings.
XML is always a foreign language for any programming language except Scala. | 0 | 181 | 0 | 1 | 2012-03-15T06:21:00.000 | php,python,json,zend-framework,serialization | Either json or Soap to exchange data in my project? | 1 | 3 | 3 | 9,720,556 | 0 |
1 | 0 | I am doing a car rental software, in which a front end back end are there, where the back end will do the accounting part. I have to send some data like customer name, amount, currency etc. to account engine to prepare the ledgers. I am confused whether to use json or soap for information exchange between front and back ends. ur suggestions are precious. thank u.. | false | 9,714,877 | 0 | 0 | 0 | 0 | Depending on your needs, you could use both. For example, using XML bindings you get the (de)serialization of the data going across the wire for free. That is, if you're going to be POSTing lots of data to your web-service, and want to avoid calling the equivalent of "request.getParameter" for each parameter and building your own objects and creating/registering different servlets for each endpoing, the bindings can save in development time. And for the response, you can have the payload be defined as a String and return JSON text, which gives you the benefits of that compact, javascript-friendly of that notation. | 0 | 181 | 0 | 1 | 2012-03-15T06:21:00.000 | php,python,json,zend-framework,serialization | Either json or Soap to exchange data in my project? | 1 | 3 | 3 | 9,720,841 | 0 |
0 | 0 | I have 10 XML files containing several Objects.
The XML files define ACTIONS on those objects.
ACTIONS on objects=
MODIFY values
DELETE Object
CREATE Object with values
I need to get the result of those 10 XML files (10 files of actions on those objects).
Any suggestion ?
programming .NET and ADO ?
programming PYTHON and minidom ?
spyXML from Altova ?
a commercial tool to load MYSQL ? | true | 9,725,725 | 1.2 | 0 | 0 | 1 | Finally we have used SAXON + XSLT2.0 (saxon called from Perl) and Perl::Twig for the parts we did not know how to program in XSLT | 0 | 81 | 0 | 1 | 2012-03-15T18:17:00.000 | python,.net,xml,xslt,ado.net | XML files to be sum up | 1 | 1 | 1 | 11,595,618 | 0 |
0 | 0 | I'm trying to send username and password data from a web form to my server.
The password is sent as plain text over a https connection, then properly encrypted on the server (using python hashlib.sha224) before being stored, however I'm not sure how to transmit the password text to the server in an encrypted format.
My web client is written in javascript, and the server is written in python. | false | 9,742,351 | 0 | 0 | 0 | 0 | The HTTPS channel over which you send the password to the server provides encryption that is good enough.
However, you need a more secure storage mechanism for the password. Use an algorithm like "bcrypt" with many thousands of hash iterations (bcrypt calls this the cost factor, and it should be at least 16, meaning 2^16 iterations), and a random "salt". This works by deriving an encryption key from the password, which is a computationally expensive process, then using that key to encrypt some known cipher text, which is saved for comparison on future login attempts.
Also, using HTTPS on the login only is not sufficient. You should use it for any requests that require an authenticated user, or that carry an authentication cookie. | 0 | 2,259 | 0 | 0 | 2012-03-16T18:05:00.000 | python,encryption,passwords,hashlib | How to encrypt password sent to server | 1 | 1 | 5 | 9,742,498 | 0 |
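A small sketch of the storage scheme the answer describes, using the third-party bcrypt package (the cost factor shown is only an example):

```python
import bcrypt  # third-party package, not part of the standard library

def store_password(plain_password):
    salt = bcrypt.gensalt(rounds=14)                               # rounds is the cost factor (2^rounds work)
    return bcrypt.hashpw(plain_password.encode("utf-8"), salt)     # store this value

def check_password(plain_password, stored_hash):
    # re-hashing with the stored value as the salt must reproduce the stored hash
    return bcrypt.hashpw(plain_password.encode("utf-8"), stored_hash) == stored_hash
```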
1 | 0 | I've been experimenting with the Google App Engine, and I'm trying to import certain libraries in order to execute API commands. I've been having trouble importing, however. When I try to execute "from apiclient.discovery import build", my website doesn't load anymore. | false | 9,747,258 | 0 | 0 | 0 | 0 | The packages need to be locally available; where did you put the packages, in the Python folder or in your project folder? | 0 | 2,099 | 1 | 0 | 2012-03-17T04:30:00.000 | python,google-app-engine,google-api,google-api-client,google-api-python-client | Google App Engine library imports | 1 | 1 | 2 | 9,748,040 | 0 |
1 | 0 | I want to write a python script which downloads the web-page only if the web-page contains HTML. I know that the content-type header will be used. Please suggest some way to do it, as I am unable to get the header before the file download. | true | 9,750,481 | 1.2 | 0 | 0 | 2 | Use http.client to send a HEAD request to the URL. This will return only the headers for the resource; then you can look at the content-type header and see if it is text/html. If it is, then send a GET request to the URL to get the body. | 0 | 103 | 0 | 0 | 2012-03-17T13:50:00.000 | python,download,html-parsing,beautifulsoup,printing-web-page | Download a URL only if it is a HTML Webpage | 1 | 1 | 1 | 9,750,658 | 0 |
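A rough sketch of the HEAD-then-GET idea with Python 3's http.client, which the answer names (on Python 2 the equivalent module is httplib); the host and path are placeholders:

```python
import http.client

conn = http.client.HTTPConnection("example.com")
conn.request("HEAD", "/some/page")                   # headers only, no body is transferred
head = conn.getresponse()
content_type = head.getheader("Content-Type", "")
head.read()                                          # drain the (empty) response before reusing the connection

if content_type.startswith("text/html"):
    conn.request("GET", "/some/page")                # now fetch the actual page
    body = conn.getresponse().read()
```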
1 | 0 | I am crawling online stores for price comparison. Most of the stores are using dynamic URLs heavily. This is causing my crawler to spend a lot of time on every online store. Even though most of them have only 5-6k unique products, they have unique URLs >= 300k. Any idea how to get around this.
Thanks in advance! | true | 9,752,891 | 1.2 | 0 | 0 | 0 | If you parsing some product pages, usually these URLs have some kind of product id.
Find the pattern to extract product id from URLs, and use it to filter already visited URLs. | 0 | 205 | 0 | 0 | 2012-03-17T19:08:00.000 | python,url,dynamic | How to handle dynamic URLs while crawling online stores? | 1 | 1 | 1 | 9,753,135 | 0 |
0 | 0 | I have used twitter search API to collect lots of tweets given a search keyword. Now that I have this collection of tweets, I'd like to find out which tweet has been retweeted most.
Since search API does not have retweet_count, I have to find some other way to check how many times each tweet has been retweeted. The only clue I have is that I have ID number for each tweet. Is there any way I could use these ID numbers to figure out how many times each tweet has been retweeted??
I am using twitter module for python. | false | 9,758,636 | 0 | 1 | 0 | 0 | I am currently studying twitter structure and had found out that is a field called tweet_count associated with each tweet as to # of times that particular original tweet has been retweeted | 0 | 1,571 | 0 | 1 | 2012-03-18T13:16:00.000 | python,api,twitter | Getting Retweet Count of a Given Tweet ID Number | 1 | 2 | 2 | 29,076,949 | 0 |
0 | 0 | I have used twitter search API to collect lots of tweets given a search keyword. Now that I have this collection of tweets, I'd like to find out which tweet has been retweeted most.
Since search API does not have retweet_count, I have to find some other way to check how many times each tweet has been retweeted. The only clue I have is that I have ID number for each tweet. Is there any way I could use these ID numbers to figure out how many times each tweet has been retweeted??
I am using twitter module for python. | false | 9,758,636 | 0 | 1 | 0 | 0 | i don't think so, since one can either retweet using the retweet command or using a commented retweet. At least the second alternative generates a new tweet id | 0 | 1,571 | 0 | 1 | 2012-03-18T13:16:00.000 | python,api,twitter | Getting Retweet Count of a Given Tweet ID Number | 1 | 2 | 2 | 9,758,703 | 0 |
0 | 0 | I have an assignment to create a secure communication between 2 people with a middle man.
The messages have to be encrypted using public and private keys and a X.509 certificate should be created for each user, this certificate is stored by the third party.
I'm currently sending messages between users through sockets.
Could someone suggest an easy to understand library that I could use to perform simple encryption? Any appropriate reading sources about the library will help as well. | true | 9,778,006 | 1.2 | 0 | 0 | 1 | I ended up using M2Crypto after trying PyOpenSSL
The problem with PyOpenSSL is that it doesn't have a method to return a public key. I was having a lot of problems with this.
M2Crypto has its own encryption method as well, meaning you don't need to install multiple libraries :) | 0 | 994 | 0 | 2 | 2012-03-19T21:11:00.000 | python,encryption,cryptography,toolkit | Python open source cryptographic toolkit | 1 | 1 | 5 | 9,821,078 | 0 |
0 | 0 | I have a multi threaded python application communicating with a separate service trough UDP.
Each thread is similar, at some point need a response from the separate service.
So practically, for each thread I create a new client socket and start to communicate. The problem is that on the server side each UDP packet seems to come from the same source port, and this creates a problem on the client side as to who receives whose message.
How can I enforce the socket to use a different ephemeral reception port for each instance in the same program different threads?
Thanks! | true | 9,789,410 | 1.2 | 0 | 0 | 2 | You can connect() each UDP socket to it's target. That way the ephemeral ports will be fixed (and different) for each thread. | 1 | 566 | 0 | 1 | 2012-03-20T15:08:00.000 | python,sockets,udp | Python UDP client Ephemeral Reception Port | 1 | 1 | 1 | 9,789,514 | 0 |
1 | 0 | In my latest Python project, utilizing Twisted, I've tried to be good at using the unittest module. At a high level, I'm building two RESTful APIs designed specifically to talk to each other. For most requests, I can just use DummyRequest and test the rendered values against an expected constant and that's been working fine.
However, I've got a few cases where the design requires a request on one server that (among other things) then sends a request to the other server, but doesn't really care about the response. What matters is that the request happens.
Like I said, I can test that functionality on the other side perfectly fine, but I'm getting stumped on how to test to ensure that the data was sent over. My ideas so far are either
Set up a dummy test server that just checks to see if the request was made and validates the input - Seems flaky and too much effort
Set up a decorator to wrap certain tests and modify urllib.urlopen to report when it was called, what we tried to retrieve, and allow me to simply return a known result there.
I'm leaning towards the second option as it seems more pythonic, but also a bit hacky.
Thoughts? | false | 9,798,910 | 0 | 0 | 0 | 0 | I don't know much about Twisted or how you set up your system under test, but could you start two servers on a single thread? One of them would be the one you are testing and another would be just a dummy that can accept any request. In addition to that, the dummy would store info that it has received the call. After initiating the operation on the first server that causes it to call the second one, you could then assert that the second one has received a request. | 0 | 963 | 0 | 3 | 2012-03-21T05:07:00.000 | python,unit-testing,twisted | Unit-testing client-server interaction in Twisted | 1 | 1 | 2 | 9,799,162 | 0 |
1 | 0 | I am working out how to build a python app to do image processing. A client (not a web browser) sends an image and some text data to the server and the server's response is based on the received image.
One method is to use a web server + WSGI module and have clients make a HTTP POST request (using multipart/form-data). The http server then 'works out' the uploaded image and other data that the program can use.
Another method is to create a protocol that only sends the needed data and is handled within the application. The application would be doing everything (listening on the port, etc).
Is one of these a stand-out 'best' way (if yes, which one?), or is it more up to preference (or is there another way which is better)? | false | 9,804,674 | 0 | 0 | 0 | 0 | In my opinion HTTP is an ideal protocol for sending files or large data; it is in very common use and easy to suit to any situation. If you use a self-created protocol, you may find it hard to adapt when you get other client needs, like a web API.
Maybe the discussions about HTTP's lack of instantaneity and agility make you hesitate about choosing HTTP, but that is mostly about instant messaging and server push, where there are better protocols. But when it comes to stability and flexibility, HTTP is always a good choice. | 0 | 739 | 0 | 0 | 2012-03-21T12:32:00.000 | python,sockets,wsgi | Sending image to server: http POST vs custom tcp protocol | 1 | 1 | 2 | 9,805,325 | 0 |
0 | 0 | I'm writing a python script (in fact a Calibre Recipe) to retrieve all the items under a specific tag or category in my readlist.
I'm able to retrieve all the items from the category, but I'd like to retrieve items feed-by-feed, so I need a way to list the feeds filed under a specific category (just as the Google Reader UI does when you click on a folder).
I'm unable to find a API for doing that.
Any suggestions?
Thanks. | false | 9,819,233 | 0.197375 | 0 | 0 | 1 | You can use https://www.google.com/reader/api/0/subscription/list to list the user's subscriptions. The folders each subscription is in are given by the categories list. You can append ?output=json to get the list of subscriptions as JSON. | 0 | 358 | 0 | 0 | 2012-03-22T09:14:00.000 | python,google-reader,calibre | Google Reader API Listing feed from a single tag or category | 1 | 1 | 1 | 10,144,450 | 0 |
0 | 0 | I am looking to send an img file created using Qemu snapshot feature through the network using Python. Its file is of varying size. | true | 9,850,143 | 1.2 | 0 | 0 | 0 | I think you may want to read the socket module doc of python. | 0 | 103 | 0 | 0 | 2012-03-24T08:14:00.000 | python,file,sockets | Sending an img file using Python | 1 | 1 | 1 | 9,855,161 | 0 |
0 | 0 | I have a working application using python and zeromq and I would like to optimize it.
Briefly, a master node sends the same request to all workers (about 200) and then collects the answers. Based on the answers, it sends a message back to one node and the node answers back.
Right now I implemented a very simple pattern. Each worker has one REP socket and the server has a list of REQ sockets. The server iterates through all sockets sending the general message and then iterates through all sockets to collect the answers. Finally, based on the answers the server picks one worker, sends a message to it and waits for the reply.
This is, of course, quite slow. The slowest part is sending 200 times the same message. Collecting is also slow. The solutions that I have found to distribute tasks and collect answers do load balance which is not what I need. I need that each worker receives the message and responds.
What is the pattern recommended for this situation?
Thanks | true | 9,852,614 | 1.2 | 0 | 0 | 0 | I don't know zmq. Here's a pattern that might not work, just to get started:
a master node send the same request to all workers (about 200)
master PUB bind *:3140 send
worker SUB connect masterhost:3140 SUBSCRIBE recv
the then collect the answers
worker PUSH connect masterhost:3141 send
master PULL bind *:3141 recv
Based on the answer, it sends a message back to one node and the node answers back.
master REQ connect workerhost:3142 send recv
worker REP bind *:3142 recv send | 0 | 3,007 | 0 | 3 | 2012-03-24T14:43:00.000 | python,distributed,zeromq | Distributing task with python and zeromq | 1 | 1 | 2 | 9,853,246 | 0 |
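A pyzmq sketch of the PUB/SUB broadcast plus PUSH/PULL collection outlined above, using the same port numbers (the final REQ/REP step is left out); treat it as an assumption-laden illustration rather than a drop-in implementation:

```python
import zmq

ctx = zmq.Context()

# master: broadcast one request to every worker, then collect all the answers
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:3140")
pull = ctx.socket(zmq.PULL)
pull.bind("tcp://*:3141")

pub.send(b"do-work")                                # one send reaches every subscriber
answers = [pull.recv() for _ in range(200)]         # one answer per worker

# worker side (runs on each of the ~200 worker hosts):
# sub = ctx.socket(zmq.SUB); sub.connect("tcp://masterhost:3140")
# sub.setsockopt(zmq.SUBSCRIBE, b"")
# push = ctx.socket(zmq.PUSH); push.connect("tcp://masterhost:3141")
# request = sub.recv(); push.send(b"my-answer")
```

Note that PUB/SUB drops anything published before all subscribers have connected (the "slow joiner" problem), so in practice the master needs some synchronization or retry step before broadcasting.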
0 | 0 | I have developed a server that serves audio files over HTTP. Using the Content-Length header I was able to view the current position, but this source isn't seekable. How can I make it seekable?
Some people recommended sending Accept-Ranges: bytes, but when I tried that the audio doesn't even play anymore. | false | 9,860,262 | -0.197375 | 0 | 0 | -1 | Think about what you're asking: A stream of bytes is arriving over the network, and you want random access over it? You can't. What you can do is implement buffering yourself, which you could do reasonably transparently with the io module. If you want to seek forward, discard the intermediate blocks; if you want to seek backward, you'll have to hold the stream in memory till you don't need it anymore.
If you don't want to buffer the entire stream client-side, you need a way to tell the server to seek to a different position and restart streaming from there. | 0 | 718 | 0 | 1 | 2012-03-25T12:54:00.000 | python,html,http,audio,streaming | Can't seek in streamed file | 1 | 1 | 1 | 9,861,360 | 0 |
0 | 0 | I cannot request url "http://www.besondere-raumdüfte.de" with urllib2.urlopen().
I tried to encode the string using urllib.urlencode with utf-8, idna, and ascii, but it still doesn't work. It raises URLError: <urlopen error unknown url type>. | true | 9,887,223 | 1.2 | 0 | 0 | 2 | What you need is u"http://www.besondere-raumdüfte.de/".encode('idna'). Please note how the source string is a Unicode constant (the u prefix).
The result is an URL usable with urlopen().
If you have a domain name with non-ASCII characters and the rest of the URL contains non-ASCII characters, you need to .encode('idna') the domain part and iri2uri() the rest. | 0 | 2,142 | 0 | 2 | 2012-03-27T09:56:00.000 | python,unicode,urllib2,urlopen | How to request a url with non-unicode carachters on main domainname (not params) in Python? | 1 | 1 | 2 | 9,889,366 | 0 |
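A Python 2 sketch of encoding only the host part with the idna codec and leaving the rest of the URL alone (the path handling here is simplified):

```python
# -*- coding: utf-8 -*-
import urllib2
import urlparse

url = u"http://www.besondere-raumdüfte.de/"
parts = urlparse.urlsplit(url)
ascii_host = parts.hostname.encode("idna")          # punycode form of the umlaut domain
ascii_url = "http://%s%s" % (ascii_host, parts.path.encode("ascii") or "/")

page = urllib2.urlopen(ascii_url).read()
```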
0 | 0 | I'm using urllib2.urlopen() to open sometimes potentially large files. I have a signal handler to catch SIGTERM, but is it possible to actually interrupt urlopen() when it's downloading a big file to close my program immediately, without waiting for the call to finish? | false | 9,897,647 | 0 | 0 | 0 | 0 | urlopen returns a file-like object. Data is only sent over the network when you make a .read() request on this object. (Your OS does some buffering of network data, so this isn't strictly true, but it's close enough for all practical purposes.)
So simply use the .read() method's capability to read data in chunks using a loop, perhaps 16K or 64K at a time, rather than retrieving the whole file at once. In your signal handler, then, you can close the file-like object and the file will stop downloading after the current chunk finishes. The smaller the chunk you use, the less latency there will be in stopping the download.
I'd use a global variable to hold the reference to file-like object so it is accessible in your signal handler; in this case it seems like the simplest solution.
If you should happen to try to read from the file-like object after closing it, you will get an exception, which you can handle gracefully. | 0 | 396 | 0 | 0 | 2012-03-27T21:01:00.000 | python,signals,urllib2,sigterm | Python: interrupting urllib2.urlopen() with SIGTERM | 1 | 1 | 1 | 9,897,777 | 0 |
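A rough sketch of that chunked-read pattern, with a SIGTERM handler that closes the response object (the URL and chunk size are arbitrary):

```python
import signal
import urllib2

resp = None

def on_sigterm(signum, frame):
    if resp is not None:
        resp.close()                     # makes the read loop below fail fast

signal.signal(signal.SIGTERM, on_sigterm)

resp = urllib2.urlopen("http://example.com/big-file")
with open("big-file", "wb") as out:
    while True:
        chunk = resp.read(65536)         # 64K chunks keep shutdown latency small
        if not chunk:
            break
        out.write(chunk)
```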
1 | 0 | Is there any way in a browser, to type python code into an input field, it will then be sent to a local server and executed and the result pushed back to the browser.
Basically a browser hosted python notebook, where the code gets evaluated on a different machine.
Is there any python package to do this.
something like what ideone.com or picloud do, but opensource and that can install on your own server.
Or any suggestions on how to do it, I have looked around already but have struggled to find something meaningful. | false | 9,899,180 | 0.066568 | 0 | 0 | 1 | I haven't tried myself. You may want to check out ipython notebook. | 1 | 92 | 0 | 0 | 2012-03-27T23:32:00.000 | python,python-3.x,cloud | cloud scripting through browser and evaluate python on server | 1 | 1 | 3 | 9,899,346 | 0 |
1 | 0 | I'm making an application in Python and using Amazon Web Services in some modules.
I'm now hard coding my AWS access id and secret key in a *.py file, or might move them out to a configuration file in the future.
But there's a problem: how can I protect the AWS information from other people? As I know, Python is a language that is easy to decompile.
Is there a way to do this?
Well what I'm making is an app to help user upload/download stuff from cloud. I'm using Amazon S3 as cloud storage. As I know Dropbox also using S3 so I'm wondering how they protects the key.
After a day's research I found something.
I'm now using boto (an AWS library for Python). I can use the function 'generate_url(X)' to get a URL for the app to access the object in S3. The URL will expire in X seconds.
So I can build a web service for my apps to provide them the URLs. The AWS keys will not be set in the app but in the web service.
It sounds great, but so far I can only download objects with this function; upload doesn't work. Does anybody know how to use it for uploading?
Does anyone here know how to use key.generate_url() of boto to get a temporary url for uploading stuff to S3? | false | 9,926,825 | 0 | 0 | 0 | 0 | Don't put it in applications you plan to distribute. It'll be visible and they can launch instances that are directly billable to you or worst..they can take down instances if you use it in production.
I would look at your programs design and seriously question why I need to include that information in the app. If you post more details on the design I'm sure we can help you figure out a way in which you don't need to bundle this information. | 0 | 6,307 | 0 | 11 | 2012-03-29T13:54:00.000 | python,amazon-web-services | How can I protect my AWS access id and secret key in my python application | 1 | 1 | 4 | 9,928,772 | 0 |
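A hedged boto sketch of the web-service side handing out short-lived signed URLs; whether a PUT upload URL works exactly like this depends on the boto version and your bucket settings, so treat the method="PUT" part as an assumption:

```python
from boto.s3.connection import S3Connection

conn = S3Connection("ACCESS_KEY", "SECRET_KEY")      # only the web service ever sees these
bucket = conn.get_bucket("my-app-bucket")            # hypothetical bucket name
key = bucket.new_key("uploads/user123/file.bin")     # hypothetical object name

# signed URL the client can PUT the file body to for the next 10 minutes
upload_url = key.generate_url(expires_in=600, method="PUT",
                              headers={"Content-Type": "application/octet-stream"})

# signed URL for downloading the same object later
download_url = key.generate_url(expires_in=600, method="GET")
```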
0 | 0 | I'm making a server which is controlled by php scripts, i have clients (androidphones) who calls these script to control the server. I'm saving the ip's the php receives to the DB.
now i'm looking for a way to check if these ip's are still reachable..
this is what i tried so far:
Cron job -> i'm developing with usbwebserver so i cant use php from cmd
make a ping request to the IP from the PHP script: if available it will return rather "fast", but if it fails I get the maximum execution error. This way of working gets me the results I want, but without the possibility of threading this, this solution isn't going to perform in time as I wanted. Is there a way this approach could work?
something i'm considering: -
make a request to de device through http, and do something with the result i get back, is this even possible (making a request from server to client)?
making a python script that gets the ip from db and makes the ping calls and stores the results back into the db.
making timestamps when a device connects and check timestamps from other devices, if max time was exceeded then update DB
any suggestions? | false | 9,942,423 | 0 | 1 | 0 | 0 | This will not work, as you would always need to have your client responding. I suggest using the client (e.g. in HTML using JavaScript) to make a connection (ping) every minute from your script on the client to your server, and in your cronjobs letting a script run every x minutes to update the clients that didn't within x. | 0 | 692 | 0 | 0 | 2012-03-30T11:54:00.000 | php,python,mysql,request,ip | PHP make a request check if client is available via ip | 1 | 2 | 3 | 9,942,581 | 0 |
0 | 0 | I'm making a server which is controlled by php scripts, i have clients (androidphones) who calls these script to control the server. I'm saving the ip's the php receives to the DB.
now i'm looking for a way to check if these ip's are still reachable..
this is what i tried so far:
Cron job -> i'm developing with usbwebserver so i cant use php from cmd
make a ping request to the IP from the PHP script: if available it will return rather "fast", but if it fails I get the maximum execution error. This way of working gets me the results I want, but without the possibility of threading this, this solution isn't going to perform in time as I wanted. Is there a way this approach could work?
something i'm considering: -
make a request to de device through http, and do something with the result i get back, is this even possible (making a request from server to client)?
making a python script that gets the ip from db and makes the ping calls and stores the results back into the db.
making timestamps when a device connects and check timestamps from other devices, if max time was exceeded then update DB
any suggestions? | false | 9,942,423 | 0 | 1 | 0 | 0 | There is absolutely no point in such checking.
You will always have these IPs UP.
That's simply because an Android client seldom has a real IP, but uses the Net via some router/proxy.
The Web is a one-way road.
To implement a heart-beat feature you have to make your Android clients ping the server. | 0 | 692 | 0 | 0 | 2012-03-30T11:54:00.000 | php,python,mysql,request,ip | PHP make a request check if client is available via ip | 1 | 2 | 3 | 9,942,749 | 0 |
1 | 0 | I am a newborn programmer still programming from the book on my Alt+Tab. One of the first programs I want to create is to help my mom in her work. I need to know if I can use Python to create it.
It needs to:
Go on-line and log-in with account / pass.
Do a search with specific criteria (use the site's search engine)
View all the results and pick only the newest ones.
Sort them out.
Notify me so that the newest ads are noticed the moment they are posted on the website.
From what I see the site says : .cgi in the end.
I know python can connect, download the text from a page and sort the wanted info, but can it log in, use the search engine and pick the options I need?
I don't want to skip my learning process, but I am so serious about this project I am ready to put Python on hold and start learning some language that can do it!
I will very much appreciate your guidance!
Thank you for your time!
AJ | false | 9,945,206 | 0.028564 | 1 | 0 | 1 | If I understand well, the idea of your program is to do an automated browsing session.
So yes, it's possible. It doesn't matter what the website is programmed in (CGI, PHP, etc.). All you need is to send data through post/get (like a real browser) and process the return (regexp and so on).
Good luck | 0 | 10,966 | 0 | 4 | 2012-03-30T14:45:00.000 | python,automation,notifications | Can web automation be done in Python? | 1 | 2 | 7 | 9,945,337 | 0 |
1 | 0 | I am a newborn programmer still programming from the book on my Alt+Tab. One of the first programs I want to create is to help my mom in her work. I need to know if I can use Python to create it.
It needs to:
Go on-line and log-in with account / pass.
Do a search with specific criteria (use the site's search engine)
View all the results and pick only the newest ones.
Sort them out.
Notify me so that the newest ads are noticed the moment they are posted on the website.
From what I see the site says : .cgi in the end.
I know python can connect, download the text from a page and sort the wanted info, but can it log in, use the search engine and pick the options I need?
I don't want to skip my learning process, but I am so serious about this project I am ready to put Python on hold and start learning some language that can do it!
I will very much appreciate your guidance!
Thank you for your time!
AJ | false | 9,945,206 | 0.028564 | 1 | 0 | 1 | I would point out that depending upon what site you are on, there may be a more efficient way (perhaps an exposed web service) than scraping data from the page and working with mechanize/selenium to do what you want. If you are on the web, browser driver tools are the hammers, and they will get the screws in the wood, but sometimes another tool will work better. | 0 | 10,966 | 0 | 4 | 2012-03-30T14:45:00.000 | python,automation,notifications | Can web automation be done in Python? | 1 | 2 | 7 | 9,945,530 | 0 |
0 | 0 | I'm writing a Python program to get an RSA public key. Is there a way to get it via Paramiko, or do I just read it as plain text, assuming it is in id_rsa.pub? | true | 9,946,744 | 1.2 | 1 | 0 | 1 | If you don't know where the public key file is located, Paramiko can't help you either - it also needs you to specify where it is. You can of course try the usual places (starting by parsing ~/.ssh/config if available), but you don't need Paramiko for that. | 0 | 215 | 0 | 0 | 2012-03-30T16:18:00.000 | python,ssh,paramiko | Can we get public key using Paramiko? Or just read it like plain text? | 1 | 1 | 1 | 9,947,158 | 0 |
1 | 0 | I would appreciate some help here.
Google checkout has many ways to send it checkout data. I am using the XML server-to-server.
I have everything ready and now I want to throw some xml at google. I have been doing some reading and I know of a couple of ways to do this, one with urllib, another with pyCurl, but I am using django over here and I searched the Django api for some way to POST data to another site and I haven't fallen upon anything. I really would like to use the django way, if there is one, because I feel it would be more fluid and right, but if you all don't know of any way I will probably use urllib. | true | 9,951,216 | 1.2 | 0 | 0 | 2 | urllib2 is the appropriate way to post data if you're looking for the python standard library. Django doesn't provide a specific method to do this (as well it shouldn't). Django goes out of its way to not simply reinvent tools that already exist in the standard library (except email...), so you should never really fear using something out of the python standard library.
requests is also great, but not standard library. Nothing wrong with that though. | 0 | 471 | 0 | 1 | 2012-03-30T22:35:00.000 | python,django,post | Posting data to another Site with Django | 1 | 1 | 1 | 9,951,986 | 0 |
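A small urllib2 sketch of the POST the accepted answer describes (the URL and form fields are placeholders, not Google Checkout's real API):

```python
import urllib
import urllib2

data = urllib.urlencode({"cart": "<checkout-shopping-cart/>", "merchant_id": "12345"})
req = urllib2.Request("https://checkout.example.com/api/request", data)  # a Request with data is sent as POST
response = urllib2.urlopen(req)
body = response.read()
```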
1 | 0 | I have a small website hosted by my university. The policy is that no server side scripting language (e.g. PHP, etc.) is enabled, hence websites are either static or can use client side scripting (e.g. javascript, etc.). I also can't touch the server/configure it/install things.
Anyway, I wanted to add some data from other websites (namely, google scholar citations) that I manage to scrape with Python+lxml, dynamically.
Is there any way I can have these data dynamically queried - on the client side of course?
I tried using IronPython to embed my Python code in my webpage, but it was complaining about failing to find the lxml imported library. But a similar solution would be great.
Or a library in pure javascript which allows for opening and parsing external webpages...?
Thanks! | false | 9,978,862 | 0 | 0 | 0 | 0 | Given the server limitation, I would run those scripts locally to generate HTML, and push that static output onto your server. | 0 | 682 | 0 | 0 | 2012-04-02T15:08:00.000 | javascript,python,ironpython,client-side | Alternative to a server side scripting language for scraping | 1 | 1 | 3 | 9,979,094 | 0 |
0 | 0 | I am writing a web crawler that processes multiple URLs at the same time and works in the following way:
It gets a URL from a list of URLs included in seed_list.txt,
It crawls it and write the data into data.txt;
just like how most of web crawlers work.
When I make it single-threaded, I can get the data in data.txt in the same order with that of the URLs in seed_list.txt, but when it's multi-threaded, I don't seem able to control it, as each thread writes the data to data.txt once it is finished.
Is there a way I can make my web crawler multi-threaded but keep the original order?
Thank you very much!
@Lance, Ignacio, and Maksym,
thank you all for your help - your answers definitely point me in the right direction. | true | 9,980,517 | 1.2 | 0 | 0 | 1 | You could create a class that has an index number of the line from seed_list.txt, the URL, and where the data from the web. An object of this type can be created with the line number and URL, then it is passed to the worker thread which will put the data into the object, and then the object is passed to a write thread which will order the objects by the line number and output the data as necessary. | 1 | 167 | 0 | 2 | 2012-04-02T16:57:00.000 | python,web-crawler,python-2.7 | Processing Order of Python's Threading Module | 1 | 3 | 3 | 9,980,811 | 0 |
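One way to sketch the index-plus-writer-thread idea with the standard Queue module; fetch() stands in for the existing crawling code, and the names are illustrative:

```python
import Queue

results = Queue.Queue()

def crawl(line_no, url):
    data = fetch(url)                      # assumed: your existing single-URL crawl code
    results.put((line_no, data))           # tag each result with its seed_list position

def writer(total, outfile="data.txt"):
    buffered, next_index = {}, 0
    with open(outfile, "w") as out:
        while next_index < total:
            line_no, data = results.get()
            buffered[line_no] = data
            while next_index in buffered:  # flush results strictly in seed order
                out.write(buffered.pop(next_index))
                next_index += 1
```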
0 | 0 | I am writing a web crawler that processes multiple URLs at the same time and works in the following way:
It gets a URL from a list of URLs included in seed_list.txt,
It crawls it and write the data into data.txt;
just like how most of web crawlers work.
When I make it single-threaded, I can get the data in data.txt in the same order with that of the URLs in seed_list.txt, but when it's multi-threaded, I don't seem able to control it, as each thread writes the data to data.txt once it is finished.
Is there a way I can make my web crawler multi-threaded but keep the original order?
Thank you very much!
@Lance, Ignacio, and Maksym,
thank you all for your help - your answers definitely point me in the right direction. | false | 9,980,517 | 0.132549 | 0 | 0 | 2 | Create an additional thread that is responsible for enumerating results from each of the crawler threads. | 1 | 167 | 0 | 2 | 2012-04-02T16:57:00.000 | python,web-crawler,python-2.7 | Processing Order of Python's Threading Module | 1 | 3 | 3 | 9,980,555 | 0 |
0 | 0 | I am writing a web crawler that processes multiple URLs at the same time and works in the following way:
It gets a URL from a list of URLs included in seed_list.txt,
It crawls it and write the data into data.txt;
just like how most of web crawlers work.
When I make it single-threaded, I can get the data in data.txt in the same order with that of the URLs in seed_list.txt, but when it's multi-threaded, I don't seem able to control it, as each thread writes the data to data.txt once it is finished.
Is there a way I can make my web crawler multi-threaded but keep the original order?
Thank you very much!
@Lance, Ignacio, and Maksym,
thank you all for your help - your answers definitely point me in the right direction. | false | 9,980,517 | 0.066568 | 0 | 0 | 1 | You can run a special thread which outputs data and interact with it through a queue. I mean your 'crawling' thread will not write a result to a text file but put it to the queue.
This 'output' thread can sort/filter your results. | 1 | 167 | 0 | 2 | 2012-04-02T16:57:00.000 | python,web-crawler,python-2.7 | Processing Order of Python's Threading Module | 1 | 3 | 3 | 9,980,948 | 0 |
0 | 0 | I need to get the content of URL bar (of currently opened tab) of Firefox using Python. I've found few solutions for Windows (using libraries such as pywin32) but I need it for GNU/Linux (if available then multi-platform way is the most preferred, of course). I have also found ways doing it by installing add-on for Firefox but I want user to install only Python and (when it's needed) libraries for it [Python]. | true | 10,029,355 | 1.2 | 0 | 0 | 0 | You might have some luck with DBus, although I don't know if it is provided in a standard install. | 0 | 283 | 0 | 1 | 2012-04-05T13:16:00.000 | python,linux,firefox,gnu | Python - How to get the Firefox's URL bar content in GNU/Linux? | 1 | 1 | 1 | 10,029,542 | 0 |
0 | 0 | I was able to get attributes about a user by doing queries on LDAP using Python ldap but I don't know how to obtain his DN.
Remark: Doing CN=sAMAccount,base_dn) is not valid because the user can be somewhere in another sub-tree.
Which is the proper way of getting the DN for a user for which I do have the sAMAccount? | true | 10,054,878 | 1.2 | 0 | 0 | 1 | The search result contains:
A list of search result entries. Each search result entry in the list contains the distinguished name of the entry and a (partial) attribute list
or
A list of search result references. Each search result reference contains a sequence of URIs
After the entries or references comes a single search result done message.
Therefore, if any entries matched, they are returned in the list of search result entries, each of which contains the distinguished name of the entry that was matched. Your python API documentation should contain information as to how to extract the distinguished name of the entry that matched. | 0 | 1,443 | 0 | 0 | 2012-04-07T13:24:00.000 | python,active-directory,ldap | How do I get the full DN, distinguishedName of an user with python ldap? | 1 | 1 | 1 | 10,056,144 | 0 |
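A rough python-ldap sketch of searching the whole subtree by sAMAccountName and reading the DN from each result entry (server, base DN, and credentials are placeholders):

```python
import ldap
import ldap.filter

con = ldap.initialize("ldap://dc.example.com")
con.simple_bind_s("binduser@example.com", "password")     # placeholder credentials

base_dn = "DC=example,DC=com"
flt = "(&(objectClass=user)(sAMAccountName=%s))" % ldap.filter.escape_filter_chars("jsmith")
results = con.search_s(base_dn, ldap.SCOPE_SUBTREE, flt, ["distinguishedName"])

for dn, attrs in results:
    if dn is not None:        # referrals come back with dn set to None
        print dn              # the full DN of the matching user
```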
0 | 0 | I am trying to build a simple email client with python and IMAPClient. The problem is that the folder names aren't uniform for all servers.
If I mark an e-mail as spam, it has to be moved from the inbox folder to the spam/junk folder, but I am unable to do that because I don't know what the folder name would be (Spam, INBOX.junk or [Gmail]/Spam).
How do other email clients work with varying folder names? | false | 10,064,769 | 0 | 1 | 0 | 0 | They try a lot of possibilities, let you choose one and/or create one ;) | 0 | 1,797 | 0 | 1 | 2012-04-08T17:24:00.000 | python,email,imap | Identifying IMAP mail folders (spam,sent...), folder names vary with servers | 1 | 3 | 4 | 10,064,803 | 0
0 | 0 | I am trying to build a simple email client with python and IMAPClient. The problem is that the folder names aren't uniform for all servers.
If I mark an e-mail as spam, it has to be moved from the inbox folder to the spam/junk folder, but I am unable to do that because I don't know what the folder name would be (Spam, INBOX.junk or [Gmail]/Spam).
How do other email clients work with varying folder names? | false | 10,064,769 | 0 | 1 | 0 | 0 | Roundcube has this in both the server and user configuration. I don't know about other mail clients, but I guess they use heuristics, either by just looking at what folders there are or by using knowledge about the particular IMAP server. | 0 | 1,797 | 0 | 1 | 2012-04-08T17:24:00.000 | python,email,imap | Identifying IMAP mail folders (spam,sent...), folder names vary with servers | 1 | 3 | 4 | 10,064,809 | 0
0 | 0 | I am trying to build a simple email client with python and IMAPClient. The problem is that the folder names aren't uniform for all servers.
If I mark an e-mail as spam, it has to be moved from the inbox folder to the spam/junk folder, but I am unable to do that because I don't know what the folder name would be (Spam, INBOX.junk or [Gmail]/Spam).
How do other email clients work with varying folder names? | true | 10,064,769 | 1.2 | 1 | 0 | 2 | I believe that for common email providers they have a mapping as to what that provider uses by default (Gmail, Hotmail, Exchange, etc.).
Another way of doing it is to let the user decide the first time, persist the setting for that account and not ask again.
A mixed approach would be to try to detect all common variations and use the first valid one you encounter. If there is more than one, simply allow the user to choose. | 0 | 1,797 | 0 | 1 | 2012-04-08T17:24:00.000 | python,email,imap | Identifying IMAP mail folders (spam,sent...), folder names vary with servers | 1 | 3 | 4 | 10,064,811 | 0
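A small IMAPClient sketch of the "try common variations" approach (the host, credentials and candidate names are assumptions): list the folders once, pick the first plausible junk folder, and fall back to asking the user.

    from imapclient import IMAPClient

    CANDIDATES = ("junk", "spam", "[gmail]/spam", "inbox.junk")   # common variations

    server = IMAPClient("imap.example.com", ssl=True)   # placeholder host
    server.login("user", "password")                    # placeholder credentials

    folders = [name for flags, delimiter, name in server.list_folders()]
    spam_folder = next((f for f in folders if f.lower() in CANDIDATES), None)
    if spam_folder is None:
        spam_folder = ask_user_to_pick(folders)   # hypothetical interactive fallback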
1 | 0 | Exception: ERROR: Element xpath=//*[@id='seriesNwsHldr']/div[2]/p[1]/a not found.
I checked in Firebug. The path is correct, but I don't know the reason for this test case to fail. | false | 10,068,871 | 0 | 0 | 0 | 0 | It looks like a timing problem. Maybe you can intentionally add a wait until the element appears on the page.
Another possibility is that the element you are trying to interact with is hidden.
It would be great if you could post the errors you are getting when your test fails. | 0 | 146 | 0 | 0 | 2012-04-09T04:24:00.000 | python,xpath,selenium-rc | sel.click("xpath=//*[@id='seriesNwsHldr']/div[2]/p[1]/a") is not working | 1 | 2 | 2 | 10,233,957 | 0
1 | 0 | Exception: ERROR: Element xpath=//*[@id='seriesNwsHldr']/div[2]/p[1]/a not found.
I checked in Firebug. The path is correct, but I don't know the reason for this test case to fail. | false | 10,068,871 | 0 | 0 | 0 | 0 | Can I have the site for checking?
BTW, sometimes you have to wait for the page to load, so before this action you need an instruction like:
clickAndWait(30000)
in my case it solves a lot of problems :) | 0 | 146 | 0 | 0 | 2012-04-09T04:24:00.000 | python,xpath,selenium-rc | sel.click("xpath=//*[@id='seriesNwsHldr']/div[2]/p[1]/a") is not working | 1 | 2 | 2 | 10,234,023 | 0
1 | 0 | I have a problem with session handling in Chrome and Firefox. When I authenticate to a website, close it and reopen the home page in Firefox, it shows my name. But when I do the same thing in Chrome it doesn't show my name; it shows me as a guest. The session for this site is implemented with Tipfy. Do I have to configure the session management? | true | 10,069,594 | 1.2 | 0 | 0 | 1 | Check the Tipfy session configuration attributes; check the path attribute '/' - you need to make some modifications there | 0 | 308 | 0 | 1 | 2012-04-09T06:10:00.000 | python,google-app-engine,firefox,google-chrome,tipfy | Session Handling in Chrome and Firefox | 1 | 1 | 1 | 10,085,230 | 0
0 | 0 | I am a newbie in python sockets and am really troubled by the stubbornness of the socket.accept() method. I really need a way of ending a socket.accept() method or any other alternative to socket.accept() which runs one time only. | true | 10,090,236 | 1.2 | 0 | 0 | 16 | You have several options here:
Close the listening socket from another thread - the accept() will raise an exception if it fails.
Open a local connection to the listening socket - that makes the accept() return by design.
Use an accept mechanism that can block on more than one synchronization object so that the wait can be signaled to return without a connection.
Use a non-blocking alternative to accept() (async, like AcceptEx() and overlapped IO on Windows). | 1 | 18,731 | 0 | 16 | 2012-04-10T13:59:00.000 | python,sockets | How to stop a python socket.accept() call? | 1 | 1 | 1 | 10,090,348 | 0
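A rough sketch of the first two options (the port and host are arbitrary): the serving thread blocks in accept(), and the controlling code either closes the listener so accept() raises socket.error, or opens a throwaway local connection so accept() returns once.

    import socket
    import threading

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("0.0.0.0", 9000))
    listener.listen(1)

    def serve_once():
        try:
            conn, addr = listener.accept()      # blocks here
        except socket.error:
            return                              # listener was closed by the other thread
        conn.close()

    t = threading.Thread(target=serve_once)
    t.start()

    # Option 1: close the listening socket so accept() raises and the thread exits.
    listener.close()

    # Option 2 (instead of closing): poke the listener so accept() returns normally.
    # socket.create_connection(("127.0.0.1", 9000)).close()

    t.join()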
1 | 0 | I need to navigate through a website that is written mostly in JavaScript. There are no hard links at all, as the page is simply modified through the script. I can do what I need to using JavaScript injections one after another, but Chrome starts searching for my input instead of injecting it after a certain string length. I've tried to use frames to do this in HTML, but Chrome won't let me use JavaScript inside the frame since the source is from a different domain. Is there a good way that I can do this? I've looked into using Java or Python, but I don't see anything that lets you work with JavaScript.
EDIT: Thanks for telling me about different software, but I don't want to use other third-party software. I would really like to know how to execute Javascript injections in a systematic manner from a HTML page. I can do it from the browser, so why can't I do it from an HTML document? | false | 10,098,963 | 0.197375 | 0 | 0 | 2 | You can use a tool like Selenium to emulate a user clicking things in a web browser (I believe it actually "drives" a real instance of whatever browser you choose.)
Selenium has a domain-specific language for specifying what actions you want to perform, and Python bindings for controlling it programmatically. I haven't actually used it, so I can't say much more about it, but you should go check it out. | 0 | 343 | 0 | 2 | 2012-04-11T02:09:00.000 | java,javascript,python,html,navigation | How do I navigate a website through software? | 1 | 1 | 2 | 10,099,151 | 0 |
0 | 0 | I have a CGI script written in Python that receives some complex HTTP requests, which could be POST or GET.
I am looking for a simple way to log the request in some way so I can replay it later any number of times I want. | false | 10,131,506 | 0 | 1 | 0 | 0 | Seems you're looking to cache queries made to your site.
After calculating a response, save a record with the request url, method, params, and response in a your preferred storage.
Depending on your environment and the number of requests, your may choose a database or filesystem.
However, you need to take into account that some of the result data may change, in which case you'd need to remove cached data that depend on that data. | 0 | 91 | 0 | 0 | 2012-04-12T20:44:00.000 | python,http,cgi | How can I save a HTTP request from a python cgi scripts so I can easily repeat it? | 1 | 1 | 2 | 40,729,146 | 0 |
0 | 0 | For example:
I have a task named "URLDownload"; the task's function is to download a large file from the internet.
Now I have a Worker Process running, but have about 1000 files to download.
It is easy for a Client Process to create 1000 tasks and send them to the Gearman Server.
My question is whether the Worker Process will do the tasks one by one, or accept multiple tasks at one time.
If the Worker Process can accept multiple tasks, how can I limit the task pool size in the Worker Process?
1) You can run multiple workers (this is the most common method). Workers sit in poll() when they aren't processing so this model works pretty well.
2) Write a fork() implementation around the worker. This way you can fire up a set number of worker processes, but don't have to monitor multiple processes. | 0 | 594 | 0 | 0 | 2012-04-13T02:45:00.000 | gearman,python-gearman | is PYTHON Gearman Worker accept multi-tasks | 1 | 1 | 1 | 10,164,124 | 0 |
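A sketch of option 1 using the multiprocessing module (the Gearman server address, task name and pool size are assumptions about your setup): since each GearmanWorker handles one job at a time, the number of processes effectively is the task pool size.

    import multiprocessing
    import gearman

    POOL_SIZE = 4   # assumed limit on concurrent downloads

    def download(gearman_worker, gearman_job):
        url = gearman_job.data
        # ... fetch the file here ...
        return "done"

    def run_worker():
        worker = gearman.GearmanWorker(["localhost:4730"])   # assumed server address
        worker.register_task("URLDownload", download)
        worker.work()   # processes one job at a time, forever

    if __name__ == "__main__":
        for _ in range(POOL_SIZE):
            multiprocessing.Process(target=run_worker).start()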
0 | 0 | The goal is to collect the MAC address of the connected local NIC,
not a list of all local NICs :)
By using socket and connect (to_a_website),
I can just use getsockname() to get the IP,
that is being used to connect to the Internet.
But from the IP how can I then get the MAC address of the local NIC ?
The main reason for the question is the case where there are multiple NICs. | false | 10,137,594 | 0 | 0 | 0 | 0 | Another roundabout way to get a system's MAC ID is to use the ping command to ping the system's name and then perform an arp -a request against the IP address that was pinged. The downfall of doing it that way, though, is that you need to read the ping response into memory in Python, perform a readline operation to retrieve the IP address, and then read the corresponding arp data into memory, while writing the system name, IP address and MAC ID of the machine in question either to the display or to a test file.
I'm trying to do something similar as a system verification check to improve the automation of a test procedure, and the script is in Python for the time being. | 0 | 8,993 | 0 | 1 | 2012-04-13T08:30:00.000 | python,networking,ethernet | Python - Get MAC address of only the connected local NIC | 1 | 1 | 6 | 16,929,947 | 0
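A best-effort sketch of that ping-then-arp idea on Windows (the host name, command flags and arp output format are assumptions; this is not portable):

    import re
    import socket
    import subprocess

    host = "target-machine"                       # assumed host name
    ip = socket.gethostbyname(host)
    subprocess.call(["ping", "-n", "1", ip])      # Windows ping; use "-c" on Linux

    arp_output = subprocess.check_output(["arp", "-a", ip])
    match = re.search(r"([0-9a-fA-F]{2}[-:]){5}[0-9a-fA-F]{2}", arp_output)
    if match:
        print host, ip, match.group(0)            # name, IP and MAC of the remote NIC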
1 | 0 | I have an application I am developing on top of GAE, using the Python APIs. I am using the local development server right now. The application involves parsing a large block of XML data received from an outside service.
So the question is - is there an easy way to get this XML data exported out of the GAE application - e.g., in regular app I would just write it to a temp file, but in GAE app I can not do that. So what could I do instead? I can not easily run all the code that produces the service call outside of GAE since it uses some GAE functions to create the call, but it would be much easier if I could take the XML result out and develop/test the parser part outside and then put it back to GAE app.
I tried to log it using logging and then extract it from the console, but when the XML gets big it doesn't work well. I know there are bulk data import/export APIs, but that seems to be overkill for extracting just this one piece of information - writing it to the datastore and then exporting the whole store. So what is the best way to do it? | true | 10,152,055 | 1.2 | 0 | 0 | 3 | How about writing the XML data to the blobstore and then writing a handler that uses send_blob to download it to your local file system?
You can use the files API to write to the blobstore from your application. | 0 | 66 | 1 | 0 | 2012-04-14T08:01:00.000 | python,google-app-engine | Getting a piece of information from development GAE server to local filesystem | 1 | 1 | 1 | 10,152,181 | 0
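A sketch of that flow with the Files API and a download handler, roughly as they looked on the Python runtimes of that era (route wiring and names are assumptions, and the Files API has since been deprecated):

    from google.appengine.api import files
    from google.appengine.ext import blobstore
    from google.appengine.ext.webapp import blobstore_handlers

    def save_xml_to_blobstore(xml_data):
        file_name = files.blobstore.create(mime_type="text/xml")
        with files.open(file_name, "a") as f:
            f.write(xml_data)
        files.finalize(file_name)
        return files.blobstore.get_blob_key(file_name)

    class DownloadXml(blobstore_handlers.BlobstoreDownloadHandler):
        def get(self, blob_key):
            self.send_blob(blobstore.BlobKey(blob_key), save_as="dump.xml")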
0 | 0 | I am trying to access and parse a website at work using Python. The site's authorization is done via SiteMinder, so the usual urllib/urllib2 username/password approach does not work.
Does anyone have an idea how to do that?
Thanks
NoamM | false | 10,169,500 | 0.066568 | 0 | 0 | 1 | Agree with Martin - you need to just replicate what the browser does. Siteminder will pass you a token once successfully authenticated. I have to do this as well, will post once I find a good way. | 0 | 2,741 | 0 | 3 | 2012-04-16T06:19:00.000 | python,url,web,siteminder | Use Python/urllib to access web sites with "siteminder" authentication? | 1 | 1 | 3 | 19,700,519 | 0 |
0 | 0 | I am using shutil.copy() to transfer files from one server to another server on a network, both Windows.
I have used the shutil and os modules for a lot of automation tasks, but confined to the local machine. Are there better approaches to transfer files (I mean in terms of performance) from one server to another? | false | 10,196,803 | 0.099668 | 0 | 0 | 1 | For automation tasks that are not confined to the local machine, use Fabric | 0 | 1,457 | 1 | 3 | 2012-04-17T18:22:00.000 | python,windows | Transferring files between Windows Servers using shutil copy/move | 1 | 1 | 2 | 10,197,024 | 0
1 | 0 | I have developed a few python programs that I want to make available online.
I am new to web services, and I am not sure what I need to do in order to create a service where somebody makes a request to a URL (for example), and the URL triggers a Python program that displays something in the user's browser, or a set of inputs is given to the program via the browser, and then Python does whatever it is supposed to do.
I was playing with Google App Engine, which runs fine with the tutorial, and was planning to use it because it looks easy, but the problem with GAE is that it does not work well (or does not work at all) with some libraries that I plan to use.
I guess what I am trying to do is some sort of API using my WebFaction account.
Can anybody point me in the right directions? What choices do I have in WebFaction? What are the easiest tools available?
Thank you very much for your help in advance.
Cheers | true | 10,199,697 | 1.2 | 0 | 0 | 1 | Well, your question is a little bit generic, but here are a few pointers/tips:
Webfaction allows you to install pretty much anything you want (you need to compile it / or ask the admins to install some CentOS package for you).
They provide some default Apache server with mod_wsgi, so you can run web2py, Django or any other wsgi frameworks.
Most popular Python web frameworks have available installers in Webfaction (web2py, django...), so I would recommend you to go with one of them.
I would also install supervisord to keep your service running after some reboot/crash/problem.
I would be glad to help you if you have any specific question... | 0 | 597 | 0 | 1 | 2012-04-17T21:49:00.000 | python,web-services,api | RESTful Web service or API for a Python program in WebFaction | 1 | 1 | 1 | 10,332,534 | 0 |
0 | 0 | All the forks of gevent-socketio in bitbucket and github have examples/chat.py that do not work.
Can anyone find me a working example of gevent-socketio? | false | 10,204,230 | 0.066568 | 1 | 0 | 1 | What browser do you use? I saw this behavior with IE; both Mozilla and Chrome were fine. There were issues with the flashsocket protocol which I have fixed, so IE should work, but the jQuery UI does not work and that is the issue. I don't know enough JS to fix it | 0 | 7,663 | 0 | 4 | 2012-04-18T06:46:00.000 | python,websocket,socket.io,gevent | Do anyone have a working example of gevent-socketio? | 1 | 1 | 3 | 11,271,531 | 0
1 | 0 | How can I check the status code of a request? I want to check what kind of redirects were used to access a page.
For response objects I would use *response.status_code* | false | 10,212,293 | 0.099668 | 0 | 0 | 1 | I might be wrong, but there is no such thing as an incoming request status code; the application making the requests, in case of a redirect, gets a 302 from the initial request, checks the Location header and makes another request. And a history of the incoming request, in the shape of something like "traceroute", just doesn't exist in HTTP. | 0 | 111 | 0 | 0 | 2012-04-18T15:06:00.000 | python,google-app-engine,http-status-codes | How to determine status code of request | 1 | 1 | 2 | 10,212,363 | 0
1 | 0 | I am using the Scrapy crawler to crawl a website with over 100k pages. Speed is the big concern in this case. Today I noticed that hxs.select('//*').re('something') is way slower than hxs.select('//script/text()').re('something'). Can any expert explain to me why?
As I understand, the crawler should download the entire page no matter what xpath selector I use. So the xpath should not affect the speed much at all.
Thanks a lot for any tips. | false | 10,217,708 | 0.066568 | 0 | 0 | 1 | This has nothing to do with download speed.
XPath //* selects the entire page. XPath //script/text() selects only text inside script elements. So of course the second one is faster, because there is less text to search with the re() call! | 0 | 410 | 0 | 0 | 2012-04-18T20:47:00.000 | python,web-crawler,scrapy | How does xpathselector affect the speed of scrapy crawl running? | 1 | 3 | 3 | 10,217,884 | 0 |
1 | 0 | I am using the Scrapy crawler to crawl a website with over 100k pages. Speed is the big concern in this case. Today I noticed that hxs.select('//*').re('something') is way slower than hxs.select('//script/text()').re('something'). Can any expert explain to me why?
As I understand, the crawler should download the entire page no matter what xpath selector I use. So the xpath should not affect the speed much at all.
Thanks a lot for any tips. | false | 10,217,708 | 0 | 0 | 0 | 0 | XPath definitely has a role in crawler speed: the crawler downloads the page, but XPath processes the HTML that the crawler downloaded, so if the page is big then XPath will take time to process the whole HTML. | 0 | 410 | 0 | 0 | 2012-04-18T20:47:00.000 | python,web-crawler,scrapy | How does xpathselector affect the speed of scrapy crawl running? | 1 | 3 | 3 | 10,248,266 | 0
1 | 0 | I am using the Scrapy crawler to crawl a website with over 100k pages. Speed is the big concern in this case. Today I noticed that hxs.select('//*').re('something') is way slower than hxs.select('//script/text()').re('something'). Can any expert explain to me why?
As I understand, the crawler should download the entire page no matter what xpath selector I use. So the xpath should not affect the speed much at all.
Thanks a lot for any tips. | false | 10,217,708 | 0.066568 | 0 | 0 | 1 | I am afraid that you might look for 'something' in the entire document, so you probably should still use hxs.select('//*').re('something').
And about the speed question: the answer is that if you look for the word 'something' in a document which is 4k large, of course it will take longer than filtering the document for text() first and then looking for that word within that text. | 0 | 410 | 0 | 0 | 2012-04-18T20:47:00.000 | python,web-crawler,scrapy | How does xpathselector affect the speed of scrapy crawl running? | 1 | 3 | 3 | 10,217,990 | 0
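For reference, the two selector styles discussed above look roughly like this in a scrapy 0.1x-era callback (the regex is just an example; response comes from the spider):

    from scrapy.selector import HtmlXPathSelector

    hxs = HtmlXPathSelector(response)                            # response from the spider callback
    everywhere = hxs.select('//*').re(r'something')              # scans all text nodes
    in_scripts = hxs.select('//script/text()').re(r'something')  # scans script bodies only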
1 | 0 | I am doing some R&D on Selenium+Python. I wrote some test cases in Python using Selenium WebDriver and the unittest module. I want to know how I can create a report of the test cases. Is there an inbuilt solution available in Selenium, or do I need to write code to generate the file?
Or is there any other web testing framework with JavaScript support available in Python which has reporting functionality?
I am basically new to python as well as selenium. Just trying to explore. | false | 10,218,679 | 0.033321 | 1 | 0 | 1 | My experience has been that any sufficiently useful test framework will end up needing a customized logging solution. You are going to end up wanting domain specific and context relevant information, and the pre-baked solutions never really fit the bill by virtue of being specifically designed to be generic and broadly applicable. If you are already using Python, I'd suggest looking in to the logging module and learning how to write Handlers and Formatters. It's actually pretty straight forward, and you will end up getting better results than trying to shoehorn the logging you need in to some selenium-centric module. | 0 | 24,481 | 0 | 7 | 2012-04-18T21:59:00.000 | python,selenium | Selenium+python Reporting | 1 | 1 | 6 | 10,218,792 | 0 |
0 | 0 | I am trying to create a related field on OpenERP 6.0.1. Is it possible to define two different one2many relations for the same field name? What changes must I make in the .py file and XML files? | true | 10,222,493 | 1.2 | 0 | 0 | 2 | No, you cannot do that:
the field names are keys in a Python dictionary, in what you write the second invoice_line will overwrite the first one
this would mess up OpenERP's ORM anyway as it does not handle relations to different tables.
So you need two different columns, one relative to account.invoice.line and the other to account.service.line. If you really need a merged view, then you can add a function field which will return the union of the invoice and service lines found by the two previous fields. But I'm not sure the forms will be able to handle this. | 0 | 759 | 0 | 0 | 2012-04-19T06:05:00.000 | python,xml,openerp | onetomany relation field in Openerp | 1 | 1 | 1 | 10,224,838 | 0 |
0 | 0 | I'm attempting to get the user's IP from inside of my upload handler, but it seems that the only IP supplied is 0.1.0.30. Is there any way around this or any way to get the user's actual IP from inside of the upload handler? | true | 10,227,042 | 1.2 | 0 | 0 | 2 | Try checking for user IP at the point where you generate upload url via create_upload_url().
The upload handler is actually called by Blobstore upload logic after the upload is done, hence the strange IP. | 0 | 204 | 1 | 1 | 2012-04-19T11:28:00.000 | google-app-engine,python-2.7,blobstore | How can I get a user's IP from a Blobstore upload handler? | 1 | 1 | 1 | 10,227,981 | 0 |
0 | 0 | I have a script that sends a time-sensitive notification to users when there is a new question directed to them. However, I found that some people leave their computers open and go grab lunch, therefore missing notifications.
I'm looking to put together a script that detects if the user is idle for 5 minutes, and if so, it would show them as 'offline' and close down notifications.
I was curious if it is possible to detect inactivity even across tabs? (for example if a user switches to another tab to Facebook.com and stays active there, they would be seen as 'active' even though they are not on our webpage specifically). | false | 10,234,885 | 0.039979 | 0 | 0 | 1 | Store their last activity in a database table when they are active. You can use mouse movement, keypresses, or some other activity to update the timestamp. Periodically poll that table with an ajax call on the page on which the user would see their online/offline status. If the last active time is > 5 minutes, show them as offline or idle. | 0 | 3,773 | 0 | 3 | 2012-04-19T18:55:00.000 | javascript,detect,python-idle | Is it possible to detect idle time even across tabs? | 1 | 3 | 5 | 10,234,996 | 0 |
0 | 0 | I have a script that sends a time-sensitive notification to users when there is a new question directed to them. However, I found that some people leave their computers open and go grab lunch, therefore missing notifications.
I'm looking to put together a script that detects if the user is idle for 5 minutes, and if so, it would show them as 'offline' and close down notifications.
I was curious if it is possible to detect inactivity even across tabs? (for example if a user switches to another tab to Facebook.com and stays active there, they would be seen as 'active' even though they are not on our webpage specifically). | true | 10,234,885 | 1.2 | 0 | 0 | 2 | Everything that happens when the user is NOT on your site is impossible to track (luckily).
So no, this is not possible (think about the security).
UPDATE
Now that I think of it, it is possible, however it is very unlikely that you can do it. If your name were Google you would have come a long way, because lots of websites use Google Analytics. But other than that: NO, not possible for the reasons mentioned. | 0 | 3,773 | 0 | 3 | 2012-04-19T18:55:00.000 | javascript,detect,python-idle | Is it possible to detect idle time even across tabs? | 1 | 3 | 5 | 10,235,006 | 0
0 | 0 | I have a script that sends a time-sensitive notification to users when there is a new question directed to them. However, I found that some people leave their computers open and go grab lunch, therefore missing notifications.
I'm looking to put together a script that detects if the user is idle for 5 minutes, and if so, it would show them as 'offline' and close down notifications.
I was curious if it is possible to detect inactivity even across tabs? (for example if a user switches to another tab to Facebook.com and stays active there, they would be seen as 'active' even though they are not on our webpage specifically). | false | 10,234,885 | 0.039979 | 0 | 0 | 1 | if I am on such a thing I use either the HTML5 Visibility API or fallback to blur and focus events observing when the user left the page and then returns... leaving means unfocus the browser window or tab (but still keeping the page open)
but since you wanna react on inactivity... hmmm you could start a timeout (of course that would need a global event delegation for many events to stop it if something happens like submit, click, change, mousemove and so on) | 0 | 3,773 | 0 | 3 | 2012-04-19T18:55:00.000 | javascript,detect,python-idle | Is it possible to detect idle time even across tabs? | 1 | 3 | 5 | 10,235,004 | 0 |
1 | 0 | I need to set up a page that will let the user upload a file, modify the file with a script, and then serve the file back to the user.
I have the uploading and modifying parts down, but I don't know where to put the file. They are going to be in the area of 1 or 2MB, and I have very little space on my webhosting plan, so I want to get rid of the files as soon as possible. There's no reason for the files to exist any longer than the moment the users are given the option to download them by their browser upon being redirected.
Is the only way to this with a cron job that checks the creation time of the files and deletes them if they're a certain age?
I'm working with python and PHP.
edit:
First the file is uploaded. Then the location of the file is sent back to the user. The javascript on the page redirects to the path of the file. The browser opens save file dialog, and they choose to save the file or cancel. If they cancel, I want to delete the file immediately. If they choose to save the file, I want to delete the file once their download has completed. | true | 10,241,950 | 1.2 | 0 | 0 | 0 | I don't know why I didn't think of it, but I was on an IRC for Python, discussing a completely unrelated issue, and someone asked why I didn't just serve the file. Exactly!
I never needed to save the file, just to send it back to the user with the correct header. Problem solved! | 0 | 136 | 0 | 3 | 2012-04-20T07:23:00.000 | php,python,temporary-files | Store file for duration of time on webpage | 1 | 1 | 3 | 10,266,536 | 0 |
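The "just serve the file" fix boils down to sending the right headers from the CGI script instead of writing anything to disk; a rough sketch (the form field name and download file name are assumptions):

    #!/usr/bin/env python
    import cgi
    import sys

    form = cgi.FieldStorage()
    uploaded = form["datafile"]             # assumed form field name
    modified = uploaded.file.read()         # ... run the real modification here ...

    sys.stdout.write("Content-Type: application/octet-stream\r\n")
    sys.stdout.write('Content-Disposition: attachment; filename="result.dat"\r\n')
    sys.stdout.write("Content-Length: %d\r\n\r\n" % len(modified))
    sys.stdout.write(modified)

The Content-Disposition header is what triggers the browser's save dialog, so nothing ever needs to be stored or cleaned up on the server.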
0 | 0 | I'm new to cgi and python, so I've been making quite a few mistakes. The problem is that if something goes wrong with the script, I just get a 500 error on the webpage. The only way I can see what caused the error is by executing the page via ssh, but the page involves file uploads, so I can't test that part.
Is there a way I can output Python errors to a file? | true | 10,253,898 | 1.2 | 1 | 0 | 3 | There are a couple of options: use the logging module as directed, tail the server's error log, or enable cgitb with import cgitb; cgitb.enable()
Depending on exactly where the error occurs, the error will show up in different places, so checking all three, and using print statements and exception blocks helps to debug your code.
With file uploads, I've found I have to explicitly state enctype="multipart/form-data" in the form tag or it breaks, often quietly. | 0 | 2,081 | 0 | 0 | 2012-04-20T21:37:00.000 | python,cgi | Logging python errors on website? | 1 | 1 | 2 | 10,254,010 | 0 |
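Two of those options in code form, assuming a logs/ directory that the web server user can write to (run_upload is a hypothetical stand-in for the real work):

    import cgitb
    import logging

    # Write nicely formatted tracebacks to files instead of the browser:
    cgitb.enable(display=0, logdir="logs")

    # Or log errors explicitly with the logging module:
    logging.basicConfig(filename="logs/cgi_errors.log", level=logging.DEBUG)
    try:
        run_upload()          # hypothetical function doing the real work
    except Exception:
        logging.exception("upload handler failed")
        raise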
0 | 0 | I would like to know how I would have a Python web scrape dump all of its results into Excel.
It's not that I don't know how to web scrape; it's just that I do not know how to scrape into Excel. | false | 10,255,185 | 0.197375 | 0 | 0 | 2 | If you don't want to introduce a full Excel library, you can write an HTML table or CSV and Excel will happily import those. The downside with this is that you're limited to one worksheet and no formulas. | 0 | 4,830 | 0 | 2 | 2012-04-21T00:44:00.000 | python,excel,web-scraping | How to scrape in python into excel | 1 | 1 | 2 | 10,255,232 | 0
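A minimal sketch of the CSV route (the rows are placeholders for whatever the scraper extracts); Excel opens the result directly:

    import csv

    rows = [("title", "price"), ("Widget", "9.99")]   # placeholder scraped data
    with open("scraped.csv", "wb") as f:              # "wb" on Python 2 avoids blank lines
        writer = csv.writer(f)
        writer.writerows(rows)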
0 | 0 | I need to download a file that could be potentially quite large (300MB) that will later be saved locally. I don't want to read the remote file in one go to prevent excessive memory usage and intend to read the remote file in small chunks.
Is there an optimum size for these chunks? | false | 10,307,953 | 0 | 0 | 0 | 0 | It depends on what type of connection you expect to use. I think 64 KB could be enough. | 0 | 404 | 0 | 0 | 2012-04-25T00:45:00.000 | python,download | Downloading a file in chunks -- is there an optimal sized chunk? | 1 | 1 | 1 | 10,308,010 | 0
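Whatever chunk size you settle on, the read loop looks roughly like this (the URL, file name and the 64 KB figure are just examples):

    import urllib2

    CHUNK = 64 * 1024                                   # example chunk size
    response = urllib2.urlopen("http://example.com/big.bin")
    with open("big.bin", "wb") as out:
        while True:
            chunk = response.read(CHUNK)
            if not chunk:
                break
            out.write(chunk)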
0 | 0 | In my case I'm using the Dropbox API. Currently I'm storing the key and secret in a JSON file, just so that I can gitignore it and keep it out of the Github repo, but obviously that's no better than having it in the code from a security standpoint. There have been lots of questions about protecting/obfuscating Python before (usually for commercial reasons) and the answer is always "Don't, Python's not meant for that."
Thus, I'm not looking for a way of protecting the code but just a solution that will let me distribute my app without disclosing my API details. | false | 10,356,870 | 0.197375 | 0 | 0 | 3 | Plain text. Any obfuscation attempt is futile if the code gets distributed. | 1 | 5,852 | 0 | 8 | 2012-04-27T19:42:00.000 | python,api,dropbox-api,api-key | How should I store API keys in a Python app? | 1 | 3 | 3 | 10,357,811 | 0 |
0 | 0 | In my case I'm using the Dropbox API. Currently I'm storing the key and secret in a JSON file, just so that I can gitignore it and keep it out of the Github repo, but obviously that's no better than having it in the code from a security standpoint. There have been lots of questions about protecting/obfuscating Python before (usually for commercial reasons) and the answer is always "Don't, Python's not meant for that."
Thus, I'm not looking for a way of protecting the code but just a solution that will let me distribute my app without disclosing my API details. | false | 10,356,870 | 0.132549 | 0 | 0 | 2 | Don't know if this is feasible in your case. But you can access the API via a proxy that you host.
The requests from the Python app go to the proxy, the proxy makes the requests to the Dropbox API and returns the response to the Python app. This way your API key will be at the proxy that you're hosting. Access to the proxy can be controlled by any means you prefer (for example, username and password). | 1 | 5,852 | 0 | 8 | 2012-04-27T19:42:00.000 | python,api,dropbox-api,api-key | How should I store API keys in a Python app? | 1 | 3 | 3 | 10,360,373 | 0
0 | 0 | In my case I'm using the Dropbox API. Currently I'm storing the key and secret in a JSON file, just so that I can gitignore it and keep it out of the Github repo, but obviously that's no better than having it in the code from a security standpoint. There have been lots of questions about protecting/obfuscating Python before (usually for commercial reasons) and the answer is always "Don't, Python's not meant for that."
Thus, I'm not looking for a way of protecting the code but just a solution that will let me distribute my app without disclosing my API details. | false | 10,356,870 | 0.132549 | 0 | 0 | 2 | There are two ways depending on your scenario:
If you are developing a web application for end users, just host it in a way that your API key does not get disclosed. So keeping it gitignored in a separate file and only uploading it to your server should be fine (as long as there is no breach of your server). Any obfuscation will not add any practical benefit; it will just give a false feeling of security.
If you are developing a framework/library for developers or a client application for end users, ask them to generate an API key on their own. | 1 | 5,852 | 0 | 8 | 2012-04-27T19:42:00.000 | python,api,dropbox-api,api-key | How should I store API keys in a Python app? | 1 | 3 | 3 | 11,080,881 | 0 |
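A sketch of the "keep it out of the repo" option: the key lives in a gitignored JSON file or in environment variables, never in the code (the file and variable names here are arbitrary):

    import json
    import os

    def load_dropbox_credentials(path="secrets.json"):   # gitignored file
        if os.path.exists(path):
            with open(path) as f:
                return json.load(f)                      # {"app_key": "...", "app_secret": "..."}
        return {"app_key": os.environ["DROPBOX_APP_KEY"],
                "app_secret": os.environ["DROPBOX_APP_SECRET"]}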
0 | 0 | What's the purpose of using makefile() when working with Python sockets?
I can make a program work just with the send() and recv() functions in Python, but I read that it is better to use the makefile() method to buffer the data. I don't understand the relation and the differences... any help?
Tks ! | true | 10,372,287 | 1.2 | 0 | 0 | 3 | You can use makefile if you find the file interface of Python convenient. For example, you can then use methods like readlines on the socket (you'd have to implement it manually when using recv). This can be more convenient if sending text data on the socket, but YMMV. | 0 | 3,003 | 0 | 4 | 2012-04-29T12:54:00.000 | python,sockets | Differences between makefile() and send() recv() Python | 1 | 1 | 1 | 10,372,331 | 0 |
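A small illustration of the convenience makefile() buys you for line-oriented protocols; with plain recv() you would have to buffer and split the lines yourself:

    import socket

    sock = socket.create_connection(("example.com", 80))
    sock.sendall("GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")

    f = sock.makefile("rb")          # file-like wrapper over the same socket
    status_line = f.readline()       # e.g. "HTTP/1.0 200 OK"
    for line in f:                   # iterate over the remaining lines
        if not line.strip():
            break                    # blank line ends the headers
    f.close()
    sock.close()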
0 | 0 | I want to find a way to get all the sub-elements of an element tree like the way ElementTree.getchildren() does, since getchildren() is deprecated since Python version 2.7.
I don't want to use it anymore, though I can still use it currently. | false | 10,408,927 | 1 | 0 | 0 | 13 | In the pydoc it is mentioned to use the list() method on the node to get the child elements.
list(elem) | 0 | 103,415 | 0 | 33 | 2012-05-02T06:43:00.000 | python,xml,elementtree | How to get all sub-elements of an element tree with Python ElementTree? | 1 | 1 | 5 | 51,963,017 | 0 |
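In other words, list(elem) gives the direct children (what getchildren() used to return), while elem.iter() walks all descendants; a quick example:

    import xml.etree.ElementTree as ET

    root = ET.fromstring("<a><b><c/></b><d/></a>")
    print [child.tag for child in list(root)]    # direct children: ['b', 'd']
    print [node.tag for node in root.iter()]     # all elements:    ['a', 'b', 'c', 'd']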
0 | 0 | I was looking for a Whois API, but most of them charge a heavy price and are not reliable enough. We can code in Python or PHP.
We need to make a Whois lookup service to integrate with our site. What AWS resource do we need for this? We need at least 5k lookups per day.
AWS provides S3, Elastic, and others. We are confused. Amazon provides a free tier; does it allow whois lookups? Google App Engine never allowed this. | false | 10,447,970 | 0 | 1 | 0 | 0 | The Amazon service you want to use is the server service: EC2.
You get full access to a server and, of course, you can perform socket connections on port 43 (the one required by the Whois protocol). | 0 | 766 | 0 | 1 | 2012-05-04T11:30:00.000 | php,python,amazon-web-services,whois | Amazon AWS For Whois? | 1 | 1 | 1 | 10,679,625 | 0
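For completeness, a whois query itself is just a line of text sent over a TCP socket; a minimal sketch (the whois server shown handles .com/.net and is only an example):

    import socket

    def whois(domain, server="whois.verisign-grs.com"):
        s = socket.create_connection((server, 43))       # port 43 is the whois port
        s.sendall(domain + "\r\n")
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
        s.close()
        return "".join(chunks)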
0 | 0 | I would like to install some Python modules on my EC2 instance. I have the files I need for the installation on an S3 bucket. I can also connect from my EC2 instance to the S3 bucket through Python boto, but I cannot access the bucket contents to get the source files I need installed. | true | 10,461,356 | 1.2 | 0 | 0 | 2 | Using s3cmd tools (http://s3tools.org/s3cmd) it is possible to download/upload files stored in buckets. | 0 | 3,321 | 0 | 2 | 2012-05-05T11:22:00.000 | python,amazon-s3,amazon-ec2,amazon-web-services,boto | How can I navigate into S3 bucket folders from EC2 instance? | 1 | 1 | 2 | 10,461,505 | 0 |
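If you would rather stay inside boto than shell out to s3cmd, downloading the module sources looks roughly like this (the bucket name and prefix are assumptions; boto reads your AWS keys from its usual config or environment):

    import boto

    conn = boto.connect_s3()
    bucket = conn.get_bucket("my-python-modules")      # assumed bucket name
    for key in bucket.list(prefix="packages/"):        # assumed "folder" prefix
        key.get_contents_to_filename("/tmp/" + key.name.split("/")[-1])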
0 | 0 | Twitter, Facebook and some other websites are blocked in my country.
And I want to call the open API to do some hacking. I have searched but it hasn't solved my
problem. Are there any Python libraries that can help me sign the OAuth request through a proxy and get
the access token?
Thanks. | false | 10,483,013 | 0 | 1 | 0 | 0 | I am guessing you will have to set up your own proxy service for this, i.e set up your entire API and OAuth logic on a server outside your own country. If you call this proxy service from within your own country it is probably not apparent that you are actually communicating with Twitter.
You will need some sort of cryptographic layer between your client and your proxy/relay service though to make it somewhat secure/obscure. Your own request signing mechanism so to say, and your proxy/relay endpoint should definitely talk (HTTPS/SSL). | 0 | 771 | 0 | 0 | 2012-05-07T13:33:00.000 | python,oauth,proxy | How can I sign OAuth with proxy | 1 | 1 | 1 | 10,496,221 | 0 |
0 | 0 | I am having weird behaviors in my Python environment on Mac OS X Lion.
Apps like Sublime Text (based on Python) don't work (I initially thought it was an app bug),
and now, after I installed hg-git, I get the following error every time I launch hg in the terminal:
*** failed to import extension hggit from /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-package/hggit/: [Errno 2] No such file
or directory
So it probably is a Python environment set up error. Libraries and packages are there in place.
Any idea how to fix it?
Notes:
I installed hg-git following hg-git web site directions.
I also added the exact path to the extension in my .hgrc file as: hggit = /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-package/hggit/
Python was installed using official package on Python web site.
Echoing $PYTHONPATH in term print anything | false | 10,510,450 | 0 | 1 | 0 | 0 | "site-package"? Did you mean "site-packages"? | 0 | 2,128 | 0 | 2 | 2012-05-09T05:43:00.000 | python,mercurial,path,pythonpath | Python: Failed to import extension - Errno 2 | 1 | 1 | 1 | 10,510,460 | 0 |
1 | 0 | newbie here in need of help.
Using App Inventor and App Engine. Learning Python as I go along. Still early days. I need to post text data from App Inventor to App Engine and save it in the blobstore as a file (.xml), to be emailed as an attachment.
I am able to send pictures using Shival Wolf's wolfwebmail2, and am sure that with a bit of playing with the code I can change it to save the text post as a file in the blobstore to do the same operation.
As stated newbie learning fast.
Many thanks in advance for any pointers. | false | 10,522,243 | 0.379949 | 0 | 0 | 2 | The solution I found was to do nothing with Shival Wolf's code on App Engine, and to replace the 'postfile' block in the App Inventor code with a 'posttext' block with the text you want to send attached to it. Also change the filename variable to the name you want the file called, including the file type (i.e. .xml, .csv, .txt etc.). This appears to work for me. | 0 | 265 | 1 | 0 | 2012-05-09T18:47:00.000 | python,google-app-engine,app-inventor | convert text post into xml file using python in google app engine | 1 | 1 | 1 | 10,561,318 | 0
0 | 0 | I've got a website that I want to check to see if it was updated since the last check (using hash). The problem is that I need to enter a username and password before I can visit the site.
Is there a way to input the username and the password using python? | false | 10,557,475 | 0.066568 | 0 | 0 | 1 | Yes, it's possible to send the same info as the browser does. Depending on how the browser communicates with the server, it could be as simple as adding username and password as HTTP POST parameters. Other common methods are for the browser to send a hash of the password, or to log in with more than one step. | 0 | 11,449 | 0 | 2 | 2012-05-11T19:29:00.000 | python,http,networking,post | send post request python | 1 | 1 | 3 | 10,557,539 | 0 |
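A sketch of the simple form-POST case with urllib2 (the URL, field names and credentials are placeholders; sites that hash the password or log in in several steps need more work):

    import urllib
    import urllib2
    import hashlib
    import cookielib

    jar = cookielib.CookieJar()
    opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))

    login_data = urllib.urlencode({"username": "me", "password": "secret"})  # placeholder fields
    opener.open("http://example.com/login", login_data)                      # POST the login form

    page = opener.open("http://example.com/protected").read()
    print hashlib.sha1(page).hexdigest()     # compare with the hash from the last check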
0 | 0 | I'm new to Python and I need to know if there is an equivalent to Perl's LWP and HTTP::Parse. I have a Perl script that gets the content from a URL and parses it, and I would like to port it to Python. | true | 10,558,381 | 1.2 | 0 | 0 | 1 | Look at the Python modules httplib and urllib for the fetching part. There are a couple of XML modules in Python for parsing. You could also look at BeautifulSoup, which is not part of the standard Python modules. | 0 | 909 | 0 | 0 | 2012-05-11T20:44:00.000 | python | python equivalent to perl's LWP and HTTP::Parse | 1 | 1 | 1 | 10,558,645 | 0
0 | 0 | I'm writing a script, to help me do some repetitive testing of a bunch of URLs.
I've written a Python method in the script that opens up the URL and sends a GET request. I'm using the Requests: HTTP for Humans (http://docs.python-requests.org/en/latest/) API to handle the HTTP calls.
There's request.history, which returns a list of status codes of the redirects. I need to be able to access the particular redirects for that list of 301s. There doesn't seem to be a way to do this - to access and trace what my URLs are redirecting to. I want to be able to access the redirected URLs (status code 301).
Can anyone offer any advice?
Thanks | false | 10,560,005 | 0.197375 | 1 | 0 | 1 | Okay, I'm so silly. Here's the answer I was looking for
r = requests.get("http://someurl")
r.history[1].url will return the URL | 0 | 1,706 | 0 | 1 | 2012-05-12T00:10:00.000 | python,module,urllib2,httplib | Python trace URL get requests - using python script | 1 | 1 | 1 | 10,569,571 | 0 |
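More generally, every hop of the redirect chain is available on the history list, so you can trace both the status codes and the URLs (the URL below is a placeholder):

    import requests

    r = requests.get("http://someurl", allow_redirects=True)
    for hop in r.history:                  # each hop is a Response with status 301/302/...
        print hop.status_code, hop.url
    print "final:", r.status_code, r.url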
1 | 0 | I need to scrape a website that has a basic folder system, with folders labeled with keywords - some of the folders contain text files. I need to scan all the pages (folders), check the links to new folders, and record keywords and files. My main problem is more abstract: if there is a directory with nested folders and unknown "depth", what is the most pythonic way to iterate through all of them? (If the "depth" were known, it would be a really simple for loop.) Ideas greatly appreciated. | false | 10,562,380 | 0.197375 | 0 | 0 | 2 | Recursion is usually the easiest way to go.
However, that might give you a StackOverflowError after some time if someone creates a directory with a symlink to itself or a parent. | 0 | 273 | 0 | 0 | 2012-05-12T09:07:00.000 | python,html,iteration,web-scraping | HTML scraping: iterating through nested directories | 1 | 1 | 2 | 10,562,395 | 0 |
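A recursive sketch of the idea (the start URL, the link extraction and the "is it a text file" test are assumptions about the site's markup); tracking visited URLs guards against the self-referencing loops mentioned above:

    import urllib2
    import urlparse
    from BeautifulSoup import BeautifulSoup   # BeautifulSoup 3, common at the time

    def crawl(url, visited=None):
        visited = visited if visited is not None else set()
        if url in visited:
            return
        visited.add(url)
        soup = BeautifulSoup(urllib2.urlopen(url).read())
        for a in soup.findAll("a", href=True):
            link = urlparse.urljoin(url, a["href"])
            if link.endswith(".txt"):
                print "file:", link, "in folder:", url     # record keyword/file here
            elif link.endswith("/") and link not in visited:
                crawl(link, visited)                        # descend into the sub-folder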
0 | 0 | I want to make a redirect and keep the query string. Something like self.redirect plus the query parameters that were sent. Is that possible? | false | 10,569,768 | 1 | 0 | 0 | 10 | You can fetch the query string of the current request with self.request.query_string; thus you can redirect to a new URL with self.redirect('/new/url?' + self.request.query_string). | 0 | 12,957 | 0 | 19 | 2012-05-13T06:26:00.000 | python,google-app-engine,python-2.7,wsgi,webapp2 | How to make a redirect and keep the query string? | 1 | 1 | 4 | 10,594,001 | 0
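Inside a webapp2 handler that might look like this (the handler class and target path are hypothetical):

    import webapp2

    class OldPage(webapp2.RequestHandler):
        def get(self):
            target = "/new/page"
            if self.request.query_string:
                target += "?" + self.request.query_string   # carry the original parameters along
            self.redirect(target)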
0 | 0 | I'm working on a script which involves continuously analyzing data and outputting results in a multi-threaded way. So basically the result file (an XML file) is constantly being updated/modified (sometimes 2-3 times per second).
I'm currently using lxml to parse/modify/update the XML file, which works fine right now. But from what I can tell, you have to rewrite the whole XML file even when you just add one entry/sub-entry like <weather content=sunny /> somewhere in the file. The XML file is gradually growing bigger, and so is the overhead.
As far as efficiency/resources are concerned, is there any other way to update/modify the XML file? Or will you have to switch to an SQL database or similar some day when the XML file is too big to parse/modify/update? | true | 10,570,615 | 1.2 | 0 | 0 | 2 | No, you generally cannot - and not just XML files, any file format.
You can only update "in place" if you overwrite bytes exactly (i.e. don't add or remove any characters, just replace some with something of the same byte length).
Using a form of database sounds like a good option. | 0 | 153 | 0 | 1 | 2012-05-13T09:19:00.000 | python,xml,lxml | Are there ways to modify/update xml files other than totally over writing the old file? | 1 | 1 | 2 | 10,570,673 | 0 |
0 | 0 | I have written a program using Python and OpenCV where I perform operations on a video stream in run time. It works fine. Now if I want to publish it on a website where someone can see this using their browser and webcam, how do I proceed? | true | 10,578,763 | 1.2 | 0 | 0 | 0 | Not really sure what you want to happen, but if you're going to implement this kind of feature in a website I think you should use a Flash application instead of Python (or if possible HTML5). Though you're using Python in the development of your web app, it would only run on the server side, and the feature you want to use is on the client side, so for me it's more feasible to use Flash to capture the video; after capturing the video you upload it to your server, and then your Python code will do the rest of the processing on the server side. | 0 | 878 | 0 | 3 | 2012-05-14T07:01:00.000 | python,webcam | Access webcam over internet using Python | 1 | 1 | 1 | 10,578,877 | 0
1 | 0 | I have an issue when trying to test a web application with Selenium/Python. Basically I can't test elements of a pop-up window.
A scenario: I can test all elements for a page. But when I go to click on a button that opens up a small pop up box I can't test the elements on the popup. It's like the pop up isn't in focus or active.
I can test elements on the next page. For example click a button, brings me on to next page, and I can work with elements on the 'next' page. So it the problem seems to be popup specific.
I could post code but to be honest it might confuse at this stage. I may post code in a later post, thanks | false | 10,580,772 | 0 | 0 | 0 | 0 | For the Selenium RC API, you need to use the SelectWindow command to switch to the pop-up window. The window can be specified either by its name (as specified on the JavaScript window.open() function) or its title. To switch back to the main window, use SelectWindow(None). | 0 | 7,687 | 0 | 5 | 2012-05-14T09:36:00.000 | python,html,selenium | Selenium: Testing pop-up windows | 1 | 1 | 2 | 10,586,476 | 0 |
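With the Python Selenium RC client (the sel object from the question), the pop-up dance usually looks like this; the window name and locators are placeholders:

    sel.click("id=open_popup_button")             # placeholder locator
    sel.wait_for_pop_up("popupWindow", "30000")   # placeholder window name, 30 s timeout
    sel.select_window("name=popupWindow")         # focus the pop-up
    sel.click("id=button_inside_popup")           # elements in the pop-up are now reachable
    sel.select_window("null")                     # switch focus back to the main window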
0 | 0 | I am trying to make a web form where you can give input as a file or paste it into a textarea. But when the same data arrives at Bottle it is different: the data length from the textarea is larger than from the file input. Why could this happen? | true | 10,588,730 | 1.2 | 0 | 0 | 1 | I would suspect formatting chars getting inserted in the textarea, e.g. newlines and carriage returns, might be the issue. Have you checked for this? | 1 | 111 | 0 | 0 | 2012-05-14T18:10:00.000 | python,bottle | Why the same file differs from textarea and file input? | 1 | 1 | 1 | 10,588,776 | 0
1 | 0 | I am working on Python web scraping.
The web page is populated using an iframe and the content is filled by AJAX (jQuery).
I have tried using the src of the iframe (using lxml, etc.) but it's of no use.
How can I extract the content of the iframe using Python modules?
Thanks | false | 10,650,676 | 0.099668 | 0 | 0 | 1 | Splinter (http://splinter.cobrateam.info - uses selenium) makes browsing iframe elements easy. At least as long as the iframe tag has an id attribute. | 0 | 1,272 | 0 | 1 | 2012-05-18T10:10:00.000 | python,web-scraping | python web extraction of iframe (ajax ) content | 1 | 1 | 2 | 10,663,615 | 0
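A Splinter sketch of that (the URL and iframe id are assumptions); Splinter drives a real browser, so the AJAX-filled content has a chance to load before you read it:

    from splinter import Browser

    browser = Browser()                      # Firefox by default
    browser.visit("http://example.com")      # placeholder URL
    with browser.get_iframe("content_frame") as iframe:   # assumed iframe id attribute
        print iframe.find_by_tag("p").first.text
    browser.quit()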
0 | 0 | How can I find the IP address of clients in Python? | false | 10,675,029 | 1 | 0 | 0 | 7 | In general, you can't. If someone has a different computer make a request on behalf of their computer, then you only get network information about the machine you receive the connection from.
An HTTP proxy might add an X-Forwarded-For header. | 0 | 319 | 0 | 0 | 2012-05-20T16:08:00.000 | python,ip-address | How can I find real client ip address? | 1 | 1 | 1 | 10,675,083 | 0
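In practice that means trusting the socket address and treating the header only as a hint; a small sketch for a WSGI-style environ (how much you trust X-Forwarded-For depends entirely on your proxy setup):

    def client_ip(environ):
        forwarded = environ.get("HTTP_X_FORWARDED_FOR", "")
        if forwarded:
            return forwarded.split(",")[0].strip()   # left-most entry added by the proxy chain
        return environ.get("REMOTE_ADDR", "")        # the peer the server actually sees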