Dataset columns:
Web Development: int64 (0 to 1)
Data Science and Machine Learning: int64 (0 to 1)
Question: string (lengths 28 to 6.1k)
is_accepted: bool (2 classes)
Q_Id: int64 (337 to 51.9M)
Score: float64 (-1 to 1.2)
Other: int64 (0 to 1)
Database and SQL: int64 (0 to 1)
Users Score: int64 (-8 to 412)
Answer: string (lengths 14 to 7k)
Python Basics and Environment: int64 (0 to 1)
ViewCount: int64 (13 to 1.34M)
System Administration and DevOps: int64 (0 to 1)
Q_Score: int64 (0 to 1.53k)
CreationDate: string (lengths 23 to 23)
Tags: string (lengths 6 to 90)
Title: string (lengths 15 to 149)
Networking and APIs: int64 (1 to 1)
Available Count: int64 (1 to 12)
AnswerCount: int64 (1 to 28)
A_Id: int64 (635 to 72.5M)
GUI and Desktop Applications: int64 (0 to 1)
0
0
We use Python 3.x in our projects, but the official Protocol Buffers client only supports Python 2.x. I don't want to downgrade to Python 2.x.
false
8,663,468
0.033321
0
0
1
Google's official library has supported Python 3 since version 3.0 (Jul 29, 2016).
0
14,100
1
26
2011-12-29T02:43:00.000
python,python-3.x,protocol-buffers
Is there any way to access Protocol Buffers with python 3.x?
1
2
6
50,994,740
0
0
0
How do I upload a picture in a web application with the Selenium testing tool? I am using Python. I tried many things, but nothing worked.
false
8,665,072
0
0
0
0
Using splinter : browser.attach_file('file_chooser_id',fully_qualified_file_path)
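A minimal sketch of the splinter call suggested above (the URL, the file-input name and the submit-button name are placeholders; splinter's attach_file takes the name attribute of the <input type="file"> element):

```python
from splinter import Browser

# Assumes a local browser driver and a hypothetical upload page.
with Browser() as browser:
    browser.visit("http://example.com/upload")
    # attach_file takes the *name* of the file input and an absolute path
    browser.attach_file("file", "/full/path/to/picture.jpg")
    browser.find_by_name("submit").first.click()
```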
0
111,886
0
75
2011-12-29T07:27:00.000
python,testing,file-upload,selenium,upload
How to upload file ( picture ) with selenium, python
1
1
16
47,689,624
0
1
0
I have a problem with links on my website. Please forgive me if this is asked somewhere else, but I have no idea how to search for this. A little background on the current situation: I've created a python program that randomly generates planets for a sci-fi game. Each created planet is placed in a text file to be viewed at a later time. The program asks the user how many planets he/she wants to create and makes that many text files. Then, after all the planets are created, the program zips all the files into a file 'worlds.zip'. A link is then provided to the user to download the zip file. The problem: The first time I run this everything works perfectly fine. When run a second time, however, and I click the link to download the zip file it gives me the exact same zip file as I got the first time. When I ftp in and download the zip file directly I get the correct zip file, despite the link still being bad. Things I've tried: When I refresh the page the link is still bad. When I delete all my browser history the link is still bad. I've tried a different browser and that didn't work. I've attempted to delete the file from the web server and that didn't solve the problem. Changing the html file providing the link worked once, but didn't work a second time. Simplified Question: How can I get a link on my web page to update to the correct file? I've spent all day trying to fix this. I don't mind looking up information or reading articles and links, but I don't know how to search for this, so even if you guys just give me links to other sites I'll be happy (although directly answering my question will always be appreciated :)).
false
8,674,077
0
1
0
0
I don't know anything about Python, but in PHP, in some fopen modes, if you try to create a file with the same name as an existing file, the operation is cancelled.
0
45
0
0
2011-12-29T22:08:00.000
python,html,web
Website Links to Downloadable Files Don't Seem to Update
1
1
1
8,674,108
0
0
0
As you know, sometimes over a telnet connection to a Unix host you have to press space to get the next page, for instance when you 'more' a text file: you can't get all the content at once, and pressing space advances to the next page. Here is the problem: what should I do when using telnetlib in Python? I have to get all the information. Posting code here would be better. Thanks!
false
8,675,138
0.197375
0
0
1
Instead of using more(1) or less(1) to view a file, use cat(1). It will not perform any pagination tasks and will write all the content of the file to the terminal, raw.
0
272
0
0
2011-12-30T00:59:00.000
python,unix,telnetlib
telnetlib | read all messages when I have to press space if doing it manually
1
1
1
8,675,162
0
0
0
I have one client subscribed to one channel. After a certain idle period, about 10 minutes, the client cannot receive any messages, but the publish command still returns 1. I've tried the redis-py and servicestack.redis clients. The only difference seems to be that the idle period can be a little longer when using servicestack.redis. Any ideas? Thanks in advance.
false
8,678,349
0
0
0
0
I had similar issues with an older version of Redis that was fixed by the latest version. As an alternative you could try adding a separate thread that sends a "PING" command once in a while to keep the connection up.
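A rough sketch of the keep-alive idea described above, assuming redis-py (recent versions expose PubSub.ping(); the channel name and interval are arbitrary):

```python
import threading
import time
import redis

r = redis.Redis(host="localhost", port=6379)
pubsub = r.pubsub()
pubsub.subscribe("my-channel")

def keep_alive(interval=60):
    # Periodically ping the subscriber connection so it never sits idle
    # long enough to be dropped by the server, a firewall, or a NAT device.
    while True:
        time.sleep(interval)
        pubsub.ping()

threading.Thread(target=keep_alive, daemon=True).start()

for message in pubsub.listen():   # blocks, yielding published messages
    print(message)
```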
0
1,314
0
0
2011-12-30T10:09:00.000
c#,python,redis
subscription to redis channel does not keep alive
1
1
1
8,724,716
0
0
0
I don't know if this has been answered before (I looked online but couldn't find anything), but how can I send a file (.exe if possible) over a network to another computer that is connected to the network? I tried sockets but I could only send strings, and I've tried to learn ftplib but I don't understand it at all or whether FTP is even what I am looking for, so I am at a complete standstill. Any input is appreciated (even more so if someone can explain FTP; is it like a socket? All the examples I've seen don't have a server program that the client can connect to.)
false
8,721,870
0.148885
0
0
3
ZeroMQ helps to replace sockets. You can send an entire file in one command. A ZMQ 'party' can be written in any major language, and for a given piece of ZMQ-powered software it doesn't matter what the other end is written in. From their site: It gives you sockets that carry whole messages across various transports like in-process, inter-process, TCP, and multicast. You can connect sockets N-to-N with patterns like fanout, pub-sub, task distribution, and request-reply.
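A minimal sketch of sending a whole file as one ZeroMQ message, assuming pyzmq is installed (host, port and file names are placeholders; sender and receiver can be different machines and even different languages):

```python
# --- receiver.py ---
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.PULL)
sock.bind("tcp://*:5555")
data = sock.recv()                    # one message == the whole file
with open("received.exe", "wb") as f:
    f.write(data)

# --- sender.py ---
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.PUSH)
sock.connect("tcp://receiver-host:5555")
with open("program.exe", "rb") as f:
    sock.send(f.read())               # entire file in a single send()
```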
0
24,645
0
7
2012-01-04T04:03:00.000
python,file,sockets
How to transfer a file between two connected computers in python?
1
1
4
8,722,433
0
0
0
I would like to convert an accented string to an SEO URL... For instance: "Le bébé (de 4 ans) a également un étrange "rire"" to: "le-bebe-de-4-ans-a-egalement-un-etrange-rire". Any solution, please? Thanks!
false
8,723,808
0.033321
0
0
1
If you have Django around, you can use its slugify default filter (or adapt it for your needs).
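A minimal sketch, assuming Django is installed; in recent versions the filter's implementation is also importable directly as django.utils.text.slugify:

```python
from django.utils.text import slugify

title = 'Le bébé (de 4 ans) a également un étrange "rire"'
# NFKD-normalizes, strips accents, lowercases and hyphenates
print(slugify(title))   # le-bebe-de-4-ans-a-egalement-un-etrange-rire
```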
0
1,338
0
1
2012-01-04T08:11:00.000
python,string,url,unicode,seo
How to convert string to seo-url?
1
1
6
8,724,105
0
0
0
I am trying to send previously recorded traffic (captured in pcap format) with Scapy. Currently I am stuck at stripping the original Ether layer. The traffic was captured on another host and I basically need to change both the IP and Ether layer src and dst. I managed to replace the IP layer and recalculate checksums, but the Ether layer gives me trouble. Does anyone have experience resending packets from a capture file with changes applied to the IP and Ether layers (src and dst)? Also, the capture is rather big, a couple of GB; how is Scapy's performance with such amounts of traffic?
false
8,726,881
0
0
0
0
For a correct checksum, I also needed to add del p[UDP].chksum.
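A rough sketch of rewriting the Ether/IP addresses in a pcap and resending it with Scapy (the MACs, IPs and interface are placeholders); deleting the checksum and length fields makes Scapy recompute them on send:

```python
from scapy.all import rdpcap, sendp, Ether, IP, UDP

packets = rdpcap("capture.pcap")          # for multi-GB captures prefer PcapReader
for p in packets:
    if Ether in p:
        p[Ether].src = "00:11:22:33:44:55"
        p[Ether].dst = "66:77:88:99:aa:bb"
    if IP in p:
        p[IP].src = "10.0.0.1"
        p[IP].dst = "10.0.0.2"
        del p[IP].chksum                  # recomputed automatically
        del p[IP].len
    if UDP in p:
        del p[UDP].chksum                 # the fix mentioned above

sendp(packets, iface="eth0")
```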
0
43,692
0
25
2012-01-04T12:29:00.000
python,pcap,scapy
Sending packets from pcap with changed src/dst in scapy
1
1
4
34,000,828
0
1
0
I'm downloading around 200 images from URLs using urlretrieve. They all download correctly except for one. I have opened the URL in my browser and the image loads correctly. urlretrieve downloads something for that image but it doesn't open. It gives me an error "The file xxx.jpg could not be opened." and it shows it's 1 KB with no dimensions. When I manually save the image it shows as 289 KB and 1280x986. Does anyone have any ideas what the problem might be?
true
8,733,163
1.2
0
0
1
I know it sounds dumb but, speaking from experience, check that the device on which the script is saving the files is not full (or has permission problems or whatever). Modify your script to print out the URL instead of downloading the file. See if the URL is printed correctly and if there's no strange character that may be misinterpreted (including spaces). If you are still in trouble, please post the script so we can have a look.
0
664
0
0
2012-01-04T19:58:00.000
python,image
Python urlretrieve image
1
1
1
8,735,741
0
1
0
Is it possible to open a non-blocking ssh tunnel from a python app on the heroku cedar stack? I've tried to do this via paramiko and also asyncproc with no success. On my development box, the tunnel looks like this: ssh -L local_port:remote_server:remote_port another_remote_server
false
8,735,487
0.099668
1
0
1
Can you please post the STDERR of ssh -v -L .....? Maybe you need to disable the tty allocation and run ssh in batch mode.
0
476
0
1
2012-01-04T23:16:00.000
python,ssh,heroku,paramiko,cedar
open an ssh tunnel from heroku python app on the cedar stack?
1
1
2
8,735,515
0
1
0
I have pretty big XML documents, so I don't want to use DOM, but while parsing a document with a SAX parser I want to stop at some point (let's say when I reach an element with a certain name) and get everything inside that element as a string. "Everything" inside is not necessarily a text node; it may contain tags, but I don't want them to be parsed, I just want to get them as text. I'm writing in Python. Is this possible? Thanks!
false
8,744,604
0.049958
0
0
1
I don't believe it's possible with the xml.sax. BeautifulSoup has SoupStrainer which does exactly that. If you're open to using the library, it's quite easy to work with.
0
1,889
0
3
2012-01-05T15:00:00.000
python,xml,sax,saxparser
Can I somehow tell the SAX parser to stop at some element and get its child nodes as a string?
1
1
4
8,744,989
0
1
0
I am trying to grab a PNG image which is being dynamically generated with JSP in a web service. I have tried visiting the web page it is contained in and grabbing the image src attribute; but the link leads to a .jsp file. Reading the response with urllib2 just shows a lot of gibberish. I also need to do this while logged into the web service in question, using mechanize. This seems to exclude the option of grabbing a screenshot with webkit2png or similar. Thanks for any suggestions.
true
8,758,131
1.2
0
0
1
If you use urllib correctly (for example, making sure your User-Agent resembles a browser, etc.), the "gibberish" you get back is the actual file, so you just need to write it out to disk (open the file with "wb" for writing in binary mode) and re-read it with some image-manipulation library if you need to play with it. Or you can use urlretrieve to save it directly on the filesystem. If it's a JSP, chances are that it takes parameters, which might be appended by the browser via JavaScript before the request is made; you should look at the real request your browser makes before trying to reproduce it. You can do that with the Chrome Developer Tools, Firefox LiveHTTPHeaders, etc. I do hope you're not trying to break a captcha.
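A minimal sketch of fetching the .jsp-generated image with a browser-like User-Agent and writing the raw bytes to disk (Python 2 urllib2 style, since the question uses urllib2; the URL and query parameters are purely illustrative):

```python
import urllib2

url = "http://example.com/chart.jsp?width=640&height=480"   # hypothetical
req = urllib2.Request(url, headers={"User-Agent": "Mozilla/5.0"})
data = urllib2.urlopen(req).read()

with open("chart.png", "wb") as f:   # "wb": write the bytes untouched
    f.write(data)
```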
0
214
0
0
2012-01-06T12:56:00.000
python,jsp,png,screen-scraping,mechanize
Grabbing a .jsp generated PNG in Python
1
1
1
8,758,625
0
1
0
I don't want to crawl simultaneously and get blocked. I would like to send one request per second.
false
8,768,439
1
0
0
7
We can set the delay in two ways. We can specify the delay while running the crawler, e.g. scrapy crawl sample --set DOWNLOAD_DELAY=3 (which means a 3 second delay between two requests), or we can specify it globally in settings.py as DOWNLOAD_DELAY = 3. By default Scrapy takes a 0.25 second delay between two requests.
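A minimal settings.py sketch for the global variant described above (the project name is a placeholder; the AutoThrottle lines are optional):

```python
BOT_NAME = "sample"

DOWNLOAD_DELAY = 3                  # seconds between requests to the same site
RANDOMIZE_DOWNLOAD_DELAY = True     # jitter the delay to look less robotic

# Optional: let Scrapy adapt the delay to observed server latency instead
# AUTOTHROTTLE_ENABLED = True
# AUTOTHROTTLE_START_DELAY = 3
```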
0
47,919
0
51
2012-01-07T08:44:00.000
python,web-scraping,scrapy
How to give delay between each requests in scrapy?
1
1
6
33,116,540
0
1
0
I need to create a script that will log into an authenticated page and download a pdf. However, the pdf that I need to download is not at a URL, but it is generated upon clicking on a specific input button on the page. When I check the HTML source, it only gives me the url of the button graphic and some obscure name of the button input, and action=".". In addition, both the url where the button is and the form name is obscured, for example: url = /WebObjects/MyStore.woa/wo/5.2.0.5.7.3 input name = 0.0.5.7.1.1.11.19.1.13.13.1.1 How would I log into the page, 'click' that button, and download the pdf file within a script?
false
8,772,935
0.049958
0
0
1
You could observe what requests are made when you click the button (using Firebug in Firefox or Developer Tools in Chrome). You may then be able to request the PDF directly. It's difficult to help without seeing the page in question.
0
686
0
0
2012-01-07T20:37:00.000
python,curl,screen-scraping
Advanced screen-scraping using curl
1
2
4
8,773,176
0
1
0
I need to create a script that will log into an authenticated page and download a pdf. However, the pdf that I need to download is not at a URL, but it is generated upon clicking on a specific input button on the page. When I check the HTML source, it only gives me the url of the button graphic and some obscure name of the button input, and action=".". In addition, both the url where the button is and the form name is obscured, for example: url = /WebObjects/MyStore.woa/wo/5.2.0.5.7.3 input name = 0.0.5.7.1.1.11.19.1.13.13.1.1 How would I log into the page, 'click' that button, and download the pdf file within a script?
true
8,772,935
1.2
0
0
2
Try mechanize or twill. HttpFox or Firebug can help you to build your queries. Remember you can also pickle cookies from the browser and use them later with Python libraries. If the code is generated by JavaScript it could be possible to 'reverse engineer' it. If not, you can run a JavaScript interpreter or use Selenium or Windmill to script a real browser.
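A hedged sketch of the mechanize route: the real login URL, form index and field names are unknown here and purely illustrative, and the PDF request URL is whatever you observe the button sending (e.g. via Firebug/HttpFox):

```python
import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)              # many sites block robots.txt-obeying bots
br.addheaders = [("User-Agent", "Mozilla/5.0")]

br.open("https://example.com/login")     # hypothetical login page
br.select_form(nr=0)                     # first form on the page
br["username"] = "me"                    # hypothetical field names
br["password"] = "secret"
br.submit()

# Replay the request the PDF button makes and save the response bytes.
pdf = br.open("https://example.com/WebObjects/MyStore.woa/wo/5.2.0.5.7.3").read()
with open("statement.pdf", "wb") as f:
    f.write(pdf)
```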
0
686
0
0
2012-01-07T20:37:00.000
python,curl,screen-scraping
Advanced screen-scraping using curl
1
2
4
8,773,567
0
1
0
So I was flicking through my Facebook Timeline, and started to look at some old posts I made in 2009~2010. And they're a bit stupid and I'd like to remove them or change the privacy settings on them. There are too many to do it individually, so I've been looking at the Graph API. However, I have been unable to find anything about changing the privacy settings of posts, or even searching for posts made in a specific date range. So here is the information that I want: Is it possible to change privacy settings for OLD posts via the Graph API? Is it possible to search the Graph API for posts in a particular date range? Preferably before 31st December 2010. If it is possible, how do you do it!?
true
8,779,159
1.2
0
0
1
1) No. 2) Yes, you can use the Graph API with an HTTP GET on me/feed?until={date}
0
740
0
4
2012-01-08T16:29:00.000
php,python,facebook,api,facebook-graph-api
Changing privacy of old Facebook posts using the Graph API
1
1
1
8,781,271
0
1
0
I'm using .find_element_by_class_name() to get an element with a given class name. It seems that the returned element is the first with the class name. How can I get the n'th element with that class name? Also, is it possible to get all the DOM elements with a given class name?
true
8,788,299
1.2
0
0
2
There is a find_elements_by_class_name method; notice elements (plural). It returns a list of all DOM elements with that class name. To get the n'th element, simply index into it: find_elements_by_class_name('className')[num].
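A minimal sketch (Selenium 2/3 style, matching the method name in the answer; the URL and class name are placeholders):

```python
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("http://example.com")

elements = driver.find_elements_by_class_name("className")  # list of matches
print(len(elements))            # how many elements share the class
third = elements[2]             # the n'th element (0-based index)
print(third.text)

driver.quit()
```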
0
2,275
0
2
2012-01-09T12:16:00.000
python,webdriver
Get n'th element with given class name with WebDriver
1
1
1
8,788,422
0
0
0
I want to build an application that routes all network traffic (not just HTTP) through my application. Basically, what I want is all the traffic to be given to my application (they should never reach the actual target, my application should handle this), which will in turn be forwarded to a server; same goes for input, just reversed (server -> application -> program which wants an answer). Are there any libraries (or similar stuff) that would make creating the application easier? I'm looking for something that I can use from Python or Java, but if it's really needed, I can learn another language.
false
8,808,104
0.049958
0
0
1
If you only want to route TCP traffic, it is actually fairly simple using threads and sockets. You should listen on a different port for every server you want to reach. In either Java or Python you have to create a "socket" for every port you want to listen on. For every new connection you create a new connection to the server and two new threads to handle it: one thread reads everything from the client and sends it to the server, and the other reads everything from the server and sends it to the client. When either end closes the connection, you close the other and end both threads.
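A bare-bones sketch of the per-port TCP forwarder described above (the listen port and target address are placeholders; no error handling or shutdown logic):

```python
import socket
import threading

LISTEN_PORT = 8080
TARGET = ("remote-server.example.com", 80)   # hypothetical upstream server

def pipe(src, dst):
    # Copy bytes one way until the source side closes.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    dst.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", LISTEN_PORT))
listener.listen(5)

while True:
    client, _ = listener.accept()
    upstream = socket.create_connection(TARGET)
    # One thread per direction: client -> server and server -> client.
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()
```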
0
2,625
1
5
2012-01-10T18:01:00.000
java,python,networking,tcp,tunnel
Routing all packets through my program?
1
1
4
8,808,582
0
0
0
Hi I'm studying Python and I've started my first little project. The first thing that I want to do is to add an item to the right click menu of Firefox. So, when I right-click a link that item will be available and when I click it some Python code will be called in order to "do something" with that URL. Do I have to create a Firefox extension to do this? Can I specify in that extension the Python code that should be called?
false
8,811,403
0
0
0
0
I think it is not possible. Normal FF extensions are, as far as I know, written in XUL and JavaScript and therefore cannot call other (non-JS) code.
0
534
0
2
2012-01-10T22:23:00.000
python,firefox,firefox-addon,right-click
Add option to right-click menu in Firefox
1
1
2
8,811,426
0
1
0
I have some 50 raw HTML pages that are relevant to my project. I am not sure these pages share a unique pattern. I need to parse the content of all pages and classify it based on keywords such as 'REVIEWS', "REPORTS", "FEEDBACK", "DESCRIPTION", "COMMENTS", "SUCCESS RATES", "FAILURE RATES". The crawled HTML content has to be classified and mapped to the relevant keywords. I also need to split the content and its headers out of each page for comparison. I am using Python. Would you please suggest a way to do this? Which approach would be suitable, and how should the solution be organised?
false
8,817,296
0.099668
0
0
1
If you need to do classification given the content of pages, I would suggest you take a look at NLTK (http://www.nltk.org/), a natural language toolkit of open-source Python modules. Don't just try to look at occurrences of e.g. "report" in the pages. A report may or may not have "report" as a title or in the content. You can use NLTK to find terms related to your keywords (e.g. success rates vs. approval rates), or from the same family (e.g. description vs. described). Take a look at the pages' contents and try to define what sets them apart from the others. For instance, a page with comments will probably have expressions such as "I think that", "in my opinion" and subjective terms, usually adjectives and adverbs, like "good", "quickly", "horrible", etc. A report is unlikely to have such words in it. Apart from the content, the structure of the page may vary from category to category. If you intend to analyse that, maybe using Beautiful Soup (http://www.crummy.com/software/BeautifulSoup/) for parsing is a good idea.
0
218
0
0
2012-01-11T09:56:00.000
python,regex,html-parsing,data-mining
Parsing and splitting multiple HTML pages without having a clue
1
1
2
8,854,322
0
1
0
Client browsers are sending the header HTTP_ACCEPT_CHARSET: ISO-8859-1,utf-8;q=0.7,*;q=0.3. I only serve webpages as utf8 with the correct header but browsers are posting data from forms encoded with the ISO-8859-1 charset. My question is, will a browser always prefer charsets in the order of its ACCEPT_CHARSET header so I can reliably write a middleware that will decode any posted data with the first entry, in this case ISO-8859-1, and encode it as utf8. UPDATE: I updated the form tag with accept-charset="utf-8" and I'm still seeing non-unicode characters appearing. Is it possible that a user copy/pasting their password from somewhere else (lastpass, excel file) could be injecting non-unicode characters?
true
8,841,227
1.2
0
0
2
The request header Accept-Charset (which may get mapped to HTTP_ACCEPT_CHARSET server-side) expresses the client’s preferences, to be used when the server is capable of serving the resource in different encodings. The server may ignore it, and often will. If your page is UTF-8 encoded and declared as such, then any form on your page will send its data as UTF-8 encoded, unless you specify an accept-charset attribute. So if a browser posts data as ISO-8859-1 encoded, then this is a browser bug. However, this would need to be analyzed before drawing conclusions. There’s an old technique of including some special character, written using a character reference for safety, as the value of a hidden field. The server-side handler can then pick up the value of this field and detect an encoding mismatch, or even heuristically deduce the actual encoding from the encoded form of the special character.
0
114
0
1
2012-01-12T19:38:00.000
python,html,character-encoding
Browser charsets order of precedence
1
1
2
8,841,681
0
0
0
I want to replace child elements from one tree with elements from another, based on some criteria. I can select them using a comprehension, but how do I replace an element in ElementTree?
true
8,845,456
1.2
0
0
2
You can't replace an element through the ElementTree itself; you can only work with Element. Even when you call ElementTree.find() it's just a shortcut for getroot().find(). So you really need to: (1) extract the parent element, and (2) use a comprehension (or whatever you like) on that parent element. Extracting the parent element is easy if your target is a direct sub-element of the root (just call getroot()); otherwise you'll have to find it.
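A minimal sketch of swapping a child under its parent with the standard xml.etree.ElementTree API (the file and element names are made up for the example):

```python
import xml.etree.ElementTree as ET

tree = ET.parse("config.xml")           # hypothetical file
root = tree.getroot()

parent = root.find("settings")          # the parent element we extracted
old = parent.find("option")             # the child to be replaced

replacement = ET.Element("option", {"value": "new"})

index = list(parent).index(old)         # children behave like a list
parent.remove(old)
parent.insert(index, replacement)

tree.write("config.xml")
```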
1
6,654
0
6
2012-01-13T03:15:00.000
python,elementtree
How can I replace a child element(s) in ElementTree
1
1
3
8,848,549
0
0
0
I'm creating some SVGs in batches and need to convert those to a PDF document for printing. I've been trying to use svglib and its svg2rlg method but I've just discovered that it's absolutely appalling at preserving the vector graphics in my document. It can barely position text correctly. My dynamically-generated SVG is well formed and I've tested svglib on the raw input to make sure it's not a problem I'm introducing. So what are my options past svglib and ReportLab? It either has to be free or very cheap as we're already out of budget on the project this is part of. We can't afford the 1k/year fee for ReportLab Plus. I'm using Python but at this stage, I'm happy as long as it runs on our Ubuntu server. Edit: Tested Prince. Better but it's still ignoring half the document.
false
8,853,553
0.039979
0
0
1
rst2pdf uses reportlab for generating PDFs. It can use inkscape and pdfrw for reading PDFs. pdfrw itself has some examples that show reading PDFs and using reportlab to output. Addressing the comment by Martin below (I can edit this answer, but do not have the reputation to comment on a comment on it...): reportlab knows nothing about SVG files. Some tools, like svg2rlg, attempt to recreate an SVG image into a PDF by drawing them into the reportlab canvas. But you can do this a different way with pdfrw -- if you can use another tool to convert the SVG file into a PDF image, then pdfrw can take that converted PDF, and add it as a form XObject into the PDF that you are generating with reportlab. As far as reportlab is concerned, it is really no different than placing a JPEG image. Some tools will do terrible things to your SVG files (rasterizing them, for example). In my experience, inkscape usually does a pretty good job, and leaves them in a vector format. You can even do this headless, e.g. "inkscape my.svg -A my.pdf". The entire reason I wrote pdfrw in the first place was for this exact use-case -- being able to reuse vector images in new PDFs created by reportlab.
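A minimal sketch of the headless Inkscape step quoted above, driven from Python (the -A / --export-pdf flag belongs to older Inkscape releases; Inkscape 1.x uses --export-filename instead, so adjust to the installed version):

```python
import subprocess

def svg_to_pdf(svg_path, pdf_path):
    # Equivalent to: inkscape my.svg -A my.pdf
    subprocess.check_call(["inkscape", svg_path, "-A", pdf_path])

svg_to_pdf("figure_0001.svg", "figure_0001.pdf")
```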
0
8,568
0
12
2012-01-13T16:10:00.000
python,pdf,svg,reportlab
Convert SVG to PDF (svglib + reportlab not good enough)
1
2
5
11,892,537
0
0
0
I'm creating some SVGs in batches and need to convert those to a PDF document for printing. I've been trying to use svglib and its svg2rlg method but I've just discovered that it's absolutely appalling at preserving the vector graphics in my document. It can barely position text correctly. My dynamically-generated SVG is well formed and I've tested svglib on the raw input to make sure it's not a problem I'm introducing. So what are my options past svglib and ReportLab? It either has to be free or very cheap as we're already out of budget on the project this is part of. We can't afford the 1k/year fee for ReportLab Plus. I'm using Python but at this stage, I'm happy as long as it runs on our Ubuntu server. Edit: Tested Prince. Better but it's still ignoring half the document.
false
8,853,553
-0.039979
0
0
-1
When using svglib or cairosvg, the text in the SVG cannot be rendered properly. My solution is to import the SVG file into draw.io and then export it as a PDF.
0
8,568
0
12
2012-01-13T16:10:00.000
python,pdf,svg,reportlab
Convert SVG to PDF (svglib + reportlab not good enough)
1
2
5
63,741,596
0
0
0
I have a several webdriver nodes connecting to a single hub with nearly identical configuration. I have a simple test that loads a URL and takes a screen shot. Sometimes a test will fail on one run and pass on the next. The only difference should be which node executes the test. The problem is I don't know which ran the test and checking the logs of each node is time consuming. Is there a way to retrieve from the hub which node was asked to run the test? I'm using the python bindings, and when I inspect the object returned from webdriver.Remote("http://myhub:4444/wd/hub", browser), I don't see any methods or properties that store this information. I also don't see any information about the remote webdriver being passed in the network traffic between the hub and my machine that's directing the hub. Of course, it could be that I'm not sending a query to the hub to request the information. The only information that seems relevant that is being passed is a session id. Suggestions?
false
8,853,685
0
0
0
0
Can't you just grep the output log of the Selenium hub for the session ID? You'll then see which node executed the test.
0
1,809
0
3
2012-01-13T16:18:00.000
python,selenium,webdriver
How to get diagnostic information from a Selenium hub when a node is failing?
1
1
2
8,864,912
0
0
0
I want to create a ping service that would make an HTTP/HTTPS/ping/TCP connection to a website to see if the service is up or not. Would Python be suitable for this, seeing as I want to build it to be able to ping 1K endpoints per minute?
false
8,874,032
0
0
0
0
Yes, Python would be suitable for this. (Next time, just try it--it's trivial.)
0
284
0
0
2012-01-15T22:47:00.000
python
Writing a ping service to ping 1K websites per minute
1
3
3
8,874,160
0
0
0
I want to create a ping service that would make an HTTP/HTTPS/ping/TCP connection to a website to see if the service is up or not. Would Python be suitable for this, seeing as I want to build it to be able to ping 1K endpoints per minute?
false
8,874,032
0
0
0
0
Practically all, if not all, modern programming languages are capable of that speed of execution easily. The network itself would be the bottleneck, and depending on how many actual pings you want to do of each service, they could get backed up. If I was doing this, I would use Python with a Java frontend if necessary. So, in short, yes, Python is both capable and (in my opinion) a good choice for such a program.
0
284
0
0
2012-01-15T22:47:00.000
python
Writing a ping service to ping 1K websites per minute
1
3
3
8,874,210
0
0
0
I want to create a ping service that would make an HTTP/HTTPS/ping/TCP connection to a website to see if the service is up or not. Would Python be suitable for this, seeing as I want to build it to be able to ping 1K endpoints per minute?
false
8,874,032
0.132549
0
0
2
Would Python be suitable for this, seeing as I want to build it to be able to ping 1K endpoints per minute? Python has all you need, but there are two bottlenecks: the first is the OS and the other is the network. While planning such a program, I would do some research on the limits of the IP stack of your target OS and the relevant limits of the source network.
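An illustrative, standard-library-only sketch of the application side (it does not address the OS and network tuning discussed above; the URL list, worker count and timeout are arbitrary):

```python
from concurrent.futures import ThreadPoolExecutor
import urllib.request

URLS = ["https://example.com", "https://example.org"]   # placeholder list

def is_up(url, timeout=5):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:
        return False

with ThreadPoolExecutor(max_workers=50) as pool:
    for url, ok in zip(URLS, pool.map(is_up, URLS)):
        print(url, "UP" if ok else "DOWN")
```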
0
284
0
0
2012-01-15T22:47:00.000
python
Writing a ping service to ping 1K websites per minute
1
3
3
8,874,364
0
0
0
I have a website hosted by 1and1. On the server there is a folder with about 5,300 pictures. I have a python script that does some image processing. I want to run the python script on all of the pictures in the folder. The only way I know to make the server run the script is to put the file on my site and then go to www.mysite.com/pythonscipt.py This works decently, except I get an Error 500 message after the script has gone through only about 280 pictures (after about 10 seconds). I could just run the script 20 times, removing the processed pictures after each run, but I figured there is a better way to do it (I just don't know what it is). Question: Is this actually a timeout error? If so, can I make the time until timeout longer? Or, is there a better way to run the script (such that timeouts won't even be an issue)?
true
8,875,081
1.2
0
0
1
I would advise asking the hosting service about it. Generally there will be a way to run scripts on the server.
0
288
0
0
2012-01-16T02:03:00.000
python,timeout
Prevent Server side Python Timeout?
1
1
1
8,875,106
0
0
0
I am using a Chrome Driver, writing in Python, accessing pages that each require a login. After login I would like the program to wait for the entire page to load before taking a screenshot of the page. Can you help with this wait procedure?
false
8,885,544
0.379949
0
0
2
driver.get(url) already waits for the page to load. You could use selenium.webdriver.support.ui.WebDriverWait() to explicitly wait for a condition. ChromeDriver 18.0.995.0 doesn't support taking a page screenshot on Linux.
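A minimal sketch of an explicit wait after login (the locator is hypothetical; wait for whatever element signals that the page you want is fully rendered):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/login")
# ... perform the login steps here ...

WebDriverWait(driver, 30).until(
    EC.presence_of_element_located((By.ID, "dashboard"))  # hypothetical element
)
driver.save_screenshot("page.png")
driver.quit()
```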
0
1,946
0
4
2012-01-16T19:55:00.000
python,webdriver
Python WebDriver wait for page to load
1
1
1
8,886,515
0
0
0
How would you go about getting a completely unparsed response to an HTTPS request? Are there any HTTP libraries that will allow you to get at the raw response, or is constructing a request manually using sockets the only way? I'm specifically trying to see what newline characters the server is sending.
false
8,919,011
0
0
0
0
It depends on what you are going to do with the result. httplib.HTTPSConnection has a set_debuglevel method which allows printing the response data to stdout (though each line is prefixed). For debugging purposes that was exactly what I needed. Not sure if it is what you need.
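A minimal sketch using Python 3's http.client (the module was named httplib in Python 2); set_debuglevel(1) echoes the raw request and response traffic, which is often enough to see what line endings the server sends:

```python
import http.client

conn = http.client.HTTPSConnection("example.com")
conn.set_debuglevel(1)            # print headers/body as they go over the wire
conn.request("GET", "/")
resp = conn.getresponse()
body = resp.read()
conn.close()
```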
0
2,108
0
0
2012-01-18T23:22:00.000
python,https,http-headers
Getting a raw, unparsed response to a HTTPS GET request
1
1
2
8,919,354
0
1
0
I wrote some basic Python code to scrape a remote webpage and grab a few pieces of data. On a different page I'm trying to scrape, the data is hidden from view, and only appears after changing the value of a <select> box. After de-minifying and digging through the remote website's javascript, I confirmed that it is using AJAX (custom implementation of Prototype I think) to switch the <tbody> of the <table> I'm interested in scraping. Is there a way to use Python (or Javascript via Python) to trigger the onChange event of that select box so I can "refresh" the DOM and grab the new HTML?
true
8,937,190
1.2
0
0
2
Figure out the AJAX request URL and request it directly. :-)
0
2,574
0
3
2012-01-20T05:38:00.000
javascript,python,dom,web-scraping
Trigger Javascript event on remote website with Python
1
1
1
8,943,959
0
0
0
I have not been able to make drag-and-drop work with Selenium 2, so I'm considering using Selenium 1 instead for drag-and-drop. Before I dive into Selenium 1, are there known complications of having tests based on both Selenium 1 and Selenium 2 at the same time?
true
8,939,727
1.2
0
0
1
My $0.02: Will it work? Most likely. Is it ideal? Probably not. It sounds similar to what used to be called "dependency hell"... Most of the time, making something work takes a higher priority than making something ideal. That being said, make sure to architect it in such a way that it's as clear as possible to the next guy why you are doing it, where and when it's being used and how it is done.
0
111
0
0
2012-01-20T10:09:00.000
python,selenium,webdriver
Working with both Selenium 1 and Selenium 2
1
1
1
8,943,628
0
0
0
In Python, I am firing SPARQL queries to get data from DBpedia. At approximately 7,000 queries in, my script hangs at the line results = sparql.query().convert(), which has already executed at least 5,000 times in the loop. Any idea what the issue could be?
false
8,965,123
0.53705
0
1
3
try splitting up the .query() and .convert() into two separate lines. I would guess that .query() is where it's hanging, and I would further guess that you are being rate-limited by DBPedia, but I can't find any information on what their limits might be.
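A sketch of the suggestion above: separate query() from convert() and throttle the loop a little (the endpoint, query and sleep interval are illustrative; it assumes the SPARQLWrapper package):

```python
import time
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://dbpedia.org/sparql")
sparql.setReturnFormat(JSON)

for name in ["Python_(programming_language)", "SPARQL"]:   # placeholder loop
    sparql.setQuery("""
        SELECT ?abstract WHERE {
          <http://dbpedia.org/resource/%s>
              <http://dbpedia.org/ontology/abstract> ?abstract .
          FILTER (lang(?abstract) = 'en')
        }""" % name)
    response = sparql.query()        # if it hangs, it hangs here...
    results = response.convert()     # ...not here
    print(results["results"]["bindings"])
    time.sleep(1)                    # be polite to the public endpoint
```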
0
157
0
1
2012-01-22T22:02:00.000
python,sparql,mysql-python,dbpedia
python script hangs at results = sparql.query().convert()
1
1
1
8,965,215
0
1
0
If I'm looking for the keyword "sales", I want to get the nearest "http://www.somewebsite.com" even if there are multiple links in the file. I want the nearest link, not the first link. This means I need to search for the link that comes just before the keyword match. This doesn't work... regex = (http|https)://[-A-Za-z0-9./]+.*(?!((http|https)://[-A-Za-z0-9./]+))sales sales What's the best way to find a link that is closest to a keyword?
false
8,966,244
0
0
0
0
I don't think you can do this with regex alone (especially looking before the keyword match), as it has no sense of comparing distances. I think you're best off doing something like this: (1) find all occurrences of sales and get their substring indices, called salesIndex; (2) find all occurrences of https?://[-A-Za-z0-9./]+ and get their substring indices, called urlIndex; (3) loop through salesIndex, and for each location i in salesIndex find the closest entry in urlIndex. Depending on how you want to judge "closest" you may need to get the start and end indices of the sales and http... occurrences to compare: i.e., find the end index of a URL that is closest to the start index of the current occurrence of sales, find the start index of a URL that is closest to the end index of the current occurrence of sales, and pick the one that is closer. You can use matches = re.finditer(pattern, string, re.IGNORECASE) to get a list of matches, and then match.span() to get the start/end substring indices for each match in matches.
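A rough sketch of that index-comparison approach (the HTML string is a made-up snippet; "closest" here means the smallest gap between the two spans):

```python
import re

html = 'See http://a.example.com for info. Our sales page: http://b.example.com'

url_re = re.compile(r'https?://[-A-Za-z0-9./]+')
kw_re = re.compile(r'sales', re.IGNORECASE)

urls = [(m.start(), m.end(), m.group()) for m in url_re.finditer(html)]

for kw in kw_re.finditer(html):
    k_start, k_end = kw.span()

    def distance(u):
        # Gap between the keyword span and the URL span, whichever end is nearer.
        u_start, u_end, _ = u
        return min(abs(k_start - u_end), abs(u_start - k_end))

    nearest = min(urls, key=distance)
    print(kw.group(), "->", nearest[2])
```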
0
366
0
4
2012-01-23T01:05:00.000
python,regex,negative-lookahead
Using Regex to Search for HTML links near keywords
1
1
4
8,966,597
0
0
0
I'm trying to test an application where a user enters information into a GUI, clicks the save button and then has to click "OK" in an alert/prompt popup window for the request to the web service to take place. I'm using a Python script to automate the requests to the web service. My question is: after submitting the user information, how can I interact with the alert/prompt popup to click the "OK" button so that the request completes? How is this done within a Python script? I'm grateful for any input.
false
8,982,704
0.099668
0
0
1
Javascript is just a client-side thing. It doesn't matter what you select in the alert box. If you press Cancel, it won't happen but if you press OK, it will POST the form data. You don't need to emulate a button press of OK. So, what really matters is to sniff the POST data. For that you can use Firebug (in Firefox) or Developer Tools (in Chrome) to sniff the POST parameters by using the Network tab.
0
2,520
0
3
2012-01-24T06:10:00.000
python
Interacting with alert/prompt popup window with python
1
1
2
8,982,872
0
0
0
In order to download files, I'm creating a urlopen object (urllib2 class) and reading it in chunks. I would like to connect to the server several times and download the file in six different sessions; that way, the download speed should be faster. Many download managers have this feature. I thought about specifying the part of the file I would like to download in each session, and somehow processing all the sessions at the same time. I'm not sure how I can achieve this.
false
9,007,456
0.197375
0
0
3
As we've already discussed, I made such a downloader using PycURL. The one and only thing I had to do was pycurl_instance.setopt(pycurl_instance.NOSIGNAL, 1) to prevent crashes. I used APScheduler to fire requests in separate threads. Thanks to your advice to change the busy waiting while True: pass to while True: time.sleep(3) in the main thread, the code behaves quite nicely, and using the Runner module from the python-daemon package the application is almost ready to be used as a typical UN*X daemon.
1
10,489
0
6
2012-01-25T17:45:00.000
python,http,asynchronous,urllib2,urllib
Parallel fetching of files
1
1
3
13,502,466
0
0
0
I am looking through the Tweepy API and am not quite sure how to find the event to register for when a user either sends or receives a new tweet. I looked into the Streaming API but it seems like that is only sampling the Twitter firehose and not really meant for looking at one individual user. What I am trying to do is have my program update whenever something happens to the user. Essentially what a user would see if they were in their account on the Twitter homepage. So my question is: What is the method or event I should be looking for in the Tweepy API to make this happen?
true
9,027,884
1.2
1
0
1
I used the .filter function then filtered for the user I was looking for.
0
183
0
0
2012-01-27T01:25:00.000
python,twitter,twitter-oauth,tweepy
How to register an event for when a user has a new tweet?
1
2
3
9,056,152
0
0
0
I am looking through the Tweepy API and am not quite sure how to find the event to register for when a user either sends or receives a new tweet. I looked into the Streaming API but it seems like that is only sampling the Twitter firehose and not really meant for looking at one individual user. What I am trying to do is have my program update whenever something happens to the user. Essentially what a user would see if they were in their account on the Twitter homepage. So my question is: What is the method or event I should be looking for in the Tweepy API to make this happen?
false
9,027,884
0.066568
1
0
1
I don't think there is any event based pub-sub exposed by twitter. You just have to do the long polling.
0
183
0
0
2012-01-27T01:25:00.000
python,twitter,twitter-oauth,tweepy
How to register an event for when a user has a new tweet?
1
2
3
9,028,060
0
0
0
I am looking for an existing library or code samples to extract the relevant parts from a MIME message structure in order to perform analysis on the textual content of those parts. I will explain: I am writing a library (in Python) that is part of a project that needs to iterate over a very large number of email messages through IMAP. For each message, it needs to determine which MIME parts it will need in order to analyze the textual content of the message with the least amount of parsing (e.g. prefer text/plain over text/html or rich text) and without duplicates (i.e. if text/plain exists, ignore the matching text/html). It also needs to address nested parts (text attachments, forwarded messages, etc) and all this without downloading the entire message body (which takes too much time and bandwidth). The end goal is later to retrieve only those parts in order to perform some statistical and pattern analysis on the text content of those messages (excluding any markup, metadata, binary data, etc). The libraries and examples I've seen require the full message body in order to assemble the message structure and understand the content of the message. I am trying to achieve this using the response from the IMAP FETCH command with the BODYSTRUCTURE data item. BODYSTRUCTURE should contain enough information to achieve my goal, but although the structure and returned data are officially documented in the relevant RFCs (3501, 2822, 2045), the amount of nesting, combinations and various quirks all add up to make the task very tedious and error prone. Does anyone know any libraries that can help to achieve this or any code samples (preferably in Python but any language will do)?
true
9,045,626
1.2
1
0
0
Answering my own question for the sake of completeness and to close this question. I couldn't find any existing library that meets the requirements. I ended up writing my own code to fetch the BODYSTRUCTURE tree, parse it and store it in an internal structure. This gives me the control I need to decide exactly which parts of the message I actually need to download, and to take into account various cases like attachments, forwards, redundant parts (plain text vs HTML), etc.
0
1,090
0
0
2012-01-28T13:32:00.000
python,email,imap,mime
MIME message structure parsing and analysis
1
1
2
13,953,238
0
0
0
I have an issue running the built-in Python server that comes with 3.1; this may or may not be an issue with Python, in fact it probably isn't. I start my server in the correct directory with "python -m http.server 8000" as the documentation suggests (http://docs.python.org/release/3.1.3/library/http.server.html). When I navigate to that port on my local network with another computer using the URL 192.168.2.104:8000 (my local IP and the port) my page loads. When I use my global IP, however, it stops working. Port 8000 is forwarded correctly. I used www.yougetsignal.com to verify that port 8000 was open using my global IP. Why in the world would Chrome be saying "Oops! Google Chrome could not connect to [REDACTED]:8000" then? Other server applications (such as my Minecraft server) work just fine. Is there something I'm missing? Furthermore, why would yougetsignal connect to my port but not Chrome?
true
9,058,731
1.2
0
0
1
With most routers ports are only mapped when someone connects from the outside (internet/WAN). You're testing it from your LAN so basically you're connecting to your router when you use your public IP. Ask a friend to test, i.e. from an outside connection.
0
297
0
1
2012-01-30T02:36:00.000
python,ip,port
Quick issue with Python 3.1 http server
1
1
1
9,058,904
0
1
0
I am writing a basic screen scraping script using Mechanize and BeautifulSoup (BS) in Python. However, the problem I am running into is that for some reason the requested page does not download correctly every time. I am concluding this because when searching the downloaded pages using BS for present tags, I get an error. If I download the page again, it works. Hence, I would like to write a small function that checks to see if the page has correctly downloaded and re-download if necessary (I could also solve it by figuring out what goes wrong, but that is probably too advanced for me). My question is how would I go about checking to see if the page has been downloaded correctly?
false
9,080,634
0
0
0
0
The most generic solution is to check that the </html> closing tag exists. That will allow you to detect truncation of the page. Anything else, and you will have to describe your failure mode more clearly.
0
109
0
0
2012-01-31T13:52:00.000
python,beautifulsoup,mechanize
Ensure a page has downloaded correctly in Python
1
2
3
9,081,637
0
1
0
I am writing a basic screen scraping script using Mechanize and BeautifulSoup (BS) in Python. However, the problem I am running into is that for some reason the requested page does not download correctly every time. I am concluding this because when searching the downloaded pages using BS for present tags, I get an error. If I download the page again, it works. Hence, I would like to write a small function that checks to see if the page has correctly downloaded and re-download if necessary (I could also solve it by figuring out what goes wrong, but that is probably too advanced for me). My question is how would I go about checking to see if the page has been downloaded correctly?
false
9,080,634
0
0
0
0
I think you may simply search for the closing html tag; if this tag is present, it is a valid page.
0
109
0
0
2012-01-31T13:52:00.000
python,beautifulsoup,mechanize
Ensure a page has downloaded correctly in Python
1
2
3
9,081,631
0
0
0
I'm thinking of constantly pinging a website and using Python's HTTP support to send a POST request to the login server if the network is down. Any suggestions?
true
9,088,191
1.2
0
0
0
You could create a web page with some javascript behind it with a timer and every 90 minutes or so have it programmatically log yourself in using javascript DOM methods. If the login is done by using a cookie you could always modify the cookie to never expire.
0
200
0
0
2012-01-31T22:48:00.000
python,http,networking
My internet connection requires a web based login after 90 minutes. What would be the best way to make sure I'm always online?
1
1
1
9,088,375
0
0
0
How can I automate running 3 Python scripts? Suppose I have 3 scripts, say a.py, b.py and c.py. a.py runs a web crawler and saves the result as an XML file; b.py parses the generated XML file and saves it as a pickle file; c.py inserts the list from the pickle file into a database. Is there a way to automate this?
false
9,091,281
0
0
0
0
Write a wrapper Python script that imports a, b and c and runs them in sequence (with error checking, notification and accounting). Then schedule this wrapper using the system cron daemon (if on UNIX).
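A minimal wrapper sketch; it assumes a.py, b.py and c.py each expose a main() function (if they are plain scripts, call them via subprocess instead), and the log file name and cron schedule are placeholders:

```python
import logging
import sys

import a
import b
import c

logging.basicConfig(filename="pipeline.log", level=logging.INFO)

def run():
    for step in (a.main, b.main, c.main):   # crawl -> parse -> load
        logging.info("running %s", step.__module__)
        step()

if __name__ == "__main__":
    try:
        run()
    except Exception:
        logging.exception("pipeline failed")
        sys.exit(1)

# Then schedule it, e.g. hourly, via crontab -e:
#   0 * * * * /usr/bin/python /path/to/wrapper.py
```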
1
4,739
0
4
2012-02-01T05:37:00.000
python,automation,cron
how to automate running a python script
1
1
2
9,091,488
0
1
0
Whenever there is a dilemma about where to do a particular task, which can be accomplished either by client-side code or by server-side code, which should be preferred? For example: I can iterate through a JavaScript object, construct a string and then send it to the server, or should I send the JavaScript object and process it on the server? My view: whenever I come across this situation I use client-side code, as it reduces the computation load on the server. What should be done in such a situation? Which is the right approach?
false
9,095,758
0.049958
0
0
1
String manipulation is pretty fast on the server side, but it's fine to do on the client side too. However, make sure to validate the string once it arrives at the server side, because any client can replicate requests to your server with a string and try to cause trouble.
0
215
0
1
2012-02-01T12:24:00.000
javascript,python,ruby
Design issue: client side or server side?
1
4
4
9,095,840
0
1
0
Whenever there is a dilemma about where to do a particular task, which can be accomplished either by client-side code or by server-side code, which should be preferred? For example: I can iterate through a JavaScript object, construct a string and then send it to the server, or should I send the JavaScript object and process it on the server? My view: whenever I come across this situation I use client-side code, as it reduces the computation load on the server. What should be done in such a situation? Which is the right approach?
false
9,095,758
0
0
0
0
You must take into consideration a lot of factors, including and especially security ones. For example, if you must do some complicated form validation, then it is extremely important to do it server-side, even if that puts a little more load on the server. Always keep in mind that a bad guy can change/break your javascript and send invalid (or possibly malicious) data to the server. When it comes to web programming, the single and most important rule of all is: Never trust the user. With anything. Period. In your case (with the String), you probably can do it client-side and verify on the server that it's ok.
0
215
0
1
2012-02-01T12:24:00.000
javascript,python,ruby
Design issue: client side or server side?
1
4
4
9,095,857
0
1
0
Whenever there is a dilemma about where to do a particular task, which can be accomplished either by client-side code or by server-side code, which should be preferred? For example: I can iterate through a JavaScript object, construct a string and then send it to the server, or should I send the JavaScript object and process it on the server? My view: whenever I come across this situation I use client-side code, as it reduces the computation load on the server. What should be done in such a situation? Which is the right approach?
true
9,095,758
1.2
0
0
1
(This answer assumes a web programming context) I personally put a lot of front-end logic in Javascript running on the client side, because it makes things very responsive, even for people with a slow connection. This also means that some functionality continues working even if the user goes offline. I find some programmers use AJAX for things which don't reallly require communication with the server at all. When using this technique, if possible, I find it helpful to embed all the data which the front-end Javascript may need in the page, so it doesn't have to go back and request more data from the server using an AJAX call. (In general, whenever data is being communicated over a high-latency channel, you want to send it in big chunks so the client doesn't have to keep constantly coming back for more. The same applies to DB queries: you can often increase performance by pulling back all the data you might need with a single query, rather than making numerous fine-grained queries. Even if you don't end up using all of that data, performance will still usually be better.) One caution: unless you are very disciplined to keep things organized, Javascript which updates the UI by dynamically generating HTML in response to user actions can become very hairy. If the Javascript is also dynamically generated using another language (running on the server side), things can get downright frightening. Testability is also important in any app. If you are using something like Selenium, testing a combination client/server web app may not be a problem, but if you aren't, testing server-side code may be easier. As others have noted, you must also make sure that you don't compromise security. Don't expect that people will only invoke your server-side code in the way you provide for. View all the types of requests your server handles as forming an "API" which people may invoke with any arguments and in any order.
0
215
0
1
2012-02-01T12:24:00.000
javascript,python,ruby
Design issue: client side or server side?
1
4
4
9,096,236
0
1
0
Whenever there is a dilemma about where to do a particular task, which can be accomplished either by client-side code or by server-side code, which should be preferred? For example: I can iterate through a JavaScript object, construct a string and then send it to the server, or should I send the JavaScript object and process it on the server? My view: whenever I come across this situation I use client-side code, as it reduces the computation load on the server. What should be done in such a situation? Which is the right approach?
false
9,095,758
0
0
0
0
I am a django/python developer and my logic is "templates should be used for rendering". I keep my load minimum on the client side and all computation is done on the server side. Another reason for doing this is because a user cannot be trusted. Keep the equation simple for the user.
0
215
0
1
2012-02-01T12:24:00.000
javascript,python,ruby
Design issue: client side or server side?
1
4
4
9,095,862
0
0
0
For an imported module, is it possible to get the importing module (name)? I'm wondering whether inspect can achieve this or not.
false
9,106,166
0.148885
1
0
3
Even if you got it to work, this is probably less useful than you think since subsequent imports only copy the existing reference instead of executing the module again.
1
106
0
2
2012-02-02T02:04:00.000
python
Is it possible to get "importing module" in "imported module" in Python?
1
2
4
9,106,241
0
0
0
For an imported module, is it possible to get the importing module (name)? I'm wondering whether inspect can achieve this or not.
false
9,106,166
0.148885
1
0
3
It sounds like you solved your own problem: use the inspect module. I'd traverse up the stack until I found a frame where the current function was not __import__. But I bet if you told people why you want to do this, they'd tell you not to.
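A fragile, CPython-oriented sketch of that stack-walking idea, to be run at import time inside the imported module; the assumption that filtering on 'importlib'/'__import__' is enough to skip the import machinery depends on the interpreter version:

```python
# imported_module.py
import inspect

def _find_importer():
    for frame_info in inspect.stack()[1:]:
        filename = frame_info[1]
        function = frame_info[3]
        if "importlib" in filename or function == "__import__":
            continue                      # still inside the import machinery
        # First "ordinary" frame: read the importing module's __name__
        return frame_info[0].f_globals.get("__name__")
    return None

IMPORTED_BY = _find_importer()
print("imported by:", IMPORTED_BY)
```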
1
106
0
2
2012-02-02T02:04:00.000
python
Is it possible to get "importing module" in "imported module" in Python?
1
2
4
9,106,211
0
1
0
I heard the SubmitFeed API is for adding products, but I didn't find any example. By the way, I need a Python solution. Thanks a lot.
false
9,123,764
0.379949
0
0
2
The general gist of it is that you use SubmitFeed to send your product list. Then you must check the status of the submission. Once the submission is complete you can then get the results. You have to repeat these steps for images, pricing and availability. It's a bit of a pain to get started with; Amazon supply a LOT of useful information but it is everywhere and not particularly easy to understand at first. Experiment with just adding products to your inventory and go from there. Make use of the scratchpad too, a very handy tool indeed. As for Python, I can't help you there I'm afraid, but I think there is sample code within the Python download available from Amazon.
0
2,077
0
6
2012-02-03T04:40:00.000
python,amazon-mws
How to upload/publish products to Amazon via Amazon MWS API?
1
1
1
9,312,763
0
0
0
I need to write a python script that constantly checks a remote web service for updates. The faster it loops the better. How do I get a script to run on my server over and over again without me having to manually start it each time? And if the server crashes or something, how does this script automatically start up again? thanks
false
9,124,657
0.132549
0
0
2
If you really want to run it as fast as possible, there is an alternative to using cron, which is to write the Python program as an endless loop and then start it as a background process using nohup python script.py &. The output of the Python process will then be written to nohup.out.
0
3,351
0
3
2012-02-03T06:36:00.000
python,process,cron
constantly running a script on my server
1
2
3
9,126,061
0
0
0
I need to write a python script that constantly checks a remote web service for updates. The faster it loops the better. How do I get a script to run on my server over and over again without me having to manually start it each time? And if the server crashes or something, how does this script automatically start up again? thanks
true
9,124,657
1.2
0
0
5
Rather than making your script loop many times, just write it to perform this task a single time. Then run the script multiple times, as often as you wish, as a cronjob. Edit your cron table to specify timings using the command crontab -e. You needn't worry about server restarts because cron will be started as a service automatically.
0
3,351
0
3
2012-02-03T06:36:00.000
python,process,cron
constantly running a script on my server
1
2
3
9,124,719
0
0
0
I have a bunch of mp3 files that are pretty old and don't have any copyrights. Yet, the place I got them from has filled the copyright tags with its own website URL. I was wondering if there's an easy way to remove these tags programmatically? There's a Winamp add-on that allows me to do this for each song, but that's not very feasible. Edit: Is copyright part of the ID3 tags? Thanks, -Roozbeh
false
9,125,733
0.028564
1
0
1
You can just use VLC player. Click on Tools->Media Information
0
45,647
0
3
2012-02-03T08:31:00.000
php,python,id3
How can I remove the copyright tag from ID3 of mp3s in python or php?
1
3
7
19,575,869
0
0
0
I have a bunch of mp3 files that are pretty old and don't have any copyrights. Yet, the place I got them from has filled the copyright tags with its own website URL. I was wondering if there's an easy way to remove these tags programmatically? There's a Winamp add-on that allows me to do this for each song, but that's not very feasible. Edit: Is copyright part of the ID3 tags? Thanks, -Roozbeh
false
9,125,733
-0.028564
1
0
-1
No need for any PHP code. Just reproduce the mp3 file, i.e. either burn and rip it, or cut its size/time to make a new file, where you can specify your own multitude of options.
0
45,647
0
3
2012-02-03T08:31:00.000
php,python,id3
How can I remove the copyright tag from ID3 of mp3s in python or php?
1
3
7
39,645,756
0
0
0
I have a bunch of mp3 files that are pretty old and don't have any copyrights. Yet, the place I got them from has filled the copyright tags with its own website URL. I was wondering if there's an easy way to remove these tags programmatically? There's a Winamp add-on that allows me to do this for each song, but that's not very feasible. Edit: Is copyright part of the ID3 tags? Thanks, -Roozbeh
false
9,125,733
0
1
0
0
Yes, this works! Just download the latest version of VLC media player and open the mp3 file in it. Right-click the file > choose 'Information' > edit the publisher and copyright information there > click 'Save Metadata' below. And you are done. :)
0
45,647
0
3
2012-02-03T08:31:00.000
php,python,id3
How can I remove the copyright tag from ID3 of mp3s in python or php?
1
3
7
26,053,995
0
0
0
I am using Google's OAuth 2.0 to get the user's access_token, but I don't know how to use it with imaplib to access the inbox.
false
9,134,491
-0.049958
1
0
-1
IMAP does not support accessing the inbox without a password, so imaplib doesn't either.
0
6,657
0
12
2012-02-03T19:38:00.000
python,gmail,oauth-2.0,gmail-imap,imaplib
Access Gmail Imap with OAuth 2.0 Access token
1
1
4
11,414,012
0
0
0
Can different cryptographic libraries be used for the server and the client? I want to implement TLS. The server is currently written in Python; the client is written in C#. For example, using OpenSSL with M2Crypto for the client and using Bouncy Castle for the server.
true
9,156,981
1.2
0
0
3
Absolutely. They only need to share the same protocol.
0
187
0
2
2012-02-06T07:42:00.000
c#,python,openssl,bouncycastle,ssl
Using different crypto library for server and client
1
1
1
9,157,076
0
0
0
Given a protobuf serialization is it possible to get a list of all tag numbers that are in the message? Generally is it possible to view the structure of the message without the defining .proto files?
true
9,158,329
1.2
1
0
2
Most APIs will indeed have some form of reader-based API that allows you to enumerate a raw protobuf stream. However, that by itself is not enough to fully understand the data, since without the schema the interpretation is ambiguous: a varint could be zig-zag encoded (sint32/sint64), or not (int32/int64/uint32/uint64) - radically changing the meaning - or a boolean, or an enum; a fixed-32/fixed-64 could be a signed or unsigned integer, or could be an IEEE754 float/double; a length-prefixed chunk could be a UTF-8 string, a BLOB, a sub-message, or a "packed" repeated set of primitives, and if it is a sub-message, you'll have to repeat recursively. So... yes and no. Certainly you can get the field numbers of the outermost message. Another approach would be to use the regular API against a type with no members (message Naked {}), and then query the unexpected data (i.e. all of it) via the "extension" API that many implementations provide.
0
428
0
2
2012-02-06T09:54:00.000
java,python,google-api,protocol-buffers
Can all tag numbers be extracted from a given protobuf serialization?
1
1
2
9,158,407
0
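As a rough illustration of the accepted answer's first point - that the field numbers of the outermost message can be enumerated straight from the wire format even without the .proto - here is a sketch that walks a raw protobuf byte string and records (field number, wire type) pairs while skipping the payloads. It is an illustrative scanner, not part of any official protobuf API, and it ignores the deprecated group wire types.

    # Wire types: 0 = varint, 1 = 64-bit, 2 = length-delimited, 5 = 32-bit.
    def read_varint(buf, pos):
        result, shift = 0, 0
        while True:
            b = buf[pos]
            pos += 1
            result |= (b & 0x7F) << shift
            if not b & 0x80:
                return result, pos
            shift += 7

    def scan_fields(buf):
        pos, fields = 0, []
        while pos < len(buf):
            key, pos = read_varint(buf, pos)
            field_no, wire_type = key >> 3, key & 0x07
            fields.append((field_no, wire_type))
            if wire_type == 0:                    # varint payload
                _, pos = read_varint(buf, pos)
            elif wire_type == 1:                  # fixed 64-bit payload
                pos += 8
            elif wire_type == 2:                  # length-delimited payload
                length, pos = read_varint(buf, pos)
                pos += length
            elif wire_type == 5:                  # fixed 32-bit payload
                pos += 4
            else:
                raise ValueError("unsupported wire type %d" % wire_type)
        return fields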
0
0
I'm writing a small Python script which requires resolving hosts IPs or domain names. Normally I'd use gethostbyname or gethostbyaddr. However, whole traffic is sent via proxy. I'm able to retrieve data using curl with -x option. My question is how can I resolve hostname and IP with proxy on the way? In Python I can't use socks. Thank you in advance.
true
9,158,845
1.2
0
0
0
I believe that if you use an HTTP proxy, name resolution for a symbolic hostname will be done by the proxy as well. If you want to wrap it in Python, use pycurl and setopt pycurl.HTTPPROXYTUNNEL.
0
1,209
0
1
2012-02-06T10:39:00.000
python,linux,curl,proxy,dns
Hostname resolve using proxy
1
1
1
9,160,090
0
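A sketch of the pycurl suggestion above; the option names mirror libcurl's CURLOPT_* constants, and the proxy host/port and URL are placeholder assumptions.

    # Fetch a URL through an HTTP proxy, letting the proxy resolve the hostname.
    import pycurl
    from io import BytesIO

    buf = BytesIO()
    c = pycurl.Curl()
    c.setopt(pycurl.URL, "http://example.com/")
    c.setopt(pycurl.PROXY, "proxy.internal")      # placeholder proxy host
    c.setopt(pycurl.PROXYPORT, 3128)              # placeholder proxy port
    c.setopt(pycurl.HTTPPROXYTUNNEL, 1)           # tunnel (CONNECT) through the proxy
    c.setopt(pycurl.WRITEFUNCTION, buf.write)
    c.perform()
    c.close()
    print(buf.getvalue()[:200])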
0
0
Once I run a .py script to send 10000 UDP packets (length: 110) using socket sendto(), the server receives about 400 messages quickly, and later it becomes very slow, taking more than 10 s per message. It is weird that if I run the .py again, another 400 messages are received quickly. Is there a buffer limit or a UDP problem in this situation? Yes, I got it! UDP delivery to the destination is uncertain, and continuous packets jam after about 300 messages, so I have to add time.sleep(0.2) to make it work. Now I'm trying multiple server processes with a time interval.
false
9,171,383
0
0
0
0
It might just be that other things are using your network card to send at the same time, and the system can only send 400ish from your application at a time before something else needs to use the NIC. It sounds more like an issue of network usage, than a problem with UDP or something.
0
569
0
1
2012-02-07T05:22:00.000
python,udp,sendto
UDP sendto() became very slow, python
1
1
1
9,171,420
0
0
0
I'm using the socket module in Python to do some basic UDP client-server communication. What I would need to do is quite simple: client sends server a packet, server answers with client's public ip address, port and a number representing the TTL the UDP packet had when it got to the server. This is my main problem: is there any way to recieve a packet with recvfrom() or so, and read the TTL value it had when it reached my server? Thank you very much! Matteo Monti
false
9,237,006
0.066568
0
0
1
Python 3.5.1 has support for flags such as IP_RECVTTL or IP_RECVTOS. I gave it a try and it worked for me on a 3.x Linux kernel.
0
2,675
0
1
2012-02-11T01:28:00.000
python,sockets,udp,ip,ttl
Getting TTL of incoming UDP packet in Python
1
1
3
37,691,767
0
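A sketch of how the flag from the answer above can be combined with recvmsg() to read the TTL from the ancillary data. This assumes a Linux build of Python 3.5+ where socket.IP_RECVTTL is exposed; on Linux the control message comes back with type IP_TTL, and other platforms behave differently.

    # Read the remaining TTL of incoming UDP datagrams (Linux sketch).
    import socket
    import sys

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 9999))                              # placeholder port
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_RECVTTL, 1)  # ask the kernel for the TTL

    data, ancdata, flags, addr = sock.recvmsg(4096, socket.CMSG_SPACE(4))
    ttl = None
    for cmsg_level, cmsg_type, cmsg_data in ancdata:
        if cmsg_level == socket.IPPROTO_IP and cmsg_type == socket.IP_TTL:
            ttl = int.from_bytes(cmsg_data, sys.byteorder)    # host byte order integer
    print(addr, ttl)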
0
0
I'm somewhat new to Python, and am trying to build a standalone parser. The actual parser works when executed through python, but I get an error when I try to run it after it's been converted into an exe file. I need it to be able to run without any third-party software. The error says that there is no module named xml.dom Help!
false
9,244,255
0
0
0
0
I cannot post comments currently, but if you are using py2exe, make sure you are ONLY importing modules from your Python path. If you are not, you might want to check whether your module goes into the same path your program specifies. Py2exe (assuming you have it) creates two directories when it compiles (by default) and moves the executable into a dist folder that has all the modules in a zip file (called a library). Py2exe WILL NOT search for custom import statements to modules outside of your Python path. Thus you must manually place the folder or module wherever the program originally called for it (within a folder, or the same directory the compiled exe is in). Sorry if this isn't helpful.
1
292
0
0
2012-02-11T21:35:00.000
python,xml,parsing,exe
XML Parsing using Python converted to an exe file
1
1
1
9,244,785
0
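Building on the answer above: when the missing package is part of the standard library but py2exe's dependency scan misses it, it can usually be forced into the bundle via the includes option. A hedged setup.py sketch follows; the script name is a placeholder.

    # setup.py -- force py2exe to bundle xml.dom explicitly (sketch).
    from distutils.core import setup
    import py2exe  # registers the py2exe command with distutils

    setup(
        console=["parser.py"],                       # placeholder script name
        options={
            "py2exe": {
                "includes": ["xml.dom", "xml.dom.minidom"],
            }
        },
    )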
0
0
I have a very large XML file with 40,000 tag elements. When I am using ElementTree to parse this file it's giving errors due to memory. So is there any module in Python that can read the XML file in chunks without loading the entire XML into memory? And how can I implement that module?
false
9,249,219
0.197375
0
0
2
This is a problem that people usually solve using sax. If your huge file is basically a bunch of XML documents aggregated inside an overall XML envelope, then I would suggest using sax (or plain string parsing) to break it up into a series of individual documents that you can then process using lxml.etree.
0
3,249
0
7
2012-02-12T13:42:00.000
python
How to parse XML file in chunks
1
1
2
9,253,503
0
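Besides sax, the iterparse() API (available in both xml.etree.ElementTree and lxml.etree) is a common incremental middle ground: it streams parse events and lets you clear elements you have finished with so memory stays flat. A sketch, with the file and tag names as placeholders:

    # Stream a huge XML file element by element without holding the whole tree.
    import xml.etree.ElementTree as ET   # or: from lxml import etree as ET

    def process(elem):
        pass  # handle one record element here

    for event, elem in ET.iterparse("huge.xml", events=("end",)):
        if elem.tag == "record":          # placeholder element name
            process(elem)
            elem.clear()                  # release the element just handled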
0
0
I am thinking of launching a Hadoop cluster on Amazon EC2 to download a few tens of thousands of files and later do some processing of them, but before putting too much work into it I would like to know if anyone more experienced with Hadoop than me thinks that it is possible. I have some doubts about being able to download files on Hadoop slaves. If you think that this is possible, can I expect each slave running on Amazon EC2 to have a different IP address? I would like to use Python to do most of the job (e.g. the urllib2 module for downloading) and as little Java as possible.
true
9,259,531
1.2
0
0
0
It's possible to download data onto Hadoop on EC2. Hadoop has a distributed file system (HDFS) which takes care of placing blocks of data onto the slaves, and also honors the replication factor specified in configurations. The slaves in EC2 have different IP addresses.
0
157
1
0
2012-02-13T11:16:00.000
python,hadoop,amazon-ec2
Downloading many large files through Amazon EC2 Hadoop
1
1
1
9,377,499
0
0
0
I'm looking for a Python library for easily creating a server which exposes web services (SOAP), and can process multiple requests simultaneously. I've tried using ZSI and rpclib, but with no success. Update: Thanks for your answers. Both ZSI and rpclib (the successor of soaplib) implement their own HTTP server. How do I integrate ZSI/rpclib with the libraries you mentioned? Update 2: After some tweaking, I managed to install and run this on Linux, and it seems to work well. Then I installed it on Windows, after a lot of ugly tweaking, and then I stumbled upon the fact that WSGIDaemonProcess isn't supported on Windows (also mentioned in the mod_wsgi docs). I tried to run it anyway, and it does seem to work on each request asynchronously, but I'm not sure it will work well under pressure. Thanks anyway...
false
9,308,629
0.066568
0
0
1
Excuse me, maybe I didn't understand you right. I think that you want your server to process HTTP requests in parallel, but then you don't need to think about your code/library. Parallelizing should be done by Apache httpd and the mod_wsgi/mod_python module. Just set up httpd.conf with 'MaxClients 100', for example, and 'WSGIDaemonProcess webservice processes=1 threads=100', for example.
0
1,184
0
1
2012-02-16T09:29:00.000
python,multithreading,web-services,soap
Python library for a SOAP server that handles multiple requests simultaneously?
1
1
3
9,310,348
0
0
0
I want to write a server for a browser-based MMO game, which uses WebSocket for communication, SQL Server for database, and the language of choice for server is Python. What I would like to know is which libraries can provide Websocket and MMO support, and should I use Stackless or PyPy?
false
9,363,363
0.066568
0
0
1
Tornado is definitely a good choice for what you are doing. It supports web sockets with the latest version and it works fine with PyPy if you are concerned about performance. I already have a prototype MMO working with this set up and it works great. Also you can add new connection types later. So you could start with web sockets, but if you ported the game client to a mobile device you can add a TCP handler into the game with minimal effort. On the database side, I would consider looking around at other options. Maybe SQL Server is perfect for your needs, but I am more inclined to use something like Membase (renamed Couchbase recently) if you can do without the database being relational. Only because it scales well and seems to be very efficient on cloud hardware. Good luck with your endeavour.
0
2,452
0
2
2012-02-20T15:11:00.000
python,sql-server,html,websocket
Writing browser-based MMO server in Python
1
1
3
10,478,720
0
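To make the Tornado recommendation above concrete, here is a minimal WebSocket echo handler in the Tornado API of that era; the URL pattern, port and game logic are placeholders.

    # Minimal Tornado WebSocket endpoint (sketch).
    import tornado.ioloop
    import tornado.web
    import tornado.websocket

    class GameSocket(tornado.websocket.WebSocketHandler):
        def open(self):
            print("player connected")

        def on_message(self, message):
            self.write_message("echo: " + message)   # real game logic goes here

        def on_close(self):
            print("player disconnected")

    application = tornado.web.Application([(r"/ws", GameSocket)])

    if __name__ == "__main__":
        application.listen(8888)
        tornado.ioloop.IOLoop.instance().start()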
0
0
I want to ask you guys how to make my PHP (or Python) socket server start when a client makes a request to a specific file and stop when the client stops. Also, is there a way to make a PHP or Python socket server not open any ports (maybe to use port 80, which I think is possible, thanks to the request above)? I'm using public hosting which doesn't allow me to open ports or to use terminal commands.
false
9,366,899
0
1
0
0
Erm, sorry, you can't do WebSockets (at least not properly to my knowledge) without opening ports. You might be able to fake it with PHP, but the timeout would defeat it. I would recommend Comet AJAX/long-polling instead.
0
87
0
0
2012-02-20T19:06:00.000
php,python,sockets
html5 websockets OR flash sockets activated on load?
1
1
1
9,367,348
0
1
0
I am working on some programs in Spanish, so I need to use accent marks. This is why I use # -*- coding: iso-8859-1 -*- and <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"> in all my programs (Python). I tested in Chrome, Firefox and Safari and they all work, putting in the accent marks. The only one that does not work is IE8. It does not apply the accent mark, and adds some other character instead. Does anyone know if there is a problem with IE8? Is it better to use UTF-8 instead?
false
9,370,343
0.379949
1
0
4
It is better to use UTF-8. Note that "iso-8859-1" is a common mislabeling of "windows-1252", also known as "cp1252". Try being more explicit and see if this resolves your issues.
0
1,203
0
3
2012-02-21T00:08:00.000
python,html,utf-8,iso-8859-1
ISO-8859-1 Not working on IE
1
2
2
9,370,450
0
1
0
I am working on some programs in Spanish, so I need to use accent marks. This is why I use # -*- coding: iso-8859-1 -*- and <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"> in all my programs (Python). I tested in Chrome, Firefox and Safari and they all work, putting in the accent marks. The only one that does not work is IE8. It does not apply the accent mark, and adds some other character instead. Does anyone know if there is a problem with IE8? Is it better to use UTF-8 instead?
true
9,370,343
1.2
1
0
2
Yes, it is better to use UTF-8 instead. Your question really cannot be answered unless you also provide the bytes that you are sending.
0
1,203
0
3
2012-02-21T00:08:00.000
python,html,utf-8,iso-8859-1
ISO-8859-1 Not working on IE
1
2
2
9,370,369
0
0
0
I have hundreds of small (on the order of kilobytes) XML files whose information I need to use at run-time. All of the data in these XML files is useful to me, not just some. At runtime, as I hit the need for information from one of these I could construct an ElementTree, parse the XML file, and iterate over it recursively - resulting in a python object that I keep around while throwing away the DOM. But if I'm going to ship the XML files and parse them at runtime I'm wondering if I ought to look at a forward-only parser rather than a DOM-based parser. Given that this data is static at build-time, perhaps I ought to even parse the XML into python objects, pickle them, ship 'em, and un-pickle them at runtime. I haven't used pickling yet - would it allow for the use of dictionaries, etc? Or is it meant for very basic data structures? Hope I'm being clear - I have a lot of data in XML files that I'll use at runtime, and I'm wondering what would be fastest (at run-time) to access this data. I don't mind leaving it in memory at runtime once it's been accessed once. I can share an example of the data if that would be helpful (whether in XML format or what I'd want the python class/object to look like)... EDIT: A few people have mentioned lxml. I'll go look into that. Anyone have links to parsing data from xml using lxml versus un-pickling?
false
9,388,461
0.53705
0
0
3
lxml is the fastest XML parser for Python. I would keep it in XML format unless size is an issue. Combine your XML files together if speed is an issue. Depending on your data, putting your information into an SQLite database might be a good choice as well.
1
227
0
1
2012-02-22T02:52:00.000
python,xml,pickle
Python - Need to parse all the elements of a number of XML files. Fastest parser?
1
1
1
9,388,522
0
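On the pickling half of the question: pickle handles dictionaries, lists and most nested Python objects, so a build-time step can parse each XML file into a plain dict and the runtime only unpickles. A sketch; how the XML flattens into a dict depends entirely on the real data, so the structure below is an assumption.

    # Build time: parse XML into plain dicts and pickle them; run time: unpickle.
    import pickle
    import xml.etree.ElementTree as ET

    def xml_to_dict(path):
        root = ET.parse(path).getroot()
        return {child.tag: dict(child.attrib) for child in root}   # placeholder mapping

    # build step
    with open("item0001.pkl", "wb") as fh:
        pickle.dump(xml_to_dict("item0001.xml"), fh, protocol=pickle.HIGHEST_PROTOCOL)

    # run time
    with open("item0001.pkl", "rb") as fh:
        data = pickle.load(fh)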
0
0
When I use HTTPConnection under Python 2.4, if the server does not answer, the connection is kept open forever. How can I break it?
false
9,393,118
0
0
0
0
Use that connection in the separate thread and join it using the desired timeout value.
1
731
0
1
2012-02-22T10:32:00.000
python
How to set the timeout on HTTPConnection under python 2.4
1
1
3
9,393,202
0
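A sketch of the thread-plus-join idea from the answer above (on newer Python versions HTTPConnection accepts a timeout argument directly, and socket.setdefaulttimeout() is another common workaround). The hostname is a placeholder; module naming is the Python 3 spelling, httplib on Python 2.4.

    # Run the request in a worker thread and give up waiting after a fixed timeout.
    import threading
    import http.client

    result = {}

    def fetch():
        conn = http.client.HTTPConnection("example.com")   # placeholder host
        conn.request("GET", "/")
        result["body"] = conn.getresponse().read()

    worker = threading.Thread(target=fetch)
    worker.daemon = True        # a hung request will not keep the process alive
    worker.start()
    worker.join(10.0)           # wait at most 10 seconds
    if worker.is_alive():
        print("request timed out")
    else:
        print(len(result.get("body", b"")), "bytes received")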
0
0
How can we identify distinct computers/devices on an intranet? This is possible using cookies but that is not foolproof. I am expecting something on the lines of finding local ip address. It would be great if you mention some tools(libraries) required to integrate it with an intranet application. The application is designed in Python(Django).
true
9,397,234
1.2
0
0
0
You can get the client (computer connecting to your web server) IP address from the HttpRequest object. If your Django view is def MyView(request): you can get the IP from request.META.get('REMOTE_ADDR'). Is that what you're looking for?
0
1,372
0
2
2012-02-22T15:00:00.000
python,django,networking,intranet
How to identify computers on intranet?
1
1
2
9,399,359
0
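A minimal view along the lines of the answer above. Note (an assumption about typical deployments): behind a proxy or load balancer, REMOTE_ADDR may hold the proxy's address and a header such as X-Forwarded-For has to be consulted instead.

    # views.py -- echo the requesting client's IP address (sketch).
    from django.http import HttpResponse

    def whoami(request):
        ip = request.META.get("REMOTE_ADDR", "unknown")
        return HttpResponse("Your IP address is %s" % ip)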
0
0
I have a set of python Selenium tests that run on chromedriver, and I've found that certain tests will fail occasionally because chromedriver crashes. If chromedriver crashes during one of my tests it's almost always at the same spot in that test, but I've looked at the tests and there doesn't seem to be anything that would cause the crash. Often it's just a link click that seems to cause it. I can run the same test twice and one time it will pass, the other time it will fail because chromedriver crashes. I'm running the latest version of the selenium standalone server (2.18.0), Chrome version 17 and python version 2.7.1. Does anyone know why this might be happening? Thanks in advance!
false
9,400,487
0
0
0
0
Or you can try to move the mouse pointer to the 0,0 coordinates, because the click event is not as reliable as it is in Firefox.
0
2,525
0
2
2012-02-22T18:09:00.000
python,selenium,selenium-chromedriver
Selenium chromedriver crashes on some test runs
1
2
2
31,517,267
0
0
0
I have a set of python Selenium tests that run on chromedriver, and I've found that certain tests will fail occasionally because chromedriver crashes. If chromedriver crashes during one of my tests it's almost always at the same spot in that test, but I've looked at the tests and there doesn't seem to be anything that would cause the crash. Often it's just a link click that seems to cause it. I can run the same test twice and one time it will pass, the other time it will fail because chromedriver crashes. I'm running the latest version of the selenium standalone server (2.18.0), Chrome version 17 and python version 2.7.1. Does anyone know why this might be happening? Thanks in advance!
false
9,400,487
0.099668
0
0
1
What often happens in chromedriver is that when an element is not in the visible region (for example, if there are vertical scroll bars and the web element is not in the visible region) the driver will throw an "Element not clickable" error, which is essentially saying that the element you are trying to click is not currently visible to the user although it is present in the DOM. IE and FF do not have this issue because they auto-scroll to the focused web element.
0
2,525
0
2
2012-02-22T18:09:00.000
python,selenium,selenium-chromedriver
Selenium chromedriver crashes on some test runs
1
2
2
11,804,002
0
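A common mitigation for the "element not in the visible region" failures described in the answers is to scroll the element into view before clicking. This is a general workaround sketch, not a guaranteed fix for the crashes in the question; the URL and locator are placeholders.

    # Scroll an element into view before clicking it (workaround sketch).
    from selenium import webdriver

    driver = webdriver.Chrome()
    driver.get("http://example.com/page")             # placeholder URL

    element = driver.find_element_by_id("submit")     # placeholder locator
    driver.execute_script("arguments[0].scrollIntoView(true);", element)
    element.click()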
0
0
I'm using networkx to manage a large network graph which consists of 50k nodes. I want to calculate the shortest path length between a specific set of nodes, say N. For that I'm using the nx.shortest_path_length function. For some of the nodes in N there might not be a path, so networkx raises an exception and stops my program. Is there any way to run this program without any error, and to tell shortest_path_length to return some maximum value? The code simply uses nx.shortest_path_length(G,i,j) in a loop, and the error is as follows: raise nx.NetworkXNoPath("No path between %s and %s." % (source, target)) networkx.exception.NetworkXNoPath: No path between V and J
false
9,430,027
0
0
0
0
Alternatively, depending on the type of graph (namely, directed, strongly or weakly connected, or undirected), create component subgraphs (sub_G), that is, (G.subgraph(c) for c in connected_components(G)), or if directed: nx.weakly_connected_component_subgraphs(G) or nx.strongly_connected_component_subgraphs(G). Furthermore, given sub_G is a directed graph, check for the strength of its connections, e.g. nx.is_strongly_connected(sub_G) or nx.is_weakly_connected(sub_G). Combined or individually, these recommendations will reduce unnecessary checking of paths that do not exist due to the nature of the component subgraph(s).
0
14,103
0
14
2012-02-24T11:31:00.000
python,networkx
Networkx - Shortest path length
1
1
2
56,307,005
0
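Complementing the component-subgraph suggestion above, the exception from the question can also simply be caught so the loop keeps running and unreachable pairs get a sentinel value; a sketch:

    # Return a sentinel (infinity) instead of raising when no path exists.
    import networkx as nx

    def safe_path_length(G, source, target, no_path=float("inf")):
        try:
            return nx.shortest_path_length(G, source, target)
        except nx.NetworkXNoPath:
            return no_path

    # usage: lengths = {(i, j): safe_path_length(G, i, j) for i in N for j in N}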
0
0
first thing is, I'm a long time lurker but the first time user, I'd like to thank you all for creating the site! I'm in a situation that I need to implement the client part of a proprietary protocol. The protocol uses TCP/IP underneath and the message flow can be summarized as follows: Client connects to server Client expresses interest in data of a certain type If server has any such data, it it sent down to the client Client confirms the reception to the server Client now needs to tell the server that it is still interested in data of the same type Server sends the data to the client as it is coming in Client needs to send application-level keep-alive requests to the server from time to time (like each minute or so) Some messages from the server require the client to send a reply back to the server Client disconnects All of that is to happen within a single TCP session, which is to be a long-living one, sort of like WebSocket I imagine. Another thing is that the client is to be deployed on a multi-threaded server and should allow for concurrent access from several threads, which means that a single thread should be able to 'subscribe' to messages of a certain type from the server and should also be able to send the messages to it. I'm well aware of GIL hence I guess it doesn't much make to comment on that, I'm only writing a client, I can't change any other pieces of the architecture. So the thing is that I've never had to delve below the HTTP level, I've been lucky enough to always use some already existing libraries and on the other hand, I haven't done that much network programming, that would be the first time I'll be something like that myself. I'd like it to be an opportunity for me to learn more about all the asynchronous libraries/tools/toolkits like select, epoll, libev or gevent. The trouble is that most of the resources on the web deal with writing servers and I'm not even sure if that I'm to deploy the client on a multi-threaded doesn't mean all of them won't do much. All of the servers people are obviously single-threaded but it's not clear if it's because multiple threads aren't needed or if it's because things like epoll don't really like multiple threads using them. The server clearly treats all the clients like if they were single-threaded units so I think I will need to serialize the access to the client. The thing I can't quite get is how to make sure the server responses match threads and vice versa. If one thread receives a message but another thread needs to confirm a message it received prior to the former one's getting a chance of confirming it, then how I can be sure the messages simply don't get mixed up? What do you think about it all? Are asynchronous libraries a good choice in this case? Can you think of any code samples I could have a look at? I'm using Python but the question I think is generic enough that I can use C, C++ or Java for inspiration. Many, many thanks!
false
9,434,494
0
0
0
0
Is it possible for each thread (each client) to open their own socket? In that case, it's all a non-issue: Only the client in that one thread has a handle on that socket and so it automatically gets the correct data from the server. For the server, all of these connections from the client will look like completely independent client connections (which is exactly what they are). If one of the requirements is to limit the overall number of network connections then you could construct a special thread that maintains the connection to the server, and which locally receives communication requests from the various threads/ But using the independent sockets (one per thread) is probably much simpler. Do you absolutely have to use application level keepalive? Because TCP can do this for you, automatically. If the keepalive is not received in time then the socket is closed, thus informing the other side that the connection has timed out. Do consider this as an option, if it's possible in your case. Finally, if you don't have to do the application-level keepalive you can take advantage of one of the nice aspects of multi-threaded programming: Develop each thread as if it's the only one out there and you then don't need to worry about anything asynchronous at all. For example, you can write your client to send request and then blocking-wait for resonse, do computation and either send result or check if more data from the server has arrived. The data from the server will have accumulated in the TCP receive window on your side. This also serves as a means of flow control: If your client becomes too slow and the receive window is full then the server cannot send anymore. This might block the server, so you need to see whether the server can handle this situation.
1
494
0
2
2012-02-24T16:35:00.000
python,asynchronous,client,asyncsocket
Long-lived multithreaded client for a proprietary protocol (Python, select, epoll)
1
1
1
9,527,618
0
0
0
Is there any easy way to initiate ssh connection with Python 3 without using popen? I would like to achieve password and password less authentication.
false
9,465,807
0
1
0
0
No. Paramiko does not work with Python 3.x yet
0
1,977
0
0
2012-02-27T13:28:00.000
ssh,python-3.x,connection
How to initiate ssh connection with Python 3
1
1
3
17,140,320
0
0
0
We are building an in house software package that will exclusively be used on Verizon aircards. We want a simple way to send data from the laptop to our servers. Originally we were going to use python and FTP but found out that Verizon's ToS sometimes block ftp access. Our next idea was to use port 80 to send files. How could I achieve this with python
true
9,469,389
1.2
0
0
3
Python has http client libraries -- you can easily use those to post data to a web server. Read up on the documentation for the python core libraries.
0
1,471
0
0
2012-02-27T17:30:00.000
python,http,sockets,networking,ftp
Sending data to remote servers with python
1
1
2
9,469,432
0
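A minimal standard-library sketch of the "post data over port 80" idea from the accepted answer; the URL and form fields are placeholders, and the naming is the Python 3 spelling (urllib2/urllib on Python 2).

    # POST key/value data to a web server using only the standard library.
    from urllib.parse import urlencode
    from urllib.request import urlopen

    payload = urlencode({"device_id": "aircard-001", "reading": "42"}).encode("ascii")
    response = urlopen("http://example.com/upload", data=payload, timeout=30)
    print(response.status, response.read()[:100])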
0
0
A Python web server started with python -m SimpleHTTPServer will print on the console requests it has accepted. Can I get it to print requests that returned a connection refused to the client? I am trying to debug why it refuses some requests from an Android client.
true
9,469,645
1.2
0
0
5
No. If the client gets a Connection refused, this means that the connection request did not reach the server application. Therefore, the server application cannot possibly register these errors. Check firewalls, routing, connectivity, and correctness of server address and port.
0
1,571
0
3
2012-02-27T17:47:00.000
python,simplehttpserver
Python SimpleHTTPServer able to register connection attempts?
1
1
1
9,469,677
0
1
0
I have used both of these (Python and HTML5) seperately, however I'm keen to use the full power of Python over the web using HTML5 to draw things and handle the client side of things. I guess I'm looking for avenues to go down in terms of implementation. Here are some things I'd like to do if possible: Have very interactive data, which will need to processed server-side by Python but displayed and locally manipulated by HTML5 Canvas. Clickable components on the HTML5 Canvas which will communicate with the server side. Is there an implementation that people can recommend? I.e. would Google App Engine be any good. Django? Pyjamas? Thanks - apologies if this seems a little vague. I'm asking before trying one path to see if there is a heads-up to save time and effort.
false
9,485,761
0.132549
0
0
2
A viable approach for a rich client widget like this is to use a stack like: [ your javascript user interface ] [ a js lib for your graphics ] backbone.js for managing your objects client side django-tastypie for wrapping your django objects in a RESTful API django for defining your backend
0
29,496
0
13
2012-02-28T16:27:00.000
python,django,html,google-app-engine
Mixing HTML5 Canvas and Python
1
2
3
9,485,888
0
1
0
I have used both of these (Python and HTML5) seperately, however I'm keen to use the full power of Python over the web using HTML5 to draw things and handle the client side of things. I guess I'm looking for avenues to go down in terms of implementation. Here are some things I'd like to do if possible: Have very interactive data, which will need to processed server-side by Python but displayed and locally manipulated by HTML5 Canvas. Clickable components on the HTML5 Canvas which will communicate with the server side. Is there an implementation that people can recommend? I.e. would Google App Engine be any good. Django? Pyjamas? Thanks - apologies if this seems a little vague. I'm asking before trying one path to see if there is a heads-up to save time and effort.
false
9,485,761
0.26052
0
0
4
I do exactly what you have mentioned using Django on the server side and HTML5 canvas/javascript on the client side. I'm pretty happy with the results but would like to point out that what you do with a Canvas on the client side doesn't have anything to do with what you use on the server side for Python.
0
29,496
0
13
2012-02-28T16:27:00.000
python,django,html,google-app-engine
Mixing HTML5 Canvas and Python
1
2
3
9,486,058
0
1
0
I am trying to make a redirection from a Python app to another site. I am currently doing it in the controller, which works just fine but breaks the browser's back button. I know that a redirection with a meta refresh or JS will allow me to add a delay so the user will have time to go back, but I read everywhere that these techniques are deprecated and better avoided. Any thoughts or ideas? Thanks
true
9,499,173
1.2
0
0
2
The correct way is sending HTTP status code 302 instead of 200 and adding Location: <url> to response headers. How to do this depends on the WEB framework you are running your Python app on.
0
219
0
2
2012-02-29T12:27:00.000
python,http,redirect,pylons
How to properly redirect to another site without breaking the browser back button?
1
1
1
9,499,524
0
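What "send 302 plus a Location header" looks like depends on the framework; Pylons has its own redirect helper. A framework-neutral WSGI sketch (the target URL is a placeholder):

    # Bare WSGI application that answers every request with a 302 redirect.
    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        start_response("302 Found", [("Location", "http://example.com/elsewhere")])
        return [b""]

    if __name__ == "__main__":
        make_server("", 8000, app).serve_forever()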
0
0
I want to display all the Internet History Information of a system using Python. The index.dat file holds all the history information of user, but it's encoded. How can I decode it? [I have heard about WinInet Method INTERNET_CACHE_ENTRY_INFO. It provides information about websites visited, hit counts, etc.] Are there any libraries available in Python for achieving this? If not, are there any alternatives available?
false
9,506,894
0.099668
1
0
1
If you wanted to do this for Firefox history, it's an SQLITE database in the file places.sqlite in the user's firefox profile. It can be opened with python's sqlite3 library. Now if you only care about Explorer (as implied by your mention of index.dat), well I don't know about that.
0
5,552
0
2
2012-02-29T21:27:00.000
python,internet-explorer,browser-cache,browser-history
How do I Retrieve and Display the Internet History Information in Python?
1
1
2
9,508,666
0
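A sketch of reading Firefox history along the lines of the answer above. The moz_places table and its url/title/visit_count columns are quoted from memory and the schema varies between Firefox versions, so treat them as assumptions; copying the file first avoids lock conflicts while Firefox is running.

    # Dump Firefox browsing history from places.sqlite (sketch).
    import sqlite3

    PROFILE_DB = "/path/to/profile/places.sqlite"   # placeholder path

    conn = sqlite3.connect(PROFILE_DB)
    query = "SELECT url, title, visit_count FROM moz_places ORDER BY visit_count DESC"
    for url, title, visits in conn.execute(query):
        print(visits, url, title)
    conn.close()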
1
0
I need to validate an uploaded SWF to ensure it meets certain Flash and ActionScript version limitations. Anyone know a good Python library to parse metadata out of a SWF?
false
9,510,565
0
0
0
0
Hexagonit.swfheader checks Flash version, which is part of Michael's question, but doesn't cover ActionScript version, does it?
0
587
0
0
2012-03-01T04:20:00.000
python,django,actionscript,flash
Check a SWF's Flash Version and ActionScript version from Python?
1
1
2
9,521,701
0
1
0
I am working on my senior project at university and I have a question. My advisor and other workers don't know much more on the matter so I thought I would toss it out to SO and see if you could help. We want to make a website that will be hosted on a server that we are configuring. That website will have buttons on it, and when visitors of that website click a certain button we want to register an event on the server. We plan on doing this with PHP. Once that event is registered (this is where we get lost), we want to communicate with a serial device on a remote computer. We are confident we can set up the PHP event/listener for the button press, but once we have that registered, how do we signal to the remote computer(connected via T1 line/routers) to communicate with the serial device? What is this sequence of events referred to as? The hardest thing for us (when researching it) is that we are not certain what to search for! We have a feeling that a python script could be running on the server, get signals from the PHP listener, and then communicate with the remote PC. The remote PC could also be running a python script that then will communicate with our serial device. Again, most of this makes sense, but we are not clear on how we communicate between Python and PHP on the web server (or if this is possible). If any one could give me some advice on what to search for, or similar projects I would really appreciate it. Thanks,
false
9,523,147
0
1
0
0
You can set up a web server also on the remote computer, perhaps using the same software as on the public server, so you do not need to learn another technology. The public server can make HTTP requests and the remote server responds by communicating with the serial device.
0
232
0
3
2012-03-01T20:02:00.000
php,python,web
Website to computer communications
1
1
2
9,523,459
0
1
0
I've been reading about beautifulSoup, http headers, authentication, cookies and something about mechanize. I'm trying to scrape my favorite art websites with python. Like deviant art which I found a scraper for. Right now I'm trying to login but the basic authentication code examples I try don't work. So question, How do I find out what type of authentication a site uses so that I know I'm trying to login the correct way? Including things like valid user-agents when they try to block bots. Bear with my ignorance as I'm new to HTTP, python, and scraping.
false
9,528,395
0
1
0
0
It's very unlikely that any of the sites you are interested in use basic auth. You will need a library like mechanize that manages cookies and you will need to submit the login information to the site's login page.
0
829
0
0
2012-03-02T05:23:00.000
python,http,authentication,screen-scraping,web-scraping
how to find the authentication used on a website
1
1
1
9,542,705
0
1
0
In web.py, I use web.seeother() to redirect to another page, is there a way to transfer some message to that page too?
false
9,529,113
0
0
0
0
You can use a GET variable: web.seeother('/somepage?message=hello'). Bye
0
569
0
0
2012-03-02T06:39:00.000
python,web.py
How to transfer message between different request in web.py?
1
1
2
9,531,601
0
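The receiving handler then reads the value back out of the query string with web.input(); a small sketch of both sides (class and URL names are placeholders):

    # web.py sketch: pass a message to the next page through the query string.
    import web

    class Source:
        def GET(self):
            raise web.seeother("/somepage?message=hello")

    class SomePage:
        def GET(self):
            i = web.input(message=None)        # reads ?message=..., None if absent
            return "got message: %s" % i.message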
0
0
I would like to increment an ip address by a fixed value. Precisely this is what I am trying to achieve, I have an ip address say, 192.168.0.3 and I want to increment it by 1 which would result in 192.168.0.4 or even by a fixed value, x so that it will increment my ip address by that number. so, I can have a host like 192.168.0.3+x. I just want to know if any modules already exist for this conversion. I tried socket.inet_aton and then socket.inet_ntoa, but I don't know how to get that working properly. Need some help or advice on that.
false
9,539,006
0.07486
0
0
3
Convert the last part of your IP address into a number, add 1 to it, and call ifconfig. "I think the approach of incrementing the last bit will not scale well as we span across networks." –OP. I thought of mentioning that in my original answer, but didn't, for various reasons. These reasons are as follows: (1) I thought it is unlikely you would need to do this, and could not guess why you'd want to. (2) Even if you did need to do this, you could just parse the second-to-last number. (3) This is only valid for those bits where the netmask is 0. (4) You also have to worry about "special" reserved IP ranges, such as 192.168.etc.etc. (5) Also hex doublets with 0 and possibly ff/255 have special meaning. (6) There are different rules in IPv6.
0
22,989
0
15
2012-03-02T19:12:00.000
python,networking
python increment ipaddress
1
1
8
9,539,066
0
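A sketch built on the inet_aton/inet_ntoa pair the question already mentions: the address becomes a 32-bit integer, the offset is added, and it is converted back. Per the caveats in the answer, nothing here guards against reserved ranges or netmask boundaries. (On Python 3.3+ the ipaddress module allows IPv4Address('192.168.0.3') + 1 directly.)

    # Increment an IPv4 address by a fixed offset using socket + struct.
    import socket
    import struct

    def ip_add(ip, offset):
        value = struct.unpack("!I", socket.inet_aton(ip))[0] + offset
        return socket.inet_ntoa(struct.pack("!I", value & 0xFFFFFFFF))

    print(ip_add("192.168.0.3", 1))    # -> 192.168.0.4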
0
0
I am making a website where I am using some html forms, which passes the values to a python script and in return the python script opens a new page/tab in the web browser. I am using the webbrowser module for it. Although I can choose the default browser or any other browser using "webbrowser.get([name])"; but my concern is, as this will be a public webpage, so anyone can open the page in any browser of their choice.The problem I am facing is : Lets say my default browser is "firefox", and I open the page in "chrome", so when the python script opens the new page it opens that in "firefox" instead of "chrome". Here are my questions : How do I detect the current web browser the user is using? How to open the new page in that browser? The code looks like this : #!C:\Python27\python.exe -u # -- coding: UTF-8 -- import MySQLdb import sys import cgi import re import cgitb import webbrowser cgitb.enable() print "Content-Type: text/plain;charset=utf-8" print try: db = MySQLdb.connect(host = "localhost", user = "root", passwd = "", db = "pymysql") except MySQLdb.Error, e: print "Error %d: %s" % (e.args[0], e.args[1]) sys.exit() ----- Do some analysis with the database ---- ----- Create some kml files ---- #Use the kml files to display points in the map. #Open the page where openlayers is present webbrowser.open_new_tab('http://localhost/simulator.html')
true
9,543,993
1.2
0
0
2
The only reason you are probably convinced that this is a working approach is most likely because you are running the server on your local machine. The Python code you are executing is server-side, so it has no control over the client. The client would normally be on a remote machine. In your case, since your client is also on the server, you get the effect of seeing your Python script open a browser tab with the webbrowser module. This is impossible in a standard client-server web situation. The client will be remote and your server-side code cannot control their machine. You may only serve back HTTP responses, which will simply be something their browser receives and renders. If you want to open tabs it will need to be a JavaScript solution on the client side. A more realistic solution would be to have your server serve back proper client-side code. If the form is submitted via ajax then your response could contain JavaScript that would open a new page.
0
2,834
0
1
2012-03-03T06:16:00.000
python,browser,cgi,urllib
How to detect the current open webbrowser and open new page in that same browser using Python?
1
1
2
9,544,394
0
1
0
So lets say I'm scraping multiple pages (lets say a 1000) on a website. I want to know which language is best to use to scrape those pages with - javascript or python. Further, I've heard about javascript scrapers being faster (due to multiple get requests), but am unsure how to implement this - can anyone enlighten me? Thanks!
false
9,550,690
0
0
0
0
If I'm reading your question right, you're not trying to build a web app (client- or server-side), but rather a standalone app that simply requests and downloads pages from the Web. You can write a standalone app in JavaScript, but it's not common. The primary use of JavaScript is for code that's going to run in a user's Web browser. For standalone apps, Python is the better choice. And it has very good support (in the form of the urllib2 and related libraries) for tasks like Web scraping. Of course, if your scraping task is relatively simple, you might be better off just using wget.
0
1,586
0
0
2012-03-03T23:13:00.000
jquery,python,screen-scraping
Scraping with JQuery or Python?
1
2
2
9,551,356
0
1
0
So lets say I'm scraping multiple pages (lets say a 1000) on a website. I want to know which language is best to use to scrape those pages with - javascript or python. Further, I've heard about javascript scrapers being faster (due to multiple get requests), but am unsure how to implement this - can anyone enlighten me? Thanks!
true
9,550,690
1.2
0
0
3
This is just my opinion, but I would rank them like this: JavaScript might be the best choice, but only if you have a Node environment already set up. The advantage of JavaScript scrapers is that they can interpret the JS in the pages you're scraping. Next is a three-way tie between Perl, Python and Ruby. They all have a mechanize library and do XPath and regex in a sensible way. Down at the bottom is PHP. Its lack of a cookie-handling library like mechanize (curl isn't great) and its clumsy DOM and regex functions make it a poor choice for scraping.
0
1,586
0
0
2012-03-03T23:13:00.000
jquery,python,screen-scraping
Scraping with JQuery or Python?
1
2
2
9,551,757
0
1
0
Google and Bing are both free; for Google I use urllib and json to get the results. For Bing I use pyBing. Yahoo requires me to pay per 1000 queries, which I don't want to do for a homework assignment. Are there any other SEs that have a Python API? Or anything similar to Google's ajax googleapis?
true
9,551,108
1.2
0
0
1
pySearch is no longer supported; the only way to search Yahoo is to use their BOSS API. The BOSS API requires payment for every 1000 queries.
0
332
0
0
2012-03-04T00:16:00.000
python,search-engine
besides google and bing, what other search engine has an python api?
1
1
2
10,133,419
0
0
0
I am interested in implementing and running some heavy graph-theory algorithms for the purpose of (hopefully) finding counterexamples for some conjecture. What is the most efficient libraries, server setups you would recommend? I am thinking of using Python's Graph API. For running the algorithms I was thinking of using Hadoop, but researching Hadoop I get the feeling it is more appropriate for analysing databases than enumerating problems. If my thinking about Hadoop is correct, what is the best server setup you would recommend for running such a process? Any leads on how to run an algorithm in a remote distributed environment that won't require a lot of code rewritting or cost a lot of money would be helpful. many thanks!
false
9,557,074
0.291313
0
0
3
You can look at CUDA as another option, if it is a highly computational task.
0
252
0
2
2012-03-04T17:18:00.000
python,algorithm,hadoop,graph-theory,distributed-computing
I am interested in disproving some graph theory conjectures in python, what is the most efficient library/server set up to use?
1
1
2
9,557,746
0
0
0
I have a CherryPy web site running on a virtual ubuntu linux server. I'm attempting to move the application to a second, larger-memory server. Both servers appears to have CherryPy 3.2 installed (I just used apt-get to install it on the newer server). The newer server, however, does not appear to have the CherryPy auth_digest module installed which is what I'm using for authentication. It is present in the CherryPy egg on the older server. How can I update my copy of CherryPy to incorporate that module?
true
9,559,475
1.2
0
0
0
I wound up downloading the tar file (which I think may be a minor version or two more recent than what apt-get knows about) and using setup.py to install it. This version includes the digest authorization module.
0
219
1
0
2012-03-04T22:34:00.000
python,ubuntu,cherrypy
Where to get the CherryPy auth_digest module?
1
1
1
9,561,238
0
0
0
I'm using boto to spawn a new EC2 instance based on an AMI. The ami.run method has a number of parameters, but none for "name" - maybe it's called something different?
false
9,575,148
0.066568
1
0
1
In EC2 there's no API to change the actual name of the machine. You basically have two options. (1) You can pass the desired name of the computer in the user-data and, when the server starts, run a script that will change the name of the computer. (2) You can use an EC2 tag to name the server: ec2-create-tags <instance-id> --tag:Name=<computer name>. The downside to this solution is that the server won't actually update to this name. This tag is strictly for you, or for when you're querying the list of servers in AWS. Generally speaking, if you're at the point where you want your server to configure itself when starting up, I've found that renaming your computer in EC2 just causes more trouble than it's worth. I suggest not using renames if you don't have to. Using the tags or ELB instances is the better way to go.
0
9,225
0
23
2012-03-05T22:39:00.000
python,amazon-ec2,amazon-web-services,boto
With boto, how can I name a newly spawned EC2 instance?
1
1
3
9,575,281
0
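With classic boto, the "Name" shown in the EC2 console is just a tag, so it can be attached right after run_instances() returns. A sketch; region, AMI id, key name and instance type are placeholders.

    # Launch an instance and give it a console Name tag using boto 2.x (sketch).
    import boto.ec2

    conn = boto.ec2.connect_to_region("us-east-1")
    reservation = conn.run_instances("ami-12345678", key_name="my-key",
                                     instance_type="t1.micro")
    instance = reservation.instances[0]
    conn.create_tags([instance.id], {"Name": "web-server-01"})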
1
0
I have an app which amounts to a Python script, running on the user's phone, and a JS client, running in the user's browser. The Python script sends messages to App Engine as HTTP requests. The server then pushes the messages to the JS client. The problem is authentication: The server can easily use Google Accounts to authenticate anything coming from the JS client as being sent by a particular user, but I do not know how to get the Python script to make HTTP requests which will also authenticate. Any ideas?
false
9,593,659
0.197375
0
0
2
Can you use OAuth to authenticate with Google, then use the OAuth token to ensure the messages are legitimate?
0
425
0
0
2012-03-06T23:39:00.000
python,google-app-engine,cookies,oauth
Need to send a HTTP request in Python that'll authenticate with Google Accounts
1
1
2
9,596,143
0
0
0
I need to take an audio signal, and extract overlapping audio frames from it. I then need to convert these to frequency data (FFT stuff / like a spectrogram) and analyze the frequency information. For example, if I have a 1 minute mp3 file, I want split the file into smaller files, from 00:00.000 to 00:03.000, from 00:00.010 to 00:03.010. Then I need to see the frequency breakdown of each sub-file. Which programming languages have good audio tools that could help me do this? Are there linux command-line tools I could use? Bonus points for Node.js (yeah right) or Haskell, which I'm most familiar with.
false
9,606,942
0.197375
0
0
3
MATLAB. GNU Octave is the free sorta-clone.
1
439
0
8
2012-03-07T18:17:00.000
python,ruby,linux,node.js,haskell
Good sound libraries?
1
1
3
9,607,629
0
0
0
I'm creating a web chat service for my company product which is a local social network serving a lot of concurrent connections from online users. Long-polling is used together with Tornado over Python. However, when there's a lot of inactive long-polling connections (receiving no incoming messages), I should close some of them. How long should I let a long-polling connection stay? 1 hour or at most 1 day?
true
9,617,049
1.2
0
0
3
You have to adjust the long-poll timeout to the practical network environment; for example, some ADSL modems silently drop connections after, say, 10 minutes of inactivity. Some laptops might move to another wireless network or go to sleep, etc. IMO you shouldn't use more than 5 minutes for a practical public use case on the internet. You may consider longer timeouts if you have control over the network or peers, or if you have independent verification that the client is still waiting, e.g. through a separate request that came from the same session. When you analyze your workload, you might discover that after a certain threshold, doubling the timeout brings very small performance improvement. Usually that threshold will be pretty small, so use that.
0
1,115
0
3
2012-03-08T11:37:00.000
python,http,chat,tornado,long-polling
How long should I let a long-polling connection stay?
1
1
1
9,618,212
0