Web Development | Data Science and Machine Learning | Question | is_accepted | Q_Id | Score | Other | Database and SQL | Users Score | Answer | Python Basics and Environment | ViewCount | System Administration and DevOps | Q_Score | CreationDate | Tags | Title | Networking and APIs | Available Count | AnswerCount | A_Id | GUI and Desktop Applications |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 0 | I'm downloading files over HTTPS, I request the files through urllib2.Request and they come back as a socket._fileobject. I'd ideally like to stream this to file to avoid loading it into memory but I'm not sure how to do this.
My problem is that if I call .read() on the object it only returns the data up to the first NUL character and doesn't read the whole file. How can I solve this?
The NUL character comes down as \x00 if that's any help; I'm not sure what encoding that is | true | 7,581,963 | 1.2 | 0 | 0 | 1 | I found out the problem was that I was running the code inside PyScripter, and its in-built Python interpreter terminates the output at NUL bytes. So there was no problem with my code; if I run it outside PyScripter everything works fine. Now running Wing IDE and never looking back :) | 0 | 406 | 0 | 2 | 2011-09-28T10:42:00.000 | python,nul | read() stops after NUL character | 1 | 1 | 1 | 7,791,175 | 0 |
1 | 0 | Let's say an operation could take 2 to 10 minutes on the server to perform, for example updating an index, which is time consuming.
Would you perform the operation while holding the request, i.e. not sending an HTTP response until the operation has finished? And if the client/browser drops the request, just continue with the operation anyway?
What is the alternative? To kick off the operation and respond with "long-running operation kicked off"? What if the operation fails midway; how would the client know? Maintain a server-side "status" of the operation?
Thanks | false | 7,587,939 | 0.132549 | 0 | 0 | 2 | For requests that you know will take a long time (more than a few seconds) to process, you must assume at minimum that the connection may be forcibly severed by a firewall.
I would say you should implement a queue system on the backend. The request to perform the operation becomes a request to queue the operation. When the operation is actually completed, you can either wait for the client to poll, or proactively notify them somehow. For a browser, you'll pretty much have to either poll or send an e-mail. | 0 | 489 | 0 | 2 | 2011-09-28T18:42:00.000 | python,http,request | Should you hold a HTTP request while the server performs a time consuming operation? Or let the request go? | 1 | 2 | 3 | 7,588,103 | 0 |
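A minimal sketch of the queue-and-poll idea from the answer above, assuming an in-process worker thread and a job-status dict; names like start_job and JOBS are illustrative, not part of any framework.

```python
import threading
import uuid

JOBS = {}  # job_id -> status string: "running", "done" or "failed"

def start_job(operation, *args):
    """Kick off the long operation and return immediately with a job id."""
    job_id = str(uuid.uuid4())
    JOBS[job_id] = "running"

    def worker():
        try:
            operation(*args)
            JOBS[job_id] = "done"
        except Exception:
            JOBS[job_id] = "failed"

    threading.Thread(target=worker).start()
    return job_id          # hand this back in the HTTP response

def job_status(job_id):
    """The client polls this, e.g. via a /status?id=... endpoint."""
    return JOBS.get(job_id, "unknown")
```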
1 | 0 | Let's say an operation could take 2 to 10 minutes on the server to perform, for example updating an index, which is time consuming.
Would you perform the operation while holding the request, i.e. not sending an HTTP response until the operation has finished? And if the client/browser drops the request, just continue with the operation anyway?
What is the alternative? To kick off the operation and respond with "long-running operation kicked off"? What if the operation fails midway; how would the client know? Maintain a server-side "status" of the operation?
Thanks | true | 7,587,939 | 1.2 | 0 | 0 | 3 | You might also use a chunked response. First, push a chunk with some code that would display the “please wait” screen, flush the response and start the work. Then you can either push and flush chunks with periodical progress updates or just push one at the end with a “completed” information. Obviously you can employ JavaScript to get a nice UI.
(The above does not apply if you're using WSGI as the first WSGI specification is written in a way that blocks using responses of unknown length so using chunked responses there is impossible.) | 0 | 489 | 0 | 2 | 2011-09-28T18:42:00.000 | python,http,request | Should you hold a HTTP request while the server performs a time consuming operation? Or let the request go? | 1 | 2 | 3 | 7,588,099 | 0 |
1 | 0 | I have got a working web application in Python that downloads a file onto the web server upon a user's request. This works fine for small file downloads, but when the user requests a larger file, the connection times out. So, I think I need to process the download in the background, but I'm not sure what tool is most suitable for this. Celery seems to be right, but I don't really want it to be queued (the download must start immediately). What would you suggest? | true | 7,595,809 | 1.2 | 0 | 0 | 2 | Timeout duration is up to you, you could just make it longer.
Anyway, there are plenty of Flash or AJAX uploaders out there; there is nothing you can do only on the server side, AFAIK | 0 | 751 | 0 | 3 | 2011-09-29T10:14:00.000 | python,web-applications,background | Downloading files in background with Python | 1 | 1 | 1 | 7,595,902 | 0 |
0 | 0 | I asked a similar question yesterday but I included some code that basically took my question on a different tangent than I had intended. So I shall try again.
I am rewriting a python script that crawls a website to find a few hundred text files, I have no interest in any content of the text file beyond the second line of the file. Previously I would download all of the files then loop through them all to extract the second line. I would now like to open each file as my script discovers it, grab the second line, and close it without downloading to my harddrive then opening it.
So basically is there a way I can open a file that is at www.example.com/123456.txt and take the second line from that file copy it to an array or something without downloading it and then opening it. | false | 7,621,249 | 0.066568 | 0 | 0 | 1 | You could try something like urllib2.urlopen('url').read().splitlines()[1] but I guess that would download the entire file to memory | 0 | 124 | 0 | 0 | 2011-10-01T15:47:00.000 | python | opening files from a website | 1 | 1 | 3 | 7,621,359 | 0 |
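A small sketch of that idea, but stopping after two readline() calls instead of calling read(), so only the beginning of the response body is consumed; urllib2 is the Python 2 module mentioned in the answer.

```python
import urllib2

def second_line(url):
    # urlopen returns a file-like object, so it can be read line by line
    resp = urllib2.urlopen(url)
    try:
        resp.readline()                      # discard the first line
        return resp.readline().rstrip("\r\n")
    finally:
        resp.close()                         # close early instead of reading the whole body

# e.g. second_line("http://www.example.com/123456.txt")
```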
0 | 0 | I have an HTTP server which host some large file and have python clients (GUI apps) which download it.
I want the clients to download the file only when needed, but have an up-to-date file on each run.
I thought each client will download the file on each run using the If-Modified-Since HTTP header with the file time of the existing file, if any. Can someone suggest how to do it in python?
Can someone suggest an alternative, easy, way to achieve my goal? | false | 7,623,600 | 0.066568 | 0 | 0 | 1 | You can add a header called ETag, (hash of your file, md5sum or sha256 etc ), to compare if two files are different instead of last-modified date | 0 | 1,447 | 0 | 3 | 2011-10-01T23:25:00.000 | python,http,httpclient,urllib2,if-modified-since | Sync local file with HTTP server location (in Python) | 1 | 2 | 3 | 7,623,988 | 0 |
0 | 0 | I have an HTTP server which host some large file and have python clients (GUI apps) which download it.
I want the clients to download the file only when needed, but have an up-to-date file on each run.
I thought each client will download the file on each run using the If-Modified-Since HTTP header with the file time of the existing file, if any. Can someone suggest how to do it in python?
Can someone suggest an alternative, easy, way to achieve my goal? | false | 7,623,600 | 0 | 0 | 0 | 0 | I'm assuming some things right now, BUT..
One solution would be to have a separate HTTP file on the server (check.php) which creates a hash/checksum of each files you're hosting. If the files differ from the local files, then the client will download the file. This means that if the content of the file on the server changes, the client will notice the change since the checksum will differ.
Do an MD5 hash of the file contents, put it in a database or something, and check against it before downloading anything.
Your solution would work too, but it requires the server to actually include the "modified" date in the header for the GET request (some server software does not do this).
I'd say putting up a database that looks something like:
[ID] [File_name] [File_hash]
0001 moo.txt asd124kJKJhj124kjh12j | 0 | 1,447 | 0 | 3 | 2011-10-01T23:25:00.000 | python,http,httpclient,urllib2,if-modified-since | Sync local file with HTTP server location (in Python) | 1 | 2 | 3 | 7,623,922 | 0 |
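A minimal sketch of the If-Modified-Since approach the question proposes, using urllib2 (Python 2): send the local file's mtime and treat a 304 response as "nothing changed". The URL and local path are placeholders.

```python
import os
import time
import urllib2

def download_if_newer(url, local_path):
    request = urllib2.Request(url)
    if os.path.exists(local_path):
        mtime = os.path.getmtime(local_path)
        # HTTP date format, e.g. "Sat, 29 Oct 1994 19:43:31 GMT"
        request.add_header("If-Modified-Since",
                           time.strftime("%a, %d %b %Y %H:%M:%S GMT", time.gmtime(mtime)))
    try:
        resp = urllib2.urlopen(request)
    except urllib2.HTTPError as e:
        if e.code == 304:        # not modified, keep the local copy
            return False
        raise
    with open(local_path, "wb") as f:
        f.write(resp.read())
    return True
```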
0 | 0 | So I'm working on a Python IRC framework, and I'm using Python's socket module. Do I feel like using Twisted? No, not really.
Anyway, I have an infinite loop reading and processing data from socket.recv(xxxx), where xxxx is really irrelevant in this situation. I split the received data into messages using str.split("\r\n") and process them one by one.
My problem is that I have to set a specific 'read size' in socket.recv() to define how much data to read from the socket. When I receive a burst of data (for example, when I connect to the IRC server and receive the MOTD.etc), there's always a message that spans two 'reads' of the socket (i.e. part of the line is read in one socket.recv() and the rest is read in the next iteration of the infinite loop).
I can't process half-received messages, and I'm not sure if there's even a way of detecting them. In an ideal situation I'd receive everything that's in the buffer, but it doesn't look like socket provides a method for doing that.
Any help? | false | 7,642,309 | 0 | 0 | 0 | 0 | Stream-mode sockets (e.g, TCP) never guarantee that you'll receive messages in any sort of neatly framed format. If you receive partial lines of input -- which will inevitably happen sometimes -- you need to hold onto the partial line until the rest of the line shows up.
Using Twisted will save you a lot of time. Better yet, you may want to look into using an existing IRC framework -- there are a number of them already available. | 1 | 195 | 0 | 1 | 2011-10-04T01:02:00.000 | python,sockets,irc | Issues with Python socket module | 1 | 1 | 2 | 7,642,402 | 0 |
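A minimal sketch of the "hold onto the partial line" idea the answer describes: keep everything after the last \r\n in a buffer and only process complete lines; process_line stands in for your existing per-message handler.

```python
pending = ""   # holds the trailing partial line between recv() calls

def handle_data(chunk):
    """Feed each sock.recv(...) result in here; only complete IRC lines are processed."""
    global pending
    pending += chunk
    lines = pending.split("\r\n")
    pending = lines.pop()         # last element is "" or an incomplete line
    for line in lines:
        process_line(line)

def process_line(line):
    print line                    # placeholder for the real message handler
```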
0 | 0 | I'd like to do perform data mining on a large scale. For this, I need a fast crawler. All I need is something to download a web page, extract links and follow them recursively, but without visiting the same url twice. Basically, I want to avoid looping.
I already wrote a crawler in Python, but it's too slow. I'm not able to saturate a 100 Mbit line with it. Top speed is ~40 URLs/sec, and for some reason it's hard to get better results. It seems like a problem with Python's multithreading/sockets. I also ran into problems with Python's garbage collector, but that was solvable. CPU isn't the bottleneck, btw.
So, what should I use to write a crawler that is as fast as possible, and what's the best solution to avoid looping while crawling?
EDIT:
The solution was to combine the multiprocessing and threading modules. Spawn multiple processes with multiple threads per process for best effect. Spawning multiple threads in a single process is not effective, and multiple processes with just one thread consume too much memory. | false | 7,653,276 | 0.119427 | 0 | 0 | 3 | Around 2 years ago I developed a crawler, and it could download almost 250 URLs per second. You could follow my steps.
Optimize your file pointer use. Try to use as few file pointers as possible.
Don't write your data out every time. Try to dump your data after storing around 5,000 or 10,000 URLs.
For robustness you don't need to use a different configuration. Use a log file, and when you want to resume, just read the log file and resume your crawler.
Distribute all your web crawler tasks and process them at intervals:
a. downloader
b. link extractor
c. URLSeen
d. ContentSeen | 1 | 7,297 | 0 | 8 | 2011-10-04T19:51:00.000 | python,multithreading,web-crawler,web-mining | Fast internet crawler | 1 | 1 | 5 | 11,539,776 | 0 |
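As a tiny illustration of the URLSeen component listed above, a thread-safe seen-set is usually enough to keep a crawler from looping; this is only a sketch, not the answerer's actual code.

```python
import threading

class URLSeen(object):
    """Remembers which URLs have been scheduled so the crawler never loops."""
    def __init__(self):
        self._seen = set()
        self._lock = threading.Lock()

    def add_if_new(self, url):
        # Normalise very lightly; real crawlers do much more (lowercasing the host, etc.)
        url = url.rstrip("/")
        with self._lock:
            if url in self._seen:
                return False
            self._seen.add(url)
            return True

# worker threads call: if url_seen.add_if_new(link): queue.put(link)
```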
0 | 0 | In boto's S3 module the S3 connection constructor takes the access key and the secret key. Is there a connection object that also takes a session token? | true | 7,673,840 | 1.2 | 0 | 0 | 2 | This had not been implemented in boto, but it has been added now: it will be in version 2.1 and is available today if you check out the source from GitHub.
You can use a session token by passing the token with the keyword argument security_token to boto.connect_s3. I think the session token will be implemented elsewhere soon as well. | 0 | 2,197 | 0 | 1 | 2011-10-06T11:49:00.000 | python,amazon-s3 | Is there a way to create a S3 connection with a sessions token? | 1 | 1 | 2 | 7,791,237 | 0 |
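A short sketch of the keyword argument the answer describes (boto 2.1+); the three credential variables are placeholders you would obtain from the AWS Security Token Service.

```python
import boto

# access_key / secret_key / session_token come from a call to the Security Token Service
conn = boto.connect_s3(aws_access_key_id=access_key,
                       aws_secret_access_key=secret_key,
                       security_token=session_token)

bucket = conn.get_bucket("my-bucket")   # subsequent requests are signed with the token
```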
0 | 0 | I need to trigger my AntiVirus (McAfee) when accessing a test-virus URL (http://eicar.org/download/eicar.com) via python. If I use IE, Firefox or even wget for windows, the AntiVirus detects that a virus URL was accessed, which is the expected behavior. However, when using urllib or urllib2, the virus URL is successfully accessed and the AntiVirus does not detect that a "bad" URL has been reached.
Has anyone tried something similar? | true | 7,679,557 | 1.2 | 0 | 0 | 2 | Write the output to disk- the virus scanner will see it then. | 0 | 145 | 0 | 1 | 2011-10-06T19:37:00.000 | python,windows | Python and Opswat | 1 | 1 | 1 | 7,679,786 | 0 |
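A minimal sketch of that suggestion: fetch the test file with urllib2 and write the body to disk so the on-access scanner has something to intercept (the local filename is just an example).

```python
import urllib2

data = urllib2.urlopen("http://eicar.org/download/eicar.com").read()

# Writing the body to disk gives the on-access virus scanner something to intercept.
with open("eicar_test.com", "wb") as f:
    f.write(data)
```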
1 | 0 | Imagine that you need to write some Javascript that simply changes a set of checkboxes when a drop down list is changed.
Depending on which item is selected in the list, some of the checkboxes will become checked/unchecked.
In the back, you have Python code along with some SQLAlchemy.
The Javascript needs to identify the selected item in the list as usual, send it back to the Python module which will then use the variable in some SQLAlchemy to return a list of checkboxes which need to be checked i.e. "User selected 'Ford', so checkboxes 'Focus', 'Mondeo', 'Fiesta' need to be checked"
The issue I'm having is that I can't seem to find a way to access the Python modules from the JavaScript without turning a div into a mini browser page and passing a URL containing variables into it!
Does anyone have any ideas on how this should work? | false | 7,689,695 | 1 | 0 | 0 | 6 | python has a json module, which is a perfect fit for this scenario.
Using good old AJAX with JSON as the data format will allow you to exchange data between JavaScript and your Python module.
(Unless your Python module is running on the client side, but then I don't see how you could execute it from the browser...) | 0 | 18,016 | 0 | 15 | 2011-10-07T15:49:00.000 | javascript,python,variables,sqlalchemy | Passing variables between Python and Javascript | 1 | 1 | 3 | 7,689,717 | 0 |
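A minimal Python-side sketch of that JSON approach; models_for_make is a hypothetical stand-in for the real SQLAlchemy query, and the JavaScript side would simply request this payload via AJAX and tick the boxes it names.

```python
import json

def models_for_make(make):
    # Placeholder for the real SQLAlchemy query, e.g.
    # session.query(Model.name).filter(Model.make == make).all()
    return ["Focus", "Mondeo", "Fiesta"] if make == "Ford" else []

def checkbox_payload(make):
    """Return the JSON string the AJAX response hands to the JavaScript side."""
    return json.dumps({"make": make, "checked": models_for_make(make)})

# checkbox_payload("Ford") returns a JSON object naming the checkboxes to tick
```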
1 | 0 | Hello, I am having problems with audio being sent over the network. On my local system, with no distance, there are no problems, but whenever I test on a remote system there is audio, but it's not the voice input I want: it's choppy/laggy etc. I believe it's in how I am handling the sending of the audio, but I have tried now for 4 days and cannot find a solution.
I will post all the relevant code and try to explain it the best I can.
these are the constant/global values
#initialize Speex
speex_enc = speex.Encoder()
speex_enc.initialize(speex.SPEEX_MODEID_WB)
speex_dec = speex.Decoder()
speex_dec.initialize(speex.SPEEX_MODEID_WB)
#some constant values
chunk = 320
FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = 44100
I found adjusting the sample rate value would allow for more noise
Below is the PyAudio code to initialize the audio device; this is also global.
#initialize PyAudio
p = pyaudio.PyAudio()
stream = p.open(format = FORMAT,
                channels = CHANNELS,
                rate = RATE,
                input = True,
                output = True,
                frames_per_buffer = chunk)
This next function is the keypress function, which takes the data from the mic and sends it using the client function. This is where I believe I am having problems.
I believe how I am handling this is the problem because if I press and hold to get audio it loops and sends on each iteration. I am not sure what to do here. (Ideas!!!)
def keypress(event):
    #chunklist = []
    #RECORD_SECONDS = 5
    if event.keysym == 'Escape':
        root.destroy()
    #x = event.char
    if event.keysym == 'Control_L':
        #for i in range(0, 44100 / chunk * RECORD_SECONDS):
        try:
            #get data from mic
            data = stream.read(chunk)
        except IOError as ex:
            if ex[1] != pyaudio.paInputOverflowed:
                raise
            data = '\x00' * chunk
        encdata = speex_enc.encode(data) #Encode the data.
        #chunklist.append(encdata)
        #send audio
        client(chr(CMD_AUDIO), encrypt_my_audio_message(encdata))
The server code to handle the audio
### Server function ###
def server():
    PORT = 9001
    ### Initialize socket
    server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server_socket.bind((socket.gethostbyname(socket.gethostname()), PORT))
    # socket.gethostbyname(socket.gethostname())
    server_socket.listen(5)
    read_list = [server_socket]
    ### Start receive loop
    while True:
        readable, writable, errored = select.select(read_list, [], [])
        for s in readable:
            if s is server_socket:
                conn, addr = s.accept()
                read_list.append(conn)
                print "Connection from ", addr
            else:
                msg = conn.recv(2048)
                if msg:
                    cmd, msg = ord(msg[0]), msg[1:]
                    ## get a text message from GUI
                    if cmd == CMD_MSG:
                        listb1.insert(END, decrypt_my_message(msg).strip() + "\n")
                        listb1.yview(END)
                    ## get an audio message
                    elif cmd == CMD_AUDIO:
                        # make sure length is 16 --- HACK ---
                        if len(msg) % 16 != 0:
                            msg += '\x00' * (16 - len(msg) % 16)
                        #decrypt audio
                        data = decrypt_my_message(msg)
                        decdata = speex_dec.decode(data)
                        #Write the data back out to the speaker
                        stream.write(decdata, chunk)
                else:
                    s.close()
                    read_list.remove(s)
and for completion the binding of the keyboard in Tkinter
root.bind_all('', keypress)
Any ideas on how I can make that keypress method work as needed are greatly appreciated, or suggest a better way; maybe I am doing something wrong altogether.
*cheers
Please note I have tested it without the encryption methods also and same thing :-) | false | 7,720,932 | 0 | 1 | 0 | 0 | Did you run ping or ttcp to test network performance between the 2 hosts?
If you have latency spikes or if some packets are dropped your approach to sending voice stream will suffer badly. TCP will wait for missing packet, report it being lost, wait for retransmit, etc.
You should be using UDP over lossy links and audio compression that handles missing packets gracefully. Also in this case you have to timestamp outgoing packets. | 0 | 2,438 | 0 | 2 | 2011-10-11T02:55:00.000 | python,tcp,speex,pyaudio | Python Audio over Network Problems | 1 | 1 | 1 | 13,102,430 | 0 |
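A small sketch of the answer's suggestion to move the audio onto UDP and timestamp each outgoing packet; the port, the sequence number and the header layout are illustrative, not part of the poster's code.

```python
import socket
import struct
import time

audio_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
seq = 0

def send_audio_packet(encdata, addr=("127.0.0.1", 9002)):
    """Prefix each encoded chunk with a sequence number and a millisecond timestamp."""
    global seq
    header = struct.pack("!IQ", seq, int(time.time() * 1000))
    audio_sock.sendto(header + encdata, addr)
    seq += 1

# The receiver unpacks the 12-byte header with struct.unpack("!IQ", packet[:12])
# and can drop or reorder late packets instead of stalling the way TCP does.
```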
1 | 0 | I want to crawl and save some webpages as HTML. Say, crawl into hundreds popular websites and simply save their frontpages and the "About" pages.
I've looked into many questions, but didn't find an answer to this from either web crawling or web scraping questions.
What library or tool should I use to build the solution? Or are there even some existing tools that can handle this? | false | 7,722,876 | 0.066568 | 0 | 0 | 2 | If you are going to build a crawler you need to (Java specific):
Learn how to use the java.net.URL and java.net.URLConnection classes or use the HttpClient library
Understand http request/response headers
Understand redirects (both HTTP, HTML and Javascript)
Understand content encodings (charsets)
Use a good library for parsing badly formed HTML (e.g. cyberNecko, Jericho, JSoup)
Make concurrent HTTP requests to different hosts, but ensure you issue no more than one to the same host every ~5 seconds
Persist pages you have fetched, so you don't need to refetch them every day if they
don't change that often (HBase can be useful).
A way of extracting links from the current page to crawl next
Obey robots.txt
A bunch of other stuff too.
It's not that difficult, but there are lots of fiddly edge cases (e.g. redirects, detecting encoding (checkout Tika)).
For more basic requirements you could use wget.
Heretrix is another option, but yet another framework to learn.
Identifying About us pages can be done using various heuristics:
inbound link text
page title
content on page
URL
if you wanted to be more quantitative about it you could use machine learning and a classifier (maybe Bayesian).
Saving the front page is obviously easier but front page redirects (sometimes to different domains, and often implemented in the HTML meta redirect tag or even JS) are very common so you need to handle this. | 0 | 4,419 | 0 | 2 | 2011-10-11T07:48:00.000 | java,python,web-crawler,web-scraping,web-mining | Web mining or scraping or crawling? What tool/library should I use? | 1 | 1 | 6 | 7,723,049 | 0 |
0 | 0 | I'm using my GAE application on my phone. I face a problem in getting the disconnect notification to /_ah/channel/disconnected in the Channel API even if I manually close the socket with the socket.close() function; the POST occurs after a delay of, say, one minute. Does anyone know a way to speed things up? In my case socket.close() doesn't produce the channel disconnect notification (only on the phone though; it works perfectly from a laptop)! | true | 7,736,105 | 1.2 | 0 | 0 | 4 | The amount of time it takes the Channel API front-end servers to "realize" that a channel is disconnected is contingent on browser implementation.
On well-behaved browsers, we catch the beforeunload event and post a message to our front-end that says, "this client is going away." On other browsers, we may not get the event (or we may not be able to listen to it for various implementation reasons, like the browser sends it too often (FFFUUUU IE)) or once we get the event the XHR we send may get swallowed. In those cases, the frontend server realizes the client is gone because it fails to receive a heartbeat -- this is what's happening on your phone. (out of curiousity, what phone?)
Your case is interesting because you're explicitly calling onclose. The only thing this does is dispose the iframe that has the Channel FE code -- in other words, onclose just behaves as if the whole browser window was closed; it doesn't take advantage of the fact that the browser is still in a good state and could wait to close until a message is sent.
So I recommend two things: add a custom handler to your code (that does the same thing as your /_ah/disconnect handler) so you can just make an XHR when you know you're manually closing the channel. This is kludgy but functional. The bummer about this is you'll need to explicitly know your client id in your javascript code.
Second, add an issue to our issue tracker (http://code.google.com/p/googleappengine/issues/list) to request better disconnect notification when onclose is called explicitly.
Hope that helps; sorry there's not an easy answer right now. | 0 | 1,617 | 1 | 4 | 2011-10-12T06:29:00.000 | javascript,python,google-app-engine,channel-api | Channel disconnect notification in channel api in google app engine | 1 | 1 | 1 | 7,741,709 | 0 |
0 | 0 | We have devices that run a proprietary FTP client on them. They retrieve media files (AVI videos and Images) as well as XML files from our web service utilizing a python based FTP server. The problem I'm having is that the FTP client wants to download the media files in ASCII mode instead of binary mode. I'd like to continue to use our python FTP server (pyftpdlib) but I can't figure out a way to force the client to use binary mode.
I've skimmed through the FTP RFC looking for a command/response sequence that would allow our FTP server to tell the FTP client to use binary instead of ASCII. Does such a command/response sequence exist? | true | 7,742,965 | 1.2 | 1 | 0 | 2 | You can override the default behaviour of your FTP server by using a custom FTPHandler and overriding the FTPHandler.ftp_TYPE(filetype) method, and this way force your server to serve files in binary mode (self._current_type = "i"). | 0 | 1,814 | 0 | 3 | 2011-10-12T15:58:00.000 | python,ftp | Can you force an FTP client to use binary from the server side | 1 | 1 | 1 | 7,743,098 | 0 |
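A sketch of the override the answer describes; note that the import path differs between pyftpdlib releases (older versions expose FTPHandler via pyftpdlib.ftpserver, newer ones via pyftpdlib.handlers).

```python
from pyftpdlib import ftpserver   # newer releases: from pyftpdlib.handlers import FTPHandler

class BinaryOnlyHandler(ftpserver.FTPHandler):
    def ftp_TYPE(self, line):
        # Ignore whatever type the client asked for and force binary ("image") mode.
        self._current_type = "i"
        self.respond("200 Type set to: Binary.")
```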
1 | 0 | I want to have a "control panel" on a website, and when a button is pressed, I want it to run a command on the server (my computer). The panel is to run different python scripts I wrote (one script for each button), and I want to run the panel on my Mac, my iPod touch, and my wii. The best way I see for this is a website, since they all have browsers. Is there a javascript or something to run a command on my computer whenever the button is pressed?
EDIT: I heard AJAX might work for server-based things like this, but I have no idea how to do that. Is there like a 'system' block or something I can use? | false | 7,747,852 | 0 | 0 | 0 | 0 | On the client side (the browser), you can do it with the simplest approach. Just an html form. javascript would make it nicer for validation and to do ajax calls so the page doesnt have to refresh. But your main focus is handling it on the server. You could receive the form request in the language of your choice. If you are already running python, you could write a super fast cgi python script. Look at the cgi module for python. You would need to put this into the apache server on osx if thats where you will host it.
Unfortunately, your question about exactly how to write it is beyond the scope of a simple answer. But google for how to write and html form, or look at maybe jquery to build a quick form that can make ajax calls easily.
Then search for how to use the python cgi module and receive POST requests. | 0 | 198 | 0 | 0 | 2011-10-12T23:43:00.000 | javascript,python,macos,unix,controls | Running command with browser | 1 | 2 | 3 | 7,747,962 | 0 |
1 | 0 | I want to have a "control panel" on a website, and when a button is pressed, I want it to run a command on the server (my computer). The panel is to run different python scripts I wrote (one script for each button), and I want to run the panel on my Mac, my iPod touch, and my wii. The best way I see for this is a website, since they all have browsers. Is there a javascript or something to run a command on my computer whenever the button is pressed?
EDIT: I heard AJAX might work for server-based things like this, but I have no idea how to do that. Is there like a 'system' block or something I can use? | true | 7,747,852 | 1.2 | 0 | 0 | 0 | Here are three options:
Have each button submit a form with the name of the script in a hidden field. The server will receive the form parameters and can then branch off to run the appropriate script.
Have each button hooked to it's own unique URL and use javascript on the button click to just set window.location to that new URL. Your server will receive that URL and can decide which script to run based on the URL. You could even just use a link on the web page with no javascript.
Use Ajax to issue a unique URL to your server. This is essentially the same (from the server's point of view) as the previous two options. The main difference is that the web browser doesn't change what URL it's pointing to. The ajax call just directs the server to do something and return some data which the host web page can then do whatever it wants with. | 0 | 198 | 0 | 0 | 2011-10-12T23:43:00.000 | javascript,python,macos,unix,controls | Running command with browser | 1 | 2 | 3 | 7,747,896 | 0 |
0 | 0 | I'm using Python to transfer (via scp) and database a large number of files. One of the servers I transfer files to has odd ssh config rules to stop too many ssh requests from a single location. The upshot of this is that my python script, currently looping through files and copying via os.system, hangs after a few files have been transferred.
Is there a way in which Python could open up an ssh or other connection to the server, so that each file being transferred does not require an instance of ssh login?
Thanks, | false | 7,757,059 | 0 | 1 | 0 | 0 | This is not really python specific, but it probably depends on what libraries you can use.
What you need is a way to send files through a single connection.
(This is probably better suited to superuser or severfault.com though.)
Create tarfile locally, upload it and unpack at target?
Maybe you could even run 'tar xz' remotely and upload the file on stdin over SSH? (As MichaelDillon says in the comment, Python can create the tarfile on the fly...)
Is SFTP an option?
Rsync over SSH?
Twisted is an async library that can handle many sockets/connections at once. Is probably overkill for your solution though,
Hope it helps. | 0 | 696 | 1 | 1 | 2011-10-13T16:07:00.000 | python,sockets,ssh,scp | open (and maintain) remote connection with python | 1 | 1 | 3 | 7,757,147 | 0 |
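One way to reuse a single SSH connection for many files (covering the "Is SFTP an option?" point above) is the third-party paramiko library; a rough sketch, with the host, credentials and file list as placeholders.

```python
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("example.com", username="me", password="secret")   # or key_filename=...

sftp = client.open_sftp()
for local_path, remote_path in files_to_copy:   # a list you build beforehand
    sftp.put(local_path, remote_path)           # every file rides the same SSH session

sftp.close()
client.close()
```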
0 | 0 | I'm trying to run some automated functional tests using python and Twill. The tests verify that my application's OAuth login and connection endpoints work properly.
Luckily Twitter doesn't mind that Twill/Mechanize is accessing twitter.com. However, Facebook does not like the fact that I'm using Twill to access facebook.com. I get their 'Incompatible Browser' response. I simply want to access their OAuth dialog page and either allow or deny the application I'm testing. Is there a way to configure Twill/Mechanize so that Facebook will think its a standard browser? | false | 7,772,387 | 0.099668 | 1 | 0 | 1 | Try to send user agent header w/ mechanize. | 0 | 946 | 0 | 1 | 2011-10-14T19:06:00.000 | python,facebook,browser,twill | How to configure the python Twill/Mechanize library to acces Facebook | 1 | 1 | 2 | 7,773,528 | 0 |
0 | 0 | How do I write the function for Selenium to wait for a table with just a class identifier in Python? I'm having a devil of a time learning to use Selenium's Python webdriver functions. | false | 7,781,792 | 1 | 0 | 0 | 7 | I have made good experiences using:
time.sleep(seconds)
webdriver.Firefox.implicitly_wait(seconds)
The first one is pretty obvious - just wait a few seconds for some stuff.
For all my Selenium Scripts the sleep() with a few seconds (range from 1 to 3) works when I run them on my laptop, but on my Server the time to wait has a wider range, so I use implicitly_wait() too. I usually use implicitly_wait(30), which is really enough.
An implicit wait is to tell WebDriver to poll the DOM for a certain amount of time when trying to find an element or elements if they are not immediately available. The default setting is 0. Once set, the implicit wait is set for the life of the WebDriver object instance. | 0 | 100,199 | 0 | 48 | 2011-10-16T01:33:00.000 | python,selenium,selenium-webdriver,automation,automated-tests | Selenium waitForElement | 1 | 2 | 14 | 7,784,387 | 0 |
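A short sketch of the implicitly_wait() approach in the Python bindings of that era; the class name is a placeholder for the table's actual class.

```python
from selenium import webdriver

driver = webdriver.Firefox()
driver.implicitly_wait(30)   # poll the DOM for up to 30 s whenever an element is looked up

driver.get("http://example.com/page-with-table")
table = driver.find_element_by_class_name("results-table")  # waits until it appears or times out
```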
0 | 0 | How do I write the function for Selenium to wait for a table with just a class identifier in Python? I'm having a devil of a time learning to use Selenium's Python webdriver functions. | false | 7,781,792 | 0 | 0 | 0 | 0 | If I don't know something about a Selenium command, I use the Selenium IDE/RC with Firefox. You can choose and add commands in the combobox, and when you finish your test case you can export the test code in different languages, like Java, Ruby, Python, C#, etc. | 0 | 100,199 | 0 | 48 | 2011-10-16T01:33:00.000 | python,selenium,selenium-webdriver,automation,automated-tests | Selenium waitForElement | 1 | 2 | 14 | 42,267,544 | 0 |
0 | 0 | I want to write an (GUI) application that listens both to keyboard events (client side generated events) and to a network port (server side generated events). I could use some high level advice on how to do this. Some additional info:
- I am using the wxPython module for the GUI
- I could set the socket in non-blocking mode, but this way I have to keep polling the socket by keeping executing the recv() command. I did this earlier and I can recall that this used considerable resources
- I could use the thread module, but since I am not familiar with it, I try to avoid this, but maybe I can't
Advice would be appreciated. | false | 7,784,969 | 0 | 0 | 0 | 0 | I am not a wx expert. Could you use wx's native event driven mechanisms? The keypress would certainly have an event. Wx has a socket class wxSocketClient() that could translate the low level socket events (data ready, closed, etc) into a wx event. | 0 | 803 | 0 | 1 | 2011-10-16T14:40:00.000 | python,wxpython | Listening to network event and keyboard input at the same time in Python | 1 | 1 | 3 | 7,785,429 | 0 |
0 | 0 | I'm using zmq with Python with a REQ-REP scheme in order to transfer data. Right now I'm using the method send_json to send the data, but for some weird reason it works with some examples and not with others.
When the error occurs, the following error message and exception is shown:
File "socket.pyx", line 723, in zmq.core.socket.Socket.send_json
(zmq/core/socket.c:6062) File "socket.pyx", line 504, in
zmq.core.socket.Socket.send (zmq/core/socket.c:4307) File
"socket.pyx", line 148, in zmq.core.socket._send_copy
(zmq/core/socket.c:1752) ZMQError: Operation cannot be accomplished in
current state
At first I was thinking it was related with the length of the data sent, but then I've found that in some examples even with big chunks of data it works.
Any clues or things I should look for?
thanks | false | 7,789,200 | 0.197375 | 0 | 0 | 1 | REQ-REP sockets have a strict send/receive cycle(or vice versa)
Mostly, this happens when you try to send a request before receiving a response or something similar. | 1 | 683 | 0 | 1 | 2011-10-17T03:52:00.000 | python,sockets,zeromq | Weird error sending integer list with send_json using sockets with zmq with python | 1 | 1 | 1 | 10,416,815 | 0 |
0 | 0 | I can't import WebOb 1.1 with the Python 2.7 runtime, as WebOb imports io, io imports _io, which is blocked by the SDK. Is there a way to whitelist _io? It is obviously not supposed to be blacklisted. | true | 7,801,387 | 1.2 | 1 | 0 | 1 | From context, it sounds like you're trying to run your app on the dev_appserver. The dev_appserver does not yet support the Python 2.7 runtime; for now you'll have to do your development and testing on appspot. | 0 | 201 | 0 | 1 | 2011-10-18T01:06:00.000 | python,google-app-engine,webob | GAE Python 2.7, no _io module? | 1 | 1 | 1 | 7,815,687 | 0 |
0 | 0 | I always had the idea that doing a HEAD request instead of a GET request was faster (no matter the size of the resource) and therefore had it advantages in certain solutions.
However, while making a HEAD request in Python (to a 5+ MB dynamic generated resource) I realized that it took the same time as making a GET request (almost 27 seconds instead of the 'less than 2 seconds' I was hoping for).
Used some urllib2 solutions to make a HEAD request found here and even used pycurl (setting headers and nobody to True). Both of them took the same time.
Am I missing something conceptually? is it possible, using Python, to do a 'quick' HEAD request? | true | 7,826,349 | 1.2 | 0 | 0 | 7 | The server is taking the bulk of the time, not your requester or the network. If it's a dynamic resource, it's likely that the server doesn't know all the header information - in particular, Content-Length - until it's built it. So it has to build the whole thing whether you're doing HEAD or GET. | 0 | 2,998 | 0 | 4 | 2011-10-19T18:39:00.000 | python,http,urllib2,head,pycurl | HEAD request vs. GET request | 1 | 3 | 3 | 7,826,403 | 0 |
0 | 0 | I always had the idea that doing a HEAD request instead of a GET request was faster (no matter the size of the resource) and therefore had it advantages in certain solutions.
However, while making a HEAD request in Python (to a 5+ MB dynamic generated resource) I realized that it took the same time as making a GET request (almost 27 seconds instead of the 'less than 2 seconds' I was hoping for).
Used some urllib2 solutions to make a HEAD request found here and even used pycurl (setting headers and nobody to True). Both of them took the same time.
Am I missing something conceptually? is it possible, using Python, to do a 'quick' HEAD request? | false | 7,826,349 | 0.066568 | 0 | 0 | 1 | The response time is dominated by the server, not by your request. The HEAD request returns less data (just the headers) so conceptually it should be faster, but in practice, many static resources are cached so there is almost no measureable difference (just the time for the additional packets to come down the wire). | 0 | 2,998 | 0 | 4 | 2011-10-19T18:39:00.000 | python,http,urllib2,head,pycurl | HEAD request vs. GET request | 1 | 3 | 3 | 7,826,409 | 0 |
0 | 0 | I always had the idea that doing a HEAD request instead of a GET request was faster (no matter the size of the resource) and therefore had it advantages in certain solutions.
However, while making a HEAD request in Python (to a 5+ MB dynamic generated resource) I realized that it took the same time as making a GET request (almost 27 seconds instead of the 'less than 2 seconds' I was hoping for).
Used some urllib2 solutions to make a HEAD request found here and even used pycurl (setting headers and nobody to True). Both of them took the same time.
Am I missing something conceptually? is it possible, using Python, to do a 'quick' HEAD request? | false | 7,826,349 | 0.066568 | 0 | 0 | 1 | Chances are, the bulk of that request time is actually whatever process generates the 5+MB response on the server rather than the time to transfer it to you.
In many cases, a web application will still execute the full script when responding to a HEAD request--it just won't send the full body back to the requester.
If you have access to the code that is processing that request, you may be able to add a condition in there to make it handle the request differently depending on the the method, which could speed it up dramatically. | 0 | 2,998 | 0 | 4 | 2011-10-19T18:39:00.000 | python,http,urllib2,head,pycurl | HEAD request vs. GET request | 1 | 3 | 3 | 7,826,411 | 0 |
1 | 0 | Currently, there's a game that has different groups, and you can play for a prize 'gold' every hour. Sometimes there is gold, sometimes there isn't. It is posted on facebook every hour ''gold in group2" or "gold in group6'', and other times there isn't a post due to no gold being a prize for that hour. I want to write a small script that will check the site hourly and grab the result (if there is gold or not, and what group) and display it back to me. I was wanting to write it in python as I'm learning it. Would this be the best language to use? And how would I go about doing this? All I can really find is information on extracting links. I don't want to extract links, just the text. Thanks for any and all help. I appreciate it. | false | 7,829,768 | 0.099668 | 0 | 0 | 1 | I have something similiar to what you have, but you left out what my main question revolves around. I looked at htmlparser and bs, but I am unsure how to do something like if($posttext == gold) echo "gold in so and so".. seems like bs deals a lot with tags..i suppose since facebook posts can use a variety of tags, how would i go about doing just a search on the text and to return the 'post' ?? | 0 | 105 | 0 | 0 | 2011-10-20T00:21:00.000 | python,screen-scraping | fetch text from a web site and displaying it back | 1 | 1 | 2 | 7,853,614 | 0 |
0 | 0 | We occasionally have to debug glitchy Cisco routers that don't handle the TCP Selective Acknowledgment (SACK) options correctly. This causes our TCP sessions to die when routed through an IPTABLES port redirection rule.
To help with the diagnosis, I've been constructing a Python-based utility to build a sequence of packets that can reproduce this error at will; the implementation uses raw sockets to perform this trick. I've got an ICMP ping working nicely, but I've run into a snag with the UDP implementation. I can construct, send and receive the packet without problems; the issue I'm seeing is that Linux doesn't like the UDP packets being sent back from the remote system and always sends an ICMP Destination Unreachable packet, even though my Python script is able to receive and process the packet without any apparent problems.
My question: Is it possible to subsume the Linux UDP stack to bypass these ICMP error messages when working with RAW sockets?.
Thanks | true | 7,851,817 | 1.2 | 0 | 0 | 1 | Are you receiving and processing the packet and only need to suppress the ICMP port-unreachable? If so, maybe just add an entry to the iptables OUTPUT chain to drop it? | 0 | 197 | 1 | 2 | 2011-10-21T15:37:00.000 | python,linux,networking | Subsuming the Linux packet processing stack | 1 | 1 | 1 | 7,851,867 | 0 |
1 | 0 | I've heard that the Google Nexus S has RFID capabilities. I'd like to start learning about RFID and programmatically doing things. Where should I start? Good tutorials or code examples are what I'm after. (or hardware if it's not Android I suppose).
Doesn't have to be Android, could be python or java libraries as well. Preference for Android.
I see this as the future, and I want to get in on it :) | true | 7,857,940 | 1.2 | 0 | 0 | 1 | Buy a Nexus-S, buy some tags.
Then take a look at the code of the 'Tags' application that comes with android. Play with it, modify it. Write some tags with your own application.
Learn what Ndef is and how you craft your own messages/records. Learn how to use the transceive function to do direct communication to the tags. This will open up a world on it's own (aka you can write-protect tags that are not write protectable by Android itself etc).
All in all that can be done in two weeks. The Android NFC/RFID subsystem is easy to use. Most of the the hard stuff is hidden from you.
Afterwards, write your own little application, show it to advertising agencies that do Android apps and get a highly paid job. NFC experience is highly sought after at the moment. | 0 | 1,251 | 0 | 1 | 2011-10-22T07:00:00.000 | java,android,python,rfid | RFID + Android --> where do I start? | 1 | 1 | 1 | 7,858,130 | 0 |
1 | 0 | I have been using the XML package successfully for extracting HTML tables but want to extend to PDF's. From previous questions it does not appear that there is a simple R solution but wondered if there had been any recent developments
Failing that, is there some way in Python (in which I am a complete Novice) to obtain
and manipulate pdfs so that I could finish the job off with the R XML package | true | 7,918,718 | 1.2 | 0 | 0 | 11 | Extracting text from PDFs is hard, and nearly always requires lots of care.
I'd start with the command line tools such as pdftotext and see what they spit out. The problem is that PDFs can store the text in any order, can use awkward font encodings, and can do things like use ligature characters (the joined up 'ff' and 'ij' that you see in proper typesetting) to throw you.
pdftotext is installable on any Linux system... | 0 | 4,014 | 0 | 10 | 2011-10-27T15:54:00.000 | python,r,pdf,screen-scraping | PDF scraping using R | 1 | 1 | 4 | 7,918,885 | 0 |
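A minimal Python-side sketch of the pdftotext suggestion, shelling out to the command-line tool and capturing its text output; it assumes pdftotext is on the PATH and Python 2.7+ for subprocess.check_output.

```python
import subprocess

def pdf_to_text(path):
    # "-" sends the extracted text to stdout; "-layout" tries to preserve column layout
    return subprocess.check_output(["pdftotext", "-layout", path, "-"])

text = pdf_to_text("report.pdf")
```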
0 | 0 | I'm using pycurl to crawl data from the Twitter Streaming API. However, after several hours, the connection just hangs there.
Is there anyway to detect this and exit the program? I know pycurl has TIMEOUT and CONNECTTIMEOUT, but these two params do not apply. | false | 7,924,499 | 0 | 0 | 0 | 0 | Do you get an exception or something and could you please add some code? :)
Maybe you should think about using another module like httplib (if you want to
use SSL/TLS you could create a new socket and overwrite the connect function of httplib with
your secure wrapped socket :) ) | 0 | 478 | 0 | 0 | 2011-10-28T02:24:00.000 | python,api,twitter,streaming,pycurl | Python pycurl with Twitter Streaming API | 1 | 1 | 1 | 7,928,557 | 0 |
0 | 0 | I need some help in implementing Multicast Streaming server over IPv6 preferably in Python. I am able to do so with Datagram servers but since I need to send large amounts of data (images and videos) over the connection, I get an error stating , data too large to send.
Can any one tell me how do I implement a Streaming Socket with multicast that can both send and receive data?
Also, if there is a better way to do than Stream Sockets, please tell.
Thank You. | false | 7,937,928 | 0.53705 | 0 | 0 | 3 | You DO want to use datagrams, as with multicast there are multiple receivers and a stream socket will not work.
You need to send your data in small chunks (datagrams) and state in each which part of the stream it is so receivers can detect lost (and reordered) datagrams.
Instead of inventing a new mechanism for identifying the parts you are most likely better off encapsulating your data in RTP.
If you are going to stream video it might be worth looking into gstreamer which can do both sending and receiving RTP and has python bindings. | 0 | 570 | 0 | 0 | 2011-10-29T09:04:00.000 | python,sockets,streaming,ipv6,multicast | How do I create a multicast stream socket over IPv6 in Python? | 1 | 1 | 1 | 7,939,658 | 0 |
1 | 0 | I am looking for a library in Python OR Java that can use webkit or similar rendering engine on the server side (without GUI) and return the DOM object for further processing like selecting the elements etc. | false | 7,939,069 | 0 | 0 | 0 | 0 | If you want to process (execute) the javascript on headless server to generate the HTML snapshot, try using a tool like Selenium.
Selenium will allow you to fully render the HTML webpage on server side and then you can use the generated HTML to make a snapshot. | 0 | 412 | 0 | 1 | 2011-10-29T13:02:00.000 | java,python,webkit,rendering,server-side | Python or Java module to render HTML page on server side and obtain DOM object | 1 | 1 | 1 | 51,939,389 | 0 |
0 | 0 | What is the Python 3 equivalent of python -m SimpleHTTPServer? | false | 7,943,751 | 1 | 1 | 0 | 7 | Just wanted to add what worked for me:
python3 -m http.server 8000 (you can use any port number here except the ones which are currently in use) | 0 | 725,970 | 0 | 1,528 | 2011-10-30T07:22:00.000 | python,python-3.x,httpserver,simplehttpserver | What is the Python 3 equivalent of "python -m SimpleHTTPServer" | 1 | 1 | 7 | 71,111,456 | 0 |
0 | 0 | I'm trying to use feedparser to retrieve some specific information from feeds, but also retrieve the raw XML of each entry (ie. elements for RSS and for Atom), and I can't see how to do that. Obviously I could parse the XML by hand, but that's not very elegant, would require separate support for RSS and Atom, and I imagine it could fall out of sync with feedparser for ill-formed feeds. Is there a better way?
Thanks! | true | 7,945,669 | 1.2 | 0 | 0 | 2 | I'm the current developer of feedparser. Currently, one of the ways you can get that information is to monkeypatch feedparser._FeedParserMixin (or edit a local copy of feedparser.py). The methods you'll want to modify are:
feedparser._FeedParserMixin.unknown_starttag
feedparser._FeedParserMixin.unknown_endtag
At the top of each method you can insert a callback to a routine of your own that will capture the elements and their attributes as they're encountered by feedparser. | 0 | 1,115 | 0 | 2 | 2011-10-30T15:06:00.000 | python,xml,rss,atom-feed,feedparser | Retrieving raw XML for items with feedparser | 1 | 1 | 1 | 8,021,162 | 0 |
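A rough sketch of the monkeypatch described above: wrap the two mixin methods so every raw start/end tag is recorded in document order (the captured list merely illustrates where you could rebuild the raw XML; the method signatures follow the usual sgmllib convention).

```python
import feedparser

captured = []   # ("start", tag, attrs) and ("end", tag) tuples in document order

_orig_start = feedparser._FeedParserMixin.unknown_starttag
_orig_end = feedparser._FeedParserMixin.unknown_endtag

def _capture_start(self, tag, attrs):
    captured.append(("start", tag, attrs))
    return _orig_start(self, tag, attrs)

def _capture_end(self, tag):
    captured.append(("end", tag))
    return _orig_end(self, tag)

feedparser._FeedParserMixin.unknown_starttag = _capture_start
feedparser._FeedParserMixin.unknown_endtag = _capture_end

feed = feedparser.parse("http://example.com/feed.atom")
```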
0 | 0 | I'm parsing some XML using Python's Expat (by calling parser = xml.parsers.expat.ParserCreate() and then setting the relevant callbacks to my methods).
It seems that when Expat calls read(nbytes) to return new data, nbytes is always 2,048. I have quite a lot of XML to process, and suspect that these small read()s are making the overall process rather slow. As a point of reference, I'm seeing throughput around 9 MB/s on an Intel Xeon X5550, 2.67 GHz running Windows 7.
I've tried setting parser.buffer_text = True and parser.buffer_size = 65536, but Expat is still calling the read() method with an argument of just 2,048.
Is it possible to increase this? | true | 7,953,708 | 1.2 | 0 | 0 | 2 | You're talking about the xmlparse.ParseFile method, right?
Unfortunately, no, that value is hardcoded as BUF_SIZE = 2048 in pyexpat.c. | 0 | 192 | 0 | 0 | 2011-10-31T12:28:00.000 | python,xml,performance,expat-parser | Controlling number of bytes read() at a time with Expat | 1 | 1 | 1 | 7,960,421 | 0 |
0 | 0 | I'm writing a Python library to access Ubuntu One's REST API. (Yes, I know one already exists; this is a scratch-my-itch-and-learn-while-doing-it project.)
The library will be a relatively thin wrapper around the REST calls. I would like to be able to unit-test my library, without hitting U1 at all. What's the best practise standard for making this possible?
At the moment each REST call is an explicit http request. I can't see how to mock that out, but if I create a (mockable) UbuntuOneRESTAPI class hiding those http calls I suspect it will end up including most of the functionality of the wrapper library, which sort of defeats the purpose. | true | 7,955,695 | 1.2 | 1 | 0 | 1 | Your cutting point is the HTTP requests.
Write a mock library which intercepts the sending of the HTTP requests. Instead of sending them, convert them into a String and analyze them to test sending code.
For receiving code, mock the response handler. Save a good response from the REST server in a String and create the HTTP response object from it to test your receiver.
Write a few test cases which create these requests against the real thing so you can quickly verify that the requests/responses are good. | 0 | 604 | 0 | 1 | 2011-10-31T15:18:00.000 | python,unit-testing,mocking | How to test python library wrapping an external REST service (without hitting the service) | 1 | 1 | 1 | 7,956,472 | 0 |
0 | 0 | I am trying to calculate shortest path between 2 points using Dijkstra and A Star algorithms (in a directed NetworkX graph).
At the moment it works fine and I can see the calculated path but I would like to find a way of restricting certain paths.
For example if we have following nodes:
nodes = [1,2,3,4]
With these edges:
edges = ( (1,2),(2,3),(3,4) )
Is there a way of blocking/restricting 1 -> 2 -> 3 but still allow 2 -> 3 & 1 -> 2.
This would mean that:
can travel from 1 to 2
can travel from 2 to 3
cannot travel from 1 to 3 .. directly or indirectly (i.e. restrict 1->2->3 path).
Can this be achieved in NetworkX.. if not is there another graph library in Python that would allow this ?
Thanks. | false | 7,983,724 | 0.291313 | 0 | 0 | 3 | You could set your node data {color=['blue']} for node 1, node 2 has {color=['red','blue']} and node3 has {color=['red']}. Then use an networkx.algorithms. astar_path() approach setting the
heuristic is set to a function which returns a might_as_well_be_infinity when it encountered an node without the same color you are searching for
weight=less_than_infinity. | 0 | 2,143 | 0 | 11 | 2011-11-02T16:20:00.000 | python,routing,path-finding,networkx | How to restrict certain paths in NetworkX graphs? | 1 | 1 | 2 | 33,684,974 | 0 |
1 | 0 | I need to make an application which streams live multimedia. At present my application is taking image frames from a webcam (using OpenCV) and sending it to the client. It is also sending audio using pymedia module. The problem is that both the image and audio packets that arrive at the client are out of sync.
So I have following questions:
Is there any module in python for live-multimedia streaming?
Can I make the audio and image frames somehow in sync for the client?
PS. pymedia has not been in development since 2006 and is not working. | true | 7,993,624 | 1.2 | 0 | 0 | 2 | You can use gstreamer's python module. I mean gst-python mentioned above. Use rtmp protocol to synchronize client/server videos. Last time I use gst-python, there was no support for rtmp. At the time, my solution was to limit buffer size. When buffer gets full oldest frames will be dropped. | 1 | 18,589 | 0 | 10 | 2011-11-03T10:51:00.000 | python,video,streaming,live,ipv6 | Streaming audio and video with Python | 1 | 1 | 3 | 7,994,014 | 0 |
0 | 0 | So I've been reading for a while now, and it seems like asynchronous socket handling would be a better approach to what I'm trying to do.
Right now I'm working on a gaming server. At the moment the socket server does OK with about 3 clients or so sending data at the exact same time.
But my problem is that after that, things start to get laggy. So if I write an asynchronous server in the same manner as what I'm already doing, would it make the game data transfer more smoothly?
This is in Python, by the way. | true | 7,995,290 | 1.2 | 0 | 0 | 1 | Asynchronous sockets are more effective than synchronous ones. But if the game is lagging for 4+ clients, then your server/client system is badly written and it is not a matter of sockets, IMHO. | 0 | 198 | 0 | 0 | 2011-11-03T13:00:00.000 | python,sockets,asynchronous,udp | Is asynchronous socket handling the way i need to go? In Python | 1 | 1 | 1 | 7,995,699 | 0 |
1 | 0 | What is an accepted way to get authentication credentials (login and password) when using webapp?
I'm pretty sure that they get submitted and/or interpreted differently than the rest of the information coming through the request and I'm afraid I can't remember where exactly I'm supposed to get them from.
FYI: The requests are forced to https
Thanks! | true | 8,039,374 | 1.2 | 0 | 0 | 1 | Are you using the built in authentication, or trying to roll your own? If the former, you can't access a user's credentials - just get the information you need from the User object. If the latter, you can handle the credentials any way you wish - you're rolling your own, and App Engine has no magic way to detect that what you're handling is a username or password. | 0 | 198 | 0 | 2 | 2011-11-07T16:24:00.000 | python,google-app-engine,web-applications,authentication,credentials | Getting login credentials when using webapp | 1 | 2 | 2 | 8,060,118 | 0 |
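If the built-in authentication is what is in play, a minimal sketch of a webapp handler that reads the User object (rather than raw credentials) looks roughly like this.

```python
from google.appengine.api import users
from google.appengine.ext import webapp

class MainHandler(webapp.RequestHandler):
    def get(self):
        user = users.get_current_user()
        if user:
            self.response.out.write("Hello, %s" % user.nickname())
        else:
            # No credentials to handle yourself; just bounce to the login page.
            self.redirect(users.create_login_url(self.request.uri))
```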
1 | 0 | What is an accepted way to get authentication credentials (login and password) when using webapp?
I'm pretty sure that they get submitted and/or interpreted differently than the rest of the information coming through the request and I'm afraid I can't remember where exactly I'm supposed to get them from.
FYI: The requests are forced to https
Thanks! | false | 8,039,374 | 0.197375 | 0 | 0 | 2 | If you've got HTTPS enabled, sending them along with the request (usually a POST) is acceptable, and the "standard" method of logging in.
If you want to get clever, you could hash the password using SHA1 on the client end so that even an sslstrip won't reveal the password in plaintext (though it won't prevent replay attacks). | 0 | 198 | 0 | 2 | 2011-11-07T16:24:00.000 | python,google-app-engine,web-applications,authentication,credentials | Getting login credentials when using webapp | 1 | 2 | 2 | 8,039,432 | 0 |
1 | 0 | I parse a website with python. They use a lot of redirects and they do them by calling javascript functions.
So when I just use urllib to parse the site, it doesn't help me, because I can't find the destination url in the returned html code.
Is there a way to access the DOM and call the correct javascript function from my python code?
All I need is the url, where the redirect takes me. | false | 8,053,295 | -0.099668 | 0 | 0 | -1 | It doesnt sound like fun to me, but every javascript function is a is also an object, so you can just read the function rather than call it and perhaps the URL is in it. Otherwise, that function may call another which you would then have to recurse into... Again, doesnt sound like fun, but might be doable. | 0 | 5,021 | 0 | 3 | 2011-11-08T15:58:00.000 | python,urllib2 | Getting the final destination of a javascript redirect on a website | 1 | 1 | 2 | 8,053,358 | 0 |
0 | 0 | Is it possible to set default headers for boto requests? Basically I want to include a couple of headers in every API call I make to S3. | true | 8,068,422 | 1.2 | 0 | 0 | 1 | Right now, extra headers have to be specified on each request. The various methods of the bucket and key class all take an optional headers parameter and the contents of that dict gets merged into the request headers.
Being able to specify extra headers at the bucket level and then have those merged into all requests automatically sounds like a great feature. I'll add that to boto in the near future. | 0 | 348 | 0 | 2 | 2011-11-09T16:47:00.000 | python,amazon-s3,amazon-web-services,boto | Add "default" headers to all boto requests? | 1 | 1 | 1 | 8,068,778 | 0 |
0 | 0 | Does anyone know of a web proxy written in Python that will support SSL connections and will also support PKCS#11 tokens? I am in need of a proxy that will send SSL web requests using a PKCS#11 smartcard.
I have been looking for projects that are using something like Twisted but have not seen any. | false | 8,073,753 | 0 | 0 | 0 | 0 | If Twisted has a proxy, then you can use it with M2Crypto+engine_pkcs11. I had the code, I can see if it is still existing somewhere. | 0 | 459 | 0 | 1 | 2011-11-10T00:50:00.000 | python,proxy,smartcard,pkcs#11 | Python Web Proxy that Supports PKCS#11 | 1 | 1 | 1 | 8,079,122 | 0 |
0 | 0 | When I send credentials using the login method of the python SMTP library, do they go off the wire encrypted or as plaintext? | true | 8,074,227 | 1.2 | 1 | 0 | 3 | They will only be encrypted if you use SMTP with TLS or SSL. | 0 | 232 | 0 | 1 | 2011-11-10T02:12:00.000 | python,security,smtp,credentials | sending an email using python SMTP library credentials security | 1 | 1 | 1 | 8,074,236 | 0 |
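For context, a minimal smtplib sketch showing the STARTTLS upgrade that keeps the LOGIN credentials off the wire in plaintext (host, port and addresses are placeholders):

```python
import smtplib

server = smtplib.SMTP('smtp.example.com', 587)   # placeholder host/port
server.ehlo()
server.starttls()          # upgrade the connection to TLS before logging in
server.ehlo()
server.login('user@example.com', 'secret')       # now sent over the encrypted channel
server.sendmail('user@example.com', ['to@example.com'], 'Subject: hi\n\nbody')
server.quit()
```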
0 | 0 | I wanted to know how to maximize a browser window using the Python bindings for Selenium 2-WebDriver. | false | 8,075,297 | 1 | 0 | 0 | 12 | You can use browser.maximize_window() for that | 0 | 9,312 | 0 | 2 | 2011-11-10T05:17:00.000 | python,webdriver,selenium-webdriver | How to maximize a browser window using the Python bindings for Selenium 2-WebDriver? | 1 | 1 | 3 | 18,481,265 | 0 |
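A tiny usage sketch of that call with the Python bindings:

```python
from selenium import webdriver

driver = webdriver.Firefox()       # or webdriver.Chrome(), etc.
driver.get('http://example.com')
driver.maximize_window()           # maximizes the current browser window
```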
0 | 0 | I write a python telnet client to communicate with a server through telnet. However, many people tell me that it's no secure. How can I convert it to ssh? Should I need to totally rewrite my program? | false | 8,088,742 | 0.066568 | 1 | 0 | 1 | While Telnet is insecure, it's essentially just a serial console over a network, which makes it easy to code for. SSH is much, much more complex. There's encryption, authentication, negotiation, etc to do. And it's very easy to get wrong in spectacular fashion.
There's nothing wrong with Telnet per se, but if you can change things over the network - and it's not a private network - you're opening yourself up for trouble.
Assuming this is running on a computer, why not restrict the server to localhost? Then ssh into the computer and telnet to localhost? All the security with minimal hassle. | 0 | 1,247 | 0 | 0 | 2011-11-11T01:44:00.000 | python,security,ssh,telnet | python: convert telnet application to ssh | 1 | 1 | 3 | 8,088,775 | 0 |
1 | 0 | I have a page with a lot of ads being loaded in piece by piece.
I need to position an element relative to overall page height, which is changing during load, because of ads being added.
Question: Is there a jquery event or similar to detect, when all elements are loaded? I'm currently "waiting" with setTimeout, but this is far from nice.
An idle event would be nice, which fires once after pageload if no new http requests are made for xyz secs. | false | 8,093,297 | 0 | 1 | 0 | 0 | Ideally the answer would be $(function(){ }) or window.onload = function(){} that fires after all the DOM contents are loaded. But I guess, the ads on your page starts loading asynchronously after the DOM load.
So, assuming you know the number of 'ads' on your page (you said you are loading them piece by piece), my advise would be to increment a counter on each successful 'ad' load. When that counter reaches the total number of ads, you fire a 'all_adv_loaded' function. | 0 | 6,659 | 0 | 3 | 2011-11-11T11:26:00.000 | jquery,events,python-idle | jquery - can I detect once all content is loaded? | 1 | 1 | 4 | 8,093,470 | 0 |
1 | 0 | I have a program that I wrote in python that collects data. I want to be able to store the data on the internet somewhere and allow for another user to access it from another computer somewhere else, anywhere in the world that has an internet connection. My original idea was to use an e-mail client, such as g-mail, to store the data by sending pickled strings to the address. This would allow for anyone to access the address and simply read the newest e-mail to get the data. It worked perfectly, but the program requires a new e-mail to be sent every 5-30 seconds. So the method fell through because of the limit g-mail has on e-mails, among other reasons, such as I was unable to completely delete old e-mails.
Now I want to try a different idea, but I do not know very much about network programming with python. I want to setup a webpage with essentially nothing on it. The "master" program, the program actually collecting the data, will send a pickled string to the webpage. Then any of the "remote" programs will be able to read the string. I will also need the master program to delete old strings as it updates the webpage. It would be preferred to be able to store multiple string, so there is no chance of the master updating while the remote is reading.
I do not know if this is a feasible task in python, but any and all ideas are welcome. Also, if you have an ideas on how to do this a different way, I am all ears, well eyes in this case. | false | 8,098,068 | 0.099668 | 0 | 0 | 2 | I would suggest taking a look at setting up a simple site in google app engine. It's free and you can use python to do the site. Then it would just be a matter of creating a simple restful service that you could send a POST to with your pickled data and store it in a database. Then just create a simple web front end onto the database. | 0 | 403 | 0 | 1 | 2011-11-11T18:02:00.000 | python,networking | Sending data through the web to a remote program using python | 1 | 4 | 4 | 8,098,102 | 0
1 | 0 | I have a program that I wrote in python that collects data. I want to be able to store the data on the internet somewhere and allow for another user to access it from another computer somewhere else, anywhere in the world that has an internet connection. My original idea was to use an e-mail client, such as g-mail, to store the data by sending pickled strings to the address. This would allow for anyone to access the address and simply read the newest e-mail to get the data. It worked perfectly, but the program requires a new e-mail to be sent every 5-30 seconds. So the method fell through because of the limit g-mail has on e-mails, among other reasons, such as I was unable to completely delete old e-mails.
Now I want to try a different idea, but I do not know very much about network programming with python. I want to setup a webpage with essentially nothing on it. The "master" program, the program actually collecting the data, will send a pickled string to the webpage. Then any of the "remote" programs will be able to read the string. I will also need the master program to delete old strings as it updates the webpage. It would be preferred to be able to store multiple string, so there is no chance of the master updating while the remote is reading.
I do not know if this is a feasible task in python, but any and all ideas are welcome. Also, if you have an ideas on how to do this a different way, I am all ears, well eyes in this case. | false | 8,098,068 | 0 | 0 | 0 | 0 | Adding this as an answer so that OP will be more likely to see it...
Make sure you consider security! If you just blindly accept pickled data, it can open you up to arbitrary code execution. | 0 | 403 | 0 | 1 | 2011-11-11T18:02:00.000 | python,networking | Sending data through the web to a remote program using python | 1 | 4 | 4 | 8,098,342 | 0 |
1 | 0 | I have a program that I wrote in python that collects data. I want to be able to store the data on the internet somewhere and allow for another user to access it from another computer somewhere else, anywhere in the world that has an internet connection. My original idea was to use an e-mail client, such as g-mail, to store the data by sending pickled strings to the address. This would allow for anyone to access the address and simply read the newest e-mail to get the data. It worked perfectly, but the program requires a new e-mail to be sent every 5-30 seconds. So the method fell through because of the limit g-mail has on e-mails, among other reasons, such as I was unable to completely delete old e-mails.
Now I want to try a different idea, but I do not know very much about network programming with python. I want to setup a webpage with essentially nothing on it. The "master" program, the program actually collecting the data, will send a pickled string to the webpage. Then any of the "remote" programs will be able to read the string. I will also need the master program to delete old strings as it updates the webpage. It would be preferred to be able to store multiple string, so there is no chance of the master updating while the remote is reading.
I do not know if this is a feasible task in python, but any and all ideas are welcome. Also, if you have an ideas on how to do this a different way, I am all ears, well eyes in this case. | false | 8,098,068 | 0 | 0 | 0 | 0 | I suggest you use good middleware such as ZeroC Ice, Pyro4, or Twisted.
Pyro4 uses pickle to serialize data. | 0 | 403 | 0 | 1 | 2011-11-11T18:02:00.000 | python,networking | Sending data through the web to a remote program using python | 1 | 4 | 4 | 8,099,975 | 0
1 | 0 | I have a program that I wrote in python that collects data. I want to be able to store the data on the internet somewhere and allow for another user to access it from another computer somewhere else, anywhere in the world that has an internet connection. My original idea was to use an e-mail client, such as g-mail, to store the data by sending pickled strings to the address. This would allow for anyone to access the address and simply read the newest e-mail to get the data. It worked perfectly, but the program requires a new e-mail to be sent every 5-30 seconds. So the method fell through because of the limit g-mail has on e-mails, among other reasons, such as I was unable to completely delete old e-mails.
Now I want to try a different idea, but I do not know very much about network programming with python. I want to setup a webpage with essentially nothing on it. The "master" program, the program actually collecting the data, will send a pickled string to the webpage. Then any of the "remote" programs will be able to read the string. I will also need the master program to delete old strings as it updates the webpage. It would be preferred to be able to store multiple string, so there is no chance of the master updating while the remote is reading.
I do not know if this is a feasible task in python, but any and all ideas are welcome. Also, if you have an ideas on how to do this a different way, I am all ears, well eyes in this case. | false | 8,098,068 | 0.049958 | 0 | 0 | 1 | Another option in addition to what Casey already provided:
Set up a remote MySQL database somewhere that has user access levels allowing remote connections. Your Python program could then simply access the database and INSERT the data you're trying to store centrally (e.g. through MySQLDb package or pyodbc package). Your users could then either read the data through a client that supports MySQL or you could write a simple front-end in Python or PHP that displays the data from the database. | 0 | 403 | 0 | 1 | 2011-11-11T18:02:00.000 | python,networking | Sending data through the web to a remote program using python | 1 | 4 | 4 | 8,098,220 | 0 |
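A rough sketch of the collecting side of that setup with MySQLdb (host, credentials and schema are placeholders):

```python
# Hypothetical example: push each collected record to a remote MySQL database.
import MySQLdb

conn = MySQLdb.connect(host='db.example.com', user='collector',
                       passwd='secret', db='telemetry')   # placeholder connection details
cur = conn.cursor()
cur.execute("INSERT INTO readings (created_at, payload) VALUES (NOW(), %s)",
            ("some serialized data",))                    # placeholder table/columns
conn.commit()
cur.close()
conn.close()
```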
1 | 0 | I'm scraping pdf files from a site using Scrapy, a Python web-scraping framework.
The site requires to follow the same session in order to allow you to download the pdf.
It works great with Scrapy because it's all automated, but a couple of seconds after I run the script it starts to give me fake pdf files, like when I try to access the pdf directly without my session.
Why is that, and any idea how to overcome this problem? | false | 8,108,477 | 0 | 0 | 0 | 0 | I think the site tracks your session. If it's a PHP site, pass the PHPSESSID cookie to the request that downloads the PDF file. | 0 | 1,740 | 0 | 0 | 2011-11-12T23:54:00.000 | python,session,cookies,scrapy | Downloading PDF files with Scrapy | 1 | 1 | 1 | 8,113,814 | 0
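If the session really is cookie-based, the answer's suggestion can be implemented by attaching the cookie to the download request explicitly; a sketch of two spider callback methods (URL and cookie value are placeholders):

```python
# Sketch of two Scrapy spider callbacks: attach the session cookie explicitly
# when requesting the PDF, then write the response body to disk.
from scrapy.http import Request

def parse_listing(self, response):
    pdf_url = 'http://example.com/download.pdf'        # placeholder URL
    return Request(pdf_url,
                   cookies={'PHPSESSID': 'abc123'},     # placeholder session id
                   callback=self.save_pdf)

def save_pdf(self, response):
    with open('out.pdf', 'wb') as f:
        f.write(response.body)
```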
0 | 0 | Here is my setup: I have a Python webserver (written myself) that listens on port 80 and also have the Transmission-daemon (bittorrent client) that provides a webUI on port 9101. (running on Linux)
I can access both webservers locally without problems, but now would like to access them externally also. My issue is that I would prefer not to have to open extra ports on my firewall to access the Transmission webUI. Is it possible to within the python webserver to redirect some traffic to the appropriate port.
So for example:
http: //mywebserver/index.html -> served by the Python webserver
http: //mywebserver/transmission.html -> redirected to transmission (which is currently http: //localhost:9101)
Thanks | false | 8,149,701 | 0.197375 | 0 | 0 | 2 | I found my answer: a reverse proxy. It will take care of the routing to the correct port based on the URL. I now just have to select the right one there are so many (NginX, pound, lighttd etc...)
Thanks anyway. | 0 | 388 | 1 | 1 | 2011-11-16T09:57:00.000 | python,linux,redirect,webserver,transmission | Redirecting traffic to other webserver | 1 | 1 | 2 | 8,166,815 | 0 |
1 | 0 | So I am writing link fetchers to find new links on particular sites for a given group of 'starting links'.
Currently I am using Python/Beautiful Soup to accomplish this with decent success.
I have an input file [for each site] that I build the 'starting links' list from.
I use urllib2 to load the webpages and then beautiful soup to find the group of links I need to fetch and append them to a list. Some sites have the links split between a lot of different pages so I have to load them all to collect the links.
After it collects all the specified type of links from each 'starting link', I then have it compare this list with a 'previously collected' list that I load from file. I then return the difference to another list which is the 'new links' list as well as add these to the 'previously collected' link list.
My problem is performance. I am recollecting all of these previously seen links each time I rerun the program which means I am reloading a bunch of pages that I am not going to get any new links from.
Generally the sites add new links on top of the others, so I am thinking my next move might be to compare the 'currently available' link with the 'previously collected' list and if there is not a match, then collect the link until a match occurs, where it would then drop out for this given 'starting link' and move on to the next, potentially saving a lot of page loads for sites that break up their links.
Does this make sense to help speed up the fetching of new links which I will schedule to run every few days?
The 'previously collected' list could have a couple hundred thousand links in it, so I was not sure how running this comparison over and over would affect things vs keeping the program dumb and always recollecting all available links.
Do you have a better solution all together? Any input is much appreciated. | false | 8,156,736 | 0 | 0 | 0 | 0 | You should consider using hashes for comparing the previously collected list. Instead of storing a list of links as strings, store a list of MD5 or SHA1 hashes for those links. Comparing a hash to a list of hashes is much faster than comparing a string to a list of strings.
Or if you maintain and persist an actual hash table of encountered links, then you wouldn't have to do any searching and comparing through a list, but would have constant time lookup to know if you've seen a link. A full hash table will cost a lot of memory if your list is big though. | 0 | 130 | 0 | 0 | 2011-11-16T18:28:00.000 | python,beautifulsoup | Python Link Fetcher Performance Issue | 1 | 1 | 2 | 8,156,982 | 0 |
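A small sketch of the 'persistent set of seen hashes' idea (the file of hashes is assumed to already exist, one hash per line):

```python
# Sketch: constant-time membership checks against previously collected links.
import hashlib

def link_hash(url):
    return hashlib.md5(url.encode('utf-8')).hexdigest()

# load previously seen hashes into a set for O(1) lookups
with open('seen_hashes.txt') as f:                      # placeholder persistence file
    seen = set(line.strip() for line in f)

collected = ['http://example.com/a', 'http://example.com/b']   # stand-in for scraped links
new_links = [u for u in collected if link_hash(u) not in seen]
seen.update(link_hash(u) for u in new_links)
```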
0 | 0 | I have a binary file which was created by a VBA file (I don't work with VBA or binary at all) but I need to get Python to read this binary file (which includes a list of inputs for a calculation) and then write these values into an xml file.
If I know the order of the inputs into the file that is created, is it possible to read the binary code line by line, to get input by input and then write into the xml?
I have to use Python rather than VBA since I am not authorised to change the original VBA files.
I apologise for the lack of information, I only know a bit of Python and have never worked with VBA or binary. I really appreciate any help anyone can give me! Thank you =) | false | 8,168,482 | 0 | 0 | 0 | 0 | Python is probably not the best tool, I would recommend VBA as it is going to be best suited to reading that file. Then, create your xml file as output. | 0 | 1,333 | 0 | 0 | 2011-11-17T14:10:00.000 | python,xml,binary | Python file to read in binary and write in xml | 1 | 2 | 2 | 8,168,534 | 0 |
0 | 0 | I have a binary file which was created by a VBA file (I don't work with VBA or binary at all) but I need to get Python to read this binary file (which includes a list of inputs for a calculation) and then write these values into an xml file.
If I know the order of the inputs into the file that is created, is it possible to read the binary code line by line, to get input by input and then write into the xml?
I have to use Python rather than VBA since I am not authorised to change the original VBA files.
I apologise for the lack of information, I only know a bit of Python and have never worked with VBA or binary. I really appreciate any help anyone can give me! Thank you =) | true | 8,168,482 | 1.2 | 0 | 0 | 0 | If you have the VBA code that made the file you should be able to copy the data structure in python and then deserialize it. | 0 | 1,333 | 0 | 0 | 2011-11-17T14:10:00.000 | python,xml,binary | Python file to read in binary and write in xml | 1 | 2 | 2 | 8,168,665 | 0 |
1 | 0 | I plan to run a webserver with some content generated through a Python script. I have a script that generates the data I would want to present at the moment that polls every 2 minutes with a fairly large request. How can I put this data onto a webpage without making voluminous numbers of requests? I can think of a few stupid methods including writing all of my data to text files to be read by some JavaScript, but I'm looking for a better solution. Thank you. | true | 8,175,406 | 1.2 | 0 | 0 | 0 | I wouldn't write off the javascript polling idea.
Consider generating the file and pushing it to a CDN. Make sure to let the CDN know you need a specific TTL that matches your generation schedule, as their default TTL will probably be longer than 2 minutes. The CDN should be awesome at serving static content and dealing with a ton of requests. It should scale well.
I was just at a conference last week where the guys from Push.IO recommended this strategy for clients to appear to receive live sporting updates during a game. | 0 | 199 | 0 | 0 | 2011-11-17T22:39:00.000 | python,html,cgi | Dynamic Content with a Polling Interval | 1 | 1 | 1 | 8,175,486 | 0
0 | 0 | How can I wrap a boto.storage_uri() call in python so I can handle possible exceptions? | false | 8,176,002 | 1 | 0 | 0 | 30 | Your question about Boto is a good one, not not easy to answer. The Boto exception hierarchy is poorly designed, and ultimately the only way to determine what the exception you want to trap is requires looking at the boto source code.
For example if you look at (on Ubuntu) /usr/share/pyshared/boto/exception.py you will see that there are two broad classes:
boto.exception.BotoClientError
boto.exception.BotoServerError
Many of the exceptions are derived from these two, though the concept of "Client" and "Server" is not very well defined and you would probably want to check for both to be sure, as many exceptions can happen unexpectedly (as usual). However, exceptions such as boto.exception.NoAuthHandlerFound are derived directly from Exception and therefore you would have to check for them separately.
From looking at the code, it appears that there is neither consistency nor much care in defining the exception hierarchy in Boto, which is a flaw in Boto's design that unfortunately requires you to rely on broader exception checking than would normally be recommended. | 1 | 14,487 | 0 | 8 | 2011-11-17T23:45:00.000 | python,boto | How can I handle a boto exception in python? | 1 | 1 | 3 | 12,064,611 | 0
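To make that concrete, a guarded call might look roughly like this (the bucket name is a placeholder, and NoAuthHandlerFound is caught separately because it does not derive from the two base classes):

```python
import boto
import boto.exception

try:
    conn = boto.connect_s3()
    bucket = conn.get_bucket('my-bucket')          # placeholder bucket name
except boto.exception.NoAuthHandlerFound:
    print('No AWS credentials could be found')
except (boto.exception.BotoClientError, boto.exception.BotoServerError) as e:
    print('Boto error: %s' % e)
```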
0 | 0 | I am using sleekxmpp to connect to Google Talk. I am trying to track when contacts change their status using the changed_status event. The issue I am having is that as I log a status change, the function associated with the changed_status event seems to be called multiple times. Why might this be?
I am thinking it has something to do with the way that contact is logged into Google Talk, that is they may have it open multiple times from the same computer. So when they close their computer it affects both sessions, and each session triggers a changed_status event. | false | 8,177,403 | 0 | 0 | 0 | 0 | Check the resource associated with each change. If the resources are all different for the same user, it is because the user is logged on from several different clients, perhaps from multiple different machines. You will get presence updates from all of the user's clients if you're subscribed to them. | 0 | 471 | 0 | 2 | 2011-11-18T03:29:00.000 | python,xmpp,google-talk | sleekxmpp changed_status event, firing multiple times | 1 | 2 | 2 | 8,179,210 | 0 |
0 | 0 | I am using sleekxmpp to connect to Google Talk. I am trying to track when contacts change their status using the changed_status event. The issue I am having is that as I log a status change, the function associated with the changed_status event seems to be called multiple times. Why might this be?
I am thinking it has something to do with the way that contact is logged into Google Talk, that is they may have it open multiple times from the same computer. So when they close their computer it affects both sessions, and each session triggers a changed_status event. | true | 8,177,403 | 1.2 | 0 | 0 | 5 | The answer is that you exposed a bug in SleekXMPP that I need to fix :)
The changed_status event was firing for any presence stanza received, and not firing only when a resource's status or show value changed.
The bug fix is now in the develop branch and it will be in the soon-to-be RC3 release. | 0 | 471 | 0 | 2 | 2011-11-18T03:29:00.000 | python,xmpp,google-talk | sleekxmpp changed_status event, firing multiple times | 1 | 2 | 2 | 8,189,697 | 0 |
0 | 0 | I have a situation where XML data is being processed by two different mechanisms. In one place it is being processed using Python's xml.dom.minidom library. In the other, similar processing is being performed in .NET, via an XmlTextWriter.
In the output generated by the Python code, empty elements are written <ElementName/> (with no space before the element close). In the .NET code, a space is being inserted (resulting in <ElementName />). This makes no difference whatsoever to the validity or meaning of the XML, but it does cause the output to be detected as different when the two outputs are compared.
Is there any way to tell the XmlTextWriter not to include the extra space? Failing that, is there any way to include the extra space in the Python-generated output (short of messing with the library source, which while possible is something I consider undesirable ;-))?
Update: Perhaps I should explain what I'm trying to do instead of just describing the problem. It's possible that I'm making things more complicated / painful than I should.
What I really need is some mechanism to determine that the structure represented by the XML has not been modified. I was originally flattening the XML (which eliminated whitespace issues when everything was being done in .NET world), then calculating an appropriately salted hash of the data. Is there a better mechanism I could / should be using? | false | 8,187,959 | 0.039979 | 0 | 0 | 1 | You'll find that the problem only occurs if you set the Indent property in the XmlWriterSettings to true. When Indent == false, there is no space inserted. But if you want indentation, you have to live with that space.
So perhaps the solution to your program is to turn off indentation in both tools?
This is unfortunate, because it's almost possible to change that behavior.
The implementation of XmlWriter actually calls XmlWriterSettings.CreateWriter to create a writer based on the settings you pass. If Indent == true, then it creates an XmlEncodedRawTextWriterIndent, which is an internal class derived from the abstract XmlWriter. It overrides WriteFullEndElement and inserts that space.
In theory, you could create your own class derived from XmlEncodedRawTextWriterIndent that overrides WriteFullEndElement. If you could do that, it'd be easy to prevent the indentation. But you can't do that because it's an internal class (internal to System.Xml). Even if you could subclass XmlEncodedRawTextWriterIndent, you'd have the problem that XmlWriterSettings.CreateXmlWriter doesn't have a way to instantiate your class, and XmlWriterSettings is sealed.
I imagine there are good reasons for effectively preventing creation of custom XmlWriter classes, although they escape me at the moment. | 0 | 505 | 0 | 4 | 2011-11-18T19:24:00.000 | .net,python,xml | Can I tell an XmlTextWriter to write <ElementName/> instead of <ElementName />? | 1 | 3 | 5 | 8,188,895 | 0
0 | 0 | I have a situation where XML data is being processed by two different mechanisms. In one place it is being processed using Python's xml.dom.minidom library. In the other, similar processing is being performed in .NET, via an XmlTextWriter.
In the output generated by the Python code, empty elements are written <ElementName/> (with no space before the element close). In the .NET code, a space is being inserted (resulting in <ElementName />). This makes no difference whatsoever to the validity or meaning of the XML, but it does cause the output to be detected as different when the two outputs are compared.
Is there any way to tell the XmlTextWriter not to include the extra space? Failing that, is there any way to include the extra space in the Python-generated output (short of messing with the library source, which while possible is something I consider undesirable ;-))?
Update: Perhaps I should explain what I'm trying to do instead of just describing the problem. It's possible that I'm making things more complicated / painful than I should.
What I really need is some mechanism to determine that the structure represented by the XML has not been modified. I was originally flattening the XML (which eliminated whitespace issues when everything was being done in .NET world), then calculating an appropriately salted hash of the data. Is there a better mechanism I could / should be using? | false | 8,187,959 | 0 | 0 | 0 | 0 | If you're just looking for file integrity wouldn't a MD5 (or something similar) of the file be sufficient? | 0 | 505 | 0 | 4 | 2011-11-18T19:24:00.000 | .net,python,xml | Can I tell an XmlTextWriter to write instead of ? | 1 | 3 | 5 | 8,188,391 | 0 |
0 | 0 | I have a situation where XML data is being processed by two different mechanisms. In one place it is being processed using Python's xml.dom.minidom library. In the other, similar processing is being performed in .NET, via an XmlTextWriter.
In the output generated by the Python code, empty elements are written <ElementName/> (with no space before the element close). In the .NET code, a space is being inserted (resulting in <ElementName />). This makes no difference whatsoever to the validity or meaning of the XML, but it does cause the output to be detected as different when the two outputs are compared.
Is there any way to tell the XmlTextWriter not to include the extra space? Failing that, is there any way to include the extra space in the Python-generated output (short of messing with the library source, which while possible is something I consider undesirable ;-))?
Update: Perhaps I should explain what I'm trying to do instead of just describing the problem. It's possible that I'm making things more complicated / painful than I should.
What I really need is some mechanism to determine that the structure represented by the XML has not been modified. I was originally flattening the XML (which eliminated whitespace issues when everything was being done in .NET world), then calculating an appropriately salted hash of the data. Is there a better mechanism I could / should be using? | false | 8,187,959 | 0 | 0 | 0 | 0 | I would just post-process the output to do search/replace instead of trying to mess with the library | 0 | 505 | 0 | 4 | 2011-11-18T19:24:00.000 | .net,python,xml | Can I tell an XmlTextWriter to write <ElementName/> instead of <ElementName />? | 1 | 3 | 5 | 8,188,032 | 0
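A trivial normalisation pass along those lines, applied to whichever output gets hashed or compared:

```python
import re

def normalize_empty_tags(xml_text):
    # collapse '<ElementName />' to '<ElementName/>' before comparing or hashing
    return re.sub(r'\s+/>', '/>', xml_text)

assert normalize_empty_tags('<Foo />') == '<Foo/>'
```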
0 | 0 | I tried a few python ports of flickr api and they didn't work on my Python here which is 3.2. I also have 2.7. Do you guys know any API that is compatible with the latest Flickr API as well latest update of Python? | true | 8,196,386 | 1.2 | 0 | 0 | 2 | I could run Python Flickr API (http://stuvel.eu/flickrapi) on Python2.7. It's not possible to run it on Python3 (according to its author). | 0 | 888 | 0 | 3 | 2011-11-19T18:50:00.000 | python,flickr | How to access Flickr API using Python? | 1 | 1 | 2 | 8,753,955 | 0 |
0 | 0 | I am using couchdb to store twitter data. I found that couchdb stops updating its data base though I keep getting the twitter data. I basically store the dictionary that contains twitter data by using the python couchdb save method, db.save(twitter_dic) where db is the database instance. I find that some times I get 3GB of data and couchdb stops storing, sometimes it stops storing even when it reaches 0.6GB. I don't know what is the reason. If some one have come across similar situation please help me out. If this problem cannot be solved I would look forward to use some other key-value data base where python is used as wrapper to store the values (Very similar to CouchDB) where I can do map reduce etc, can some one provide me such a database? | true | 8,207,433 | 1.2 | 0 | 0 | 0 | I had to re install couchdb and I am marking this question accepted. | 0 | 140 | 0 | 0 | 2011-11-21T04:52:00.000 | python,database,couchdb | python couchdb data collection stops | 1 | 1 | 1 | 12,101,359 | 0 |
0 | 0 | I'm working on an online server, and I need all my lists and dicts saved. What would be the best and quickest way to approach this?
I tried importing the data, and it works to load the data. But how can I update the imported file? | true | 8,207,512 | 1.2 | 0 | 0 | 1 | I think you can use the pickle/cPickle modules to save and load the data; they are built-in and easy to use.
I am not sure what you mean by updating the imported file; what about writing the content back to the file after updating it in the program? | 1 | 98 | 0 | 1 | 2011-11-21T05:05:00.000 | python,storage,pickle | Saving to external PY file? | 1 | 1 | 1 | 8,207,550 | 0
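A minimal pickle round-trip, rewriting the file whenever the in-memory objects change (file name and data are arbitrary):

```python
import pickle

state = {'scores': [10, 20], 'players': {'alice': 1}}   # example data

# save: overwrite the file with the current objects
with open('state.pkl', 'wb') as f:
    pickle.dump(state, f)

# load: restore the objects on the next run
with open('state.pkl', 'rb') as f:
    state = pickle.load(f)
```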
0 | 0 | I'm using a class RequestHandler(SocketServer.BaseRequestHandler)to handle incoming connections to a server.
I am trying to get the name of the client which is stored as an attribute that sends data to this server, but right now I can only get it by asking for self.client_address which returns a tuple like Name of client that sent request: ('127.0.0.1', 57547).
Is there a way to ask for an attribute of the object that initiated the connection? | true | 8,219,865 | 1.2 | 0 | 0 | 1 | No. You would have to send that name over the communication channel. | 0 | 1,312 | 0 | 1 | 2011-11-21T23:11:00.000 | python,sockets,socketserver | How can I get a client's Name attribute with SocketServer in python? | 1 | 1 | 1 | 8,220,008 | 0 |
0 | 0 | I am doing a multiclient chat server program in twisted python. Is there any function in twisted python similar to 'select' in socket programming? Can anybody give me the answer please? If yes, please tell me the implementation. | false | 8,280,373 | 0.197375 | 0 | 0 | 1 | No, there isn't.
Twisted calls select (or something like it) for you.
You don't ever need to call a function like select; just let the reactor run, and do its work for you. | 0 | 213 | 0 | 2 | 2011-11-26T17:53:00.000 | python,twisted | Is there any function in twisted python similar to 'select' in socket programming? | 1 | 1 | 1 | 8,282,424 | 0 |
0 | 0 | What is the most efficient way to get all of the external ip addresses of a machine with multiple nics, using python? I understand that an external server is needed (I have one available) but am unable to find a good way to specify the nic to use for the connection (so I can use a for loop to iterate through the various nics). Any advice about the best way to go about this? | false | 8,281,371 | 0.039979 | 0 | 0 | 1 | For the general case, there is no solution. Consider the case where the local machine is behind a machine with two IP addresses doing NAT. You can change your local interface to anything you like, but it's unlikely that you'll convince the NAT machine to make a different routing decision on its outgoing connections. | 0 | 5,595 | 0 | 5 | 2011-11-26T20:25:00.000 | python,networking,ip,nic | Python, How to get all external ip addresses with multiple NICs | 1 | 1 | 5 | 8,282,345 | 0
1 | 0 | I am working with a Python function that sends mails wich include an attachment and a HTML message......I want to add an image on the HTML message using
<img src="XXXX">
When I try it, the message respects the tag, but does not display the image I want (it displays the not found image "X").....
does anyone know if this is a problem with the MIME thing....because i am using the MIMEMultipart('Mixed').....
or it is a problem with the path of the image (I'm using the same path for the atachment file and there is no problem with it)....
I dont know what else could it be!!
thanks a lot!! | false | 8,301,501 | 0.099668 | 1 | 0 | 1 | You need to write src="cid:ContentId" to refer to an attached image, where ContentId is the ID of the MIME part. | 0 | 431 | 0 | 0 | 2011-11-28T19:53:00.000 | python,html,mime-types,sendmail,mime-message | Image in the HTML email message | 1 | 2 | 2 | 8,301,559 | 0 |
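A hedged sketch of that cid: technique with the standard email package (file name and content ID are placeholders):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.image import MIMEImage

msg = MIMEMultipart('related')
msg.attach(MIMEText('<p>Logo: <img src="cid:logo123"></p>', 'html'))

with open('logo.png', 'rb') as f:                  # placeholder image file
    img = MIMEImage(f.read())
img.add_header('Content-ID', '<logo123>')          # matches the cid: reference above
msg.attach(img)
```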
1 | 0 | I am working with a Python function that sends mails wich include an attachment and a HTML message......I want to add an image on the HTML message using
<img src="XXXX">
When I try it, the message respects the tag, but does not display the image I want (it displays the not found image "X").....
does anyone know if this is a problem with the MIME thing....because i am using the MIMEMultipart('Mixed').....
or it is a problem with the path of the image (I'm using the same path for the atachment file and there is no problem with it)....
I dont know what else could it be!!
thanks a lot!! | true | 8,301,501 | 1.2 | 1 | 0 | 1 | In your html you need the fully qualified path to the image: http://yourdomain.com/images/image.jpg
You should be able to take the URL in the image tag, paste it into the browser's address bar and view it there. If you can't see it, you've got the wrong path. | 0 | 431 | 0 | 0 | 2011-11-28T19:53:00.000 | python,html,mime-types,sendmail,mime-message | Image in the HTML email message | 1 | 2 | 2 | 8,301,539 | 0 |
1 | 0 | I'm working on a project where I need to develop a web service (in Java) that gets one simple number from a CORBA Python implementation... how can I proceed with this?
I'm using omniORB and have already written the server.py that generates one simple number!
thx a lot | false | 8,310,262 | 0.664037 | 0 | 0 | 4 | You will need a Java CORBA provider - for example IONA or JacORB. Generate the IDL files for your python service and then use whatever IDL -> stub compiler your Java ORB provides to generate the java client-side bindings.
From there it should be as simple as binding to the corbaloc:// at which your python server is running and executing the remote calls from your java stubs.
Of course, CORBA being CORBA, it is likely to require the ritual sacrifice of small mammals and, possibly, lots of candles. | 0 | 375 | 0 | 0 | 2011-11-29T11:55:00.000 | java,python,web-services,corba | Corba python integration with web service java | 1 | 1 | 1 | 8,310,341 | 0 |
1 | 0 | I am using Selenium webdriver in Python for a web-scraping project.
How to print the HTML text of the selenium.WebElement ?
I intend to use the BeautifulSoup to parse the HTML to extract the data of interest.
Thanks | true | 8,316,152 | 1.2 | 0 | 0 | 12 | It's not possible to get the raw HTML from a WebElement.
You can get the page source from the browser object though: browser.page_source. | 0 | 7,324 | 0 | 6 | 2011-11-29T18:54:00.000 | python,selenium,beautifulsoup,web-scraping,urllib2 | Print HTML text of a selenium webelement in Python | 1 | 1 | 1 | 8,316,622 | 0 |
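So the usual pattern is to pull driver.page_source and hand it to BeautifulSoup; a small sketch (URL is a placeholder):

```python
from selenium import webdriver
from bs4 import BeautifulSoup

driver = webdriver.Firefox()
driver.get('http://example.com')                         # placeholder URL
soup = BeautifulSoup(driver.page_source, 'html.parser')  # parse the rendered HTML
links = [a.get('href') for a in soup.find_all('a')]
```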
0 | 0 | I am trying to use the OAuth of a website, which requires the signature method to be 'HMAC-SHA1' only.
I am wondering how to implement this in Python? | false | 8,338,661 | 0 | 0 | 0 | 0 | In Python 3.7 there is an optimized way to do this. HMAC(key, msg, digest).digest() uses an optimized C or inline implementation, which is faster for messages that fit into memory. | 0 | 70,277 | 0 | 57 | 2011-12-01T08:55:00.000 | python,oauth,sha1,hmac | Implementation HMAC-SHA1 in python | 1 | 1 | 8 | 62,871,193 | 0 |
0 | 0 | I know how to parse a page using Python. My question is: which is the fastest of all the parsing techniques, and by how much is it faster than the others?
The parsing techniques I know are XPath, DOM, BeautifulSoup, and using the find method of Python. | false | 8,342,335 | 0.099668 | 0 | 0 | 1 | lxml is written in C, so if you are on x86 it is the best choice.
If we are talking about techniques, there is no big difference between XPath and DOM - both are very fast methods. But if you use find or findAll in BeautifulSoup it will be slower than the others. BeautifulSoup is written in Python; the library needs a lot of memory to parse data and, of course, it uses the standard search methods from the Python libraries. | 0 | 4,678 | 0 | 6 | 2011-12-01T13:45:00.000 | python,dom,xpath,html-parsing,lxml | Xpath vs DOM vs BeautifulSoup vs lxml vs other Which is the fastest approach to parse a webpage? | 1 | 1 | 2 | 8,342,483 | 0
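For reference, a minimal lxml + XPath sketch along the lines the answer recommends:

```python
import lxml.html

html = '<html><body><a href="/a">A</a><a href="/b">B</a></body></html>'
doc = lxml.html.fromstring(html)
hrefs = doc.xpath('//a/@href')     # fast, backed by libxml2 (C)
print(hrefs)                       # ['/a', '/b']
```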
0 | 0 | I wrote an API which which does some database operations with values requested by the API caller. How does this whole API system work when more than on person calls a function at the same time?
Do different instances of my API code start when a number of API calls are made?
If I need to handle like 2500 parallel API calls, what exact precaution (like paying attention to database load) do I need to take? | false | 8,344,728 | 0 | 0 | 0 | 0 | Do you plan to call your python API from some other python code? If so then how is the parallelism achieved? Do you plan to spawn many threads, use your api in every thread?
Anyway, it's worthwhile to take a look at the multiprocessing module, which allows one to create separate python processes. There are lots of threading modules that allow you to parallelize code execution within the same process. But keep in mind that the latter case is subject to the Global Interpreter Lock - google for more info. | 1 | 177 | 0 | 0 | 2011-12-01T16:26:00.000 | python,api,parallel-processing | How to handle parallel calls to a Python API? | 1 | 1 | 1 | 8,345,037 | 0
0 | 0 | After running for a number of hours on Linux, my Python 2.6 program that uses urllib2, httplib and threads, starts raising this error for every request:
<class 'urllib2.URLError'> URLError(gaierror(-3, 'Temporary failure in name resolution'),)
If I restart the program it starts working again. My guess is some kind of resource exhaustion but I don't know how to check for it. How do I diagnose and fix the problem? | true | 8,356,517 | 1.2 | 0 | 0 | 18 | This was caused by a library's failure to close connections, leading to a large number of connections stuck in a CLOSE_WAIT state. Eventually this causes the 'Temporary failure in name resolution' error due to resource exhaustion. | 1 | 18,882 | 0 | 21 | 2011-12-02T12:45:00.000 | python,urllib2,httplib | Permanent 'Temporary failure in name resolution' after running for a number of hours | 1 | 1 | 2 | 8,376,268 | 0 |
1 | 0 | We have a glossary with up to 2000 terms, where each glossary term may consist of one, two or three words (separated by whitespace or a dash).
Now we are looking for a solution for highlighting all of these terms inside a (longer) HTML document (up to 100 KB of HTML markup) in order to generate a static HTML page with the highlighted terms.
The constraints for a working solution are a large number of glossary terms and long HTML documents... what would be the blueprint for an efficient solution (within Python)?
Right now I am thinking about parsing the HTML document using lxml, iterating over all text nodes and then matching the contents within each text node against all glossary terms.
Client-side (browser) highlighting on the fly is not an option since IE will complain about long running scripts with a script timeout...so unusable for production use.
Any better idea? | false | 8,366,909 | -0.066568 | 0 | 0 | -1 | How about going through each term in the glossary and then, for each term, using regex to find all occurrences in the HTML? You could replace each of those occurrences with the term wrapped in a span with a class "highlighted" that will be styled to have a background color. | 0 | 689 | 0 | 1 | 2011-12-03T09:59:00.000 | javascript,python,highlighting,glossary,glossaries | Highlighting glossary terms inside a HTML document | 1 | 1 | 3 | 8,366,996 | 0 |
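A bare-bones sketch of that regex idea (it ignores the 'only match inside text nodes' subtlety, which the lxml approach in the question would still have to handle):

```python
import re

def highlight(html_text, terms):
    for term in terms:
        pattern = re.compile(r'\b%s\b' % re.escape(term), re.IGNORECASE)
        html_text = pattern.sub(
            lambda m: '<span class="highlighted">%s</span>' % m.group(0),
            html_text)
    return html_text

print(highlight('Latency matters for network latency tests.', ['latency']))
```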
0 | 0 | I have made a simple command-line URL downloader in Python. When the user supplies a URL it reads the file from web and saves it in a string, then saves the string in a file on the computer.
I want to add a progress bar. How should I go about it? | true | 8,397,667 | 1.2 | 0 | 0 | 2 | Figure out the total size of the file you're downloading. This is often present in the HTTP header Content-Length (which is in bytes).
Keep count of the total data downloaded so far.
The amount of the progress bar that should be filled at any moment is given by the formula: (downloaded so far) / (total size) which is a number between 0 and 1, inclusive. | 0 | 428 | 0 | 2 | 2011-12-06T09:26:00.000 | python,command-line,download,progress-bar | How to add a progress bar in a Python command-line URL downloader? | 1 | 1 | 2 | 8,397,748 | 0 |
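A rough Python 2 sketch of that calculation using urllib2 (URL, file name and chunk size are arbitrary):

```python
import sys
import urllib2

url = 'http://example.com/big-file.zip'                     # placeholder URL
response = urllib2.urlopen(url)
total = int(response.info().getheader('Content-Length') or 0)
downloaded = 0

with open('big-file.zip', 'wb') as out:
    while True:
        chunk = response.read(8192)
        if not chunk:
            break
        out.write(chunk)
        downloaded += len(chunk)
        if total:
            done = 50 * downloaded / total                  # 0..50 filled cells
            sys.stdout.write('\r[%s%s] %d%%' % ('#' * done, ' ' * (50 - done),
                                                100 * downloaded / total))
            sys.stdout.flush()
```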
0 | 0 | I'm using urllib to open one site and get some information on it.
Is there a way to "open" this site only to the part I need and discard the rest (discard I mean don't open/load the rest)? | false | 8,409,372 | 0 | 0 | 0 | 0 | You should be able to read(bytes) instead of read(), this will read a number of bytes instead of all of it. Then append to already downloaded bytes, and see if it contains what you're looking for. Then you should be able to stop download with .close(). | 0 | 41 | 0 | 0 | 2011-12-07T01:37:00.000 | python,windows,urllib | Is there a way to use urllib to open one site until a specified object in it? | 1 | 1 | 2 | 8,409,568 | 0 |
0 | 0 | I want to run a selenium tests using python against a server farm that has about 50 web servers. What I have been doing is changing my host file (@ /etc/hosts) to switch to the desired server and run my selenium tests. This manual process can be tedious. Is there a better way to test these individual servers faster?
I've looked at using selenium grid to run these in parallel. But can't find any real life examples using python. The selenium grid demo is vague and the example are mostly in ruby. I don't really care if I do this in sequence (test server a, then test server b) or in parallel. Although parallel would be nice.
I've also looked and nose not sure if this is the right approach either. Of course I could dig deeper.
I've also looked at Sauce, from what I understand this is a paid service and don't want to go down that road.
Any advice or direction would greatly help me. | false | 8,411,129 | 0 | 0 | 0 | 0 | use selenium2(webdriver) python binding ,does it make sense than change the server address in for loop ?
then it run in sequence way. | 0 | 618 | 0 | 0 | 2011-12-07T06:11:00.000 | python,testing,selenium,grid | Running Selenium Tests Against Server Farm using Python | 1 | 1 | 2 | 8,411,384 | 0 |
0 | 0 | Is it possible to upload a file attachment with Selenium in a Python script? | false | 8,428,102 | -0.057081 | 0 | 0 | -2 | It is quite simple: just record it using the Selenium IDE; the upload command works. | 0 | 7,807 | 0 | 3 | 2011-12-08T08:23:00.000 | python,file-upload,selenium,attachment | Upload file with Selenium in Python | 1 | 1 | 7 | 24,584,170 | 0
1 | 0 | I'm using python to develop a web app.
I defined both "get" and "post" method in the same request handler to serve different purpose. That is, I use "get" method to present a form to user, and "post" method to handle the submitted form.
It works fine, but is this approach appropriate? Or should I define get and post separately in different request handlers? Thanks! | true | 8,473,712 | 1.2 | 0 | 0 | 2 | Your approach is appropriate. According to the newest documentation you can even define post and get as plain functions outside the request handler, just like other functions in your module, and that's the way I would choose since it eliminates problems that can happen when instantiating request handlers.
If starting a new app from scratch, I would probably try to put my get and post functions outside the request handler, using the new python 2.7 runtime that, according to the docs, supports that. | 0 | 546 | 0 | 1 | 2011-12-12T11:38:00.000 | python,google-app-engine,webapp2 | Defining both post and get method under same request handler | 1 | 1 | 1 | 8,474,190 | 0
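For reference, roughly what such a combined handler looks like in webapp2 (handler name, route and field names are made up):

```python
import webapp2

class ContactHandler(webapp2.RequestHandler):
    def get(self):
        # present the form
        self.response.write('<form method="post">'
                            '<input name="email"><button>Send</button></form>')

    def post(self):
        # handle the submitted form
        email = self.request.get('email')
        self.response.write('Thanks, %s' % email)

app = webapp2.WSGIApplication([('/contact', ContactHandler)])
```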
1 | 0 | Using ecs.py, I used to be able to get the reviews of a customer with a query like ecs.CustomerContentLookup(customerId, ResponseGroup='CustomerReviews').
This is now deprecated. Is there an alternative?
Thanks. | false | 8,483,263 | -0.197375 | 0 | 0 | -1 | Have you looked at using web page scraping to fetch/get the customer reviews / ratings?
If you know the customer id you should be able to extract all the information you need directly from the Amazon web pages. | 0 | 179 | 0 | 1 | 2011-12-13T01:24:00.000 | python,amazon-web-services | Is there a way to lookup all reviews of a customer in Amazon's new Product Advertising API? | 1 | 1 | 1 | 8,494,166 | 0 |
1 | 0 | How can i pool a connection to XMPP server in django so that it is available across multiple requests. I don't want to connect and authenticate on every request which makes it a bit slow. Is this possible?
EDIT:
I am using the xmpppy python xmpp library | true | 8,500,455 | 1.2 | 0 | 0 | 2 | As xmpppy has its own main loop, I suggest using it in a separate thread or even starting it separately. Actually you do have two separate applications: a website and an xmpp client, and it is normal to run them separately.
In this case you may use different ways to communicate between your applications: pipes between threads and/or processes, tcp or unix sockets, a file queue, various amqp solutions, any persistent storage, even d-bus, etc. But that is a subject for another question, I think. | 0 | 415 | 0 | 0 | 2011-12-14T06:44:00.000 | python,django,xmpp,connection-pooling | Django XMPP Connection pooling | 1 | 1 | 1 | 8,504,436 | 0
1 | 0 | I am using scrapy to crawl a site which seems to be appending random values to the query string at the end of each URL. This is turning the crawl into a sort of an infinite loop.
How do I make scrapy ignore the query-string part of the URLs? | false | 8,567,171 | 1 | 0 | 0 | 13 | There is a function url_query_cleaner in the w3lib.url module (used by scrapy itself) to clean urls, keeping only a list of allowed arguments. | 0 | 13,148 | 0 | 16 | 2011-12-19T20:31:00.000 | python,url,scrapy,web-crawler | How do I remove a query from a url? | 1 | 1 | 6 | 8,620,843 | 0
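A small usage sketch (the parameter names kept here are made up):

```python
from w3lib.url import url_query_cleaner

url = 'http://example.com/item?id=42&session=abc123&rnd=998877'
print(url_query_cleaner(url, ['id']))
# -> http://example.com/item?id=42
```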
0 | 0 | How can I get paramiko to do the equivalent of setting "TCPKeepAlive yes" in ~/.ssh/config? | true | 8,588,506 | 1.2 | 1 | 0 | 2 | Got it: Transport.set_keepalive. Use in conjunction with the timeout argument to SSHClient.connect to set the socket timeout. | 0 | 383 | 0 | 2 | 2011-12-21T10:39:00.000 | python,ssh,paramiko | Can I enable TCPKeepAlive with paramiko? | 1 | 1 | 1 | 8,588,973 | 0 |
0 | 0 | I am new to Python programming and have covered sockets a bit. I couldn't find any simple implementation on the web. Its basic functionality should cover:
simple chat
file sharing
peer lookup
How do I start, and what should be the p2p model? I don't want to use any library such as Twisted, as it is complex. | true | 8,600,796 | 1.2 | 0 | 0 | 1 | You could write the library yourself, if you're willing to work with sockets directly. Have each node contain a list of peers that is regularly updated, and set each node to advertise its presence to a central server. You'd need to look into network traversal algorithms, hash tables, etc but it could be done. As Xavier says, start simple first, and get it working quickly - then add features.
For simplification you could implement manual peering to start with; get file sharing and chat working first, and then add peering/discovery later.
There is quite a bit of work here but it may be more achievable if you've written everything - everything is easier to understand! But, the upside of a library is a lot of the work is done for you. It's a trade-off :) | 1 | 2,394 | 0 | 0 | 2011-12-22T07:47:00.000 | python,p2p | How to build a Python p2p application? | 1 | 2 | 2 | 8,603,939 | 0 |
0 | 0 | I am new to Python programming and have covered sockets a bit. I couldn't find any simple implementation on the web. Its basic functionality should cover:
simple chat
file sharing
peer lookup
How do I start, and what should be the p2p model? I don't want to use any library such as Twisted, as it is complex. | false | 8,600,796 | 0 | 0 | 0 | 0 | for the peer lookup I would begin with a simple central server and for simple chat and file sharing I would use a derivate of HTTP protocol. | 1 | 2,394 | 0 | 0 | 2011-12-22T07:47:00.000 | python,p2p | How to build a Python p2p application? | 1 | 2 | 2 | 8,601,303 | 0 |
0 | 0 | Given an id of any facebook group, using FQL I can fetch all the members of that group, if I am a member of that group. I can also see who of my friends is in the same group as me, that is also not a problem. Now, I need to see of all of the group members, who of them is friends, I mean, if there are 2 group members in the same group as me, but they are not my friends, is there any way to see if those 2 are friends? | false | 8,608,843 | 0.197375 | 0 | 0 | 1 | Without each of those friends giving you access to view who their friends are, it's impossible. If Facebook allowed this to happen without a friend granting you that access, then I'm going go scream from the hills about a HUGE security hole. | 0 | 4,952 | 0 | 0 | 2011-12-22T19:33:00.000 | python,facebook,facebook-fql | How to see mutual friendships in a facebook group? | 1 | 1 | 1 | 8,610,628 | 0 |
1 | 0 | I heard one can load a custom Firefox profile when starting webdriver, but I've yet to find a way to do that. The Python binding documentation doesn't state it very clearly.
I need to start up Firefox without JS because the site I'm testing has a lot of ads injected by Google and some are very slow to load, making the tests slow as well because it waits for all the page objects to finish loading. | false | 8,613,440 | 0 | 0 | 0 | 0 | You can use the -firefoxProfileTemplate command line option when starting the Selenium server. But it seems rather counterproductive to disable javascript when testing how browsers behave on your site (unless your site doesn't have any scripts of its own) - you should rather use adblock, or disable the IP used by Google ads in the hosts file of the Selenium server, or set a custom useragent for Selenium tests and don't load ads based on that. | 0 | 1,288 | 0 | 1 | 2011-12-23T07:40:00.000 | javascript,python,selenium,automated-tests | How to load Firefox with javascript disable when running Selenium Webdriver (Python) tests? | 1 | 1 | 1 | 8,613,669 | 0 |
1 | 0 | I am scraping a webpage using Selenium webdriver in Python
The webpage I am working on, has a form. I am able to fill the form and then I click on the Submit button.
It generates an popup window( Javascript Alert). I am not sure, how to click the popup through webdriver.
Any idea how to do it ?
Thanks | false | 8,631,500 | 0 | 0 | 0 | 0 | that depends on the javascript function that handles the form submission
if there's no such function try to submit the form using post | 0 | 38,288 | 0 | 20 | 2011-12-25T20:50:00.000 | python,selenium,webdriver,web-scraping,alert | Click the javascript popup through webdriver | 1 | 1 | 6 | 8,631,520 | 0 |
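Separate from the POST idea above: if the popup is a plain JavaScript alert, the WebDriver Python bindings can accept it directly. A rough sketch (element id and URL are placeholders; newer bindings spell the accessor driver.switch_to.alert):

```python
from selenium import webdriver

driver = webdriver.Firefox()
driver.get('http://example.com/form')            # placeholder page
driver.find_element_by_id('submit').click()      # placeholder id; this triggers the alert

alert = driver.switch_to_alert()                 # driver.switch_to.alert in newer bindings
print(alert.text)
alert.accept()                                   # clicks OK; alert.dismiss() would cancel
```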
0 | 0 | I have two computers on a network. I'll call the computer I'm working on computer A and the remote computer B. My goal is to send a command to B, gather some information, transfer this information to computer A and use this in some meaningfull way. My current method follows:
Establish a connection to B with paramiko.
Use paramiko to execute a remote command e.g. paramiko.exec_command('python file.py').
Write the information to a file with pickle, and use paramiko.ftp to transfer the file to computer A.
Open this file and parse the information back into a usable form, like a class or dictionary.
This seems very ad-hoc. My question is, is there a better way to transfer data between computers using Python? I know how to transfer files, this is a different question. I want to make an object on B and use it on A. | false | 8,649,405 | 0 | 0 | 0 | 0 | I have been doing something similar on a larger scale (with 15 clients). I use the pexpect module to do essentially ssh commands on the remote machine (computer B). Another module to look into would be the subprocess module. | 0 | 2,292 | 0 | 1 | 2011-12-27T21:20:00.000 | python,rpc | How do I transfer data between two computers ? | 1 | 1 | 3 | 8,649,530 | 0 |
1 | 0 | I'm going to use ftplib to open up an FTP connection to an FTP server provided by the user. The client will be sending FTP commands to my django server via Ajax, which will then be forwarded to the FTP server the user provided. However, I'd like to not have to create a new FTP server connection every time the client sends an FTP command. In other words, I want to keep the FTP connection alive between requests by the client.
How would I do this? Would some sort of comet implementation be best? I was initially planning to use WebSockets until I discovered my host won't allow it. =\ | true | 8,663,391 | 1.2 | 0 | 0 | 0 | You'll need to use a persistent connection framework as what you're trying to achieve really isn't what HTTP was meant for (in the sense that HTTP commands are stateless and independent), and thus not what Django is built for. There are a number of options, but since it seems you are in a restricted environment you'll need to do some research to determine what's best. | 0 | 631 | 0 | 0 | 2011-12-29T02:24:00.000 | python,django,html,ftp | How to maintain server-initiated FTP connection between client requests in Python/Django? | 1 | 2 | 2 | 8,664,646 | 0 |
1 | 0 | I'm going to use ftplib to open up an FTP connection to an FTP server provided by the user. The client will be sending FTP commands to my django server via Ajax, which will then be forwarded to the FTP server the user provided. However, I'd like to not have to create a new FTP server connection every time the client sends an FTP command. In other words, I want to keep the FTP connection alive between requests by the client.
How would I do this? Would some sort of comet implementation be best? I was initially planning to use WebSockets until I discovered my host won't allow it. =\ | false | 8,663,391 | 0 | 0 | 0 | 0 | Switch hosts. Webfaction allows websockets with dedicated IP at around $20 per month. | 0 | 631 | 0 | 0 | 2011-12-29T02:24:00.000 | python,django,html,ftp | How to maintain server-initiated FTP connection between client requests in Python/Django? | 1 | 2 | 2 | 8,665,512 | 0 |
0 | 0 | We use Python 3.x in our projects. But the official client of Protocol Buffers only supports python 2.x.
I don't want to downgrade to python 2.x. | false | 8,663,468 | 1 | 0 | 0 | 6 | The latest version of Google Protocol Buffers (2.6) added Python 3 support. I suggest using that.
EDIT: Nevermind. They lied in their release notes. | 0 | 14,100 | 1 | 26 | 2011-12-29T02:43:00.000 | python,python-3.x,protocol-buffers | Is there any way to access Protocol Buffers with python 3.x? | 1 | 2 | 6 | 26,048,683 | 0 |