Field | Type | Min | Max
Web Development | int64 | 0 | 1
Data Science and Machine Learning | int64 | 0 | 1
Question | stringlengths | 28 | 6.1k
is_accepted | bool | 2 classes
Q_Id | int64 | 337 | 51.9M
Score | float64 | -1 | 1.2
Other | int64 | 0 | 1
Database and SQL | int64 | 0 | 1
Users Score | int64 | -8 | 412
Answer | stringlengths | 14 | 7k
Python Basics and Environment | int64 | 0 | 1
ViewCount | int64 | 13 | 1.34M
System Administration and DevOps | int64 | 0 | 1
Q_Score | int64 | 0 | 1.53k
CreationDate | stringlengths | 23 | 23
Tags | stringlengths | 6 | 90
Title | stringlengths | 15 | 149
Networking and APIs | int64 | 1 | 1
Available Count | int64 | 1 | 12
AnswerCount | int64 | 1 | 28
A_Id | int64 | 635 | 72.5M
GUI and Desktop Applications | int64 | 0 | 1
0
0
I am writing a script in Python, and part of it needs to connect to a remote computer using RDP. Is there a script or an API that I could use to add this function? If not, is there a way to package an RDP application alongside Python and then use a Python script to run it? Any help would be much appreciated. Thanks in advance, Nate
false
21,210,283
0.099668
0
0
1
If you need an interactive window, use the subprocess module to start your rdesktop.exe (or whatever). If you need to run some command automatically, you're probably better off forgetting about RDP and using ssh (with passwordless, passphraseless authentication via RSA or similar), psexec (note that some antivirus programs may dislike psexec, not because it's bad, but because it's infrequently been used by malware for bad purposes) or WinRM (this is what you use in PowerShell; it's like ssh or psexec, except it serializes objects on the sender, transmits, and deserializes back to an object on the recipient). Given a choice among the 3, I'd choose ssh. Cygwin ssh works fine, but there are several other implementations available for Windows. HTH
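A minimal sketch of the subprocess approach mentioned above, assuming a Windows machine with the built-in mstsc.exe client (substitute rdesktop on Linux); the hostname is a placeholder:

import subprocess

host = "remote-host.example.com"  # hypothetical target machine
# Launch an interactive RDP session; this call blocks until the RDP window is closed.
subprocess.call(["mstsc.exe", "/v:{0}".format(host)])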
0
5,376
0
0
2014-01-18T21:45:00.000
python,rdp
RDP script in python?
1
1
2
21,210,586
0
0
0
I am getting many anonymous questions that attack my friendships. Is there a way to get the IP addresses behind these questions with a Python script? I have only slightly more than basic Python knowledge, so you don't need to show me complete code; just 1-5 lines, or just explain the approach. I hope you can help!
false
21,236,742
0.197375
1
0
2
If the IPs are not logged by ask.fm, there is not much you can do about it. And if it's logged, you probably don't need any script to extract it, as it should be presented somewhere along with the questions or separately in some list.
0
1,205
0
0
2014-01-20T14:40:00.000
python,ip
Python Ask.fm IP of Anonymous Questions
1
2
2
21,237,260
0
0
0
I am getting many anonymous questions that attack my friendships. Is there a way to get the IP addresses behind these questions with a Python script? I have only slightly more than basic Python knowledge, so you don't need to show me complete code; just 1-5 lines, or just explain the approach. I hope you can help!
false
21,236,742
0
1
0
0
In addition to @Michael's answer, even if you might be able to get the IP you won't be able to do much. Most of people also use dynamic IP addresses. You may want to contact ask.fm to get more informations, it's very hard they will give you them though.
0
1,205
0
0
2014-01-20T14:40:00.000
python,ip
Python Ask.fm IP of Anonymous Questions
1
2
2
21,237,392
0
1
0
I had read about an app called citycounds.fm, which is no longer active, where they made city-based playlists. Unfortunately, I can't seem to find any way to search for tracks by city in the SoundCloud API documentation. Anyone know if this is possible?
false
21,245,341
0
0
0
0
You can't filter tracks by city. The city is actually stored with the user. So you would have to search for the tracks you want, then perform an additional step to check if the user for each of the tracks is from the city you want. I wanted to do something similar, but too many users do not have their city saved in their profile so the results are very limited.
0
934
0
0
2014-01-20T22:19:00.000
python,api,soundcloud
Is there any way to search for tracks by city in the SoundCloud API?
1
1
1
21,263,821
0
1
0
I am working on some applets, and whenever I try to open them in IE using my Python script, it stops and waits for manual input to enable the ActiveX control. I tried doing it from the IE settings, but I need a command-line way to do it so that I can integrate it into my Python script.
false
21,250,136
0.197375
1
0
1
I found one solution to this. We can make the following registry modification to get applets to run automatically without pop-ups:
C:\Windows\system32>reg add "HKCU\Software\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_LOCALMACHINE_LOCKDOWN" /v iexplore.exe /t REG_DWORD /d 0 /f
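Since the asker wants to drive this from a Python script, one hedged option is to run the same reg add command shown above via subprocess (Windows only, and the shell needs permission to write to HKCU); the key path is copied from the answer:

import subprocess

cmd = [
    "reg", "add",
    r"HKCU\Software\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_LOCALMACHINE_LOCKDOWN",
    "/v", "iexplore.exe", "/t", "REG_DWORD", "/d", "0", "/f",
]
# Raises CalledProcessError if the registry update fails.
subprocess.check_call(cmd)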
0
923
0
1
2014-01-21T05:49:00.000
java,python,internet-explorer,activex
How can I enable activex controls on IE for auto loading of applets
1
1
1
21,260,619
0
1
0
How would I go about turning off SSL when sending data to Google cloud storage? I'm using their apiclient module. The data that I'm putting to the cloud is already encrypted. I'm also trying to put data from AWS to GCS in 512k sized blocks. I'm seeing about 600ms+ in putting just one block. I was thinking if I don't have to set up a secure connection then I can cut down that PUT time a little. The code is server side code that lives on AWS and for some god awful reason my company wants to have two (S3 and GCS) as production storage regions.
false
21,270,951
0.197375
0
0
3
Try to PUT in larger blocks, since latency is probably the gating factor. You can edit the DEFAULT_CHUNK_SIZE in apiclient/http.py as a workaround.
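As an alternative to editing the library file, the upload chunk size can usually be set per request when building the upload object; a rough sketch, assuming an already-authorized JSON API client called service and placeholder bucket/object names:

from apiclient.http import MediaFileUpload  # from google-api-python-client

media = MediaFileUpload("block.bin",                      # placeholder local file
                        mimetype="application/octet-stream",
                        chunksize=4 * 1024 * 1024,        # e.g. 4 MB per chunk
                        resumable=True)
request = service.objects().insert(bucket="my-bucket",    # placeholder bucket
                                   name="block.bin",
                                   media_body=media)
response = None
while response is None:
    status, response = request.next_chunk()               # uploads one chunk per call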
0
1,475
0
3
2014-01-21T23:07:00.000
python,google-cloud-storage
Turn off SSL to Google cloud storage
1
3
3
21,271,776
0
1
0
How would I go about turning off SSL when sending data to Google cloud storage? I'm using their apiclient module. The data that I'm putting to the cloud is already encrypted. I'm also trying to put data from AWS to GCS in 512k sized blocks. I'm seeing about 600ms+ in putting just one block. I was thinking if I don't have to set up a secure connection then I can cut down that PUT time a little. The code is server side code that lives on AWS and for some god awful reason my company wants to have two (S3 and GCS) as production storage regions.
false
21,270,951
0.066568
0
0
1
You should keep SSL. When using OAuth2 (as GCS does), any request may include an http header (access_token) that you don't want third parties to see. Otherwise, hijacking your account would be extremely easy.
0
1,475
0
3
2014-01-21T23:07:00.000
python,google-cloud-storage
Turn off SSL to Google cloud storage
1
3
3
21,678,526
0
1
0
How would I go about turning off SSL when sending data to Google cloud storage? I'm using their apiclient module. The data that I'm putting to the cloud is already encrypted. I'm also trying to put data from AWS to GCS in 512k sized blocks. I'm seeing about 600ms+ in putting just one block. I was thinking if I don't have to set up a secure connection then I can cut down that PUT time a little. The code is server side code that lives on AWS and for some god awful reason my company wants to have two (S3 and GCS) as production storage regions.
true
21,270,951
1.2
0
0
3
apiclient uses the Google Cloud Storage JSON API, which requires HTTPS. Can you say a bit about why you would like to disable SSL? Thanks.
0
1,475
0
3
2014-01-21T23:07:00.000
python,google-cloud-storage
Turn off SSL to Google cloud storage
1
3
3
21,271,640
0
0
0
I am using Python SocketServer to implement a socket server. How can I find out if client used example.com to connect to me, or used x.x.x.x? Actually, I need something like virtual hosts in Apache. Googling didn't come up with any notable result. Thanks
true
21,307,904
1.2
0
0
1
Virtual hosts in Apache work because the HTTP RFC specifies that the client sends the Host header. Unless your client similarly sends the name it used to connect, there is really no way to find this out. The DNS lookup happens separately and resolves a host name to an IP; the IP is then used to connect. – Kinjal Dixit
0
188
0
1
2014-01-23T12:16:00.000
python,sockets
Find host name used to connect to my socket server
1
1
1
27,230,282
0
0
0
I am using Python 2.7 and I need to secure a URL with the SSL protocol (HTTPS). Can we do this in Python 2.7? When I try to import ssl I get the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib64/python2.7/ssl.py", line 60, in <module>
    import _ssl  # if we can't import it, let the error propagate
ImportError: /usr/lib64/python2.7/lib-dynload/_ssl.so: symbol SSLeay_version, version OPENSSL_1.0.1 not defined in file libcrypto.so.10 with link time reference

Please help if anybody knows... Thanks in advance
false
21,335,434
0
0
0
0
I got the answer: I re-installed OpenSSL. This is the command I used in the terminal: sudo yum install openssl-devel
0
2,484
0
0
2014-01-24T14:42:00.000
python-2.7,ssl,https
How to use ssl in python 2.7
1
1
1
21,338,087
0
1
0
I'm developing a multiplayer Android game with push notifications by using Google GCM. My web server has a REST API. Most of the requests sent to this API send a request to Google GCM server to send a notification to the opponent. The thing is on average, a call to my API is ~140 ms long, and ~100 ms is due to the http request sent to Google server. What can I do to speed up this? I was thinking (I have full control of my server, my stack is Bottle/gunicorn/nginx) of creating an independent process with a database that will try to send a queue of GCM requests, but maybe there's a much simpler way to do that directly in bottle or in pure python.
false
21,349,462
0.066568
0
0
1
The problem is that your clients are waiting for your server to send the GCM push notifications. There is no logic to this behavior. You need to change your server-side code to process your API requests, close the connection to your client, and only then send the push notifications.
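One hedged way to decouple the GCM call from the client response in a Bottle app is a small background worker thread fed by a queue; the endpoint URL reflects the GCM service of that era, and the API key and payload fields are placeholders:

import json
import threading
import Queue  # 'queue' on Python 3
import requests

GCM_URL = "https://android.googleapis.com/gcm/send"   # GCM endpoint of that era (assumption)
API_KEY = "..."                                        # placeholder server API key
gcm_queue = Queue.Queue()

def gcm_worker():
    while True:
        payload = gcm_queue.get()
        try:
            requests.post(GCM_URL, data=json.dumps(payload),
                          headers={"Authorization": "key=" + API_KEY,
                                   "Content-Type": "application/json"})
        finally:
            gcm_queue.task_done()

worker = threading.Thread(target=gcm_worker)
worker.daemon = True
worker.start()

# In the Bottle route handler: build and return the API response right away,
# and only enqueue the notification for the worker thread:
# gcm_queue.put({"to": opponent_token, "data": {"event": "your_turn"}})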
0
1,501
0
0
2014-01-25T10:39:00.000
android,python,web,google-cloud-messaging,bottle
Avoiding a 100ms http request to slow down the REST API where it's called from
1
2
3
21,350,980
0
1
0
I'm developing a multiplayer Android game with push notifications by using Google GCM. My web server has a REST API. Most of the requests sent to this API send a request to Google GCM server to send a notification to the opponent. The thing is on average, a call to my API is ~140 ms long, and ~100 ms is due to the http request sent to Google server. What can I do to speed up this? I was thinking (I have full control of my server, my stack is Bottle/gunicorn/nginx) of creating an independent process with a database that will try to send a queue of GCM requests, but maybe there's a much simpler way to do that directly in bottle or in pure python.
false
21,349,462
0
0
0
0
The best thing you can do is make all networking asynchronous, if you don't do this already. The issue is that there will always be users with a slow internet connection, and there isn't a generic way to give them fast internet :/. Other than that, some ideas: send only a few small packets, or one big packet instead of many small ones (that's faster); use UDP over TCP, UDP being connectionless and naturally faster.
0
1,501
0
0
2014-01-25T10:39:00.000
android,python,web,google-cloud-messaging,bottle
Avoiding a 100ms http request to slow down the REST API where it's called from
1
2
3
21,349,485
0
0
0
I'm looking for a good package that can be used to implement a OpenId Connect Provider. I've found one called pyoidc but the documentation around it is not great at all. Can anyone suggest a different package or does any one have an example implementation of pyoidc?
false
21,376,619
0.099668
0
0
2
There are examples in the distribution. Just added another RP example (rp3) which I think should be easier to understand. Also started to add documentation.
0
3,016
0
4
2014-01-27T09:00:00.000
python,python-3.x,oauth-2.0,openid,openid-connect
OpenId Connect Provider Python 3
1
1
4
23,262,902
0
0
0
I have not found a satisfactory answer/tutorial for this, but I'm sure it must be out there. My goal is to access Google Drive programmatically using my credentials. A secondary and lower-priority goal is to do this properly and that means using OAuth rather than ClientLogin. Thus: How do you authenticate with the Google Drive API using your own credentials for your own Google Drive (without creating an application on the Google Developers Console)? All of the documentation assumes an application, but what I'm writing is merely helper scripts in Python 2.7 for my own benefit.
true
21,390,913
1.2
0
0
2
"How do you authenticate with the Google Drive API using your own credentials for your own Google Drive (without creating an application on the Google Developers Console)?" You can't. The premise of OAuth is that the user is granting access to the application, and so the application must be registered. In Google's case, that's the API/Cloud Console. In your case, there is no need to register each application that uses your helper scripts. Just create an app called helper_scripts, embed the client Id in your script source, and then reuse those scripts in as many applications as you like.
0
359
0
0
2014-01-27T20:24:00.000
python,google-drive-api,google-oauth
How to authenticate as myself for Google Drive API?
1
1
1
21,409,448
0
0
0
I'm just starting to explore IAM Roles. So far I launched an instance and created an IAM Role, and everything seems to work as expected. Currently I'm using boto (the Python SDK). What I don't understand: Does boto take care of credential rotation? (For example, imagine I have an instance that should be up for a long time and constantly has to upload keys to an S3 bucket. If the credentials expire, do I need to catch an exception and reconnect, or will boto silently do this for me?) Is it possible to manually trigger IAM to change the credentials on the Role? (I want to do this to test the example above. Or is there an alternative to this test case?)
true
21,408,290
1.2
1
0
2
The boto library does handle credential rotation. Or, rather, AWS rotates the credentials and boto automatically picks up the new credentials. Currently, boto does this by checking the expiration timestamp of the temporary credentials. If the expiration is within 5 minutes of the current time, it will query the metadata service on the instance for the IAM role credentials. The service is responsible for rotating the credentials. I'm not aware of a way to force the service to rotate the credentials but you could probably force boto to look for updated credentials by manually adjusting the expiration timestamp of the current credentials.
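To watch the rotation happen, you can read the same temporary credentials that boto reads from the instance metadata service; a sketch that only works from inside the EC2 instance (urllib2 is the Python 2 module), with the role name discovered at runtime:

import json
import urllib2

base = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
role = urllib2.urlopen(base).read().strip()              # name of the attached IAM role
creds = json.loads(urllib2.urlopen(base + role).read())
# AWS rotates these automatically; re-run later and AccessKeyId/Expiration change.
print(creds["AccessKeyId"])
print(creds["Expiration"])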
0
265
0
3
2014-01-28T14:27:00.000
python,amazon-web-services,amazon-s3,boto,amazon-iam
How to manually change IAM Roles credentials?
1
1
1
21,409,299
0
0
0
I am looking to set up an HTTP server which takes in some input and then the application needs to send multiple HTTP requests at the same time (to another server). What is the best approach for this? If I use the Twisted framework, do I still need to use threading?
false
21,423,798
0
0
0
0
Either Threading or Twisted can do this. ie. If you use twisted, you won't need to use Threading. Keep in mind that some servers have a limit on the number of connections they will allow from a single IP address.
1
2,932
0
2
2014-01-29T06:36:00.000
python,multithreading,thread-safety,twisted
Python: Multiple HTTP requests at the same time
1
1
3
21,423,872
0
1
0
There are multiple mobile apps. I want people using one app to login with their same login credentials into all other apps. What is the best approach to implement this? I'm thinking to create a separate authorization server that will issue tokens/secrets on registering and logins. It will have a validation API that will be used by mobile app servers to validate requests.
false
21,426,024
0
0
0
0
The authentication method in every application connects to the same web service for authentication.
0
386
0
1
2014-01-29T08:44:00.000
python,django,authorization,server-side
One login for multiple products
1
2
3
21,426,172
0
1
0
There are multiple mobile apps. I want people using one app to login with their same login credentials into all other apps. What is the best approach to implement this? I'm thinking to create a separate authorization server that will issue tokens/secrets on registering and logins. It will have a validation API that will be used by mobile app servers to validate requests.
true
21,426,024
1.2
0
0
1
First check whether OAuth could be adapted to this; that would save you a lot of work. Of course all the services and apps would have to talk to some backend server to sync the tokens issued to apps. A half-secure, maybe-abusable alternative: have a symmetrically encrypted cookie that the web pages (and apps?) hold and use for authorization with the different network services, which in turn verify the cookie against an authorization service that knows the passphrase used to encrypt it. I've used the second approach on internal systems, but I am not sure it is advisable to use it in the wild - it may pose some security risks.
0
386
0
1
2014-01-29T08:44:00.000
python,django,authorization,server-side
One login for multiple products
1
2
3
21,427,549
0
0
0
I am planning on doing a bit of home automation. I decided on going with the RPi, because it is cheap, and can connect to the internet wirelessly via a USB dongle. I was planning on controlling the system through a PHP webpage hosted on my webserver. I was wondering if I could make it so that when I click a button on the PHP site, it somehow sends a signal to the raspberry pi and makes it activate a GPIO pin. I realize that it would be easier to host the webpage on the actual Pi itself, but I plan to have multiple Pis and would like to be able to control all of them with one webpage. Thanks In advance
false
21,442,470
0
1
0
0
Use a websocket (e.g., on Node.js) to open a channel of communication between the Raspberry Pi and the web page. Run a socket server on the web server and run clients on your Raspberry Pis. Then create a simple messaging protocol for commands that the web server sends over the websocket and that the Raspberry Pis listen for over the socket. The Pis can even report back when a task has completed successfully. A minimal Pi-side sketch follows below.
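A rough Pi-side sketch of such a client, using a plain TCP socket and a newline-delimited command protocol instead of websockets; the server address, port, pin number, and command names are all placeholders:

import socket
import RPi.GPIO as GPIO   # available on the Raspberry Pi

GPIO.setmode(GPIO.BCM)
GPIO.setup(18, GPIO.OUT)                                  # placeholder output pin

sock = socket.create_connection(("webserver.example.com", 9000))  # placeholder server
buf = ""
while True:
    data = sock.recv(1024)
    if not data:
        break                                             # server closed the connection
    buf += data
    while "\n" in buf:
        command, buf = buf.split("\n", 1)
        if command == "PIN18_ON":
            GPIO.output(18, GPIO.HIGH)
        elif command == "PIN18_OFF":
            GPIO.output(18, GPIO.LOW)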
0
525
0
0
2014-01-29T20:57:00.000
php,python,raspberry-pi,home-automation
Python, PHP: Controlling RPi GPIO from website on a separate server
1
2
2
31,495,240
0
0
0
I am planning on doing a bit of home automation. I decided on going with the RPi, because it is cheap, and can connect to the internet wirelessly via a USB dongle. I was planning on controlling the system through a PHP webpage hosted on my webserver. I was wondering if I could make it so that when I click a button on the PHP site, it somehow sends a signal to the raspberry pi and makes it activate a GPIO pin. I realize that it would be easier to host the webpage on the actual Pi itself, but I plan to have multiple Pis and would like to be able to control all of them with one webpage. Thanks In advance
false
21,442,470
0
1
0
0
I don't think it would be as easy as 'sending a signal' to your Pi. What you could do, however, is set up a MySQL database on the server with your control signals input to the database and have the Pi poll it every so often to check the values. For actually controlling, you would simply use UPDATE statements to set the values. There may be some lag involved, but this depends on your polling rate and network speed.
0
525
0
0
2014-01-29T20:57:00.000
php,python,raspberry-pi,home-automation
Python, PHP: Controlling RPi GPIO from website on a separate server
1
2
2
21,864,500
0
0
0
I need to access my Python programs through an IP address to make them do something on the server. Setting up an Apache server for just one Python script is not a good solution. On the server it works like: python script.py --arg. Now I need something like http://xxx.xxx.xxx.xxx:xxxx/script.py --arg, or something similar. The main idea is to send an argument to the program remotely, without SSH. PS: the main problem with a framework and Python's simple HTTP server was that they were blocked by the firewall.
false
21,509,104
0.099668
1
0
1
With Flask you can do that in about ten lines of code.
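A minimal sketch of that roughly-ten-line Flask app; the script name, argument handling, and port are placeholders, and anyone who can reach the URL can run the script, so add authentication before exposing it:

import subprocess
from flask import Flask, request

app = Flask(__name__)

@app.route("/run")
def run_script():
    arg = request.args.get("arg", "")                      # e.g. /run?arg=--foo
    return subprocess.check_output(["python", "script.py", arg])

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)                     # pick a port the firewall allows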
0
205
0
0
2014-02-02T09:43:00.000
python,webserver
Best way to create simple web server for python files
1
1
2
21,509,226
0
1
0
I am learning endpoints and saw that other Google APIs have this "fields" query attribute. Also it appears in the api explorer. I would like to get a partial response for my api also, but when using the fields selector from the api explorer it is simply ignored by the server. Do I need to implement something in the server side? Haven't found anything in the docs. Any help is welcome.
true
21,516,287
1.2
0
0
0
From what I gather, Google has enabled partial response for their APIs, but has not yet explained how to enable it for custom APIs. I'm assuming if they do let us know, it might entail annotations, and possibly overriding a method or two. I've been looking also, to no avail. I've been looking into this just due to a related question, where I'd like to know how to force the JSON object in the response from my google Endpoint API, to include even the members of the class that are null valued. I was trying to see if anything would be returned if I used a partial response with a field indicated that was null.. would the response have the property at least, or would it still not even exist as a property. Anyway, this lead me into the same research, and I do not believe we can enable partial responses in our own APIs yet.
0
418
1
0
2014-02-02T21:10:00.000
google-app-engine,python-2.7,google-cloud-endpoints
How do you return a Partial response in app engine python endpoints?
1
1
2
23,165,174
0
0
0
I am using buildbot version 0.8.5 and need to send an HTTP post request from it as a step. After searching for it on internet, I found that the latest version 0.8.8 has a step called HTTPStep for doing so. Is there any similar step in the older version? I know it can be done using batch file or python program using urllib2. but is there any other way to do it?
false
21,520,459
-0.099668
1
0
-1
Just my thoughts. As far as I know it is better to use a Python script from a build step: simple and easy to control. The logic being that the entire buildbot runs inside one HTTP connection/session, and sending another HTTP request from within it might cause issues with that connection/session. Also, from the buildbot HTTPStep description, you need to install additional Python packages, which might not be so convenient to do on multiple slaves/masters.
0
268
0
1
2014-02-03T05:39:00.000
python,httprequest,buildbot
Sending http post request in buildbot
1
1
2
22,557,265
0
1
0
I am working to automate retrieving the Order data from the Google Wallet Merchant Center. This data is on the Orders screen and the export is through a button right above the data. Google has said this data is not available to export to a Google Cloud bucket like payments are and this data is not available through a Google API. I'm wondering if anyone has been successful in automating retrieval of this data using an unofficial method such as scraping the site or a separate gem or library? I have done tons of searching and have not seen any solutions.
false
21,611,503
0
0
0
0
analyticsPierce, I've asked the same question and have not received any answers. Here was my question, maybe we can work out a solution somehow. I've just about given up. "HttpWebRequest with Username/Password" on StackOverflow. Trey
0
433
0
1
2014-02-06T18:53:00.000
python,ruby,android-pay
How to automate Google Wallet order export data?
1
1
1
21,741,357
0
1
0
I have managed to record all the links of the website, but I missed some links that only become visible after posting a form (for example, login). What I did was record all the links without logging in, and take the form values. Then I posted the data and recorded the new links, but I still missed the other forms and links that are not reachable from those posted links. Please suggest an efficient algorithm so that I can grab all the links, including the ones behind form posts. Thanks in advance.
false
21,612,246
0
0
0
0
The links in a set of web pages can be seen as a tree graph and hence you could use various tree traversal algorithms like depth first and breadth first search to find all links. The links and related form data can be saved in a queue or stack depending on what traversal algorithm you are using.
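A rough breadth-first sketch of that traversal with a queue and a visited set; form submission is reduced to a comment, and the start URL is a placeholder (urllib2/urlparse are the Python 2 modules):

import re
import urllib2
import urlparse
from collections import deque

def crawl(start_url):
    seen = set([start_url])
    queue = deque([start_url])
    while queue:
        url = queue.popleft()
        html = urllib2.urlopen(url).read()
        # naive link extraction; a fuller crawler would also locate <form> elements
        # here, post their field values, and enqueue the resulting pages
        for href in re.findall(r'href=["\'](.*?)["\']', html):
            link = urlparse.urljoin(url, href)
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return seen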
0
77
0
2
2014-02-06T19:32:00.000
python,algorithm,web,hyperlink,traversal
Algorithm for traversing website including forms
1
1
1
31,978,451
0
1
0
I have a HTML/Javascript file with google's web speech api and I'm doing testing using selenium, however everytime I enter the site the browser requests permission to use my microphone and I have to click on 'ALLOW'. How do I make selenium click on ALLOW automatically ?
true
21,628,904
1.2
0
0
8
@ExperimentsWithCode Thank you for your answer again, I have spent almost the whole day today trying to figure out how to do this and I've also tried your suggestion where you add that flag --disable-user-media-security to chrome, unfortunately it didn't work for me. However I thought of a really simple solution: To automatically click on Allow all I have to do is press TAB key three times and then press enter. And so I have written the program to do that automatically and it WORKS !!! The first TAB pressed when my html page opens directs me to my input box, the second to the address bar and the third on the ALLOW button, then the Enter button is pressed. The python program uses selenium as well as PyWin32 bindings. Thank you for taking your time and trying to help me it is much appreciated.
0
21,421
0
9
2014-02-07T13:23:00.000
google-chrome,python-2.7,selenium-webdriver
Accept permission request in chrome using selenium
1
1
8
21,686,531
0
0
0
I'm working on a little project running RabbitMQ with Python, and I need a way to access the management API and pull stats, jobs, etc. I have tried using pyRabbit, but it doesn't appear to be working and I'm unsure why; hoping better programmers might know. Below I was just following the basic tutorial and readme to perform a very basic task. My server is up, and I'm able to connect outside of Python and pyrabbit fine. I have installed all of the dependencies with no luck, at least I think. Also open to other suggestions for just getting queue size, queues, active clients etc. outside of pyRabbit.

Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.

C:\Users\user>python
Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import nose
>>> import httplib2
>>> import mock
>>> from pyrabbit.api import Client
>>> import pyrabbit
>>> cl = Client('my.ip.com:15672', 'guest', 'guest')
>>> cl.is_alive()
No JSON object could be decoded - (Not found.) ()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python27\lib\site-packages\pyrabbit\api.py", line 48, in wrapper
    if self.has_admin_rights:
  File "C:\Python27\lib\site-packages\pyrabbit\api.py", line 175, in has_admin_right
    whoami = self.get_whoami()
  File "C:\Python27\lib\site-packages\pyrabbit\api.py", line 161, in get_whoami
    whoami = self.http.do_call(path, 'GET')
  File "C:\Python27\lib\site-packages\pyrabbit\http.py", line 112, in do_call
    raise HTTPError(content, resp.status, resp.reason, path, body)
pyrabbit.http.HTTPError: 404 - Object Not Found (None) (whoami) (None)
false
21,639,733
0
1
0
0
I was never able to solve this. But, this forced me to learn what json is, I used simplejson along with httplib2 and it worked like a charm...
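A sketch of that httplib2-plus-JSON approach against the RabbitMQ management API; the host and the default guest credentials are taken from the question:

import httplib2
import simplejson as json   # the stdlib json module works the same way

h = httplib2.Http()
h.add_credentials("guest", "guest")
resp, content = h.request("http://my.ip.com:15672/api/queues", "GET")
for queue in json.loads(content):
    print("%s: %s messages" % (queue["name"], queue.get("messages", 0)))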
0
781
0
0
2014-02-07T23:38:00.000
python,django,rabbitmq
Unable to get pyrabbit to run
1
1
2
21,939,627
0
1
0
Hi everyone, I'm working on a social network project. I'm using Python's Common Gateway Interface (CGI) to write everything: handling the database, AJAX, and so on. I have a question: I heard that the Web Server Gateway Interface (WSGI) is better than CGI and can handle more users and higher traffic, but I have already finished more than half of the project. What should I do now? I don't have much time to go back, either. Is Python CGI really that bad for a large-scale project?
false
21,656,058
0
0
0
0
Go ahead with your work.. It will be fine if you are using Common Gateway Interface but you should have a look at network traffic.
0
227
0
0
2014-02-09T06:17:00.000
python,python-3.x,cgi,mod-wsgi,wsgi
Python cgi or wsgi
1
1
1
21,656,118
0
0
0
I have a bunch of websites, I want to find all the IP addresses that they communicate with while we browse them. For example once we browse Yahoo.com, it contacts several destinations until it is getting loaded. Is there any library in the C++ or Python that can help me? One way that I'm thinking about is to get the HTML file of the website and look for the format "src = http://", but it is not quite correct.
false
21,689,654
0
1
0
0
You can do that with http, urllib or urllib2. You have to look for "src=" (images, flash etc.) and for "href=" (hyperlinks). Why do you think it's not correct?
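A rough sketch of that idea: pull src/href values out of the page, then resolve each hostname; note it only sees statically referenced resources, not anything loaded later by JavaScript. The page URL is a placeholder:

import re
import socket
import urllib2
import urlparse

page_url = "http://www.website.com/"                      # placeholder page
html = urllib2.urlopen(page_url).read()

hosts = set()
for ref in re.findall(r'(?:src|href)=["\'](.*?)["\']', html):
    host = urlparse.urlparse(urlparse.urljoin(page_url, ref)).hostname
    if host:
        hosts.add(host)

for host in sorted(hosts):
    try:
        print("%s -> %s" % (host, socket.gethostbyname(host)))
    except socket.gaierror:
        pass                                               # hostname did not resolve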
0
80
0
1
2014-02-10T22:56:00.000
python,c++,ip,web-crawler,fetch
How to get all IP addresses fetched
1
1
1
21,689,802
0
0
0
I am trying to make a HTTP Request to a JSON API like https://api.github.com using tornado.httpclient and I found that it always responses with FORBIDDEN 403. Simplifying, I make a request using the CLI with: $ python -m tornado.httpclient https://api.github.com getting a tornado.httpclient.HTTPError: HTTP 403: Forbidden. In other hand, if I try to request this URL via browser or a simple $ curl https://api.github.com, the response is 200 OK and the proper JSON file. What is causing this? Should I set some specific Headers on the tornado.httpclient request? What's the difference with a curl request?
false
21,703,829
0
0
0
0
I faced a similar issue, and the problem was with configurable-http-proxy, so I killed its process and restarted jupyterhub: ps aux | grep configurable-http-proxy. If there are any PIDs from the above command, kill them with kill -9 <PID> and restart.
0
756
0
1
2014-02-11T13:55:00.000
python,http,tornado
Simple not-authorized request to a Github API using Tornado httpclient returns Forbidden
1
1
3
47,112,953
0
1
0
Currently I have written a Python script that extracts data from the Flickr site and dumps it as a Python class object and a YAML file. I was planning to turn this into a website: a simple HTML front page that can send a request to the backend, trigger the Python scripts running there, and then parse the response and render it as a table on the HTML page. As I am very new to Python, I am not sure how to plan my project. Are there any suggestions for building this application? Any frameworks or technologies I should use? For example, I guess I should use AJAX on the front end; how about the backend? Thanks in advance for your suggestions!
false
21,704,977
0
0
0
0
you should use Django framework for your app. You can integrate your scripts into Django views. And you can also use the loaddata system in order to insert your yaml data into the database
0
1,253
0
1
2014-02-11T14:44:00.000
python,ajax
Python backend and html/ajax front end, need suggestions for my application
1
1
2
21,705,056
0
0
0
I'm looking to implement (or use a library if one already exists) the Max Flow algorithm on a graph with directed and undirected edges, and visualize it. I am leaning toward JavaScript. I am aware that d3.js and arbor.js allow interactive graph visualization, but is there a recommended way to visualize the actual flow from node to node? This is to demonstrate some concepts in theoretical computer science. The ideal graph would be able to show edge capacities, edge costs (different from capacities), and node names, and edges can be one-way (directed) or two-way (bidirectional, arrows pointing to both nodes, or just no arrows at all. This is not two separate directed edges). Any advice regarding a graph visualization tool - one where you can see the flow going from edge to edge - would be appreciated. Note: I am not opposed to using Python or some other language if someone is aware of a nice framework/library that allows this kind of visualization. Thanks.
false
21,717,013
0
0
0
0
d3 may be the solution to what you're trying to do, but it's good to keep in mind what it is and what it is not. What it is: a very effective tool for creating data-based graphics. What it is not: a graphing library. That being said, you CAN use it for graphs. Most of the graphs that I do in JavaScript are built on d3, but when doing so, expect to write a lot of code for setting up your plots. You can create a flow graph that will show you what you want, but d3 doesn't contain a canned flow graph that you can drop your data into.
0
733
0
0
2014-02-12T01:48:00.000
javascript,python,graph,visualization,network-flow
Max flow visualization with JavaScript api - use d3.js, or something similar?
1
1
2
21,729,708
0
0
0
I'm running a set of tornado instances that handles many requests from a small set of keep-alive connections. When I take down the server for maintenance I want to gracefully close the keep-alive requests so I can take the server down. Is there a way to tell clients "Hey this socket is closing" with Tornado? I looked around and self.finish() just flushes the connection.
false
21,720,346
0.099668
0
0
1
finish() doesn't apply here because a connection in the "keep-alive" state is not associated with a RequestHandler. In general there's nothing you can (or need to) do with a keep-alive connection except close it, since the browser isn't listening for a response. Websockets are another story - in that case you may want to close the connections yourself before shutting down (but don't have to - your clients should be robust against the connection just going away).
0
1,783
1
1
2014-02-12T06:29:00.000
python,sockets,tornado
Close all (keep-alive) socket connections in tornado?
1
1
2
21,735,934
0
1
0
I have a dilemma. I need to read very large XML files from all kinds of sources, so the files are often invalid XML or malformed XML. I still must be able to read the files and extract some info from them. I do need to get tag information, so I need XML parser. Is it possible to use Beautiful Soup to read the data as a stream instead of the whole file into memory? I tried to use ElementTree, but I cannot because it chokes on any malformed XML. If Python is not the best language to use for this project please add your recommendations.
true
21,740,376
1.2
0
0
2
Beautiful Soup has no streaming API that I know of. You have, however, alternatives. The classic approach for parsing large XML streams is using an event-oriented parser, namely SAX. In python, xml.sax.xmlreader. It will not choke with malformed XML. You can avoid erroneous portions of the file and extract information from the rest. SAX, however, is low-level and a bit rough around the edges. In the context of python, it feels terrible. The xml.etree.cElementTree implementation, on the other hand, has a much nicer interface, is pretty fast, and can handle streaming through the iterparse() method. ElementTree is superior, if you can find a way to manage the errors.
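A hedged sketch of the iterparse route, using the plain ElementTree module (cElementTree exposes the same interface); the tag name is a placeholder, and a parse error in a malformed file simply ends the stream after everything already read:

import xml.etree.ElementTree as ET

def stream_records(path, tag="record"):                    # 'record' is a placeholder tag
    try:
        for event, elem in ET.iterparse(path, events=("end",)):
            if elem.tag == tag:
                yield dict(elem.attrib)                    # or whatever fields you need
                elem.clear()                               # keep memory usage flat
    except ET.ParseError as err:
        print("stopped at parse error: %s" % err)          # records before the error were already yielded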
0
1,799
0
3
2014-02-12T21:44:00.000
python,xml
Need to read XML files as a stream using BeautifulSoup in Python
1
1
1
21,740,512
0
1
0
I am trying to target specific CSS elements on a page, but the problem is that they have varying selector names. For instance, input#dp156435476435.textinput.wihtinnextyear.datepicker.hasDatepicker.error. I need to target the CSS because i am specifcally looking for the .error at the end of the element, and that is only in the CSS (testing error validation for fields on a website. I know if I was targeting class/name/href/id/etc, I could use xpath, but I'm not aware of a partial CSS selector in selenium webdriver. Any help would be appreciated, thanks!
false
21,765,396
0
0
0
0
css=span.error -- Error
css=span.warning -- Warning
css=span.critical -- Critical Error
Simply put, the above are the CSS selectors we can use.
0
2,815
0
2
2014-02-13T21:00:00.000
python,selenium-webdriver
Selenium webdriver, Python - target partial CSS selector?
1
1
3
22,903,875
0
0
0
I wish to scrape news articles of the local newspaper. The archive is behind a paywall and I have a paid account, how would I go about automating the input of my credentials?
false
21,807,914
0.099668
0
0
1
You can't access a page behind a paywall directly, because that page requires authentication data such as a session or cookies. So you first have to create and store that data, so that when you request the secured pages you send the required data (including the session/cookie information) as part of the request. To get the authentication data, scrape the login page first: get the session info and cookies from the login page, then submit the login input as a request (GET or POST, depending on the form's action type) to the form's action page. Once you are logged in, store the authentication data and use it to scrape the pages behind the paywall.
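A minimal sketch of that flow with the requests library; the URLs and form field names are placeholders you would take from the site's actual login form:

import requests

session = requests.Session()                               # keeps cookies between requests
login_url = "https://newspaper.example.com/login"          # placeholder
payload = {"username": "me@example.com", "password": "secret"}  # field names vary per site
session.post(login_url, data=payload)

article = session.get("https://newspaper.example.com/archive/2014/01/story")
print(article.status_code)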
0
6,129
0
1
2014-02-16T06:02:00.000
python,web-scraping
How to scrape a website behind a paywall
1
1
2
21,807,979
0
0
0
I am trying to run the IPython notebook, but it does not execute anything or produce any output. It gives an error like this: "A WebSocket connection to could not be established. You will NOT be able to run code. Check your network connection or notebook server configuration." What can I do about that?
false
21,823,306
-0.379949
0
0
-2
This error means that your ipython notebook server is not running. If you are running Ubuntu or OSX, you need to go to the command-line, cd into the directory where your notebook file is, and run ipython notebook. This will start the local notebook webserver and you can then run code inside your notebooks. The error you are getting probably means that you accidentally killed the local webserver that lets the notebooks run.
1
521
0
3
2014-02-17T07:39:00.000
python-3.x,ipython-notebook
How to execute the ipython notebook
1
1
1
21,826,880
0
0
0
We have a REST API as part of which we provide the client with several APIs to draw analytic reports. Some very large queries can take 5 to 10 minutes to complete and can return responses in the 50mb to 150mb range. At the moment, the client is just expected to wait for the response. We are not sure if this is really the best practice or if such complex/large queries & responses should be dealt with in another manner. Any advice on current best practices would be appreciated please? Note: The API will be called by automated processes building large reports, so we are not sure if standard pagination is efficient or desirable.
false
21,824,677
0.197375
0
0
1
If you need to process a long-running task, from the client's point of view it is always better to process it asynchronously, as follows. The client sends a POST request; the server creates a new resource (or can start background processing immediately) and returns HTTP 202 Accepted with a representation of the task (e.g. status, start time, expected end time and the like), along with the task URL in the Content-Location header so that the client can track it. The client can then send a GET request to that URL to get the status, and the server can return the following responses:
Not done yet: the server returns HTTP 200 OK along with the task resource so the client can check the status.
Done: the server returns HTTP 303 See Other and a Location header with the URL of a resource that shows the task results.
Error: the server returns HTTP 200 OK with the task resource describing the error.
A minimal sketch of this pattern follows below.
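A minimal Flask sketch of the pattern described above, with an in-memory dict standing in for a real task queue and store (route names and fields are placeholders):

import uuid
from flask import Flask, jsonify, url_for

app = Flask(__name__)
tasks = {}                                                 # placeholder task store

@app.route("/reports", methods=["POST"])
def start_report():
    task_id = str(uuid.uuid4())
    tasks[task_id] = {"status": "running"}                 # real code would enqueue a worker here
    resp = jsonify(tasks[task_id])
    resp.status_code = 202                                 # Accepted
    resp.headers["Content-Location"] = url_for("report_status", task_id=task_id)
    return resp

@app.route("/reports/<task_id>")
def report_status(task_id):
    return jsonify(tasks.get(task_id, {"status": "unknown"}))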
0
277
0
1
2014-02-17T09:01:00.000
python,rest,bigdata
API Responses with large result sets
1
1
1
21,829,930
0
0
1
In igraph, what's the least CPU-expensive way to find: the two most remote vertices of a graph (in terms of shortest distance from one another)? Unlike the farthest.points() function, which picks the first pair of vertices found with the longest shortest distance if more than one pair exists, I'd like to select this pair at random. Same thing for the closest vertices of a graph. Thanks!
false
21,836,929
0.379949
0
0
2
For the first question, you can find all shortest paths, and then choose between the pairs making up the longest distances. I don't really understand the second question. If you are searching for unweighted paths, then every pair of vertices at both ends of an edge have the minimum distance (1). That is, if you don't consider paths to the vertices themselves, these have length zero, by definition.
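A rough python-igraph sketch of that idea: compute the full shortest-path matrix, then pick uniformly at random among the pairs attaining the maximum distance; the random graph is a placeholder:

import random
from igraph import Graph

g = Graph.Erdos_Renyi(n=50, p=0.1)                         # placeholder graph
d = g.shortest_paths()                                     # matrix of pairwise distances
pairs = [(i, j) for i in range(len(d)) for j in range(i + 1, len(d))
         if d[i][j] != float("inf")]                       # ignore disconnected pairs
longest = max(d[i][j] for i, j in pairs)
most_remote = random.choice([p for p in pairs if d[p[0]][p[1]] == longest])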
0
99
0
0
2014-02-17T18:47:00.000
python,igraph,shortest-path
least cpu-expensive way to find the two most (and least) remote vertices of a graph [igraph]
1
1
1
21,840,597
0
1
0
I am using Robot Framework to test a GUI application. I need to select a span element which is a list box, but I have multiple span elements with the same class ID on the same page. How can I select each span element (list box)? Thanks in advance
false
21,846,978
0
0
0
0
Could you please provide a part of your code you use to get the span element and a part of your GUI application where you are trying to get the element from (HTML, or smth.)?
0
313
0
0
2014-02-18T07:32:00.000
python,selenium,automated-tests,robotframework
How to select a span element which is a list box ,when multiple span elements with same class ID are present in the same page?
1
2
2
22,005,969
0
1
0
I am using Robot Framework to test a GUI application. I need to select a span element which is a list box, but I have multiple span elements with the same class ID on the same page. How can I select each span element (list box)? Thanks in advance
false
21,846,978
0
0
0
0
Selenium provides various ways to locate elements in the page. If you can't use id, consider using CSS or Xpath.
0
313
0
0
2014-02-18T07:32:00.000
python,selenium,automated-tests,robotframework
How to select a span element which is a list box ,when multiple span elements with same class ID are present in the same page?
1
2
2
22,013,855
0
1
0
I have an HTTP API using Flask, and in one particular operation clients use it to retrieve information obtained from a 3rd-party API. The retrieval is done with a Celery task. Usually, my approach would be to accept the client request for that information and return a 303 See Other response with a URI that can be polled for the response once the background job is finished. However, some clients require the operation to be done in a single request. They don't want to poll or follow redirects, which means I have to run the background job synchronously, hold on to the connection until it's finished, and return the result in the same response. I'm aware of Flask streaming, but how do I do such long-polling with Flask?
false
21,868,709
0.197375
0
0
1
Tornado would do the trick. Flask is not designed for asynchronous operation: a Flask instance processes one request at a time in one thread. Therefore, while you hold the connection open, it will not proceed to the next request.
0
2,430
0
0
2014-02-19T00:47:00.000
python,api,flask
Flask request waiting for asynchronous background job
1
1
1
25,578,832
0
1
0
I'm trying to figure out how to parse a website that doesn't have documentation available to explain the query string. I am wondering if there is a way to get all possible valid values for different fields in a query string using Python. For example, let's say I have the current URL that I wish to parse: http://www.website.com/stat?field1=a&field2=b Is there a way to find all of the possible values for field1 that return information? Let's say that field1 of the qs can take either values "a" or "z" and I do not know it can take value "z". Is there a way to figure out that "z" is the only other value that is possible in that field without any prior knowledge?
false
21,871,636
0
0
0
0
It depends on the website itself. If it has other values of field1 or field2, you can only know that by looking into the code or documentation (if available). That's the only accurate way of knowing it. Otherwise, you can try brute forcing (trying every possible alphanumeric value), but that doesn't guarantee anything, and in that case you'll need a way to know which values are valid and which are not. Hardly efficient.
1
299
0
0
2014-02-19T05:13:00.000
python,parsing,query-string
Find all possible query string values with python
1
1
1
21,872,039
0
0
0
I want to write a simple python C/S exec code model, which will send all codes written in client to execute in server. Simply, you can think that I'm using exec(code, globals()) to run remote code. And I meet a problem about namespace : If I import something in a connection, another connection can also use this module. For example, we have two connections: A and B. I import os in connection A, then connection B can use os module also. Question : And what I want is that each connection have its own execute environment, say 'globals'.
true
21,871,784
1.2
0
0
0
Currently, I'm using a crude, brute-force solution: I create a dict for each connection and exec the code in that dict accordingly, i.e. exec code in connection_dict[connection]. Any smarter solution, such as the Python C API? Thanks again!
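A small sketch of that per-connection namespace idea, which keeps one globals dict per connection so an import in one session does not leak into another; the connection ids are placeholders:

namespaces = {}                      # connection id -> private globals dict

def run_remote(conn_id, code):
    ns = namespaces.setdefault(conn_id, {"__builtins__": __builtins__})
    exec(code, ns)                   # statement form on Python 2, function on Python 3
    return ns

run_remote("A", "import os")
run_remote("B", "print('os' in globals())")   # prints False: B does not see A's import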
0
23
0
0
2014-02-19T05:24:00.000
python,namespaces
How to set local namespace for specified connection?
1
1
1
21,871,871
0
0
0
I searched the net but couldn't get anything that works. I am trying to write a python script which will trigger a timer if a particular url is opened in the current browser. How do i obtain the url from the browser.
true
21,872,515
1.2
0
0
1
You cannot do it in a platform-independent way. You need to use pywin32 on Windows (or any other suitable module that provides access to the platform API, for example pywm) to access the browser window (you can get it by window name). After that you should walk its child windows to get to the one that represents the URL string. Finally, you can read its text.
0
477
0
0
2014-02-19T06:12:00.000
python,url
Python: Getting current URL
1
1
1
21,872,656
0
0
0
I'm not sure if the language I'm using makes a difference or not, but for the record it's python (2.7.3). I recently tried to add functionality to a project I forked on GitHub. Specifically, I changed the underlying http request library from httplib2 to requests, so that I could easily add proxies to requests. The resultant function calls changed slightly (more variables passed and in a slightly different order), and the mock unit test calls failed as a result. What's the best approach to resolving this? Is it OK to just jump in and rewrite the unit test so that they pass with the new function calls? Intuitively, that would seem to be undermining the purpose of unit tests somewhat.
true
21,878,696
1.2
1
0
0
The purpose of a unit test is to verify the implementation of a requirement. As any other piece of software, you have to distinguish what the unit test does, how it tests the requirement (roughly speaking its design), and how it is implemented. Unless the requirement itself is changed, the design of the unit test should not be changed. However, it may happen that a change from another requirement impacts its implementation (because of side effect, interface change, etc.). Then according to your process, you may let the new implementation be reviewed to make sure that the change doesn't impact the nature of the test and that the original requirement is still fulfilled.
0
29
0
0
2014-02-19T11:07:00.000
python,unit-testing
Changing unit tests based on added functionality
1
1
1
21,879,161
0
1
0
I have 200,000 URLs that I need to scrape from a website. This website has a very strict scraping policy and you will get blocked if the scraping frequency is 10+/min. So I need to control my pace. And I am thinking about starting a few AWS instances (say 3) to run in parallel. In this way, the estimated time to collect all the data will be:

200,000 URL / (10 URL/min) = 20,000 min (one instance only)
= ~4.6 days (three instances)

which is a legitimate amount of time to get my work done. However, I am thinking about building a framework using boto, where I have a piece of code and a queue of input (a list of URLs) in this case. Meanwhile I also don't want to do any damage to their website, so I only want to scrape during the night and on weekends. So I am thinking that all of this should be controlled from one box, and the code should look similar to this:

class worker(job, queue):
    url = queue.pop()
    aws = new AWSInstance()
    result = aws.scrape(url)
    return result

worker1 = new worker()
worker2 = new worker()
worker3 = new worker()
worker1.start()
worker2.start()
worker3.start()

The code above is totally pseudo, and my idea is to pass the work to AWS. Question: (1) How to use boto to pass the variable/argument to another AWS instance and start a script to work on those variables, and use boto to retrieve the result back to the master box? (2) What is the best way to schedule a job only in a specific time period inside Python code? Say only work from 6:00pm to 6:00am every day... I don't think the Linux crontab will fit my need in this situation. Sorry that my question is rather verbally descriptive and philosophical. Even if you can only offer a hint or throw out some package/library names that meet my need, I will be very grateful!
false
21,892,302
0.197375
0
0
1
(1) "How to use boto to pass the variable/argument to another AWS instance and start a script to work on those variables ... and use boto to retrieve the result back to the master box?"
Use a shared data source, such as DynamoDB, or a messaging framework such as SQS, for both directions.
(2) "What is the best way to schedule a job only in a specific time period inside Python code? Say only work from 6:00pm to 6:00am every day... I don't think the Linux crontab will fit my need in this situation."
I think crontab fits well here.
0
262
0
0
2014-02-19T20:58:00.000
python,amazon-web-services,cron,queue,boto
BOTO distribute scraping tasks among AWS
1
1
1
21,899,718
0
1
0
I have Flask running on my MacBook (10.9.1 if it makes a difference). I have no problem accessing what I have hosted there over my local network, but I'm trying to see if I can access it publicly, for example loading a webpage on my iPhone over its 3G connection. It doesn't appear to be as simple as /index. With my limited knowledge, my public IP seems to be the one for our internet connection rather than my own laptop. Is that what is causing the issue? Appreciate any help!
true
21,905,560
1.2
0
0
0
You need to set your router to forward the relevant port to your laptop.
0
115
0
0
2014-02-20T10:54:00.000
python,flask
Connect to flask over public connection
1
1
1
21,905,637
0
0
0
I'm learning to use sockets in python and something weird is happening. I call socket.connect in a try block, and typically it either completes and I have a new socket connection, or it raises the exception. Sometimes, however, it just hangs. I don't understand why sometimes it returns (even without connecting!) and other times it just hangs. What makes it hang? I am using blocking sockets (non-blocking don't seem to work for connect...), so I've added a timeout, but I'd prefer connect to finish without needing to timeout. Perhaps, when it doesn't hang, it receives a response that tells it the requested ip/port is not available, and when it does hang there is just no response from the other end? I'm on OSX10.8 using python2.7
false
21,921,509
0.099668
0
0
1
A firewall may be the explanation for this unexpected behaviour. Rather than assuming the remote firewall accepts the connection, using a timeout is the best option: making a connection is a swift process and, within a network, it won't take long. So set a proper timeout so that you can tell that the host is either down or dropping packets.
0
2,733
0
1
2014-02-20T22:56:00.000
python,sockets,network-programming
Python socket.connect hangs sometimes
1
2
2
22,111,178
0
0
0
I'm learning to use sockets in python and something weird is happening. I call socket.connect in a try block, and typically it either completes and I have a new socket connection, or it raises the exception. Sometimes, however, it just hangs. I don't understand why sometimes it returns (even without connecting!) and other times it just hangs. What makes it hang? I am using blocking sockets (non-blocking don't seem to work for connect...), so I've added a timeout, but I'd prefer connect to finish without needing to timeout. Perhaps, when it doesn't hang, it receives a response that tells it the requested ip/port is not available, and when it does hang there is just no response from the other end? I'm on OSX10.8 using python2.7
true
21,921,509
1.2
0
0
3
When connect() hangs it is usually because you connect to an address that is behind a firewall and the firewall just drops your packets with no response. It keeps trying to connect for around 2 minutes on Linux and then times out and return an error.
0
2,733
0
1
2014-02-20T22:56:00.000
python,sockets,network-programming
Python socket.connect hangs sometimes
1
2
2
21,921,616
0
1
0
I went through the instructions on for the Google Glass Python Quick Start. I deployed the app and the app supposedly finished deploying successfully. I then went to the main URL for the app and attempted to open the page. The page asked me which Google Account I wanted to use to access the app, and I chose one. It went through some type of redirect and then came back to my app and tried to open up the openauth2callback page at which time nothing else happened. It just stopped on the openauth2callback page and sat there whitescreened. I assume that the app is supposed to look like the sample app that was posted where I should see timeline cards and be able to send messages, but I don't see any of that. I checked my oauth callbacks and they look exactly like the quick start instructions said to make them. What am I missing?
false
21,988,618
0
0
0
0
A couple of things that are standard debugging practices, and you may want to update the original question to clarify: Did OAuth actually fail? What information do you have that it failed? Can you verify from web server logs that the callback URL was hit and that it contained non-error return values? Can you check your web server and app server logs to see if there are any error messages or exceptions logged?
0
48
0
0
2014-02-24T13:02:00.000
python,google-glass
OAuth fails after deploying google glass application
1
1
1
22,016,838
0
1
0
I just decided to start working on a mobile application for fun, but it will require a back-end. So I created an EC2 instance on Amazon Web Services, with an Amazon Linux AMI installed. I also have set up an database instance as well, and inserted some dummy data in there. Now, the next step I want to take is to write an RESTful web service that will run on my server that will interface with my database (which is independent from my server) First question, would this be considered an API? Second, I am doing research to implement this web service in Python, in your opinion are there better choices? Third, if I make a website, would/should it also be able to use this RESTful web service to query data from the database?
false
22,000,918
0.099668
0
0
1
A bit broad really, especially the Python part. Yes, this can be considered an API; think of SOAP and REST services as an API available over the network. The second question is opinion-based and not well suited for discussion here; a guideline is that if it works for you, it is good. And yes, you should use the REST services for the website, otherwise you will duplicate work.
0
81
0
0
2014-02-24T22:55:00.000
python,web-services,api,rest,amazon-web-services
A Few questions on writing a RESTful web service
1
1
2
22,004,122
0
1
0
I have a JavaScript snippet placed on a third-party site, and this JS makes API calls to my server. The JS is publicly available, and the third party cannot save credentials in it. I want to authenticate the API calls before sharing JSON, and I also want to rate-limit them. Anyone have ideas on how I can authenticate the API?
false
22,013,532
0
0
0
0
It all depends on what you're authenticating. If you're authenticating each user that uses your API, you have to do something like the following: Your site has to somehow drop a cookie in that user's browser, Your API needs to support CORS (we use easyXDM.js), somehow upon logging in to their site, their site needs to send the user to your site to have a token passed that authenticates the user against your API (or vice versa, depending on the relationship). If you're just authenticating that a certain site is authorized to use your API, you can issue that site an API key. You check for that API key whenever your API is called. The problem with this approach is that JavaScript is visible to the end user. Anyone who really wants to use your API could simply use the same API key. It's not really authentication without some sort of server to server call. At best, you're simply offering a very weak line of defense against the most obvious of attacks.
0
117
0
1
2014-02-25T11:59:00.000
javascript,jquery,python,api
how to do authentication of rest api from javascript, if javascript is on third party site?
1
1
1
25,426,295
0
0
0
Why does TCP socket.recvfrom() not return the sender address as it does with UDP? When does TCP socket.recv() an empty string? Thanks!
false
22,017,835
0.379949
0
0
2
Why does TCP socket.recvfrom() not return the sender address as it does with UDP? Because once a TCP connection is established that address does not change. That is the address that was passed to connect or received from accept calls. You can also find out the peer's address (if you lost it somehow) with getpeername. When does TCP socket.recv() an empty string? When the peer has closed the connection and no more data will be coming in. You can still send data though because TCP connections can be half-closed.
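A short sketch illustrating both points (the host and request here are placeholders; any reachable TCP service will do):

```python
import socket

# Placeholder host/port; any reachable TCP service will do.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("example.com", 80))

# The peer address of a connected TCP socket never changes,
# so it can be asked for at any time instead of per-recv().
print("peer:", sock.getpeername())

sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
while True:
    data = sock.recv(4096)
    if not data:            # b"" means the peer closed the connection
        print("peer closed the connection")
        break
    print("received %d bytes" % len(data))
sock.close()
```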
0
2,605
0
0
2014-02-25T14:55:00.000
python,tcp,udp,recv,recvfrom
recv() and recvfrom() methods for TCP
1
1
1
22,018,452
0
0
0
When you click on some of the links on this particular page the GET request gets initiated by javascript. In this case it's a file so when you click it webdriver.Firefox makes a dialog window appear that asks you whether you want to download the file or not. Is it possible to capture the GET request directly and save it to disk or otherwise automate the dialog window?
false
22,022,938
0
0
0
0
If it's an OS dialog, no, you can't manipulate it with Selenium, you'd need a library that provides you hooks directly in to the OS. To capture the request, you would either need to use a proxy to capture the traffic and again another interface to interact with the proxy to inspect the request, or you might be able to inject some JS through Selenium that modifies the behavior of the button to return the link to you instead of navigating the browser to it.
0
583
0
0
2014-02-25T18:25:00.000
python,selenium
How to capture GET requests in Selenium initiated via JavaScript?
1
1
1
22,023,178
0
0
0
I'm writing a program which gathers basic CNAME information for given domains. I'm currently using Google's DNS server as the one I'm questioning, but afraid that if I'll send couple of millions DNS lookups this will get me blocked (Don't worry, it's by no means any type of DDOS or anything in that area). I'm wondering 2 things. 1. is it possible to use dnspython package to send requests through proxy servers? this way I can distribute my requests through several proxies. 2. I couldn't find a reference for a similar thing, but is it possible that I'll get blocked for so many DNS lookups? Thanks, Meny
false
22,074,156
0
0
0
0
If Google blocked that number of requests from a given IP address, one has to assume that sending such a number of requests is against their usage policy (and no doubt a form of 'unfair usage'). So hiding your source IP behind proxies is hardly ethical. You could adopt a more ethical approach by: Distributing your requests across a number of public DNS servers (search for 'public DNS servers', there 8 or 9 providers and at least 2 servers per providers), thus reducing the number of request per server. Spread your requests across a reasonable period of time to limit the effect of queries may have on the various providers' DNS servers. Or simply limit your query rate to something reasonable. If your requests cover a number of different domains, perform your own recursive resolution so that the bulk of your requests are targeted against the authoritative servers and not public recursive servers. This way, you would resolve the authoritative servers for a domain against the public servers (i.e. NS queries) but resolve CNAME queries against the authoritative server themselves, thus further spreading load. And there is no such thing as a DNS proxy (other than a DNS server which accepts recursive queries for which it is not authoritative).
0
1,227
1
0
2014-02-27T16:15:00.000
python,proxy,dns,dnspython
Is it possible to use dnspython through proxy?
1
1
1
22,074,588
0
0
0
I am using the NetworkX library for Python in my application that does some graph processing. One task is to call the all_simple_paths() function of NetworkX to give me all non-looping paths in the graph (up to a certain max. length of paths). This is working well. Using this list of all simple paths in the graph the task is now to find a number of n paths out of this list, where each of these n paths should be as different from all other n paths as possible. Or in other words: any two paths from the resulting n paths should have as few common nodes as possible. Or in even other words: each path in the resulting n paths should be as unique as possible. Can you guys think of any (non brute force) algorithm to achieve this?
true
22,096,525
1.2
0
0
0
It depends a lot on your particular needs. There are a few options. Two built-in, and one that requires a bit more work, but might be faster. If what you really want is to find two non-intersecting paths, then you can use a filtered graph - after finding one path, induce a subgraph with the intermediate nodes removed, and find the shortest path in that graph. If you can't guarantee that the paths won't be non-intersecting, then you are back to brute-force. Since paths don't include cycles, and they are simple lists, finding the number of intersecting nodes is as simple as generating sets from the two paths and finding the length of their difference, which is pretty fast. Check all pairs and find the one with the fewest intersection. Which of the above two is faster depends on your particular graph - is it sparse or dense? How many nodes are there? etc. Since all_simple_paths is a generator, you can actually focus that algorithm somewhat. i.e. if you graph the first two paths, and they are completely non-intersecting, then you already have your minimal case and don't need to look at any more. There may be a lot of paths, but you can bound with an upper limit of how many to look at, or a threshold that is allowable (i.e. instead of absolutely 0, if these only have 1 in common, it's good enough, return it), or some other combination that uses both how many paths I've looked at and the current maximum to bound the calculation time. If calculation time is really critical to your algorithm, also consider switching to igraph... networkx is MUCH easier to deal with, and usually performance is 'good enough', but for large brute force algorithms like this, igraph will probably be at least an order of magnitude faster. One last possibility is to avoid using all_simple_paths at all, and use the bfs tree instead. I am not sure if all_simple_paths is BFS, it probably is, but that might give you a better selection of initial paths to look at with the second algorithm, not sure. E.G. if you know that your source node has multiple successors, you may get decent results by just forcing your starting two paths to start with two different successors instead of just from the initial node. Note that this can also bite you too - this greedy algorithm can lead you astray as well, unless your graph is already a good fit for it (which you may already know, or may not).
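As a rough, hedged sketch of the brute-force pairwise comparison described above, for picking the two most different paths (the graph, endpoints and cutoff below are made-up stand-ins):

```python
import itertools
import networkx as nx

# Toy graph and endpoints; in practice these come from the application.
G = nx.karate_club_graph()
source, target = 0, 33

paths = list(nx.all_simple_paths(G, source, target, cutoff=5))

# Brute-force search for the pair of paths sharing the fewest nodes.
best_pair, best_overlap = None, float("inf")
for p1, p2 in itertools.combinations(paths, 2):
    overlap = len(set(p1) & set(p2))
    if overlap < best_overlap:
        best_pair, best_overlap = (p1, p2), overlap
        if overlap == 2:    # only source and target shared: cannot do better
            break
print(best_overlap, best_pair)
```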
0
905
0
0
2014-02-28T13:28:00.000
python,algorithm,graph
How to find n most different paths in a graph?
1
2
2
22,098,160
0
0
0
I am using the NetworkX library for Python in my application that does some graph processing. One task is to call the all_simple_paths() function of NetworkX to give me all non-looping paths in the graph (up to a certain max. length of paths). This is working well. Using this list of all simple paths in the graph the task is now to find a number of n paths out of this list, where each of these n paths should be as different from all other n paths as possible. Or in other words: any two paths from the resulting n paths should have as few common nodes as possible. Or in even other words: each path in the resulting n paths should be as unique as possible. Can you guys think of any (non brute force) algorithm to achieve this?
false
22,096,525
0
0
0
0
You could create a similarity or distance between two paths based on the number of edges that they share. Then apply a clustering algorithm to find n clusters, and pick one representative from each cluster, perhaps in a greedily fashion to minimise (in the case of similarities) edge weights between representatives.
0
905
0
0
2014-02-28T13:28:00.000
python,algorithm,graph
How to find n most different paths in a graph?
1
2
2
22,098,044
0
1
0
I was trying to log into a website that is loaded fully dynamically using dojo.js scripts. For my tests I am using: Selenium 2.40, PhantomJS 1.9.7 (downloaded via npm), Ubuntu 12.04. When I run my script with driver = webdriver.Firefox(), everything works fine: Firefox logs in through the login page /login.do, gets through the authentication page, arrives at the landing page, and everything works perfectly. But I have to run this code on an Ubuntu Server, so I can't use a GUI. When I change to driver = webdriver.PhantomJS(), I end up back at /login.do (checked with print driver.current_url). I have tried using WebDriverWait and nothing happens. Does PhantomJS for Python have an issue with dynamically loading pages? If not, can I use another tool, or better yet, does someone know a book or tutorial for understanding XHR requests so I can do this job with requests and urllib2?
true
22,102,352
1.2
0
0
0
I just discovered that my problem was with an elem.send_keys(Keys.ENTER) line. PhantomJS seems to be very fast, so I had to put a time.sleep of 2 seconds before that line, and now the script works fine. What happened is that the Enter key for login wasn't being pressed properly. Of course time.sleep(2) isn't the best way to solve it; I will change the ENTER statement into a click located with XPath.
0
633
0
2
2014-02-28T17:51:00.000
python,selenium,phantomjs
Selenium phantomjs (python) not redirecting to welcome page after login, page is load dynamically using dojo
1
1
1
22,103,574
0
0
0
I am trying to set the ethernet port pins directly to send High / Low signals to light up four LEDs. Is there any way I can simply connect LEDs to the ethernet cable? What would be the best approach, if using ethernet is not a good option, to light up four LED lights and switch them on/off using a PC. I am planning to create a command-line tool in Python. So far I have gone through a lot of articles about PySerial, LibUSB etc. and have been suggested to use USB to UART converter modules, or USB to RS-232 converters. Actually I don't want any interfaces except the PC port and the LEDs as far as possible. Please suggest!
true
22,111,408
1.2
1
0
1
No, it is not possible. There is no sane way to affect the PHY in PC software.
0
250
0
0
2014-03-01T07:18:00.000
python,usb,ethernet,libusb
Toggle pins of ethernet port to high / low state?
1
1
1
22,111,428
0
0
0
I was trying to download some files from a server, but it came back with an error page saying only links from the internal server are allowed. I was able to download the file with any browser by clicking the link, and I have verified that the link I captured in Python was correct. Is there any way this can be done using Python? I tried urllib, urllib2 and requests, but none of them works. I could use Selenium, but that solution is not elegant.
false
22,121,165
0
0
0
0
When you use your browser, it sends a header known as a User-Agent that identifies it. You need to 'spoof' the user agent from your python script to make it think a human is browsing the website. Set the User-Agent header to that of a common browser, this makes it difficult for the server to detect that you are using a script.
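A minimal sketch of that header-spoofing idea with urllib2 (the URL, file name and User-Agent string are just examples):

```python
import urllib2  # Python 2; on Python 3 use urllib.request instead

url = "http://example.com/files/report.pdf"   # placeholder URL
req = urllib2.Request(url, headers={
    # Pretend to be a common desktop browser.
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1; rv:25.0) Gecko/20100101 Firefox/25.0",
})
data = urllib2.urlopen(req).read()
with open("report.pdf", "wb") as f:
    f.write(data)
```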
0
56
0
0
2014-03-01T22:21:00.000
python
Download files that only allow access from internal server with Python
1
1
2
22,121,194
0
0
0
I have read other Stack Overflow threads on this. Those are older posts; I would like to get the latest update. Is it possible to send multiple commands over a single channel in Paramiko, or is it still not possible? If not, is there any other library which can do the same? Example scenario: automating Cisco router configuration, where the user needs to first enter "config t" before entering the other commands. It's currently not possible in Paramiko. Thanks.
false
22,141,637
0
1
0
0
If you are planning to use the exec_command() method provided by the Paramiko API, you are limited to sending only a single command at a time; as soon as the command has been executed, the channel is closed. Below is an excerpt from the Paramiko API docs for exec_command(self, command): "Execute a command on the server. If the server allows it, the channel will then be directly connected to the stdin, stdout, and stderr of the command being executed. When the command finishes executing, the channel will be closed and can't be reused. You must open a new channel if you wish to execute another command." But since the transport is also a form of socket, you can send commands without using the exec_command() method, using bare-bones socket programming. In case you have a defined set of commands, both pexpect and Exscript can be used, where you read a set of commands from a file and send them across the channel.
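For the multi-command case, a hedged sketch using Paramiko's invoke_shell(), which keeps one interactive channel open (the host, credentials and commands are placeholders):

```python
import time
import paramiko

# Hypothetical device address and credentials.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("192.0.2.1", username="admin", password="secret")

# invoke_shell() gives one long-lived interactive channel, so several
# commands (e.g. "config t" followed by interface settings) share a session.
chan = client.invoke_shell()
for cmd in ["config t", "interface fa0/1", "no shutdown", "end"]:
    chan.send(cmd + "\n")
    time.sleep(1)   # crude pacing; a real script would wait for the prompt
    if chan.recv_ready():
        print(chan.recv(65535).decode(errors="replace"))
client.close()
```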
0
4,222
0
0
2014-03-03T08:11:00.000
python,paramiko
Paramiko - python SSH - multiple command under a single channel
1
1
3
22,160,534
0
0
0
I created a Python script to use Rackspace's API (Pyrax) to handle some image processing. It works perfect locally, but when I upload it to Iron.io worker, it builds but does not import. I am using a Windows 8 pc, but my boss runs OS X and uploading the exact worker package, it works fine. So I'm thinking it's something with Windows 8 but I don't know how to check/fix. I do apologize in advance if I ramble or do not explain things clearly enough but any help would be greatly appreciated. My worker file looks like this: runtime "python" exec "rackspace.py" pip "pyrax" full_remote_build true Then I simply import pyrax in my python file.
false
22,148,917
0
0
0
0
It's difficult to know for sure what's happening without being able to see a traceback. Do you get anything like that which could be used to help figure out what's going on?
0
116
0
1
2014-03-03T14:10:00.000
python,pip,rackspace,iron.io,pyrax
pip "pyrax" dependency with iron worker
1
2
2
22,153,710
0
0
0
I created a Python script to use Rackspace's API (Pyrax) to handle some image processing. It works perfect locally, but when I upload it to Iron.io worker, it builds but does not import. I am using a Windows 8 pc, but my boss runs OS X and uploading the exact worker package, it works fine. So I'm thinking it's something with Windows 8 but I don't know how to check/fix. I do apologize in advance if I ramble or do not explain things clearly enough but any help would be greatly appreciated. My worker file looks like this: runtime "python" exec "rackspace.py" pip "pyrax" full_remote_build true Then I simply import pyrax in my python file.
false
22,148,917
0.197375
0
0
2
I figured out that it was a bad Ruby install. No idea why, but reinstalling it worked.
0
116
0
1
2014-03-03T14:10:00.000
python,pip,rackspace,iron.io,pyrax
pip "pyrax" dependency with iron worker
1
2
2
22,153,804
0
0
0
I'm looking into the new paypal REST api. I want the ability to be able to pay another paypal account, transfer money from my acount to their acount. All the documentation I have seen so far is about charging users. Is paying someone with the REST api possible? Similar to the function of the mass pay api or adaptive payments api.
true
22,159,824
1.2
1
0
1
At this moment, paying another user via API is not possible via REST APIs, so mass pay/Adaptive payments would be the current existing solution. It is likely that this ability will be part of REST in a future release.
0
105
0
1
2014-03-03T23:32:00.000
python,paypal
Paypal REST Api for Paying another paypal account
1
1
1
22,611,605
0
1
0
I am working on creating a web spider in Python. Do I have to worry about permissions from any sites for scanning their content? If so, how do I get those? Thanks in advance.
true
22,165,086
1.2
0
0
0
The site's robots.txt file does spell out its limits. It's better to inform the owner of the site if you are crawling it often, and to read the reserved rights listed at the bottom of the site. It is also a good idea to provide a link to the source of your content.
0
43
0
1
2014-03-04T07:01:00.000
python,web-scraping
Permission to get the source code using spider
1
1
1
22,165,257
0
1
0
How can we traverse back to parent in xpath? I am crawling IMDB, to obtain genre of films, I am using elem = hxs.xpath('//*[@id="titleStoryLine"]/div/h4[text()="Genres:"]') Now,the genres are listed as anchor links, which are siblings to this tag. how can this be achieved?
true
22,177,872
1.2
0
0
2
This will select the parent element of the XPath expression you gave: //*[@id="titleStoryLine"]/div/h4[text()="Genres:"]/..
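A small self-contained example of that parent step with lxml (the HTML snippet is a made-up stand-in for the IMDB markup):

```python
import lxml.html

# Made-up stand-in for the IMDB markup.
html = """
<div id="titleStoryLine">
  <div>
    <h4>Genres:</h4>
    <a href="/genre/drama">Drama</a>
    <a href="/genre/crime">Crime</a>
  </div>
</div>
"""
doc = lxml.html.fromstring(html)

# Step up to the parent of the <h4>, then collect the sibling anchors.
genres = doc.xpath('//*[@id="titleStoryLine"]/div/h4[text()="Genres:"]/../a/text()')
print(genres)   # ['Drama', 'Crime']
```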
0
405
0
2
2014-03-04T16:45:00.000
python,lxml,lxml.html
Traversing back to parent with lxml.html.xpath
1
1
2
22,177,986
0
0
0
I am writing an application that would asynchronously trigger some events. The test looks like this: set everything up, sleep for sometime, check that event has triggered. However because of that waiting the test takes quite a time to run - I'm waiting for about 10 seconds on every test. I feel that my tests are slow - there are other places where I can speed them up, but this sleeping seems the most obvious place to speed it up. What would be the correct way to eliminate that sleep? Is there some way to cheat datetime or something like that? The application is a tornado-based web app and async events are triggered with IOLoop, so I don't have a way to directly trigger it myself. Edit: more details. The test is a kind of integration test, where I am willing to mock the 3rd party code, but don't want to directly trigger my own code. The test is to verify that a certain message is sent using websocket and is processed correctly in the browser. Message is sent after a certain timeout which is started at the moment the client connects to the websocket handler. The timeout value is taken as a difference between datetime.now() at the moment of connection and a value in database. The value is artificially set to be datetime.now() - 5 seconds before using selenium to request the page. Since loading the page requires some time and could be a bit random on different machines I don't think reducing the 5 seconds time gap would be wise. Loading the page after timeout will produce a different result (no websocket message should be sent). So the problem is to somehow force tornado's IOLoop to send the message at any moment after the websocket is connected - if that happened in 0.5 seconds after setting the database value, 4.5 seconds left to wait and I want to try and eliminate that delay. Two obvious places to mock are IOLoop itself and datetime.now(). the question is now which one I should monkey-patch and how.
false
22,180,528
0.099668
1
0
1
If you want to mock sleep then you must not use it directly in your application's code. I would create a class method like System.sleep() and use this in your application. System.sleep() can then be mocked.
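A hedged sketch of that wrapper idea using unittest.mock (Python 3.3+, or the separate mock package on Python 2; the System wrapper and wait_and_report function are hypothetical names):

```python
import time
from unittest import mock   # on Python 2, "import mock" from the mock package

class System(object):
    """Thin wrapper so tests can replace the real delay."""
    @staticmethod
    def sleep(seconds):
        time.sleep(seconds)

def wait_and_report(seconds):
    System.sleep(seconds)
    return "done"

# In a test, patch the wrapper so no real time passes.
with mock.patch.object(System, "sleep", return_value=None) as fake_sleep:
    assert wait_and_report(5) == "done"
    fake_sleep.assert_called_once_with(5)
```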
0
481
0
1
2014-03-04T18:57:00.000
python,tdd,tornado
Mocking "sleep"
1
1
2
22,180,586
0
1
0
iPython was working fine until a few hours ago when I had to do a hard shutdown because I was not able to interrupt my kernel. Now opening any notebook gives me the following error: "WebSocket connection to could not be established. You will NOT be able to run code. Check your network connection or notebook server configuration." I have the latest version of Chrome and I am only trying to access local notebooks. The Javascript console gives me this: Starting WebSockets: [link not allowed by StackOverflow]/737c7279-7fab-467c-9e0f-cba16233e4b5 kernel.js:143 WebSocket connection failed: [link not allowed by StackOverflow]/737c7279-7fab-467c-9e0f-cba16233e4b5 notificationarea.js:129 Resource interpreted as Image but transferred with MIME type image/x-png: "[link not allowed by StackOverflow]/static/components/jquery-ui/themes/smoothness/images/ui-bg_glass_75_dadada_1x400.png". (anonymous function)
false
22,186,057
0
0
0
0
Try reinstalling your iPython server or creating a new profile for the server
0
832
0
6
2014-03-05T00:24:00.000
websocket,ipython,ipython-notebook
iPython notebook Websocket connection cannot be established
1
1
1
26,615,734
0
0
0
I have a task to monitor disk usage and notify a few users when it runs out of space. I wrote python script that checks disk usage. Unfortunately I can't use email notification from the script because company policy does not allow it. My question: Are there any other options that would allow me to notify selected users in my network about particular event i.e. full disk space? I mean some kind of message that will pop-up on the screen or etc. Please keep in mind that I practically don't have any administrative privileges in the network. Thanks
true
22,239,230
1.2
0
0
0
Well... I figured out that in my organization Microsoft Exchange will not allow email initiated from a script except for those originating from the server. I managed to send the email from the server and now I'm all set. Thanks for the suggestions. The ticket can be closed.
0
129
1
0
2014-03-07T00:41:00.000
python,notify
How to notify users in network
1
1
1
27,495,023
0
0
0
In my recent project, I need to use the UDP protocol to transmit data. If I send data by using size = s.sendto(data, (<addr>, <port>)), will the UDP protocol ensure that the data is packed into one UDP packet? If so, will size == len(data) always be True? Is there anything that I misunderstood? More precisely, will 'sendto()' split my data into several smaller chunks and then pack each chunk into UDP packet to transimit?
false
22,292,615
0.099668
0
0
1
The length of a UDP packet is limited. If your data is too large, the return value can't equal the length of the data. There are also other situations, such as an exhausted send buffer or a network fault. The returned size only means the number of bytes that have been handed to the send buffer.
1
2,494
0
3
2014-03-10T05:23:00.000
python,udp,sendto
Python - Is sendto()'s return value useless?
1
1
2
22,292,817
0
0
0
This is the second time today this has happened.. I tried to import requests earlier and I got an Import Error: no module named requests Same thing for serial I googled the crap out of this and nothing I've found works. Any ideas as to what's going on? I'm trying to use pyserial to take input from an arduino
false
22,295,895
0
1
0
0
Are you looking for urllib.request? If you are using Python 2.7, you import urllib and you don't actually use request, because its methods are available directly on the urllib module. So, for instance, urllib.urlopen("http://google.com") will work in Python 2.7.x, whereas urllib.request.urlopen("http://google.com") will work in Python 3.x.
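A small version-agnostic sketch of the difference (the URL is a placeholder):

```python
import sys

url = "http://example.com"

if sys.version_info[0] >= 3:
    from urllib.request import urlopen    # Python 3
else:
    from urllib2 import urlopen           # Python 2

print(len(urlopen(url).read()))
```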
0
2,633
0
1
2014-03-10T09:05:00.000
python,serial-port,pyserial
Python no module named serial / no module named requests
1
1
2
24,922,412
0
0
0
I want to write a client for a protocol where the server sends length-prefixed data. The data requires nontrivial decoding and processing, and at any moment the client only needs the latest available data: if there is more data available, it's desirable to flush the old entries and use only the newest one. This is to avoid situations where the client spends so much time processing data that it starts getting more and more behind the server. I can easily implement reading the length-prefixed data with twisted.protocols.basic.IntNStringReceiver, but how would I check if there is more data available? What I would perhaps like to do is call select on the socket with a zero timeout to see if reading from the socket would block, and if not I'd just skip all decoding and processing. The socket is of course not available in the Protocol.dataReceived method call. Another idea would be to store the data somewhere and start a delay, and if the method is called again before the delay fires, overwrite the data. This imposes a constant delay even in the usual case where there is no more data available. Is there some way to do this that would fit well the Twisted programming model?
true
22,331,935
1.2
0
0
1
There aren't any peek-ahead APIs in Twisted that will let you know if there is data waiting in a buffer somewhere that's about to be delivered. I think your second idea is just fine - as long as you notice that you can pick an arbitrarily small constant delay. For example, you could pick 0 seconds. In practice this introduces a slightly longer delay (unless you have a really fast computer ;) but it's still small enough that you probably won't notice it. It's possibly also worth knowing that Twisted reactors try to interleave processing of time-based events with processing of file descriptor-based events. If you didn't know this then you might suspect that using reactor.callLater(0, f) would call f before any more I/O happens. While there's no guarantee of exactly how events are ordered, all of the reactors that ship with Twisted just go back and forth: process all I/O events, process all time events, repeat. And if you pick only a slightly larger value, perhaps 1/10th of 1 millisecond, then you can pretty much be sure that if dataReceived isn't called again before the timer expires there isn't any more locally received data that is about to be delivered to your application.
0
71
1
0
2014-03-11T17:17:00.000
python,twisted
Skipping stale data in a Twisted protocol handler
1
1
1
22,333,848
0
1
0
I am new to JavaScript. The problem goes as follows: I have 10 div in my html file. I am using links to go from one to another. Now, I want to test a condition which if satisfied (I am using python for this), should redirect me to another div within the same html. But I want that to be automatic. For eg, I am in <div id="table1"> and inside it I am checking a condition. If that is true, I should be redirected automatically to <div id="table3">.Can anyone please help me find the way out?On google,when I am trying to search for it, it is giving me results where I have to click a button for redirection (which will invoke a JS function). But I don't want that. I want it to happen automatically. So, please tell. <div id="table5"> <div style="position:fixed;right:40px;top:65px;"> <a name="t10" href="#t10" onclick='show(10);'><button> GO TO MAIN PAGE </button></a> </div> % if not obj.i: <h2>REDIRECT ME TO id="table3"</h2> % else: <table border=1 cellpadding=7> <tr> <th>Row No. in Sheet</th> <th>Parameters</th> </tr> <tr> <th colspan=2>Total : ${value}</th> </tr> </table> % endif </div>
false
22,348,501
0
0
0
0
I am not going to search out the code for you, but most sites tell you that you need an onclick event because that is what's needed to open a link, and example.com#idOfDiv is the kind of link you would open. However, there is another possibility: find some JavaScript code to determine the position of an element in x and y coordinates. Once you have it, make JS scroll to it ;).
0
68
0
1
2014-03-12T10:25:00.000
javascript,python,html
Get redirected to a div automatically without user's intervention
1
1
2
22,348,762
0
0
0
I have a protobuf string representation for some object (__str___() def). Is it possible easily create that object from that string? Or the only possible way is the self written parser? Default serialization\deserialization is not applicable since the object state modification should be performed outside of the programming scope. And the whole flow is following: get ser representation from network; deser this representation to the object; get the string representation for the object; pass this representation to somebody who wants change some fields (values changed in the string representation). here will be created a separate file with the str representation for all received objects (there will be lots of objects); convert NEW string to the Py object; Ser object; Pass ser message over network.
false
22,355,594
0
0
0
0
It will depend on how you represent the object as str(). If the object is machine parseable, you will have the best luck. If you use json, yaml or xml, you can use built in serialization libraries, otherwise you'll need to roll your own.
0
717
0
0
2014-03-12T15:04:00.000
python,protocol-buffers
Is it possible to create an object from the protobuf str representation in Python
1
1
1
22,355,757
0
0
0
I want to be able to scan a network of servers and match IP addresses to hostnames. I saw a lot of questions about this (with a lot of down votes), but none are exactly what I'm looking for. So I've tried python's socket library socket.gethostbyaddr(ip). But this only returns results if I have a DNS setup or the IP-to-host mapping is in my hosts file. I want to be able to ask a machine for their hostname, rather than querying DNS. How can a query a Linux machine for their hostname? Preferably using python or bash, but other ways are good too.
false
22,358,516
0.53705
0
0
3
You can remotely execute the hostname command on these machines to acquire the hostname.
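One possible sketch using subprocess and ssh, assuming key-based ssh access to the machines (the user name and addresses are placeholders):

```python
import subprocess

def remote_hostname(ip, user="admin"):
    """Ask the machine itself for its hostname over ssh (assumes key-based auth)."""
    out = subprocess.check_output(
        ["ssh", "-o", "ConnectTimeout=5", "%s@%s" % (user, ip), "hostname"])
    return out.strip().decode()

for ip in ["192.0.2.10", "192.0.2.11"]:    # placeholder addresses
    try:
        print(ip, remote_hostname(ip))
    except subprocess.CalledProcessError:
        print(ip, "unreachable or command failed")
```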
0
218
1
1
2014-03-12T16:54:00.000
python,linux,bash
Query machine for hostname
1
1
1
22,358,568
0
0
0
I'd like to make my own crypto currency. I don't want to just recompile the Bitcoin source code and the rename it. I'd like to do it from scratch just to learn more about it. I'm thinking of using Python as the language for the implementation but I heard that in terms of performance Python isn't the best. My question is, would a network written in Python be able to perform well under the possibility of millions of peers (I know it's not going to happen but I'd like to make my network scalable.)
true
22,360,093
1.2
1
0
2
Depends which part is in Python. The network is, by definition, I/O bound. It's unlikely that using Python rather than C/C++/etc. will cause a noticeable performance drop for the client itself. Your choice of cryptographic algorithm will also have a large impact on performance (how quick it is to verify transactions, etc.). Now, as for 'mining' the currency, it would be silly to do that with Python since that's very much a CPU-bound task. In fact, using a GPU which allows for massive parallelism on trivially parallel problems is a much better idea (CUDA or OpenCL work great here).
0
744
0
0
2014-03-12T18:02:00.000
python,networking,p2p,bitcoin,peer
Python Peer to Peer Network
1
2
2
22,360,849
0
0
0
I'd like to make my own crypto currency. I don't want to just recompile the Bitcoin source code and the rename it. I'd like to do it from scratch just to learn more about it. I'm thinking of using Python as the language for the implementation but I heard that in terms of performance Python isn't the best. My question is, would a network written in Python be able to perform well under the possibility of millions of peers (I know it's not going to happen but I'd like to make my network scalable.)
false
22,360,093
0.197375
1
0
2
Nothing beats good ol' C for performance. However, if you plan on parallelising everything for multi-CPU support I would give Haskell a try. It is inherently parallel, so you won't have to put in extra effort for optimizations. You can also do something similar in C with OpenMP and Cilk using pragmas. Good Luck!
0
744
0
0
2014-03-12T18:02:00.000
python,networking,p2p,bitcoin,peer
Python Peer to Peer Network
1
2
2
22,360,906
0
0
0
So, for the sake of understanding, we're going to assume both machines are on IPv4 and behind NAT networks. I'd like to be able to open a socket on both machines and have the machines connect through those sockets (or a similar system). I know NAT punchthrough is required for this, but I'm not sure how NAT punchthrough applies (can a socket that was once connecting now be accepting?). If anyone has worked with NAT punchthrough in Python, I would really appreciate the help.
false
22,396,635
0
0
0
0
It sounds like you need to set up port forwarding -- essentially tell your router to forward calls it receives on a specific port to a service that sits behind it. That's usually done through your router's admin interface.
0
3,618
0
3
2014-03-14T05:10:00.000
python,sockets,networking
Python P2P Networking (NAT Punchtrough)
1
1
3
22,396,729
0
1
0
I'm going to have to code a program in python that retrieves results after filling a web form (which in turn calls different javascript functions), and those results appear in a different frame of the website. I considered using the Selenium web engine, but I was wondering if anyone has any better idea? Thank you Daniel
true
22,407,628
1.2
0
0
0
Yes. Reverse engineer the JavaScript using the Chrome/Firefox console, see what requests it makes, and mimic them in Python using urllib2 or the requests library.
0
41
0
0
2014-03-14T14:20:00.000
javascript,python,forms,web
How do I fill a form in a web site that calls javascript code and retrieve the results from a different frame using python?
1
1
1
22,408,435
0
1
0
I've generated an HTML file that sits on my local disk, and which I can access through my browser. The HTML file is basically a list of links to external websites. The HTML file is generated from a local text file, which is itself a list of links to the remote sites. When I click on one of the links in the HTML document, as well the browser loading the relevant site (in a new tab), I want to remove the site from the list of sites in the local text file. I've looked at Javascript, Flask (Python), and CherryPy (Python), but I'm not sure these are valid solutions. Could someone advise on where I should look next? I'd prefer to do this with Python somehow - because it's what I'm familar with - but I'm open to anything. Note that I'm running on a Linux box.
false
22,413,513
0
0
0
0
There are many ways to do this. Here are the easiest three: (1) use JavaScript; (2) install WampServer or similar and use PHP to modify the file; (3) don't use the browser to delete the link; instead, use a .bat file to open the browser and remove the link from the text file.
0
994
0
0
2014-03-14T18:57:00.000
javascript,python,html,flask
How to manipulate a local text file from an HTML page
1
1
2
22,413,574
0
1
0
I'm currently running an html and jsp file locally and hosting it by running this command through the terminal: python -m SimpleHTTPServer 8888 &. This has been going smoothly, but I recently ran into an issue where I have to include library files (d3, jQuery, ajax, etc.) I've included the following command in my html file <script src="../libs/d3.v3.min.js"> but noticed that it was pulling up a 404 error. I've tried to remedy it with a change in script to : <script src="http://d3js.org/d3.v3.min.js">. But I actually feel that it doesn't go to the root of the problem. Why am I unable to include the files I have in my lib? Edited wording to question thanks for the heads up Amber: The lib file is located one directory up from the html file.
false
22,426,005
0.197375
0
0
1
The SimpleHTTPServer module will only serve things that are within the directory you're telling it to serve and folders beneath that directory, for security reasons. (Otherwise a visitor could ask it for e.g. ../../../../etc/passwd or similar.) If you want to serve scripts and other assets, you'll need to put them in a subfolder of the directory you're running SimpleHTTPServer in.
0
197
0
1
2014-03-15T15:48:00.000
python,d3.js,local
How to include libraries through the python local server
1
1
1
22,430,282
0
0
0
I am using scapy to send data over ICMP. I need to send image and other files over ICMP using scapy. I am able to send simple strings in ICMP. How could I send image and other files ?
false
22,436,119
0
0
0
0
I think you've discovered why the inventors of the Internet developed TCP. There's a limit on the payload size in an ICMP packet. Most networks have a 1500-byte total limit on the size of Ethernet packets, so with a 20-byte IP header and an 8-byte ICMP header, your maximum payload size in a single packet will be 1472 octets. That's not enough for many image files. You need a way of breaking up your image data stream into chunks of about that size, sending them in multiple ICMP packets, and reassembling them into a data stream on the receiver. Considering that there's no guarantee ICMP packets are received in order, and indeed no guarantee that they'll all be received at all, you will need to put some kind of sequence number into the individual packets, so you can reassemble them in order. You may also need some timeout logic so your receiving machine can figure out that a packet was lost and that the image will never be completed. RTP and TCP are two protocols layered on IP that do that. They are documented in precise detail, and you can learn from those documents how to do what you're trying to do and some of the pitfalls and performance implications.
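A hedged Scapy sketch of the chunking-plus-sequence-number idea on the sending side (the destination, chunk size and the use of the ICMP id/seq fields are assumptions, and a matching receiver/reassembler would still be needed):

```python
from scapy.all import IP, ICMP, Raw, send   # sending raw packets needs root privileges

DEST = "192.0.2.1"     # placeholder receiver
CHUNK = 1400           # stay well under the ~1472-byte payload ceiling

with open("photo.jpg", "rb") as f:
    data = f.read()

chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
for seq, chunk in enumerate(chunks):
    # The ICMP seq field carries the ordering information the receiver
    # needs to reassemble the stream; id could identify which file it belongs to.
    send(IP(dst=DEST) / ICMP(type=8, id=1, seq=seq) / Raw(load=chunk), verbose=False)
```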
0
2,190
0
0
2014-03-16T11:17:00.000
python,scapy,icmp,tunneling
How to send an image in ICMP data field using ICMP
1
1
2
22,436,235
0
0
0
I'm working on something that sends data from one program over UDP to another program at a known IP and port. The program at the known IP and port receives the message from the originating IP but thanks to the NAT the port is obscured (to something like 30129). The program at the known IP and port wants to send an acknowledgement and/or info to the querying program. It can send it back to the original IP and the obscured port #. But how will the querying program know what port to monitor to get it back on? Or, is there a way (this is Python) to say "send this out over port 3200 to known IP (1.2.3.4) on port 7000? That way the known IP/port can respond to port 30129, but it'll get redirected to 3200, which the querying program knows to monitor. Any help appreciated. And no, TCP is not an option.
false
22,487,388
0
0
0
0
The simple answer is that you don't care what the "real" (i.e. pre-NAT) port is. Just reply to the NATed address and port the query arrived from and let the NAT handle delivering the result. If you ABSOLUTELY have to know the source UDP port, include that information in your UDP payload, but I strongly recommend against this.
0
369
0
1
2014-03-18T18:00:00.000
python,sockets,udp,port,nat
How to determine outgoing port in Python through NAT
1
2
2
22,487,519
0
0
0
I'm working on something that sends data from one program over UDP to another program at a known IP and port. The program at the known IP and port receives the message from the originating IP but thanks to the NAT the port is obscured (to something like 30129). The program at the known IP and port wants to send an acknowledgement and/or info to the querying program. It can send it back to the original IP and the obscured port #. But how will the querying program know what port to monitor to get it back on? Or, is there a way (this is Python) to say "send this out over port 3200 to known IP (1.2.3.4) on port 7000? That way the known IP/port can respond to port 30129, but it'll get redirected to 3200, which the querying program knows to monitor. Any help appreciated. And no, TCP is not an option.
false
22,487,388
0.099668
0
0
1
Okay, I figured it out - the trick is to use the same sock object to receive that you used to send. At least in initial experiments, that seems to do the trick. Thanks for your help.
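For reference, a minimal sketch of the reuse-the-same-socket approach (the server address and ports are placeholders):

```python
import socket

SERVER = ("203.0.113.5", 7000)    # placeholder server address

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 3200))             # local port the program keeps using

sock.sendto(b"query", SERVER)     # goes out through the NAT

# Reusing the very same socket means the NAT mapping created by the
# sendto() above is still in place, so the reply finds its way back here.
sock.settimeout(5.0)
try:
    reply, addr = sock.recvfrom(4096)
    print("reply from", addr, ":", reply)
except socket.timeout:
    print("no reply")
```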
0
369
0
1
2014-03-18T18:00:00.000
python,sockets,udp,port,nat
How to determine outgoing port in Python through NAT
1
2
2
22,488,054
0
0
0
I am looking for a script which will upload files from a unix server to an HTTP location. I want to use only these modules - cookielib, urllib2, urllib, mimetypes, mimetools, traceback
false
22,490,452
0
0
0
0
I would like to make the question clearer. The HTTP location mentioned is a common shared-folder link which can be used by a number of people by accessing it with a username/password. For example, my target location is http:/commonfolder.mydomain.com/somelocation/345333, and I have username = someuid, password = xxxxx. My source location has a my.txt file containing some data. I want to run a shell or Python script which will transfer the file to the target location mentioned above, so that many people can start using the latest info I am uploading regularly. -shijo
0
62
0
0
2014-03-18T20:35:00.000
python,http
How to upload files to an HTTP location using a python script
1
1
1
22,567,478
0
0
0
I am writing python two scripts using scapy one executed on server side and the other on client side. On client side, the script sends UDP packets to a closed port on server. The aim of my scripts, is to test if client will accept invalid ICMP packets received from server. On server side, I am going to sniff for incoming traffic and respond every UDP packet with an ICMP port unreachable, and everytime I will modify a field in ICMP packet (false value) to test if the packet is received. My question is: when I modify the Raw field (payload) ,is it normal that client will accept this ICMP packet ? I mean there is no control done on Raw field. I hope my question is clear. Thank you very much.
true
22,502,741
1.2
0
0
0
Well, at least for the ID and sequence fields, these can be any 16-bit numbered combination and the kernel will accept the packet and forward it to all registered ICMP socket handlers. But if the checksum field is incorrect, the receiving kernel will not pass the header up to the handlers (it will however to link layer sniffers). Also, from what I tested, if you change the type/code flags to incorrect combinations of known numbers, or numbers undefined by the protocol, the receiving kernel does not pass that to handlers (but it is still seen by link layer sniffers). Note I didn't use scapy, just straight python/socket code, and my system is Linux.
0
609
0
0
2014-03-19T10:22:00.000
python,udp,scapy,icmp
Invalid field Raw in an ICMP destination unreachable (port unreachable) packet
1
1
1
22,513,923
0
0
0
I am using pexpect in Python to receive continuous audio data from an audio input for my home automation project. Is there a way to pause pexpect's use of my audio device? Or can I use the audio device in two separate programs/scripts? What I want to do is: use speech recognition (Julius) to listen for keywords. For more complex commands I want to use Google's Speech to Text API because of its higher accuracy. Both things work perfectly fine separately. My problem is: once the keyword is found, audio data needs to be recorded and sent to the Google API. However, I have only one audio device and it is already used by the speech recognition with Julius. I cannot .close and .spawn the speech recognition, because it takes a long time to load. Is there any chance pexpect can be paused? Or do you guys know any other workaround? Bests, MGG
false
22,516,007
0
0
0
0
A workaround for my problem was the following: using dsnoop for ALSA audio settings in .asoundrc.
0
248
0
0
2014-03-19T19:16:00.000
python,linux,audio,pexpect,julius-speech
python pause pexpect.spawn and its used devices
1
1
1
22,633,327
0
1
0
I have a web application with an LDAP backend, used to read and modify some LDAP attributes. The web application uses SSO (single sign-on) to authenticate the user. How can I bind to LDAP if I only get a username as an attribute from SSO, without asking for the password again (which would make SSO useless)? I use SimpleSAMLphp as the identity provider, and a Python-driven web application for LDAP management.
false
22,517,604
0.197375
0
0
1
Rather than using the user's credentials to bind to LDAP, get an application account at LDAP that has read permissions for the attributes you need on the users within the directory. Then, when you get the username via SSO, you just query LDAP using your application's ID. Make sure you make your application ID's password super strong - 64 chars with a yearly change should be good. Better yet, do certificate-based authn.
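A hedged sketch of that application-account lookup using the ldap3 package (the server name, bind DN, base DN and attributes are all made up for illustration):

```python
from ldap3 import Server, Connection, ALL

# Hypothetical application (service) account and directory layout.
server = Server("ldap.example.com", get_info=ALL)
conn = Connection(server,
                  user="cn=webapp-svc,ou=services,dc=example,dc=com",
                  password="very-long-random-secret",
                  auto_bind=True)

def lookup_user(username):
    # Look up the SSO-provided username using the application's own bind.
    conn.search("ou=people,dc=example,dc=com",
                "(uid=%s)" % username,
                attributes=["cn", "mail", "telephoneNumber"])
    return conn.entries

print(lookup_user("jdoe"))
```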
0
153
0
1
2014-03-19T20:39:00.000
php,python,ldap,single-sign-on,saml
Bind to LDAP after SSO?
1
1
1
22,533,335
0
0
0
I am working on an appliance using an old version of python (2.5.2). I'm working on a script which needs to read a webpage, but I can't access the normal libraries - urllib, urllib2 and requests are not available. How did people collect this in the olden days? I could do a wget/curl from the shell, but I'd prefer to stick to python if possible. I also need to be able to go through a proxy which may force me into system calls.
true
22,535,539
1.2
0
0
1
If you really want to do it old-school entirely within Python but without urllib, then you'll have to use socket and implement a tiny subset of HTTP 1.0 to fetch the page. Jumping through the hoops to get through a proxy will be really painful though. Use wget or curl and save yourself a few days of debugging.
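If the pure-Python route is still preferred, a rough HTTP/1.0 GET over a bare socket might look like this (written Python 2 style to match the 2.5 interpreter; no proxy handling shown):

```python
import socket

def http_get(host, path="/", port=80):
    """Tiny HTTP/1.0 GET using only the socket module (no urllib)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, port))
    s.sendall("GET %s HTTP/1.0\r\nHost: %s\r\n\r\n" % (path, host))
    chunks = []
    while True:
        data = s.recv(4096)
        if not data:
            break
        chunks.append(data)
    s.close()
    return "".join(chunks)   # headers + body; split on "\r\n\r\n" if needed

print(http_get("example.com"))
```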
0
79
0
0
2014-03-20T14:09:00.000
python,web-scraping,python-2.5
collect web page source with python 2.5.2
1
1
1
22,536,352
0
1
0
The scenario is I have multiple local computers running a python application. These are on separate networks waiting for data to be sent to them from a web server. These computers are on networks without a static IP and generally behind firewall and proxy. On the other hand I have web server which gets updates from the user through a form and send the update to the correct local computer. Question What options do I have to enable this. Currently I am sending csv files over ftp to achieve this but this is not real time. The application is built on python and using django for the web part. Appreciate your help
false
22,602,390
0
0
0
0
Sounds like you need a message queue. You would run a separate broker server which is sent tasks by your web app. This could be on the same machine. On your two local machines you would run queue workers which connect to the broker to receive tasks (so no inbound connection required), then notify the broker in real time when they are complete. Examples are RabbitMQ and Oracle Tuxedo. What you choose will depend on your platform & software.
0
63
0
0
2014-03-24T06:24:00.000
python,django,data-binding,architecture
Sync data with Local Computer Architecture
1
1
2
22,629,711
0
0
0
I have followed the wiki and set up everything necessary, but all the images are broken right now. I used the aptitude package manager to install. Here are my configuration files: /etc/default/thumbor # set this to 0 to disable thumbor, remove or set anything else to enable it # you can temporarily override this with # sudo service thumbor start force=1 enabled=1 # Location of the configuration file conffile=/etc/thumbor.conf # Location of the keyfile which contains the signing secret used in URLs #keyfile=/etc/thumbor.key # IP address to bind to. Defaults to all IP addresses # ip=127.0.0.1 # TCP port to bind to. Defaults to port 8888. # multiple instances of thumbor can be started by putting several ports coma separeted # Ex: # port=8888,8889,8890 # or port=8888 #Default /etc/thumbor.conf #!/usr/bin/python # -*- coding: utf-8 -*- # thumbor imaging service # https://github.com/globocom/thumbor/wiki # Licensed under the MIT license: # http://www.opensource.org/licenses/mit-license # Copyright (c) 2011 globo.com timehome@corp.globo.com # the domains that can have their images resized # use an empty list for allow all sources #ALLOWED_SOURCES = ['mydomain.com'] ALLOWED_SOURCES = ['admin.mj.dev', 'mj.dev', 'api.mj.dev', 's3.amazonaws.com'] # the max width of the resized image # use 0 for no max width # if the original image is larger than MAX_WIDTH x MAX_HEIGHT, # it is proportionally resized to MAX_WIDTH x MAX_HEIGHT # MAX_WIDTH = 800 # the max height of the resized image # use 0 for no max height # if the original image is larger than MAX_WIDTH x MAX_HEIGHT, # it is proportionally resized to MAX_WIDTH x MAX_HEIGHT # MAX_HEIGHT = 600 # the quality of the generated image # this option can vary widely between # imaging engines and works only on jpeg images QUALITY = 85 # enable this options to specify client-side cache in seconds MAX_AGE = 24 * 60 * 60 # client-side caching time for temporary images (using queued detectors or after detection errors) MAX_AGE_TEMP_IMAGE = 0 # the way images are to be loaded LOADER = 'thumbor.loaders.http_loader' # maximum size of the source image in Kbytes. # use 0 for no limit. # this is a very important measure to disencourage very # large source images. # THIS ONLY WORKS WITH http_loader. MAX_SOURCE_SIZE = 0 # if you set UPLOAD_ENABLED to True, # a route /upload will be enabled for your thumbor process # You can then do a put to this URL to store the photo # using the specified Storage UPLOAD_ENABLED = False UPLOAD_PHOTO_STORAGE = 'thumbor.storages.file_storage' UPLOAD_PUT_ALLOWED = False UPLOAD_DELETE_ALLOWED = False # how to store the loaded images so we don't have to load # them again with the loader #STORAGE = 'thumbor.storages.redis_storage' #STORAGE = 'thumbor.storages.no_storage' STORAGE = 'thumbor.storages.file_storage' #STORAGE = 'thumbor.storages.mixed_storage' # root path of the file storage FILE_STORAGE_ROOT_PATH = '/var/lib/thumbor/storage' # If you want to cache results, use this options to specify how to cache it # Set Expiration seconds to ZERO if you want them not to expire. 
#RESULT_STORAGE = 'thumbor.result_storages.file_storage' #RESULT_STORAGE_EXPIRATION_SECONDS = 60 * 60 * 24 # one day #RESULT_STORAGE_FILE_STORAGE_ROOT_PATH = '/tmp/thumbor/result_storage' RESULT_STORAGE_STORES_UNSAFE=False # stores the crypto key in each image in the storage # this is VERY useful to allow changing the security key STORES_CRYPTO_KEY_FOR_EACH_IMAGE = True #REDIS_STORAGE_SERVER_HOST = 'localhost' #REDIS_STORAGE_SERVER_PORT = 6379 #REDIS_STORAGE_SERVER_DB = 0 #REDIS_STORAGE_SERVER_PASSWORD = None # imaging engine to use to process images #ENGINE = 'thumbor.engines.graphicsmagick' #ENGINE = 'thumbor.engines.pil' ENGINE = 'thumbor.engines.opencv' # detectors to use to find Focal Points in the image # more about detectors can be found in thumbor's docs # at https://github.com/globocom/thumbor/wiki DETECTORS = [ 'thumbor.detectors.face_detector', 'thumbor.detectors.feature_detector', ] # Redis parameters for queued detectors # REDIS_QUEUE_SERVER_HOST = 'localhost' # REDIS_QUEUE_SERVER_PORT = 6379 # REDIS_QUEUE_SERVER_DB = 0 # REDIS_QUEUE_SERVER_PASSWORD = None # if you use face detection this is the file that # OpenCV will use to find faces. The default should be # fine, so change this at your own peril. # if you set a relative path it will be relative to # the thumbor/detectors/face_detector folder #FACE_DETECTOR_CASCADE_FILE = 'haarcascade_frontalface_alt.xml' # this is the security key used to encrypt/decrypt urls. # make sure this is unique and not well-known # This can be any string of up to 16 characters SECURITY_KEY = "thumbor@musejam@)!$" # if you enable this, the unencryted URL will be available # to users. # IT IS VERY ADVISED TO SET THIS TO False TO STOP OVERLOADING # OF THE SERVER FROM MALICIOUS USERS ALLOW_UNSAFE_URL = False # Mixed storage classes. Change them to the fullname of the # storage you desire for each operation. #MIXED_STORAGE_FILE_STORAGE = 'thumbor.storages.file_storage' #MIXED_STORAGE_CRYPTO_STORAGE = 'thumbor.storages.no_storage' #MIXED_STORAGE_DETECTOR_STORAGE = 'thumbor.storages.no_storage' FILTERS = [ 'thumbor.filters.brightness', 'thumbor.filters.contrast', 'thumbor.filters.rgb', 'thumbor.filters.round_corner', 'thumbor.filters.quality', 'thumbor.filters.noise', 'thumbor.filters.watermark', 'thumbor.filters.equalize', 'thumbor.filters.fill', 'thumbor.filters.sharpen', 'thumbor.filters.strip_icc', 'thumbor.filters.frame', # can only be applied if there are already points for the image being served # this means that either you are using the local face detector or the image # has already went through remote detection # 'thumbor.filters.redeye', URLs for images that I try to load look like this: http://localhost:8888/Q9boJke8j2p2Qtv53Hbz_g1nMZo=/250x250/smart/http://s3.amazonaws.com/our-company/0ea7eeb2979215f35112d2e5753a1ee5.jpg I have also setup a key in /etc/thumbor.key, please let me know if that's necessary to post here.
false
22,609,742
0.197375
0
0
1
You are missing a closing bracket in the filters option in your thumbor.conf. Did you miss it posting here or actually in the thumbor.conf file?
0
3,567
1
0
2014-03-24T12:43:00.000
python,ubuntu-12.04,thumbnails,thumbor
Thumbor installation not working
1
1
1
23,401,532
0
0
0
Is it possible to switch between interfaces in a Python program? I will have an eth0 and a wlan0 connection, each going to a different router. I'm using boto to upload images to an AWS server, and I need to upload using the router with the fast upload speed, while for other downloads I need to use the other interface, which is connected to a router with a fast download speed. If this is possible, how can I do it?
true
22,636,894
1.2
0
0
0
It's possible by using the route add command in Linux.
0
409
0
1
2014-03-25T14:01:00.000
python,router
Use different interfaces (eth0 and wlan0) for sending and receiving in Python program
1
1
1
22,789,614
0
0
0
I have an FTP server that contains all of my tar files; these tar files can be 500 MB+ each and there are many of them. All I need to do is get a single file out of a tar that contains multiple files and is 500 MB+. My initial idea was to download each tar file and extract the single file I needed, but that seems inefficient. I'm using Python as the programming language.
true
22,650,764
1.2
0
0
0
This answer is not specific to Python, because the problem is not specific to Python. In theory you could read only the part of the tar file where your data is. With FTP (and also with Python's ftplib) this is possible by first issuing a REST command to specify the start position in the file, then RETR to start the download of the data; after you have received the amount of data you need, you can close the data connection. However, tar is a file format without a central index: each file in a tar is prefixed with a small header containing its name, size and other information. So to get a specific file you must read the first header, check if it is the matching file, and if it is not, skip over the size of the unwanted file and try the next one. With lots of smaller files in the tar this will be less efficient than downloading the complete file (or at least downloading up to the relevant part; you might parse the file while downloading), because all these new data connections for each read cause lots of overhead. But if you have large files in the tar this might work. You are completely out of luck, though, if it is not a TAR (*.tar) but a TGZ (*.tgz or *.tar.gz) file. These are compressed tar files, and to get any part of the file you would need to decompress everything before it. So in that case there is no way around downloading the file, or at least downloading everything up to the relevant part.
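A hedged ftplib sketch of the REST/RETR idea for reading just a tar member header (the server, credentials and file name are placeholders; a production version would handle the server's aborted-transfer response more carefully):

```python
from ftplib import FTP

# Placeholder server, credentials and file name.
ftp = FTP("ftp.example.com")
ftp.login("user", "password")

def read_range(filename, offset, length):
    """Fetch `length` bytes of `filename` starting at `offset` via REST/RETR."""
    conn = ftp.transfercmd("RETR " + filename, rest=offset)
    data = b""
    while len(data) < length:
        block = conn.recv(min(8192, length - len(data)))
        if not block:
            break
        data += block
    conn.close()
    try:
        ftp.voidresp()        # the server reports the aborted transfer
    except Exception:
        pass
    return data

# A tar member header is 512 bytes; the member name sits in the first 100.
header = read_range("archive.tar", 0, 512)
print(header[:100].rstrip(b"\0"))
```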
0
981
0
0
2014-03-26T03:10:00.000
python,ftp,tar,tarfile
Python: Get single file in a TAR from FTP
1
1
1
22,652,307
0
0
0
Is it possible to telnet to a server and from there telnet to another server in python? Since there is a controller which I telnet into using a username and password, and from the controller command line I need to login as root to run linux command. How would I do that using python? I use the telentlib to login into router controller but from the router controller I need to log in again to get into shell. Is this possible using python? Thanks!
false
22,668,907
0
0
0
0
Have you looked into using expect (there should be a python binding); basically, what I think you want to do is: From your python script, use telnetlib to connect to server A (pass in username/password). Within this "socket", send the remaining commands, e.g. "telnet serverB" and use expect (or some other mechanism) to check that you get back the expected "User:" prompt; if so, send user and then password and then whatever commands, and otherwise handle errors. This should be very much doable and is fairly common with older stuff that doesn't support a cleaner API.
0
1,456
0
2
2014-03-26T17:37:00.000
python,telnet
Telnet from telnet session in python
1
2
3
22,669,160
0
0
0
Is it possible to telnet to a server and from there telnet to another server in python? Since there is a controller which I telnet into using a username and password, and from the controller command line I need to login as root to run linux command. How would I do that using python? I use the telentlib to login into router controller but from the router controller I need to log in again to get into shell. Is this possible using python? Thanks!
false
22,668,907
0.066568
0
0
1
Just checked it with the hardware I have in hand & telnetlib. Saw no problem. When you are connected to the first device just send all the necessary commands using telnet.write('cmd'). It may be sudo su\n, telnet 192.168.0.2\n or whatever else. Telnetlib keeps in mind only its own telnet connection, all secondary connections are handled by the corresponding controllers.
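A hedged telnetlib sketch of that nested login (the addresses, credentials and prompt strings are device-specific placeholders):

```python
import telnetlib

# Placeholder controller address, credentials and prompts.
tn = telnetlib.Telnet("192.0.2.50", 23, timeout=10)
tn.read_until(b"Username:")
tn.write(b"operator\n")
tn.read_until(b"Password:")
tn.write(b"secret\n")
tn.read_until(b"controller>")

# From the controller's own CLI, hop to the second box and log in as root.
tn.write(b"telnet 10.0.0.2\n")
tn.read_until(b"login:")
tn.write(b"root\n")
tn.read_until(b"Password:")
tn.write(b"rootpass\n")
tn.read_until(b"#")

tn.write(b"uname -a\n")
print(tn.read_until(b"#", timeout=5).decode(errors="replace"))
tn.close()
```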
0
1,456
0
2
2014-03-26T17:37:00.000
python,telnet
Telnet from telnet session in python
1
2
3
22,669,249
0
0
0
We have a problem with sending about 1,000 (or even more) 2 MB chunks over the network in the most efficient way. We want to avoid raw sockets (if there is no other option we will use them). So far we have tested: a RabbitMQ client -> server, about 39 s/GB on localhost (very slow); requests client -> Flask server, still about 40 s/GB on localhost; Flask on Tornado, creating a thread for each IO write operation, still 40 s/GB to an SSD; and raw Tornado, still 40 s/GB. We are running out of ideas. The best solution for us would be something lightweight, maybe HTTP.
false
22,697,210
0
1
0
0
If all the files are available at the start, I would zip them into a single file first. It is not about the compression but about the number of files: certain IO operations (open/close and connection setup/teardown) happen once for every file, and you can easily avoid repeating them a thousand times. Compression will help too. As for sockets versus HTTP, it won't matter much once you have a single file (or, technically, a stream).
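A rough sketch of that approach is below (untested; the paths, URL, and upload endpoint are placeholders). It bundles the chunks into a single uncompressed tar with the standard tarfile module and streams it in one requests POST, so the per-file overhead is paid only once; whether the receiving Flask/Tornado endpoint can absorb roughly 2 GB in a single request is a separate question.

```python
import tarfile
import requests  # third-party: pip install requests

# Paths, URL, and file list are assumptions for illustration.
CHUNK_PATHS = ["/data/chunk_%04d.bin" % i for i in range(1000)]
ARCHIVE = "/tmp/chunks.tar"          # use "w:gz" below to also compress
UPLOAD_URL = "http://server.example.com/upload"

# 1) Bundle everything into one archive so open/close and per-request
#    overhead is paid once instead of once per file.
with tarfile.open(ARCHIVE, "w") as tar:
    for path in CHUNK_PATHS:
        tar.add(path, arcname=path.rsplit("/", 1)[-1])

# 2) Stream the single archive in one HTTP request; passing a file
#    object makes requests stream it instead of loading it all into RAM.
with open(ARCHIVE, "rb") as fh:
    resp = requests.post(UPLOAD_URL, data=fh,
                         headers={"Content-Type": "application/x-tar"})
print(resp.status_code)
```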
0
965
0
1
2014-03-27T19:21:00.000
python,upload,flask,rabbitmq,tornado
What's the fastest way to send 1000 2MB's files using Python?
1
1
1
43,458,666
0
0
0
I have the following line of code: xml = BytesIO("<A><B>some text</B></A>") for the file named test.xml. But I would like to have something like xml = "/home/user1/test.xml". How can I use the file location instead of having to put in the file content?
false
22,703,236
0.066568
0
0
1
Exactly as you wrote it: lxml.etree.parse() accepts a filename string and will read the file for you.
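For example (a minimal sketch; the path and tag names simply mirror the ones in the question):

```python
from lxml import etree  # third-party: pip install lxml

# Parse straight from a path instead of an in-memory BytesIO object.
tree = etree.parse("/home/user1/test.xml")
root = tree.getroot()
print(root.tag)             # e.g. "A"
print(root.findtext("B"))   # e.g. "some text"
```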
0
265
0
0
2014-03-28T02:43:00.000
python,lxml
XML file as input
1
1
3
22,703,299
0
0
1
I am using the community module to extract communities from a networkx graph. For the community module, the order in which the nodes are processed makes a difference. I tried setting the seed of the random module to get consistent results, but that is not working. Any idea how to do this? Thanks.
false
22,719,863
0
0
0
0
I had to change the seed inside every class I used.
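For reference, a hedged sketch of the usual first attempt is below. It is not guaranteed to work: depending on the version, python-louvain and networkx may draw randomness from sources other than the random module, which is presumably why the answer above resorts to setting the seed inside each class.

```python
import random
import networkx as nx
import community  # python-louvain package: pip install python-louvain

random.seed(42)                    # seed before building or partitioning the graph
G = nx.karate_club_graph()         # example graph

partition = community.best_partition(G)
print(partition)                   # node -> community id mapping
```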
0
165
0
1
2014-03-28T17:48:00.000
python,networkx
Fix the seed for the community module in Python that uses networkx module
1
1
1
24,051,792
0
0
0
I am building a web crawler in Python using MongoDB to store a queue of all URLs to crawl. I will have several independent workers crawling URLs. Whenever a worker finishes crawling a URL, it will query the MongoDB collection "queue" to get a new URL to crawl. My issue is that, since there will be multiple crawlers, how can I ensure that two crawlers won't query the database at the same time and get the same URL to crawl? Thanks a lot for your help.
true
22,737,982
1.2
0
1
0
Since reads in MongoDB are concurrent, I completely understand what you're saying. Yes, it is possible for two workers to pick the same document, amend it, and then re-save it, overwriting each other (not to mention the resources wasted on crawling the same URL twice). I believe you must accept that one way or another you will lose some performance; that is an unfortunate part of ensuring consistency. You could use findAndModify to pick exclusively: because findAndModify is isolated, it can ensure that you only pick a URL that has not been picked before. The problem is that findAndModify, due to that isolation, will slow down the rate of your crawling. Another option is an optimistic lock, whereby you write a lock to the picked documents very quickly after picking them; this means there is some wastage from crawling duplicate URLs, but it also means you get the maximum performance and concurrency out of your workers. Which one you go for requires you to test and discover which best suits you.
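A hedged sketch of the findAndModify approach with pymongo follows (untested; the connection string, database/collection names, and the status/worker fields are assumptions, and the corresponding call in current pymongo versions is find_one_and_update):

```python
from datetime import datetime
import pymongo  # third-party: pip install pymongo

# Connection string and field names are placeholders for illustration.
client = pymongo.MongoClient("mongodb://localhost:27017")
queue = client.crawler.queue

def claim_next_url(worker_id):
    """Atomically claim one pending URL so no other worker can take it."""
    doc = queue.find_one_and_update(
        {"status": "pending"},
        {"$set": {"status": "claimed",
                  "worker": worker_id,
                  "claimed_at": datetime.utcnow()}},
    )
    return doc["url"] if doc else None   # None means the queue is empty

url = claim_next_url("worker-1")
if url:
    # ... crawl the URL here ...
    queue.update_one({"url": url}, {"$set": {"status": "done"}})
```

Because the match and the $set happen in one atomic operation, two workers can never claim the same pending document, at the cost of serializing the claim step.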
0
510
0
0
2014-03-29T22:55:00.000
python,mongodb,queue,mongodb-query,worker
Multiple workers getting information from a single MongoDB queue
1
1
1
22,738,408
0