Dataset columns (name, dtype, range or length):
Web Development: int64, 0 to 1
Data Science and Machine Learning: int64, 0 to 1
Question: string, lengths 28 to 6.1k
is_accepted: bool, 2 classes
Q_Id: int64, 337 to 51.9M
Score: float64, -1 to 1.2
Other: int64, 0 to 1
Database and SQL: int64, 0 to 1
Users Score: int64, -8 to 412
Answer: string, lengths 14 to 7k
Python Basics and Environment: int64, 0 to 1
ViewCount: int64, 13 to 1.34M
System Administration and DevOps: int64, 0 to 1
Q_Score: int64, 0 to 1.53k
CreationDate: string, lengths 23 to 23
Tags: string, lengths 6 to 90
Title: string, lengths 15 to 149
Networking and APIs: int64, 1 to 1
Available Count: int64, 1 to 12
AnswerCount: int64, 1 to 28
A_Id: int64, 635 to 72.5M
GUI and Desktop Applications: int64, 0 to 1
0
0
I opened Python code from GitHub. I assumed it was Python 2.x and got the above error when I tried to run it. From what I've read, Python 3 has deprecated urllib itself and replaced it with a number of libraries including urllib.request. It looks like the code was written in Python 3 (a confirmation from someone who knows would be appreciated). At this point I don't want to move to Python 3 - I haven't researched what it would do to my existing code. Thinking there should be a urllib module for Python 2, I searched Google (using "python2 urllib download") and did not find one. (It might have been hidden in the many answers, since urllib includes downloading functionality.) I looked in my Python27/lib directory and didn't see it there. Can I get a version of this module that runs on Python 2.7? Where and how?
false
31,601,238
0
0
0
0
For now, it seems that I could get over that by adding a ? after the URL.
0
108,467
0
22
2015-07-24T02:28:00.000
python,urllib
Python 2.7.10 error "from urllib.request import urlopen" no module named request
1
3
7
37,034,337
0
0
0
I opened Python code from GitHub. I assumed it was Python 2.x and got the above error when I tried to run it. From what I've read, Python 3 has deprecated urllib itself and replaced it with a number of libraries including urllib.request. It looks like the code was written in Python 3 (a confirmation from someone who knows would be appreciated). At this point I don't want to move to Python 3 - I haven't researched what it would do to my existing code. Thinking there should be a urllib module for Python 2, I searched Google (using "python2 urllib download") and did not find one. (It might have been hidden in the many answers, since urllib includes downloading functionality.) I looked in my Python27/lib directory and didn't see it there. Can I get a version of this module that runs on Python 2.7? Where and how?
false
31,601,238
1
0
0
7
In Python 2, drop the request part: use urllib.urlopen() instead of urllib.request.urlopen(). You do not need urllib.request in Python 2.x for what you are trying to do. Hope it works for you. This was tested using Python 2.7; I was receiving the same error message and this resolved it.
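A minimal sketch of the point above, assuming a plain GET of a URL; the try/except import shim lets the same script run on both Python 2 and Python 3:

# Python 3 exposes urlopen in urllib.request, Python 2 in urllib
try:
    from urllib.request import urlopen   # Python 3
except ImportError:
    from urllib import urlopen           # Python 2

response = urlopen('http://example.com/')
print(response.read()[:100])              # first 100 bytes of the body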
0
108,467
0
22
2015-07-24T02:28:00.000
python,urllib
Python 2.7.10 error "from urllib.request import urlopen" no module named request
1
3
7
34,079,450
0
0
0
I am debugging a test case. I use Python's OptionParser (from optparse) to do some testing, and one of the options is an HTTP request. The input in this specific case for the HTTP request was 269KB in size. So my Python program fails with "Argument list too long" (I verified that there were no other arguments passed, just the request and one more argument as expected by the option parser. When I throw away some of the request and reduce its size, things work fine. So I have a strong reason to believe the size of the request is causing my problems here.) Is it possible for an HTTP request to be that big? If so, how do I fix the OptionParser to handle this input?
false
31,630,636
0.197375
0
0
3
Is it possible for an HTTP request to be that big? Yes, it's possible, but it's not recommended and you could have compatibility issues depending on your web server configuration. If you need to pass large amounts of data you shouldn't use GET. If so, how do I fix the OptionParser to handle this input? It appears that OptionParser has set its own limit well above what is considered a practical implementation. I think the only way to 'fix' this is to get the Python source code and modify it to meet your requirements. Alternatively, write your own parser. UPDATE: I possibly misinterpreted the question, and the comment from Padraic below may well be correct. If you have hit an OS limit for command-line argument size then it is not an OptionParser issue but something much more fundamental to your system design, which means you may have to rethink your solution. This also possibly explains why you are attempting to use GET in your application (so you can pass it on the command line?)
0
82
0
3
2015-07-25T20:09:00.000
python,http
Is a HTTP Get request of size 269KB allowed?
1
3
3
31,630,829
0
0
0
I am debugging a test case. I use Python's OptionParser (from optparse) to do some testing, and one of the options is an HTTP request. The input in this specific case for the HTTP request was 269KB in size. So my Python program fails with "Argument list too long" (I verified that there were no other arguments passed, just the request and one more argument as expected by the option parser. When I throw away some of the request and reduce its size, things work fine. So I have a strong reason to believe the size of the request is causing my problems here.) Is it possible for an HTTP request to be that big? If so, how do I fix the OptionParser to handle this input?
true
31,630,636
1.2
0
0
0
The typical limit is 8KB, but it can vary (it may even be less).
0
82
0
3
2015-07-25T20:09:00.000
python,http
Is a HTTP Get request of size 269KB allowed?
1
3
3
31,630,668
0
0
0
I am debugging a test case. I use Python's OptionParser (from optparse) to do some testing, and one of the options is an HTTP request. The input in this specific case for the HTTP request was 269KB in size. So my Python program fails with "Argument list too long" (I verified that there were no other arguments passed, just the request and one more argument as expected by the option parser. When I throw away some of the request and reduce its size, things work fine. So I have a strong reason to believe the size of the request is causing my problems here.) Is it possible for an HTTP request to be that big? If so, how do I fix the OptionParser to handle this input?
false
31,630,636
0.132549
0
0
2
A GET request, unlike a POST request, contains all its information in the URL itself. This means you have a URL of 269KB, which is extremely long. Although there is no theoretical limit on the size allowed, many servers don't allow URLs over a couple of KB long and should return a 414 response code in that case. A safe limit is 2KB, although most modern software will allow a bit more than that. But still, for 269KB, use POST (or PUT if that is semantically more correct), which can carry larger chunks of data as the content of a request rather than the URL.
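A hedged sketch of moving the payload out of the URL and into a POST body with requests (the URL and file name here are placeholders, not from the question):

import requests

with open('large_query.txt') as f:
    payload = f.read()            # e.g. the 269 KB of data

# the data travels in the request body, not in the URL
resp = requests.post('http://example.com/search', data=payload)
print(resp.status_code)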
0
82
0
3
2015-07-25T20:09:00.000
python,http
Is a HTTP Get request of size 269KB allowed?
1
3
3
31,630,678
0
1
0
I have developed an API in Flask which uses HTTPBasicAuth to authenticate users. The API works absolutely fine in Fiddler and returns 401 when we pass wrong credentials, but when I use the same on a login page I get an extra pop-up from the browser. I really don't want to see this extra pop-up asking for credentials (the default behaviour of a browser when a 401 is returned with WWW-Authenticate: Basic realm="Authentication Required"). It works fine when deployed locally but not when hosted on a remote server. How can we return a 401 that will not make the browser display the popup asking for credentials?
true
31,666,601
1.2
0
0
3
This is a common problem when working with REST APIs and browser clients. Unfortunately there is no clean way to prevent the browser from displaying the popup. But there are tricks that you can do: You can return a non-401 status code. For example, return 403. Technically it is wrong, but if you have control of the client-side API, you can make it work. The browser will only display the login dialog when it gets a 401. Another maybe a bit cleaner trick is to leave the 401 in the response, but not include the WWW-Authenticate header in your response. This will also stop the login dialog from appearing. And yet another (that I haven't tried myself, but have seen mentioned elsewhere) is to leave the 401 and the WWW-Authenticate, but change the auth method from Basic to something else that is unknown to the browser (i.e. not Basic and not Digest). For example, make it CustomBasic.
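A minimal Flask sketch of the second trick, returning 401 without a WWW-Authenticate header so the browser shows no login dialog (the endpoint name and the credential check are placeholders):

from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/api/secret')
def secret():
    authorized = False            # replace with your real credential check
    if not authorized:
        # No WWW-Authenticate header is added, so no browser popup appears.
        return jsonify(error='unauthorized'), 401
    return jsonify(data='ok')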
0
13,044
0
4
2015-07-28T02:54:00.000
python-2.7,flask,basic-authentication,www-authenticate
How to return 401 authentication from flask API?
1
1
2
31,666,814
0
1
0
I am working for a company that wants me to test and cover every piece of code I have. My code works properly from the browser; there is no error and no fault. If my code works properly in the browser and my system is responding properly, do I still need to do testing? Is testing compulsory?
false
31,692,090
0.197375
1
0
1
Whether it’s compulsory depends on organization you work for. If others say it is, then it is. Just check how tests are normally written in the company and follow existing examples. (There’re a lot of ways Django-based website can be tested, different companies do it differently.) Why write tests? Regression testing. You checked that your code is working, does it still work now? You or someone else may change something and break your code at some point. Running test suite makes sure that what was written yesterday still works today; that the bug fixed last week wasn’t accidentally re-introduced; that things don’t regress. Elegant code structuring. Writing tests for your code forces you to write code in certain way. For example, if you must test a long 140-line function definition, you’ll realize it’s much easier to split it into smaller units and test them separately. Often when a program is easy to test it’s an indicator that it was written well. Understanding. Writing tests helps you understand what are the requirements for your code. Properly written tests will also help new developers understand what the code does and why. (Sometimes documentation doesn’t cover everything.) Automated tests can test your code under many different conditions quickly, sometimes it’s not humanly possible to test everything by hand each time new feature is added. If there’s the culture of writing tests in the organization, it’s important that everyone follows it without exceptions. Otherwise people would start slacking and skipping tests, which would cause regressions and errors later on.
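Purely illustrative, not part of the answer: a minimal Django regression test of the kind described above (the URL and expectation are hypothetical, and it assumes an existing Django project with its test runner):

from django.test import TestCase

class LoginPageTests(TestCase):
    def test_login_page_renders(self):
        # if someone later breaks the login view, this test starts failing
        response = self.client.get('/login/')
        self.assertEqual(response.status_code, 200)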
0
33
0
1
2015-07-29T05:47:00.000
python,django,testing
is testing compulsory if it works fine on realtime on browser
1
1
1
31,694,536
0
1
0
So I am trying to download multiple excel links to different file paths depending on the link using Selenium. I am able to set up the FirefoxProfile to download all links to a certain single path, but I can't change the path on the fly as I try to download different files into different file paths. Does anyone have a fix for this? self.fp = webdriver.FirefoxProfile() self.ft.set_preferences("browser.download.folderList", 2) self.ft.set_preferences("browser.download.showWhenStarting", 2) self.ft.set_preferences("browser.download.dir", "C:\SOURCE FILES\BACKHAUL") self.ft.set_preferences("browser.helperApps.neverAsk.saveToDisk", ("application/vnd.ms-excel)) self.driver = webdriver.Firefox(firefox_profile = self.fp) This code will set the path I want once. But I want to be able to set it multiple times while running one script.
false
31,734,447
0
0
0
0
You can define it only while initializing the driver, so to use a new path you should call driver.quit() and start the driver again.
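A hedged sketch of that approach: a small helper that creates a fresh profile and driver for each download directory, quitting the old driver before switching (the paths and preferences follow the question but are illustrative):

from selenium import webdriver

def make_driver(download_dir):
    fp = webdriver.FirefoxProfile()
    fp.set_preference("browser.download.folderList", 2)
    fp.set_preference("browser.download.dir", download_dir)
    fp.set_preference("browser.helperApps.neverAsk.saveToDisk",
                      "application/vnd.ms-excel")
    return webdriver.Firefox(firefox_profile=fp)

driver = make_driver(r"C:\SOURCE FILES\BACKHAUL")
# ... download the first batch of files ...
driver.quit()

driver = make_driver(r"C:\SOURCE FILES\OTHER")
# ... download the next batch to the new location ...
driver.quit()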
0
667
0
2
2015-07-30T21:33:00.000
python-2.7,selenium-webdriver
Changing FirefoxProfile() preferences more than once using Selenium/Python
1
1
2
31,735,197
0
0
0
I need to fetch data using REST endpoints (which return JSON) and load that JSON into a Cassandra cluster sitting on AWS. This is a migration effort involving millions of records. There is no access to the source DB, only access to the REST endpoints. What options do I have? What programming language should I use? (I am thinking of Python or any scripting language.) Since I will have to migrate millions of records, I would like to process the jobs concurrently. What are the challenges? Thanks for the time and help. --GK.
false
31,737,396
0.197375
0
1
1
Cassandra 2.2.0 added the ability to insert and get data as JSON, so you can use that. For inserting JSON data:
CREATE TABLE test.example ( id int PRIMARY KEY, id2 int, id3 int );
cqlsh> INSERT INTO example JSON '{"id":10,"id2":10,"id3":10}';
To select data as JSON:
cqlsh> SELECT json * FROM example;
[json] {"id": 10, "id2": 10, "id3": 10}
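A hedged Python-side sketch of the same idea: pull a record from a REST endpoint with requests and insert it using Cassandra's INSERT ... JSON via the cassandra-driver package (the URL, keyspace and table names are placeholders, and Cassandra 2.2+ is assumed):

import json
import requests
from cassandra.cluster import Cluster

session = Cluster(['127.0.0.1']).connect('test')

# fetch one record from the REST endpoint and insert it as JSON
record = requests.get('http://api.example.com/records/1').json()
session.execute("INSERT INTO example JSON %s", (json.dumps(record),))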
0
129
0
1
2015-07-31T02:59:00.000
python,json,rest,cassandra,data-migration
Data Migration to Cassandra using REST End points
1
1
1
31,738,908
0
1
0
I want to run multiple spiders to crawl many different websites. The websites I want to crawl take different amounts of time to be scraped (some take about 24h, others 4h, ...). I have multiple workers (fewer than the number of websites) to launch Scrapy, and a queue where I put the websites I want to crawl. Once a worker has finished crawling a website, the website goes back into the queue, waiting for a worker to be available to launch Scrapy, and so on. The problem is that small websites will be crawled more times than big ones, and I want all websites to be crawled the same number of times. I was thinking about using RabbitMQ for queue management and to prioritise some websites. But when I search for RabbitMQ, it is often used with Celery. What I understood about these tools is that Celery will allow me to launch some code depending on a schedule, and RabbitMQ will use messages and queues to define the execution order. In my case, I don't know if using only RabbitMQ without Celery will work. Also, is using RabbitMQ helpful for my problem? Thanks
true
31,834,738
1.2
0
0
1
Yes, using RabbitMQ is very helpful for your use case, since your crawling agent can use a message queue for storing the results, while your document processor can then store them in both your database back end (in this reply I'll assume MongoDB) and your search engine (I'll assume Elasticsearch here). What you get in this scenario is a very rapid and dynamic search engine and crawler that can be scaled. As for the Celery + RabbitMQ + Scrapy portion: Celery would be a good way to schedule your Scrapy crawlers and distribute your crawler bots across your infrastructure. Celery just uses RabbitMQ as its back end to consolidate and distribute the jobs between instances. So, for your use case, to use both Celery and Scrapy, write the code for your Scrapy bot to use its own RabbitMQ queue for storing the results, then write a document processor to store the results in your persistent database back end. Then set up Celery to schedule the batches of site crawls. Throw in the sched module to maintain a bit of sanity in your crawling schedule. Also, review the work done at Google on how they resolve over-crawling of a site in their algorithm, respect sane robots.txt settings, and your crawler should be good to go.
0
1,802
0
2
2015-08-05T14:01:00.000
python,web-scraping,scrapy,rabbitmq,celery
Scrapy - Use RabbitMQ only or Celery + RabbitMQ for scraping multiple websites?
1
1
1
31,841,508
0
0
0
I'm looking for a solution for how to set up domain authorization with aiohttp. There are several LDAP libraries, but all of them block the event loop. I also don't have a clear understanding of user authorization with aiohttp. As I see it, I need session management: store isLoggedIn=True in a cookie, check that cookie at every route and redirect to the login handler, and check the key in every template? It seems very insecure; the session could be stolen.
false
31,857,628
0.379949
1
0
2
You may call synchronous LDAP library in thread pool (loop.run_in_executor()). aiohttp itself doesn't contain abstractions for sessions and authentication but there are aiohttp_session and aiohttp_security libraries. I'm working on these but current status is alpha. You may try it as beta-tester :)
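A minimal sketch of the run_in_executor approach mentioned above; the ldap3 usage, server address and handler are assumptions for illustration, not part of the answer:

import asyncio
from aiohttp import web
from ldap3 import Server, Connection

def check_credentials(user, password):
    # blocking LDAP bind; it runs in the default thread pool executor
    conn = Connection(Server('ldap://ldap.example.com'),
                      user=user, password=password)
    return conn.bind()

async def login(request):
    data = await request.post()
    loop = asyncio.get_event_loop()
    ok = await loop.run_in_executor(None, check_credentials,
                                    data['user'], data['password'])
    return web.json_response({'authenticated': bool(ok)})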
0
549
0
2
2015-08-06T13:51:00.000
python,python-asyncio
Proper way to setup ldap auth with aiohttp.web
1
1
1
31,880,066
0
0
0
I am running a Python script that uses the requests library to get data from a service. The script takes a while to finish and I am currently running it locally on my Windows 7 laptop. If I lock my screen and leave, will the script continue to run (for ~3 hours) without Windows disconnecting from the internet or halting any processes? The power settings are already set up to keep the laptop from sleeping. If it will eventually halt anything, how do I keep this from happening? Thanks.
true
31,866,507
1.2
0
0
13
As long as the computer doesn't get put to sleep, your process should continue to run.
0
27,171
0
14
2015-08-06T21:57:00.000
python,windows
Keep Python script running after screen lock (Win. 7)
1
2
2
31,866,538
0
0
0
I am running a Python script that uses the requests library to get data from a service. The script takes a while to finish and I am currently running it locally on my Windows 7 laptop. If I lock my screen and leave, will the script continue to run (for ~3 hours) without Windows disconnecting from the internet or halting any processes? The power settings are already set up to keep the laptop from sleeping. If it will eventually halt anything, how do I keep this from happening? Thanks.
false
31,866,507
1
0
0
7
Check "Power Options" in the Control panel. You don't need to worry about the screen locking or turning off as these wont affect running processes. However, if your system is set to sleep after a set amount of time you may need to change this to Never. Keep in mind there are separate settings depending on whether or not the system is plugged in.
0
27,171
0
14
2015-08-06T21:57:00.000
python,windows
Keep Python script running after screen lock (Win. 7)
1
2
2
31,866,586
0
0
0
Could you also explain what osv.osv is, and why we sometimes include the class name on the last line of the Python code, like this: student()? Why do we need to do that? And lastly, what is the arch field in the XML code? Thanks in advance
true
31,903,327
1.2
0
0
0
Python gives you the functionality (i.e. the "back end", not the database) and XML gives you the view (i.e. the "front end"). OSV = Object Service; it keeps the definitions of objects and their fields in memory, more or less. "arch" holds the "view architecture" in the XML.
1
282
0
1
2015-08-09T10:39:00.000
python-2.7,openerp,odoo
What does python code do and what does xml code do in odoo?
1
2
2
31,917,329
0
0
0
Could you also explain what osv.osv is, and why we sometimes include the class name on the last line of the Python code, like this: student()? Why do we need to do that? And lastly, what is the arch field in the XML code? Thanks in advance
false
31,903,327
0.197375
0
0
2
If you have experience with MVC, then you can compare an Odoo Python file to a model/controller which holds the business logic, for creating masters etc., and an XML file to a view which presents the data in the UI. The osv class lives in the OSV module in the OpenERP server and contains all the OpenERP properties, as you can see: _column, _defaults and many other things. student() - it's like a constructor call to register the object, but it's not needed any more in the latest versions.
1
282
0
1
2015-08-09T10:39:00.000
python-2.7,openerp,odoo
What does python code do and what does xml code do in odoo?
1
2
2
31,939,808
0
0
0
Is there a data structure like a container/queue, based on time, that I could use this way: add items (possibly duplicates) into it one by one, pop out those added earlier than 60 minutes ago, and count the queue; then I get the top 10 most-added items over a dynamic period of, say, 60 minutes. How do I implement this time-based container?
false
31,907,080
0
0
0
0
You can do something like this:
1. Start a 60-minute timer.
2. Get the pages that people visit.
3. Save the pages.
4. If the timer has not ended, repeat steps 2-3.
5. If the timer has ended, count which page is the most visited, which is the second most visited, etc.
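A rough Python sketch of such a time-based structure, keeping (timestamp, page) pairs in a deque and counting whatever is still inside the 60-minute window; the details are an assumption, not part of the answer:

import time
from collections import deque, Counter

WINDOW = 60 * 60          # seconds
visits = deque()          # holds (timestamp, page) pairs in arrival order

def add_visit(page):
    visits.append((time.time(), page))

def top_pages(n=10):
    cutoff = time.time() - WINDOW
    while visits and visits[0][0] < cutoff:
        visits.popleft()                  # drop entries older than 60 minutes
    return Counter(page for _, page in visits).most_common(n)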
0
347
0
0
2015-08-09T17:43:00.000
python,algorithm,data-structures,queue
python, data structures, algorithm: how to rank top 10 most visited pages in latest 60 minutes?
1
1
1
31,908,093
0
0
0
OK, so I know nothing about programming with Python yet, but I have wanted to make a bot to post to Instagram for a while, so I thought it would be a good way to 'hit the ground running'. I don't have a specific time frame, so no rush. I don't know any programming languages yet but have wanted to branch out, since I use a GUI-based web automation tool which I see has quite a lot of overlap with programming languages, such as if statements, variables, loops etc. I have been feeling that learning a proper language will be a better investment long term. So, since I know nothing about it but have my goal in mind, can people suggest where I should start in terms of what to study for the task? Then I can laser-focus what I need to learn and work at it piece by piece. I want to upload pictures as one operation and follow/unfollow as another on Instagram. So please illuminate me on how I'd go about that. I was told that Python is the best all-rounder to learn since it does everything in a tidy fashion, i.e. less code, and is intuitive. I will want to make other projects in future based on web automation, so I felt this would be a good one to learn from, according to a pro programmer I spoke to. I understand I may have been vague, but I'm not sure what to ask yet given my ignorance, so please ask away if needed to hone the question(s).
false
31,938,658
0
1
0
0
It's a bit heavy, but you can use Selenium and run a bot in your browser. You can even make automatic clicks on the window if you don't want to read the page's markup.
0
36,619
0
7
2015-08-11T09:56:00.000
python,instagram
Making an instagram posting bot with python?
1
2
10
55,689,785
0
0
0
OK, so I know nothing about programming with Python yet, but I have wanted to make a bot to post to Instagram for a while, so I thought it would be a good way to 'hit the ground running'. I don't have a specific time frame, so no rush. I don't know any programming languages yet but have wanted to branch out, since I use a GUI-based web automation tool which I see has quite a lot of overlap with programming languages, such as if statements, variables, loops etc. I have been feeling that learning a proper language will be a better investment long term. So, since I know nothing about it but have my goal in mind, can people suggest where I should start in terms of what to study for the task? Then I can laser-focus what I need to learn and work at it piece by piece. I want to upload pictures as one operation and follow/unfollow as another on Instagram. So please illuminate me on how I'd go about that. I was told that Python is the best all-rounder to learn since it does everything in a tidy fashion, i.e. less code, and is intuitive. I will want to make other projects in future based on web automation, so I felt this would be a good one to learn from, according to a pro programmer I spoke to. I understand I may have been vague, but I'm not sure what to ask yet given my ignorance, so please ask away if needed to hone the question(s).
false
31,938,658
1
1
0
6
You should note that while you can follow and unfollow users and like and unlike media. you CAN NOT post to Instagram using their API.
0
36,619
0
7
2015-08-11T09:56:00.000
python,instagram
Making an instagram posting bot with python?
1
2
10
31,963,001
0
0
0
So I first use requests.head() to download the header. I do some validation (check the status code, check the content type), and if that is good I download the body. However, I use requests.get() to do that, and .get() not only downloads the body but also the header, which I just downloaded. So that I don't need to download the header twice, is there any way I can download just the body of a GET response if the header looks good?
false
32,003,889
0
0
0
0
Asking only for the header to catch a 404 or 302 makes no sense. Either you get the error and the cost is the same as asking for the body directly, or you get no error and need a second request for the body. So unless the body is really large, I would directly ask for the body with a single request. And if it is large, but transferred in chunked mode, you can read the header with the first chunk and abort the transfer if it has a wrong content type. HTTP HEAD requests do not have many common uses, and what you describe does not really look like one. They are normally only used when you only need the header and will never use the body.
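A hedged requests sketch of the single-request approach: stream the GET, inspect the headers, and only pull the body if they look right (the URL and the content-type check are placeholders):

import requests

resp = requests.get('http://example.com/file', stream=True)
if resp.status_code == 200 and \
        resp.headers.get('Content-Type', '').startswith('text/html'):
    body = resp.content        # the body is only downloaded here
else:
    resp.close()               # give up without reading the body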
0
1,162
0
1
2015-08-14T06:47:00.000
python,python-requests
Python requests: Anyway to download just the body of a get response?
1
3
3
32,004,454
0
0
0
So I first use requests.head() to download the header. I do some validation (check the status code, check the content type), and if that is good I download the body. However, I use requests.get() to do that, and .get() not only downloads the body but also the header, which I just downloaded. So that I don't need to download the header twice, is there any way I can download just the body of a GET response if the header looks good?
false
32,003,889
0.066568
0
0
1
No, there's no way. HTTP has the HEAD request, which only gets the header, but there's no request to get only the body. Don't worry too much about efficiency until you need to; the header usually isn't too big anyway. Either use requests.get() in the first place if the body is small, or else do requests.head() followed by requests.get() if you need to.
0
1,162
0
1
2015-08-14T06:47:00.000
python,python-requests
Python requests: Anyway to download just the body of a get response?
1
3
3
32,003,982
0
0
0
So I first use requests.head() to download the header. I do some validation (check the status code, check the content type), and if that is good I download the body. However, I use requests.get() to do that, and .get() not only downloads the body but also the header, which I just downloaded. So that I don't need to download the header twice, is there any way I can download just the body of a GET response if the header looks good?
false
32,003,889
0
0
0
0
You can't! That is not the way HTTP works. If you want to use less traffic, you are on the right track: all you need is to perform the HEAD request and then decide whether you want to get the body. If so, use GET. Downloading the HTTP headers twice will not cost you (a lot) in performance or traffic, and it's necessary.
0
1,162
0
1
2015-08-14T06:47:00.000
python,python-requests
Python requests: Anyway to download just the body of a get response?
1
3
3
32,004,066
0
0
0
How to send eof signal, over a socket, to a command running in remote shell? I've programmed in Python, using sockets, a remote shell application, where I send commands to be executed on another PC. Everything works fine (for most commands), except a command like cat > file is causing me problems. Normally, I would terminate the command above with CTRL + D (eof signal), but pressing CTRL + D in my client, doesn't send the signal to the remote shell. Therefore I have no means of terminating the command and I'm stuck. Anyone have suggestions ?
false
32,023,586
1
0
0
7
eof is not a signal but is implemented by the tty driver as a read of length 0 when you type ctrl-d. If your remote is not running in a tty then you cannot generate an eof as you cannot send a packet that reads as of length 0. However if you run a program like script /dev/null as the first command in your remote shell, then this will envelop your shell inside a pseudo-tty, and you will be able to send a real ctrl-d character (hex 0x04) and the pty will convert this to eof and end a cat, for example. Send a stty -a to the remote to check that eol is enabled in your pty. stty -a on my terminal says lnext = ^V (literal-next char) so I can type ctrl-vctrl-d to input a real hex 0x04 char. I chose script as I know it effectively interposes a pseudo-tty in the communication, and does nothing to the data stream. This is not its original purpose (see its man page), but that doesn't matter.
0
4,705
1
6
2015-08-15T09:56:00.000
linux,sockets,python-3.x,signals
How to send `eof` signal, over a socket, to a command running in remote shell?
1
1
2
32,023,977
0
1
0
from the requests documentation : Remove a Value From a Dict Parameter Sometimes you’ll want to omit session-level keys from a dict parameter. To do this, you simply set that key’s value to None in the method-level parameter. It will automatically be omitted. I need the data with key's value as None to take the Json value null instead of being removed. Is it possible ? edit : This seems to happen with my request data keys. While they are not session-level the behaviour of removing is still the same.
true
32,097,768
1.2
0
0
5
There is no session-level JSON parameter, so the merging rules don't apply. In other words, the json keyword argument to the session.request() method is passed through unchanged, None values in that structure do not result in keys being removed. The same applies to data, there is no session-level version of that parameter, no merging takes place. If data is set to a dictionary, any keys whose value is set to None are ignored. Set the value to '' if you need those keys included with an empty value. The rule does apply when merging headers, params, hooks and proxies.
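A small illustration of that behaviour; httpbin.org is used here only as a convenient echo service, not something from the original answer:

import requests

r = requests.post('https://httpbin.org/post',
                  data={'a': '1', 'b': None, 'c': ''})
print(r.json()['form'])   # 'b' is dropped entirely; 'c' is sent as an empty value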
1
4,509
0
1
2015-08-19T14:03:00.000
python,python-requests
python requests module - set key to null
1
1
1
32,097,869
0
0
0
I am hosting a http server on Python using BaseHTTPServer module. I want to understand why it's required to specify the IP on which you are hosting the http server, like 127.0.0.1/192.168.0.1 or whatever. [might be a general http server concept, and not specific to Python] Why can't it be like anybody who knows the IP of the machine could connect to the http server? I face problems in case when my http server is connected to two networks at the same time, and I want to serve the http server on both the networks. And often my IP changes on-the-fly when I switch from hotspot mode on the http server machine, to connecting to another wifi router.
false
32,110,965
0.197375
0
0
2
Try running it on 0.0.0.0; this accepts connections from all interfaces. Explicitly specifying the IP is a good practice in general (load balancing, caching servers, security, internal network-only microservices, etc.), but judging by your story this is not a production server, but some internal LAN application.
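A minimal Python 2 BaseHTTPServer sketch bound to 0.0.0.0, so it accepts connections on every local interface (the port and handler are illustrative):

from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write('hello\n')

# 0.0.0.0 listens on all local interfaces at once
HTTPServer(('0.0.0.0', 8000), Handler).serve_forever()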
0
1,443
0
1
2015-08-20T06:27:00.000
python,httpserver,basehttpserver
Why host http server needs to specify the IP on which it is hosting?
1
2
2
32,111,853
0
0
0
I am hosting a http server on Python using BaseHTTPServer module. I want to understand why it's required to specify the IP on which you are hosting the http server, like 127.0.0.1/192.168.0.1 or whatever. [might be a general http server concept, and not specific to Python] Why can't it be like anybody who knows the IP of the machine could connect to the http server? I face problems in case when my http server is connected to two networks at the same time, and I want to serve the http server on both the networks. And often my IP changes on-the-fly when I switch from hotspot mode on the http server machine, to connecting to another wifi router.
true
32,110,965
1.2
0
0
2
You must specify the IP address of the server, mainly because the underlying system calls for listening on a socket require it. At a lower level you declare which pair (IP address, port) you want to use, listen on it and accept incoming connections. Another reason is that professional-grade servers often have multiple network interfaces and multiple IP addresses, and some services only need to listen on some interface addresses. Fortunately, there are special addresses: localhost or 127.0.0.1 is the loopback address, only accessible from the local machine; it is commonly used for testing local services. 0.0.0.0 (any) is a special address used to declare that you want to listen on all the local interfaces. I think that is what you want here.
0
1,443
0
1
2015-08-20T06:27:00.000
python,httpserver,basehttpserver
Why host http server needs to specify the IP on which it is hosting?
1
2
2
32,112,117
0
1
0
I have the following html extract 0.94 I am trying to read the href value ie 0.94.I tried the following : answer = browser.find_element_by_class_name("res") print answer output = answer.get_attribute('data-href') print output The Result is as follows: None I tried various other methods, using find_element_by_xpath etc,but not able to get the desired value ie. 0.94 (as in this example). How can I get this value in the shortest way? Thanks in advance
false
32,113,290
0
0
0
0
Use the element's text (the .text property in the Python bindings) if you want to print 0.94.
0
206
0
0
2015-08-20T08:30:00.000
python-2.7,selenium
getting href-data using selenium
1
1
1
32,117,309
0
0
0
I am new to socket programming and recently picked up Python for it. I have a few questions in mind which I can't seem to find a definite answer for. I am looking into sending data over UDP and have written a simple Python script to do just that. It works fine sending small objects (small pickled objects, to be exact) across, but how should I handle objects that are too large to fit in one UDP packet? I've thought of first sizing up the object in bytes. Nothing will be done if the object is small enough to fit in a UDP packet, but if the object is too huge, it will be split up evenly (if possible) into many smaller chunks so that it can fit into multiple UDP packets and be sent across to the client. Once the client receives the chunks, it will reassemble the multiple UDP packets into the original state. I immediately hit my first brick wall when trying to implement the above. From the research I've done, it doesn't seem like there is any 'effective' way of getting the byte size of an object. This means I am unable to determine if an object is too large to fit in a UDP packet. What happens if I insist on sending a large object across to the client? Will it get fragmented automatically and be reassembled on the client side, or will the packet be dropped by the client? What is the right way to handle large objects over UDP, keeping in mind that the large object could be a file that is 1GB in size or a bytes object that is 25MB in size? Thanks in advance. Side notes: I do understand that UDP packets may not always come in order, and therefore I have already implemented a countermeasure for it, which is to tag a sequence number to the UDP packets sent out to the client. I do understand that there is no assurance that the client will receive all of the UDP packets. I am not concerned about packet loss for now. I do understand that TCP is the right candidate for what I am trying to do, but I am focusing on understanding UDP and on how to handle situations where acknowledgement of packets from the client is not possible, for now. I do understand that the usage of pickle is insecure. I will look into it at a later stage.
true
32,125,774
1.2
0
0
3
A UDP packet can be as large as approximately 64k. So if you want to send a file that is larger than that you can fragment yourself into packets of 64k. That is the theoretical maximum. My advice is to use fragments of smaller chunks of 500 bytes. IP is responsible for fragmentation and reassembly of the packets if you do use 64k packets. Smaller packets of 500 bytes are not likely to be fragmented because the mtu is usually around 1500 bytes. If you use larger packets that are fragmented, IP is going to drop them if one of those fragments is lost. You are right that using TCP is probably better to use for something like this or even an existing protocol like TFTP. It implements a per packet acking mechanism and sequence numbers just like you did.
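A hedged sender-side sketch of that chunking idea: 500-byte chunks, each datagram prefixed with a sequence number; receiver-side reassembly, ordering and loss handling are deliberately left out:

import socket
import struct

CHUNK = 500
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_object(data, addr=('127.0.0.1', 9999)):
    # prefix each chunk with a 4-byte big-endian sequence number
    for seq, offset in enumerate(range(0, len(data), CHUNK)):
        packet = struct.pack('!I', seq) + data[offset:offset + CHUNK]
        sock.sendto(packet, addr)

send_object(b'x' * 25 * 1024 * 1024)   # e.g. a 25 MB bytes object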
0
5,845
0
4
2015-08-20T18:32:00.000
python,sockets,networking,udp
Python: Sending large object over UDP
1
1
3
32,126,306
0
0
0
I am using py.test to implement integration testing for uploading photos to Picasa. However, the authentication method from oauth2client.flow_from_clientsecrets (which should open a web browser at the authentication URL) simply stopped working. I am not sure why it occurs, though; is it because from py.test we can't create/spawn a new process? I ask because oauth2client.flow_from_clientsecrets will call webbrowser.open, which in turn will call subprocess.Popen.
true
32,153,084
1.2
0
0
0
@Bruno Oliveira is right. I tried using a clean py.test to test Flickr/Picasa auth, and it is able to open a web browser. The problem may lie in another custom library that is being developed. Thanks! PS: I will report it here if I find out why webbrowser.open won't work.
0
567
0
0
2015-08-22T06:25:00.000
pytest,python-multithreading
How to solve thread blocking in Py.test?
1
1
1
32,164,189
0
1
0
I'm writing a script to download a PDF automatically. First, when I open the URL manually, it redirects to a login page; I type my username and password, click "submit", and the download starts directly. During this procedure I checked Firebug and found there is no POST when I click "submit". I'm not familiar with this behaviour; does that mean the PDF (300K) is saved before I submit? If there is no POST, must I then use some tool like Selenium to simulate this "click"?
false
32,154,052
0
0
0
0
Is there no request at all, or a GET request? I suspect there is a GET request. In that case, did you turn Persist on in Firebug's Net tab? Possibly the POST request was hidden after redirects.
0
44
0
0
2015-08-22T08:30:00.000
python,selenium,urllib2
No post request after submitting a form when I want to download a PDF
1
1
1
32,155,740
0
0
0
If I have a list of URLs, is it possible to parse them in Python and capture the server calls' key/values without needing to open any browser manually, and save them to a local file? The only library I found for CSV is pandas, but I haven't found anything for the first part. Any example would be perfect for me.
false
32,196,417
0
0
0
0
You can investigate using one of the built-in or third-party libraries that let Python perform the browser-like operations and record the results, filter them, and then use the built-in csv library to output the results. You will probably need one of the lower-level libraries: urllib/urllib2/urllib3. And you may need to override one or more of their methods to record the transaction data that you are looking for.
0
30
0
0
2015-08-25T05:50:00.000
python,pandas
Collect calls and save them to csv
1
1
1
32,196,617
0
0
0
I know my friend's external IP (from whatsmyip) and internal IP (e.g 192.168.1.x) and he knows mine. How do I establish a TCP connection with him? Is it possible to do it without any port forwarding? Or do I require a server with an external IP to transfer messages between me and him?
false
32,204,773
0.099668
0
0
1
Basically, it isn't (shouldn't be) possible for you to connect to your friends private IP through his firewall. That's the point of firewalls :-o Two solutions - the simplest is a port forwarding rule on his firewall, the second is as you suggest an external server that both clients connect to.
0
918
1
1
2015-08-25T13:02:00.000
python,python-2.7,sockets,networking,tcp
Connecting to a known external ip and internal ip without port forwarding
1
2
2
32,205,167
0
0
0
I know my friend's external IP (from whatsmyip) and internal IP (e.g 192.168.1.x) and he knows mine. How do I establish a TCP connection with him? Is it possible to do it without any port forwarding? Or do I require a server with an external IP to transfer messages between me and him?
false
32,204,773
0.291313
0
0
3
You cannot do that because of NAT (Network Address Translation). The public IP you see via whatsmyip.com is the public IP of your router. Since different machines can connect to the same router, all of them will have the same public IP (that of the router). However, each of them has an individual private IP assigned by the router. Each outgoing connection from the private network has to be distinguished, hence the router converts the connection (private IP, port) to a (different) port and adds it to the NAT table. So if you really want a working connection, you have to determine both the internal and external port for both ends and do the port forwarding in the router. It's a bit tricky, and hence techniques like TCP hole punching are used.
0
918
1
1
2015-08-25T13:02:00.000
python,python-2.7,sockets,networking,tcp
Connecting to a known external ip and internal ip without port forwarding
1
2
2
32,220,457
0
1
0
I'm doing a web scrape with Python (using the Scrapy framework). The scrape works successfully until it gets about an hour into the process and then every request comes back with a HTTP400 error code. Is this just likely to be a IP based rate limiter or scrape detection tool? Any advice on how I might investigate the root cause further?
false
32,217,773
0
0
0
0
It could be a rate limiter. However a 400 error generally means that the client request was malformed and therefore rejected by the server. You should start investigating this first. When your requests start failing, exit your program and immediately start it again. If it starts working, you know that you aren't being rate-limited and that there is in fact something wrong with how your requests are formed later on.
0
611
0
0
2015-08-26T04:01:00.000
python,http,web-scraping,scrapy
Python Web Scraping HTTP 400
1
1
2
32,217,810
0
1
0
I'm running a Python Tornado server with a WebSocket handler. We've noticed that if we abruptly disconnect the client (disconnect a cable, for example) the server has no indication the connection was broken: no on_close event is raised. Is there a workaround? I've read there's an option to send a ping, but I didn't see anyone use it in the examples online and I'm not sure how to use it or whether it will address this issue.
false
32,245,227
0
0
0
0
The on_close event can only be triggered when the connection is closed cleanly. You can send a ping and wait for an on_pong event. Timeouts are typically hard to detect, since you won't even get a message that the socket is closed.
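A hedged sketch of that ping/on_pong approach; the handler name, the 10-second ping interval and the 30-second timeout are arbitrary choices, not part of the original answer:

import time
from tornado import websocket, ioloop

class EchoHandler(websocket.WebSocketHandler):
    def open(self):
        self.last_pong = time.time()
        # ping every 10 seconds
        self.pinger = ioloop.PeriodicCallback(self.send_ping, 10000)
        self.pinger.start()

    def send_ping(self):
        if time.time() - self.last_pong > 30:
            self.close()            # no pong for 30 s: assume the peer is gone
        else:
            self.ping(b'keepalive')

    def on_pong(self, data):
        self.last_pong = time.time()

    def on_close(self):
        self.pinger.stop()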
0
1,215
1
0
2015-08-27T09:13:00.000
python,websocket,tornado
Tornado websocket pings
1
1
1
32,245,768
0
1
0
I have a JSP page called X.JSP (it contains a few radio buttons and a submit button). When I hit the submit button in X.JSP, the next page is displayed: Y.JSP?xxxx=1111&yyyy=2222&zzzz=3333. How do I find out what page, service or AJAX call is being made when I hit the submit button on the X.JSP page? The parameters xxxx=1111&yyyy=2222&zzzz=3333 are generated after I click the submit button in X.JSP. Currently I am using Python to script this: I select a radio button and post the form, but I am not able to get the desired output. How do I find out what page, service or AJAX call is made when I hit the submit button on the X.JSP page, so that I can hit that page directly? Or is there a better way to solve this?
false
32,278,459
0
0
0
0
The developers console ( F12 in Chrome and Firefox) is a wonderful thing. Check the Network or Net tab. There you can see all the requests between your browser and your server.
0
49
0
1
2015-08-28T19:24:00.000
javascript,python,ajax,jsp,jsp-tags
how to know what page or call is being when in a JSP page when submited
1
1
1
32,278,473
0
1
0
I am writing an image crawler that scrapes the images from a web page. This is done by finding the img tags on the web page. But recently I noticed that some img tags don't have an alt attribute. Is there any way to find the keywords for such an image? And are there any precautions to take when crawling websites for images?
true
32,286,059
1.2
0
0
0
If there is no alt attribute in the tag, or it is empty, check for the name attribute; if there is no name, check for id. Well, an id, with .asp or .aspx pages for instance, doesn't have to make sense. But, as a last resort, use the src attribute by taking just the filename without its extension. Sometimes the class attribute can also be used, but I don't recommend it; even id can be very deceiving. You will have trouble with JS-inserted images, of course, but even that can be solved with a lot of time and will. As for precautions, what exactly do you mean? Checking whether src is really an image, or something else?
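A small illustrative sketch of that fallback chain with BeautifulSoup; the HTML snippet is made up:

import os
from bs4 import BeautifulSoup

html = '<img src="/static/pics/red_fox.jpg" id="hero-img">'
img = BeautifulSoup(html, 'html.parser').find('img')

# alt, then name, then id, then the src filename without its extension
keyword = (img.get('alt') or img.get('name') or img.get('id')
           or os.path.splitext(os.path.basename(img.get('src', '')))[0])
print(keyword)    # "hero-img" here, or "red_fox" if the id were missing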
0
278
0
0
2015-08-29T12:30:00.000
python,image,web-crawler
Crawling and finding keywords for images without any "alt" attribute
1
1
1
32,286,161
0
0
0
I have created a script that: imports a list of IPs from a .txt file (around 5K); connects to a REST API and performs a query based on each IP (web logs for each IP); does some calculations on the data returned from the API; and writes the results of the calculations to a .csv. At the moment it's really slow, as it takes one IP at a time, does everything, and then goes to the next IP. I may be wrong, but from my understanding, with threading or multiprocessing I could have 3-4 threads each handling an IP, which would increase the speed of the tool by a huge margin. Is my understanding correct, and if it is, should I be looking at threading or multiprocessing for my task? Any help would be amazing. Random info: running Python 2.7.5, Win7, with plenty of resources.
false
32,286,655
0.099668
0
0
1
With multiprocessing, a primitive way to do this would be to chunk the file into 5 equal pieces, give them to 5 different processes that write their results to 5 different files, and merge the results when all processes are done. You can have the same logic with Python threads without much complication, and it probably won't make any difference, since the bottleneck is probably the API. So in the end it does not really matter which approach you choose here. There are two things to consider, though: using threads, you are not really using multiple CPUs, hence you have "wasted resources"; using multiprocessing will use multiple processors, but it is heavier on start-up. So you will benefit from never stopping the script and keeping the processes alive if the script needs to run very often. Since the information you gave about the scenario where you use this script (or, better said, program) is limited, it is really hard to say which is the better approach.
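A hedged sketch of the chunk-and-merge idea with a process pool; query_api is a stand-in for the real REST call and calculations, and ips.txt is the input file from the question:

from multiprocessing import Pool

def query_api(ip):
    # call the REST API for one IP and return the computed row
    return (ip, 'result-for-%s' % ip)

if __name__ == '__main__':
    with open('ips.txt') as f:
        ips = [line.strip() for line in f if line.strip()]

    pool = Pool(processes=5)
    rows = pool.map(query_api, ips)   # distributes the IPs across 5 workers
    pool.close()
    pool.join()
    # write `rows` out to the .csv here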
1
110
0
0
2015-08-29T13:37:00.000
python,multithreading
Python threading or multiprocessing for my 'tool'
1
1
2
32,286,881
0
0
0
When the client submits a request on the website, it should trigger a Python program on that website's server; that Python program scrapes data from the Internet, or does some other job, and returns the scraped data to the user. Thanks
false
32,290,371
0
0
0
0
You need to write handler/controller functions to handle each request from the client (the view). The router will route each request to a specific controller, which invokes the code, i.e. queries the database (via the model), and returns the data with the response to the client via that controller. Read more on MVC and frameworks like Flask/Django for more info.
0
22
0
0
2015-08-29T20:31:00.000
javascript,python,mysql,web
how to trigger a python module in a linux website server when client submit data through website
1
1
1
32,290,680
0
1
0
I am trying to embed a A.html file inside B.html(report) file as a hyperlink. Just to be clear, both html files are offline and available only on my local machine. and at last the report html file will be sent over email. The email recipient will not be having access to my local machine. So if they click on hyperlink in report.html, they will get "404 - File or directory not found". Is there any way to embed A.html inside report.html, so that email recipient can open A.html from report.html on their machine
false
32,304,781
0
1
0
0
You need to do one of: attach A.html as well as report.html; post A.html to a shared location such as Google Drive and modify the link to point to it; or put the content of A.html into a hidden <div> with a show method.
0
78
0
0
2015-08-31T05:57:00.000
python,html,hyperlink
Embed one html file inside other report html file as hyperlink in python
1
1
1
32,305,050
0
1
0
I am using python requests right now, but if there is a way to do this then it would be a game changer... Specifically, I want to download a bunch of PDFs from one website. I have the URLs of the pages I want. Can I grab more than one at a time?
false
32,311,385
0
0
0
0
I don't know the Python APIs specifically, but, for sure, the HTTP specification does not allow a single GET request to fetch multiple resources. To each request corresponds one and only one resource in the response; this is intrinsic to the protocol. In some situations you have to make many requests to obtain a single resource, or a part of it, as happens with range requests. But even in this case, every request has only one response, which is finally used by the client to assemble the complete final resource.
0
96
0
0
2015-08-31T12:33:00.000
python,webserver,python-requests
Is there a way to fetch multiple urls(chunks) from a web server with one GET request?
1
1
1
32,314,458
0
1
0
Basically, I'm crawling text from a web page with Python using BeautifulSoup, then saving it as HTML and sending it to my Kindle as a mail attachment. The problem is: the Kindle supports Latin-1 (ISO-8859-1) encoding, but the text I'm parsing includes characters that are not part of Latin-1. So when I try to encode the text as Latin-1, Python gives the following error because of the illegal characters: UnicodeEncodeError: 'latin-1' codec can't encode character u'\u2019' in position 17: ordinal not in range(256). When I try to encode it as UTF-8, the script runs perfectly, but the Kindle replaces some incompatible characters with gibberish.
false
32,316,480
0
1
0
0
Use <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/> I previously used <meta charset="UTF-8" />, which did not seem to work.
0
171
0
1
2015-08-31T17:13:00.000
python,encoding,kindle,latin1
Text Encoding for Kindle with Python
1
1
1
64,088,459
0
0
0
I'm kinda confused. So I understand that If we want to grab data from an API, we can just call that API url in whatever language we are using ( example in python, we can do urllib.open( url of api) and then load the json). My question is, if we can just open the url in any language, what's the point of the API libraries that developers usually have on the site ( library wrapper for python, java, c#, ruby, etc). Do we need to use a specific library to call an API in that specific language? Can we not just open up the API url in any language? What's the point of having a library in each language if we can just extract the API in each of those languages?
true
32,322,434
1.2
0
0
1
You don't need a library for the client. However, developers tend to like libraries because it helps with things like creating authorization headers, creating parameterized URLs and converting response bodies into native types. However, there are good/bad ways of building these kinds of libraries. Many libraries hide the HTTP API and introduce a whole new set of issues that HTTP interfaces were originally designed to avoid.
1
2,429
0
3
2015-09-01T01:57:00.000
python,api,rest
Difference between API and API library/Wrapper
1
1
1
32,322,492
0
0
0
I'm new to AWS using Python and I'm trying to learn the boto API however I noticed that there are two major versions/packages for Python. That would be boto and boto3. What is the difference between the AWS boto and boto3 libraries?
false
32,322,503
0.099668
0
0
1
Boto is the Amazon Web Services (AWS) SDK for Python. It enables Python developers to create, configure, and manage AWS services, such as EC2 and S3. Boto3, on the other hand, generates the client from a JSON service definition file; the client's methods support every single type of interaction with the target AWS service. Resources, in turn, are generated from JSON resource definition files. So Boto3 generates both the client and the resource from those definitions.
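A small side-by-side sketch of how the two SDKs are used, assuming AWS credentials are already configured in the environment:

import boto
import boto3

# boto (legacy): hand-written service classes
for bucket in boto.connect_s3().get_all_buckets():
    print(bucket.name)

# boto3: client and resource generated from JSON service definitions
for bucket in boto3.resource('s3').buckets.all():
    print(bucket.name)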
1
51,063
0
158
2015-09-01T02:09:00.000
python,amazon-web-services,boto,boto3
What is the difference between the AWS boto and boto3
1
1
2
70,191,456
0
1
0
I'm working on an AWS S3 multipart upload, and I am facing the following issue. Basically I am uploading a file chunk by chunk to S3, and if any write happens to the file locally during that time, I would like to reflect that change in the S3 object that is currently being uploaded. Here is the procedure I am following: initiate the multipart upload operation; upload the parts one by one (5 MB chunk size), without completing the operation yet. If during that time a write goes to the file (assuming I have the details of the write: offset, number of bytes written), I calculate the part number for that local write and read that part back from the S3 object being uploaded. I then read the same chunk from the local file, apply it to the part read from S3, and upload that part to the S3 object again. This will be an async operation. I complete the multipart operation at the end. I am facing an issue in reading an uploaded part while the multipart upload is still in progress. Is there any API available for that? Any help would be greatly appreciated.
true
32,348,812
1.2
0
1
3
There is no API in S3 to retrieve a part of a multi-part upload. You can list the parts but I don't believe there is any way to retrieve an individual part once it has been uploaded. You can re-upload a part. S3 will just throw away the previous part and use the new one in it's place. So, if you had the old and new versions of the file locally and were keeping track of the parts yourself, I suppose you could, in theory, replace individual parts that had been modified after the multipart upload was initiated. However, it seems to me that this would be a very complicated and error-prone process. What if the change made to a file was to add several MB's of data to it? Wouldn't that change your boundaries? Would that potentially affect other parts, as well? I'm not saying it can't be done but I am saying it seems complicated and would require you to do a lot of bookkeeping on the client side.
0
1,069
0
1
2015-09-02T08:59:00.000
python,amazon-web-services,file-upload,amazon-s3,boto
How to read a part of amazon s3 key, assuming that "multipart upload complete" is yet to happen for that key?
1
1
1
32,352,584
0
0
0
I want to create a custom topology using Python API and mininet. It should be such that, if there are n number of hosts, then odd numbered hosts can ping each other and also even numbered hosts can ping each other. For example, if we have 5 hosts, h1 .. to h5, then h1 can ping h3 and h5, while h2 can only ping h4. I have tried writing code, in which I added links between all even hosts and between all odd hosts. But I am not able to get the desired outcome. h1 is able to ping h3, but not h5. Also, is it correct to define links between hosts? Or should we only have links between hosts and switches and within switches?
false
32,362,586
0.197375
0
0
1
Since you are setting the controller to be remote (--controller=remote), you need to provide a controller explicitly. For example, if you are using POX, run this in another terminal:
cd pox
./pox.py openflow.discovery forwarding.l2_learning
Now do a pingall in the Mininet console; there should be 0% packet loss.
0
769
0
0
2015-09-02T20:33:00.000
python,networking,network-programming,mininet
Custom Topology in Mininet
1
1
1
36,837,694
0
0
0
I'm trying to make a socket connection that will stay alive in the event of connection loss, so basically I want to keep the server always open (and preferably the client too) and restart the client after the connection is lost. But if one end shuts down, both ends shut down. I simulated this by having both ends on the same computer ("localhost") and just clicking the X button. Could this be the source of my problems? Anyway, my connection code m.connect(("localhost", 5000)) is inside an if, a try, and a while, e.g.:
while True:
    if tryconnection:
        # Error handling
        try:
            m.connect(("localhost", 5000))
            init = True
            tryconnection = False
        except socket.error:
            init = False
            tryconnection = True
And at the end of my code I just do an m.send("example") when I press a button, and if that returns an error, the code that tries to connect to "localhost" starts again. The server is a pretty generic server setup with a while loop around the x.accept(). So how do I keep both ends alive when the connection closes so they can reconnect when it opens again? Or is my code alright and it's just that simulating on the same computer is messing with it?
false
32,397,089
0
0
0
0
The issue is not related to the programming language, in this case Python. The operating system (Windows or Linux) has the final word regarding the degree of resilience of the socket.
0
1,309
0
0
2015-09-04T11:33:00.000
python,sockets,networking,connection
Keeping python sockets alive in event of connection loss
1
2
2
32,403,092
0
0
0
I'm trying to make a socket connection that will stay alive in the event of connection loss, so basically I want to keep the server always open (and preferably the client too) and restart the client after the connection is lost. But if one end shuts down, both ends shut down. I simulated this by having both ends on the same computer ("localhost") and just clicking the X button. Could this be the source of my problems? Anyway, my connection code m.connect(("localhost", 5000)) is inside an if, a try, and a while, e.g.:
while True:
    if tryconnection:
        # Error handling
        try:
            m.connect(("localhost", 5000))
            init = True
            tryconnection = False
        except socket.error:
            init = False
            tryconnection = True
And at the end of my code I just do an m.send("example") when I press a button, and if that returns an error, the code that tries to connect to "localhost" starts again. The server is a pretty generic server setup with a while loop around the x.accept(). So how do I keep both ends alive when the connection closes so they can reconnect when it opens again? Or is my code alright and it's just that simulating on the same computer is messing with it?
true
32,397,089
1.2
0
0
1
I'm assuming we're dealing with TCP here since you use the word "connection". It all depends on what you mean by "connection loss". If by connection loss you mean that the data exchange between the server and the client may be suspended/unresponsive (important: I did not say "closed" here) for a long amount of time, seconds or minutes, then there's not much you can do about it, and it's fine like that because the TCP protocol has been carefully designed to handle such situations gracefully. The timeout before deciding that one or the other side is definitely down, giving up, and closing the connection is very long (minutes). Example of such a situation: the client is your smartphone, connected to some server on the web, and you enter a long tunnel. But when you say: "But if one end shuts down both ends shut down. I simulated this by having both ends on the same computer localhost and just clicking the X button", what you are doing is actually closing the connections. If you abruptly terminate the server: the TCP/IP implementation of your operating system will know that there is no longer a process listening on port 5000, and will cleanly close all connections to that port. In doing so a few TCP segments will be exchanged with the client side (it's a TCP 4-way teardown or a reset), and all clients will be disconnected. It is important to understand that this is done at the TCP/IP implementation level, that is to say by your operating system. If you abruptly terminate a client, accordingly, the TCP/IP implementation of your operating system will cleanly close the connection from its port Y to your server's port 5000. In both cases, at the network level, that would be the same as if you explicitly (not abruptly) closed the connection in your code. ...and once closed, there's no way you can possibly re-establish those connections as they were before. You have to establish new connections. If you want to establish these new connections and get the application logic back to the state it was in before, now that's another topic. TCP alone can't help you here. You need a higher-level protocol, maybe your own, to implement a stateful client/server application.
0
1,309
0
0
2015-09-04T11:33:00.000
python,sockets,networking,connection
Keeping python sockets alive in event of connection loss
1
2
2
32,399,470
0
0
0
I've noticed that the FTP library doesn't seem to have a method or function for straight-up downloading a file from an FTP server. The only function I've come across for downloading a file is ftp.retrbinary, and in order to transfer the file contents, you essentially have to write the contents to a pre-existing file on the local computer where the Python script is located. Is there a way to download the file as-is without having to create a local file first? Edit: I think the better question to ask is: do I need to have a pre-existing file in order to download an FTP server file's contents?
false
32,406,089
0
0
0
0
To download a file from FTP, this code will do the job:
    import urllib
    urllib.urlretrieve('ftp://server/path/to/file', 'file')
    # if you need to pass credentials:
    # urllib.urlretrieve('ftp://username:password@server/path/to/file', 'file')
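Since the question specifically asks about ftplib's retrbinary and whether a pre-existing local file is needed, here is a minimal sketch (an addition, not part of the original answer) of downloading into an in-memory buffer instead; the host, credentials, and file path are placeholders:
    import io
    from ftplib import FTP

    ftp = FTP('ftp.example.com')           # placeholder host
    ftp.login('username', 'password')      # placeholder credentials
    buf = io.BytesIO()                     # in-memory buffer, no local file needed
    ftp.retrbinary('RETR path/to/file', buf.write)
    ftp.quit()
    data = buf.getvalue()                  # file contents as bytes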
0
214
0
0
2015-09-04T20:27:00.000
python,ftp
How do I simply transfer and download a file from an FTP server with Python?
1
1
1
40,610,963
0
0
0
I have something like 1500 mail messages in eml format and I want to parse them and get the e-mail addresses that caused the error, along with the error message (or code). I would like to try to do it in Python. Does anyone have an idea how to do that other than parsing line by line and searching for the line and error code (or know of software to do that)? I see nothing about errors in the mail headers, which is sad.
false
32,438,646
0
1
0
0
So you have 1500 .eml files and want to identify mails from mailer-daemons and which address caused the mailer-daemon message? Just iterate over the files, then check the From: line and see if it is a mailer-daemon message, and then get the address that caused the error out of the text. There is no other way than iterating over them line by line.
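To make that iteration concrete (this sketch is an addition, not part of the original answer), a rough pass with the standard email module; the directory name and the address-matching regex are assumptions and the body handling is deliberately naive:
    import glob
    import re
    import email

    for path in glob.glob('mails/*.eml'):          # assumed directory
        with open(path) as f:
            msg = email.message_from_file(f)
        sender = msg.get('From', '')
        if 'mailer-daemon' not in sender.lower():
            continue                               # not a bounce message
        body = msg.get_payload()
        if isinstance(body, list):                 # multipart: flatten crudely
            body = ' '.join(str(part) for part in body)
        # naive pattern for the failing recipient address
        addresses = re.findall(r'[\w.+-]+@[\w.-]+\.\w+', str(body))
        print(path, set(addresses))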
0
165
0
0
2015-09-07T12:20:00.000
python,email,parsing
Parse mailer daemons, failure notices
1
1
1
32,442,153
0
0
0
I'm trying to directly edit an XML file's text. I'd prefer to find and remove a certain phrase, potentially by using the "sub" function. For particular reasons I'd prefer not to return the edited strings and then find a way to replace the existing XML file's text. Is there an easy way to do this? Thanks for any help.
false
32,460,273
0
0
0
0
No, in Python you can not change strings in place as Python strings are immutable.
1
75
0
3
2015-09-08T14:22:00.000
python,regex,xml,xml-parsing
Is it possible to use Regular Expression to alter a string directly instead of returning altered version of the string?
1
1
1
32,904,156
0
0
0
I have to write a Python script which will copy a file in S3 to my EBS directory. The problem is that I'm running this Python script from my local machine. Is there any boto function with which I can copy from S3 to EBS without storing the file locally?
false
32,478,432
0.53705
0
1
3
No. EBS volumes are accessible only on the EC2 instance they're mounted on. If you want to download a file directly from S3 to an EBS volume, you need to run your script on the EC2 instance.
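As a small follow-up illustration (not part of the original answer): once the script runs on the EC2 instance that has the EBS volume mounted, the copy is a plain S3 download to a path on that volume. The bucket, key, and target path below are placeholders, and the mount point is an assumption:
    import boto3

    s3 = boto3.client('s3')
    # runs on the EC2 instance; /data is assumed to be the mounted EBS volume
    s3.download_file('my-bucket', 'path/to/object', '/data/object')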
0
830
0
0
2015-09-09T11:33:00.000
python,amazon-web-services,amazon-s3,boto,boto3
Copy file from S3 to EBS
1
1
1
32,478,697
0
0
0
I'm currently able to connect a client computer to two servers on my local network (using Python sockets), but because I'm trying to emulate an external networking set-up, I'd like the client to access the machines externally, i.e. for the data to be routed over the internet as opposed to locally and directly. (This is for research purposes, so it's intentionally inefficient.) Would using a machine's IPv6 address as the host be sufficient, or would the router recognize the IPv6 address as internal and just bounce it back as opposed to first sending it to some external node?
false
32,490,281
0.099668
0
0
1
or would the router recognize the IPv6 address as internal and just bounce it back as opposed to first sending it to some external node? Yes
0
147
0
0
2015-09-09T22:40:00.000
python-2.7,sockets,network-programming,ipv6
How to connect two computers on the same local network externally with Python sockets?
1
2
2
32,490,605
0
0
0
I'm currently able to connect a client computer to two servers on my local network (using Python sockets), but because I'm trying to emulate an external networking set-up, I'd like the client to access the machines externally, i.e. for the data to be routed over the internet as opposed to locally and directly. (This is for research purposes, so it's intentionally inefficient.) Would using a machine's IPv6 address as the host be sufficient, or would the router recognize the IPv6 address as internal and just bounce it back as opposed to first sending it to some external node?
false
32,490,281
0.099668
0
0
1
If the client has at least two interfaces, you can assign one interface for local networking and the other one for Internet connection. In addition, you can also try to use virtual interfaces + IP tunnel for the Internet connection.
0
147
0
0
2015-09-09T22:40:00.000
python-2.7,sockets,network-programming,ipv6
How to connect two computers on the same local network externally with Python sockets?
1
2
2
32,491,099
0
0
0
I am using a Kafka topic from Python. Is there any provision for a producer to update a message in a queue in Kafka and append it to the top of the queue again? According to the Kafka spec, it doesn't seem feasible.
false
32,508,415
1
0
0
19
Kafka is a distributed immutable commit log. That being so, there is no way to update a message in a topic. Once it is there, all you can do is consume it, update it, and produce it to another (or the same) topic again.
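A minimal sketch of the consume-update-reproduce pattern the answer describes, added here for illustration; it assumes the kafka-python package, JSON-encoded messages, and placeholder topic names and broker address:
    import json
    from kafka import KafkaConsumer, KafkaProducer

    consumer = KafkaConsumer('source-topic', bootstrap_servers='localhost:9092')
    producer = KafkaProducer(bootstrap_servers='localhost:9092')

    for record in consumer:
        message = json.loads(record.value.decode('utf-8'))
        message['updated'] = True                  # "update" the message
        # re-publish the modified copy; the original record stays untouched
        producer.send('updated-topic', json.dumps(message).encode('utf-8'))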
0
7,086
1
11
2015-09-10T17:40:00.000
python,apache-kafka,kafka-consumer-api,kafka-python
Update message in Kafka topic
1
1
1
32,510,798
0
0
0
My flow is such that I already have the access token available in my backend server. So basically I was using the REST Apis until now for getting all user messages. However, I would like to use the Gmail API batch requests to improve on performance. I see that it is non-trivial to use python requests to do so. The gmail api client for python on the other hand does not seem to have a option where I can use the access token to get the results. Rather I need to use the authorization code which is unavailable to me. Can someone help me solve this? Thanks, Azeem
false
32,514,000
0
1
0
0
You need to activate the Gmail API in your project on Google Developer Console to get the API key which will have separate billing cost involved.
0
100
0
0
2015-09-11T01:14:00.000
python
Gmail Python API: Build service using access token
1
1
1
32,514,035
0
1
0
I can't find a solution to authorize server-to-server authentication using Google SDK + Python + MAC OSx + GMAIL API. I would like testing GMail API integration in my local environment, before publishing my application in GAE, but until now I have no results using samples that I have found in GMail API or OAuth API documentation. During all tests I received the same error "403-Insufficient Permission" when my application was using GCP Service Account, but if I convert the application to use User Account everything was fine.
false
32,524,226
0
0
0
0
A service account isn't you; it's its own user. Even if you could access Gmail with a service account (which I doubt), you would only be accessing the service account's Gmail account (which I don't think it has) and not your own. To my knowledge the only way to access the Gmail API is with OAuth2. Service accounts can be used to access some of the Google APIs, for example Google Drive. The service account has its own Google Drive account; files will be uploaded to its Drive account. I can give it permission to upload to my Google Drive account by adding it as a user on a folder in Google Drive. You can't give another user permission to read your Gmail account, so again the only way to access the Gmail API will be to use OAuth2.
0
163
1
0
2015-09-11T13:04:00.000
python,google-app-engine,gmail,google-oauth,service-accounts
Google App Engine Server to Server OAuth Python
1
1
2
32,524,341
0
1
0
from scrapy.spiders import CrawlSpider, Rule is giving an error. I am using Ubuntu, with Scrapy 0.24.5 and Python 2.7.6. I tried it with the Scrapy tutorial project. I am working in PyCharm.
false
32,525,307
0.033321
0
0
1
Don't delete __init__.py from any place in your project directory. Just because it's empty doesn't mean you don't need it. Create a new empty file called __init__.py in your spiders directory, and you should be good to go.
1
4,316
0
1
2015-09-11T14:00:00.000
python,scrapy
ImportError: No module named spiders
1
3
6
40,071,747
0
1
0
from scrapy.spiders import CrawlSpider, Rule is giving an error. I am using Ubuntu, with Scrapy 0.24.5 and Python 2.7.6. I tried it with the Scrapy tutorial project. I am working in PyCharm.
false
32,525,307
0
0
0
0
Make sure Scrapy is installed. Try running scrapy from your Python directory, or you can try to update Scrapy.
1
4,316
0
1
2015-09-11T14:00:00.000
python,scrapy
ImportError: No module named spiders
1
3
6
32,531,370
0
1
0
from scrapy.spiders import CrawlSpider, Rule is giving an error. I am using Ubuntu, with Scrapy 0.24.5 and Python 2.7.6. I tried it with the Scrapy tutorial project. I am working in PyCharm.
false
32,525,307
0
0
0
0
Most likely the tutorial you are following and your Scrapy version are mismatched. Simply replace (scrapy.Spider) with (scrapy.spiders.Spider). The Spider class was put into the spiders module.
1
4,316
0
1
2015-09-11T14:00:00.000
python,scrapy
ImportError: No module named spiders
1
3
6
46,567,274
0
0
0
I have always used Firefox in webdriver. I want to try using Chrome. I have downloaded chromedriver and included it in the Path variable. However, this code returns an error: >>> webdriver.Chrome() selenium.common.exceptions.WebDriverException: Message: 'chromedriver' executable needs to be in PATH. Please see https://sites.google.com/a/chromium.org/chromedriver/home I have also tried including the path: >>> webdriver.Chrome('C:\Python34\chromedriver_win32.zip') OSError: [WinError 193] %1 is not a valid Win32 application What is the problem here? I am sorry if I am doing something completely wrong or my problem seems hard to solve. Any help will be appreciated. I have also searched all over the internet, but I have not found anything yet. Seriously, can't anybody solve this problem?
true
32,551,277
1.2
0
0
3
It turns out that I had to unzip the folder, and instead of passing the path to the folder as an argument, I had to supply the path to the .exe file itself. Maybe it was an intermittent thing, or something that only didn't work when I posted the question.
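To make that concrete (an illustrative addition, not part of the original answer): the driver wants the full path to the unzipped chromedriver.exe, not the .zip archive or its folder. The path below is a placeholder:
    from selenium import webdriver

    # pass the unzipped executable, not the .zip archive or its containing folder
    driver = webdriver.Chrome(executable_path=r'C:\path\to\chromedriver.exe')
    driver.get('https://example.com')
    driver.quit()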
0
7,011
0
4
2015-09-13T15:28:00.000
python,google-chrome,selenium
Using webdriver to run in Chrome with Python
1
1
1
33,557,990
0
0
0
So I have a question; How does one get the files from a webpage and the urls attached to them. For example, Google.com so we go to google.com and open firebug (Mozilla/chrome) and go to the "network" We then see the location of every file attached, and extension of the file. How do I do this in python? For url stuff, I usually look into urllib/mechanize/selenium but none of these seem to support what I want or I don't know the code that would be associated with it. I'm using linux python 2.7 - Any help/answers would be awesome. Thank you for anyone attempting to answer this. Edit: The things the back end servers generate, I don't know how but firebug in the "net" or "network" section show this information. I wondered if it could be implemented into python some how.
false
32,590,327
0
0
0
0
It is not difficult to parse the webpage and find the links to all the "attached" files (css, icons, js, images, etc.) which will be fetched by the browser; those are what you see in the 'Network' panel. The harder part is that some files are fetched by JavaScript using AJAX. The only way to catch those (completely and correctly) is to simulate a browser (parse HTML+CSS and run the JavaScript), which I don't think Python can do.
0
76
0
0
2015-09-15T15:44:00.000
python,url,networking,request
Get files attached to URL using python
1
1
2
32,590,563
0
0
0
EDIT: I want to telnet into my web server on localhost, and request my php file from command line: I have: 1) cd'd into the directory I want to serve, namely "/www" (hello.php is here) 2) run a server at directory www: python -m SimpleHTTPServer 3) telnet localhost 80 but "connection is refused". what am I doing wrong?
true
32,599,677
1.2
1
0
0
You're probably trying to connect to the wrong port: python -m SimpleHTTPServer listens on port 8000 by default, not 80. Check with netstat -lntp which port your HTTP server is actually listening on; the process will be listed as python/pid_number.
0
2,140
0
0
2015-09-16T04:14:00.000
php,python,telnet
Telnet connection on localhost Refused
1
1
2
32,620,211
0
0
0
I'm a bit confused by some of the design decisions in the Python ElementTree API - they seem kind of arbitrary, so I'd like some clarification to see if these decisions have some logic behind them, or if they're just more or less ad hoc. So, generally there are two ways you might want to generate an ElementTree - one is via some kind of source stream, like a file, or other I/O stream. This is achieved via the parse() function, or the ElementTree.parse() class method. Another way is to load the XML directly from a string object. This can be done via the fromstring() function. Okay, great. Now, I would think these functions would basically be identical in terms of what they return - the difference between the two of them is basically the source of input (one takes a file or stream object, the other takes a plain string.) Except for some reason the parse() function returns an ElementTree object, but the fromstring() function returns an Element object. The difference is basically that the Element object is the root element of an XML tree, whereas the ElementTree object is sort of a "wrapper" around the root element, which provides some extra features. You can always get the root element from an ElementTree object by calling getroot(). Still, I'm confused why we have this distinction. Why does fromstring() return a root element directly, but parse() returns an ElementTree object? Is there some logic behind this distinction?
false
32,620,254
0
0
0
0
I'm thinking the same as remram in the comments: parse takes a file location or a file object and preserves that information so that it can provide additional utility, which is really helpful. If parse did not return an ET object, then you would have to keep better track of the sources and whatnot in order to manually feed them back into the helper functions that ET objects have by default. In contrast to files, strings (by definition) do not have the same kind of information attached to them, so you can't create the same utilities for them (otherwise there very well may be an ET.parsefromstring() method which would return an ET object). I suspect this is also the logic behind the method being named parse instead of ET.fromfile(): I would expect the same object type to be returned from fromfile and fromstring, but can't say I would expect the same from parse (it's been a long time since I started using ET, so there's no way to verify that, but that's my feeling). On the subject remram raised of placing utility methods on Elements: as I understand the documentation, Elements are extremely uniform when it comes to implementation. People talk about "root Elements," but the Element at the root of the tree is literally identical to all other Elements in terms of its class attributes and methods. As far as I know, Elements don't even know who their parent is, which is likely to support this uniformity. Otherwise there might be more code to implement the "root" Element (which doesn't have a parent) or to re-parent subelements. It seems to me that the simplicity of the Element class works greatly in its favor. So it seems better to me to leave Elements largely agnostic of anything above them (their parent, the file they come from) so there can't be any snags concerning 4 Elements with different output files in the same tree (or the like). When it comes to using the module in code, it seems to me that the script would have to recognize the input as a file at some point, one way or another (otherwise it would be trying to pass the file to fromstring). So there shouldn't arise a situation in which the output of parse is unexpected, such that the ElementTree is assumed to be an Element and processed as such (unless, of course, parse was called without the programmer checking to see what parse returns, which just seems like a poor habit to me).
0
1,617
0
9
2015-09-16T23:15:00.000
python
Python ElementTree: ElementTree vs root Element
1
1
2
37,981,930
0
1
0
I'm facing a new problem. I'm writing a scraper for a website; usually for this kind of task I use Selenium, but in this case I cannot use anything that simulates a web browser. Researching on StackOverflow, I read that the best solution is to understand what the JavaScript did and rebuild the request over HTTP. Yeah, I understand that well in theory, but I don't know how to start, as I don't know the technologies involved well. In my specific case, some HTML is added to the page when the button is clicked. With developer tools I set a breakpoint on the 'click' event, but from here, I'm literally lost. Can anyone link some resources and examples I can study?
false
32,680,534
0.379949
0
0
2
In most cases, it is enough to analyze the "Network" tab of the developer tools and see the requests that are fired when you hit the button you mentioned. Once you understand those requests, you will be able to implement your scraper to run similar requests and grab the relevant data.
0
84
0
0
2015-09-20T14:35:00.000
javascript,python,browser,beautifulsoup,python-requests
Python - Rebuild Javascript generated code using Requests Module
1
1
1
32,680,624
0
0
0
I've written a TLS parser library in C++ which now I need to write unit tests for. The library is simply fed the TLS data stream and it invokes various callbacks on certain events in TLS protocol. I'm searching for a Python implementation of TLS protocol for both client and server sides which allow me to create several deterministic and reproducible TLS data connections with parameters of my choosing (cipher suites, certificates, transmitting different TLS protocol messages, etc) and simultaneously dump the traffic in a raw binary file. Does twisted allow me to create such test setup and if so is there any code sample available to help me jump start this project?
false
32,794,943
0.197375
0
0
1
Ultimately Twisted will just speak TLS to a Transport and dump bytes into it; you can specify a pyOpenSSL context object configured however you like. So this is really more of a question about pyOpenSSL or Cryptography. The TLS handshake generally involves generating random data (session keys) at various points. While I think it is probably possible to make OpenSSL do something completely deterministic by plugging in a special ENGINE that generates non-random random data, this is a use-case that the Twisted TLS toolchain is definitely not geared towards. (For example, randomness is always global in OpenSSL, even though the other parts of the SSL stack are local to the SSL_CTX.)
0
80
0
0
2015-09-26T07:57:00.000
python,ssl,automated-tests,twisted
Create a controlled TLS conversation in Python twisted and store it to be used as test data
1
1
1
32,816,518
0
1
0
I want to know if there is a response from requests.get(url) when the page is fully loaded. I did tests with around 200 refreshes of my page and it happens randomly that once or twice the page does not load the footer.
false
32,796,751
0
0
0
0
A requests GET will return you the entire page, but requests is not a browser: it does not parse the content. When you load a page with a browser, it usually makes 10-50 additional requests for the page's resources, runs the JavaScript, and so on.
0
522
0
0
2015-09-26T11:35:00.000
python,python-requests,loaded
Detect page is fully loaded using requests library
1
1
1
32,796,957
0
1
0
I am using Selenium to navigate a webpage. To analyze the elements and data, I use BeautifulSoup because of the excellent options they give, including searching with regex. So now I have an element located in BeautifulSoup. I want to select it in Selenium. I figured I could somehow pass a XPath or CSS selector from the BeautifulSoup element to the Selenium element. Is there a direct way of going from a BeautifulSoup element to Selenium element?
false
32,813,646
0
0
0
0
These are completely different tools that, in general, cannot be considered alternatives, though they somewhat cross paths on the "Locating Elements" front. The located elements, though, are very different: one is a Tag instance in BeautifulSoup and the other is a webdriver WebElement instance that can actually be interacted with; it is "live". Both tools support CSS selectors. The support is quite different, but if you don't go in depth with things like multiple attribute checks (a[class*=test1][class^=test] - not gonna work in BeautifulSoup, for instance), nth-child, nth-of-type, going sideways with + etc., you can assume things are gonna work on both ends. Please add examples of the elements you want to correlate and we can work through them.
0
386
0
3
2015-09-27T22:49:00.000
python,selenium,beautifulsoup
Send element from BeautifulSoup to Selenium
1
1
1
32,815,252
0
1
0
I should preface this post by saying that I am a very elementary developer with a generic IS degree. Without going into too much detail, I was given a moderately large web application from an interning software engineer to support an enhance if need be. It was written primarily in Python, JavaScript and HTML5 and utilizes a Google Map API to visually represent the location and uses of given inputs. This leads me to my question. There is a date picker modal that the application/user utilizes. They pick a START and END date, in the default format YYYY-MM-DD (if the user does not use that exact format (i.e. 2015-09-29) the date picker will not work), and the application then goes to the DB and picks the given inputs between those dates and represents them on the map. I have been told that, for usability, I have to make the program accept multiple date formats (i.e. September 29 2015, 09-29-2015, 9-29-2015, 9/29/2015). How would I go about doing this?
false
32,847,487
0
0
0
0
You can use JavaScript to give the user feedback that the correct format is being used. But if you are sending any data to your server, be sure to verify the data on the server. To verify the date format you can use regular expressions and check whether any allowed format matches. You should iterate through all allowed possibilities until one matches.
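Since the backend is Python, a minimal sketch of the "iterate through allowed formats" idea on the server side (this is an addition, not part of the original answer; the list of accepted formats is an assumption based on the examples in the question):
    from datetime import datetime

    ACCEPTED_FORMATS = ['%Y-%m-%d', '%m-%d-%Y', '%m/%d/%Y', '%B %d %Y']  # assumed list

    def parse_date(text):
        """Try each allowed format until one matches; raise if none do."""
        for fmt in ACCEPTED_FORMATS:
            try:
                return datetime.strptime(text.strip(), fmt)
            except ValueError:
                continue
        raise ValueError('Unrecognized date format: %r' % text)

    # parse_date('September 29 2015') and parse_date('9/29/2015') both normalize
    # to datetime(2015, 9, 29), which can then be formatted back as '%Y-%m-%d'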
1
163
0
0
2015-09-29T15:04:00.000
javascript,python,date,web,format
javascript accepting multiple date format
1
1
2
32,847,574
0
0
0
Rethinkdb IO reaches 100% whenever there is a data upload. The load reaches near about 50. Is this a common phenomenon, or do we need to do some optimizations here?
false
32,860,932
0.197375
0
0
1
RethinkDB uses a blocker pool to do IO. On Linux systems, each thread in this blocker pool contributes 1 to the load average while blocking on disk, so RethinkDB sometimes causes the system to report an extremely high load average even under normal load. Using 100% of your disk throughput is a different story. If you're running an IO-heavy workload on a slow disk, especially on a rotating drive, then that's pretty reasonable, but it does mean that you might have scaling problems if you want to do more disk-intensive operations. If you start to have those scaling problems, probably the best solution would be to get a faster disk.
0
86
0
0
2015-09-30T08:03:00.000
io,rethinkdb,rethinkdb-python
Rethinkdb IO reaches 100%
1
1
1
32,872,765
0
1
0
I have a scraper to pull search results from a number of travel websites. Now that I have the search results nicely displayed with "Book Now" buttons, I want those "Book Now" buttons to redirect to the specific search result so the user can book that specific travel search result. These search results are dynamic so the redirect may change. What's the easiest way to accomplish this? I'm building this search engine in Python/Django and have Django CMS.
true
32,870,164
1.2
0
0
0
Are you storing the results in a database or some persistent storage mechanism (maybe even in a KV store)? Once you hold the results somewhere on your website, you can redirect from your results page via the Book Now button to a view with the result's identifying value (say some hash) and have that view redirect to the website offering the service.
0
66
0
0
2015-09-30T15:34:00.000
python,django,web-scraping,url-redirection
Redirect to a specific search result
1
1
1
32,870,284
0
0
0
I've been learning about web scraping using BeautifulSoup in Python recently, but earlier today I was advised to consider using XPath expressions instead. How does the way XPath and BeautifulSoup both work differ from each other?
false
32,911,933
0.379949
0
0
4
I would suggest bs4; its usage and docs are more friendly, which will save you time and increase confidence, which is very important when you are self-learning string manipulation. However, in practice it will require a strong CPU. I once scraped with no more than 30 connections on my 1-core VPS, and the CPU usage of the Python process stayed at 100%. It could be the result of a bad implementation, but later I changed everything to re.compile and the performance issue was gone. As for performance, regex > lxml >> bs4. As for getting things done, no difference.
0
4,014
0
5
2015-10-02T16:43:00.000
python,xpath,web-scraping,beautifulsoup
Pros and Cons of Python Web Scraping using BeautifulSoup vs XPath
1
1
2
32,912,192
0
1
0
I made a website with many pages; on each page is a sample essay. The homepage is a page with a search field. I'm attempting to design a system where a user can type in a word and, when they click 'search', multiple paragraphs containing the searched word from the pages with sample essays are loaded onto the page. I'm 14 and have been programming for about 2 years; can anyone please explain to me the programming languages/technologies I'll need to accomplish this task and provide suggestions as to how I can achieve it? All I have so far are the web pages with articles and a custom search page I've made with PHP. Any suggestions?
true
32,913,859
1.2
0
0
1
The programming language does not really matter for the way you solve the problem; you can implement it in whichever language you are comfortable with. There are two basic ways to solve the problem:
1. Use a crawler which creates an index of the words found on the different pages, then use that index to look up the searched word, or
2. When the user has entered the search expression, start crawling the pages and check whether the search expression is found.
Of course both solutions have different (dis)advantages. For example: in 1) you need to do an initial crawl (and update it later on when the pages change) and store the crawl result in some sort of database, but you will get instant search results; in 2) you don't need a database/datastore, but you will have to wait until all pages have been searched before showing the final result list.
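As an illustrative aside (not part of the original answer), the word index in option 1 can be as simple as a mapping from word to the pages containing it; the page data and the tokenization below are deliberately naive placeholders:
    import re
    from collections import defaultdict

    # essay_pages maps a page URL/path to its plain text (placeholder data)
    essay_pages = {'essay1.html': 'text of essay one ...',
                   'essay2.html': 'text of essay two ...'}

    index = defaultdict(set)
    for page, text in essay_pages.items():
        for word in re.findall(r'[a-z]+', text.lower()):
            index[word].add(page)

    # at search time, look the word up instead of re-reading every page
    print(index.get('essay', set()))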
0
41
0
0
2015-10-02T18:46:00.000
java,php,python,html,mysql
Creating a webpage crawler that finds and maches user input
1
1
1
32,915,448
0
1
0
I'm using Selenium with PhantomJS in order to scrape a dynamic website with infinite scroll. It's working but my teacher suggested to use a mobile phantom driver in order to get the mobile version of the website. With the mobile version I expect to see less Ads or JavaScript and retrieve the information faster. There is any "phantom mobile driver"?
true
32,952,744
1.2
0
0
3
There is no such thing as a "phantom mobile driver". You can change the user agent string and the viewport/window size in order to suggest to the website to deliver the same markup that a mobile client would receive.
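As an illustration of that suggestion (an addition, not part of the original answer), the user agent string, URL, and window size below are placeholders for whatever mobile device you want to emulate:
    from selenium import webdriver

    caps = webdriver.DesiredCapabilities.PHANTOMJS.copy()
    caps['phantomjs.page.settings.userAgent'] = (
        'Mozilla/5.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) '
        'AppleWebKit/600.1.4 (KHTML, like Gecko) Mobile/12A365')   # placeholder UA
    driver = webdriver.PhantomJS(desired_capabilities=caps)
    driver.set_window_size(375, 667)    # roughly a phone-sized viewport
    driver.get('https://example.com')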
0
370
0
1
2015-10-05T15:54:00.000
python,selenium,mobile,selenium-webdriver,phantomjs
Is it possibile to use PhantomJS like a mobile driver in Selenium?
1
1
1
32,953,131
0
0
0
I need to work with a very huge page (there are a lot of elements, really) with Selenium and Chromedriver. After navigation happens and the page loads, the test gets hung for more than 2 hours. Chrome is consuming 100% CPU during this process. I suspect it is parsing the loaded page. Is there a way to avoid or handle it somehow? (I know that the page should not be that huge, but that is a different story.) Thanks in advance for your help.
true
32,994,489
1.2
0
0
0
All, thanks for your help. I found the root cause. Actually the problem was with non optimal usage of find_elements. Even when it is called once it executes for ages. Replaced with a workaround using find_element and it started to work. The workaround is fragile, but it's better than nothing.
0
224
0
1
2015-10-07T14:07:00.000
python,selenium,selenium-chromedriver
Chromedriver works with page for too long
1
1
2
32,999,196
0
0
0
I am implementing Selenium and I have already included "from selenium import webdriver", but I am still getting the error "ImportError: cannot import name webdriver". Any idea how to resolve this error?
false
33,009,532
0
0
0
0
Cross check your language binding, check with older versions
0
1,185
0
0
2015-10-08T07:40:00.000
python-2.7,selenium-webdriver
ImportError: cannot import name webdriver
1
1
1
33,010,025
0
0
0
This might be bad practice so forgive me, but when Python exits on a non-telnetlib exception or a non-paramiko (SSH) exception, will the SSH or Telnet connection automatically close? Also, will sys.exit() close all connections that the script is using?
true
33,017,847
1.2
1
0
1
Yes, the system (Linux and Windows) keeps track of all of the resources your process uses. They can be files, mutexes, sockets, anything else. When the process dies, all of the resources are freed. It doesn't really matter which programming language you use and how you terminate your application. There are several subtle exceptions to this rule, like the TIME_WAIT state for server sockets or resources held by zombie processes, but in general you can assume that whenever your process is terminated, all of its resources are freed. UPD. As mentioned in the comments, the OS cannot guarantee that the resources were freed properly. In the network connection case it means that there is no guarantee that the FIN packet was sent, so although everything was cleaned up on your machine, the remote endpoint can still wait for data from you. Theoretically, it can wait infinitely long. So it is always better practice to use a "finally" statement to notify the other endpoint about closing the connection.
0
702
0
1
2015-10-08T13:54:00.000
python,ssh,telnet,paramiko
Does python automatically close SSH and Telnet
1
1
2
33,018,073
0
0
0
I'm having a problem with a Python script which should check if the user is connected to a wifi network with a captive portal. Specifically, the script is long-running, attempting to connect to example.org every 60 seconds. The problem is that if the network starts offline (meaning the wifi isn't connected at the start of the script), socket.getaddrinfo will always fail with the error "Name or service not known", even once the wifi is connected, until the Python script is restarted. (This isn't a DNS thing -- all requests fail.) Because both urllib and requests use sockets, it's totally impossible to download an example page once Python gets into this state. Is there a way around this or a way to reset sockets so it works properly once the network fails? To be clear, here's a repro: 1) Disconnect wifi. 2) Run an interactive Python session. 3) import urllib and urllib.urlopen("http://stackoverflow.com/") -- fails as expected. 4) Reconnect wifi. 5) urllib.urlopen("http://example.com/"). Expected: returned HTML from example.com. Actual: socket.gaierror: [Errno -2] Name or service not known
false
33,042,478
-0.197375
0
0
-1
If you're not connected to an access point when running the script, and don't have an IP address assigned to your device socket.getaddrinfo will fail. Maybe it's still connecting when you run the script. The domain name cannot be resolved because you are not connected to the network, thus no DNS. Does it fail when you're actually connected to the network? Does curl http://icanhazip.com work at the point when the script fails? Or if you run ifconfig does your device have an IP? (I'm assuming you're on a *nix box).
0
302
0
1
2015-10-09T15:51:00.000
python,python-2.7
socket.getaddrinfo fails if network started offline
1
1
1
33,042,717
0
0
0
I am working on phylogenies by using Python libraries (Bio.Phylo and DendroPy). I have to import 2 trees in Newick format (this is obviously not the difficult part) and join them together, more precisely I have to add one tree at one tip/leaf of another. I have tried with add_child and new_child methods from DendroPy, but without success. How would I solve this issue?
false
33,059,170
0
0
0
0
Without resorting to anything fancier than an editor you could find your "tip" in tree1 and insert the string that is tree2 at that point. (being nested sets and all)
1
91
0
2
2015-10-10T21:31:00.000
python,bioinformatics,phylogeny
Join 2 trees using Python (dendroPy or other libraries)
1
1
1
36,169,560
0
0
0
I am running a cron job which executes the Python script for reading Gmail (2 minute interval). I have used imaplib for reading the new mails. This was working fine until yesterday. Suddenly it's throwing the below error: imaplib.error: [AUTHENTICATIONFAILED] Invalid credentials (Failure) and sometimes I am getting the below error: raise self.abort(bye[-1]) imaplib.abort: [UNAVAILABLE] Temporary System Error When I run the same script on a different machine, it's working fine. I am assuming that the host has been blacklisted or something like that. What are my options? I can't generate the credentials (Gmail API) as this is under a company domain account.
false
33,119,667
0.033321
1
0
1
Thanks guys. It's working now. The issue was that Google blocked our network because of multiple attempts. I tried that unlock URL from a different machine and it didn't work. The catch is that you have to open that URL on the machine where you are trying to run the script. Hope it may help someone :)
0
23,571
0
17
2015-10-14T07:53:00.000
python,authentication,gmail-api,imaplib
reading gmail is failing with IMAP
1
3
6
33,492,711
0
0
0
I am running a cron job which executes the Python script for reading Gmail (2 minute interval). I have used imaplib for reading the new mails. This was working fine until yesterday. Suddenly it's throwing the below error: imaplib.error: [AUTHENTICATIONFAILED] Invalid credentials (Failure) and sometimes I am getting the below error: raise self.abort(bye[-1]) imaplib.abort: [UNAVAILABLE] Temporary System Error When I run the same script on a different machine, it's working fine. I am assuming that the host has been blacklisted or something like that. What are my options? I can't generate the credentials (Gmail API) as this is under a company domain account.
false
33,119,667
0.066568
1
0
2
Got the same error and it was fixed by getting a new Google app password. Maybe this will work for someone.
0
23,571
0
17
2015-10-14T07:53:00.000
python,authentication,gmail-api,imaplib
reading gmail is failing with IMAP
1
3
6
41,952,132
0
0
0
I am running a cron job which executes the Python script for reading Gmail (2 minute interval). I have used imaplib for reading the new mails. This was working fine until yesterday. Suddenly it's throwing the below error: imaplib.error: [AUTHENTICATIONFAILED] Invalid credentials (Failure) and sometimes I am getting the below error: raise self.abort(bye[-1]) imaplib.abort: [UNAVAILABLE] Temporary System Error When I run the same script on a different machine, it's working fine. I am assuming that the host has been blacklisted or something like that. What are my options? I can't generate the credentials (Gmail API) as this is under a company domain account.
false
33,119,667
1
1
0
24
Some apps and devices use less secure sign-in technology, so we need to enable the "Less secure app access" option for the Gmail account. Steps:
1. Log in to Gmail
2. Go to Google Account
3. Navigate to the Security section
4. Turn on access for "Less secure app access"
By following the above steps, the issue will be resolved.
0
23,571
0
17
2015-10-14T07:53:00.000
python,authentication,gmail-api,imaplib
reading gmail is failing with IMAP
1
3
6
60,630,329
0
0
0
I would like to create multiple sockets between all users. So how can I pass a key and ID such that the server is divided into separate windows? Thank you.
true
33,126,905
1.2
0
0
1
You do exactly that: you [can] pass around keys and make them show up in separate windows. From the way you've phrased your question, you appear new to streams/sockets. I'd recommend you first start with one socket and make a chat application so you can get a feel for how to develop protocols which let you do that.
0
141
0
2
2015-10-14T13:37:00.000
python,ios,objective-c,xcode,socketrocket
SocketRocket (iOS) : How to identify whom user are chatting with another?
1
1
1
33,127,431
0
0
0
I'm trying to use the oslo config package. But I found that someone is using this package like this: import oslo.config, while some others use it like this: import oslo_config. I'm confused; can anyone tell me what the difference between these two packages is? Thanks
false
33,205,691
0.197375
0
0
1
The oslo namespace package is deprecated; after oslo.config is installed you can see nothing in /usr/lib/python2.7/dist-packages/oslo except the middle directory. So use oslo_config instead when you want to import something.
0
363
0
0
2015-10-19T02:27:00.000
python,oslo
what is the different between oslo.config and oslo_config?
1
1
1
34,715,692
0
1
0
I'm new to Python and websocket programming. I want to send data that was encrypted with an RSA key (in Python) through a websocket to a server in the cloud (using NodeJS). To decrypt that data I need that key, right? How can I send the RSA key to the server and use that key to decrypt? Thank you
false
33,210,021
0
0
0
0
You haven't provided information on how you are encrypting your data, but you should never send the private key over the network. Never. Doing that is as secure as locking your house but leaving the key in the keyhole. If you've done it even once, throw the key away and generate a new one. The strength of RSA comes from the fact that anyone holding the public key can encrypt data, but only the one holding the private key can decrypt. Think of it as an old video store's return box: any customer can put videos into it through the hole, but only the store staff can take the videos out of it. What you want to do is:
1. Generate the keys on the server
2. Client calls the server and grabs the public key
3. Client encrypts data using the retrieved public key
4. Client sends the encrypted data to the server
5. Server decrypts it using the private key
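To ground that flow in code (this sketch is an addition to the answer and uses PyCryptodome as one possible library; the key size and message are arbitrary, and in practice the two halves would run on different machines):
    from Crypto.PublicKey import RSA
    from Crypto.Cipher import PKCS1_OAEP

    # server side: generate the key pair once; only the public half is shared
    key_pair = RSA.generate(2048)
    public_key_pem = key_pair.publickey().export_key()

    # client side: encrypt with the public key received from the server
    public_key = RSA.import_key(public_key_pem)
    ciphertext = PKCS1_OAEP.new(public_key).encrypt(b'secret payload')

    # server side: only the holder of the private key can decrypt
    plaintext = PKCS1_OAEP.new(key_pair).decrypt(ciphertext)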
0
120
0
0
2015-10-19T08:38:00.000
python,encryption,autobahn
Sent RSA Key Through Autobahn
1
1
1
33,211,565
0
1
0
I am writing some test cases in the Robot Framework using Ride. I can run the tests on both Chrome and Firefox, but for some reason Internet Explorer is not working. I have tested with the iedriverServer.exe (32bit version 2.47.0.0). One thing to add is that I am using a proxy. When I disable the proxy in IE and enable the automatic proxy configuration... IE can start up. But it can not load the website. For Chrome and FF the proxy is working fine. Error message: WebDriverException: Message: Can not connect to the IEDriver.
false
33,212,370
0.099668
0
0
1
I have also encountered the same problem. Below are the steps which I have followed:
1. I have enabled the proxy in IE.
2. Set the environment variable no_proxy to 127.0.0.1 before launching the browser, e.g.: Set Environment Variable    no_proxy    127.0.0.1
3. Set all the internet zones to the same level (medium to high), except restricted sites: Open browser > Tools > Internet Options > Security tab
4. Enable "Enable Protected Mode" in all zones.
Please let me know your feedback.
0
5,285
0
3
2015-10-19T10:42:00.000
python,internet-explorer,internet-explorer-11,robotframework
Robot Framework Internet explorer not opening
1
1
2
35,595,207
0
1
0
I'm quite new to the whole Selenium thing and I have a simple question. When I run tests (Django application) on my local machine, everything works great. But how should this be done on a server? There is no X, so how can I start up the webdriver there? What's the common way? Thanks
false
33,231,156
0
0
0
0
I suggest you use a continuous integration solution like Jenkins to run your tests periodically.
0
63
0
0
2015-10-20T08:01:00.000
python,django,selenium,selenium-webdriver
Selenium on server
1
1
1
33,231,191
0
0
0
I used requests to login to a website using the correct credentials initially. Then I tried the same with some invalid username and password. I was still getting response status of 200. I then understood that the response status tells if the corresponding webpage has been hit or not. So now my doubt is how to verify if I have really logged in to the website using correct credentials
true
33,236,054
1.2
0
0
0
HTTP status codes are usually meant for the browsers, or in case of APIs for the client talking to the server. For normal web sites, using status codes for semantical error information is not really useful. Overusing the status codes there could even cause the browser to not render responses correctly. So for normal HTML responses, you would usually expect a code 200 for almost everything. In order to check for errors, you will then have to check the—application specific—error output from the HTML response. A good way to find out about these signs is to just try logging in from the browser with invalid credentials and then check what output is rendered. Or as many sites also show some kind of user menu once you’re logged in, check for its existence to figure out if you’re logged in. And when it’s not there, the login probably failed.
0
98
0
1
2015-10-20T11:58:00.000
python,python-requests
How to verify that we have logged in correctly to a website using requests in python?
1
2
3
33,236,211
0
0
0
I used requests to login to a website using the correct credentials initially. Then I tried the same with some invalid username and password. I was still getting response status of 200. I then understood that the response status tells if the corresponding webpage has been hit or not. So now my doubt is how to verify if I have really logged in to the website using correct credentials
false
33,236,054
0
0
0
0
What status code the site responds with depends entirely on their implementation; you're more likely to get a non-200 response if you're attempting to log in to a web service. If a login attempt yielded a non-200 response on a normal website, it'd require a special handler on their end, as opposed to a 200 response with a normal page prompting you (presumably a human user, not a script) with a visual cue indicating login failure. If the site you're logging into returns a 200 regardless of success or failure, you may need to use something like lxml or BeautifulSoup to look for indications of success or failure (which presumably you'll be using already to process whatever it is you're logging in to access).
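A small sketch of that approach (an illustration added here, not part of the original answer); the URL, form field names, and the "logout" marker are assumptions that depend entirely on the site being scraped:
    import requests

    session = requests.Session()
    resp = session.post('https://example.com/login',               # placeholder URL
                        data={'username': 'me', 'password': 'secret'})

    # the status code is 200 either way, so inspect the page itself
    if 'logout' in resp.text.lower() or 'sign out' in resp.text.lower():
        print('logged in')
    else:
        print('login failed')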
0
98
0
1
2015-10-20T11:58:00.000
python,python-requests
How to verify that we have logged in correctly to a website using requests in python?
1
2
3
33,236,191
0
0
0
How do I list the names of users who tweeted with a given keyword, along with the count of tweets from them? I am using Python and tweepy. I used tweepy to write the JSON results to a file with filter(track=["keyword"]), but I don't know how to list the users who tweeted the given keyword.
true
33,273,885
1.2
1
0
0
Once your data has been loaded into JSON format, you can access the username by calling tweet['user']['screen_name'], where tweet is whatever variable you have assigned that holds the JSON object for that specific tweet.
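As a follow-up illustration (not part of the original answer), once the streamed tweets are saved as one JSON object per line, counting who tweeted the keyword is straightforward; the file name and one-tweet-per-line layout are assumptions:
    import json
    from collections import Counter

    counts = Counter()
    with open('tweets.json') as f:           # one JSON tweet per line (assumed)
        for line in f:
            tweet = json.loads(line)
            counts[tweet['user']['screen_name']] += 1

    for user, n in counts.most_common():
        print(user, n)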
1
578
0
0
2015-10-22T05:22:00.000
python,api,twitter,tweepy,twitter-streaming-api
how to list all users who tweeted a given keyword using twitter api and tweepy
1
1
1
33,290,201
0
0
0
I'm writing a Python script to grab my bank account details using urllib. To access the login page, there is a hyperlink button present on the page. How can I get my script to click that button or indeed bypass it? Any help on this would be appreciated.
false
33,279,279
0
0
0
0
Using urllib you can't perform a click on an <a> tag. You may want to look into selenium-webdriver for that matter. You can fetch its href attribute value and then call the urllib.urlopen(path) function (make sure path variable contains the full path and not the relative path).
0
1,484
0
1
2015-10-22T10:51:00.000
python,urllib
Python: Clicking a hyperlink button with urllib
1
2
2
33,279,321
0
0
0
I'm writing a Python script to grab my bank account details using urllib. To access the login page, there is a hyperlink button present on the page. How can I get my script to click that button or indeed bypass it? Any help on this would be appreciated.
false
33,279,279
0
0
0
0
I want to add to Jason's answer that if you're going to pass login data you might want to step away from urllib and use requests. I would find the url of the page that has the login form and use requests there.
0
1,484
0
1
2015-10-22T10:51:00.000
python,urllib
Python: Clicking a hyperlink button with urllib
1
2
2
33,279,434
0
0
0
I have a s3 bucket with a file in it and while retrieving file from s3 bucket, I want to check if it is retrieved from the source I am expecting from and not from some man-in-the-middle thingy. What is the best way to do that? Something with Authentication header may be or associate with key?
false
33,290,705
0
0
0
0
You use HTTPS. SSL certificates not only serve as a public key for encryption, they are also signed by a trusted certificate authority and confirm that the server you intended to contact possesses a certificate (and has the correlated private key) that matches the hostname you used when you made the contact. A properly-configured client will refuse to communicate over HTTPS with a server presenting a certificate with a mismatched hostname, or a certificate signed by an untrusted/unknown certificate authority.
0
60
0
0
2015-10-22T21:03:00.000
python,amazon-s3
python : signature matching aws s3 bucket
1
1
1
33,291,826
0
1
0
I have an EC2 instance and an S3 bucket in different regions. The bucket contains some files that are used regularly by my EC2 instance. I want to programmatically download the files on my EC2 instance (using Python). Is there a way to do that?
false
33,298,821
0
0
1
0
As mentioned above, you can do this with Boto. To make it more secure and not worry about the user credentials, you could use IAM to grant the EC2 machine access to the specific bucket only. Hope that helps.
0
6,069
0
5
2015-10-23T09:17:00.000
python,amazon-web-services,amazon-s3,amazon-ec2,amazon-iam
Access to Amazon S3 Bucket from EC2 instance
1
1
5
33,375,622
0
0
0
My basic problem is that I am looking for a way for multiple clients to connect to a server over the internet, and for the server to be able to tell if those clients are online or offline. My current way of doing this is a python socket server, and python clients, which send the server a small message every 2 seconds. The server checks each client to see if it has received such a message in the last 5 seconds, and if not, the client is marked as offline. However, I feel that is is probably not the best way of doing this, and even if it is, there might be a library that does this for me. I have looked for such a library but have come up empty handed. Does anyone know of a better way of doing this, or a library which can automatically check the status of multiple connected clients? Note: by "offline", I mean that the client could be powered off, network connection disconnected or program quit.
true
33,307,845
1.2
0
0
1
Assuming you are not after ping from server to client, I believe that your approach is fine. Very often the server will not be able to reach the client, but it works the other way around. You may run out of resources if you have many connected clients. Also, over this established channel you can send other data/metrics, and boom, monitoring was born ;-) If you send other data you will probably realize you don't need to send the heartbeat every 2 seconds, but only when no other data was sent - boom, FIX works this way (and many other messaging protocols). What you may like is something like Kafka, which will transport the messages for you; there are other messaging protocols too, and they scale better than just connecting all clients directly (assuming you have many of them). Happy messaging
0
1,192
0
0
2015-10-23T17:04:00.000
python,sockets,server,client,disconnect
Python client-server - tell if client offline
1
1
2
33,307,987
0
0
0
Am I able to send and receive data (therefore, to use sendto and recvfrom methods) via the same UDP socket simultaneously in Python? I need to listen for new packets while sending some data to previous clients from another thread.
true
33,310,292
1.2
0
0
2
Yes, UDP sockets are bidirectional.
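To illustrate (an addition, not part of the original answer), one common pattern is a receive loop on a background thread while the main thread sends from the very same socket; the addresses and ports below are placeholders:
    import socket
    import threading

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(('0.0.0.0', 5005))             # placeholder local port

    def receive_loop():
        while True:
            data, addr = sock.recvfrom(4096)
            print('got %r from %s' % (data, addr))

    t = threading.Thread(target=receive_loop)
    t.daemon = True
    t.start()

    # meanwhile the main thread keeps sending on the same socket
    sock.sendto(b'hello', ('192.0.2.10', 5005))   # placeholder peer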
0
583
0
0
2015-10-23T19:45:00.000
python,sockets,network-programming
Am I able to send and receive data via the same UDP socket simultaneously
1
1
1
33,310,427
0
1
0
I am trying to extract the data from a webpage after logging in. To log in to the website, I can see the token (authenticity_token) in the Form Data section. It seems the token is generated automatically. I am trying to get the token value but have had no luck. Please, can anyone help me on this: how do I get the token value while sending the POST request?
false
33,321,076
0
0
0
0
You need to get the response from the page and then regex-match for the token.
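A rough sketch of that idea (added for illustration, not part of the original answer); the login URL, form field names, and the token-matching pattern are assumptions about the target site:
    import re
    import requests

    session = requests.Session()
    login_page = session.get('https://example.com/login')          # placeholder URL

    # pull the hidden token out of the login form's HTML
    match = re.search(r'name="authenticity_token"\s+value="([^"]+)"', login_page.text)
    token = match.group(1)

    # the same Session keeps the cookies, so the server can tie the token to it
    session.post('https://example.com/login',
                 data={'username': 'me', 'password': 'secret',
                       'authenticity_token': token})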
0
70
0
0
2015-10-24T17:19:00.000
python,session,cookies,python-requests
How to get token value while sending post requests
1
2
2
33,321,212
0
1
0
I am trying to extract the data from a webpage after logging in. To log in to the website, I can see the token (authenticity_token) in the Form Data section. It seems the token is generated automatically. I am trying to get the token value but have had no luck. Please, can anyone help me on this: how do I get the token value while sending the POST request?
false
33,321,076
0
0
0
0
The token value is stored in the cookie file. Check the cookie file and extract the value from it. For example, a cookie file after login contains jsession ID=A01~xxxxxxx where 'xxxxxxx' is the token value. Extract this value and post it.
0
70
0
0
2015-10-24T17:19:00.000
python,session,cookies,python-requests
How to get token value while sending post requests
1
2
2
37,413,165
0
1
0
I am experiencing a once per 60-90 minute spike in traffic that's causing my Heroku app to slow to a crawl for the duration of the spike - NewRelic is reporting response times of 20-50 seconds per request, with 99% of that down to the Heroku router queuing requests up. The request count goes from an average of around 50-100rpm up to 400-500rpm Looking at the logs, it looks to me like a scraping bot or spider trying to access a lot of content pages on the site. However it's not all coming from a single IP. What can I do about it? My sysadmin / devops skills are pretty minimal. Guy
false
33,345,960
0
0
0
0
Have your host based firewall throttle those requests. Depending on your setup, you can also add Nginx in to the mix, which can throttle requests too.
0
46
0
0
2015-10-26T12:30:00.000
heroku,python-requests
How to deal with excessive requests on heroku
1
1
1
33,346,079
0
0
0
I am using Python v2.7 on a Win 7 PC. I have my robot connected to the computer and COM 4 pops up in Device Manager. My plan is to send API commands to the robot through COM 4. Here is the question: how can Python identify which serial port is for which device? So far, I can list all the available ports in Python, but I need to specifically talk to COM 4 to communicate with the robot. As a newbie, any help would be appreciated.
false
33,354,977
0
1
0
0
Here is the Python code to open a specific serial port:
    ser = serial.Serial(3)  # open COM 4 (port numbers start at 0)
    print ser.name          # check which port was really used
and 'ser' is the serial object.
0
64
0
0
2015-10-26T20:28:00.000
python
Serial Port Identity in Python
1
1
2
33,374,589
0
1
0
I am using the Boto 3 python library, and want to connect to AWS CloudFront. I need to specify the correct AWS Profile (AWS Credentials), but looking at the official documentation, I see no way to specify it. I am initializing the client using the code: client = boto3.client('cloudfront') However, this results in it using the default profile to connect. I couldn't find a method where I can specify which profile to use.
false
33,378,422
1
0
0
8
Just add the profile to the session configuration before the client call:
    boto3.session.Session(profile_name='YOUR_PROFILE_NAME').client('cloudfront')
0
163,599
0
214
2015-10-27T21:02:00.000
python,amazon-web-services,boto3,amazon-iam,amazon-cloudfront
How to choose an AWS profile when using boto3 to connect to CloudFront
1
1
4
57,297,264
0