[{"Question":"First of all, I can't use any executables. I need to do this in pure Python but sadly requests and BS4 doesn't support JS pages and Selenium needs a webdriver which is an executable.\nDoes anyone knows \/ Is there any way to scrape a JS Rendered page using purely Python and it's modules without having to run any exe?\nI'm not asking for exact solutions, only for the method and modules, if it's \neven possible.\nThank you for reading this and thank you for any constructive comments!\nHave a nice day!\nFor the full context: I'm trying to run a web-scraping script on a daily basis on a cloud that doesn't allows running any exes. Tried it with Selenium and PhantomJS but got a no permission error.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":51,"Q_Id":51965133,"Users Score":0,"Answer":"Sorry for the misunderstanding, just noticed that the free version of the cloud didn't give permission for these. I got a paid one and it now works like a charm.","Q_Score":1,"Tags":"python,python-3.x,web-scraping","A_Id":51965854,"CreationDate":"2018-08-22T10:46:00.000","Title":"Any way to scrape a JS Rendered page without using executables in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I get this error when I try to import wxPython from python 3.7. I have google around but to no luck. Any help would be appreciated.\n\nTraceback (most recent call last):\n File \"C:\/Users\/STEVE\/Desktop\/Python Files\/Chat Bot\/Joyla\/joyla.py\", line 3, in \n import wx\n File \"C:\\Users\\STEVE\\AppData\\Local\\Programs\\Python\\Python37-32\\lib\\site-packages\\wx__init__.py\", line 17, in \n from wx.core import *\n File \"C:\\Users\\STEVE\\AppData\\Local\\Programs\\Python\\Python37-32\\lib\\site-packages\\wx\\core.py\", line 12, in \n from ._core import *\n ImportError: DLL load failed: The specified module could not be found.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":742,"Q_Id":51976677,"Users Score":1,"Answer":"I had this same problem with the same exact error message. It turned out that I have mistakenly installed the 32 bit version of python instead of the 64 bit version.\nThe python web site will trick you - if you just use the download link on the top page it will give you the 32 bit version. You have to go to the downloads page, then windows, then look for the 64 bit version - \"Windows x86-64 executable installer\". \nThe actual file name of the 64 bit download is\n \"python-3.7.0-amd64.exe\". 
\nIf you get the 32 bit version the file name will be \n \"python-3.7.0.exe\".\nAnother way to check, after installing, is to open a python console (or a command prompt and type \"python\" to open the python command line).\nIf you have the 32 bit version it will say:\n Python 3.7.0 (v3.7.0:1bf9cc5093, Jun 27 2018, 04:06:47) \n [MSC v.1914 32 bit (Intel)] on win32\nIf you have the 64 bit version it will say:\n Python 3.7.0 (v3.7.0:1bf9cc5093, Jun 27 2018, 04:59:51) \n [MSC v.1914 64 bit (AMD64)] on win32","Q_Score":0,"Tags":"python,dll,wxpython,importerror","A_Id":52228847,"CreationDate":"2018-08-23T00:01:00.000","Title":"ImprtError when importing wxPython from Python3.7","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to manipulate existing svg files in python? I know the svgwrite package but it only creates new svg files... I just want to add 3 numbers to an existing svg file. It should be only an update showing the old svg file with these 3 new numbers.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1445,"Q_Id":51982876,"Users Score":0,"Answer":"No. As with other text files, you need to read the file to memory, modify and then write the modified file to disk.","Q_Score":1,"Tags":"python,svg,svgwrite","A_Id":51983709,"CreationDate":"2018-08-23T09:46:00.000","Title":"Manipulating SVG-files with python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"how to install twilio via pip?\nI tried to install twilio python module\nbut i can't install it \ni get following error \nno Module named twilio\nWhen trying to install twilio\npip install twilio\nI get the following error.\npyopenssl 18.0.0 has requirement six>=1.5.2, but you'll have six 1.4.1 which is incompatible.\nCannot uninstall 'pyOpenSSL'. 
It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.\ni got the answer and installed\npip install --ignore-installed twilio\nbut i get following error\n\nCould not install packages due to an EnvironmentError: [Errno 13] Permission denied: '\/Library\/Python\/2.7\/site-packages\/pytz-2018.5.dist-info'\nConsider using the `--user` option or check the permissions.\n\ni have anaconda installed \nis this a problem?","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":2822,"Q_Id":51985401,"Users Score":-1,"Answer":"step1:download python-2.7.15.msi\nstep 2:install and If your system does not have Python added to your PATH while installing\n\"add python exe to path\"\nstep 3:go C:\\Python27\\Scripts of your system\nstep4:in command prompt C:\\Python27\\Scripts>pip install twilio\nstep 5:after installation is done >python command line\n import twilio\nprint(twilio.version)\nstep 6:if u get the version ...you are done","Q_Score":2,"Tags":"python,module,installation,twilio","A_Id":53578904,"CreationDate":"2018-08-23T12:03:00.000","Title":"How to install twilio via pip","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to retrieve the objects\/items (server name, host name, domain name, location, etc...) that are stored under the saved quote for a particular Softlayer account. Can someone help how to retrieve the objects within a quote? I could find a REST API (Python) to retrieve quote details (quote ID, status, etc..) but couldn't find a way to fetch objects within a quote.\nThanks!\nBest regards,\nKhelan Patel","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":71,"Q_Id":51988650,"Users Score":0,"Answer":"Thanks Albert getRecalculatedOrderContainer is the thing I was looking for.","Q_Score":0,"Tags":"python,rest,ibm-cloud,ibm-cloud-infrastructure","A_Id":52010150,"CreationDate":"2018-08-23T14:53:00.000","Title":"How to retrieve objects from the sotlayer saved quote using Python API","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to set up Google OAuth 2.0 on my homepage.\nOther things seem to be well set up, but CallBack url is a problem.\nI'm using \"https\" and also entered my callback url that starts with https in Google Oauth 2.0 console, but OAuth is still trying to callback configured with http url. 
How do I fix it?\nIf I go directly into the callback redirection url starting with https, it works fine.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":70,"Q_Id":51996810,"Users Score":0,"Answer":"I solved this problem myself.\nSimply added it in the settings.py was solution.\nACCOUNT_DEFAULT_HTTP_PROTOCOL = \"https\"\nThat's all.","Q_Score":0,"Tags":"django,python-3.6,google-oauth,django-allauth","A_Id":52008496,"CreationDate":"2018-08-24T02:53:00.000","Title":"Google OAuth 2.0 keep trying to callback using \"http\" url","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"If requests.Session() can handle cookies and does almost everything that app.test_client() does. Then why use the app.test_client()?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":3506,"Q_Id":52028124,"Users Score":2,"Answer":"test_client is already prebuilt into flask, this makes it easier for people to quickly test their programs. Both the requests utility and test_client server the same functionality, so the usage is just based on personal preference.","Q_Score":11,"Tags":"python,python-3.x,flask","A_Id":52045779,"CreationDate":"2018-08-26T16:21:00.000","Title":"Why use Flask's app.test_client() over requests.Session() for testing","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a directed bigraph that contains directed and undirected edges.\nnetworkx_Graph = nx.read_adjlist('graph.txt', create_using=nx.DiGraph())\nI was able to find the number of directed edges using: len(list(networkx_Graph.in_edges(data=False)))\nBut I am trying to find the number of undirected edges.\nThis is quite easy using the python package snap, but I am not finding anything like this in networkx's documentation?\nWhat is the networkx equivalent of snap.CntUniqDirEdges()?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":372,"Q_Id":52030344,"Users Score":0,"Answer":"You have to convert the graph to an undirected graph the calculate the size. NetworkX does not have a function that does this in one shot like snap does.\n\nnetworkx_Graph.to_undirected().size()","Q_Score":0,"Tags":"python,graph,networkx","A_Id":52031095,"CreationDate":"2018-08-26T21:19:00.000","Title":"How can I find the number of unique undirected edges of a graph using networkx?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an AWS Lambda (python) function that is set to trigger on S3 Object Create. What I am trying to do is push the created file to a network drive through our VPC but I am not entirely sure how to configure this or the python code to map a shared drive through the VPC. Am I thinking about this in the wrong way?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":941,"Q_Id":52046518,"Users Score":0,"Answer":"Step 1: Accessing the file (Object created event) from Lambda Python Code can be done through Boto3 Library. 
AWS SDK for Python\nStep 2: Configuring your lambda function for VPC - to be done on the Console or through CLI when you create the lambda function. On the Lamda Function Details Page, there is a \"Network\" block of UI containing a drop down called \"VPC\". VPC End Points will be listed here and to be selected.","Q_Score":0,"Tags":"python,amazon-s3,aws-lambda,amazon-vpc","A_Id":52073184,"CreationDate":"2018-08-27T20:25:00.000","Title":"How can I download a file from S3 to local network drive using lambda?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am facing token expire issue every 20 to 40 mins but actual time is one hour but I need a token validity one day.\nPlease help me.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":146,"Q_Id":52054877,"Users Score":0,"Answer":"This is not possible to change the token validity period with AWS Cognito User Pools. Currently, its fixed to 1 hour.","Q_Score":0,"Tags":"python-3.x,aws-lambda,aws-cognito","A_Id":52055046,"CreationDate":"2018-08-28T09:48:00.000","Title":"How to increase the AWS Cognito Access Token Validity one hour to one day in Amazon Cognito User Pools","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a way to provide a single URL for a user to download all the content from an S3 path?\nOtherwise, is there a way to create a zip with all files found on an S3 path recursively?\nie. my-bucket\/media\/123\/*\nEach path usually has 1K+ images and 10+ videos.","AnswerCount":3,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":3130,"Q_Id":52064420,"Users Score":3,"Answer":"There's no built-in way. You have to download all files, compact them \"locally\", re-upload it, and then you'll have a single URL for download.","Q_Score":1,"Tags":"python,amazon-web-services,amazon-s3,aws-lambda","A_Id":52064465,"CreationDate":"2018-08-28T18:43:00.000","Title":"Create a zip with files from an AWS S3 path","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I need some assistance with migrating with Heroku as I've added git+https:\/\/github.com\/Rapptz\/discord.py@rewrite#egg=discord.py[voice] into my requirements.txt file and I'm confused as when I do that, it doesn't change to rewrite, even if I still import discord. I have changed all my code to rewrite, like for example: bot.say to ctx.send. All of that is done, but when I place git+https:\/\/github.com\/Rapptz\/discord.py@rewrite#egg=discord.py[voice] into my requirements.txt file, it still thinks it's async. 
Please help as I tried so much just to get this working and I can't seem to find a way.\nThanks in advance.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":710,"Q_Id":52075155,"Users Score":1,"Answer":"LOL wait woops I just had to add yarl<1.2 to requirements.txt","Q_Score":0,"Tags":"python,asynchronous,heroku,discord,discord.py-rewrite","A_Id":52083610,"CreationDate":"2018-08-29T10:24:00.000","Title":"How do I change from Discord.py async to rewrite while using Heroku?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to store data about tweets in the database in raw format and figured out that you can pull out the jsob from tweepy.Status for this purpose like this:\nstatus._json\nHow can I parse json back to the tweepy.Status object?","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":913,"Q_Id":52077085,"Users Score":3,"Answer":"I've found non-elegant solution for my problem. All you need is that:\ntweepy.Status().parse(None, status_json)\nwhere None should be tweepy.api.API object, but it not nedeed for parsing at all.\n\nYou can also compare the result with the original status for self-check. In my case this has True result:\ntweepy.Status().parse(None, status_json) == status","Q_Score":1,"Tags":"python,json,tweepy","A_Id":52077086,"CreationDate":"2018-08-29T12:08:00.000","Title":"How to parse JSON string to tweepy.Status object?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have given a download file url in parse method, what is the default location for that files and Is there any possibility to change that downloaded directory?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":162,"Q_Id":52082742,"Users Score":0,"Answer":"The response contents in parse(self, response) method are not files and are not stored on disk. The response is a python object and is stored in memory. \nScrapy only downloads contents if HTTP_CACHE_ENABLED setting is set and it will cache all pages in \/.scrapy directory.","Q_Score":0,"Tags":"python,scrapy","A_Id":52088635,"CreationDate":"2018-08-29T17:04:00.000","Title":"what is default downloaded path in scrapy?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"so I'm having the weirdest problem! I have a python script which was working just fine and accessing a local mongodb and all was good.. then I wanted to add a new feature but I wanted to try it first so I copied my script into a new file and when I tried to run it it didn't access to mongodb and it kept on giving me this error pymongo: [Errno 10013] An attempt was made to access a socket in a way forbidden by its access permissions\n BUT the old script is still working just fine!!\nI searched every were google could have taken me and tried everything I know but the problem is still there all new scripts are giving me this error and all old ones are working just fine.. 
\ncan anyone please help me?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":501,"Q_Id":52087994,"Users Score":1,"Answer":"the problem was with the antivirus.. I have comodo antivirus and it has been blocking the scripts,, I unblocked them and now they work :)","Q_Score":0,"Tags":"python,mongodb,dbaccess","A_Id":52088266,"CreationDate":"2018-08-30T01:16:00.000","Title":"pymongo: [Errno 10013] An attempt was made to access a socket in a way forbidden by its access permissions","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Python: 2.7.9\nSelenium 2.53.6 \n\nWe drive Selenium from Python, and it works great.\nIssue is that when we hit this line:\ndriver = webdrive.Firefox()\nThe Windows Firewall pops up, asking us to give permission to python.exe\nWe notice that we can ignore the firewall prompt, and everything seems to work OK.\nCan anyone tell us:\n\nWhy something in python land (selenium specifically) is opening a port\nThe port open that triggers the firewall prompt is clearly NOT required, (since we can ignore the prompt, and web drier still works). What is the explanation for this? \nHow we can , in code, suppress the firewall prompt? (e.g. by perhaps only allowing the engine to bind to 127.0.0.1 rather than the device IP?)","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":630,"Q_Id":52102843,"Users Score":-1,"Answer":"Why something in python land (selenium specifically) is opening a port\nIs this required?\n\nyour Python selenium code must communicate with a standalone server (geckodriver) over HTTP. This can be a local or remote connection depending on your configuration... but you must allow this socket connection to use selenium with Firefox.","Q_Score":1,"Tags":"python,selenium","A_Id":52107370,"CreationDate":"2018-08-30T17:53:00.000","Title":"Selenium: Why does initializing the Firefox webdriver from Python trigger a Windows Firewall alert?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I installed selenium and downloaded chromedriver.exe\nWhen i run the code in my gitbash terminal then its working but not working when I run a python script in visual studio code.\nOn internet it say to put the file in the path but i don't know much about it. 
Where should i place the chromedriver.exe?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":3951,"Q_Id":52111479,"Users Score":0,"Answer":"I use Anaconda for which i placed chromedriver.exe in the following\nC:\\Users\\AppData\\Local\\Continuum\\anaconda3\\Scripts","Q_Score":1,"Tags":"python,selenium,google-chrome,selenium-webdriver,selenium-chromedriver","A_Id":52137040,"CreationDate":"2018-08-31T08:31:00.000","Title":"Where to place ChromeDriver while executing tests through selenium in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I installed selenium and downloaded chromedriver.exe\nWhen i run the code in my gitbash terminal then its working but not working when I run a python script in visual studio code.\nOn internet it say to put the file in the path but i don't know much about it. Where should i place the chromedriver.exe?","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":3951,"Q_Id":52111479,"Users Score":1,"Answer":"driver=webdriver.Chrome(executable_path=r'C:\\Users\\littl\\Downloads\\chromedriver_win32\\chromedriver.exe')","Q_Score":1,"Tags":"python,selenium,google-chrome,selenium-webdriver,selenium-chromedriver","A_Id":57448535,"CreationDate":"2018-08-31T08:31:00.000","Title":"Where to place ChromeDriver while executing tests through selenium in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using python and boto to assume an AWS IAM role. I want to see what policies are attached to the role so i can loop through them and determine what actions are available for the role. I want to do this so I can know if some actions are available instead of doing this by calling them and checking if i get an error. However I cannot find a way to list the policies for the role after assuming it as the role is not authorised to perform IAM actions.\nIs there anyone who knows how this is done or is this perhaps something i should not be doing.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":209,"Q_Id":52119306,"Users Score":1,"Answer":"To obtain policies, your AWS credentials require permissions to retrieve the policies.\nIf such permissions are not associated with the assumed role, you could use another set of credentials to retrieve the permissions (but those credentials would need appropriate IAM permissions).\nThere is no way to ask \"What policies do I have?\" without having the necessary permissions. This is an intentional part of AWS security because seeing policies can reveal some security information (eg \"Oh, why am I specifically denied access to the Top-Secret-XYZ S3 bucket?\").","Q_Score":0,"Tags":"python,amazon-web-services,aws-sdk,boto3,amazon-iam","A_Id":52123513,"CreationDate":"2018-08-31T16:13:00.000","Title":"How to list available policies for an assumed AWS IAM role","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Twilio to send and receive SMS messages from a Python application. 
The issue is that their tutorials use ngrok as a way to get through the firewall but I don't want to have to run ngrok every time I run my app and the URL changes every time ngrok runs so I have to change the webhook url on Twilio every time. Is there a better way around this? Is this something that requires a server?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1159,"Q_Id":52123648,"Users Score":2,"Answer":"There are two options that you have.\n\nThe paid option of ngrok allows you to set a persistent url so that you don't have to chance the webhook url on Twilio each time.\nIf you have a server, then you would also be able to set a persistent url to your server.\n\nUnfortunately, the free version of ngrok does not allow you to set a persistent url.","Q_Score":0,"Tags":"python,twilio,webhooks","A_Id":52123715,"CreationDate":"2018-08-31T23:30:00.000","Title":"How to generate fixed webhook url?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am kinda newby in python thus the question. \nI am trying to create a simple http web server that can receive chunked data from a post request. \nI have realized later that once a request sends a headers with chunked data, the Content-length headers will be set to zero, thus reading the sent data with 'request.get_data()' will fail.\nIs there another way of reading the chunked data?\nThe request I receive does give me the data length in the 'X-Data-Length' headers.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":519,"Q_Id":52137499,"Users Score":0,"Answer":"Did you write both of the js upload file code and the flask in backend to handle upload request? If not then you will need some help with js to upload it.\nOne way to achieve chucked data upload is:\n\nChucked that file in the frontend with js. Give it some headers in the request for the total size, number of the chunk, chunk size... and send each chuck in a separate POST request (You can use dropzone.js for example, they will do the job for you, just need to config the params) \nIn the backend, create an upload API which will read the request headers and merge the file chunks back together","Q_Score":0,"Tags":"python-2.7,flask","A_Id":52137640,"CreationDate":"2018-09-02T13:29:00.000","Title":"Flask unable to receive chunked data","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have some code on my mac in the latest version of python idle 3, that collects certain data from a csv file that gets sent to myself and prints out the output in the terminal. I want to create a webpage that has a button or link that a user clicks and it runs the code and prints out the output of my program.\nEventually i want to be able to create a website with multiple links that can do the same operation.\nWill i need to create an sql database? If so how?...","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":876,"Q_Id":52156530,"Users Score":0,"Answer":"From the sound of it, you want to use a webpage as a user interface for your python script. 
Unfortunately without utilising a server-side language this is not possible.\nMultiple options exist for reacting to a button press on the server side, with PHP being the most well known, but solutions using only python do exist, such as Flask.\nIf you're just after a local GUI for your script, simpler options exist within python such as Tk.","Q_Score":0,"Tags":"html,database,python-3.x,web","A_Id":52156697,"CreationDate":"2018-09-03T21:54:00.000","Title":"How to run a python script when a hyperlink is clicked from a webpage?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is it possible to send additional metadata with a Celery 'FAILURE' state?\nCurrently I'm only finding it possible to send exception data and nothing else. Ideally I'd like to send with it an array of extra information that can be picked up by my frontend.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1346,"Q_Id":52163940,"Users Score":0,"Answer":"I don't think so. However, you can access the task's traceback property to get the stacktrace, does that help?","Q_Score":2,"Tags":"python,celery","A_Id":52164731,"CreationDate":"2018-09-04T10:14:00.000","Title":"Additional Metadata on Celery 'FAILURE'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a question here. I have been using Python's Etree to parse XML and do syntax checking on it. The problem I am having is it will throw an error when it is unable to parse the XML, but it is not good about indicating where the mistake was actually first made. I realized what I kind of need is to be able to enforce a rule that says there is to be no '>' in the text of an XML element (which for my XML purposes is correct and sound). Is there a way to tell Etree to do this when parsing the XML? I know there is libxml, but if I am to use a library that doesn't come by default with Python 2.75, then I will need the source code as I am not allowed to install additional Python libraries where I work. So, an answer to the question about enforcing no '>' in the text of an XML element, and some suggestions on how to spot the line where a mistake is first made in an XML document; such as forgetting the opening '<' in a closing XML tag. Any help would be much appreciated! Thanks.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":90,"Q_Id":52170669,"Users Score":1,"Answer":"I'm not sure about your headline question. Why do you want to enforce a rule that \">\" does not appear in text, since there is no such rule in XML?\nIf you're not happy with the diagnostics you're getting from an XML parser then the only real option is to try a different parser (though do check that you are extracting all the information that's available - I don't know Python's ETree, but some parsers hide the diagnostics in obscure places).\nBut there are some limitations. If a start tag is missing, then no parser is going to be able to tell you where it should have been; it can only tell you where the unmatched end tag is. 
So asking it to tell you \"where the mistake was first made\" is asking too much.","Q_Score":0,"Tags":"python,xml,parsing,syntax","A_Id":52174016,"CreationDate":"2018-09-04T16:38:00.000","Title":"Python XML syntax checking - Enforce NO '>' in the text of an element","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to programming and my first task, which should be really simple, is to automate a proccess, where I log in in a website with my credentials, I click on some buttons and that is it. I am using Python 3.6 on windows 10.\nI am trying to do it using the webbot module, which so far has come really handy but I have one big problem.\nThe standard browser for webbot is Google Chrome and thus the site always opens with Chrome. I need to open the site with Internet Explorer.\nI have set IE as default browser, but nothing changed, Chrome would still open.\nI deleted Chrome, but then when I would run the programm nothing would happen.\nI checked the init.py file and the drivers folder of the module and I think that this module can only work with chrome.\nIs it possible to use IE or does this mean that this package does not support this browser at all?\nWhich alternatives would you suggest?\nEdit: If I am not mistaken Selenium does not support IE11 on windows 10, so that is not an option, unless I am mistaken.\nThanks in advance","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1106,"Q_Id":52178568,"Users Score":0,"Answer":"There is no support for another browser other than Chrome (as far as the webbot module is concerned).","Q_Score":0,"Tags":"python-3.x,web-scraping","A_Id":53250134,"CreationDate":"2018-09-05T06:36:00.000","Title":"Python - Using a browser other than Chrome with webbot package","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a rather odd situation where I will have multiple interfaces connected to the same network. When I receive a broadcast or multicast message, I would like to know what interface it came from. It there a way to do that in C or ideally Python?\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":65,"Q_Id":52196479,"Users Score":0,"Answer":"The most obvious one would be to bind several sockets, each to one interface only - do not listen to 0.0.0.0.","Q_Score":2,"Tags":"python,c,sockets,network-programming,udp","A_Id":52196631,"CreationDate":"2018-09-06T04:23:00.000","Title":"How do you know which interface a broadcast message came from?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using Selenium to scrape table data from a website. I found that I can easily iterate through the rows to get the information that I need using xcode. Does selenium keep hitting the website every time I search for an object's text by xcode? 
Or does it download the page first and then search through the objects offline?\nIf the former is true does is there a way to download the html and iterate offline using Selenium?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":43,"Q_Id":52249090,"Users Score":1,"Answer":"Selenium uses a Web Driver, similar to your web browser. Selenium will access\/download the web page once, unless you've wrote the code to reload the page.\nYou can download the web page and access it locally in selenium. For example you could get selenium to access the web page \"C:\\users\\public\\Desktop\\index.html\"","Q_Score":0,"Tags":"python,selenium","A_Id":52249127,"CreationDate":"2018-09-09T22:24:00.000","Title":"What happens when scraping items from a website table using Selenium?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a list of about 100,000 URLs saved in my computer. ( that 100,000 can very quickly multiply into several million.) For every url, i check that webpage and collect all additional urls on that page, but only if each additional link is not already in my large list. The issue here is reloading that huge list into memory iteratively so i can consistently have an accurate list. where the amount of memory used will probably very soon become much too much, and even more importantly, the time it takes inbetween reloading the list gets longer which is severely holding up the progress of the project.\nMy list is saved in several different formats. One format is by having all links contained in one single text file, where i open(filetext).readlines() to turn it straight into a list. Another format i have saved which seems more helpful, is by saving a folder tree with all the links, and turning that into a list by using os.walk(path).\nim really unsure of any other way to do this recurring conditional check more efficiently, without the ridiculous use of memory and loadimg time. i tried using a queue as well, but It was such a benefit to be able to see the text output of these links that queueing became unneccesarily complicated. where else can i even start?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":54,"Q_Id":52253181,"Users Score":1,"Answer":"The main issue is not to load the list in memory. This should be done only once at the beginning, before scrapping the webpages. The issue is to find if an element is already in the list. The in operation will be too long for large list.\nYou should try to look into several thinks; among which sets and pandas. The first one will probably be the optimal solution.\nNow, since you thought of using a folder tree with the urls as folder names, I can think of one way which could be faster. Instead of creating the list with os.walk(path), try to look if the folder is already present. If not, it means you did not have that url yet. This is basically a fake graph database. To do so, you could use the function os.path.isdir(). 
If you want a true graph DB, you could look into OrientDB for instance.","Q_Score":0,"Tags":"python","A_Id":52253298,"CreationDate":"2018-09-10T07:43:00.000","Title":"Iteratively Re-Checking a Huge list","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a list of about 100,000 URLs saved in my computer. ( that 100,000 can very quickly multiply into several million.) For every url, i check that webpage and collect all additional urls on that page, but only if each additional link is not already in my large list. The issue here is reloading that huge list into memory iteratively so i can consistently have an accurate list. where the amount of memory used will probably very soon become much too much, and even more importantly, the time it takes inbetween reloading the list gets longer which is severely holding up the progress of the project.\nMy list is saved in several different formats. One format is by having all links contained in one single text file, where i open(filetext).readlines() to turn it straight into a list. Another format i have saved which seems more helpful, is by saving a folder tree with all the links, and turning that into a list by using os.walk(path).\nim really unsure of any other way to do this recurring conditional check more efficiently, without the ridiculous use of memory and loadimg time. i tried using a queue as well, but It was such a benefit to be able to see the text output of these links that queueing became unneccesarily complicated. where else can i even start?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":54,"Q_Id":52253181,"Users Score":0,"Answer":"Have you considered mapping a table of IP addresses to URL? Granted this would only work if you are seeking unique domains vs thousands of pages on the same domain. The advantage is you would be dealing with a 12 integer address. The downside is the need for additional tabulated data structures and additional processes to map the data.","Q_Score":0,"Tags":"python","A_Id":52253636,"CreationDate":"2018-09-10T07:43:00.000","Title":"Iteratively Re-Checking a Huge list","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"So there are variants of this question - but none quite hit the nail on the head.\nI want to run spyder and do interactive analysis on a server. I have two servers , neither have spyder. They both have python (linux server) but I dont have sudo rights to install packages I need.\nIn short the use case is: open spyder on local machine. Do something (need help here) to use the servers computation power , and then return results to local machine.\nUpdate:\nI have updated python with my packages on one server. Now to figure out the kernel name and link to spyder.\nLeaving previous version of question up, as that is still useful.\nThe docker process is a little intimidating as does paramiko. What are my options?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":32710,"Q_Id":52283535,"Users Score":1,"Answer":"I did a long search for something like this in my past job, when we wanted to quickly iterate on code which had to run across many workers in a cluster. 
All the commercial and open source task-queue projects that I found were based on running fixed code with arbitrary inputs, rather than running arbitrary code. \nI'd also be interested to see if there's something out there that I missed. But in my case, I ended up building my own solution (unfortunately not open source). \nMy solution was: \n1) I made a Redis queue where each task consisted of a zip file with a bash setup script (for pip installs, etc), a \"payload\" Python script to run, and a pickle file with input data.\n2) The \"payload\" Python script would read in the pickle file or other files contained in the zip file. It would output a file named output.zip.\n3) The task worker was a Python script (running on the remote machine, listening to the Redis queue) that would would unzip the file, run the bash setup script, then run the Python script. When the script exited, the worker would upload output.zip.\nThere were various optimizations, like the worker wouldn't run the same bash setup script twice in a row (it remembered the SHA1 hash of the most recent setup script). So, anyway, in the worst case you could do that. It was a week or two of work to setup.\nEdit: \nA second (much more manual) option, if you just need to run on one remote machine, is to use sshfs to mount the remote filesystem locally, so you can quickly edit the files in Spyder. Then keep an ssh window open to the remote machine, and run Python from the command line to test-run the scripts on that machine. (That's my standard setup for developing Raspberry Pi programs.)","Q_Score":20,"Tags":"python,docker,server,spyder,fabric","A_Id":52285027,"CreationDate":"2018-09-11T20:09:00.000","Title":"Run Spyder \/Python on remote server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have created and REST API using Flask-RESTPlus. I am getting response in json and its fine. Now I have a new requirement to specify Response content type as csv or json. \nI checked API doc there is nothing mentioned !!\nIs it possible to get reponse in csv using Flask-RESTPlus ??","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":586,"Q_Id":52325100,"Users Score":2,"Answer":"I was able to get output in csv . It took some time though\n\n@api.representation('text\/csv')\ndef data_csv(data, code, headers):\n '''Get result in csv '''\n resp = make_response(convert_data(data), code)\n resp.headers.extend(headers)\n return resp","Q_Score":1,"Tags":"python,csv,flask-restplus","A_Id":52396235,"CreationDate":"2018-09-14T04:43:00.000","Title":"Get response from Flask-RESTPlus in csv","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've been working on scrapy for 3 months. for extracting selectors I use simple response.css or response.xpath..\nI'm asked to switch to ItemLoaders and use add_xpath add_css etc.\nI know how ItemLoaders work and ho convinient they are but can anyone compare these 2 w.r.t efficiency? which way is efficient and why ??","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":191,"Q_Id":52330140,"Users Score":0,"Answer":"Item loaders do exactly the same thing underneath that you do when you don't use them. 
So for every loader.add_css\/add_xpath call there will be responce.css\/xpath executed. It won't be any faster and the little amount of additional work they do won't really make things any slower (especially in comparison to xml parsing and network\/io load).","Q_Score":1,"Tags":"python,python-3.x,scrapy,css-selectors","A_Id":52332084,"CreationDate":"2018-09-14T10:30:00.000","Title":"Scrapy: Difference between simple spider and the one with ItemLoader","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I need Twitter tweet button below every blog post.\nHow do I make {% pageurl %} return an absolute URL of that specific blog post?\nThanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":510,"Q_Id":52405554,"Users Score":1,"Answer":"Instead of {% pageurl my_page %}, use {{ my_page.full_url }}.","Q_Score":0,"Tags":"python,django,wagtail","A_Id":52406374,"CreationDate":"2018-09-19T12:09:00.000","Title":"How to make pageurl return an absolute url","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Suppose I have multiple mongodbs like mongodb_1, mongodb_2, mongodb_3 with same kind of data like employee details of different organizations.\nWhen user triggers GET request to get employee details from all the above 3 mongodbs whose designation is \"TechnicalLead\". then first we need to connect to mongodb_1 and search and then disconnect with mongodb_1 and connect to mongodb_2 and search and repeat the same for all dbs.\nCan any one suggest how can we achieve above using python EVE Rest api framework.\nBest Regards,\nNarendra","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":50,"Q_Id":52418721,"Users Score":0,"Answer":"First of all, it is not a recommended way to run multiple instances (especially when the servers might be running at the same time) as it will lead to usage of the same config parameters like for example logpath and pidfilepath which in most cases is not what you want.\nSecondly for getting the data from multiple mongodb instances you have to create separate get requests for fetching the data. There are two methods of view for the model that can be used:\n\nquery individual databases for data, then assemble the results for viewing on the screen.\nQuery a central database that the two other databases continously update.","Q_Score":0,"Tags":"python,mongodb,eve","A_Id":52872010,"CreationDate":"2018-09-20T06:14:00.000","Title":"How to search for all existing mongodbs for single GET request","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm studying Python for 4\/5 months and this is my third project built from scratch, but im not able to solve this problem on my own.\nThis script downloads 1 image for each url given.\nIm not able to find a solution on how to implement Thread Pool Executor or async in this script. I cannot figure out how to link the url with the image number to the save image part. 
\nI build a dict of all the urls that i need to download but how do I actually save the image with the correct name?\nAny other advise?\nPS. The urls present at the moment are only fake one.\nSynchronous version:\n\n\n import requests\n import argparse\n import re\n import os\n import logging\n\n from bs4 import BeautifulSoup\n\n\n parser = argparse.ArgumentParser()\n parser.add_argument(\"-n\", \"--num\", help=\"Book number\", type=int, required=True) \n parser.add_argument(\"-p\", dest=r\"path_name\", default=r\"F:\\Users\\123\", help=\"Save to dir\", )\n args = parser.parse_args()\n\n\n\n logging.basicConfig(format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',\n level=logging.ERROR)\n logger = logging.getLogger(__name__) \n\n\n def get_parser(url_c): \n url = f'https:\/\/test.net\/g\/{url_c}\/1'\n logger.info(f'Main url: {url_c}')\n responce = requests.get(url, timeout=5) # timeout will raise an exeption\n if responce.status_code == 200:\n page = requests.get(url, timeout=5).content\n soup = BeautifulSoup(page, 'html.parser')\n return soup\n else:\n responce.raise_for_status()\n\n\n def get_locators(soup): # take get_parser\n # Extract first\/last page num\n first = int(soup.select_one('span.current').string)\n logger.info(f'First page: {first}')\n last = int(soup.select_one('span.num-pages').string) + 1\n\n # Extract img_code and extension\n link = soup.find('img', {'class': 'fit-horizontal'}).attrs[\"src\"]\n logger.info(f'Locator code: {link}')\n code = re.search('galleries.([0-9]+)\\\/.\\.(\\w{3})', link)\n book_code = code.group(1) # internal code \n extension = code.group(2) # png or jpg\n\n # extract Dir book name\n pattern = re.compile('pretty\":\"(.*)\"')\n found = soup.find('script', text=pattern)\n string = pattern.search(found.text).group(1)\n dir_name = string.split('\"')[0]\n logger.info(f'Dir name: {dir_name}')\n\n logger.info(f'Hidden code: {book_code}')\n print(f'Extension: {extension}')\n print(f'Tot pages: {last}')\n print(f'')\n\n return {'first_p': first, \n 'last_p': last, \n 'book_code': book_code, \n 'ext': extension, \n 'dir': dir_name\n }\n\n\n def setup_download_dir(path, dir): # (args.path_name, locator['dir'])\n # Make folder if it not exist\n filepath = os.path.join(f'{path}\\{dir}')\n if not os.path.exists(filepath):\n try:\n os.makedirs(filepath)\n print(f'Directory created at: {filepath}')\n except OSError as err:\n print(f\"Can't create {filepath}: {err}\") \n return filepath \n\n\n def main(locator, filepath):\n for image_n in range(locator['first_p'], locator['last_p']):\n url = f\"https:\/\/i.test.net\/galleries\/{locator['book_code']}\/{image_n}.{locator['ext']}\"\n logger.info(f'Url Img: {url}')\n responce = requests.get(url, timeout=3)\n if responce.status_code == 200:\n img_data = requests.get(url, timeout=3).content \n else: \n responce.raise_for_status() # raise exepetion \n\n with open((os.path.join(filepath, f\"{image_n}.{locator['ext']}\")), 'wb') as handler:\n handler.write(img_data) # write image\n print(f'Img {image_n} - DONE')\n\n\n if __name__ == '__main__':\n try:\n locator = get_locators(get_parser(args.num)) # args.num ex. 241461\n main(locator, setup_download_dir(args.path_name, locator['dir'])) \n except KeyboardInterrupt:\n print(f'Program aborted...' 
+ '\\n')\n\n\nUrls list:\n\n\n def img_links(locator):\n image_url = []\n for num in range(locator['first_p'], locator['last_p']):\n url = f\"https:\/\/i.test.net\/galleries\/{locator['book_code']}\/{num}.{locator['ext']}\"\n image_url.append(url)\n logger.info(f'Url List: {image_url}') \n return image_url","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":353,"Q_Id":52430038,"Users Score":0,"Answer":"I found the solution in the book fluent python. Here the snippet:\n\n def download_many(cc_list, base_url, verbose, concur_req):\n counter = collections.Counter()\n with futures.ThreadPoolExecutor(max_workers=concur_req) as executor:\n to_do_map = {}\n for cc in sorted(cc_list):\n future = executor.submit(download_one, cc, base_url, verbose)\n to_do_map[future] = cc\n done_iter = futures.as_completed(to_do_map)\n if not verbose:\n done_iter = tqdm.tqdm(done_iter, total=len(cc_list))\n for future in done_iter:\n try:\n res = future.result()\n except requests.exceptions.HTTPError as exc:\n error_msg = 'HTTP {res.status_code} - {res.reason}'\n error_msg = error_msg.format(res=exc.response)\n except requests.exceptions.ConnectionError as exc:\n error_msg = 'Connection error'\n else:\n error_msg = ''\n status = res.status\n if error_msg:\n status = HTTPStatus.error\n counter[status] += 1\n if verbose and error_msg:\n cc = to_do_map[future]\n print('*** Error for {}: {}'.format(cc, error_msg))\n return counter","Q_Score":2,"Tags":"python,python-3.x,asynchronous,python-multithreading,imagedownload","A_Id":52735044,"CreationDate":"2018-09-20T17:05:00.000","Title":"python asyncronous images download (multiple urls)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"So I'm making a school project and I have to make a python server and android client. So I've already created a server using websockets with python 3 (I'm not using Flask or socket.io and etc, just regular websockets) and a client in android studio. I ran everything locally and it works great! \nSo now I want to go up in a level a little bit, I want to host my server on Heroku. I tried but I have some problems with that... As I mentioned, I'm using only websockets and not Flask and that means that I need to specify a host ip\/url and a port. But when I host the server on Heroku it says that \"address already in use\".\nDoes anyone know how solve it?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1539,"Q_Id":52477103,"Users Score":5,"Answer":"You can host in heroku by using the ip \"0.0.0.0\", and get the port from the env variable called \"PORT\". On the client you can connect to the websocket server using \"wss:\/\/yourherokuapp.herokuapp.com\/0.0.0.0\". 
Make sure on your Procfile your script is running as a web process type.","Q_Score":5,"Tags":"python,heroku,websocket","A_Id":54756879,"CreationDate":"2018-09-24T10:14:00.000","Title":"How to host websocket app python server on Heroku without Flask?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm exploring ODL and mininet and able to run both and populate the network nodes over ODL and I can view the topology via ODL default webgui.\nI'm planning to create my own webgui and to start with simple topology view. I need advise and guideline on how I can achieve topology view on my own webgui. Plan to use python and html. Just a simple single page html and python script. Hopefully someone could lead me the way. Please assist and thank you.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":162,"Q_Id":52493515,"Users Score":0,"Answer":"If a web GUI for ODL would provide value for you, please consider working to contribute that upstream. The previous GUI (DLUX) has recently been deprecated because no one was supporting it, although it seems many people were using it.","Q_Score":0,"Tags":"python,html,api,web,opendaylight","A_Id":52532208,"CreationDate":"2018-09-25T08:12:00.000","Title":"How to view Opendaylight topology on external webgui","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"My script scans the price of certain products on nike.com and will keep scraping the price of the products till it goes on sale and at that point it will create multiple instances to login into multiple accounts to purchase the product. \nI already have the function of scraping the website and checking out the product made but I want to know should I use multiprocessing or multithreading to execute the logging in and checking out process?\nWhich will be more efficient at handling multiple instances of the web-automation process? I'm using selenium headless in firefox if that helps.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":353,"Q_Id":52507999,"Users Score":0,"Answer":"Threads are much faster to create, have a smaller memory footprint since they share memory and can communicate by just updating variables, since they all have access to the same memory space.\nI personally like to you the multiprocessing.dummy module that let's you handle threads with the multiprocessing api which is very handy.","Q_Score":0,"Tags":"python,multithreading,selenium,multiprocessing,webautomation","A_Id":52508193,"CreationDate":"2018-09-25T23:42:00.000","Title":"Multithreading or Multiprocessing for a webautomation bot in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to share my internal IP between two devices on a network (using python 3).\nLet's say I have my phone, and my computer. Both connected to the same network. 
I need to run a client and server script to connect the two but in order to do that, my phone (client) needs the ip of the computer (server).\nThe IP of the computer changes all the time (school wifi, nothing I can do about it) and even so I would like this to work instantly when connected to a new router without having to manually enter the IP.\nOne more thing, due to the huge amounts of devices on the network, mapping every device and finding the computer name to get the IP will take too long for its purpose.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":50,"Q_Id":52513050,"Users Score":0,"Answer":"In case anyone was wondering. I take this question as unsolvable, but, in order to solve my issue, I have set my computer to upload its internal IP to a server each time it connects to a new network. My phone then reads the server.","Q_Score":0,"Tags":"python,python-3.x,networking,ip","A_Id":52564276,"CreationDate":"2018-09-26T08:15:00.000","Title":"Exchange internal ip blindly using python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to share my internal IP between two devices on a network (using python 3).\nLet's say I have my phone, and my computer. Both connected to the same network. I need to run a client and server script to connect the two but in order to do that, my phone (client) needs the ip of the computer (server).\nThe IP of the computer changes all the time (school wifi, nothing I can do about it) and even so I would like this to work instantly when connected to a new router without having to manually enter the IP.\nOne more thing, due to the huge amounts of devices on the network, mapping every device and finding the computer name to get the IP will take too long for its purpose.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":50,"Q_Id":52513050,"Users Score":0,"Answer":"Please use DNS for the purpose, or assign static addresses to your devices, and use the defined static addresses in your scripts.","Q_Score":0,"Tags":"python,python-3.x,networking,ip","A_Id":52534087,"CreationDate":"2018-09-26T08:15:00.000","Title":"Exchange internal ip blindly using python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can i import Robot Keywords file from Python script and execute keywords from that file and generate report like Robot html report.\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":229,"Q_Id":52516100,"Users Score":0,"Answer":"You cannot import a robot keyword file from a python script, unless that script is using the low level robot API to run a test. 
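A short sketch of that low-level route (the suite path and output directory are placeholders): robot.run executes a suite that uses the keywords and writes the usual output.xml / log.html / report.html files.

    from robot import run

    # Run the keywords by executing a suite that uses them; the normal
    # HTML log and report land in ./results.
    rc = run("tests/my_suite.robot", outputdir="results")
    print("robot exited with return code", rc)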
\nRobot keywords can only be used from within a running robot test.","Q_Score":0,"Tags":"python,robotframework","A_Id":52516514,"CreationDate":"2018-09-26T10:55:00.000","Title":"How can i import Robot Keywords file from Python script and execute keywords from that file and generate report","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"This might be a silly question. When use sessions with Python to request a file from a website behind a login, can the website detect that you are logging in via a script? How common is it for websites to detect this? I tried looking this up but couldn't find an answer so if this is a repeat question could you point me to some info so I can find my answer?\nThanks in advance!","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":83,"Q_Id":52526787,"Users Score":0,"Answer":"You should change the user agent. But other than that I don\u2019t think its detectable. \nYou can change the user agent by setting a custom header e.g. requests.get(url, headers= {\"user-agent\": \"Mozilla\/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit\/537.36 (KHTML, like Gecko) Chrome\/60.0.3112.113 Safari\/537.36\"}","Q_Score":1,"Tags":"python,server,webserver","A_Id":52526825,"CreationDate":"2018-09-26T22:21:00.000","Title":"Can a web server detect a login from a script?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"This might be a silly question. When use sessions with Python to request a file from a website behind a login, can the website detect that you are logging in via a script? How common is it for websites to detect this? I tried looking this up but couldn't find an answer so if this is a repeat question could you point me to some info so I can find my answer?\nThanks in advance!","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":83,"Q_Id":52526787,"Users Score":0,"Answer":"Nothing is a silly question when you don't have the answer.\nThat being said, every request adds a header by default called the user agent, this can be a multitude of different things but is primarily used to detect what type of device or browser thenuser is connecting to the site with.\nThis includes Python requests! Python sends a user agent Python (version number) in its user agent header.\nIt is very common for webmasters to block these user agents, however, it is extremely simple to spoof the user agent header by changing it prior to sending the request itself.\nYou should look into header customisation of requests. 
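A small illustration of that header customisation, using a requests Session so the spoofed values apply to every call (the URL and header values here are only examples):

    import requests

    session = requests.Session()
    # Override the default "python-requests/x.y.z" user agent and add a
    # typical browser Accept-Language header.
    session.headers.update({
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
                      "(KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36",
        "Accept-Language": "en-US,en;q=0.9",
    })

    response = session.get("https://example.com/login")
    print(response.status_code)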
\nIt is also very common for people writing scripts that connect to webserver to hide, spoof or otherwise obfuscate their user agent, but there are plenty more headers send by default which are also used to block unwanted traffic.\nHope this helps","Q_Score":1,"Tags":"python,server,webserver","A_Id":52526889,"CreationDate":"2018-09-26T22:21:00.000","Title":"Can a web server detect a login from a script?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What is the BeautifulSoup method for retrieving text from an element. I haveprices = soup.find_all('p', class_='price_color') and want to retrieve the text from prices. I tried p = prices.get_text() but got an error stating: 'ResultSet' object has no attribute 'text'","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":36,"Q_Id":52529065,"Users Score":2,"Answer":"find_all returns a ResultSet object and you can iterate that object using a for loop.\nYou can try something such as:\nfor values in soup.find_all('p', class_='price_color'):\n print values.text","Q_Score":0,"Tags":"python,text,beautifulsoup","A_Id":52529129,"CreationDate":"2018-09-27T03:37:00.000","Title":"BeautifulSoup method to retrieve the text from an element","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm working on a scraper project and one of the goals is to get every image link from HTML & CSS of a website. I was using BeautifulSoup & TinyCSS to do that but now I'd like to switch everything on Selenium as I can load the JS.\nI can't find in the doc a way to target some CSS parameters without having to know the tag\/id\/class. I can get the images from the HTML easily but I need to target every \"background-image\" parameter from the CSS in order to get the URL from it. \nex: background-image: url(\"paper.gif\");\nIs there a way to do it or should I loop into each element and check the corresponding CSS (which would be time-consuming)?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":84,"Q_Id":52558105,"Users Score":1,"Answer":"You can grab all the Style tags and parse them, searching what you look.\nAlso you can download the css file, using the resource URL and parse them.\nAlso you can create a XPATH\/CSS rule for searching nodes that contain the parameter that you're looking for.","Q_Score":1,"Tags":"python,css,selenium,web-scraping,selenium-chromedriver","A_Id":52559637,"CreationDate":"2018-09-28T15:04:00.000","Title":"Selenium Python: How to get css without targetting a specific class\/id\/tag","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have one very general question and another a bit more specific. \nLet's start with the general one as it is strictly connected with the second one. \nSo I'd like to create a website (internal, not available on the internet) with a form that validates user input and if it doesn't meet certain criteria- it cannot be submitted. \nDoes it make sense to create this website (and form) with python django? 
I mean- obviously it has a lot of sense but there is something else I am trying to puzzle out here:\nWill it make possible to get user input from this website form and pass it to python selenium script? \nI am trying to figure this out as I'd like to improve ordering process with which I work. It is done with a web application provided by an external provider (all things that happen in this application are automatized with python selenium).\nFor the time being I have to read order form (mostly scanned paper forms, pdfs and faxes) put data to excel and then get them to python selenium script but I would like to stop using excel (as it can be really messy). I have to use excel as an intermediary as I don't have a working OCR (so I am an OCR in this case :)) plus current order forms are very different from one another (and standardization is not an option). \nIs it even possible? Is the way I am thinking about anywhere near common sense? Maybe there is an easier method? Any ideas? Thanks for all advices and suggestions.","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":107,"Q_Id":52595166,"Users Score":3,"Answer":"Pretty broad, but definitely possible. I would use Flask personally. You can pass any data easily from a form to python and execute any python code on that data with a simple Flask website.","Q_Score":0,"Tags":"python,django,selenium","A_Id":52595231,"CreationDate":"2018-10-01T16:22:00.000","Title":"Web user input form and passing its contents to python selenium","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a python script that pulls information from an API (JSON format) and then executes a series of commands and computations based on the API data. I want the computation to run only when there is new data available. So my question is: what is the best way to detect the availability of new data in the API?\nMy current idea is to just pull all the data once every day. Hash the entire thing and compare the hash-numbers. The problem is that python doesn't want to hash a dicitionary object. Any suggestions?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":256,"Q_Id":52607902,"Users Score":0,"Answer":"you can convert data to string and then hash the result. \nyou can use json.dumps() to convert","Q_Score":0,"Tags":"python,hash","A_Id":52607988,"CreationDate":"2018-10-02T12:00:00.000","Title":"Python: best way to detect when API updates?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a script that uses a lot of headless Selenium automation and looped HTTP requests. It's very important that I implement a threading\/worker queue for this script. I've done that.\nMy question is: Should I be using multi-thread or multi-process? Thread or ProcessPool? I know that:\n\"If your program spends more time waiting on file reads or network requests or any type of I\/O task, then it is an I\/O bottleneck and you should be looking at using threads to speed it up.\"\nand...\n\"If your program spends more time in CPU based tasks over large datasets then it is a CPU bottleneck. 
In this scenario you may be better off using multiple processes in order to speed up your program. I say may as it\u2019s possible that a single-threaded Python program may be faster for CPU bound problems, it can depend on unknown factors such as the size of the problem set and so on.\"\nWhich is the case when it comes to Selenium? Am I right to think that all CPU-bound tasks related to Selenium will be executed separately via the web driver or would my script benefit from multiple processes?\nOr to be more concise: When I thread Selenium in my script, is the web driver limited to 1 CPU core, the same core the script threads are running on?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1504,"Q_Id":52617726,"Users Score":4,"Answer":"Web driver is just a driver, a driver cannot drive a car without a car. \nFor example when you use ChromeDriver to communicate with browser, you are launching Chrome. And ChromeDriver itself does no calculation but Chrome does.\nSo to clarify, webdriver is a tool to manipulate browser but itself is not a browser.\nBased on this, definitely you should choose thread pool instead of process pool as it is surely an I\/O bound problem in your python script.","Q_Score":7,"Tags":"python,python-3.x,multithreading,selenium,concurrency","A_Id":52618724,"CreationDate":"2018-10-02T23:27:00.000","Title":"Concurrency and Selenium - Multiprocessing vs Multithreading","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was trying to do a weather api related python program and initially while running the dependencies there was a error that occurred which reads as \n'No module named 'citipy' error'. \nBut i used from citipy import city command initially and even installed citipy using pip install citipy and upgraded it too. \nThe error still persists. Please help.","AnswerCount":4,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":4976,"Q_Id":52638844,"Users Score":2,"Answer":"I was able to solve it by just changing the kernel. I don't know how kernel affects when your trying to import a module, but it worked for me.","Q_Score":0,"Tags":"python,api,openweathermap","A_Id":52750167,"CreationDate":"2018-10-04T04:06:00.000","Title":"ModuleNotFoundError: No module named 'citipy' error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was trying to do a weather api related python program and initially while running the dependencies there was a error that occurred which reads as \n'No module named 'citipy' error'. \nBut i used from citipy import city command initially and even installed citipy using pip install citipy and upgraded it too. \nThe error still persists. Please help.","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":4976,"Q_Id":52638844,"Users Score":1,"Answer":"I had initially installed citipy through git bash, but when I ran my code in jupyter notebook it did not run. 
To solve the problem I had to install citipy through anaconda prompt (which is how I open my jupyter notebook) and I think this is how both the citipy and your code are in the same kernel.","Q_Score":0,"Tags":"python,api,openweathermap","A_Id":55272731,"CreationDate":"2018-10-04T04:06:00.000","Title":"ModuleNotFoundError: No module named 'citipy' error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was trying to do a weather api related python program and initially while running the dependencies there was a error that occurred which reads as \n'No module named 'citipy' error'. \nBut i used from citipy import city command initially and even installed citipy using pip install citipy and upgraded it too. \nThe error still persists. Please help.","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":4976,"Q_Id":52638844,"Users Score":0,"Answer":"It happened the same to me, and I noticed that I skipt to activate pythondata from anaconda prompt","Q_Score":0,"Tags":"python,api,openweathermap","A_Id":69748249,"CreationDate":"2018-10-04T04:06:00.000","Title":"ModuleNotFoundError: No module named 'citipy' error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am running multiple scrapers using the command line which is an automated process.\nPython : 2.7.12\nScrapy : 1.4.0\nOS : Ubuntu 16.04.4 LTS\nI want to know how scrapy handles the case when \n\nThere is not enough memory\/cpu bandwidth to start a scraper.\nThere is not enough memory\/cpu bandwidth during a scraper run.\n\nI have gone through the documentation but couldn't find anything.\nAnyone answering this, you don't have to know the right answer, if you can point me to the general direction of any resource you know which would be helpful, that would also be appreciated","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":95,"Q_Id":52643398,"Users Score":1,"Answer":"The operating system kills any process that tries to access more memory than the limit.\nApplies to python programs too. and scrapy is no different.\nMore often than not, bandwidth is the bottleneck in scraping \/ crawling applications.\nMemory would only be a bottleneck if there is a serious memory leak in your application.\nYour application would just be very slow if CPU is being shared by many process on the same machine.","Q_Score":0,"Tags":"python,python-2.7,memory-management,scrapy","A_Id":52643555,"CreationDate":"2018-10-04T09:28:00.000","Title":"How does scrapy behave when enough resources are not available","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Folks, I have a scraping script that I need to run on specific times for live info, but I can't have my computer on me all day. So I thought about running it on an online interpreter, but repl.it doesn't have webdriver and the other I found didn't neither. 
Could you help me with that?\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":694,"Q_Id":52693693,"Users Score":0,"Answer":"I'm not sure, but I don't guess if you can do it on a free online interpreter!\nYou can buy a server and use that, You can SSH to it anytime you want, or even better, You can develop a micro web service using Flask or something else to report the data you need!\nOther way I can think of is let your computer be online 24\/7 and use smtplib to email yourself the data in an interval!","Q_Score":0,"Tags":"python,selenium,selenium-webdriver,pythoninterpreter,repl.it","A_Id":52693719,"CreationDate":"2018-10-07T23:09:00.000","Title":"How to run Selenium with Webdriver on Online Python interpreters?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to pull messages from a RabbitMQ queue, wrap them in a object and dispatch for some kind of processing. Ofcourse I could iteratively do that until the queue is empty, but I was wondering if there is any other way (some flag of some kind) or a neater way.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":202,"Q_Id":52699030,"Users Score":1,"Answer":"RabbitMQ does not support batches of messages, so you do indeed need to consume each message individually. \nMaybe an alternative would be to batch the messages yourself by publishing one large message with all the required content.","Q_Score":0,"Tags":"python-2.7,rabbitmq,pika","A_Id":52726069,"CreationDate":"2018-10-08T09:18:00.000","Title":"Is it possible to pull all messages from a RabbitMQ queue at once?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"What options do I set for chromedriver so that the web server cannot tells the browser is manually launched or programming launched using Selenium?\nThanks,","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":67,"Q_Id":52712046,"Users Score":1,"Answer":"The Webserver you try to access has no way of knowing how the browser has been launched. It can only detect (or rather, guess) that it's an automated browser when said browser shows atypical behavior for a human (e.g. makes loads of requests per seconds, clicks 2 things with no delay whatsoever). Therefor it doesn't matter how you launch the browser - just how you use it.","Q_Score":1,"Tags":"python,selenium,selenium-webdriver","A_Id":52712071,"CreationDate":"2018-10-09T01:17:00.000","Title":"Launching Chromedriver using Selenium","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We are trying to order a 128 subnet. But looks like it doesn't work, get an error saying Invalid combination specified for ordering a subnet. The same code works to order a 64 subnet. Any thoughts how to order a 128 subnet? 
\n\nnetwork_mgr = SoftLayer.managers.network.NetworkManager(client)\nnetwork_mgr.add_subnet(\u2018private\u2019, 128, vlan_id, test_order=True)\n\n\nTraceback (most recent call last):\n File \"subnet.py\", line 11, in \n result = nwmgr.add_subnet('private', 128, vlan_id, test_order=True)\n File \"\/usr\/local\/lib\/python2.7\/site-packages\/SoftLayer\/managers\/network.py\", line 154, in add_subnet\n raise TypeError('Invalid combination specified for ordering a'\nTypeError: Invalid combination specified for ordering a subnet.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":52,"Q_Id":52714654,"Users Score":0,"Answer":"Currently it seems not possible to add 128 ip subnet into the order, the package used by the manager to order subnets only allows to add subnets for: 64,32,16,8,4 (capacity),\nIt is because the package that does not contain any item that has 128 ip addresses subnet, this is the reason why you are getting the error Exception you provided.\nYou may also verify this through the Portal UI, if you can see 128 ip address option through UI in your account, please update this forum with a screenshot.","Q_Score":0,"Tags":"python-2.7,ibm-cloud-infrastructure","A_Id":52730451,"CreationDate":"2018-10-09T06:31:00.000","Title":"SoftLayer API: order a 128 subnet","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am attempting to install Ambari Server 2.7.1, which depends on python 2.7. From my understanding, the default Centos 7 Python is 2.7.5, which \"should\" be fine, but when I go to install the Ambari Server using yum, it fails stating:\n\"Error: Package: ambari-server-2.7.1.0-169.x86_64 (ambari-2.7.1.0) Requires: python-xml\".\nWhen I search the yum repos I have installed (Base, CR, Debuginfo, Fasttrack, Sources, Vault, EPEL, HDP, HDF, AMBARI, and mysql57-community), I cannot find python-xml anywhere, but from searching google, found that it should be part of base Python 2.7.\nI have also tried \"yum clean all\" and this has no effect on the problem.\nWhat am I missing?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":426,"Q_Id":52725799,"Users Score":0,"Answer":"Well, I am not sure there will be an acceptable answer to this...I spun up an identical base image on my private Openstack cloud and installed Ambari server with no issues...the only difference I could think of between installs was the order in which I installed components. For example, I installed Ambari server followed by MySQL the second go-around, whereas I installed MySQL first, the first time I tried it.\nI could not find anything in the log files that gave me any clues as to what happened...but I do have both instances running in my cloud in the hopes of troubleshooting it further.","Q_Score":0,"Tags":"python,centos7,yum","A_Id":52745159,"CreationDate":"2018-10-09T16:48:00.000","Title":"Centos 7 missing python-xml package","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've written a small flask rest api application and a related client library that uses requests to interface with the api. And now I'm writing test cases using pytest. 
The tests directly against the flask app run fine using the built in test client.\nHowever, now I'm trying to run tests against the flask app through the client library, and it's failing with errors like:\n\nInvalidSchema(\"No connection adapters were found for '%s'\" % url)\n\nAs I understand, I can separately mock out the requests calls, but is there a way I can test the client library directly against the flask application?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":247,"Q_Id":52729749,"Users Score":1,"Answer":"If you test client library it better to choose mocks your API.\nBut if you want to test client(library) <-> server(flask) integration you need to make some preparation of environment. Like configure client, start server on the same host and port. And then run the tests.","Q_Score":0,"Tags":"python,flask,python-requests,pytest","A_Id":52731357,"CreationDate":"2018-10-09T21:43:00.000","Title":"Testing a flask application with a client that uses requests","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using Ubuntu and I have a code which needs to install various libraries. I mostly use python2.7, but due to some libraries this code uses, I had to move to python3, these libraries are \nimport urllib.request\nimport urllib.parse\nimport urllib.error\nBut then there was one library import which I could not import in python3, after searching on Google, I found that moving to python3.7 will solve the issue, and it did solve. Import was\nfrom json import JSONDecodeError\nBut now I have the issue of 'from retrying import retry'......After installing it with pip and pip3, I could import it in Python2.7 and python3, but I am failing to import it in python3.7.....\nSo, basically I am jumping across python versions to import the libraries required to run the code of an ex company employee....Please guide me how to import \"retrying\" in python3.7 or any way I can just install all the below ones in one python version\nimport urllib.request\nimport urllib.parse\nimport urllib.error\nfrom json import JSONDecodeError\nfrom retrying import retry","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":287,"Q_Id":52741688,"Users Score":1,"Answer":"I installed python3.5, and everything worked fine in it. 
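Returning to the earlier answer about testing a requests-based client library against a Flask application: one way to get real client <-> server integration without mocking is to serve the app on an actual host/port in a background thread and point the library at it. A rough sketch, where the /ping route and port 5001 are assumptions:

    import threading
    import requests
    from flask import Flask
    from werkzeug.serving import make_server

    app = Flask(__name__)

    @app.route("/ping")
    def ping():
        return "pong"

    # Serve the app on a real socket so plain requests calls can reach it.
    server = make_server("127.0.0.1", 5001, app)
    thread = threading.Thread(target=server.serve_forever)
    thread.start()
    try:
        assert requests.get("http://127.0.0.1:5001/ping").text == "pong"
    finally:
        server.shutdown()
        thread.join()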
Weird it did not work in python3.7, anyways my issue is resolved.","Q_Score":0,"Tags":"python,python-3.x,python-2.7,ubuntu,installation","A_Id":52758487,"CreationDate":"2018-10-10T13:44:00.000","Title":"Unable to install package \"retrying\" in python 3.7","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using python and getting this error.\n\nimport telegram\ntelegram.Bot(token = '###############')\n\nWhen I run this, appears: \n\"AttributeError: module 'telegram' has no attribute 'Bot'\"\nAny ideas how to solve this?","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":15267,"Q_Id":52749629,"Users Score":1,"Answer":"Note that your file name (.py) does not the same with your package name.","Q_Score":6,"Tags":"python,api,telegram,attributeerror","A_Id":57895181,"CreationDate":"2018-10-10T22:29:00.000","Title":"AttributeError: module 'telegram' has no attribute 'Bot'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've tried many different things to fix the problem, but when I use from selenium import webdriver, I always get ImportError: No module named selenium\nSelenium is definitely installed in c:\/Python27\/Scripts using pip install selenium and also tried -m pip install -U selenium\nAny suggestions? I've tried everything I could find on SO.","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":6609,"Q_Id":52751053,"Users Score":2,"Answer":"@Ankur Singh answered my question. I needed to go run conda install -c clinicalgraphics selenium in an Anaconda Prompt window before importing Selenium","Q_Score":1,"Tags":"python,selenium,selenium-webdriver,jupyter-notebook,jupyter","A_Id":52752048,"CreationDate":"2018-10-11T01:57:00.000","Title":"Python: Unable to import Selenium using Jupyter","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've tried many different things to fix the problem, but when I use from selenium import webdriver, I always get ImportError: No module named selenium\nSelenium is definitely installed in c:\/Python27\/Scripts using pip install selenium and also tried -m pip install -U selenium\nAny suggestions? 
I've tried everything I could find on SO.","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":6609,"Q_Id":52751053,"Users Score":1,"Answer":"I had the same issue using Python 3.7 and pip install selenium --user solved it for me.","Q_Score":1,"Tags":"python,selenium,selenium-webdriver,jupyter-notebook,jupyter","A_Id":62248876,"CreationDate":"2018-10-11T01:57:00.000","Title":"Python: Unable to import Selenium using Jupyter","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've tried many different things to fix the problem, but when I use from selenium import webdriver, I always get ImportError: No module named selenium\nSelenium is definitely installed in c:\/Python27\/Scripts using pip install selenium and also tried -m pip install -U selenium\nAny suggestions? I've tried everything I could find on SO.","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":6609,"Q_Id":52751053,"Users Score":0,"Answer":"Installation success still error?\nLocation problem so do this:-\nCheck where selenium package and file.py located. If both are in different location, move the selenium package to the location where file.py located.","Q_Score":1,"Tags":"python,selenium,selenium-webdriver,jupyter-notebook,jupyter","A_Id":68063740,"CreationDate":"2018-10-11T01:57:00.000","Title":"Python: Unable to import Selenium using Jupyter","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to parse pricing list json files for some aws services. After parsing I am randomly picking a key from key list to get the data. Currently my code loads the json files one at a time, which takes time. 
I would like to get some suggestions on how I can speed up this process.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":118,"Q_Id":52751533,"Users Score":0,"Answer":"Ended up creating a database on redis server.","Q_Score":0,"Tags":"json,python-3.x,amazon-web-services","A_Id":53009841,"CreationDate":"2018-10-11T03:02:00.000","Title":"Need suggestions for fast and efficient way to parse AWS Pricing List json files","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"started the test in Selenium in addition in my browser I manually clicked and\/or entered the data in the fields.\nIs it possible to save the actions that I made manually - actions logs?\nI want to know what the user's actions during manual test.","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":264,"Q_Id":52792860,"Users Score":0,"Answer":"I suggest you using Selenium with third-party framework like Robotframework.\nIt will be easier to observe actions with those behavior driven test framework.\nAnd they will also help you to capture screenshot while error occurs.","Q_Score":0,"Tags":"java,python,selenium,webdriver","A_Id":52792957,"CreationDate":"2018-10-13T12:22:00.000","Title":"Manual actions in Selenium","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"started the test in Selenium in addition in my browser I manually clicked and\/or entered the data in the fields.\nIs it possible to save the actions that I made manually - actions logs?\nI want to know what the user's actions during manual test.","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":264,"Q_Id":52792860,"Users Score":0,"Answer":"S\u0142awomir, why don't You start record all actions from beginning, after launch webbrowser. For FF55+ there is an addon Katalon Recorder. You can record all steps and than export all actions to Java, Pyton code. Than just copy code from exporter, and You can use it in your webdriver tests.","Q_Score":0,"Tags":"java,python,selenium,webdriver","A_Id":52854082,"CreationDate":"2018-10-13T12:22:00.000","Title":"Manual actions in Selenium","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"started the test in Selenium in addition in my browser I manually clicked and\/or entered the data in the fields.\nIs it possible to save the actions that I made manually - actions logs?\nI want to know what the user's actions during manual test.","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":264,"Q_Id":52792860,"Users Score":0,"Answer":"Below example\n\nSimple example\nDuring the test Selenium opens a page example.com. Next Selenium click on link to example.com\/login. On the login page Selenium enters username and password -- it is correct test\n\nExample with manual actions\nDuring the test Selenium opens a page example.com. 
Now manually I click on link example.com\/about (but on About page isn't link to example.com\/login Next Selenium try to click on link to example.com\/login, but can't because link to example.com\/login doesn't exist. -- test failed\n\nTest failed because I make manual action, so I want to log all manually actions","Q_Score":0,"Tags":"java,python,selenium,webdriver","A_Id":52860596,"CreationDate":"2018-10-13T12:22:00.000","Title":"Manual actions in Selenium","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm building a PWA with django\/python on the server-side and vue on the client-side and want to use firebase as a database as well as make use of the firebase authentication.\nAfter some thorough research I realised that I had to make a few choices. \nQuestion 1: Authentication \n\nI can do authentication on the client-side or server-side. Which one would be best (more secure) ?\n\nQuestion 2: Database\n\nIn terms of CRUDS I am a bit conflicted. Do I write all my data to firestore from the client-side?\nDo I rather use api's to communicate with my backend and then write data to firestore from the backend? What are the security implications of doing this?\n\nShould I just use both in terms of context? If there are no security implications I would do my authentication client-side and my CRUDS from the server-side. I think I would also have to check authentication to write to the database from the backend.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2897,"Q_Id":52793661,"Users Score":5,"Answer":"Authentication of a user's credentials should always happen on a server, as it can't be securely done on the client's computer. What Firebase Authentication allows however, is that the authentication runs on Google's servers, while you control it from a simple client-side API call.","Q_Score":7,"Tags":"python,django,firebase,vue.js,firebase-authentication","A_Id":52793728,"CreationDate":"2018-10-13T13:54:00.000","Title":"Firebase (client-side vs server-side)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"How to access a folder object inside S3 bucket. How can I access a folder inside S3 bucket using python boto3.\nCode is working for a folder in S3 bucket but to for folders inside S3 bucket","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":4249,"Q_Id":52805052,"Users Score":2,"Answer":"If I understand you correctly.. I had this issue in my python3 script. Basically you need to pass the to the boto3 function an s3 bucket and the file name. Make this file name include the folder, with the forward slash separating them. Instead of passing just the file name and trying to pass the folder as a separate parameter. \nSo if you have MyS3Bucket and you want to upload file.txt to MyFolder inside MyS3Bucket, then pass the file_name=\u201cMyFolder\u201d+\u201d\/\u201c+\u201dfile.txt\u201d as a parameter to the upload function.\nLet me know if you need a code snippet. \nEven if you don\u2019t have the folder in the S3 bucket, boto3 will create it for you on the fly. This is cool because you can grant access in s3 based on a folder, not just the whole bucket at once. 
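A minimal sketch of that upload call, using the hypothetical bucket, folder and file names from the answer: the folder is simply part of the object key.

    import boto3

    s3 = boto3.client("s3")
    # "MyFolder/" is just a key prefix; S3 shows it as a folder on the fly.
    s3.upload_file(
        Filename="file.txt",          # local path
        Bucket="MyS3Bucket",
        Key="MyFolder/file.txt",      # folder + "/" + file name
    )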
\nGood luck!","Q_Score":0,"Tags":"python,amazon-web-services,amazon-s3,boto3","A_Id":55203529,"CreationDate":"2018-10-14T17:03:00.000","Title":"How to upload file to folder in aws S3 bucket using python boto3","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For example in python if I\u2019m sending data through sockets could I make my own encryption algorithm to encrypt that data? Would it be unbreakable since only I know how it works?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":42,"Q_Id":52806028,"Users Score":1,"Answer":"Yes you can. Would it be unbreakable? No. This is called security through obscurity. You're relying on the fact that nobody knows how it works. But can you really rely on that?\nSomeone is going to receive the data, and they'll have to decrypt it. The code must run on their machine for that to happen. If they have the code, they know how it works. Well, at least anyone with a lot of spare time and nothing else to do can easily reverse engineer it, and there goes your obscurity.\nIs it feasable to make your own algorithm? Sure. A bit of XOR here, a bit of shuffling there... eventually you'll have an encryption algorithm. It probably wouldn't be a good one but it would do the job, at least until someone tries to break it, then it probably wouldn't last a day.\nDoes Python care? Do sockets care? No. You can do whatever you want with the data. It's just bits after all, what they mean is up to you.\nAre you a cryptographer? No, otherwise you wouldn't be here asking this. So should you do it? No.","Q_Score":0,"Tags":"python,encryption","A_Id":52806168,"CreationDate":"2018-10-14T18:54:00.000","Title":"Is it possible to make my own encryption when sending data through sockets?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to retrieve the text data for a specific twitter account and save it for a ML project about text generation, is this possible using the Twitter API?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":164,"Q_Id":52807824,"Users Score":0,"Answer":"Yes. It is possible. Try tweepy. It is a wrapper for Twitter API.","Q_Score":0,"Tags":"python,text,twitter","A_Id":52807837,"CreationDate":"2018-10-14T23:02:00.000","Title":"How do\/can I retrieve text data from a Twitter account in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Scrapy Crawlera was working just well in my Windows machine, but it gets error 111 when I run it in my linux server. Why is that?\nWhen I use curl, I got this error:\ncurl: (7) Failed connect to proxy.crawlera.com:8010; Connection refused","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2658,"Q_Id":52866297,"Users Score":1,"Answer":"It turned out when dealing with ports, CPanel (or maybe Linux?) blocks ports by default if it is not whitelisted in the firewall. 
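As for the tweepy suggestion a couple of answers above (pulling one account's text for an ML corpus), a rough sketch against the v1.1 API wrapper; the credentials and screen name are placeholders and only minimal pagination is shown:

    import tweepy

    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
    api = tweepy.API(auth)

    # Pull the most recent tweets of one account and keep only the text.
    texts = [status.full_text
             for status in tweepy.Cursor(api.user_timeline,
                                          screen_name="some_account",
                                          tweet_mode="extended").items(200)]

    with open("tweets.txt", "w") as fh:
        fh.write("\n".join(texts))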
I opened it via WHM since I use CPanel, and everything works fine now.","Q_Score":1,"Tags":"python,web-scraping,scrapy,screen-scraping,crawlera","A_Id":52882604,"CreationDate":"2018-10-18T02:53:00.000","Title":"Connection was refused by other side: 111: Connection refused. when using Scrapy Crawlera in a linux server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a python script that takes in a .csv file and performs some data cleaning and some more advanced operations and gives back a .csv file as its output.\nIs there a library I can use to build a webpage and host it on some server for users to be able to upload the input .csv file into it and be able to download the output .csv file?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":57,"Q_Id":52885542,"Users Score":0,"Answer":"I would say your best choice if you are new to web development, is to start learning Flask and the basics of \"How the Web Works\". There are other libraries such as Django or Cherrypy as well. Flask is a web framework that can run a server side application (a.k.a. back-end) and it's relatively simple out of the box.\nHowever, when you say \"running python script on webpage\" and \"build a webpage\", that's not exactly how it works. You will build your webpage (a.k.a. the client-side application or the front-end) using HTML, CSS and possibly JavaScript. Web browsers can't run python.","Q_Score":0,"Tags":"python","A_Id":52885671,"CreationDate":"2018-10-19T03:34:00.000","Title":"Running python script on webpage","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"So my directory currently looks something like this \nsrc\n \\aws_resources\n \\s3\n s3file.py\n util.py\ntest\n mytest.py\nI'm running the mytest.py file and it essentially imports the s3 file, while the s3 file imports the util.py file\nMy mytest.py says \nfrom src.aws_resources.s3 import *\n\nbut when I run the test file I get an error \nImportError: No module named aws_resources.util\n\nI tried adding \nfrom src.aws_resources import util to mytest.py but I still get the same error. Any suggestions would be appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":672,"Q_Id":52952756,"Users Score":0,"Answer":"Your current working directory is test, which means any import line will look into the test folder to find modules, if not found, go to the other paths existing in your sys.path (e.g. PYTHONPATH) to find.\nHere are my two assumptions:\n1.) src is not part of your PYTHONPATH\n2.) In your s3file.py you have this line (or similar): from aws_resources.util import *\nSince aws_resources is not a valid folder under test, and it couldn't be found under any of your sys.path, the Python interpreter couldn't find the module and rasied ImportError.\nYou can try one of the following:\n1.) Restructure your folder structure so that everything can be immediately referenced from your current working directory.\n2.) Handle the from aws_resources.util import * line in your s3file.py if you are importing it from another working directory.\n3.) 
Add src into your PYTHONPATH, or move it to a folder identified in PATH\/PYTHONPATH so you can always reference back to it.\nThe approach really depends on your use case for src and how often it gets referenced elsewhere.","Q_Score":0,"Tags":"python,python-2.7,import","A_Id":52952937,"CreationDate":"2018-10-23T15:28:00.000","Title":"\"No module named \" found when importing file that imports another file","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using updatedItem() function which is either inserting or updating values in Dynamodb. If values are updating I want to fetch those items and invoke a new lambda function. How can I achieve this?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":330,"Q_Id":52985493,"Users Score":0,"Answer":"Don't think previously accepted answer works. The return Attributes never returned the partition\/sort keys, whether an item was updated or created.\nWhat worked for me was to add ReturnValues: UPDATED_OLD. If the returned Attributes is undefined, then you know that an item was created.","Q_Score":1,"Tags":"python-2.7,amazon-web-services,aws-lambda,amazon-dynamodb","A_Id":71245645,"CreationDate":"2018-10-25T09:08:00.000","Title":"Find whether the value has been updated or inserted in Dynamodb?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a client that reads messages from RabbitMq and sent it to a server, the IP of the server is specified in the message, I want the sending and receiving parts to be asynchronous because I will be dealing with multiple servers and I don't want the code to hang and wait for a response. by asynchronous I mean the send and receive work in parallel. any suggestion ? thanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":54,"Q_Id":52991421,"Users Score":0,"Answer":"You can use WebSockets for this. The send and receive part are event-driven and work in parallel.\nIf you already have sending and receiving mechanism and you just want them to be in parallel, have a look at greenlets in python.","Q_Score":0,"Tags":"python-3.x,sockets,asynchronous,rabbitmq","A_Id":52991521,"CreationDate":"2018-10-25T14:11:00.000","Title":"Asynchronous send\/receive client","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am doing a project where I want to build a model that estimates the number of retweets at a given time for a given tweet bearing a certain keyword, so that when a new tweet with the same keyword comes in, I can track its number of retweets and see if there is any anomaly. For that, on top of collecting a large number of tweets with that certain keyword for modeling, I need to know for each of the tweets what was the number of retweets on day 1, day 2, etc (the unit of time is arbitrary here, can be in days or in minutes) since it was created. \nI've done some research on stackoverflow, but I have not seen a solution for this particular problem. 
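To make the ReturnValues idea from the DynamoDB answer above concrete, a boto3 sketch (table name, key and attributes are hypothetical): when update_item creates the item, no old attributes come back, which is how an insert can be told apart from an update.

    import boto3

    table = boto3.resource("dynamodb").Table("my-table")

    response = table.update_item(
        Key={"pk": "item-1"},
        UpdateExpression="SET #v = :val",
        ExpressionAttributeNames={"#v": "value"},
        ExpressionAttributeValues={":val": 42},
        ReturnValues="UPDATED_OLD",   # old values only exist if the item was updated
    )

    if "Attributes" in response:
        print("item existed before; previous values:", response["Attributes"])
    else:
        print("item was newly created")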
I understand that twitter API allows you to search for tweets with keywords, but it only gives you the tweets' current number of retweets but not the historical performance. \nI would really appreciate it if anyone can point me to the right direction. Thank you very much!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":270,"Q_Id":53068167,"Users Score":0,"Answer":"There's no API method in any of Twitter's API tiers (standard, premium or enterprise) that would enable you to do this. You'd need to have already been listening for the Tweets, record the Tweet IDs, check them every day for total engagement numbers, and then record the differential every day.","Q_Score":1,"Tags":"python,api,twitter,web-scraping","A_Id":53071904,"CreationDate":"2018-10-30T15:49:00.000","Title":"Get Historical Tweet Metadata from Twitter API in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to connect two computers and send messages between them,and I'm not sure why the socket module doesn't work for me.So is there any other ways to connect two computers?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":200,"Q_Id":53075044,"Users Score":2,"Answer":"If you wanna solve this problem,\nFirst thing , you should check two computer's network connecting, \nin terminal you can use one computer of yours to typing ping ${target_computer_address}and check terminal's echo to make sure two computer network connection is working.\nSecond thing , you can use python to open a tcp port to listen and print recv data in screen ,and in other computer telnet previous computer ip:port just typing someting, make sure python can print you's typing character correct.","Q_Score":0,"Tags":"python,python-3.x,connection","A_Id":53075143,"CreationDate":"2018-10-31T01:28:00.000","Title":"How to connect two computers using python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a script that uploads files to Google Drive. I want to upload python files. I can do it manually and have it keep the file as .py correctly (and it's previewable), but no matter what mimetypes I try, I can't get my program to upload it correctly. It can upload the file as a .txt or as something GDrive can't recognize, but not as a .py file. I can't find an explicit mimetype for it (I found a reference for text\/x-script.python but it doesn't work as an out mimetype).\nDoes anyone know how to correctly upload a .py file to Google Drive using REST?","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":690,"Q_Id":53096788,"Users Score":-1,"Answer":"Also this is a valid Python mimetype: text\/x-script.python","Q_Score":2,"Tags":"python,rest,google-drive-api,mime-types","A_Id":71426330,"CreationDate":"2018-11-01T07:22:00.000","Title":"What Is the Correct Mimetype (in and out) for a .Py File for Google Drive?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'd like to post a video from url to my Facebook page. 
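Picking up the connectivity check suggested in the answer above about linking two computers: a minimal listener that prints whatever the other machine sends (port 5000 is an arbitrary choice); the second computer can then telnet <ip> 5000 and type a few characters.

    import socket

    # Listen on every interface so the other computer can reach this one.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("0.0.0.0", 5000))
    server.listen(1)
    print("waiting for a connection on port 5000 ...")

    conn, addr = server.accept()
    print("connected from", addr)
    data = conn.recv(1024)
    print("received:", data.decode(errors="replace"))
    conn.close()
    server.close()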
Is there any way for it, I already be able to post Images but seems like it's no way for video?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":206,"Q_Id":53112641,"Users Score":0,"Answer":"I'd like to post a video from url to my Facebook page. Is there any way for it, I already be able to post Images but seems like it's no way for video?\n\nThat is correct, videos can not be uploaded by specifying a video URL.\nYou need to perform an actual HTTP upload in multipart\/form-data format.","Q_Score":0,"Tags":"python,django,facebook,facebook-graph-api","A_Id":53114362,"CreationDate":"2018-11-02T04:22:00.000","Title":"Post video to Facebook Page with Facebook SDK","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have Python 3.7.1 installed on Windows 7 from www.python.org.\nWhen I want to \"pip install pylab\" I receive the following message:\n\"pip is configured with locations that require tls\/ssl however the ssl module in python is not available\".\nPlease help me to overcome this problem.\nMany thanks","AnswerCount":5,"Available Count":5,"Score":0.1973753202,"is_accepted":false,"ViewCount":18293,"Q_Id":53137700,"Users Score":5,"Answer":"I also had the same problem. But solved it by just adding path variables in evt variables \nSo check for the following paths \n1. Anaconda root \n2. Anaconda\/Scripts\n3. Anaconda\/Library\/bin\nthe 3rd one was not added and after adding all these evt variables do restart your PC. and the error will be resolved.","Q_Score":10,"Tags":"python,ssl","A_Id":54933554,"CreationDate":"2018-11-04T04:18:00.000","Title":"ssl module in python is not available Windows 7","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have Python 3.7.1 installed on Windows 7 from www.python.org.\nWhen I want to \"pip install pylab\" I receive the following message:\n\"pip is configured with locations that require tls\/ssl however the ssl module in python is not available\".\nPlease help me to overcome this problem.\nMany thanks","AnswerCount":5,"Available Count":5,"Score":0.1586485043,"is_accepted":false,"ViewCount":18293,"Q_Id":53137700,"Users Score":4,"Answer":"What worked for me is:\n conda activate in command prompt and then use pip","Q_Score":10,"Tags":"python,ssl","A_Id":55987478,"CreationDate":"2018-11-04T04:18:00.000","Title":"ssl module in python is not available Windows 7","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have Python 3.7.1 installed on Windows 7 from www.python.org.\nWhen I want to \"pip install pylab\" I receive the following message:\n\"pip is configured with locations that require tls\/ssl however the ssl module in python is not available\".\nPlease help me to overcome this problem.\nMany thanks","AnswerCount":5,"Available Count":5,"Score":0.0,"is_accepted":false,"ViewCount":18293,"Q_Id":53137700,"Users Score":0,"Answer":"Activating conda (from condabin folder: activate.bat) and then from the python.exe folder \"python.exe -m pip install 
siphon\"","Q_Score":10,"Tags":"python,ssl","A_Id":57626871,"CreationDate":"2018-11-04T04:18:00.000","Title":"ssl module in python is not available Windows 7","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have Python 3.7.1 installed on Windows 7 from www.python.org.\nWhen I want to \"pip install pylab\" I receive the following message:\n\"pip is configured with locations that require tls\/ssl however the ssl module in python is not available\".\nPlease help me to overcome this problem.\nMany thanks","AnswerCount":5,"Available Count":5,"Score":0.0,"is_accepted":false,"ViewCount":18293,"Q_Id":53137700,"Users Score":0,"Answer":"I had the same issue because of corporate firewall settings.\nIf you are on windows go to \"Internet Properties\" ---> \"Connection\" ---> \"LAN Settings\" and check the address (if it is a wpad.dat file, download it and look for \"ProxyPort\" and \"ProxyServer\").\nThen try :\npip --proxy http:\/\/user:password@ProxyServer:ProxyPort install module","Q_Score":10,"Tags":"python,ssl","A_Id":58202904,"CreationDate":"2018-11-04T04:18:00.000","Title":"ssl module in python is not available Windows 7","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have Python 3.7.1 installed on Windows 7 from www.python.org.\nWhen I want to \"pip install pylab\" I receive the following message:\n\"pip is configured with locations that require tls\/ssl however the ssl module in python is not available\".\nPlease help me to overcome this problem.\nMany thanks","AnswerCount":5,"Available Count":5,"Score":1.0,"is_accepted":false,"ViewCount":18293,"Q_Id":53137700,"Users Score":7,"Answer":"According to your comment with the PATH variable you are missing the right entries to this so it cannot find the libraries and so on. \nIf you reinstall python in the first screen you have an option \u2018Add Python x.x.x to PATH\u2019 enable this option and then retry what you did.","Q_Score":10,"Tags":"python,ssl","A_Id":53174338,"CreationDate":"2018-11-04T04:18:00.000","Title":"ssl module in python is not available Windows 7","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to push some files up to s3 with the AWS CLI and I am running into an error:\nupload failed: ... An HTTP Client raised and unhandled exception: unknown encoding: idna\nI believe this is a Python specific problem but I am not sure how to enable this type of encoding for my python interpreter. I just freshly installed Python 3.6 and have verified that it being used by powershell and cmd. \n$> python --version\n Python 3.6.7\nIf this isn't a Python specific problem, it might be helpful to know that I also just freshly installed the AWS CLI and have it properly configured. Let me know if there is anything else I am missing to help solve this problem. 
Thanks.","AnswerCount":5,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":6262,"Q_Id":53144254,"Users Score":8,"Answer":"I had the same problem in Windows.\nAfter investigating the problem, I realized that the problem is in the aws-cli installed using the MSI installer (x64). After removing \"AWS Command Line Interface\" from the list of installed programs and installing aws-cli using pip, the problem was solved.\nI also tried to install MSI installer x32 and the problem was missing.","Q_Score":12,"Tags":"python-3.x,amazon-s3,aws-cli","A_Id":54337242,"CreationDate":"2018-11-04T18:53:00.000","Title":"AWS CLI upload failed: unknown encoding: idna","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to push some files up to s3 with the AWS CLI and I am running into an error:\nupload failed: ... An HTTP Client raised and unhandled exception: unknown encoding: idna\nI believe this is a Python specific problem but I am not sure how to enable this type of encoding for my python interpreter. I just freshly installed Python 3.6 and have verified that it being used by powershell and cmd. \n$> python --version\n Python 3.6.7\nIf this isn't a Python specific problem, it might be helpful to know that I also just freshly installed the AWS CLI and have it properly configured. Let me know if there is anything else I am missing to help solve this problem. Thanks.","AnswerCount":5,"Available Count":2,"Score":-0.0798297691,"is_accepted":false,"ViewCount":6262,"Q_Id":53144254,"Users Score":-2,"Answer":"Even I was facing same issue. I was running it on Windows server 2008 R2. I was trying to upload around 500 files to s3 using below command.\n\naws s3 cp sourcedir s3bucket --recursive --acl\n bucket-owner-full-control --profile profilename\n\nIt works well and uploads almost all files, but for initial 2 or 3 files, it used to fail with error: An HTTP Client raised and unhandled exception: unknown encoding: idna\nThis error was not consistent. The file for which upload failed, it might succeed if I try to run it again. It was quite weird.\nTried on trial and error basis and it started working well.\nSolution:\n\nUninstalled Python 3 and AWS CLI.\nInstalled Python 2.7.15\nAdded python installed path in environment variable PATH. Also added pythoninstalledpath\\scripts to PATH variable. \nAWS CLI doesnt work well with MS Installer on Windows Server 2008, instead used PIP. \n\nCommand: \n\npip install awscli\n\nNote: for pip to work, do not forget to add pythoninstalledpath\\scripts to PATH variable.\nYou should have following version:\nCommand: \n\naws --version\n\nOutput: aws-cli\/1.16.72 Python\/2.7.15 Windows\/2008ServerR2 botocore\/1.12.62\nVoila! The error is gone!","Q_Score":12,"Tags":"python-3.x,amazon-s3,aws-cli","A_Id":53693183,"CreationDate":"2018-11-04T18:53:00.000","Title":"AWS CLI upload failed: unknown encoding: idna","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Im writing a webapplication, where im trying to display the connected USB devices. 
I found a Python function that does exactly what i want but i cant really figure out how to call the function from my HTML code, preferably on the click of a button.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":76,"Q_Id":53152383,"Users Score":0,"Answer":"simple answer: you can't. the code would have to be run client-side, and no browser would execute potentially malicious code automatically (and not every system has a python interpreter installed). \nthe only thing you can execute client-side (without the user taking action, e.g. downloading a program or browser add-on) is javascript.","Q_Score":0,"Tags":"python,html,python-2.7","A_Id":53152455,"CreationDate":"2018-11-05T10:20:00.000","Title":"Calling a Python function from HTML","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"We have several microservices on Golang and Python, On Golang we are writing finance operations and on Python online store logic, we want to create one API for our front-end and we don't know how to do it.\nI have read about API gateway and would it be right if Golang will create its own GraphQL server, Python will create another one and they both will communicate with the third graphql server which will generate API for out front-end.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":832,"Q_Id":53159897,"Users Score":2,"Answer":"I do not know much details about your services, but great pattern I successfully used on different projects is as you mentioned GraphQL gateway. \nYou will create one service, I prefer to create it in Node.js where all requests from frontend will coming through. Then from GraphQL gateway you will request your microservices. This will be basically your only entry point into the backend system. Requests will be authenticated and you are able to unify access to your data and perform also some performance optimizations like implementing data loader's caching and batching to mitigate N+1 problem. In addition you will reduce complexity of having multiple APIs and leverage all the GraphQL benefits. \nOn my last project we had 7 different frontends and each was using the same GraphQL gateway and I was really happy with our approach. There are definitely some downsides as you need to keep in sync all your frontends and GraphQL gateway, therefore you need to be more aware of your breaking changes, but it is solvable with for example deprecated directive and by performing blue\/green deployment with Kubernetes cluster. \nThe other option is to create the so-called backend for frontend in GraphQL. Right now I do not have enough information which solution would be best for you. You need to decide based on your frontend needs and business domain, but usually I prefer GraphQL gateway as GraphQL has great flexibility and the need to taylor your API to frontend is covered by GraphQL capabilities. 
Hope it helps David","Q_Score":2,"Tags":"python,api,go,graphql","A_Id":53173290,"CreationDate":"2018-11-05T18:11:00.000","Title":"How to create Graphql server for microservices?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have created a lambda that iterates over all the files in a given S3 bucket and deletes the files in S3 bucket. The S3 bucket has around 100K files and I am selecting and deleting the around 60K files. I have set the timeout for lambda to max (15 minutes) timeout value. The lambda is consistently returning \"network error\" after few minutes though it seems to run in the background for sometime even after the error is returned. How can get around this?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":14183,"Q_Id":53180543,"Users Score":1,"Answer":"I was testing another function and this error came up as a result. Reading a little in the documentation I found that I activated the throttle option, that it reduces the amount of rate for your function.\nThe solution is to create another function and see if the throttle is making that error.","Q_Score":4,"Tags":"python,amazon-web-services,amazon-s3,aws-lambda,boto3","A_Id":65430231,"CreationDate":"2018-11-06T21:47:00.000","Title":"AWS Lambda : Calling the invoke API action failed with this message: Network Error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I need a browser in Selenium to work as if it's maximized, so that website that I process perceived it as maximized, but I want it really to be minimized, so that I could work in parallel while the script executes. The simple: driver.maximize_window() just maximizes windows.\nSo, is there a way for a window of browser to be maximized for website, but in reality minimized?","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":554,"Q_Id":53189353,"Users Score":1,"Answer":"If you want to work while WebDriver is executed, you can created a virtual instance (with e.g. VirtualBox or HyperV) an run test suite in VM.","Q_Score":0,"Tags":"python,selenium,webdriver","A_Id":53191171,"CreationDate":"2018-11-07T12:19:00.000","Title":"Is there 'as if' maximized mode of browser in Webdriver Selenium, but in reality minimized?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need a browser in Selenium to work as if it's maximized, so that website that I process perceived it as maximized, but I want it really to be minimized, so that I could work in parallel while the script executes. The simple: driver.maximize_window() just maximizes windows.\nSo, is there a way for a window of browser to be maximized for website, but in reality minimized?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":554,"Q_Id":53189353,"Users Score":0,"Answer":"Maximized is just a browser size from the site perspective. 
Set the browser size to the screen resolution of your desktop and minimize the browser.","Q_Score":0,"Tags":"python,selenium,webdriver","A_Id":53191735,"CreationDate":"2018-11-07T12:19:00.000","Title":"Is there 'as if' maximized mode of browser in Webdriver Selenium, but in reality minimized?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"my program have one server socket and multiple client socket,\nso, what i want do is When the server close(shutdown),change one of the client sockets to server socket.\nis there a way to do this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":457,"Q_Id":53200594,"Users Score":0,"Answer":"what i want do is When the server close(shutdown),change one of the\n client sockets to server socket. is there a way to do this?\n\nNo -- once a TCP socket is connected, it can only be used for that one TCP connection, and once the TCP connection has been broken (e.g. by the server exiting), then all you can do with the socket is close it.\nHowever, that doesn't mean you can't have a high-availability chat system like the one you are envisioning. What you can do is have your clients \"plan ahead\" by gathering the IP addresses of all the other clients in advance (either through the server, or if all the clients are on the same LAN, perhaps through broadcast or multicast UDP packets). That way, if\/when the server goes away, the clients all have another IP address handy that they can automatically try to connect to (using a new TCP socket).\nNote that you'll need to make sure that the program running at that IP address is accepting incoming TCP connections (server-style); you also might want to specify some kind of rule so that all of the clients will reconnect to the same IP address (e.g. sort the IP addresses numerically and have all the clients try to connect to the smallest IP address in the list, or something like that).","Q_Score":1,"Tags":"python,sockets","A_Id":53202099,"CreationDate":"2018-11-08T02:14:00.000","Title":"Python Socket programming change server socket","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'd like to fetch mails from a server, but I also want to control when to delete them.\nIs there a way to do this?\nI know this setting is very usual in mail clients, but it seems this option is not well supported by POPv3 specification and\/or server implementations.\n(I'm using python, but I'm ok with other languages\/libraries, Python's poplib seems very simplistic)","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":964,"Q_Id":53231525,"Users Score":2,"Answer":"POP3 by design downloads and removes mail from a server after it's successfully fetched. If you don't want that, then use the IMAP protocol instead. 
That protocol has support to allow you to delete mail at your leisure as opposed to when it's synced to your machine.","Q_Score":2,"Tags":"python,pop3,poplib","A_Id":53231559,"CreationDate":"2018-11-09T18:34:00.000","Title":"Fetch mails via POP3, but keep them on the server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any way to get all games that user has played while using Discord?\nSo based on that I can give them specific roles.\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1082,"Q_Id":53233805,"Users Score":1,"Answer":"You'd have to store them, there is no history of games played available in the API","Q_Score":1,"Tags":"python,discord.py-rewrite","A_Id":53254574,"CreationDate":"2018-11-09T22:01:00.000","Title":"How to get all games that user has played while using Discord with discord.py","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for a method to connect client(web browser) to server(without external IP) using p2p.\nAs client-side language i would like to use javascript.\nI was reading about WebRTC Peer-to-peer but i don't know if it only works with two clients(javascript) or if i can use some other language ( PHP, Python, Node.js ).\nI know about signaling, STUN and TURN servers. I have server with external IP address so it won't be a problem.\nMy question is what programming language can i use on the server?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":1119,"Q_Id":53253718,"Users Score":1,"Answer":"I added to Andrey Suglobov's answer: The client does not receive the WebRTC packets from the server because it doesn't have an external IP. In order to solve this problem, you have to configure it to communicate via the TURN server in the middle.\n[WebRTC server] \u2194 [TURN] \u2194 [NAT] \u2194 [client]\nGenerally, the client uses JavaScript because it's a browser. But WebRTC is a specification that supports P2P on the web. If supporting this specification, it does not matter what programming language you use.\nThank you.","Q_Score":3,"Tags":"javascript,python,node.js,p2p,peer","A_Id":53273494,"CreationDate":"2018-11-11T22:05:00.000","Title":"Peer-to-peer Javascript & something","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for a method to connect client(web browser) to server(without external IP) using p2p.\nAs client-side language i would like to use javascript.\nI was reading about WebRTC Peer-to-peer but i don't know if it only works with two clients(javascript) or if i can use some other language ( PHP, Python, Node.js ).\nI know about signaling, STUN and TURN servers. 
I have server with external IP address so it won't be a problem.\nMy question is what programming language can i use on the server?","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1119,"Q_Id":53253718,"Users Score":0,"Answer":"Probably found an answer.\nI can use javascript server side in node.js","Q_Score":3,"Tags":"javascript,python,node.js,p2p,peer","A_Id":53262995,"CreationDate":"2018-11-11T22:05:00.000","Title":"Peer-to-peer Javascript & something","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm pretty much stuck right now.\nI wrote a parser in python3 using the python-docx library to extract all tables found in an existing .docx and store it in a python datastructure.\nSo far so good. Works as it should.\nNow I have the problem that there are hyperlinks in these tables which I definitely need! Due to the structure (xml underneath) the docx library doesn't catch these. Neither the url nor the display text provided. I found many people having similar concerns about this, but most didn't seem to have 'just that' dilemma.\nI thought about unpacking the .docx and scan the _ref document for the corresponding 'rid' and fill the actual data I have with the links found in the _ref xml.\nEither way it seems seriously weary to do it that way, so I was wondering if there is a more pythonic way to do it or if somebody got good advise how to tackle this problem?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":438,"Q_Id":53269311,"Users Score":0,"Answer":"You can extract the links by parsing xml of docx file. \nYou can extract all text from the document by using document.element.getiterator()\nIterate all the tags of xml and extract its text. You will get all the missing data which python-docx failed to extract.","Q_Score":0,"Tags":"python,xml,hyperlink,docx","A_Id":57556934,"CreationDate":"2018-11-12T20:04:00.000","Title":"Extracting URL from inside docx tables","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want a help in robotics stuff..\nMy Question is ... : \nHow to connect my robotic car ( car is having raspberry pi as controller) with my computer via internet .. so the I can control the car from my computer's keybord.. \nPreviously i was using VNC and made a python tkinter script (stored at raspberry pi) and by the help of vnc i control the car but it was not good .. \nMost of the time the when i press the key, function works after sometime and worst thing was that it stores all the commands in a queue or buffer .. \nSo realtime operation was not happenping ( like: if i press forward arrow key for 2 seconds, it evoked moveForward() 20 times which is equal to 2 meters forward move and takes 4 seconds to travel .. BUT after that if i press right arrow key then it evokes moveRight() .. the worst part is it will execute after finishing the moveForward() stored in a queue i.e after 4 seconds .. 
and not on real time)\nIs there any way to control\/give command to raspberry pi on real time and not in a queue manner via socketing or other thing ?\nnote : i have a static ip address with specific port open and it has to be done over internet.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":72,"Q_Id":53275066,"Users Score":0,"Answer":"The appearance of your car might mainly relate to the whole system response time. The Raspberry Pi may be not fast enough. If there is no necessary, analog signal may on real time.","Q_Score":0,"Tags":"python,sockets,raspberry-pi,real-time,remote-connection","A_Id":53275756,"CreationDate":"2018-11-13T06:30:00.000","Title":"Remote connection between Raspberry pi and other computer through python over internet on real time","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to get browser network logs using selenium to debug request\/responses. Could you please help me to find out a way.\nAnd I'm using selenium 3.14.0 and latest Chrome browser.","AnswerCount":5,"Available Count":1,"Score":0.0399786803,"is_accepted":false,"ViewCount":22344,"Q_Id":53286828,"Users Score":1,"Answer":"Try selenium-wire, I think this is a better way which also provides undetected-chromedriver against bot detection.","Q_Score":8,"Tags":"python,selenium,selenium-webdriver,selenium-chromedriver","A_Id":67264765,"CreationDate":"2018-11-13T17:49:00.000","Title":"How to get browser network logs using python selenium","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to automate the editing, compiling and reading of results of MQL4 file using python, are there any tools like selenium but targeted for UI?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":175,"Q_Id":53295473,"Users Score":0,"Answer":"What works for me is two things:\n\nFirst option: clear the strategy tester logs, use comments in the code with specific string format, copy the logs and parse the data from the comments using python.\nSecond option: parse the expert advisor report using Python, from HTML to pandas dataframe for further processing.","Q_Score":0,"Tags":"python,python-3.x,automation,build-automation,mql4","A_Id":62120583,"CreationDate":"2018-11-14T08:02:00.000","Title":"How to edit, compile and read MQL4 files using Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am just starting the process of learning to access APIs using Python. Initially I presumed OAuth User IDs were generated or supplied by the API owner somehow. Would it actually be fabricated by myself, using alphanumerics and symbols? This is just for a personal project, I wouldn't have a need for a systematic creation of IDs. 
The APIs I am working with are owned by TDAmeritrade.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":54,"Q_Id":53306198,"Users Score":1,"Answer":"OAuth is has no capabilities for any type of provisioning of users or credentials.\nOAuth 2.0 provides for the Delegation of Authorization\n\nBy the Resource Owner (User)\nto the OAuth Client (Application)\nfor Access to a Resource Server (Website or Api or ???)","Q_Score":1,"Tags":"python,api,oauth","A_Id":53307184,"CreationDate":"2018-11-14T17:56:00.000","Title":"Creation of OAuth User IDs","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I mounted my drive using this : \nfrom google.colab import drive\ndrive.mount('\/content\/drive\/')\nI have a file inside a folder that I want the path of how do I determine the path? \nSay the folder that contains the file is named 'x' inside my drive","AnswerCount":4,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":18640,"Q_Id":53355510,"Users Score":6,"Answer":"The path as parameter for a function will be \/content\/drive\/My Drive\/x\/the_file, so without backslash inside My Drive","Q_Score":7,"Tags":"python,google-colaboratory","A_Id":56513444,"CreationDate":"2018-11-17T20:57:00.000","Title":"How to determine file path in Google colab?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I mounted my drive using this : \nfrom google.colab import drive\ndrive.mount('\/content\/drive\/')\nI have a file inside a folder that I want the path of how do I determine the path? \nSay the folder that contains the file is named 'x' inside my drive","AnswerCount":4,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":18640,"Q_Id":53355510,"Users Score":9,"Answer":"The path will be \/content\/drive\/My\\ Drive\/x\/the_file.","Q_Score":7,"Tags":"python,google-colaboratory","A_Id":53357067,"CreationDate":"2018-11-17T20:57:00.000","Title":"How to determine file path in Google colab?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing a Python script to monitor the status of a certain websites and report if any error via email notification.\nI am able to test the http status of the websites and websphere console urls.\nSince the admin (console) is on DMGR , my code is able to check the status of only DMGR but not the nodes inside the DMGR ,whenever few env is down .. only node goes down. I need a way to monitor the node's status as well.\nAny help would be appreciated.\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":71,"Q_Id":53378777,"Users Score":0,"Answer":"You application will be deployed to the JVMs on the nodes. 
\nIs your application configured to be accessible via the Node hostname [check virtualHost config]?\nif yes - thats the url, you need to check.","Q_Score":0,"Tags":"python-3.x,websphere,monitor,email-notifications","A_Id":53408409,"CreationDate":"2018-11-19T16:20:00.000","Title":"Websphere -check the status of nodes from a browser for automation purpose | Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"It just keeps failing when I do sudo apt-get upgrade because it failed to upgrade some package that based on Python3. The error was: undefined symbol: XML_SetHashSalt. I'd been searching around for solutions but there is no such answer on StackOverflow.\nThen at the end, I found an answer on not very related question mention that the library path for libexpat.so.1 pointing to \/usr\/local\/lib\/ may cause the issue. So I try to rename libexpat.so.1 to libexpat.so.1-bk then it works.\nSo I just re-post it here, hope it helps for anyone facing it.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1034,"Q_Id":53405066,"Users Score":0,"Answer":"It seems that you have broken your system.\nIf you are using apt, the \/usr\/local\/ should never be used.\nIf you are using \/usr\/local\/, do not use apt to manage your installation.","Q_Score":1,"Tags":"python-3.x","A_Id":53405117,"CreationDate":"2018-11-21T03:58:00.000","Title":"Apt-get error \"undefined symbol: xml_sethashsalt\" when install\/update python3","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to find out ways to secure REST API communication. The following are the items considered.\n\nEnabled token based authentication using JWT\nEnabled HTTPS and restrict requests over HTTP\n\nOur GUI interacts with service layer APIs which are secured as described above. All the requests from GUI contains sensitive information and hence the data has to be secured at every place.\nIn short, our GUI is secured with role based authentication and APIs are secured as above. Still I feel the communication between GUI and services are not secure enough.\n\nIs the payload from GUI secure enough? or should the payload also be encrypted from GUI itself?\n\nIf this is not the right place to ask this question, I am happy to move it to right place. \nPlease advice!\nThank you in advance","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3632,"Q_Id":53405327,"Users Score":1,"Answer":"What I understood from the post is that your GUI is secured role based and the api is secured using token and https.\nApart from these, as I understand you app is too sensitive, in that case I would do the following to add some extra layer of security.\n\nAdd a two step verfication for the GUI just to ensure that right person is logged in all time.\nEncrypt the data (i.e. 
the payload) may be using the public\/private key.In this case the server side needs to be changed a bit as it needs to decrypt the request .\nMake sure your token has a lifetime and expires after certain time.\n\nLet me know if you are looking for something else.","Q_Score":2,"Tags":"python,angularjs,rest,security,encryption","A_Id":53405466,"CreationDate":"2018-11-21T04:35:00.000","Title":"How to secure REST API payload while in transit?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We are developing an application on Google Cloud that makes use of Cloud Functions in Python. We're looking at developing a generic helper library that many of our functions will import.\nThe problem with this is if the helper library is changed in any way, all our functions will need to be redeployed.\nI'm trying to find a way to host (for want of a better word) our helper library (for example in Google Cloud Storage), and somehow import it into the main.py files, such that any changes to the helper library can be made without having to redeploy the functions. Is this possible at all?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":104,"Q_Id":53448833,"Users Score":3,"Answer":"This is not supported with the provided tools. You can only invoke code that was deployed with the function. There is no \"dynamic\" loading of code over the internet.\nAlso, in my opinion, this is a pretty bad idea, because your functions could break in a very profound way if there's a problem during loading of the remote code, or you accidentally (or someone maliciously) push something wrong. You're going to be better off getting all your code and libraries together at once, test it all at once, and deploy it all at once.\nYou're free to try to implement something yourself, but I strongly advise against it.","Q_Score":1,"Tags":"python,python-3.x,google-cloud-platform,google-cloud-functions","A_Id":53449175,"CreationDate":"2018-11-23T14:52:00.000","Title":"Is there a way to import a python helper library from a deployed Google Cloud Function, outside the function?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Situation\nI wrote a simple program in python. It's a simple socket chatting program. In my program, the client just connect to an address (ip, port) and send a message, while at this time the server is ready and receives the message and prints it. I can assure the program is correct, since I tried on my computer.\nI have a VM instance on Google Cloud Platform, which I can operate through ssh, a simple way provided by google cloud. I can make sure the server is working.\nProblem\nI start a simple tcp server, python program on my google cloud server. Then I start my client program on my computer. 
But I get this error:\n\nConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it\n\nor equivalently in Chinese:\n\nConnectionRefusedError: [WinError 10061] \u7531\u4e8e\u76ee\u6807\u8ba1\u7b97\u673a\u79ef\u6781\u62d2\u7edd\uff0c\u65e0\u6cd5\u8fde\u63a5\u3002\n\nHow do I solve this problem and connect to my google cloud server?\nI guess maybe the firewall refused my computer's connection, but have no idea how to solve it.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3164,"Q_Id":53454732,"Users Score":4,"Answer":"This error means that your program is not listening on 0.0.0.0 port XYZ.\nCheck to see if your program is instead listening on localhost. If it is change to 0.0.0.0 which means all available networks. localhost means do not listen on any network interfaces and only accept connections from inside the computer.\nThen double check the port number.\nTo see if you have something listening run this command (Linux): netstat -at\nLook for a line with your port XYZ.\nWhen you start your program, make sure that it does not error when creating the listener. If you are trying to use a port number below 1024, you will need to lauch the program with root privileges.","Q_Score":1,"Tags":"python,server,google-cloud-platform,google-compute-engine,firewall","A_Id":53455102,"CreationDate":"2018-11-24T02:40:00.000","Title":"Can't connect to my google cloud VM instance through tcp using python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have connected zapier to a webhook I am listening too, which sends a JSON file into my s3 bucket. \nI have some python code that I want to execute when a new file is uploaded into the bucket, in real time over the file. \nWhat is the best way to 'listen' for the upload of this file into the s3 bucket?","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":2916,"Q_Id":53471129,"Users Score":2,"Answer":"David here, from the Zapier Platform team. \nSeems like you've already found your answer, which is great. I just wanted to plug Zapier as an option (since you had mentioned you're already using it). Our S3 integration has a \"new file in bucket\" trigger, which you can combine with any other step (such as a Python Code step). Additionally, you could skip the middleman and structure your zap as:\n\nsome trigger\nAdd file to S3\nRun Python\n\nAnd not need to worry about webhooks at all.\n\u200bLet me know if you've got any other questions!","Q_Score":4,"Tags":"python,amazon-s3,webhooks,zapier","A_Id":53488253,"CreationDate":"2018-11-25T19:34:00.000","Title":"Best way to 'listen' to s3 bucket for new file?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have test that executes for 6 hours. After 2 hours, my driver slows down due to nature of Chrome browser. Solution is to close browser and restart it. I found out that doing driver.quit() helps in performance due to some internal memory that is being used which causes test to become slow. 
I am wondering is there option to use driver.quit() without closing drivers because i need cookies that were generated in that session as well as not killing Python script that is being ran at that moment.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1274,"Q_Id":53544235,"Users Score":1,"Answer":"The purpose of driver.quit() is to close all browser windows and terminate WebDriver session. So no, you cannot use driver.quit() without closing drivers - that's what it does.\nIn my opinion, you should look at why at all you have this issue:\n\nIs there really a reason to run 6 hours of testing within the same session? Of course there could be special circumstances, but good practice is to cut entire test collection into independent sets, where each set can run on its own, on \"clean\" environment (i.e. new browser session). Not only this will prevent the problem you are facing, but also improves reliability of the tests (i.e. domino effect when one tests messes all future test executions), ability to debug (imagine you have a problem with a test that runs on hour #3, and the problem is not reproducible when you run it alone, or you can't run it alone), and flexibility of the executions.\nWhy does the browser need to be restarted after 2 hours? No, it's not a \"nature of Chrome\". It's a bug somewhere - memory leak, or something else. It might be worth investigating what it is about. Because you can stop tests after 2 hours, but are you going to tell your users to not use your application for more than 2 hours? Even if it's a bug in Selenium driver, it might be worth reporting it to selenium developers for your, and everyone else's benefit.","Q_Score":0,"Tags":"python,selenium,selenium-webdriver,selenium-chromedriver","A_Id":53544604,"CreationDate":"2018-11-29T17:10:00.000","Title":"Selenium webdriver closing without session","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a script that I use to send HTTP requests and Websocket data and I had to use an external program as I couldn't find a Websocket library that supports SOCKS5 proxies. \nSo, I've found about Proxifier and tried running my Python script with a rule that I have written in Proxifier but it didn't yield and good result. \nOther programs seemed to work fine, chrome.exe managed to go through the proxy, and I have no idea why my Python script won't go...","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":847,"Q_Id":53570121,"Users Score":0,"Answer":"Same issues with proxifier.\nSeems reason in portable version of application.\nTried standard version and everything works fine.\nHope this helps.","Q_Score":0,"Tags":"python,python-3.x,proxy","A_Id":55796085,"CreationDate":"2018-12-01T11:02:00.000","Title":"Running a Python script through Proxifier","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i can connect locally to my mongodb server with the address 0.0.0.0\/0. However, when I deploy my code to the cloud I get the error deploy to google cloud function.\ngoogle cloud function with python 3.7 (beta)\natlas mongo db\npython lib:\n-pymongo\n-dnspython\nError: function crashed. 
Details:\nAll nameservers failed to answer the query _mongodb._tcp.**-***.gcp.mongodb.net. IN SRV: Server ***.***.***.*** UDP port 53 answered SERVFAIL\nTraceback (most recent call last): File \"\/env\/local\/lib\/python3.7\/site-packages\/pymongo\/uri_parser.py\", line 287, in _get_dns_srv_hosts results = resolver.query('_mongodb._tcp.' + hostname, 'SRV') File \"\/env\/local\/lib\/python3.7\/site-packages\/dns\/resolver.py\", line 1132, in query raise_on_no_answer, source_port) File \"\/env\/local\/lib\/python3.7\/site-packages\/dns\/resolver.py\", line 947, in query raise NoNameservers(request=request, errors=errors) dns.resolver.NoNameservers: All nameservers failed to answer the query _mongodb._tcp.**mymongodb**-r091o.gcp.mongodb.net. IN SRV: Server ***.***.***.*** UDP port 53","AnswerCount":2,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":3909,"Q_Id":53576199,"Users Score":4,"Answer":"Finally, after being stuck for 2 days (I felt so stupid, it took me the whole night), I just changed the connection\nfrom\n\nSRV connection string (3.6+ driver)\n\nto\n\nStandard connection string (3.4+ driver)\n\nmongodb:\/\/:@-shard-00-00-r091o.gcp.mongodb.net:27017,-shard-00-01-r091o.gcp.mongodb.net:27017,-shard-00-02-r091o.gcp.mongodb.net:27017\/test?ssl=true&replicaSet=-shard-0&authSource=admin&retryWrites=true\nor you can see your connection string in Atlas MongoDB.\nI don't know why the SRV connection string can't connect in Google Cloud Functions; maybe it is not supported yet, or it is just a misconfiguration.","Q_Score":3,"Tags":"python,mongodb,google-cloud-functions,mongodb-atlas","A_Id":53582632,"CreationDate":"2018-12-02T00:09:00.000","Title":"All nameservers failed to answer UDP port 53 Google cloud functions python 3.7 atlas mongodb","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"So instead of having data-item-url=\"https:\/\/miglopes.pythonanywhere.com\/ra%C3%A7%C3%A3o-de-c%C3%A3o-purina-junior-10kg\/\"\nit keeps on appearing \ndata-item-url=\"http:\/\/localhost\/ra%C3%A7%C3%A3o-de-c%C3%A3o-purina-junior-10kg\/\"\nhow do i remove the localhost so my snipcart can work on checkout?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":88,"Q_Id":53587049,"Users Score":0,"Answer":"Without more details of where this tag is coming from it's hard to know for sure... but most likely you need to update your site's hostname in the Wagtail admin, under Settings -> Sites.","Q_Score":0,"Tags":"localhost,wagtail,pythonanywhere,snipcart","A_Id":53600830,"CreationDate":"2018-12-03T03:23:00.000","Title":"data-item-url is on localhost instead of pythonanywhere (wagtail + snipcart project)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a Scrapy crawler and I want to rotate the IP so my application will not be blocked. I am setting IP in scrapy using request.meta['proxy'] = 'http:\/\/51.161.82.60:80' but this is a VM's IP. My question is can VM or Machine's IP be used for scrapy or I need a proxy server?\nCurrently I am doing this. This does not throw any error but when I get response from http:\/\/checkip.dyndns.org it is my own IP not updated IP which I set in meta. 
That is why I want to know if I do need proxy server.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":64,"Q_Id":53645854,"Users Score":0,"Answer":"Definitely you need a proxy server. meta data is only a field in the http request. the server side still knows the public ip that really connecting from the tcp connection layer.","Q_Score":0,"Tags":"python,scrapy,web-crawler","A_Id":53647528,"CreationDate":"2018-12-06T06:38:00.000","Title":"Can VM \/ Machine IP be used instead of Proxy Server for Scrapy","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Python's built-in HTTP clients don't have many features, so even the Python docs recommend using requests. But there's also urllib3, which requests, itself uses, and they share some core developers, making me think they're more complementary than competing.\nWhen would I use urllib3 instead of requests? What features does requests add on top of urllib3?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":48,"Q_Id":53662099,"Users Score":0,"Answer":"Requests was conducted on the basis of urlib3 encapsulation.\nSince Requests are already an encapsulated library, many functions can be simplified. For example: timeout setting, proxy setting, file upload, get cookies, etc.\nHowever,requests can only be used directly and cannot be invoked asynchronously, so requests are slow.\nSo, if you're writing small spiders that don't require much speed, consider using requests.","Q_Score":2,"Tags":"python,python-requests,urllib3","A_Id":53662465,"CreationDate":"2018-12-07T01:38:00.000","Title":"Choosing between Python HTTP clients urllib3 and requests","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a weighted Graph using networkx and the topology is highly meshed. I would like to extract a number of paths between two nodes with distance minimization.\nTo clarify, the dijkstra_path function finds the weighted shortest path between two nodes, I would like to get that as well as the second and third best option of shortest weighted paths between two nodes.\nI tried using all_simple_paths and then ordering the paths in distance minimization order but it is extremely time consuming when the network is meshed with 500 nodes or so.\nAny thoughts on the matter? Thank you for your help!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":270,"Q_Id":53674393,"Users Score":1,"Answer":"Try networkx's shortest_simple_paths.","Q_Score":0,"Tags":"python-3.x,networkx,dijkstra","A_Id":53677838,"CreationDate":"2018-12-07T17:35:00.000","Title":"Descending order of shortest paths in networkx","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The website is running using node server. 
I want to create a python flask rest api to connect to pop up chat window of this website.My python flask is running in 8085 port and node server is running in 8082 port.\nin python flask app.py \n\n@app.route('\/') \n def hello_world():\n return render_template('popupchat.html') \n@app.route('\/chat',methods=[\"POST\"]) def chat():\n\nthis popupchat.html is pointing to website pupupchat window. and there is one bind.js script having $.post(\"\/chat\" , if i want to connect this pop up chat window running in node server to python flask server , how i will connect .\ni appreciate your suggestions","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":949,"Q_Id":53711272,"Users Score":0,"Answer":"If you are just debugging try to adjust the port on the side of the chat window. You can tell the chat window to reach a certain port by altering the url you are trying to reach. Example:\nhttp:\/\/your_ip:your_port\/your_route or in your case http:\/\/your_ip:8085\/your_route\nIf you are already deployed please talk to your admin since this might be depending on your server.","Q_Score":1,"Tags":"javascript,python,html,flask-restful","A_Id":53712290,"CreationDate":"2018-12-10T18:07:00.000","Title":"python flask running in one port connected to UI running in node server different port","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"HIHello everyone\nI have a Raspberry Pi that contain some data,\non another side, I have a server with a server, also there is a webpage connecting with the same server.\nwhat do I need? \n1- the Raspberry Pi must send its data to a server \n2-if the user set some data to the database by webpage the Raspberry Pi must get this data \n3-Raspberry Pi must listen to the database to know if there is an update or not.\nwhat the best way to do these points, is there any protocol for IoT can do these? \nI need any hint :) \nthank u for all.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":483,"Q_Id":53715817,"Users Score":0,"Answer":"Try connecting your RPi with your server through any means of local area connection, it could be a connection through a wifi network or using a lan cable or even through serial ports. If successful in this attempt then you can access disk drive folders inside the server. In the webpage running on server ,make it listen for certain values and write the status to a text file or an database. Then make RPi to continuously monitor those file for data updation and make it work according to it.","Q_Score":0,"Tags":"php,python,database,raspberry-pi,iot","A_Id":53722341,"CreationDate":"2018-12-11T00:40:00.000","Title":"how set\/get data from Raspberry Pi to server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My goal is to add 10M contacts to telegram.\nHow can we add contacts to telegram using telegram API? \nI have tried using telethon in which I batched 500 contacts in one request. 
However, telegram responded all such requests with all contacts in retry_contacts and none were imported.\nI have also found out a solution to convert the txt file of\n10M contacts to csv file and import them using an android app.\nBut this takes approx 10 mins for 10k contacts. So this won't be\na good idea for adding 10M contacts.\nAny other method for having this done is also welcomed.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":9550,"Q_Id":53730525,"Users Score":9,"Answer":"This is not possible. Telegram has deliberately set limits on the number of contacts you can add. Initially you can add about 5000 contacts and after that you can add about 100 more every day. This is because of security not decreasing their API load. If you could add 10M numbers, you could easily map @usernames to numbers which is against Telegram privacy policy.\nIn my experience, the best practical option is to add an array of 10 numbers each time using telethon's ImportContactsRequest, until you get locked. Then try 24 hours later again until you get locked again, and so on. This is the fastest solution and due to Telegram restrictions, if you only have 1 SIM card, it takes around 274 years to add 10M contacts.","Q_Score":6,"Tags":"python,automation,contacts,telegram,telethon","A_Id":53736422,"CreationDate":"2018-12-11T18:46:00.000","Title":"How to add Millions of contacts to telegram?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am creating a REST API in python and currently, there is a route to a list of the resource instructors:\nGET \/instructors\nBut, I also have another client, used as a CRM for users who have an admin-role.\nFor the client, I want to display a list of instructors but with more fields and different properties.\nMy first thought is also to have the route:\nGET \/instructors \nThis obviously conflicts with the route above. 
\nWhat is the best name for this route?\nGET \/instructors\/admin\nor\nGET \/admin\/instructors\nor \nGET \/instructors?admin=True\nI am not sure how to approach this.\nThanks in advance.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":938,"Q_Id":53787342,"Users Score":0,"Answer":"I am glad that we are finally talking about naming conventions. This totally depends on personal preference, your use cases, and how your project has been designed, so I will share my view.\nLike you said, all of the above seem good, but how I would do it is\n\nGET\/Instructors\/all \nGET\/Instructors\/admin \nGET\/Instructors\/any other special role\n\nYou may use queries when something specific has to be done with these roles, as in \nGET\/Instructors\/all?credibility=PHD \nFor something like the above, it's never a good idea to show everything on just parent calls like GET\/Instructor; as you said, it firstly creates confusion and also makes your endpoints harder to read over time as the complexity of your application increases.","Q_Score":0,"Tags":"python,rest,api","A_Id":53787444,"CreationDate":"2018-12-14T21:50:00.000","Title":"Rest api naming conventions for consumer and admin","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to use zeep library to provide soap call in python code and its okay it works when I try to run with python. Then, I'm trying to use jython to run this code (i need jython because next step will be on the server that uses jython to compile) and when I try to install lxml for jython it gives me this error:\nerror:Compiling extensions is not supported on Jython\nWhen I search for this situation, I found that jython doesn't support c based libraries. \nSo, there is a solution with jython-jni bridge but I couldn't understand how to be.\nIs there another solution? Or can you give me an obvious example?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":323,"Q_Id":53815005,"Users Score":0,"Answer":"I couldn't manage to implement the JNI bridge, but I created a new layer between Jython and the server. I mean, I made a REST call from the Jython compiler, and this call listens to my server for the SOAP call, and it worked.","Q_Score":0,"Tags":"java,python,lxml,jython","A_Id":53870455,"CreationDate":"2018-12-17T12:12:00.000","Title":"Jython can not use lxml library","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on socket programming in python. I am a bit confused with the concept of s.listen(5) and multithreading.\nAs I know, s.listen(5) is used so that the server can listen upto 5 clients.\nAnd multithreading is also used so that server can get connected to many clients.\nPlease explain me in which condition we do use multithreading?\nThanks in advance","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1236,"Q_Id":53880028,"Users Score":0,"Answer":"As I know, s.listen(5) is used so that the server can listen upto 5 clients.\n\nNo. s.listen(5) declares a backlog of size 5. That means that the listening socket will let 5 connection requests sit in a pending state before they are accepted. 
Each time a connection request is accepted it is no longer in the pending backlog. So there is no limit (other than the server resources) to the number of accepted connections.\nA common use of multithreading is to start a new thread after a connection has been accepted to process that connection. An alternative is to use select on a single thread to process all the connections in the same thread. It used to be the rule before multithreading became common, but it can lead to more complex programs","Q_Score":0,"Tags":"python-3.x,multithreading,sockets","A_Id":53887480,"CreationDate":"2018-12-21T06:15:00.000","Title":"server.listen(5) vs multithreading in socket programming","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a tool for my company created to get data from our Facebook publications. It has not been working for a while, so I have to get all the historical data from June to November 2018. \nMy two scripts (one that get title and type of publication, and the other that get the number of link clicks) are working well to get data from last pushes, but when I try to add a date range in my Graph API request, I have some issues:\n\nthe regular query is [page_id]\/posts?fields=id,created_time,link,type,name\nthe query for historical data is [page_id]\/posts?fields=id,created_time,link,type,name,since=1529280000&until=1529712000, as the API is supposed to work with unixtime\nI get perfect results for regular use, but the results for historical data only shows video publications in Graph API Explorer, with a debug message saying:\n\n\nThe since field does not exist on the PagePost object.\n\nSame for \"until\" field when not using \"since\". 
I tried to replace \"posts\/\" with \"feed\/\" but it returned the exact same result...\nDo you have any idea of how to get all the publications from a Page I own on a certain date range?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":356,"Q_Id":53883849,"Users Score":0,"Answer":"So it seems that it is not possible to request this kind of data unfortunately, third party services must be used...","Q_Score":0,"Tags":"python,facebook,facebook-graph-api,unix-timestamp","A_Id":53889179,"CreationDate":"2018-12-21T11:15:00.000","Title":"Date Range for Facebook Graph API request on posts level","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"import random\nimport asyncio\nimport json\nimport aiohttp\nimport sys\nimport urllib\nfrom lxml.html.soupparser import parse\nfrom aiohttp import ClientSession\nfrom threading import Thread\n\ndef ttest():\n async def fetch(url, session):\n headers = {\n 'Host': 'example.com'\n }\n cookies2 = {\n 'test': 'test'\n }\n\n data = '{\"test\":\"test\"}'\n async with session.post(url, data=data, headers=headers, cookies=cookies2) as response:\n return await response.read()\n async def bound_fetch(sem, url, session):\n async with sem:\n html = await fetch(url, session)\n print(html)\n\n\n async def run(r):\n url = \"https:\/\/test.com\"\n tasks = []\n sem = asyncio.Semaphore(1000)\n async with aiohttp.ClientSession() as session:\n for i in range(r):\n task = asyncio.ensure_future(bound_fetch(sem, url, session))\n tasks.append(task)\n responses = asyncio.gather(*tasks)\n await responses\n\n number = 1\n loop = asyncio.get_event_loop()\n future = asyncio.ensure_future(run(number))\n loop.run_until_complete(future)\n\nttest()\n\nThis is the error: TypeError: _request() got an unexpected keyword argument 'cookies'\nI want use cookies like you see in the code, but i can not, can anyone help me?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4119,"Q_Id":53892470,"Users Score":1,"Answer":"The feature was added on aiohttp GitHub master but not released yet.\nPlease either install aiohttp from GitHub or wait for a while for aiohttp 3.5 release.\nI hope to publish it in a few days.","Q_Score":3,"Tags":"python-3.x,python-asyncio,aiohttp","A_Id":53894565,"CreationDate":"2018-12-22T01:54:00.000","Title":"TypeError: _request() got an unexpected keyword argument 'cookies' (aiohttp)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"As a legacy from the previous version of our system, I have around 1 TB of old video files on AWS S3 bucket. Now we decided to migrate to AWS Media Services and all those files should be moved to MediaStore for the access unification.\nQ: Is there any way to move the data programmatically from S3 to MediaStore directly?\nAfter reading AWS API docs for these services, the best solution I've found is to run a custom Python script on an intermediate EC2 instance and pass the data through it.\nAlso, I have an assumption, based on pricing, data organization and some pieces in docs, that MediaStore built on top of S3. 
That's why I hope to find a more native way to move the data between them.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":376,"Q_Id":53897584,"Users Score":0,"Answer":"I've clarified this with AWS support. There is no way to transfer files directly, although, it's a popular question and, probably, will be implemented.\nNow I'm doing this with an intermediate EC2 server, a speed of internal AWS connections between this, S3 and MediaStore is quite good. So I would recommend this way, at least, for now.","Q_Score":0,"Tags":"python,amazon-web-services,amazon-s3,amazon-ec2,aws-mediastore","A_Id":53904676,"CreationDate":"2018-12-22T16:52:00.000","Title":"Move objects from AWS S3 to MediaStore","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"When I am running a Python micro-service in a dockerized or kubernetes container it works just fine. But with Istio service mesh, it is not working.\nI have added ServiceEntry for two of my outbound external http apis. It seems I can access the url content form inside the container using curl command which is inside service mesh. So, I think the service entries are fine and working. \nBut when I try from the micro-service which uses xml.sax parser in Python, it gives me the upstream connect error or disconnect\/reset before headers though the same application works fine without Istio.\nI think it is something related to Istio or Envoy or Python. \nUpdate: I did inject the Istio-proxy side-car. I have also added ServiceEntry for external MySQL database and mysql is connected from the micro-service.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1036,"Q_Id":53898206,"Users Score":2,"Answer":"I have found the reason for this not working. My Python service is using xml.sax parser library to parse xml form the internet, which is using the legacy urllib package which initiate http\/1.0 request. \nEnvoy doesn't support http\/1.0 protocol version. Hence, it is not working. I made the workaround by setting global.proxy.includeIPRanges=\"10.x.0.1\/16\" for Istio using helm. This actually bypass the entire envoy proxy for all outgoing connections outside the given ip ranges.\nBut I would prefer not to globally bypass Istio.","Q_Score":0,"Tags":"python,kubernetes,istio,envoyproxy","A_Id":53969796,"CreationDate":"2018-12-22T18:17:00.000","Title":"Istio egress gives \"upstream connect error or disconnect\/reset before headers\" errors from python micro-service","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I\u2019m extracting the SSL certificate from a website using the socket + ssl library in python. My understanding that it connects using the preferred method used by the server. \nUsing this method I\u2019m able to identify what version of SSL is used to connect, but I also need to identify whether the website supports SSL v3, in the case when the default connection is TLS. \nIs there a way to identify this information without manually testing multiple SSL connections?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":84,"Q_Id":53900577,"Users Score":0,"Answer":"I don't think sites advertise what they support. 
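One way to approximate this from Python, short of a full scanner, is to attempt handshakes pinned to one protocol version at a time. This is only a rough sketch of mine, not part of the answer: note that SSLv3 is usually compiled out of modern OpenSSL builds, so a local probe for it often cannot even be attempted, which is another reason to lean on an external tester.

import socket
import ssl

def negotiated_version(hostname, version, port=443, timeout=5):
    # Attempt a handshake pinned to a single protocol version; None means it failed.
    try:
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE   # only protocol negotiation matters here
        ctx.minimum_version = version
        ctx.maximum_version = version
        with socket.create_connection((hostname, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                return tls.version()       # e.g. "TLSv1.2"
    except (ssl.SSLError, OSError, ValueError):
        return None

if __name__ == "__main__":
    host = "example.com"  # placeholder
    for v in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1,
              ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_3):
        print(v.name, "->", negotiated_version(host, v) or "refused/unsupported")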
Rather, it's negotiated between client and server.\nYou could use the excellent server tester at www.ssllabs.com. It will try lots of configurations and report what the server in question supports. (Hopefully the site doesn't support SSL v3!)","Q_Score":0,"Tags":"python,ssl,openssl,ssl-certificate","A_Id":53900613,"CreationDate":"2018-12-23T01:40:00.000","Title":"Testing SSL v3 Support in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to extract the text just after strong tag from html page given below? how can i do it using beautiful soup. It is causing me problem as it doesn't have any class or id so only way to select this tag is using text.\n{strong}Name:{\/strong} Sam smith{br}\nRequired result\nSam smith","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":77,"Q_Id":53914377,"Users Score":-1,"Answer":"Thanks for all your answers but i was able to do this by following:\nb_el = soup.find('strong',text='Name:')\nprint b_el.next_sibling\nThis works fine for me. This prints just next sibling how can i print next 2 sibling is there anyway ?","Q_Score":0,"Tags":"python,web-scraping,beautifulsoup","A_Id":53914744,"CreationDate":"2018-12-24T13:58:00.000","Title":"extracting text just after a particular tag using beautifulsoup?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm creating documents from web scraping in python and uploading them to Firestore.\nTo do so I'm adding them to a dictionary and uploading them from a for loop in python, one by one(ideally would be better to upload the collection at once, but that doesn't seem an option). I want to use batches, however they have the 500 limit per batch and I need to do more than 100,000 operations. The operations are merely set() operations and a couple of update()\nIs there a function to know the current size of the batch so I can reinitialize it?\nWhat is the best way to use batches for more than 500 operations in python?","AnswerCount":3,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1725,"Q_Id":53921109,"Users Score":3,"Answer":"The maximum number of operations in a Batch is 500. If you need more operations, you'll need multiple batches.\nThere is no API to determine the current number of operations in a Batch. If you need that, you will have to track it yourself.","Q_Score":3,"Tags":"python,firebase,google-cloud-firestore","A_Id":53921780,"CreationDate":"2018-12-25T09:39:00.000","Title":"How to batch more than 500 operations in python for firestore?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"if someone find my telegram chat ID (instead of my phone number or username), what can do with that?\nis it dangerous?!\nis it a big deal that someone can find my ID? should I worry about it?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3297,"Q_Id":53924548,"Users Score":2,"Answer":"If someone with a (user)bot wants to send you a message, they can do that via the userID only. 
But only if they have \"seen\" you. Seeing you is considering their client to receive a message from you, be it in a private message or group.\nBots can only send you a message if you have started it in a private chat, otherwise they can only send you a message in a group they share with you.\nSo there is no real risk of people knowing your ID.","Q_Score":2,"Tags":"telegram,telegram-bot,python-telegram-bot,telepot","A_Id":54479855,"CreationDate":"2018-12-25T18:02:00.000","Title":"what can be done by a telegram chat ID?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using vpython library in spyder. After importing the library when I call simple function like print('x') or carry out any assignment operation and execute the program, immediately a browser tab named localhost and port address opens up and I get the output in console {if I used print function}. \nI would like to know if there is any option to prevent the tab from opening and is it possible to make the tab open only when it is required.\nPS : I am using windows 10, chrome as browser, python 3.5 and spyder 3.1.4.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":238,"Q_Id":53943901,"Users Score":0,"Answer":"There is work in progress to prevent the opening of a browser tab when there are no 3D objects or graph to display. I don't know when this will be released.","Q_Score":1,"Tags":"python,spyder,vpython","A_Id":53982995,"CreationDate":"2018-12-27T10:57:00.000","Title":"Vpython using Spyder : how to prevent browser tab from opening?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have created a chatbot using AWS lex and Lambda. Bot works as expected. I have store the slot data into sessionAttributes. The issue I am facing is when I communicate with bot from my website and if I open another tab of my site, it does't show the previous chat which happened in older tab(here both tabs are open).\nOn every new tab chat starts from start.\nRequirement is to continue from where it was left in previous tab.\nAm I missing any flow here ? I have gone though AWS doc but didn't get any clear picture to do the same. Any example of the same will help better.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":116,"Q_Id":53946681,"Users Score":0,"Answer":"You need to store the chat into some database of your own. 
On page load, you need to fetch the chat of current session or current user (based on your requirement).\n That way even if the user refreshes the page or open up a new tab, he will be able to see the chat he already had with the chatbot.","Q_Score":0,"Tags":"python,amazon-web-services,aws-lambda,chatbot,amazon-lex","A_Id":53955733,"CreationDate":"2018-12-27T14:32:00.000","Title":"maintain aws lex chat communication on all pages of website","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm attempting to automate a website interaction that requires that a USB key be inserted, an alert box then asks you to verify, and after hitting okay it opens a local program that requires a pin be input to activate the key. I'm looking for a way to have the bot enter the pin into the external program. Is there a library that can automate this for me?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":120,"Q_Id":53948970,"Users Score":0,"Answer":"I ended up using a keyboard input library to take advantage of the fact that windows changes window focus to the security window that opens, so now it emulates keyboard input to press enter, enter the pin, and press enter again.","Q_Score":0,"Tags":"python,selenium,internet-explorer","A_Id":54115153,"CreationDate":"2018-12-27T17:52:00.000","Title":"SSL USB Key Authentication in Selenium","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"so, I've tried to send a javascript XMLHttpRequest on a personally written http server that uses python's core functionality (i.e. socket and regex). took me a while but I finally had it going. so, I tested it out and after debugging the regex for parsing http POST and GET requests, I tested it out through the python console and it worked fine.\nin short, the http server receives a GET loopback request with personal identification, and reads whatever text was sent to it as data. \na tts loopback server.\nI wanted to do it because asides from selenium that honestly seemed like the only way for me to read random text from the screen using javascript (I could create a browser extension that works alongside it). I already created something for parsing html, so that's not the problem. later I wanted to extend the application and create a GUI using java for generally tts'ing files so I could listen to them while programming.\nthe problem was that although the socket was bound to port 80 on the loopback interface (127.0.0.1), when I sent an XMLHttpRequest to localhost, the server was not responding. I checked for incoming connections and there were none. from the terminal it worked fine though.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":52,"Q_Id":53971452,"Users Score":0,"Answer":"if anyone else is wondering, no, it's not possible. unless it bypasses CORS restriction. (sadly). 
If anyone wants to do something similar, he has to either bypass CORS restrictions OR if you're building with python you can just use selenium and create a \"custom\" browser extension.","Q_Score":0,"Tags":"javascript,python,http,browser","A_Id":53972892,"CreationDate":"2018-12-29T16:49:00.000","Title":"javascript on browser send xmlhttprequest onto loopback server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to use Scrapy shell to try and figure out the selectors for zone-h.org. I run scrapy shell 'webpage' afterwards I tried to view the content to be sure that it is downloaded. But all I can see is a dash icon (-). It doesn't download the page. I tried to enter the website to check if my connection to the website is somehow blocked, but it was reachable. I tried setting user agent to something more generic like chrome but no luck there either. The website is blocking me somehow but I don't know how can I bypass it. I digged through the the website if they block crawling and it doesn't say it is forbidden to crawl it. Can anyone help out?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":621,"Q_Id":53988585,"Users Score":0,"Answer":"Can you use scrapy shell \"webpage\" on another webpage that you know works\/doesn't block scraping?\nHave you tried using the view(response) command to open up what scrapy sees in a web browser?\nWhen you go to the webpage using a normal browser, are you redirected to another, final homepage?\n- if so, try using the final homepage's URL in your scrapy shell command\nDo you have firewalls that could interfere with a Python\/commandline app from connecting to the internet?","Q_Score":1,"Tags":"python,scrapy,web-crawler","A_Id":53989874,"CreationDate":"2018-12-31T14:33:00.000","Title":"Scrapy shell doesn't crawl web page","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am currently scraping a website for work to be able to sort the data locally, however when I do this the code seems to be incomplete, and I feel may be changing whilst I scroll on the website to add more content. Can this happen ? And if so, how can I ensure I am able to scrape the whole website for processing?\nI only currently know some python and html for web scraping, looking into what other elements may be affecting this issue (javascript or ReactJS etc). \nI am expecting to get a list of 50 names when scraping the website, but it only returns 13. I have downloaded the whole HTML file to go through it and none of the other names seem to exist in the file, i.e. why I think the file may be changing dynamically","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":58,"Q_Id":54008539,"Users Score":0,"Answer":"Yes, the content of the HTML can be dynamic, and Javascript loading should be the most essential . For Python, scrapy+splash maybe a good choice to get started. \nDepending on how the data is handled, you can have different methods to handle dyamic content HTML","Q_Score":0,"Tags":"python,web-scraping","A_Id":54008662,"CreationDate":"2019-01-02T14:59:00.000","Title":"The HTML code I scrape seems to be incomplete in comparison to the full website. 
Could the HTML be changing dynamically?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a Flask app that uses selenium to get data from a website. I have spent 10+ hours trying to get heroku to work with it, but no success. My main problem is selenium. with heroku, there is a \"buildpack\" that you use to get selenium working with it, but with all the other hosting services, I have found no information. I just would like to know how to get selenium to work with any other recommended service than heroku. Thank you.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":777,"Q_Id":54058237,"Users Score":0,"Answer":"You need hosting service that able to install Chrome, chromedriver and other dependencies. Find for Virtual Private hosting (VPS), or Dedicated Server or Cloud Hosting but not Shared hosting.","Q_Score":1,"Tags":"python,selenium,heroku,hosting","A_Id":54059218,"CreationDate":"2019-01-06T02:49:00.000","Title":"How does selenium work with hosting services?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am doing a sentiment analysis project about local people's attitudes toward the transportation service in Hong Kong. I used the Twitter API to collect the tweets. However, since my research target is the local people in Hong Kong, tweets posted from, for instance, travelers should be removed. Could anyone give me some hints about how to extract tweets posted from local people given a large volume of Twitter data? My idea now is to construct a dictionary which contains traveling-related words and use these words to filter the tweets. But it may seem not to work\nAny hints and insights are welcomed! Thank you!","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":42,"Q_Id":54058996,"Users Score":2,"Answer":"There are three main ways you can do this.\n\nLanguage. If the user is Tweeting in Cantonese - or another local language - there is less chance they are a traveller compared to, say, Russian.\nUser location. If a user has a location present in their profile, you can see if it is within Hong Kong.\nUser timezone. If the user's timezone is the same as HK's timezone, they may be a local.\n\nAll of this is very fuzzy.","Q_Score":0,"Tags":"python,twitter,web-crawler,sentiment-analysis,social-media","A_Id":54060976,"CreationDate":"2019-01-06T05:49:00.000","Title":"How to extract tweets posted only from local people?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using select() to check if there is data in a socket with a timeout of 5 seconds. I'm willing to know if calling select() block only the thread or the whole program, if there is no data in the socket, select() will block for 5 seconds only the thread so the rest of the program can run freely or block everything until timeout?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":49,"Q_Id":54059866,"Users Score":0,"Answer":"Yes, select will only block the current thread. 
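As a small illustration (my sketch, not the answer's): a worker thread can sit in a 5-second select() without holding up the rest of the program.

import select
import socket
import threading

def reader(sock):
    # Blocks for at most 5 seconds *in this thread only*.
    ready, _, _ = select.select([sock], [], [], 5)
    if ready:
        print("data:", sock.recv(4096))
    else:
        print("timed out, no data")

sock = socket.create_connection(("example.com", 80))  # placeholder endpoint
threading.Thread(target=reader, args=(sock,)).start()
print("main thread keeps running while the worker waits")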
The other threads can work freely.","Q_Score":0,"Tags":"python,select,block","A_Id":54059877,"CreationDate":"2019-01-06T08:30:00.000","Title":"Does calling Select() block only the thread or the whole program?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using selenium to scrape information from a website.\nThe page is behind a login so I can't provide an example - but basically I need to gather data on approx. 800 fields on the one page.\nCurrently it boils down to me navigating to the correct page and then running \nfor i in driver.find_elements_by_xpath('\/\/*[@id]'):\n some_list.append(i.get_attribute('textContent'))\nMy question is;\n\nDoes using get_attribute place any impact on the responding server?\n\nOr is the full page 'cached' and then I am simply reading the values that are already loaded?\nJust want to make sure I'm being kind to the other party and not doing 800 calls for get_attribute!\nThanks.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":50,"Q_Id":54084584,"Users Score":5,"Answer":"get_attribute is retrieving data from sources already downloaded. You are not making requests to a web server when you execute that command","Q_Score":1,"Tags":"python,selenium,web-scraping","A_Id":54085021,"CreationDate":"2019-01-08T02:44:00.000","Title":"Selenium - How much am I hitting the server?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am a Python developer. I wanted to create some online shopping stores which will be fully customized, Database will Mysql, Redis for caching, mail-gun for mailing, AWS hosting and Theme may be customized. I am confused in both platforms Magento and Shopify. Please Help Which have the best integration with python.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":201,"Q_Id":54103457,"Users Score":0,"Answer":"Yes Rehan is correct to go with the magento framework","Q_Score":0,"Tags":"python,magento,flask,e-commerce,shopify","A_Id":64007871,"CreationDate":"2019-01-09T04:54:00.000","Title":"Magento or Shopify which is best for integration with Python or which provide best APIs for Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have apps deployed in AWS, like elastic search and ec2 instances inside VPC. Is there any service I can use to lookup the type of service running on the IP address from my VPC log. All my components are inside VPC I have the vpc log to get the ip address , Mostly it's all private ipv4 addresses.\nAny API in python or Java will be helpful.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1384,"Q_Id":54140167,"Users Score":1,"Answer":"There is no service to identify the service. 
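The aws ec2 describe-network-interfaces lookup suggested just below can also be driven from Python with boto3. A rough sketch of mine (region and the example address are placeholders; large accounts would want pagination):

import boto3

def describe_private_ip(ip, region="us-east-1"):   # region is a placeholder
    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.describe_network_interfaces()        # paginate for large accounts
    for eni in resp["NetworkInterfaces"]:
        if eni.get("PrivateIpAddress") == ip:
            # The description/attachment usually hints at the owning service
            # (an ELB, RDS, Lambda, or a plain EC2 instance).
            return {
                "description": eni.get("Description"),
                "interface_type": eni.get("InterfaceType"),
                "instance_id": eni.get("Attachment", {}).get("InstanceId"),
            }
    return None

print(describe_private_ip("10.0.1.23"))  # example address from a VPC flow log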
However you can get an idea on what that IP is associated with via aws ec2 describe-network-interfaces","Q_Score":1,"Tags":"java,python,amazon-web-services,amazon-ec2,vpc","A_Id":54144184,"CreationDate":"2019-01-11T03:52:00.000","Title":"How to identify AWS component by IP address lookup?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"In my logs. I'm getting a large number of connections from a local IP address (which changes every time I restart my application. Is this anything to worry about or should I just block internal IPs from making requests to my server","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":63,"Q_Id":54152468,"Users Score":0,"Answer":"Turns out it's because Heroku uses an internal routing system meaning that connections appear to the server as if they had originated from an internal, private ip address","Q_Score":0,"Tags":"python,heroku,server,twisted","A_Id":54154957,"CreationDate":"2019-01-11T18:56:00.000","Title":"Heroku - Private IP address making requests to server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am logging to a website with valid credentials, but if my network changes or even device gets changed (within same network); it redirects to Authentication page where I have to put the access code, which I received via email. I want to skip this authentication page and navigate to other pages to continue my process. \nExpected result - Home page of the site\nActual result - Secure Access Code page","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1260,"Q_Id":54180659,"Users Score":1,"Answer":"When you initialise your driver you can configure the browser to load your chrome profile, that is if your using chrome. This may allow you to bypass the authentication page if you have had a previous login with this profile. Not sure if this will work but it worth a shot.","Q_Score":0,"Tags":"python,selenium-webdriver","A_Id":54180917,"CreationDate":"2019-01-14T11:29:00.000","Title":"2 factor authentication handling in selenium webdriver with python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a web-socket client which sends audio binary data in the request and receive them as a response from the web-socket server. I am using pyaudio to read binary audio data from (file\/microphone) which I then sending to the server. Then as the response I receive another binary audio data from the server. The question is can I use my recently opened pyaudio reading stream to play receiving audio in real time or I better need to create another pyaudio stream (Have two streams where one is responsible for binary data reading and another for binary data writing)?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1404,"Q_Id":54183268,"Users Score":1,"Answer":"be aware of the fact, that, as far as I know, reading stream data is kind of generator process. 
Once you read it, you lose the data - In other words: based on your chunk, you basicly by every 'read' move a pointer that grabs your binary data.\nAnswer\nWhy dont you create 2 threads with 2 streams? Dont be afraid of streams. You can initialize as many as you want.\n\n1 thread receives binary data and push them into stream(from client to your sound device)\n2 thread receive data from your input stream (in binary form from your mic) and you push them to your client?\n\nI am working now little bit with PyAudio and streaming is quite interesting but hard to understand from programming point of view. You can actually create 2 output streams into your headphones and somewhere in a hardware on the way to your headphones the streams sum up themselves so you can simply listen two sounds in the same time. \nNOTE:\nAlso I wanted to say that you dont have to be worry about using threads. The streams works in batch, not in realtime. Whether it is read or write, it works in the way, that you have binary data which you push in a stream and you are done. Hardware accepts the binary data, stream them and only after it finishes, then the stream asks for another data. So if you have sample_rate 44100 and chunk 22050(just example), your loop will be just 0.5s. So you dont have to be even worry about overflows, too much data to handle or your threads getting crazy. \nIn fact, the moment you push data to stream, your python waits for your hardware to finish the job. Its very leightweight.","Q_Score":1,"Tags":"python,audio,audio-streaming,pyaudio","A_Id":54183504,"CreationDate":"2019-01-14T14:20:00.000","Title":"Using one pyaudio stream for both data reading and writing","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"After I launch Locust without the web UI:\n$ locust -f locust_files\/my_locust_file.py --no-web -c 1000 -r 100\nis it possible to change the number of users or hatch rate programmatically during the execution?","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":1247,"Q_Id":54202932,"Users Score":1,"Answer":"No.. that is not possible.. Locust requires the number of virtual users and hatch rate to be defined at test startup.","Q_Score":1,"Tags":"python,locust","A_Id":54203228,"CreationDate":"2019-01-15T16:25:00.000","Title":"In Locust, can I modify the number of users and hatch rate after I start the test?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"After I launch Locust without the web UI:\n$ locust -f locust_files\/my_locust_file.py --no-web -c 1000 -r 100\nis it possible to change the number of users or hatch rate programmatically during the execution?","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":1247,"Q_Id":54202932,"Users Score":1,"Answer":"Warning: unsupported method\nStart locust in the usual way and investigate the calls made by the browser to the endpoints exposed by locust.\nE.g. 
the call to update the user count is a simple POST to the \/swarm endpoint with the desired locust count and hatch rate:\ncurl \"http:\/\/localhost:8089\/swarm\" -X POST -H \"Content-Type: application\/x-www-form-urlencoded\" --data \"locust_count=10&hatch_rate=1\"","Q_Score":1,"Tags":"python,locust","A_Id":54782803,"CreationDate":"2019-01-15T16:25:00.000","Title":"In Locust, can I modify the number of users and hatch rate after I start the test?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"After I launch Locust without the web UI:\n$ locust -f locust_files\/my_locust_file.py --no-web -c 1000 -r 100\nis it possible to change the number of users or hatch rate programmatically during the execution?","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":1247,"Q_Id":54202932,"Users Score":0,"Answer":"1) If we want to increase the number of users during test:\nRun the same test in parallel with additional number of users\n2) If we want to decrease the number of users during test:\na) Run the second test with required number of users \nb) At the same time stop the first test\nBoth options can be automated with python or even bash scripts.\nDirty hack, but I think this will result desirable effect completely.","Q_Score":1,"Tags":"python,locust","A_Id":59071475,"CreationDate":"2019-01-15T16:25:00.000","Title":"In Locust, can I modify the number of users and hatch rate after I start the test?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to install some packages on the server which does not access to internet. so I have to take packages and send them to the server. But I do not know how can I install them.","AnswerCount":4,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":17437,"Q_Id":54213323,"Users Score":-2,"Answer":"Download the package from website and extract the tar ball.\nrun python setup.py install","Q_Score":0,"Tags":"python,pip,setup.py,installation-package","A_Id":54213344,"CreationDate":"2019-01-16T08:53:00.000","Title":"Install python packages offline on server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a large text file of URLs (>1 million URLs). The URLs represent product pages across several different domains.\nI'm trying to parse out the SKU and product name from each URL, such as:\n\nwww.amazon.com\/totes-Mens-Mike-Duck-Boot\/dp\/B01HQR3ODE\/\n\n\ntotes-Mens-Mike-Duck-Boot\nB01HQR3ODE\n\nwww.bestbuy.com\/site\/apple-airpods-white\/5577872.p?skuId=5577872\n\n\napple-airpods-white\n5577872\n\n\nI already have the individual regex patterns figured out for parsing out the two components of the URL (product name and SKU) for all of the domains in my list. This is nearly 100 different patterns.\nWhile I've figured out how to test this one URL\/pattern at a time, I'm having trouble figuring out how to architect a script which will read in my entire list, then go through and parse each line based on the relevant regex pattern. 
Any suggestions how to best tackle this?\nIf my input is one column (URL), my desired output is 4 columns (URL, domain, product_name, SKU).","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":176,"Q_Id":54319452,"Users Score":1,"Answer":"While it is possible to roll this all into one massive regex, that might not be the easiest approach. Instead, I would use a two-pass strategy. Make a dict of domain names to the regex pattern that works for that domain. In the first pass, detect the domain for the line using a single regex that works for all URLs. Then use the discovered domain to lookup the appropriate regex in your dict to extract the fields for that domain.","Q_Score":0,"Tags":"python,regex,python-3.x","A_Id":54319613,"CreationDate":"2019-01-23T03:02:00.000","Title":"Parsing list of URLs with regex patterns","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have just started myself with AWS cloud automations and have been using python boto3 for automations. I find boto3 is convenient for me becoz im not good with using AWS CLI commands using inside shell script for automations. My question is for AWS cloud automation, is boto3 superior to AWS CLI commands ? or whats is the advantage that python boto3 i having over AWS CLI commands or vice versa ?","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":1641,"Q_Id":54338549,"Users Score":3,"Answer":"If you can use boto3, then that is the far-superior choice. It gives you much more ability to supplement the AWS API calls with additional logic, such as filtering results with. It is also easier to chain API calls, such as making one call for a list of resources, then making follow-up calls to describe each resources in detail.\nThe AWS CLI is very convenient for one-off commands or simple automation, but things get tricky when using --filter and --query commands.","Q_Score":1,"Tags":"python,amazon-web-services,boto3,aws-cli","A_Id":54340510,"CreationDate":"2019-01-24T02:29:00.000","Title":"Advantage of AWS SDK boto3 over AWS CLI commands","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm a beginner to Python and trying to start my very first project, which revolves around creating a program to automatically fill in pre-defined values in forms on various websites.\nCurrently, I'm struggling to find a way to identify web elements using the text shown on the website. For example, website A's email field shows \"Email:\" while another website might show \"Fill in your email\". In such cases, finding the element using ID or name would not be possible (unless I write a different set of code for each website) as they vary from website to website.\nSo, my question is, is it possible to write a code where it will scan all the fields -> check the text -> then fill in the values based on the texts that are associated with each field?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":81,"Q_Id":54350822,"Users Score":1,"Answer":"It is possible if you know the markup of the page, and you can write code to parse this page. 
In this case you should use xpath, lxml, beautiful soup, selenium etc. You can look at many manuals on google or youtube, just type \"python scraping\"\nBut if you want to write a program able to understand random page on a random site and understand what it should do, it is very difficult, it's a complex task with using machine learning. I guess this task is completely not for beginners.","Q_Score":1,"Tags":"python,python-3.x,selenium-webdriver","A_Id":54351000,"CreationDate":"2019-01-24T16:05:00.000","Title":"Filling forms on different websites using Selenium and Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am new to Selenium. The web interface of our product pops up a EULA agreement which the user has to scroll down and accept before proceeding. This happens ONLY on initial login using that browser for that user. \nI looked at the Selenium API but I am unable to figure out which one to use and how to use it.\nWould much appreciate any suggestions in this regard.\nI have played around with the IDE for Chrome but even over there I don't see anything that I can use for this. I am aware there is an 'if' command but I don't know how to use it to do something like:\nif EULA-pops-up:\n Scroll down and click 'accept'\nproceed with rest of test.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":348,"Q_Id":54354067,"Users Score":0,"Answer":"You may disable the EULA if that is an option for you, I am sure there is a way to do it in registries as well :\nC:\\Program Files (x86)\\Google\\Chrome\\Application there should be a file called master_preferences.\nOpen the file and setting:\nrequire_eula to false","Q_Score":0,"Tags":"python,selenium","A_Id":54354319,"CreationDate":"2019-01-24T19:31:00.000","Title":"How to handle EULA pop-up window that appears only on first login?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My app is using a third-party API. It polls that API regularly at several endpoints. In also makes some additional calls to the API based on user's interaction with the app. The API is very slow, most requests take well over a second. The API is very flaky - timeouts are common, 500 errors are common, session key often randomly expires (even when defined \"keep_alive\" endpoint is called regularly). There is no option to use another API.\nWhat would be the best practices for dealing with such an API?\nHow to disable concurrent requests to this API on the requests level. So if one request is waiting for a response - the second request is not initiated? 
This should be done on \"per-domain\" basis, other requests to other domains should still be done concurrently.\nAny other settings to toggle with requests to make it easier to deal with such an API?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":399,"Q_Id":54365153,"Users Score":2,"Answer":"If your main problem is to serialize calls to that API in a multi-threaded (or multi process) application, a simple way would be to wrap it into a new module and consistenly use locking in that module.\nIf different clients can use a web API concurrently and you need to serialize the requests for performance reasons, you could imagine a dedicated serializing proxy. Just use above method in the proxy.","Q_Score":2,"Tags":"python,api,concurrency,python-requests","A_Id":54365613,"CreationDate":"2019-01-25T12:14:00.000","Title":"Python tips for working with an unstable `API`","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm setting up a webapp with a frontend and a backend that communicates with the frontend soley through RESTful methods. How do I make sure that the backend endpoints are only accessed by my own frontend, and not anyone else? I cannot find much information on this.","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":14150,"Q_Id":54369416,"Users Score":8,"Answer":"Look into CORS. And make sure your server only allows access to specific origins.\n\nOn the backend - check if the X-Requested-With header is present in the request and set to XMLHttpRequest. Without a proper CORS handshake this header will be absent.\n\n\nThat being said, this will only protect your API from being used by other front-end apps or from being accessed directly from a browser address bar - because browsers respect CORS. People can still forge requests programmatically\/CLI and set headers to whatever they want.\nSo this is not actually \"securing\" just a way to prevent abuse & hotlinking","Q_Score":34,"Tags":"python,security,falconframework","A_Id":64536791,"CreationDate":"2019-01-25T16:39:00.000","Title":"How to secure own backend API which serves only my frontend?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a bunch of list of links which I'm doing a specific function on each link, the function takes about 25 sec, I use selenium to open each and get the page source of it then do my function, however whenever I build the program and cancel the build, I will have to start all over again.\nNote:I get links from different webs sitemap.\nIs there a way to save my progress and continue it later on?","AnswerCount":3,"Available Count":1,"Score":-0.0665680765,"is_accepted":false,"ViewCount":797,"Q_Id":54370412,"Users Score":-1,"Answer":"You should save the links in a text file. You should also save the index numbers in another text file, probably initializing with 0.\nIn your code, you can then loop through the links using something like:\nfor link in links[index_number:]\nAt the end of every loop, add the index number to the text file holding the index numbers. 
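A minimal sketch of that checkpointing pattern (the file names and the per-link work are placeholders of mine):

import os

LINKS_FILE = "links.txt"   # one URL per line (placeholder name)
INDEX_FILE = "index.txt"   # last completed position (placeholder name)

def process(link):
    # stand-in for the real ~25-second-per-link Selenium work
    print("processing", link)

with open(LINKS_FILE) as f:
    links = [line.strip() for line in f if line.strip()]

start = 0
if os.path.exists(INDEX_FILE):
    with open(INDEX_FILE) as f:
        start = int(f.read().strip() or 0)

for i, link in enumerate(links[start:], start=start):
    process(link)
    with open(INDEX_FILE, "w") as f:   # checkpoint after every link
        f.write(str(i + 1))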
This would help you continue from where you left off.","Q_Score":0,"Tags":"python,list,selenium","A_Id":54370470,"CreationDate":"2019-01-25T17:46:00.000","Title":"Continuing where I left off Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm working on Twitter Streaming api to get live tweets data.\nI can print that data to console. But what I want is to save the data to a file and that data shouldn't be older than 5 minutes. \nHow do I continuously roll the file that holds the data from last 5 minutes as we can do for log files.\nAt the same time the file should be accessible for reading.\nIs there any way to do that in Python? \nI haven't come across such thing where we can mention such duration for which the file can hold specific data.","AnswerCount":3,"Available Count":1,"Score":-0.0665680765,"is_accepted":false,"ViewCount":56,"Q_Id":54376092,"Users Score":-1,"Answer":"Save the data in a file, with the actual time and check to see if the actual time is different by 5 min. Use time. Or use the sleep function and erase old data each 5 min.","Q_Score":0,"Tags":"python,file,text-files,tweepy,twitter-streaming-api","A_Id":54376116,"CreationDate":"2019-01-26T06:05:00.000","Title":"How do I create a file that doesn't contain information older than 5 minutes?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"is there a way to get the IP of the connected client? (And if possible the port it uses). \nI tried client_socket.getsockname() but it gave me my IP address.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":168,"Q_Id":54378675,"Users Score":0,"Answer":"You have to use the socket.getpeername() method.","Q_Score":1,"Tags":"python,python-2.7,sockets,python-sockets","A_Id":54965585,"CreationDate":"2019-01-26T13:09:00.000","Title":"Get IP of connected client in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can i use paramiko after it give me this error \"paramiko\\kex_ecdh_nist.py:39: CryptographyDeprecationWarning: encode_point has been deprecated on EllipticCurvePublicNumbers and will be removed in a future version. Please use EllipticCurvePublicKey.public_bytes to obtain both compressed and unco\"","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":288,"Q_Id":54385866,"Users Score":0,"Answer":"I had to disable gathering_facts to get past the warning. 
For some reason Ansible was getting stuck gathering facts.\nAfter disabling gathering facts, I still get this warning, but Ansible is able to continue execution.\nyaml file \ngather_facts: no","Q_Score":2,"Tags":"python-3.x","A_Id":56402605,"CreationDate":"2019-01-27T07:09:00.000","Title":"How can i use paramiko after it give me this error encode_point has been deprecated on EllipticCurvePublicNumbers","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have built a small program that listens to a port on my laptop and an other program on my phone that connects to that port on my laptop. Now this all works fine but i was wondering if that same module could be used with external ip adresses. If it doesn't work with external ip's, is there a preinstalled module that can work with external ip's?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":58,"Q_Id":54386753,"Users Score":1,"Answer":"If you forward that port in your network bridge (probably your all-in-one router) then yes! Simply listen on IPv4 address 0.0.0.0 or IPv6 address :: (maybe 0:0:0:0:0:0:0:0%0) to ensure that you're listening on all available IP addresses, instead of just localhost, and you're good to go.","Q_Score":2,"Tags":"python-3.x,ip-address","A_Id":54387231,"CreationDate":"2019-01-27T09:31:00.000","Title":"Can I use the python 'socket' module for listening on my external ip? if no, is there a preinstalled module which can?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The situation:\nI am working on a senior design project that involves calculating the source of a signal by correlating audio from many devices that are all on the same WIFI network. The devices exchange information using REST apis.\nThe architecture is master slave, where the master unit will request audio from all of the slave units. Right now, the slave units need the IP of the master unit. Then they say 'hello' to the master unit who stores their IP, location etc in a list.\nWhat I think I want:\nI would like the slave units to have some way of automatically discover the master unit's IP. I don't think I really care about security. What is the best way to do this?\nIs there an idiomatic way to do this?\nI think I might just not have the correct words to google\nSolutions I have considered:\n1. Assign static IP to all (or just master unit).\n - not ideal because it would only work on one router\n - not slick\n\nMaster unit listens on hard-coded port and minions post to broadcast IP. \n\n\nMay not work on all routers \ndoesn't seem elegant","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":30,"Q_Id":54391482,"Users Score":0,"Answer":"Master unit listens on hard-coded port and minions post to broadcast IP.\n\n\nYes, using a well known port to rendezvous on is the standard way to solve this problem.\nI would turn your approach around a bit. There's more minions than masters, so master should do the broadcasting. A minion might send one (or a handful) of broadcasts upon power-on, to encourage an immediate reply from master. 
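A bare-bones version of that power-on broadcast might look like this (the port number and payloads are invented for the sketch):

import socket

DISCOVERY_PORT = 50000        # made-up well-known port
HELLO = b"minion-hello"

def minion_announce():
    # The minion shouts once at power-on; the master's unicast reply reveals its IP.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.settimeout(3)
        s.sendto(HELLO, ("255.255.255.255", DISCOVERY_PORT))
        try:
            _, master_addr = s.recvfrom(1024)
            return master_addr[0]     # remember as most-recent-master
        except socket.timeout:
            return None               # fall back to waiting for the master's broadcasts

def master_listen():
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind(("", DISCOVERY_PORT))
        while True:
            data, addr = s.recvfrom(1024)
            if data == HELLO:
                s.sendto(b"master-here", addr)   # reply so the minion learns our IP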
But as hours and days go by, the master should be the one primarily responsible for keeping the population in sync.\nA minion should remember IP of most-recent-master, and try unicasting to that upon startup.\nConsider using a packet format along these lines: {magic_number, version, start_time, num_minions, optional_list_of_minions}\nThe list of minions would include {ip_addr, time_of_last_transaction}, and would be empty if the list no longer fits within some limited size UDP packet. Minions could detect master reboot by noticing that start_time changed, and would re-connect soon after reboot. Do take care to randomly jitter your delay timers, so we won't see a thundering herd of minions.","Q_Score":0,"Tags":"python-3.x,networking,raspbian","A_Id":54392530,"CreationDate":"2019-01-27T18:28:00.000","Title":"Automatic device discovery (python or anything I can run on raspbian)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have written a code in python using youtube data API v3 where I am listing playlists created by a user. I have followed the code suggested by youtube API it runs well in local machine but when I deployed the same code on the server, It runs till authentication but after OAuth, it displays Internal server error in the browser instead of the result.\nis there any major changes that I am missing. Please help me with the issue.\nMy code in not running after oauth2callback function runs.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":78,"Q_Id":54418572,"Users Score":1,"Answer":"I solved the error,\nI was deploying without a secure connection and that was raising an error. also redirect URL must be 'https' that was another problem.","Q_Score":0,"Tags":"python,google-oauth,url-redirection,youtube-data-api,google-authentication","A_Id":54624602,"CreationDate":"2019-01-29T10:11:00.000","Title":"Internal server error after I try to host code on server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Scrapy has several points\/places where allowed processing scraped data: spider, items and spider middlewares. But I don't understand where I should do it right. I can process some scraped data in all these places. Could you explain to me differences between them in detail?\nFor example: downloader middleware returns some data to the spider (number, short string, url, a lot of HTML, list and other). And what and where i should do with them? 
I understand what to do, but is not clear where to do it...","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":347,"Q_Id":54421455,"Users Score":0,"Answer":"I will try to explain in order\nSpider is the one where you decide which URLs to make requests to\nDownloadMiddleware has a process_request method which is called before a request to URL is made, and it has process_response method which is called once response from that URL is received\nPipeline is the thing where data is sent when you yield a dictionary from your Spider","Q_Score":2,"Tags":"python,scrapy,scrapy-spider,scrapy-pipeline","A_Id":54435005,"CreationDate":"2019-01-29T12:47:00.000","Title":"In which file\/place should Scrapy process the data?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I get the MessageError: RangeError: Maximum call stack size exceeded when I try to upload a json file and I assume the problem is the size.\nI'm trying to upload a 187Kb json file on Google Colaboratory using the function files.upload(), but it gives me the error \"MessageError: RangeError: Maximum call stack size exceeded.\" When I try to upload manually on the sidebar, it just keeps loading endlessly. Is the another way to upload this file?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":5718,"Q_Id":54430763,"Users Score":0,"Answer":"What I did in response to this error message since I did not want it to count against my Google Drive quota is I uploaded the file in the left pane (labeled \"Files\") of the Google Colab notebook. The drawback to this is that the file will be gone when the notebook is recycled, however it does get the job avoiding both upload and Google Drive caps.","Q_Score":2,"Tags":"python,google-colaboratory","A_Id":60836386,"CreationDate":"2019-01-29T22:46:00.000","Title":"Google Colaboratory - Maximum call stack size exceeded","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have never made an executable application before but from what I have read its pretty easy, using py2exe to generate the exe.\nBut I have a GUI that uses Selenium to scrape data from a backend (No I can not use API calls). How do I add chromedriver to the executable? Also, would all the imports go along when using a compiler?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":399,"Q_Id":54468218,"Users Score":0,"Answer":"When you compile a .py file into an .exe (from my personal experience) all of the imports are included.\nI would personally suggest using pyinstaller. I had quite a few problems using py2exe and as a beginner I found pyinstaller much more user-friendly and easier to troubleshoot.\nAs compiling a file does not alter the .py file, I would suggest getting it to a fully working state and trying it. 
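To make the spider-vs-pipeline split described in the Scrapy answer above (A_Id 54435005) concrete, here is a hedged sketch: the spider only picks URLs and yields raw dictionaries, the item pipeline post-processes them. The spider name, selectors and pipeline class are illustrative (the pipeline still has to be registered under ITEM_PIPELINES in settings.py).

```python
# Illustrative sketch only: spider yields raw items, pipeline cleans/stores them.
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"                                   # hypothetical spider name
    start_urls = ["http://quotes.toscrape.com/"]      # Scrapy tutorial site

    def parse(self, response):
        # The spider's job: decide which URLs to request and extract raw data.
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
            }

class CleanupPipeline:
    """Item pipeline: post-process every dict yielded by the spider.

    Enable it via ITEM_PIPELINES in settings.py.
    """
    def process_item(self, item, spider):
        item["text"] = (item.get("text") or "").strip()
        return item
```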
If it doesn't appear to work or if some of the imports are lost, we can troubleshoot with the error code.","Q_Score":1,"Tags":"python,selenium,selenium-chromedriver,exe,executable","A_Id":54468382,"CreationDate":"2019-01-31T19:48:00.000","Title":"Python Selenium GUI to an executable","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to install the requests library so I can make HTTP requests. I've tried looking for resources online but I can't seem to find tutorials for installing this library on MacOS.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":28458,"Q_Id":54484536,"Users Score":0,"Answer":"You can use pip to install requests library. Try these steps to setup pip and install requests library in macOS:\n\nsudo easy_install pip\npip install requests","Q_Score":1,"Tags":"python,python-3.x,python-requests","A_Id":54484665,"CreationDate":"2019-02-01T17:40:00.000","Title":"Installing requests python library on MacOS","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm getting the following error with my 3 node DAX Cluster:\nFailed to retrieve endpoints\nTraceback (most recent call last):\nMy setup:\n- Private Lambdas\n- Python 3.6\n- amazon-dax-client\n- Config settings - timeout 300ms, max retries 5\n- vpc has 3 subnets\n- The DAX cluster has 3 x r4.large nodes\nThis error happens intermittently - it still \"works\" most of the time but it is alarming given how often it happens.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":193,"Q_Id":54581755,"Users Score":0,"Answer":"The DAX Python client up to and including 1.0.6 is a bit too sensitive when checking the health of cluster nodes and considers them unhealthy when they're not. The errors are benign, but annoying. 
This will be fixed in an upcoming release.","Q_Score":0,"Tags":"python,amazon-dynamodb-dax","A_Id":54582303,"CreationDate":"2019-02-07T20:28:00.000","Title":"Amazon Dax Failed to retrieve endpoints Traceback (most recent call last):","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Was trying to execute python script in jenkins, that make HTTP request with help of 'requests' module, but got stuck with following error:\nImportError: No module named requests\nBuild step 'Custom Python Builder' marked build as failure\nFinished: FAILURE","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4697,"Q_Id":54634554,"Users Score":0,"Answer":"You need to install the dependencies of the script, normally in setup.py or requirements.txt.\nIn the case of requirements.txt run: pip install -r requirements.txt\nIn the case of setup.py run: pip install .\nYou should do this in the job that is running the script.\nIf neither of these files exist pip install requests will include the particular dependency you are missing in your question.","Q_Score":0,"Tags":"python,jenkins,python-requests","A_Id":54634665,"CreationDate":"2019-02-11T16:07:00.000","Title":"Jenkins: ImportError: No module named requests","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to create an iframe inside a web content (no matter where exactly need to be inserted). The only way that I need to do this, is just by making it dynamically and also by using Python3+ version. The iframe's information is not important either. Id, class name and more can be random or custom. It is not necessary to insert specific data. Also, I don't need an answer about the height and the width. I already have solved it.\nWhat do I mean dynamically? I mean that if someone clicks on a button then I need to insert this iframe. If someone clicks on another button then I need to delete it, and so on.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":545,"Q_Id":54670433,"Users Score":0,"Answer":"Sounds like HTML \/ Javascript challenge. Why do you think it is related to python?","Q_Score":2,"Tags":"python,python-3.x,iframe","A_Id":54672807,"CreationDate":"2019-02-13T12:39:00.000","Title":"Q: How can I create dynamically an iframe inside a web content using Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hello stackoverflow people!\nI would like to discuss and see what's the better approach to my problem.\nI have an application that send files to clients using multiple protocols (FTP(S), SFTP, S3, EMail).\nThere's a celery task per directory. A directory can be send to multiple clients and can be send to multiple destinations. e.g. 
dir1 -> client1 -> FTP and EMail (2 tasks, fine to run in parallel), dir2 -> client1 AND client2 -> same FTP hostname, different remote directories (2 tasks, not fine to run in parallel).\nThis is working fine, however I'm causing client network congestion sometimes, due to multiple connections from multiple workers to the same destination, some clients don't know (or want to implement) QOS.\nI would like to have a logic that don't allow tasks connecting to the same protocol or hostname running at the same time. Per example, a directory that is being send to 2 x S3 buckets, should run once, after it finished the second will start. Or two different directories that is being send to the same FTP server.\nMy initial idea is to implement a celery_worker queue remote control. One queue for each account, protocol. And setup workers with concurrency 1 listening on the queues.\nWondering if any of you had a similar challenge and how did you workaround it.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":201,"Q_Id":54688207,"Users Score":1,"Answer":"Your proposed solution is rather brittle (you really shouldn't rely on celery concurrency settings to control\/prevent concurrent execution) and will probably not solve all the potential race conditions (for example if taskA and taskB are on different queues but need to access a resource that doesn't support concurrent access).\nThere are quite a couple recipes (from rather informal advises to full-blown libs like celery-once) to prevent concurrent execution of one given task. They don't directly solve your own problem but basically the principle is the same: have some shared lock mechanism that the tasks communicate with - try to acquire the lock, only run once they get it, and of course release it. If you're using Redis as result backend, it's rather low read\/write cost and it's 'expire' feature can be really helpful, but you can also just use your SQL database.","Q_Score":1,"Tags":"python,python-3.x,celery,celery-task","A_Id":54690143,"CreationDate":"2019-02-14T10:28:00.000","Title":"Celery task \/ worker assignment logic","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Some time ago I ve written a Python script, using poplib library, which retrieves messages from my pop3 email account. Now I would like to use it to retrieve emails from different mail server which works with IMAP. It works well, but only to retrieve messages from Inbox. Is there any way to also get emails from other folders like Spam, Sent etc? I know I could use imaplib and rewrite the script, but my questions is if it's possible to obtain that with poplib.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":554,"Q_Id":54706093,"Users Score":4,"Answer":"No.\nPOP is a single folder protocol. 
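A hedged sketch of the shared-lock idea from the Celery answer above (A_Id 54690143), using Redis `SET NX EX` as the lock so only one delivery per destination runs at a time. The broker URL, key format, expiry and retry delay are assumptions, not part of the original answer:

```python
# Illustrative only: serialize deliveries to the same destination with a Redis lock.
import redis
from celery import Celery

app = Celery("delivery", broker="redis://localhost:6379/0")   # assumed broker URL
r = redis.Redis(host="localhost", port=6379, db=1)            # assumed lock store

@app.task(bind=True, max_retries=None)
def send_directory(self, directory, destination):
    lock_key = f"lock:{destination}"        # e.g. one lock per FTP host / S3 bucket
    # SET ... NX EX: acquire only if free, auto-expire as a safety net.
    got_lock = r.set(lock_key, self.request.id, nx=True, ex=600)
    if not got_lock:
        # Another task is already sending to this destination:
        # retry later instead of running in parallel and congesting the client.
        raise self.retry(countdown=30)
    try:
        upload(directory, destination)      # placeholder for the real transfer
    finally:
        r.delete(lock_key)

def upload(directory, destination):
    """Stand-in for the real FTP/SFTP/S3/email delivery code."""
    pass
```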
It is very simple and was not designed for multiple folders.\nYou will need to use IMAP or other advanced protocols to access additional folders.","Q_Score":2,"Tags":"python,email,imaplib,poplib","A_Id":54719050,"CreationDate":"2019-02-15T09:21:00.000","Title":"Checking folders different than inbox using poplib in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Was asking if I could do a search on the entire eclipse library which includes my closed projects. I did a file search(Ctrl + H) but they only shows results for projects that are open.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":157,"Q_Id":54724905,"Users Score":0,"Answer":"When a project is closed, it can no longer be changed in the Workbench and its resources no longer appear in the Workbench, but they do still reside on the local file system. Closed projects require less memory. Also, since they are not examined during builds, closing a project can improve build time.\nA closed project is visible in the Package Explorer view but its contents cannot be edited using the Eclipse user interface. Also, an open project cannot have dependency on a closed project. The Package Explorer view uses a different icon to represent a closed project.","Q_Score":0,"Tags":"python,eclipse,search,project","A_Id":54724960,"CreationDate":"2019-02-16T15:55:00.000","Title":"How to search through eclipse closed projects?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to run some automatic scripts on a webpage with selenium and python, but the problem I'm having is that the webpage is loaded using document.write().\nI have to find some elements, but they are not shown because when I view the source, it is shown as document.write(A lot of JS) instead of the html.\nHow can I do it so that I can view the HTML source code? I know there is the function driver.execute_script(), but I have to specify the script to run, and I don't think it will work.\nThe page is correctly rendered, only problem is the source cannot be parsed.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":349,"Q_Id":54781641,"Users Score":0,"Answer":"As it turns out after some digging into the code, Selenium does the search in the rendered final view, but the problem was not the document.write(), but the fact that the field I was looking for was in an iframe, which selenium could not find on the default frame. \nAll I had to do was search through the iframes and find the ones that I needed.","Q_Score":0,"Tags":"javascript,python,selenium-webdriver,document.write","A_Id":54800939,"CreationDate":"2019-02-20T08:14:00.000","Title":"Selenium find element with document.write","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a WebApp on Flask+gunicorn+nginx. I need to send 200 requests from my other server at the same time to a WebApp , save the response and its speed. 
Also I need to send Json POST in this 200 requests.\nHow to do it right?\nUse python script or CURL?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":123,"Q_Id":54783844,"Users Score":0,"Answer":"I suggest you to use \"Postman\" for any kind of API testing. It is one the best tools available in the market for API testing , monitoring, documenting, and also sharing the results(as well as test scripts) for free.\nIf you don't want to use any other tool, then i suggest you to use a python script.","Q_Score":0,"Tags":"python,curl,flask,server,gunicorn","A_Id":54784754,"CreationDate":"2019-02-20T10:16:00.000","Title":"How to load server with python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am making APIs.\nI'm using CentOS for web server, and another windows server 2016 for API server.\nI'm trying to make things work between web server and window server.\nMy logic is like following flow.\n1) Fill the data form and click button from web server\n2) Send data to windows server\n3) Python script runs and makes more data\n4) More made data must send back to web server\n5) Web server gets more made datas\n6) BAMM! Datas append on browser!\nI had made python scripts.\nbut I can't decide how to make datas go between two servers..\nShould I use ajax Curl in web server?\nI was planning to send a POST type request by Curl from web server to Windows server.\nBut I don't know how to receipt those datas in windows server.\nPlease help! Thank you in advance.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":213,"Q_Id":54799927,"Users Score":1,"Answer":"First option: (Recommended)\nYou can create the python side as an API endpoint and from the PHP server, you need to call the python API.\nSecond option:\nYou can create the python side just like a normal webpage and whenever you call that page from PHP server you pass the params along with HTTP request, and after receiving data in python you print the data in JSON format.","Q_Score":0,"Tags":"php,python,curl","A_Id":54800059,"CreationDate":"2019-02-21T05:36:00.000","Title":"Run python script by PHP from another server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is there any way to directly send files from one API to another FTP server without downloading them to local in Python 3.\nCurrently we downloading from one API to local and then sending it to FTP server, want to avoid that hop from data flow by directly sending files to server.","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":1136,"Q_Id":54804532,"Users Score":0,"Answer":"One of the options would be having another API function (TransferFile, ...), which will transfer data from API server to FTP site. 
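To make the "use a python script" suggestion from the load-testing answer above (A_Id 54784754) concrete, a hedged sketch that fires 200 JSON POSTs in parallel and records status code and latency; the URL and payload are placeholders:

```python
# Illustrative load-test sketch: 200 parallel JSON POSTs, capture status + latency.
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

import requests

URL = "https://example.com/api/endpoint"        # placeholder target
PAYLOAD = {"ping": "hello"}                     # placeholder JSON body

def one_request(i):
    start = time.perf_counter()
    resp = requests.post(URL, json=PAYLOAD, timeout=30)
    return i, resp.status_code, time.perf_counter() - start

if __name__ == "__main__":
    results = []
    with ThreadPoolExecutor(max_workers=200) as pool:
        futures = [pool.submit(one_request, i) for i in range(200)]
        for fut in as_completed(futures):
            results.append(fut.result())
    for i, status, seconds in sorted(results):
        print(f"request {i:3d}: HTTP {status} in {seconds:.3f}s")
```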
Then you just call that API method from your code without downloading data to the local server.","Q_Score":1,"Tags":"python,api,ftp","A_Id":54804765,"CreationDate":"2019-02-21T10:14:00.000","Title":"Upload file Directly to FTP Server in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any way to directly send files from one API to another FTP server without downloading them to local in Python 3.\nCurrently we downloading from one API to local and then sending it to FTP server, want to avoid that hop from data flow by directly sending files to server.","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":1136,"Q_Id":54804532,"Users Score":0,"Answer":"The FTP protocol has provision for initiating a data transfert between two remote hosts from a third party client. This is called the proxy mode. Unfortunately, most servers disable it for security reasons, because it used to be a very efficient way for DOS attacks.\nIf you have control on both servers and if both use FTP and if they are not publicly exposed, this can be very efficient.\nIn any other use case, the data will have to pass through the client. The best that can be done is to open both connections and transfer data to the target host as soon as it has been received from the source without storing it on disk.","Q_Score":1,"Tags":"python,api,ftp","A_Id":54804881,"CreationDate":"2019-02-21T10:14:00.000","Title":"Upload file Directly to FTP Server in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any way to directly send files from one API to another FTP server without downloading them to local in Python 3.\nCurrently we downloading from one API to local and then sending it to FTP server, want to avoid that hop from data flow by directly sending files to server.","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":1136,"Q_Id":54804532,"Users Score":0,"Answer":"you can use byte data of the file(it will store as in-memory) and pass that to another API.","Q_Score":1,"Tags":"python,api,ftp","A_Id":54804658,"CreationDate":"2019-02-21T10:14:00.000","Title":"Upload file Directly to FTP Server in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For a connection is to be made, a bounded socket should be listening for clients. Client needs to know both ip address and port. For bounding a socket why we need an ip address of the server itself when the program(which listens for clients) itself is running on the server?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":15,"Q_Id":54829284,"Users Score":1,"Answer":"Simply because a server has multiple addresses, at least the loopback one at 127.0.0.1 (IP v4) and one per physical network interfaces. For example a corporate proxy has commonly two interfaces, one on the internal network and one on the public one. Most have a third one for the DMZ. Being member of different networks, those interfaces must have different addresses. 
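A hedged sketch of the "pass the data through without storing it on disk" option from the FTP answers above (Q_Id 54804532): stream the HTTP response body straight into `ftplib.storbinary`. The source URL, FTP host and credentials are placeholders:

```python
# Illustrative: relay a download straight to an FTP server without a local file.
import ftplib

import requests

SOURCE_URL = "https://example.com/api/report.csv"    # placeholder source API
FTP_HOST = "ftp.example.com"                         # placeholder FTP server

def relay(source_url, ftp_host, user, password, remote_name):
    with requests.get(source_url, stream=True) as resp:
        resp.raise_for_status()
        resp.raw.decode_content = True     # let requests undo gzip, if any
        with ftplib.FTP(ftp_host, user, password) as ftp:
            # storbinary reads from any file-like object; resp.raw is one.
            ftp.storbinary(f"STOR {remote_name}", resp.raw)

if __name__ == "__main__":
    relay(SOURCE_URL, FTP_HOST, "user", "secret", "report.csv")
```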
And it make sense to open some services on only one interface.\nBut you also can use the ANY address (0.0.0.0 in IPv4) that means to accept connections on any interface.","Q_Score":0,"Tags":"python,tcp","A_Id":54829438,"CreationDate":"2019-02-22T14:30:00.000","Title":"why socket binding in server needs its ip address >","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to make a discord bot, and I read that I need to have an older version of Python so my code will work. I've tried using \"import discord\" on IDLE but an error message keeps on coming up. How can I use Python 3.6 and keep Python 3.7 on my Windows 10 computer?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":323,"Q_Id":54852821,"Users Score":0,"Answer":"Install in different folder than your old Python 3.6 then update path\nUsing Virtualenv and or Pyenv\nUsing Docker\n\nHope it help!","Q_Score":2,"Tags":"python,discord","A_Id":54853052,"CreationDate":"2019-02-24T14:21:00.000","Title":"how can I use python 3.6 if I have python 3.7?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to make a discord bot, and I read that I need to have an older version of Python so my code will work. I've tried using \"import discord\" on IDLE but an error message keeps on coming up. How can I use Python 3.6 and keep Python 3.7 on my Windows 10 computer?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":323,"Q_Id":54852821,"Users Score":0,"Answer":"Just install it in different folder (e.g. if current one is in C:\\Users\\noob\\AppData\\Local\\Programs\\Python\\Python37, install 3.6. to C:\\Users\\noob\\AppData\\Local\\Programs\\Python\\Python36).\nNow, when you'll want to run a script, right click the file and under \"edit with IDLE\" will be multiple versions to choose. Works on my machine :)","Q_Score":2,"Tags":"python,discord","A_Id":54853010,"CreationDate":"2019-02-24T14:21:00.000","Title":"how can I use python 3.6 if I have python 3.7?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a folder containing multiple images, i want to write these images as a video file on Google drive. Is there any way to achieve this?\nI cannot write the images to video file in local system and then upload to Google Drive, because of space constraint.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":39,"Q_Id":54863190,"Users Score":1,"Answer":"The google drive api is a file store api. It contains information about the files it contains and it will allow you to upload new files. To some extent it can covert one file type to another. 
For example it can covert an excel file to a google sheets file and back and forth.\nThe google drive api does not have the ablity to allow you to upload two images and have them coverted into a video.\nYou will need to encode the video locally on your machine and then upload the video after.","Q_Score":1,"Tags":"python-3.x,google-drive-api","A_Id":54864817,"CreationDate":"2019-02-25T09:36:00.000","Title":"Write a set of images to a Video file on Google Drive","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"While running pip install InstagramAPI im getting the following error.\n\n\"networkx 2.1 has requirement decorator>=4.1.0, but you'll have\n decorator 4.0.11 which is incompatible\"\n\nSo I uninstalled decorator and reinstalled it using pip install decorator==4.1.0. I confirmed with pip list the version of decorator. I then tried to pip install InstagramApi I got the same error \n\n\"networkx 2.1 has requirement decorator>=4.1.0, but you'll have\n decorator 4.0.11 which is incompatible.\"\n\nand my decorator module was regressed to version 4.0.11!!!\nSomeone please explain whats going on here. Thank you","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":94,"Q_Id":54875301,"Users Score":0,"Answer":"Why don't you use a virtual python environment (the python package) It should take care of these Dependency problems.","Q_Score":0,"Tags":"python,instagram-api","A_Id":54875371,"CreationDate":"2019-02-25T21:57:00.000","Title":"InstagramAPi install error with decorator version","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am running Eclipse Che v6.18.1 in Google Chrome on MacBook Pro OS v10.10.5 (Yosemite). Eclipse Che workspace runs in a Docker container.\nHow can I open a new browser tab (in the same browser window as Eclipse Che) from within a Python code executed in Eclipse Che? \nSo not a new Google Chrome instance from within a Docker container (much too slow) but a new tab in already existing browser window on the host machine.\nIn Eclipse Che it is possible to preview an HTML file in the project Workspace (right-click => Preview). Then the HTML file opens in the next tab to the Eclipse Che IDE. How could I use that feature from within a Python code to open a new browser tab?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":569,"Q_Id":54882713,"Users Score":0,"Answer":"Are you trying to open a preview window similar to the sample nodejs Yeoman application? Or are you trying to open a new tab from the source code in someone's browser?\nIf it's the latter, then I do not think it is possible (or a good idea!) 
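To illustrate the "encode the video locally, then upload it" conclusion of the Drive answer above (A_Id 54864817), a hedged sketch of the upload step using google-api-python-client; it assumes you already hold valid OAuth2 credentials and an encoded video.mp4:

```python
# Illustrative: upload an already-encoded video to Google Drive (resumable upload).
# Assumes `creds` are valid OAuth2 credentials obtained elsewhere.
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

def upload_video(creds, path="video.mp4"):
    service = build("drive", "v3", credentials=creds)
    media = MediaFileUpload(path, mimetype="video/mp4", resumable=True)
    metadata = {"name": "images-as-video.mp4"}      # placeholder Drive file name
    created = service.files().create(
        body=metadata, media_body=media, fields="id"
    ).execute()
    return created["id"]
```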
- Che does not run in a security context that will allow it to instruct the browser to open a new tab or window.","Q_Score":1,"Tags":"python,eclipse-che,codenvy","A_Id":55222067,"CreationDate":"2019-02-26T09:52:00.000","Title":"How can I open a new browser tab from Python in Eclipse Che?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am able to parse from file using this method: \nfor event, elem in ET.iterparse(file_path, events=(\"start\", \"end\")):\nBut, how can I do the same with fromstring function? Instead of from file, xml content is stored in a variable now. But, I still want to have the events as before.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":319,"Q_Id":54891949,"Users Score":0,"Answer":"From the documentation for the iterparse method:\n\n...Parses an XML section into an element tree incrementally, and\n reports what\u2019s going on to the user. source is a filename or file\n object containing XML data...\n\nI've never used the etree python module, but \"or file object\" says to me that this method accepts an open file-like object as well as a file name. It's an easy thing to construct a file-like object around a string to pass as input to a method like this.\nTake a look at the StringIO module.","Q_Score":2,"Tags":"python","A_Id":54892048,"CreationDate":"2019-02-26T18:30:00.000","Title":"Elementree Fromstring and iterparse in Python 3.x","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am automating our mobile app on iOS and Android. When I get the search results(list of elements), I want to iterate through those all elements matching my xpath.\nThe problem is that - Appium returns only those elements which are visible in the viewport, which makes sense. However, I would like to get all elements matching my xpath\/locator strategy, although the elements are not in viewport. To get further set of elements, I have to scroll to those elements and get them into viewport.\nIs there any configuration provided by appium, to enable this feature? Or will I have to continue scrolling to those elements before accessing?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":57,"Q_Id":54912357,"Users Score":0,"Answer":"You need to handle scrolling on your own.\nOn Android, Appium can make a snapshot of what is currently in the viewport.\nYou can get a list of elements and iterate them, then scroll by screen hight and get another list of elements, iterate them. Repeat it until the new list is empty - make sure you don't get same elements twice.\nOn iOS, it's more tricky: the driver will return you elements including the ones not in the viewport. 
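A small sketch of the file-like-object approach suggested in the ElementTree answer above (A_Id 54892048): wrap the XML string in `io.StringIO` (the Python 3 home of StringIO) and hand it to `iterparse` unchanged.

```python
# Illustrative: iterparse over XML held in a variable instead of a file.
import io
import xml.etree.ElementTree as ET

xml_data = "<root><item id='1'>a</item><item id='2'>b</item></root>"

for event, elem in ET.iterparse(io.StringIO(xml_data), events=("start", "end")):
    print(event, elem.tag, elem.attrib)
    if event == "end":
        elem.clear()     # free memory as you go, same as with a real file
```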
For reliable interaction, I suggest scrolling to each element.","Q_Score":0,"Tags":"python-2.7,appium","A_Id":54922735,"CreationDate":"2019-02-27T18:36:00.000","Title":"How to get all items matching xpath, which are not in viewport?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have 10 different IP cameras that I need to access in a FLASK server.\nI would like to know the best way to do that.\nCurrently, I have a dictionary that uses an ID to map to a VideoCapture object. A client sends the cam ID to the server and the server accesses the video capture object and returns a captured frame via the read function.\nSo technically I have 10 different objects of VideoCapture. Another method that I have used is, that upon getting camera ID, if the current cam ID is different from the received cam ID, then the video cap object is replaced with a new one.\nMy question is that is opening 10 video captures at one time fine? My server should be time sensitive. Does opening 10 captures congest the network? If yes then should I stick to the one object approach that always creates a new object on ID change or is there any better way to do this? Thank you.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1328,"Q_Id":54919482,"Users Score":1,"Answer":"The first way you used is OK. For every camera, you should keep one capture object. It will work faster than replacing one capture object with multiple connections. If you open RTSP connection, then it will not congest the network until you start reading frames. So you can go with the first way.\nOpening and then releasing one capture object for multiple connections will slow down the speed because in every new connection it needs time to access the camera.","Q_Score":1,"Tags":"python,opencv","A_Id":54919941,"CreationDate":"2019-02-28T06:13:00.000","Title":"OpenCV Capturing multiple RTSP Streams - Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My Spyne server WSDL shows \n\nI want it to show https instead of http.\nBasically, \nNotice the difference lies in http and https.\nHow do I tell my Spyne server to address this?\nHave gone through the docs multiple times but could not figure it out.\nThanks!","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":274,"Q_Id":54920093,"Users Score":-1,"Answer":"WSDL url is constructed from the first request.\nUpon starting the server, request the wsdl from the secure domain and it should work.","Q_Score":0,"Tags":"python,soap,https,spyne","A_Id":54932502,"CreationDate":"2019-02-28T07:00:00.000","Title":"Spyne SOAP server WSDL file cannot show https","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have to automate a task which involves lots of google searching, which I am doing through selenium and python. After 20 searches google says suspicious activity detected and gives a reCaptcha to prove I am not a robot. 
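A hedged sketch of the "one long-lived VideoCapture per camera" layout preferred in the OpenCV answer above (A_Id 54919941); the camera IDs and RTSP URLs are placeholders and the Flask wiring is omitted:

```python
# Illustrative: keep one capture object per camera and read frames on demand.
import cv2

RTSP_URLS = {                                  # placeholder camera map
    "cam1": "rtsp://192.168.1.10/stream1",
    "cam2": "rtsp://192.168.1.11/stream1",
}

captures = {cam_id: cv2.VideoCapture(url) for cam_id, url in RTSP_URLS.items()}

def grab_frame(cam_id):
    """Return the latest frame for a camera, or None if the read failed."""
    ok, frame = captures[cam_id].read()
    return frame if ok else None
```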
I have tried other ways (like changing profile) but still the same problem.\nHow to get rid of it?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1817,"Q_Id":54940720,"Users Score":1,"Answer":"I solved this by rotating a decent pool of proxies with an inner load balancer, switching user agent and use captcha solving APIs where appropriate. Having a good amount of clean IP addresses and using them wisely has the biggest impact so far.","Q_Score":1,"Tags":"python,selenium,google-chrome,selenium-chromedriver,recaptcha","A_Id":55326670,"CreationDate":"2019-03-01T08:32:00.000","Title":"Google search using selenium causing suspicious network traffic and shows reCaptcha","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a set of xml files representing rules manager rules. There are several elements that map to tables or table-attribute pairs.\nI would like to discover all the elements that affect intersecting sets of tables or table-attributes.\nI can work in java or possibly python, or command line Windows\/Mac. Can someone suggests a tool or approach? Thanks so much!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":28,"Q_Id":54945282,"Users Score":0,"Answer":"Experiment with xmlStarlet\nTry Saxon for cross file xpath queries\nBoth can work from the command line","Q_Score":0,"Tags":"java,python","A_Id":55133607,"CreationDate":"2019-03-01T13:04:00.000","Title":"Generate relationship data","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to download files from a website using Python 3.\nDirect parsing of the URL doesn't work because the URL forwards to the login page everytime, where you need to login using the Google Login button, which forwards to Google.\nIs there a way to sign in and download the files using Python script? Maybe by implementing cookies in some way?\nThanks in advance.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":192,"Q_Id":54974983,"Users Score":0,"Answer":"You can use selenium, which can automatically fill the login form for you.","Q_Score":0,"Tags":"python,python-3.x,web-scraping,download,google-signin","A_Id":54975349,"CreationDate":"2019-03-03T23:48:00.000","Title":"Python - Download files from Website with Google Login","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm starting to learn telegram bot. My plan was to have a bot that daily sends you at a specific time a message, but also I wanted the option to manually poll the bot for getting that daily message, or a random one. Right now I have a bot running on pythonanywhere that can respond to the 2 commands, but what about sending the user the daily message at some time? \nWhat's the way to go, create a channel and then schedule a task on my webhook to daily send the message to the channel, or store all chat-id in my service and talk to them everytime? 
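A hedged sketch of the proxy and user-agent rotation described in the reCaptcha answer above (A_Id 55326670), simplified to a round-robin cycle (no inner load balancer, captcha-solving APIs left out); the proxy addresses and user-agent strings are placeholders:

```python
# Illustrative: round-robin proxies and randomized user agents for outgoing requests.
import itertools
import random

import requests

PROXIES = itertools.cycle([                 # placeholder proxy pool
    "http://10.0.0.1:3128",
    "http://10.0.0.2:3128",
])
USER_AGENTS = [                             # placeholder UA strings
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (X11; Linux x86_64)",
]

def fetch(url):
    proxy = next(PROXIES)
    headers = {"User-Agent": random.choice(USER_AGENTS)}
    return requests.get(url, headers=headers,
                        proxies={"http": proxy, "https": proxy},
                        timeout=30)
```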
The first one seems obviously better but I was wondering if there's some trick to make everything works in the bot.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":131,"Q_Id":54981185,"Users Score":0,"Answer":"For now I managed to create a channel, make the bot an admin and scheduled a task on the webhook to daily send a message as admin to the channel","Q_Score":0,"Tags":"python,telegram,telegram-bot","A_Id":55000584,"CreationDate":"2019-03-04T10:22:00.000","Title":"Telegram: Store chat id or Channel with bot inside?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to count words which their visibility are visible in browser.\nI'm using Scrapy to get link and parse theme with Selector.\nProblem is I can only count all texts in spite of their visibility (hidden, in menu, in blockquote...) and the searching sites is a list of url (not the same structure)\nDo you have any suggestion?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":54,"Q_Id":54994471,"Users Score":0,"Answer":"Scrapy just give you the page source(ctrl+u) to get the rendered page you have to use the Selenium or splash my splash is little speed compare to the Selenium but Selenium give you full control","Q_Score":0,"Tags":"python,web-scraping,scrapy","A_Id":55007667,"CreationDate":"2019-03-05T02:15:00.000","Title":"How to imitate the browser to find & count text","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to send a text file from one serial device(slcan0) to another serial device(slcan1) can this operation be performed in SocketCAN? The serial CAN device I am using is CANtact toolkit. Or can the same operation be done in Python-can?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":485,"Q_Id":55017807,"Users Score":0,"Answer":"When you want to send a text file over the CAN bus, you have to decide which CAN-ID you want so sent for sending and receiving.\nMost likely your text file is larger than 8 bytes, so you would have to use a higher level protocol on CAN.\nISO-TP will allow 4095 of data in one message.\nIf this is still not enough, you would have to invent another protocol for sending and receiving the data. E.g. 
first send the length of data, then send the data in chunks of 4095 bytes.\nOnce you have figured this out, it does not really matter whether you use SocketCAN, Python-CAN, pyvit or anything else.","Q_Score":0,"Tags":"python,socketcan,python-can","A_Id":55027926,"CreationDate":"2019-03-06T07:34:00.000","Title":"How to send and receive a file in SocketCAN or Python-can?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The context here is simple, there's a lambda (lambda1) that creates a file asynchronously and then uploads it to S3.\nThen, another lambda (lambda2) receives the soon-to-exist file name and needs to keep checking S3 until the file exists.\nI don't think S3 triggers will work because lambda2 is invoked by a client request\n1) Do I get charged for this kind of request between lambda and S3? I will be polling it until the object exists\n2) What other way could I achieve this that doesn't incur charges?\n3) What method do I use to check if a file exists in S3? (just try to get it and check status code?)","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":783,"Q_Id":55031936,"Users Score":0,"Answer":"Let me make sure I understand correctly. \n\nClient calls Lambda1. Lambda1 creates a file async and uploads to S3\nthe call to lambda one returns as soon as lambda1 has started it's async processing. \nClient calls lambda2, to pull the file from s3 that lambda1 is going to push there. \n\nWhy not just wait for Lambda one to create the file and return it to client? Otherwise this is going to be an expensive file exchange.","Q_Score":0,"Tags":"python-3.x,amazon-web-services,amazon-s3,aws-lambda,boto3","A_Id":55032258,"CreationDate":"2019-03-06T20:52:00.000","Title":"Long polling AWS S3 to check if item exists?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am tuning hyperparameters for a neural network via gridsearch on google colab. I got a \"transport endpoint is not connected\" error after my code executed for 3 4 hours. I found out that this is because google colab doesn't want people to use the platform for a long time period(not quite sure though).\nHowever, funnily, after the exception was thrown when I reopened the browser, the cell was still running. 
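A hedged sketch of the roll-your-own framing suggested in the CAN answer above (A_Id 55027926), using plain 8-byte classic-CAN frames rather than ISO-TP: announce the total length first, then push the payload in chunks with python-can. The channel name and arbitration IDs are assumptions:

```python
# Illustrative sender: one length frame first, then the file in 8-byte chunks.
import struct

import can

LEN_ID = 0x100      # assumed arbitration ID for the "length" frame
DATA_ID = 0x101     # assumed arbitration ID for payload frames

def send_file(path, channel="slcan0"):
    with open(path, "rb") as f:
        payload = f.read()
    bus = can.interface.Bus(channel=channel, bustype="socketcan")
    try:
        # Announce total length as a 4-byte unsigned int.
        bus.send(can.Message(arbitration_id=LEN_ID,
                             data=struct.pack(">I", len(payload)),
                             is_extended_id=False))
        # Then stream the payload 8 bytes at a time.
        for i in range(0, len(payload), 8):
            bus.send(can.Message(arbitration_id=DATA_ID,
                                 data=payload[i:i + 8],
                                 is_extended_id=False))
    finally:
        bus.shutdown()
```

The receiver would mirror this: read the length frame, then accumulate payload frames until that many bytes have arrived.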
I am not sure what happens to the process once this exception is thrown.\nThank you","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4543,"Q_Id":55034469,"Users Score":0,"Answer":"In google colab, you can use their GPU service for upto 12 hours, after that it will halt your execution, if you ran it for 3-4 hours, it will just stop displaying Data continuously on your browser window(if left Idle), and refreshing the window will restore that connection.\nIn case you ran it for 34 hours, then it will definitely be terminated(Hyphens matter), this is apparently done to discourage people from mining cryptocurrency on their platform, in case you have to run your training for more than 12 hours, all you need to do is enable checkpoints on your google drive, and then you can restart the training, once a session is terminated, if you are good enough with the requests library in python, you can automate it.","Q_Score":0,"Tags":"python,neural-network,jupyter-notebook,jupyter,google-colaboratory","A_Id":55154500,"CreationDate":"2019-03-07T00:56:00.000","Title":"Google colab Transport endpoint is not connected","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"on Twilio's tutorial it sets action parameter to \/handleDialCallStatus but I have no clue what happens when it redirects to the url. How can I handle the status of calls .How can I redirect to another url when the call has completed","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":620,"Q_Id":55048107,"Users Score":1,"Answer":"Twilio evangelist here.\nWhen the call ends, the action URL tells Twilio where to send a GET or POST request. A DialCallStatus is passed to the action URL according to one of the following scenarios:\n\nNobody picks up, DialCallStatus=no-answer\nThe line is busy, DialCallStatus=busy\nWhen calling a conference and the call is connected, DialCallStatus=answered\nSomeone answered the call and was connected to the caller, DialCallStatus=connected\nAn invalid phone number was provided, DialCallStatus=failed\nCall canceled via the REST API before it was answered, DialCallStatus=canceled\n\nHow do you handle these scenarios? In the action attribute URL of the Dial verb.\n\nThe web app hosted at this action URL can then look at the DialCallStatus and send a response to Twilio telling it what to do next.\nYou can replace your_url with another URL (absolute or relative) to redirect there, and Twilio will continue the initial call after the dialed party hangs up. No TwiML verbs included after that will be reachable, so if you want to take more actions on that initial call, you need to respond to Twilio's request with TwiML instructions on how to handle the call.\nAny TwiML verbs included after this will be unreachable, as your response to Twilio takes full control of the initial call. 
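A hedged Flask sketch of the action-URL handler this Twilio answer describes (A_Id 55072306): read `DialCallStatus` from the callback and reply with fresh TwiML, which takes over the rest of the initial call. The route name and spoken messages are made up:

```python
# Illustrative /handleDialCallStatus endpoint: branch on DialCallStatus.
from flask import Flask, request
from twilio.twiml.voice_response import VoiceResponse

app = Flask(__name__)

@app.route("/handleDialCallStatus", methods=["GET", "POST"])
def handle_dial_call_status():
    status = request.values.get("DialCallStatus")
    response = VoiceResponse()
    if status in ("busy", "no-answer", "failed"):
        response.say("We could not reach anyone. Please try again later.")
    else:
        response.say("Thank you for calling. Goodbye.")
        response.hangup()
    # Whatever TwiML is returned here controls the remainder of the initial call.
    return str(response), 200, {"Content-Type": "text/xml"}
```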
If you want to take more actions on that initial call, you must respond to Twilio's request with TwiML instructions on how to handle the call.\nHope this helps.","Q_Score":1,"Tags":"python,twilio,twilio-api","A_Id":55072306,"CreationDate":"2019-03-07T16:03:00.000","Title":"How to handle dial call status with twilio-python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Pycharm\nSSH->Remote docker\nWe are using a remote interpreter for python on Pycharm which works great on an SSH connection. We are in a phase to convert our main work on docker container. It is important for us to keep to the development process on the remote servers and not on the local computer. But it is also important to be able to do it over docker container, and not just ssh as this saves a lot of time and effort when starting a new development server.\nRemote docker -> Securely remote docker\nWhat we are seeking is a way to be able to make a remote docker connection within Pycharm securely. It seems that when generating the Tls certificate, we need to bind it to the host IP's. This IP\/host bounding prevents us to quickly start new servers for development as this force to generate a certificate per IP.\nMy question, is it possible to make a secure connection for docker engine from Pycharm without bound the remote docker to its host IP?\nEdit:\nPossible option so far\nWild card certificate:\nAs Jan Garaj suggestions, use a wild card certificate. Then connect each new server to a new subdomain. The wild card will be the same for each of them.\nPros: This suppose to do the trick\nCons: It requires to set up a new subdomain for each server\nSSH tunnle\nSet the docker socket to allow connection from localhost. Then each developer can set up an ssh tunnel from his computer to the remote machine. In Pycharm setup python interpeter to docker socket via localhost with the same port as the tunnel.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":376,"Q_Id":55049331,"Users Score":1,"Answer":"Use (= buy or create\/install\/use own CA + generate) wildchar TLS certificate, which will cover all current servers and also any new servers.","Q_Score":3,"Tags":"python,python-3.x,docker,pycharm","A_Id":55050534,"CreationDate":"2019-03-07T17:07:00.000","Title":"How to setup a secure connection on remote docker on remote server with pycharm","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm writing a scraper for the dark web. One step involves creating a torrc file in \/etc\/tor\/, which requires root access. To do this, I just run the python file with sudo (i.e. 'sudo python filename.py').\nHowever, I encountered an error with selenium:\nRunning Firefox as root in a regular user's session is not supported\nI googled the error and found solutions on how to bypass that. I would rather not run it as root if possible. \nHow can I run the first part of the code as root, but not the second part?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":106,"Q_Id":55108034,"Users Score":0,"Answer":"UNIX\/Linux does not blithely turn root privilege on and off quite so easily. 
You need to isolate the root portions in a separate script and run only those parts under root privilege. This is also basic system security: grant only the needed privilege for any function.","Q_Score":0,"Tags":"python,linux,selenium,sudo","A_Id":55108243,"CreationDate":"2019-03-11T18:17:00.000","Title":"How do I control what parts of a python script run as root","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a scraper for the dark web. One step involves creating a torrc file in \/etc\/tor\/, which requires root access. To do this, I just run the python file with sudo (i.e. 'sudo python filename.py').\nHowever, I encountered an error with selenium:\nRunning Firefox as root in a regular user's session is not supported\nI googled the error and found solutions on how to bypass that. I would rather not run it as root if possible. \nHow can I run the first part of the code as root, but not the second part?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":106,"Q_Id":55108034,"Users Score":0,"Answer":"I\u2019m not familiar with Tor, but you can try to relax the rights on the tor file.\nRunning something similar to chmod +x \/etc\/tor\/your_file that will allow Firefox to use the file even if not run as a privileged user.","Q_Score":0,"Tags":"python,linux,selenium,sudo","A_Id":55108245,"CreationDate":"2019-03-11T18:17:00.000","Title":"How do I control what parts of a python script run as root","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a requirement in which I need to automate the start and stop of the AWS EC2 Instances (within Autoscaling group) daily. This is mainly to prevent cost. I have built a Python script to start and stop EC2 instances but it's not working properly as EC2 instances are within an Autoscaling group.\nDoes anybody know any solution for this?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":409,"Q_Id":55130327,"Users Score":0,"Answer":"What you need to do is automate the auto scaling parameters, for desired instances, min instances and max instances. Ideally, you want to change the desired instance amount. 
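A hedged sketch of the "change the desired instance amount" approach from the Auto Scaling answer above (A_Id 55130351), using boto3; the group name and daytime capacity are assumptions, and the daily trigger (e.g. a scheduled Lambda or cron job) is out of scope here:

```python
# Illustrative: stop/start by resizing the Auto Scaling group, not the instances.
import boto3

autoscaling = boto3.client("autoscaling")
GROUP = "my-asg"                      # placeholder Auto Scaling group name

def scale_to(count):
    # Min/Max must allow the desired value, so adjust them together.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName=GROUP,
        MinSize=count,
        MaxSize=max(count, 1),
        DesiredCapacity=count,
    )

def stop_for_the_night():
    scale_to(0)       # the ASG terminates the excess (i.e. all) instances

def start_for_the_day():
    scale_to(2)       # assumed daytime capacity
```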
This will cause the auto scaler to terminate excessive instances, to meet the desired instance amount.","Q_Score":0,"Tags":"python,amazon-web-services,autoscaling","A_Id":55130351,"CreationDate":"2019-03-12T20:41:00.000","Title":"Start and stop AWS EC2 instance","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've been looking into the hyperledger indy framework and I wanted to start to build an app to get started but I noticed that there's the sdk that uses Libindy but there's also the Libvcx that is on top of Libindy but I don't know which one to use since they both seem to do the same.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":918,"Q_Id":55133748,"Users Score":1,"Answer":"The indy-sdk repository is the Indy software that enables building components (called agents) that can interact with an Indy ledger and with each other.\nIn 2019, at a \"Connect-a-thon\" in Utah, USA, developers from a variety of organizations gathered to demonstrate interoperability across a set of independently developed agent implementations. At that time, a further idea developed that led to the creation of Hyperledger Aries. What if we had agents that could use DIDs and verifiable credentials from multiple ecosystems? Aries is a toolkit designed for initiatives and solutions focused on creating, transmitting, storing and using verifiable digital credentials. At its core are protocols enabling connectivity between agents using secure messaging to exchange information.\nLibvcx is a c-callable library built on top of libindy that provides a high-level credential exchange protocol. It simplifies creation of agent applications and provides better agent-2-agent interoperability for Hyperledger Indy infrastructure.\nYou need LibVCX if you want to be interoperably exchange credentials with other apps and agents, in others words if you want to be comply with Aries protocol.\nIn this case LibVCX Agency can be used with mediator agency which enables asynchronous communication between 2 parties.","Q_Score":4,"Tags":"python,hyperledger-indy","A_Id":63579718,"CreationDate":"2019-03-13T03:08:00.000","Title":"What's the difference between hyperledger indy-sdk and Libvcx?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been looking into the hyperledger indy framework and I wanted to start to build an app to get started but I noticed that there's the sdk that uses Libindy but there's also the Libvcx that is on top of Libindy but I don't know which one to use since they both seem to do the same.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":918,"Q_Id":55133748,"Users Score":7,"Answer":"As you've said, LibVCX is built on top of LibIndy. \nLibIndy\nProvides low level API to work with credentials and proofs. It provides operations to create create credential requests, credentials, proofs. It also exposes operations for communication with Hyperldger Indy ledger. \nWhat Libindy doesn't handle is the credential exchange. 
If you write backend which issues credential and a mobile app which can request and receive credentials using Libindy, you'll have to come up with some communication protocol to do so. Is it gonna be HTTP? ZMQ? How are you going to format messages? This is what LibVCX does for you. You will also have to come up with solution how will you securely deliver messages and credentials from server to client when the client is offline.\nLibVCX\nLibVCX is one of several implementations of Hyperledger Aries specification. LibVCX is built on top of LibIndy and provides consumer with OOP-style API to manage connections, credentials, proofs, etc. It's written in Rust and has API Wrappers available for Python, Javascript, Java, iOS.\nLibVCX was designed with asynchronicity in mind. LibVCX assumes existence of so called \"Agency\" between the 2 parties communicating - a proxy which implements certain Indy communication protocol, receives and forwards messages. Therefore your backend server can now issue and send a credential to someone whom it has talked to days ago. The credential will be securely stored in the agency and the receiver can check whether there's any new messages\/credentials addressed for him at the agency.\nYou can think of agency as a sort of mail server. The message is stored there and the client can pull its messages\/credentials and decrypt them locally.\nWhat to use?\nIf you want to leverage tech in IndySDK perhaps for a specific use case and don't care about Aries, you can use vanilla libindy.\nIf you want to be interoperably exchange credentials with other apps and agents, you should comply with Aries protocol. LibVCX is one of the ways to achieve that.","Q_Score":4,"Tags":"python,hyperledger-indy","A_Id":55600304,"CreationDate":"2019-03-13T03:08:00.000","Title":"What's the difference between hyperledger indy-sdk and Libvcx?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using the Scrapy (Scrapy==1.6.0) library with Python3. I am wondering, where in the code does Scrapy actually do the HTML request? I want to set a breakpoint there so I can see exactly what headers \/ cookies \/ urls \/ and user agent is actually being passed.\nAlso, where exactly is the response received as well? Right now my spider is failing to find any pages, so I imagine I'm getting either a blank HTML document or a 403 error, however I have no idea where to look to confirm this.\nCan anyone familiar with the scrapy library point me to exactly where in code I can check these parameters?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":41,"Q_Id":55148001,"Users Score":0,"Answer":"I believe you can check out scrapy\/core\/engine.py method _download.\nThough I'd suggest you make use of scrapy shell. It will let you execute particular request, inspect response, open response in browser to see what was received by Scrapy. 
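To make the scrapy shell suggestion above (A_Id 55148667) concrete, a hedged example session; the URL and header tweak are placeholders. The lines below are typed inside the shell, which is a normal Python prompt with Scrapy objects preloaded:

```python
# Start with: scrapy shell "https://example.com/page"
request.headers            # the exact headers Scrapy sent for this request
response.status            # e.g. 200, or 403 if you are being blocked
view(response)             # open the HTML Scrapy actually received in a browser
fetch(request.replace(headers={"User-Agent": "Mozilla/5.0"}))   # retry with tweaks
response.css("title::text").get()
```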
Also with a bit more of tuning you can import your spider in your shell and call a particular method of your spider and put a breakpoint there.\nIf your spider fails to find any pages then the problem is likely to be with your spider, not the framework.","Q_Score":0,"Tags":"python,web-scraping,scrapy","A_Id":55148667,"CreationDate":"2019-03-13T17:31:00.000","Title":"Where does Scrapy actually do the html request?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm using Selenium to locate elements in a page. Is there any way to combine two methods together?\nExample:\nMethod 1: driver.find_elements_by_partial_link_text('html')\nMethod 2: driver.find_elements_by_class_name('iUh30')\nI will ideally like a method that finds elements that has both the partial link text and class name specified.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":469,"Q_Id":55159886,"Users Score":1,"Answer":"You can use xpath to combine both selectors:\ndriver.find_elements_by_xpath(\"\/\/*[@class='iUh30'][text()[contains(.,'html')]]\")\nThe \/\/* looks for any element with any tag. Might be , might be
, , anything. You can just change it to the desired tag.\nThe above find the element by exact class name. You can also use [contains(@class, 'partial_class')] to find elements by partial class.\nThe [text()[contains(.,'html')]] looks for elements which partial text is \"html\"","Q_Score":0,"Tags":"python-3.x,selenium-webdriver","A_Id":55160119,"CreationDate":"2019-03-14T10:14:00.000","Title":"Selenium locating elements by more than one method","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am new at this part of web developing and was trying to figure out a way of creating a web app with the basic specifications as the example bellow:\n\nA user1 opens a page with a textbox (something where he can add text or so), and it will be modified as it decides to do it.\n\nIf the user1 has problems he can invite other user2 to help with the typing.\n\n\nThe user2 (when logged to the Channel\/Socket) will be able to modify that field and the modifications made will be show to the user1 in real time and vice versa.\n\n\nOr another example is a room on CodeAcademy:\n\nImagine that I am learning a new coding language, however, at middle of it I jeopardize it and had to ask for help.\n\nSo I go forward and ask help to another user. This user access the page through a WebSocket (or something related to that).\n\n\nThe user helps me changing my code and adding some comments at it in real time, and I also will be able to ask questions through it (real time communication)\n\n\nMy questions is: will I be able to developed certain app using Django Channels 2 and multiplexing? or better move to use NodeJS or something related to that?\nObs: I do have more experience working with python\/django, so it will more productive for me right know if could find a way working with this combo.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":238,"Q_Id":55170669,"Users Score":1,"Answer":"This is definitely possible. They will be lots of possibilities, but I would recommend the following.\n\nHave a page with code on. The page has some websocket JS code that can connect to a Channels Consumer.\nThe JS does 2 simple things. When code is updated code on the screen, send a message to the Consumer, with the new text (you can optimize this later). When the socket receives a message, then replace the code on screen with the new code.\nIn your consumer, add your consumer to a channel group when connecting (the group will contain all of the consumers that are accessing the page)\nWhen a message is received, use group_send to send it to all the other consumers\nWhen your consumer callback function gets called, then send a message to your websocket","Q_Score":0,"Tags":"python,django,websocket,multiplexing","A_Id":55243910,"CreationDate":"2019-03-14T19:33:00.000","Title":"How does multiplexing in Django sockets work?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am looking to keep the browser window open even after test execution.\nI would like to keep it open indefinitely.\nAs of now , as a work around I am just using sleep() to keep the window from closing.\nAny help would be much appreciated. 
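Going back to the Django Channels answer above, a minimal sketch of the consumer flow it outlines (Channels 2.x; the group name and event type are placeholders of mine):

```python
from channels.generic.websocket import AsyncWebsocketConsumer

class EditorConsumer(AsyncWebsocketConsumer):
    group_name = "shared-editor"   # assumed: a single shared editing room

    async def connect(self):
        await self.channel_layer.group_add(self.group_name, self.channel_name)
        await self.accept()

    async def disconnect(self, close_code):
        await self.channel_layer.group_discard(self.group_name, self.channel_name)

    async def receive(self, text_data):
        # fan the edited text out to every consumer in the group
        await self.channel_layer.group_send(
            self.group_name, {"type": "editor.update", "text": text_data}
        )

    async def editor_update(self, event):
        # called once per group_send; push the new text down this websocket
        await self.send(text_data=event["text"])
```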
Thank you !","AnswerCount":3,"Available Count":1,"Score":0.2605204458,"is_accepted":false,"ViewCount":4086,"Q_Id":55186282,"Users Score":4,"Answer":"Simple - do not call Close Browser at the end.","Q_Score":3,"Tags":"python,selenium,selenium-webdriver,robotframework","A_Id":55197099,"CreationDate":"2019-03-15T15:49:00.000","Title":"Is there a way to keep the browser window open after the test execution in RobotFramework?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"It is visible on the Audience Overview page as a pie chart of Percent New Visitors vs % Returning Visitors","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":728,"Q_Id":55201171,"Users Score":2,"Answer":"There is no metric 'Percent New Visitors' and 'Percent Returned Visitors'. But you can use ga:userType and calculate %.\nAnd also metric ga:percentNewSessions exists - the percentage of sessions by users who had never visited the property before. May be it will be helpful.","Q_Score":0,"Tags":"python,google-analytics,google-analytics-api,google-analytics-firebase","A_Id":55202208,"CreationDate":"2019-03-16T20:25:00.000","Title":"How to find the % returning visitors through the Google Analytics API","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"It is visible on the Audience Overview page as a pie chart of Percent New Visitors vs % Returning Visitors","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":728,"Q_Id":55201171,"Users Score":0,"Answer":"Clarification: The pie chart actually shows % of Sessions by new and returning visitors, not % returning users. As zborovskaya stated, there is a ga:percentNewSessions which can be used (100 - %new = %returning session).","Q_Score":0,"Tags":"python,google-analytics,google-analytics-api,google-analytics-firebase","A_Id":55218898,"CreationDate":"2019-03-16T20:25:00.000","Title":"How to find the % returning visitors through the Google Analytics API","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any rationale behind the choice to use distinct methods for sending packets at L2 and L3 in Scapy? 
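For reference, the two send primitives in question look like this (a minimal sketch; the destination address is a placeholder):

```python
from scapy.all import IP, ICMP, Ether, send, sendp

pkt = IP(dst="192.0.2.1") / ICMP()   # placeholder destination (TEST-NET-1)
send(pkt)                # L3: Scapy routes the packet and builds the link layer
sendp(Ether() / pkt)     # L2: you provide the Ethernet layer and pick the interface
```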
Could Scapy not just check if the packet being sent is L2 or higher?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":93,"Q_Id":55205365,"Users Score":1,"Answer":"If Scapy were to detect whether the passed packet was L2 or L3, it would mean it has to hardcode a list of layers that are considered \u201cLayer 3\u201d and \u201cLayer 2\u201d.\nIf you make a custom layer 3, it wouldn\u2019t know in what category it falls, thus it leaves you the choice.\nAlso that\u2019s historical, dates back to 2008, you can\u2019t break it :-)","Q_Score":0,"Tags":"python,scapy","A_Id":55205923,"CreationDate":"2019-03-17T08:52:00.000","Title":"Why does Scapy use different methods to distinguish L2 and L3 packets?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to grab an ip address from a string and facing an issue. Please help.\ninet addr:11.11.11.11 Bcast:11.11.11.111 Mask:111.111.11.1.\nThis is the string I have and I need ip address next to addr:\nI have tried the following code and failed to do in python: \nip = re.findall(r'(?:\\d{1,3}\\.)+(?:\\d{1,3})', line) and get index 0 item.\nResult : This is actually giving me nothing in return","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":250,"Q_Id":55228136,"Users Score":0,"Answer":"Your REGEX could be more specific, I think you could use something like :\naddr:(?P<ip>\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3})\nIn python:\nmatch = re.match(r'addr:(?P<ip>\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3})', line)\nYou can then access the ip group by calling match.group('ip').","Q_Score":2,"Tags":"python,regex,python-3.x","A_Id":55228210,"CreationDate":"2019-03-18T18:46:00.000","Title":"How to grab first ip address from a string","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to find the latest git issue number for a repository using python for all users rather than an authenticated user.\nIs there any way i could do this? either through the API or any library?\nI looked at the github docs and from my understanding it is not possible to list issues for all users but only authenticated users.\nI looked at pygithub where all i could find was how to create an issue through the library","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":249,"Q_Id":55236480,"Users Score":0,"Answer":"@Kevin Vizalil\nYou can use the Github API's to get the list of issues or single issue\nplease check https:\/\/developer.github.com\/v3\/issues\/#list-issues\nedit:\ne.g. https:\/\/api.github.com\/repos\/vmg\/redcarpet\/issues?sort=created&direction=desc","Q_Score":0,"Tags":"python,git,github,request,github-api","A_Id":55236763,"CreationDate":"2019-03-19T08:30:00.000","Title":"How to find the latest git issue number on github using python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a web automation project. I need to be able to pull pages, assess data, and be able to interact with the page (e.g. 
login, enter values, and post to the site.) As a derivative of the logins, I think I will need something that will allow me to remain logged in given a credential (e.g. store the credential or cookies.)\nI've already used UrlLib & Requests libraries to pull files and the pages themselves.\nI am trying to decide on the best Python library for the task.\nAny suggestions would be highly appreciated.\nthank you!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1516,"Q_Id":55285021,"Users Score":1,"Answer":"@n1c9\n\nIf you can reliably recreate the HTTP requests being used to authenticate logins and speed is important, urllib\/requests for making those HTTP requests and beautifulsoup for parsing the HTML responses would be best. Otherwise, Selenium is where you'll have the most luck. Let me know if you want more details. \n\nLooks like Selenium is the right answer.","Q_Score":1,"Tags":"python,beautifulsoup,urllib3,python-requests-html","A_Id":55291345,"CreationDate":"2019-03-21T16:24:00.000","Title":"beautiful soup vs selenium vs urllib","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a contanarized flask app with external db, that logs users on other site using selenium. Everything work perfectly in localhost. I want to deploy this app using containers and found selenium container with google chrome within could make the job. And my question is: how to execute scripts\/methods from flask container in selenium container? I tried to find some helpful info, but I didn't find anything. \nShould I make an API call from selenium container to flask container? Is it the way or maybe something different?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":115,"Q_Id":55299697,"Users Score":1,"Answer":"As far as i understood, you are trying to take your local implementation, which runs on your pc and put it into two different docker containers. Then you want to make a call from the selenium container to your container containing the flask script which connects to your database.\nIn this case, you can think of your containers like two different computers. You can tell docker do create an internal network between these two containers and send the request via API call, like you suggested. But you are not limited to this approach, you can use any technique, that works for two computers to exchange commands.","Q_Score":0,"Tags":"python,selenium,docker,flask,google-chrome-headless","A_Id":55302692,"CreationDate":"2019-03-22T12:32:00.000","Title":"How to execute script from container within another container?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"In Windows I need to send an NBNS name query packet (which uses UDP protocol), and I need to send 255 packets and get an answer for each. 
With Scapy it takes a year, so I wanted to know if there is a way to speed it up or maybe use sockets instead?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":138,"Q_Id":55309306,"Users Score":0,"Answer":"You could use two threads, one that is sending packets using a conf.L3socket() socket from scapy and another thread that is receiving packets using sniff().","Q_Score":1,"Tags":"python-2.7,sockets,networking,scapy","A_Id":72091366,"CreationDate":"2019-03-23T00:07:00.000","Title":"Scapy send packets speed up in Windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I've got an AWS Lambda, written in python and behind API Gateway, which makes a network request out to a third party.\nShortly after that request, a separate request will be made by the third party to the URL of my choosing - I need to get a hold of the body of that request and return it in the response from my Lambda.\nIf perhaps I have the third party send to a second Lambda, how can I hold the first Lambda open, waiting for an event from the second Lambda?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":373,"Q_Id":55325063,"Users Score":0,"Answer":"(In the hope that someone offers a better idea...)\nWhat I'm currently intending to do is stand up a redis (Elasticache) cluster. Lambda A will send out a request with an X-Request-ID and then setup a redis pubsub().subscribe(X-Request-ID). Lambda B will receive the response and do a redis pubsub().publish(X-Request-ID, response). Lambda A will then return the response or, if not received in time, timeout.\nInelegant but I think it works.","Q_Score":0,"Tags":"python,amazon-web-services,asynchronous,aws-lambda","A_Id":56402127,"CreationDate":"2019-03-24T14:54:00.000","Title":"AWS Lambda (in python) waiting for asynchronous event","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a task where I need to web scrape boxofficemojo site.\nI coded everything and it is working perfectly fine in my local machine.\nThere are around 19000 urls that I need to scrape. As it is obviously a time consuming process, I don't want to run it on my local machine. Instead I want to run it on an aws ec2 instance.\nThe ec2 instance is Ubuntu 18.04. I have verified python versions and the libraries used in script are present or not and everything.\nHowever, if I try\nrequests.get('http:\/\/www.boxofficemojo.com') ,\nit is giving me 503 response. If I print the response text, it is saying We are in process of updating site right now. But the same thing is working in my local machine.\nWhy am I getting this wierd behaviour in ec2 instance.\nI tried testing internet connection inside the ec2 instance by issuing ping command . It's working fine.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":101,"Q_Id":55328681,"Users Score":4,"Answer":"There are public sites, and public api's that specifically block calls from ec2 instances (and probably other cloud providers). \nIt's not impossible that some of the sites you are trying to scrape, simply blacklist ec2 instances ip ranges to cut down on the 'bots' that are eating up their resources ... 
I have come across this several times, for several sites.\nThe NBA stats api is one example I am familiar with, but I have come across others as well - the sites you are scraping may be some of them as well.","Q_Score":0,"Tags":"python,amazon-web-services,amazon-ec2,web-scraping,python-requests","A_Id":55328905,"CreationDate":"2019-03-24T21:19:00.000","Title":"Why am I getting different http responses from different locations?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am working on a Project in Raspberry Pi Zero, with apache as web server. The website is in PHP and based on user inputs, it updates a XML file in the server.\nThere is also a python program program running parallely with the web server. This python program contantsly reads the XML and grabs the values from XML, stores them locally and checks for changes in them and if there is any changes it performs some UART communications with external devices, sometimes based on these external communication from the devices, python also updates the XML.\nPython reads the XML every 2 seconds, and the problem is sometimes, when the python is doing the read operation, if the user prodives input and if PHP inserts the new value to the same XML, python crashes. The client wants to reduce the 2 second delay to .1 second, which means Python will be reading fastly and any changes from PHP will crash it.\nIs there a way to get somekind of file lock between python and PHP so that, when Python is reading or writing PHP waits and if PHP is writing Python waits. Priority goes to Python above PHP.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":65,"Q_Id":55328712,"Users Score":0,"Answer":"Would be better to have an api call first which would suggest if the data is being changed currently and when was the last data changed.\nThis way you can avoid the crash which is happening due to sharing of the resource","Q_Score":0,"Tags":"php,python,xml,file-handling,file-locking","A_Id":55329011,"CreationDate":"2019-03-24T21:23:00.000","Title":"Handling a common XML file between PHP and Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to use BeautifulSoup method inside my module method. 
If I import as from BeautifulSoup import BeautifulSoup that is working in while running python through CMD\n How can I import BeautifulSoup3 library that I installed in python 2.7 into my OpenERP module.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":36,"Q_Id":55341833,"Users Score":0,"Answer":"Just copy and paste the BeautifullSoup.py file from python27\\Lib\\site-packages\\BeautifullSoup.py into the folder path openERP7>Server>server and it will do the trick.","Q_Score":1,"Tags":"python-2.7,beautifulsoup,openerp-7","A_Id":55350713,"CreationDate":"2019-03-25T16:00:00.000","Title":"How to import BeautifulSoup into a python method OpenERP 7 module","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I\u2019m looking into web scraping \/crawling possibilities and have been reading up on the Scrapy program. I was wondering if anyone knows if it\u2019s possible to input instructions into the script so that once it\u2019s visited the url it can then choose pre-selected dates from a calendar on the website. ?\nEnd result is for this to be used for price comparisons on sites such as Trivago. I\u2019m hoping I can get the program to select certain criteria such as dates once on the website like a human would.\nThanks,\nAlex","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":66,"Q_Id":55361023,"Users Score":2,"Answer":"In theory for a website like Trivago you can use the URL to set the dates you want to query but you will need to research user agents and proxies because otherwise your IP will get blacklisted really fast.","Q_Score":1,"Tags":"python,scrapy,web-crawler,data-extraction","A_Id":55368161,"CreationDate":"2019-03-26T15:37:00.000","Title":"Scrapy and possibilities available","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm have bandwidth data which identifies protocol usage by tonnage and hour. Based on the protocols, you can tell when something is just connect vs actually being used (1000 bits compared to million or billions of bits) in that hour for that specific protocol. The problem is When looking at each protocol, they are all heavily right skewed. Where 80% of the records are the just connected or what I'm calling \"noise. \nThe task I have is to separate out this noise and focus on only when the protocol is actually being used. My classmates are all just doing this manually and removing at a low threshold. I was hoping there was a way to automate this and using statistics instead of just picking a threshold that \"looks good.\" We have something like 30 different protocols each with a different amount of bits which would represent \"noise\" i.e. a download prototypical might have 1000 bits where a messaging app might have 75 bits when they are connected but not in full use. Similarly they will have different means and gaps between i.e. download mean is 215,000,000 and messaging is 5,000,000. 
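Returning to the Scrapy answer above about encoding the dates in the URL, a minimal sketch (the site, URL and parameter names are invented, and a browser-like User-Agent is set per the same advice):

```python
import scrapy

class HotelPricesSpider(scrapy.Spider):
    name = "hotel_prices"

    def start_requests(self):
        # Hypothetical search URL -- the real site defines its own parameters.
        url = "https://www.example-hotels.com/search?checkin=2019-06-01&checkout=2019-06-03"
        yield scrapy.Request(
            url,
            headers={"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"},
            callback=self.parse,
        )

    def parse(self, response):
        # Hypothetical selector for the listed prices.
        yield {"prices": response.css(".price::text").extract()}
```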
There isn't any set pattern between them.\nAlso this \"noise\" has many connections but only accounts for 1-3% of the total bandwidth being used, this is why we are tasked with identify actual usage vs passive usage.\nI don't want any actual code, as I'd like to practice with the implementation and solution building myself. But the logic, process, or name of a statistical method would be very helpful.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":36,"Q_Id":55363520,"Users Score":0,"Answer":"Do you have labeled examples, and do you have other data besides the bandwidth? One way to do this would be to train some kind of ML classifier if you have a decent amount of data where you know it's either in use or not in use. If you have enough data you also might be able to do this unsupervised. For a start a simple Naive Bayes classifier works well for binary solutions. As you may be away, NB was the original bases for spam detection (is it spam or not). So your case of is it noise or not should also work, but you will get more robust results if you have other data in addition to the bandwidth to train on. Also, I am wondering if there isn't a way to improve the title of your post so that it communicates your question more quickly.","Q_Score":0,"Tags":"python,r,statistics","A_Id":55462001,"CreationDate":"2019-03-26T17:57:00.000","Title":"Determining \"noise\" in bandwidth data","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was working on a decentralized python chat room. I was just wondering, if you don't open up a port, you can't receive messages. But, if you open up a port, you are vulnerable to getting hacked or something. So, is there a way to communicate with Python between a server and a client without opening up ports?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":39,"Q_Id":55399309,"Users Score":0,"Answer":"At least one of the client or the server should have an open port (typically, the server). As soon as a TCP connection is established (typically, by the client), a random (= chosen by the operating system) port will be used by the client to be reachable by the server.","Q_Score":0,"Tags":"python,sockets","A_Id":55399380,"CreationDate":"2019-03-28T13:52:00.000","Title":"Is there a way to communicate between server and client in Python without opening up ports?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to crawl the football data from the whoscored.com, the website has incapsula web oriented security which is not letting me crawl. Initially, I tried to give user_agent and changed the header then it worked but that's only for the first page. As I need to crawl some other parts of the website it keeps blocking me to request the website. It's getting exhausting now since the blocking time has been increasing.\nIs there anybody who could suggest something to bypass the incapsula security mechanisms.\nI need data for study purposes.\nI have gone through all the old the previous question asked about this topic. but it does not help.\nTools. 
anaconda, language python, and library beautiful soup\nThanks.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":343,"Q_Id":55404628,"Users Score":0,"Answer":"if you mimic browser headers and appropiate time between requests, it will probably work\nlook at your request headers and that of your browser","Q_Score":0,"Tags":"python,data-science","A_Id":55663189,"CreationDate":"2019-03-28T18:32:00.000","Title":"By passing incapsula security- python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I just started studying flask and built a toy website to see how well I am doing. I have a flask website built in python 3.6 and I have tested it on my windows computer and everything goes very well. Now I want to host the website on an ubuntu ec2 instance. But first, I am testing if everything runs well on my ec2 instance and am stuck at trying to access port 5000 on my ec2 instance My app is currently serving on port 127.0.0.1:5000 of my linux server. I have tried to connect to my.ec2.public.ip:5000 and my.ec2.private.ip:5000 with no success. Could someone help me? Thanks.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":287,"Q_Id":55426401,"Users Score":0,"Answer":"By default a new AWS instance would not allow port 5000 to be accessed, so you will need to modify the security group to allow access on that port. You do this thru the AWS console.","Q_Score":0,"Tags":"amazon-ec2,python-3.6,flask-sqlalchemy,ubuntu-18.04","A_Id":55426431,"CreationDate":"2019-03-29T22:51:00.000","Title":"How do I access 127:.0.0.1:5000 on Ubuntu Server 16.04 LTS (HVM), SSD Volume Type","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"We are trying to get the owned games of a lot of users but our problem is that after a while the API call limit (100.000 a day) kicks in and we stop getting results. \nWe use 'IPlayerService\/GetOwnedGames\/v0001\/?key=APIKEY&steamid=STEAMID' in our call and it works for the first entries. \nThere are several other queries like the GetPlayerSummaries query which take multiple Steam IDs, but according to the documentation, this one only takes one. \nIs there any other way to combine\/ merge our queries? 
We are using Python and the urllib.request library to create the request.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":807,"Q_Id":55505061,"Users Score":0,"Answer":"Depending on the payload of the requests you have the following possibilities:\n\nif each request brings only the newest updates, you could serialize the steam ID's when you get the response that you've hit the daily limit\nif you have the ability to control via the request payload what data you receive, you could go for a multithreaded \/ multiprocessing approach that consume the request queries and the steam ID's from a couple of shared resources","Q_Score":0,"Tags":"python,urllib,steam-web-api","A_Id":55505148,"CreationDate":"2019-04-03T22:08:00.000","Title":"Steam Web API GetOwnedGames multiple SteamIDs","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to import xml.dom.minidom in order to read and edit an xml file but am getting an error.\nimport xml.dom.minidom\nI get the error message:\nModuleNotFoundError: No module named 'xml.dom'; 'xml' is not a package","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":253,"Q_Id":55513629,"Users Score":2,"Answer":"This may be due to a file name issue (such as xml.py). There may be a conflict with the file name when compiling. Rename it to solve the issue.","Q_Score":0,"Tags":"python,xml","A_Id":55513753,"CreationDate":"2019-04-04T10:27:00.000","Title":"How can I import xml.dom.mindom","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been getting a lot of spam emails from various colleges. I wrote a simple Python program to go through my emails, and find all the ones whose sender ends in .edu.\nThen, to delete it, I though I was supposed to add the \\\\Deleted flag to those emails using: Gmail.store(id, \"+FLAGS\", \"\\\\Deleted\"). This did not work, and some of them didn't even disappear from the inbox. \nAfter more research, I found that instead I had to use Gmail.store(id, \"+X-GM-LABELS\", \"\\\\Trash\"). \nSo I updated the program to use that, but now it doesn't see any of the emails that I previously added the \\\\Deleted flag to.\nIs there any way I can reset all those emails and then trash them afterward?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":371,"Q_Id":55525410,"Users Score":1,"Answer":"They should be in your All Mail folder. 
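Doing that recovery from Python might look like the sketch below (the credentials and the .edu search are placeholders, and Gmail IMAP normally requires an app password):

```python
import imaplib

M = imaplib.IMAP4_SSL("imap.gmail.com")
M.login("user@gmail.com", "app-password")          # placeholder credentials
M.select('"[Gmail]/All Mail"')                     # quoted because of the space

typ, data = M.search(None, "FROM", '".edu"')       # find the college senders again
for num in data[0].split():
    M.store(num, "-FLAGS", "\\Deleted")            # clear the stale \Deleted flag
    M.store(num, "+X-GM-LABELS", "\\Trash")        # move to Trash the Gmail way
M.logout()
```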
Use the WebUI and search for them and trash them, or select the \"[Gmail]\\All Mail\" folder (watch out for localization, this can change name for non-English users).","Q_Score":0,"Tags":"python,email,gmail,imaplib","A_Id":55544626,"CreationDate":"2019-04-04T21:47:00.000","Title":"Python imaplib recover emails with \\\\Deleted flag and move to trash","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I recent upgraded web2py and starting using username=True, the form returned via auth\/profile no longer contains the user email address.\nHow can a user change email address under the standard api?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":93,"Q_Id":55547054,"Users Score":1,"Answer":"With or without username=True, the email address is not editable via the current Auth API (this was changed about a year ago, presumably for security reasons). For now, you'll have to implement your own email change functionality. For extra security, you might want to require password verification, and maybe send a verification email to the new address (and possibly a notification to the old address upon completion of the change).","Q_Score":1,"Tags":"python,web2py","A_Id":55550260,"CreationDate":"2019-04-06T08:00:00.000","Title":"How to change email address of user in web2py using standard auth api with username=True","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"When I try to build chromium on Windows10, a python error occurs,\nit says \"ImportError: No module named win32file\". How to savle this?","AnswerCount":1,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":13868,"Q_Id":55551188,"Users Score":20,"Answer":"My solution is: python -m pip install pywin32.\nThen you will see module win32file in the path of C:\/python27\/Lib\/site-packages\/win32file.pyd","Q_Score":11,"Tags":"python-2.7","A_Id":55551200,"CreationDate":"2019-04-06T16:40:00.000","Title":"python ImportError: No module named win32file","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I got a bit lost in the boto3 API details and struggle to find an example to access an S3 bucket using python. I need to use an existing pem file rather than the typical access and secret key. Works fine using an ftp client but I need to get it running also with python.\nAnyone that can point me in the right direction (or suggest alternatives using python)","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":1571,"Q_Id":55552553,"Users Score":3,"Answer":"This is not possible.\nThe types of authentication used on AWS are:\n\nUsername and password associated with an IAM User, used to login to the web management console.\nAccess Key and Secret Key associated with an IAM User, used to make API calls to AWS services\nPrivate Key (PPK\/PEM) used to login to Linux instances.\n\nPrivate Keys are used to login to an operating system and are unrelated to AWS. 
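In practice that means S3 access from Python goes through IAM credentials rather than a key pair; a minimal boto3 sketch (bucket and key names are placeholders):

```python
import boto3

# Credentials come from the standard chain (environment variables,
# ~/.aws/credentials, or an EC2 instance role) -- never from a .pem file.
s3 = boto3.client("s3")
s3.download_file("my-example-bucket", "reports/latest.csv", "/tmp/latest.csv")
```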
They are a standard means of accessing Linux systems and identify users defined on the computer itself rather than on AWS.\nAPI calls to AWS require the Access Key and Secret Key and have no relationship to PPK\/PEM keys.","Q_Score":2,"Tags":"python,amazon-web-services,amazon-s3","A_Id":55554101,"CreationDate":"2019-04-06T19:10:00.000","Title":"How do I connect to an Amazon S3 bucket with a pem file using python (boto3)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I often encounter elements which I cannot right click to inspect their xpath or css. \nI want to ask what other ways exist to click on those elements ?","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":1061,"Q_Id":55556644,"Users Score":0,"Answer":"Control + Shift + C or F12 will open the dev tools for you, you can then click on the cursor mode on your browser.","Q_Score":1,"Tags":"python,selenium,google-chrome,xpath,google-chrome-devtools","A_Id":55558250,"CreationDate":"2019-04-07T07:32:00.000","Title":"How to interact with elements those cannot be inspected through css\/xpath within google-chrome-devtools","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I often encounter elements which I cannot right click to inspect their xpath or css. \nI want to ask what other ways exist to click on those elements ?","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":1061,"Q_Id":55556644,"Users Score":1,"Answer":"You can use Ctrl + Shift + C, it will open the devtools with selecting an element to inspect enabled. Just move the mouse cursor to the element and click, it will scroll the html view in devtools to the correct place.\nAlternatively, you can press F12 and toggle the selecting an element to inspect button (top left corner of the devtools).","Q_Score":1,"Tags":"python,selenium,google-chrome,xpath,google-chrome-devtools","A_Id":55558180,"CreationDate":"2019-04-07T07:32:00.000","Title":"How to interact with elements those cannot be inspected through css\/xpath within google-chrome-devtools","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I often encounter elements which I cannot right click to inspect their xpath or css. \nI want to ask what other ways exist to click on those elements ?","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":1061,"Q_Id":55556644,"Users Score":0,"Answer":"If you want to get elements locator but the right click doesn't work then try the following. \nFirst, Open the dev tools window by pressing Ctrl+Shift+I. \nIf that doesn't work then first open the dev tool then load the page. \nAfter opening the dev tools click on \"select element\" tool, the printer icon at the left of the dev tool. You can directly get this tool by pressing Ctrl+Shift+C.\nThen hover on the element that you want to get the locator. The element will be selected in the DOM in the elements tab. Right click on the elements in the DOM. 
\nThen from the context menu go to copy -> copy selector for CSS selector or copy XPath for XPath. \nNow you have got the locator of that element in the clipboard.","Q_Score":1,"Tags":"python,selenium,google-chrome,xpath,google-chrome-devtools","A_Id":55556724,"CreationDate":"2019-04-07T07:32:00.000","Title":"How to interact with elements those cannot be inspected through css\/xpath within google-chrome-devtools","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"using pip I installed new version of HtmlTestRunner in python 3.6, but while I run python file through command prompt its throws error. \n\nTraceback (most recent call last):\n File \"seleniumUnitTest.py\", line 3, in \n import HtmlTestRunner\n ModuleNotFoundError: No module named 'HtmlTestRunner'","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":1410,"Q_Id":55565407,"Users Score":2,"Answer":"Using python program.py instead of python3 program.py fixed my problem in Windows 10.","Q_Score":2,"Tags":"python,python-3.x,selenium,selenium-webdriver","A_Id":65629114,"CreationDate":"2019-04-08T02:11:00.000","Title":"Already installed Html test runner but it shows error \"ModuleNotFoundError: No module named 'HtmlTestRunner' \"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am new to Machine Learning. I am trying to build a classifier that classifies the text as having a url or not having a url. The data is not labelled. I just have textual data. I don't know how to proceed with it. Any help or examples is appreciated.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1580,"Q_Id":55576378,"Users Score":1,"Answer":"Since it's text, you can use bag of words technique to create vectors.\n\nYou can use cosine similarity to cluster the common type text.\nThen use classifier, which would depend on number of clusters.\nThis way you have a labeled training set. \n\nIf you have two cluster, binary classifier like logistic regression would work. \nIf you have multiple classes, you need to train model based on multinomial logistic regression\nor train multiple logistic models using One vs Rest technique.\n\nLastly, you can test your model using k-fold cross validation.","Q_Score":0,"Tags":"python,machine-learning,classification","A_Id":55576706,"CreationDate":"2019-04-08T15:02:00.000","Title":"How to classify unlabelled data?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We have created website online exam portal in nodejs and mongodb hosted on amazon aws. Problem is the descriptive answer evaluation code is in python. So is there any way we can combine both. Run python code in nodejs. 
We have used express.js for nodejs and mongoose framework for mongodb.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":37,"Q_Id":55597443,"Users Score":0,"Answer":"You can use lambda for your python, then hit it from nodejs app, and return the response for your nodejs app.","Q_Score":0,"Tags":"python,node.js,mongodb","A_Id":55597496,"CreationDate":"2019-04-09T16:36:00.000","Title":"Is there any way to use python code in nodejs project","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I hope you are all having an amazing day. So I am working on a project using Python. The script's job is to automate actions and tasks on a social media platform via http requests. As of now, one instance of this script access one user account. Now, I want to create a website where I can let users register, enter their credentials to the social media platform and run an instance of this script to perform the automation tasks. I've thought about creating a new process of this script every time a new user has register, but this doesn't seem efficient. Also though about using threads, but also does not seem reasonable. Especially if there are 10,000 users registering. What is the best way to do this? How can I scale? Thank you guys so much in advance.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":292,"Q_Id":55602143,"Users Score":1,"Answer":"What is the nature of the tasks that you're running?\nAre the tasks simply jobs that run at a scheduled time of day, or every X minutes? For this, you could have your Web application register cronjobs or similar, and each cronjob can spawn an instance of your script, which I assume is short-running, to carry out a the automated task one user at a time. If the exact timing of the script doesn't matter then you could scatter the running of these scripts throughout the day, on seperate machines if need be.\nThe above approach probably won't scale well to 10,000 users, and you will need something more robust, especially if the script is something that needs to run continuously (e.g. you are polling some data from Facebook and need to react to its changes). If it's a lot of communication per user, then you could consider using a producer-consumer model, where a bunch of producer scripts (which run continously) issue work requests into a global queue that a bunch of consumer scripts poll and carry out. You could also load balance such consumers and producers across multiple machines.\nOf course, you would definitely want to squeeze out some parallelism from the extra cores of your machines by carrying out this work on multiple threads or processes. You could do this quite easily in Python using the multiprocessing module.","Q_Score":0,"Tags":"python,web-services,automation,httprequest,scale","A_Id":55603470,"CreationDate":"2019-04-09T22:21:00.000","Title":"How to serve a continuously running python script to multiple users (Social Media Bot)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have updated and installed Microsoft Power BI Desktop Report Server (64-bit).\nPrior to this I have already installed python 3.6 in machine. 
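Picking up the producer-consumer suggestion from the social-media-bot scaling answer above, a bare-bones sketch using the multiprocessing module (the per-user task is a stub):

```python
from multiprocessing import Process, Queue, cpu_count

def producer(queue, user_ids, n_workers):
    for uid in user_ids:
        queue.put(uid)        # one work item per registered user
    for _ in range(n_workers):
        queue.put(None)       # poison pills so every consumer exits

def consumer(queue):
    while True:
        uid = queue.get()
        if uid is None:
            break
        print("running automation tasks for user", uid)   # stand-in for the real work

if __name__ == "__main__":
    n_workers = cpu_count()
    q = Queue()
    workers = [Process(target=consumer, args=(q,)) for _ in range(n_workers)]
    for w in workers:
        w.start()
    producer(q, range(100), n_workers)   # 100 hypothetical user ids
    for w in workers:
        w.join()
```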
I am trying run python script on Power BI. but I don't see any option.\nI know this can be enabled by selecting \"Python Script\" in Preview Feature.\nBut the problem is I cannot see \"Preview Feature\" available in Options.\nCan anyone help me how I can execute python script there.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":775,"Q_Id":55613905,"Users Score":0,"Answer":"The Preview Feature is not available with Report server version. We need to have Power BI Desktop Cloud. Also version should be August 2018 or above.","Q_Score":0,"Tags":"python,powerbi,visualization,powerbi-desktop,preview-feature","A_Id":55757798,"CreationDate":"2019-04-10T13:32:00.000","Title":"Preview feature not available in Power BI destop Report server Jan 2019 version","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using getstream.io to create feeds. The user can follow feeds and add reaction like and comments. If a user adds a comment on feed and another wants to reply on the comment then how I can achieve this and also retrieve all reply on the comment.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":583,"Q_Id":55644656,"Users Score":0,"Answer":"you can add the child reaction by using reaction_id","Q_Score":1,"Tags":"python,getstream-io","A_Id":55813711,"CreationDate":"2019-04-12T04:46:00.000","Title":"How to add reply(child comments) to comments on feed in getstream.io python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Good afternoon everyone,\nFor a school project I will make a RC car using a c8051 microcontroller and to send the uart data to it I'm using a ESP32 so that I can display a webpage so that the user choose the direction of the car. I've spent a lot of time on micropython doc's page and tutorial for TCP sockets and I see in every one of them that to check if the webpage was requested they use something like:\nIf(request==6):\nAnd I can't figure out why 6, what that represents??\nI appreciate any help given.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":217,"Q_Id":55690355,"Users Score":0,"Answer":"The answer found in the comment section of the link given\n\"In the while loop, after receiving a request, we need to check if the request contains the \u2018\/?led=on\u2019 or \u2018\/?led=on\u2019 expressions. For that, we can apply the find() method on the request variable. the find() method returns the lowest index of the substring we are looking for.\nBecause the substrings we are looking for are always on index 6, we can add an if statement to detect the content of the request. 
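Roughly, the handler being described reduces to this MicroPython sketch (the pin number and the response body are assumptions):

```python
import socket
from machine import Pin     # MicroPython-only module, available on the ESP32

led = Pin(2, Pin.OUT)       # assumed LED pin
s = socket.socket()
s.bind(("", 80))
s.listen(1)

while True:
    conn, addr = s.accept()
    request = str(conn.recv(1024))    # e.g. "b'GET /?led=on HTTP/1.1 ..."
    # find() returns the index of the substring; with the leading b' added by
    # str(), the path starts at index 6, which is what the == 6 check relies on.
    if request.find("/?led=on") == 6:
        led.value(1)
    elif request.find("/?led=off") == 6:
        led.value(0)
    conn.send(b"HTTP/1.1 200 OK\r\n\r\n")
    conn.close()
```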
If the led_on variable is equal to 6, we know we\u2019ve received a request on the \/?led=on URL and we turn the LED on.\nIf the led_off variable is equal to 6, we\u2019ve received a request on the \/?led=off URL and we turn the LED off.\"","Q_Score":0,"Tags":"microcontroller,esp32,micropython","A_Id":55703438,"CreationDate":"2019-04-15T13:32:00.000","Title":"Esp32 micropython webserver TCP socket to check http get request","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"GRPC server does queue the requests and serve them based on the maxWorker configuration which is passed when the sever starts up. How do I print the metric- number of items in the queue .? Essentially, I would like to keep track of the numbers of requests in waiting state.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2149,"Q_Id":55699452,"Users Score":0,"Answer":"You can pass your own executor to serverBuilder.executor(). Note that you are then responsible for shutting down the executor service after the server has terminated.","Q_Score":3,"Tags":"grpc,grpc-java,grpc-python","A_Id":55712069,"CreationDate":"2019-04-16T01:49:00.000","Title":"keep track of the request queue in grpc server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am looking for a way to use web drivers (ChromeDriver, IEDriver, GeckoDriver etc., all together) with my native python app such that, the app will figure out the browser, and choose the driver accordingly, and will do some actions (like click an element or get data). I want to do the task in python without using selenium","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":618,"Q_Id":55707406,"Users Score":1,"Answer":"It would theoretically be possible to use the driver executables without Selenium. All WebDriver implementations operate using the same mechanism. That mechanism is starting an HTTP server running locally, and listening on a well-known set of end points (URLs) for HTTP requests containing well-defined JSON bodies.\nIt\u2019s fully possible to even start a WebDriver implementation like IEDriverServer.exe, geckodriver, or chromedriver and automate the browser even using a tool like cURL, so using a Python HTTP client library and JSON parser is certainly in the realm of the possible. However, doing so requires a fairly thorough understanding of the protocol used in communicating with the driver, and gaining that understanding is distinctly non-trivial. In fact, use of that protocol without needing to know the details of it is one of the very reasons for Selenium\u2019s existence.\nWhile what you say you want to do is possible, I would by no means call it recommended. Attempting to go down that road seems like a lot of effort for a very marginal benefit, when you consider you need to worry about lifetime of the executable process you spawn, proper formatting of the HTTP request bodies, and handling all of the potential responses from the remote end. You\u2019d be reinventing a whole lot of things that Selenium already does for you. 
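To give a sense of what that plumbing involves, here is a bare-bones sketch that drives a locally started chromedriver (default port 9515) with nothing but an HTTP client:

```python
import requests

BASE = "http://localhost:9515"   # chromedriver started separately, default port

# Create a session (W3C WebDriver protocol)
resp = requests.post(
    f"{BASE}/session",
    json={"capabilities": {"alwaysMatch": {"browserName": "chrome"}}},
)
session = resp.json()["value"]["sessionId"]

# Navigate, then read the page title back
requests.post(f"{BASE}/session/{session}/url", json={"url": "https://example.com"})
title = requests.get(f"{BASE}/session/{session}/title").json()["value"]
print(title)

# Tear the session down (quits the browser)
requests.delete(f"{BASE}/session/{session}")
```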
Your question doesn\u2019t show any indication of why you don\u2019t want to use Selenium, so it\u2019s difficult to provide any further guidance as to alternatives or mitigations to the things you find objectionable about it.","Q_Score":1,"Tags":"python-3.x,automation,webdriver,selenium-chromedriver","A_Id":55767473,"CreationDate":"2019-04-16T11:45:00.000","Title":"How can we use ChromeDriver, IEDriver, GeckoDriver without selenium in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Uing BAsh console i installed Steem Module for Python3.6\nand everything seems right but when i try to run my python script in Pythonanywhere it says there is problem connecting with api.steemit.com\nbut the script works fine in my PC. \ncode \n import steem\n from steem import Steem\n s=Steem()\n s.get_account_history(your_ac, index_from=-1, limit=12)\nlog is here. Is there any way to solve this issue?\nWARNING:root:Retry in 1s -- MaxRetryError: HTTPSConnectionPool(host='api.steemit.com', port=443): Max retries exceeded with url: \/ (Caused by NewC\nonnectionError(': Failed to establish a new connection: [Errno 111] Connectio\nn refused',))\nWARNING:urllib3.connectionpool:Retrying (Retry(total=19, connect=None, read=None, redirect=0, status=None)) after connection broken by 'NewConnect\nionError(': Failed to establish a new connection: [Errno 111] Connection refu\nsed',)': \/","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":159,"Q_Id":55779701,"Users Score":3,"Answer":"External connections for free accounts on PythonAnywhere are passed through a proxy. The error you're getting looks like the library you're using is not using the proxy. Check the docs for the library to see how to configure it to use a proxy. If it does not support it, contact the authors to see if they can add support for proxies.","Q_Score":1,"Tags":"https,pythonanywhere,connection-refused","A_Id":55797086,"CreationDate":"2019-04-21T03:46:00.000","Title":"Connection Refused in pythonAnwhere","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to extract audio track information (specifically language of the audio) from a live stream that i will be playing with libVLC . Is it possible to do this in javascript or python without writing new code for a wrapper?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":74,"Q_Id":55780457,"Users Score":0,"Answer":"Not sure about javascript, but the Python wrapper will let you do this.","Q_Score":0,"Tags":"javascript,python,node.js,libvlc","A_Id":55788853,"CreationDate":"2019-04-21T06:29:00.000","Title":"Is it possible to get audio track data from libVLC by using javascript\/python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to access files stored in my s3 buckets running python script in ec2 machine . 
Boto3 python packages facilitates this but is there some other way files stored in the s3 bucket could be accessed simply providing url of s3 bucket rather than importing boto3 in python program ?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1484,"Q_Id":55783565,"Users Score":0,"Answer":"AWS CLI and shell scripts instead of writing a python application and installing boto3 is what I recently did. I worried about python version being installed and didn't want to install boto3, we were using a variant of an Amazon Linux which all will have AWS CLI and will also have installed jq command tool is a great way to get around installing boto3. It can be supplement with python as well. I decided to go with shell scripting, because my program was relatively simple.","Q_Score":0,"Tags":"python,amazon-web-services,amazon-s3,amazon-ec2","A_Id":55786037,"CreationDate":"2019-04-21T14:21:00.000","Title":"How to access files stored in s3 bucket from python program without use of boto3?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have roughly 80tb of images hosted in an S3 bucket which I need to send to an API for image classification. Once the images are classified, the API will forward the results to another endpoint.\nCurrently, I am thinking of using boto to interact with S3 and perhaps Apache airflow to download these images in batches and forward them to the classification API, which will forward the results of the classification to a web app for display.\nIn the future I want to automatically send any new image added to the S3 bucket to the API for classification. To achieve this I am hoping to use AWS lambda and S3 notifications to trigger this function.\nWould this be the best practice for such a solution?\nThank you.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":159,"Q_Id":55795614,"Users Score":1,"Answer":"For your future scenarios, yes, that approach would be sensible:\n\nConfigure Amazon S3 Events to trigger an AWS Lambda function when a new object is created\nThe Lambda function can download the object (to \/tmp\/) and call the remote API\nMake sure the Lambda function deletes the temporary file before exiting since the Lambda container might be reused and there is a 500MB storage limit\n\nPlease note that the Lambda function will trigger on a single object, rather than in batches.","Q_Score":0,"Tags":"python,amazon-web-services,amazon-s3,aws-lambda,airflow","A_Id":55800992,"CreationDate":"2019-04-22T14:01:00.000","Title":"Large Scale Processing of S3 Images","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have an application which handles websocket and http requests for some basic operations and consuming push data over sockets. Nothing is very computation intensive. Some file tailing, occasional file read \/ write is all that it has to do with heavy processing currently. I want to deploy this to Linux. I have no static files to handle \nCan a tornado application handle 50-100 websocket and http clients without needing ngnix ? I don't want to use another server for this. 
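A skeleton of the S3-triggered Lambda described in the image-classification answer a little further up (the call to the classification API is left as a stub):

```python
import os
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # One record per object that fired the S3 "ObjectCreated" notification
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        local_path = "/tmp/" + os.path.basename(key)
        s3.download_file(bucket, key, local_path)
        # ... send local_path to the classification API here ...
        os.remove(local_path)   # free /tmp before the container is reused
```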
How many clients can it handle on its own ?\nEverywhere I search I get ngnix, and I don't want to involve it","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":132,"Q_Id":55801051,"Users Score":1,"Answer":"Yes, Tornado can easily handle 50-100 websocket and http clients without needing Ngnix. You only need Nginx as a reverse proxy if you're running multiple Tornado processes on separate ports. \nIf you're running a single process or multiple process on a single port, you don't need Nginx.\nI've seen benchmarks which show that with a single Tornado process, you can serve around 5,000 connections per second if your response message size is around 100 KB; and over 20,000 requests per second for 1 KB response size. But this also depends on your CPU speed.\nI think it's safe to assume with an average CPU and around 1 GB RAM, you can easily serve around a 2,000-3,000 requests per second.","Q_Score":0,"Tags":"python,server,tornado,production-environment","A_Id":55805718,"CreationDate":"2019-04-22T20:57:00.000","Title":"Can tornado support 50 -100 websocket clients using its default http server without involving ngnix","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to use Google Cloud Profiler in a python script running locally. It seems it is having problems to connect with a metadata server:\n\nWARNING:googlecloudprofiler.client:Failed to fetch instance\/zone from GCE metadata server: HTTPConnectionPool(host='metadata', port=80): Max retries exceeded with url: \/computeMetadata\/v1\/instance\/zone (Caused by NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or service not known',))\nWARNING:googlecloudprofiler.client:Failed to fetch instance\/name from\n GCE metadata server: HTTPConnectionPool(host='metadata', port=80): Max\n retries exceeded with url: \/computeMetadata\/v1\/instance\/name (Caused\n by NewConnectionError(': Failed to establish a new connection: [Errno -2] Name\n or service not known',))\n\nSince the app seems to be running correctly and the profiler is collecting data successfully, is it OK if I just ignore the warnings or will I likely encounter some problems in the future?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":588,"Q_Id":55813180,"Users Score":2,"Answer":"If you're running locally (and haven't, for example, manually zone in the config), these warnings are expected, so ignoring them is definitely okay.\n(Disclosure: I work at Google on Stackdriver Profiler)","Q_Score":0,"Tags":"python,google-cloud-platform,google-cloud-profiler","A_Id":56082596,"CreationDate":"2019-04-23T14:09:00.000","Title":"google-cloud-profiler metadata server WARNING","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to run a Selenium Script (Using PHP) using a Webserver.\nI'm working on Kali and to simulate the Webserver I use Xampp.\nI tried to run the selenium script on Xampp by the following steps:\n-Download the Php Webdriver Bindings, put them in the folder 'htdpcs' of xampp and edit the 'example.php' file following the settings of my own device.\n-Download and execute the Selenium Server Standalone, on port :4444.\nIn the 
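To make the Tornado answer above concrete, here is a minimal single-process sketch that serves both plain HTTP and WebSocket clients on one port with no reverse proxy in front; the port and URL paths are arbitrary choices.

```python
# One Tornado process, one port, HTTP and WebSocket handlers side by side.
import tornado.ioloop
import tornado.web
import tornado.websocket

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("hello over plain HTTP")

class EchoSocket(tornado.websocket.WebSocketHandler):
    def open(self):
        print("websocket opened")

    def on_message(self, message):
        self.write_message("you said: " + message)

    def on_close(self):
        print("websocket closed")

def make_app():
    return tornado.web.Application([
        (r"/", MainHandler),
        (r"/ws", EchoSocket),
    ])

if __name__ == "__main__":
    app = make_app()
    app.listen(8888)                          # no Nginx needed for a single process
    tornado.ioloop.IOLoop.current().start()
```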
end, I download the geckodriver and I execute the file, but I got the this error:\nOSError: [Errno 98] Address already in use\nHow to fix it in order to run the php-selenium script?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":269,"Q_Id":55816936,"Users Score":0,"Answer":"The Selenium Server will fork a geckodriver as soon as it needs one to start a new browser session. You should not start a geckodriver yourself. If you want to use its Webdriver API yourself you can start it with the --webdriver-port argument.","Q_Score":0,"Tags":"python,selenium,selenium-webdriver","A_Id":55820106,"CreationDate":"2019-04-23T17:59:00.000","Title":"Selenium Servers and Geckodriver don't run at same port","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a problem with my project. It involves downloading data from various websites.\nFor now it's tens of pages, but in the future it will be thousands of pages depending on the country. Each page has its own script.\nTo support these scripts, I created a main script that calls each subsequent script on a separate thread using the multi-threading library. \nThe script on the input has a path to the file with data already downloaded to the database, and on the output it gives the second file only with new data. At the moment, this solution was used only for 4 pages. It works as I assume. But I wonder what happens when these websites will be a few hundred or a few thousand?\nI think I could create a separate main script depending on the country, but as a result, it would give at least several hundred pages (scripts) per country, so according to my logic, several hundred threads run at one time.\nDoes it have a chance to operate on such a number of websites? I opted for multithreading due to the large number of web content download operations.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":66,"Q_Id":55831248,"Users Score":0,"Answer":"Ok, sounds smart :) Now I have 2 scripts with 'ThreadPool' method. Works fine. But to execute these scripts I have 'main_script' which until today I have separate threads for each script. So in main_script I can use too 'ThreadPool' method? And then how looks like a cost of CPU and generally perfomance? Because in future I want have e.g. 100 scripts in main_script (e.g. max_workers = 5) and every script have too ThreadPool (e.g. max_workers=5), so in one time I have a 5x5 = 25 threads?","Q_Score":0,"Tags":"python,multithreading,web-scraping","A_Id":56000171,"CreationDate":"2019-04-24T13:29:00.000","Title":"How handle multithreading on big-scale web scraping project?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm learning a bit about the automation of web applications using python \nI came across this module: \nfrom selenium.webdriver.common.keys import Keys\nWhat is the meant by keys and use of keys?","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":2663,"Q_Id":55877964,"Users Score":3,"Answer":"The Keys are referring to keyboard keys. It's used to ID the different Keys on the keyboard. 
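Regarding the thread-pool discussion in the multi-site scraping question above, a small sketch of one flat `ThreadPoolExecutor` shared by all download jobs may be clearer than nesting pools inside pools; the URLs and worker count here are placeholders.

```python
# One shared pool of worker threads downloading many pages concurrently.
import concurrent.futures
import urllib.request

URLS = [
    "https://example.com/page1",
    "https://example.com/page2",
    "https://example.com/page3",
]

def fetch(url):
    # Download one page and return its size; real code would parse and store it.
    with urllib.request.urlopen(url, timeout=30) as resp:
        return url, len(resp.read())

with concurrent.futures.ThreadPoolExecutor(max_workers=5) as pool:
    futures = [pool.submit(fetch, url) for url in URLS]
    for future in concurrent.futures.as_completed(futures):
        url, size = future.result()
        print(f"{url}: {size} bytes")
```

With a single pool, the total number of live threads is capped by `max_workers` no matter how many sites are queued, which avoids the 5 x 5 thread explosion the poster worries about.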
Ex: Keys.RETURN means your ENTER key in your keyboard.\nThese keys can then be used by selenium for keyboard emulation or other stuff.","Q_Score":0,"Tags":"python","A_Id":55878045,"CreationDate":"2019-04-27T06:37:00.000","Title":"What is the use of from selenium.webdriver.common.keys import Keys","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I receive selenium.common.exceptions.WebDriverException: Message: Service \/usr\/bin\/google-chrome unexpectedly exited. Status code was : 0\nFor background, this is a Linux system and I am typing all the information in the terminal. I looked at many questions. A lot of them recommending uninstalling and reinstalling Chrome. I did that several times. I typed in whereis Google Chrome and found the location. I did not get a .exe file though so I used \"\/usr\/bin\/google-chrome\". Linux doesn't appear to create a .exe file. I am bringing this up because I am not sure if this contributed to my error.\nThis is after I typed in \nmy_path = \"\/usr\/bin\/google-chrome\"\nbrowser = webdriver.Chrome(executable_path=my_path)\nI get many lines of responses on the terminal referencing files in my python3.6 library. Before the main error of it saying it exited unexpectedly, I get \n\"file \"home\/ganesh\/.local\/lib\/python3.6\/site-packages\/selenium\/webdriver\/chrome\/webdriver.py, line 73 in init\"\n\"file \"home\/ganesh\/.local\/lib\/python3.6\/site-packages\/selenium\/webdriver\/chrome\/webdriver.py, line 98 in start\"\nThe thing is that my terminal successfully opens the Chrome browser. However, I get the webdriver exception message I had.\nIn addition, the code ,\nbrowser = webdriver.Chrome(executable_path=my_path)\nclearly didn't fully work because later in the program when I type in \nbrowser.(something else), it doesn't work and says \"name 'browser' is not define\"\nI am hoping for the webdriver exception error to be resolved and for me to successfully be able to call browser in my code later on\nThis question is not a duplicate as marked by someone here. The question that he referred to as answering my question does not answer my question - in that version Chrome exited. In mine, Chrome did not exit, it stayed open. In addition, the previous question has solutions in Windows and Mac, but not for Linux which is my operating system.\nIt is my first week using Linux.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":87,"Q_Id":55885675,"Users Score":1,"Answer":"Welcome to SO. If the chromedriver file is located in \/usr\/bin\/google-chrome folder then your my_path should be my_path = \"\/usr\/bin\/google-chrome\/chromedriver\"","Q_Score":0,"Tags":"python,linux,python-3.x,google-chrome,selenium-webdriver","A_Id":55885757,"CreationDate":"2019-04-27T23:19:00.000","Title":"Chrome opens succesfully from terminal but I get webdriver common exception message","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I made python script using Selenium webdriver. I use that script to login onto system of my faculty and looking on posts. Script refresh page every 60s to see if there is any new post, if there is new post I would recieve Telegram message on my phone. 
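As a tiny illustration of the Keys answer above, the sketch below types a query into a search box and presses ENTER; the site, element locator and driver setup are assumptions, not part of the original answer.

```python
# Keyboard emulation with selenium.webdriver.common.keys.Keys
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()                      # assumes chromedriver is on PATH
driver.get("https://www.google.com")

search_box = driver.find_element(By.NAME, "q")   # locate the search input
search_box.send_keys("selenium keys example")    # type text
search_box.send_keys(Keys.RETURN)                # press the ENTER key

driver.quit()
```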
There is no problem with that, problem is that I have to run that script on my laptop 24h to get notifications, which is not possible since I carry it around. \nQuestion is, how can I run that script 24h? Is there any better solution to monitor that page and send messages if there is new post? \nI tried pythonanywhere but I don't have too much experience in that field so I didn't manage to make it work since always some module is missing...","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":905,"Q_Id":55889195,"Users Score":1,"Answer":"I'd just use a Windows virtual machine from AWS\/Microsoft Azure\/Google\/etc. This may be a bit overdoing it in your situation, but you could have one VM connected to another VM that'd be running your script, if it's something that requires an always-on user interface and can't be run in a Linux headless browser. \nInstalling something like AppRobotic personal macro edition on any of the above cloud services would work great. The pro version that's on AWS Marketplace would also work great, but it'd be overdoing it in your use case.","Q_Score":1,"Tags":"python,selenium,webautomation","A_Id":55889276,"CreationDate":"2019-04-28T10:09:00.000","Title":"How to run python script with selenium all the time but not on my PC","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I made python script using Selenium webdriver. I use that script to login onto system of my faculty and looking on posts. Script refresh page every 60s to see if there is any new post, if there is new post I would recieve Telegram message on my phone. There is no problem with that, problem is that I have to run that script on my laptop 24h to get notifications, which is not possible since I carry it around. \nQuestion is, how can I run that script 24h? Is there any better solution to monitor that page and send messages if there is new post? \nI tried pythonanywhere but I don't have too much experience in that field so I didn't manage to make it work since always some module is missing...","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":905,"Q_Id":55889195,"Users Score":1,"Answer":"Welcome !\nThe best way for you would be to use a server so you don't have to run it locally on your computer.\nYou can use an online VPS on which you install your software or you may even try to run it locally on something like a Raspberry Pi.\nIn both case, you will have to deal with linux commands.\nGood luck","Q_Score":1,"Tags":"python,selenium,webautomation","A_Id":55889236,"CreationDate":"2019-04-28T10:09:00.000","Title":"How to run python script with selenium all the time but not on my PC","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a list of approx. 52 websites which lead to about approx. 150 webpages that i require scraping on. Based on my ignorance and lack of research i started building crawlers per webpage which is starting to become to difficult to complete and maintain. \nBased on my analysis thus far I already know what information i want to scrape per webpage and it is clear that these websites have their own structure. 
On the plus side i noticed that each website has some commonalities in their web structure among their webpages.\nMy million dollar question, is there a single technique or single web crawler that i can use to scrape these sites? I already know the information that I want, these sites are rarely updated in terms of their web structure and most of these sites have documents that need to be downloaded. \nAlternatively, is there a better solution to use that will reduce the amount of web crawlers that I need to build? additionally, these web crawlers will only be used to download the new information of the websites that i am aiming them at.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":76,"Q_Id":55900736,"Users Score":0,"Answer":"[\u2026] i started building crawlers per webpage which is starting to become to difficult to complete and maintain [\u2026] it is clear that these websites have their own structure. [\u2026] these sites are rarely updated in terms of their web structure [\u2026]\n\nIf websites have different structures, having separate spiders makes sense, and should make maintenance easier in the long term.\nYou say completing new spiders (I assume you mean developing them, not crawling or something else) is becoming difficult, however if they are similar to an existing spider, you can simply copy-and-paste the most similar existing spider, and make only the necessary changes.\nMaintenance should be easiest with separate spiders for different websites. If a single website changes, you can fix the spider for that website. If you have a spider for multiple websites, and only one of them changes, you need to make sure that your changes for the modified website do not break the rest of the websites, which can be a nightmare.\nAlso, since you say website structures do not change often, maintenance should not be that hard in general.\nIf you notice you are repeating a lot of code, you might be able to extract some shared code into a spider middleware, a downloader middleware, an extension, an item loader, or even a base spider class shared by two or more spiders. But I would not try to use a single Spider subclass to scrape multiple different websites that are likely to evolve separately.","Q_Score":0,"Tags":"python,scrapy,web-crawler","A_Id":55919394,"CreationDate":"2019-04-29T09:44:00.000","Title":"Using a Single Web Crawler to Scrape Multiple websites in a predefined format with attachments?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a list of approx. 52 websites which lead to about approx. 150 webpages that i require scraping on. Based on my ignorance and lack of research i started building crawlers per webpage which is starting to become to difficult to complete and maintain. \nBased on my analysis thus far I already know what information i want to scrape per webpage and it is clear that these websites have their own structure. On the plus side i noticed that each website has some commonalities in their web structure among their webpages.\nMy million dollar question, is there a single technique or single web crawler that i can use to scrape these sites? I already know the information that I want, these sites are rarely updated in terms of their web structure and most of these sites have documents that need to be downloaded. 
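To illustrate the "shared base spider class" suggestion from the crawler answer above, here is a hedged Scrapy sketch in which common item-building logic lives in one base class and each website keeps its own small spider. All selectors and URLs are invented placeholders, not taken from the poster's sites.

```python
import scrapy

class BaseDocumentSpider(scrapy.Spider):
    """Shared helpers for all site-specific spiders."""

    def build_item(self, response, title, file_url):
        return {
            "source": self.name,
            "page": response.url,
            "title": title,
            "file_url": response.urljoin(file_url),
        }

class SiteASpider(BaseDocumentSpider):
    name = "site_a"
    start_urls = ["https://site-a.example/documents"]

    def parse(self, response):
        for row in response.css("div.document"):
            yield self.build_item(
                response,
                title=row.css("h3::text").get(),
                file_url=row.css("a::attr(href)").get(),
            )

class SiteBSpider(BaseDocumentSpider):
    name = "site_b"
    start_urls = ["https://site-b.example/downloads"]

    def parse(self, response):
        for link in response.css("ul.files li a"):
            yield self.build_item(
                response,
                title=link.css("::text").get(),
                file_url=link.attrib["href"],
            )
```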
\nAlternatively, is there a better solution to use that will reduce the amount of web crawlers that I need to build? additionally, these web crawlers will only be used to download the new information of the websites that i am aiming them at.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":76,"Q_Id":55900736,"Users Score":0,"Answer":"I suggest you crawl specific tags such as body, h1,h2,h3,h4,h5, h6,p and... for each links. You can gather all p tags and append them into a specific link. It can be used for each tags you want to crawl them. Also, you can append related links of tags to your database.","Q_Score":0,"Tags":"python,scrapy,web-crawler","A_Id":55920004,"CreationDate":"2019-04-29T09:44:00.000","Title":"Using a Single Web Crawler to Scrape Multiple websites in a predefined format with attachments?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to write a simple Python application to manage muted words for Twitter. The interface in the browser and the application are cumbersome to use.\nLooking through the API documentation, it seems that it is possible to create and destroy muted users but not words. Am I missing something, or is this simply not possible?\nI have been trying the python-twitter library but the functionality is missing there too. I realise this is probably an API limitation as opposed to the library.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":404,"Q_Id":55916527,"Users Score":3,"Answer":"No, this is not possible. The API only has methods for muting users, not words.","Q_Score":4,"Tags":"python,twitter","A_Id":55918277,"CreationDate":"2019-04-30T08:11:00.000","Title":"Is there a way to create and destroy muted words using the Twitter API","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to run a python program in the online IDE SourceLair. I've written a line of code that simply prints hello, but I am embarrassed to say I can't figure out how to RUN the program. \nI have the console, web server, and terminal available on the IDE already pulled up. I just don't know how to start the program. I've tried it on Mac OSX and Chrome OS, and neither work. \nI don't know if anyone has experience with this IDE, but I can hope. Thanks!!","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":63,"Q_Id":55929577,"Users Score":1,"Answer":"Can I ask you why you are using SourceLair?\nWell I just figured it out in about 2 mins....its the same as using any other editor for python.\nAll you have to do is to run it in the terminal. python (nameoffile).py","Q_Score":0,"Tags":"python,ide","A_Id":55929861,"CreationDate":"2019-04-30T22:47:00.000","Title":"How to run a python program using sourcelair?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Strugging on this problem for a while so finally asking for some help from the experts.\nLanguage: python \nThe problem\/setup:\nI have many clients, client[n], client[n] .. 
etc\nI have many servers, server[n], server[n] .. etc\nEach server can plugin to 5 external ws connections. At any time I may need to open [x] ws connections; maybe 2, maybe 32, the total ws connections i need, thus servers needed, is dynamic... \nEach client maybe connecting 1 ws connection from server[1], 1 ws connection from server[2] .. .etc \nHow I imagine the flow working \n\nNew client[1] is loaded, needing 2 ws feeds\nNew client[1] broadcasts [xpub\/xsub ?] message to all servers saying, 'hey, I need these 2 ws connections, who has them?' \nServer[1] with the ws connections reply to client[1] (and only that client) - 'I got what youre looking for, talk to me' \nclient[1] engages in req\/reply communication with server[1] so that client[1] can utilize server[1]'s ws connection to make queries against it, eg, 'hey, server[1] with access to ws[1], can you request [x]' .. server[1] replies to client[1] 'heres the reply from the ws request you made' \n\ntldr\n\nclients will be having multiple req\/rep with many servers\nservers will be dealing with many clients \nclient need to broadcast\/find appropriate clients to be messaging with","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":375,"Q_Id":55934405,"Users Score":0,"Answer":"I'll focus on the discovery problem. How do clients know which servers are available and which ws connections each one has?\nOne approach is to add a third type of node, call it broker. There is a single broker, and all clients and servers know how to reach it. (Eg, all clients and servers are configured with the broker's IP or hostname.)\nWhen a server starts it registers itself with the broker: \"I have ws feeds x,y,z and accept requests on 1.2.3.5:1234\". The broker tracks this state, maybe in a hash table.\nWhen a client needs ws feed y, it first contacts the broker: \"Which server has ws feed y?\" If the broker knows who has feed y, it gives the client the server's IP and port. The client can then contact the server directly. (If multiple servers can access feed y, the broker could return a list of servers instead of a single one.)\nIf servers run for a \"long\" time, clients can cache the \"server X has feed y\" information and only talk to the broker when they need to access a new feed.\nWith this design, clients use the broker to find servers of interest. Servers don't have to know anything about clients at all. And the \"real\" traffic (clients accessing feeds via servers) is still done directly between clients and servers - no broker involved.\nHTH. And for the record I am definitely not an expert.","Q_Score":0,"Tags":"python,sockets,message-queue,zeromq,pyzmq","A_Id":55968720,"CreationDate":"2019-05-01T09:49:00.000","Title":"python zmq many client to many server discovery message patterns","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a question about modbus tcp request. There are two options in python for modbus tcp library. Auto_open and Auto_close, these are keep tcp connection open and open\/close tcp connection for each request. Which one should I use? Which one is beneficial for Modbus Tcp devices? What is your suggestions? 
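The broker-based discovery idea for the ZeroMQ question above can be sketched roughly as below; the message format, port and field names are assumptions, not an established protocol.

```python
# Rough sketch of a registry broker: servers register their feeds,
# clients ask which server holds a given feed.
import zmq

def run_broker(bind_addr="tcp://*:5555"):
    ctx = zmq.Context.instance()
    sock = ctx.socket(zmq.REP)
    sock.bind(bind_addr)

    feeds = {}   # feed name -> "ip:port" of the server holding it

    while True:
        msg = sock.recv_json()
        if msg["op"] == "register":        # server: "I serve feeds x,y at addr"
            for feed in msg["feeds"]:
                feeds[feed] = msg["addr"]
            sock.send_json({"ok": True})
        elif msg["op"] == "lookup":        # client: "who has feed y?"
            sock.send_json({"addr": feeds.get(msg["feed"])})
        else:
            sock.send_json({"error": "unknown op"})

if __name__ == "__main__":
    run_broker()
```

A client would connect a REQ socket to the broker, send `{"op": "lookup", "feed": "y"}`, then talk to the returned server address directly, so the feed traffic itself never passes through the broker.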
Thank you.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":95,"Q_Id":55934968,"Users Score":0,"Answer":"it is better to use Auto_close method and close port after each data transmission\nthis help you to connect more than one mocbus\/tcp devices on same port(502).and the connection should be closed when you haven't data for transmitting","Q_Score":0,"Tags":"python,networking,tcp,modbus-tcp","A_Id":56304206,"CreationDate":"2019-05-01T10:43:00.000","Title":"About Modbus TCP request","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using Python+Selenium to scrape data from a site which lists companies' info.\nFor each company I need 2 data points - email and url.\nThe problem is - for some companies email is not indicated and if I separately get a list of urls and emails I won't be able to fit the pairs (list of emails will be shorter than list of url and I won't know which of the emails is missing).\nSo I thought maybe there is a way to get root elements of each of the companies' blocks (say, it is div with class \"provider\") and then search inside each of them for email and url.\nIs it possible and if yes - how?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":306,"Q_Id":55943694,"Users Score":1,"Answer":"Ok, I found the solution.\nFirst you collect all the blocks with fields you need to get. Example:\nproviders = browser.find_elements_by_class_name('provider-row')\nAnd then you use find_elements_by_xpath() method with locator starting with \".\/\/\" which means search inside a specific element. Example:\nproviders[0].find_elements_by_xpath(\".\/\/li[@class='website-link website-link-a']\/a[@class='sl-ext']\")","Q_Score":1,"Tags":"python,selenium","A_Id":55944216,"CreationDate":"2019-05-01T22:54:00.000","Title":"Selenium+Python. How to locate several elements within a specific element?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm new to kubernetes and I'm setting up some test environments to do some experiments with in minikube. So I have two pods running python, connected trough a service (I verified that they are selected trough the same service). \nI want to open a socket connection between the two. The container ip is equal to the IP specified in the service. On Pod1 I create the socket and connect to localhost and the port belonging to the container. \nI'm not sure whether I should actually connect to localhost or the name of the service. Then on the other pod (Pod2) I connect to the name of the service (if I understand correctly the service should exposes the IP with the name that corresponds to the service name). \nPod2 refuses to connect to Pod1 in the aforementioned configuration. \nIf I let Pod1 create a socket with as IP address the service name I get the \"Cannot assign requested address\" on socket creation.\nSo I think I'm just choosing the IP address wrongly.\nThanks in advance","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1269,"Q_Id":55977340,"Users Score":0,"Answer":"In Kubernetes, it is difficult to have always the IP address of a given POD. 
A pretty solution consists in making your application binding to 0.0.0.0 (this means you application will listen on all local interfaces including the Pod primary interface).\nThen you can expose you application running in that Pod using a Kubernetes service.","Q_Score":0,"Tags":"python,docker,sockets,kubernetes","A_Id":67442922,"CreationDate":"2019-05-03T21:32:00.000","Title":"What IP to bind to python socket in Kubernetes?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm working on a piece of code that will be able to hit multiple web APIs (hardware that has APIs telling about the machine status) not blocking when one is waiting on the other to wait for the response, once the response of one arrives it'll be emitted on a websocket. One requirement is not to kill the APIs so hit them let's say once per 5 seconds as long as the main process is running.\nThe important part I'm struggling with is how to even approach it. \nWhat I did to this point is that the main process is spawning separate threads for different APIs and that thread is hitting the API emitting the response to the websocket time.sleep(5) and repeat. The main process is responsible to start new \"workers\" and kill ones that are not needed anymore also to restart ones that should be working but are not due to i.e. an exception.\nI have no idea if multi-threading is the way to go here - let's say I aim to \"crawl\" through 300 APIs.\nIs spawning long lived workers the right way to achieve this? Should those be processes instead? Should I maybe have the main process coordinate executing short-living threads that will do the API call and die and do that every 5 seconds per API (that seems way worse to maintain)? If the last option, then how to handle cases where a response takes more than 5 seconds to arrive?\nSome people are now talking about Python's asyncio like it's the golden solution for all issues but I don't understand how it could fit into my problem.\nCan someone guide me to the right direction?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":471,"Q_Id":55994649,"Users Score":2,"Answer":"Let me rephrase this and tell me whether I'm right:\n\nI want to visit ~300 APIs frequently such that each is hit approximately every 5 seconds. How do I approach this and what worker\/process management should I use?\n\nThere's basically two different approaches:\n\nSpawn a thread for each API that is currently being watched (i.e. touched frequently) -- only feasible if at any given time only a subset of your total number of possible APIs is being watched.\nSet up a worker pool where all workers consume the same queue and have a management process fill the queue according to the time restrictions -- probably better when you always want to watch all possible APIs.\n\nEdit after your first comment:\nYou know the number of APIs you want to watch, so the queue's length should never grow larger than that number. Also, you can scan the queue in your main process frequently and check whether an API address you want to add already is in there and don't append it another time.\nTo avoid hitting APIs too frequent, you can add target timestamps along with the API address to the queue (e.g. as a tuple) and have the worker wait until that time is reached before firing the query to that API. 
This will slow down the consumption of your entire queue but will maintain a minimum delay between to hits of the same API. If you choose to do so, you just have to make sure that (a) the API requests always respond in a reasonable time, and (b) that all API addresses are added in a round-robin manner to the queue.","Q_Score":0,"Tags":"python,multithreading,multiprocessing,python-asyncio","A_Id":55995044,"CreationDate":"2019-05-05T17:38:00.000","Title":"Spawning workers in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for a way to download surveys that are still open on Qualtrics so that I can create a report on how many surveys are completed and how many are still in progress. I was able to follow their API documentation to download the completed surveys to a csv file but I couldn't find a way to do the same for the In Progress surveys. Thanks in advance for your help.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":156,"Q_Id":55997128,"Users Score":0,"Answer":"Not through the API. You can do it manually through the Qualtrics interface.\nIf you need to use the API and the survey is invite only, an alternative would be to download the distribution history for all the distributions. That will tell you the status of each invitee.","Q_Score":0,"Tags":"python,qualtrics","A_Id":56006150,"CreationDate":"2019-05-05T22:56:00.000","Title":"Is there a way to download \"Responses In Progress\" survey from Qualtrics?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have to execute a .py file (python file) saved on my local machine (say @C:\\User\\abc\\dosomething.py) from postman from its pre-request section.\nEssentially I need to call the .py file from javascript code but from postman only.\nHow do I do that ?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2506,"Q_Id":56002752,"Users Score":0,"Answer":"Unfortunately this is not possible. The Pre-request and test-scripts are executed in a sandbox environment.","Q_Score":1,"Tags":"python-2.7,postman","A_Id":56004157,"CreationDate":"2019-05-06T09:52:00.000","Title":"How to execute python script from postman?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a SOAP url , while running the url through browser I am getting a wsdl response.But when I am trying to call a method in the response using the required parameter list, and it is showing \"ARERR [149] A user name must be supplied in the control record\".I tried using PHP as well as python but I am getting the same error.\nI searched this error and got the information like this : \"The name field of the ARControlStruct parameter is empty. 
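A rough sketch of the worker-pool-plus-queue approach described in the answer above: a fixed pool of threads consumes `(due_time, url)` items, waits until each item is due, hits the API, and re-enqueues it five seconds later. The URLs are placeholders and the websocket emit is only hinted at in a comment.

```python
import queue
import threading
import time
import urllib.request

POLL_INTERVAL = 5          # minimum seconds between hits of the same API
API_URLS = ["https://api-1.example/status", "https://api-2.example/status"]

work = queue.Queue()

def worker():
    while True:
        due, url = work.get()
        delay = due - time.time()
        if delay > 0:
            time.sleep(delay)              # don't hit this API too early
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                print(url, resp.status)
                # here you would emit resp's payload on your websocket instead
        except Exception as exc:
            print("failed:", url, exc)
        work.put((time.time() + POLL_INTERVAL, url))   # schedule the next hit
        work.task_done()

for _ in range(4):                          # small fixed worker pool
    threading.Thread(target=worker, daemon=True).start()

for url in API_URLS:
    work.put((time.time(), url))            # initial round-robin fill

time.sleep(60)                              # keep the demo process alive briefly
```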
Supply the name of an AR System user in this field.\".But nowhere I saw how to supply the user name parameter.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1235,"Q_Id":56016625,"Users Score":0,"Answer":"I got the solution for this problem.Following are the steps I followed to solve the issue (I have used \"zeep\" a 3rd party module to solve this):\n\nRun the following command to understand WSDL:\n\npython -mzeep wsdl_url\n\nSearch for string \"Service:\". Below that we can see our operation name\nFor my operation I found following entry:\n\nMyOperation(parameters..., _soapheaders={parameters: ns0:AuthenticationInfo})\nwhich clearly communicates that, I have to pass parameters and an auth param using kwargs \"_soapheaders\"\nWith that I came to know that I have to pass my authentication element as _soapheaders argument to MyOperation function.\n\nCreated Auth Element:\n\nauth_ele = client.get_element('ns0:AuthenticationInfo')\nauth = auth_ele(userName='me', password='mypwd')\n\nPassed the auth to my Operation:\n\ncleint.service.MyOperation('parameters..', _soapheaders=[auth])","Q_Score":0,"Tags":"php,python,web-services,soap,wsdl","A_Id":56127550,"CreationDate":"2019-05-07T06:21:00.000","Title":"Getting ARERR 149 A user name must be supplied in the control record","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am running a Python Script on the Raspberry Pi in order to get measured data out of a Smart Plug. In my script I need to write the IP-Address of the Smart Plug so that I can retrieve the data it was measured. The problem is that I need to be able to take the Smart Plug to different places without having to hard code its new local IP-Address every time. \nI have the MAC Address so I am hoping there is an \"easy\" way to add a couple lines of code and retrieve the local IP-Address from the MAC (?) in the Python Script. Thanks!","AnswerCount":3,"Available Count":1,"Score":-0.1325487884,"is_accepted":false,"ViewCount":7632,"Q_Id":56022026,"Users Score":-2,"Answer":"The local ip address is not based on the MAC address. The router uses DHCP to give the devises an ip address. So there is no way to tell the router which IP he must give you other than changing the settings. 
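A slightly fuller, runnable version of the zeep recipe given in the SOAP answer above; the WSDL URL, operation name and credentials are placeholders, and the `ns0` prefix must match whatever `python -mzeep <wsdl_url>` reports for your service.

```python
from zeep import Client

client = Client("https://example.com/arsys/WSDL/public/myform?wsdl")  # placeholder WSDL

# Build the SOAP header element that carries the AR System user name.
AuthInfo = client.get_element("ns0:AuthenticationInfo")
auth = AuthInfo(userName="Demo", password="secret")

# Pass it via _soapheaders so the control record contains a user name.
result = client.service.MyOperation("param1", "param2", _soapheaders=[auth])
print(result)
```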
\nI would rather try to broadcast the ip and on the raspberry listen on the broadcast channel for the message you are looking for.","Q_Score":2,"Tags":"python,ip-address,hostname,mac-address","A_Id":56022228,"CreationDate":"2019-05-07T11:53:00.000","Title":"Get local IP Address from a known MAC Address in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I Have a python script that monitors a website, and I want it to send me a notification when some particular change happens to the website.\nMy question is how can I make that Python script runs for ever in some place else (Not my machine, because I want it to send me a notification even when my machine is off)?\nI have thought about RDP, but I wanted to have your opinions also.\n(PS: FREE Service if it's possible, otherwise the lowest cost)\nThank you!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":724,"Q_Id":56056794,"Users Score":1,"Answer":"I would suggest you to setup AWS EC2 instance with whatever OS you want. \nFor beginner, you can get 750 hours of usage for free where you can run your script on.","Q_Score":0,"Tags":"python,web-services,monitoring,network-monitoring","A_Id":56057837,"CreationDate":"2019-05-09T09:52:00.000","Title":"How to make a python script run forever online?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to make a program that can be able to make a search request on most websites such as YouTube, ESPN, My university course timetable etc... \nI have looked online for various solutions but many of them point to simply adding your search query at the end of the url you are \"getting\", but that doesn't seem to work with all websites some of them don't update their URL's when you manually make a search, while many others might give each and every URL a unique 'id'. Would it be possible to scrape a search bar from any website and then specifying a search query and entering it? Is there a function for that?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":557,"Q_Id":56075283,"Users Score":0,"Answer":"It's possible to use a text-based web browser and automate the search with a script. Then you can download the site you get from this browser and scrape it with BeautifulSoup or something else.","Q_Score":0,"Tags":"python,beautifulsoup,request,python-requests","A_Id":56075364,"CreationDate":"2019-05-10T10:02:00.000","Title":"Is there any way to scrape a search box using BeautifulSoup\/requests and then search and refresh?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to make a program that can be able to make a search request on most websites such as YouTube, ESPN, My university course timetable etc... 
\nI have looked online for various solutions but many of them point to simply adding your search query at the end of the url you are \"getting\", but that doesn't seem to work with all websites some of them don't update their URL's when you manually make a search, while many others might give each and every URL a unique 'id'. Would it be possible to scrape a search bar from any website and then specifying a search query and entering it? Is there a function for that?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":557,"Q_Id":56075283,"Users Score":1,"Answer":"You need to Use Selenium Instance to do that. You can not achieve it using BeautifulSoup or requests.","Q_Score":0,"Tags":"python,beautifulsoup,request,python-requests","A_Id":56075496,"CreationDate":"2019-05-10T10:02:00.000","Title":"Is there any way to scrape a search box using BeautifulSoup\/requests and then search and refresh?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm currently working on a project that involves crawling data from various (about 50 websites). There is one website which has multiple pages we need to scrape, but this website doesn't allow multiple session. (the website is authenticated). \nIs there a way to pause the one spider until the other one is finished?\nI've been researching this for the past day. I found some ways you can pause, but it seems these are only working for the whole CrawlerProcess.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":56,"Q_Id":56114106,"Users Score":0,"Answer":"The solution was actually fairly easy. Each spider has a unique identifying code. When setting the CrawlerProcess up, the program checks if the unique code is the same as the one that needs pausing, and if so, it passes the spider instance to the spider that needs to run first, which will then pause it with self.custom_settings['XX_SPIDER'].crawler.pause() and when done, in the closed() function, will unpause it with self.custom_settings['XX_SPIDER'].crawler.unpause()","Q_Score":0,"Tags":"python,scrapy","A_Id":56518384,"CreationDate":"2019-05-13T14:07:00.000","Title":"Pausing individual spiders in a CrawlerProcess","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have two computers with internet connection. They both have public IPs and they are NATed. What I want is to send a variable from PC A to PC B and close the connection. \nI have thought of two approaches for this:\n1) Using sockets. PC B will have listen to a connection from PC A. Then, when the variable will be sent, the connection will be closed. The problem is that, the sockets will not communicate, because I have to forward the traffic from my public IP to PC B.\n2) An out of the box idea, is to have the variable broadcasted online somewhere. I mean making a public IP hold the variable in HTML and then the PC would GET the IP from and get the variable. The problem is, how do I make that variable accessible over the internet?\nAny ideas would be much appreciated.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":233,"Q_Id":56129037,"Users Score":2,"Answer":"Figured a solution out. 
I make a dummy server using flask and I hosted it at pythonanywhere.com for free. The variables are posted to the server from PC A and then, PC B uses the GET method to get them locally.","Q_Score":1,"Tags":"python,python-3.x,sockets,networking","A_Id":56148950,"CreationDate":"2019-05-14T11:09:00.000","Title":"Send variable between PCs over the internet using python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I built a website some time ago with Flask. Now all of a sudden when I try to navigate there I get the following:\nNET::ERR_CERT_COMMON_NAME_INVALID\nYour connection is not private\nAttackers might be trying to steal your information from www.mysite.org (for example, passwords, messages, or credit cards). Learn more\nDoes anyone know what's going on?","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":5243,"Q_Id":56138678,"Users Score":2,"Answer":"The error means: The host name you use in the web browser does not match the one used in the certificate.\nIf your server has multiple DNS entries you need to include all of into the certificate to be able to use them with https. If you access the server using it's IP address like https:\/\/10.1.2.3 then the IP address also have to present in the certificate (of course this only makes sense if you have a static IP address that never changes).","Q_Score":3,"Tags":"python,flask,error-handling","A_Id":68351111,"CreationDate":"2019-05-14T21:13:00.000","Title":"NET::ERR_CERT_COMMON_NAME_INVALID - Error Message","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is there any python library that allows me to have python server that communicates with python client on a non web port and where arguments as well as results can be passed in native python data type and transformed by library into requisite types needed for transfer protocol","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":25,"Q_Id":56140650,"Users Score":0,"Answer":"You may be looking for Tornado TCP, The data structure can be passed by Json.","Q_Score":0,"Tags":"python,server","A_Id":56140720,"CreationDate":"2019-05-15T01:29:00.000","Title":"python library to create python server and client that can also handle data transfer over ports in native python types","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a problem. Last year I developed a Telegram Gateway that uses a socket and the Telethon library (installed through pip).\nThe problem is that this project is installed in an other pc and uses an old version of Telethon, foundamental to use thread (with the new Telethon version I can't use the thread with socket etc., it is changed). \nI need to install the same Telethon version in order to use the same gateway.\nIn the pc, if I run the command:\npip show telethon\nit shows the following message:\nYou are using pip version 8.1.1, however version 19.1.1 is available. 
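A minimal sketch of the accepted "dummy Flask server" approach above: PC A POSTs a value to a small hosted app and PC B GETs it back. The route layout and the in-memory store are assumptions.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
store = {}                      # lives only as long as the web app process

@app.route("/value/<name>", methods=["GET", "POST"])
def value(name):
    if request.method == "POST":
        store[name] = request.get_json(force=True)["value"]
        return jsonify(ok=True)
    return jsonify(value=store.get(name))

if __name__ == "__main__":
    app.run()
```

PC A would call something like `requests.post("https://yourapp.example/value/temp", json={"value": 42})` and PC B `requests.get("https://yourapp.example/value/temp").json()`; note that the in-memory dict is lost whenever the app restarts, so a small database or file would be more robust.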
You should consider upgrading via etc......\nThen, I need to install the same version of telethon (8.1.1).\nBut if I try to install it in an other pc, through this command:\npip3 install telethon==8.1.1\nit prints the following red error message:\nNo matching distribution found for telethon==8.1.1\nWhy?\nI really need to use the same version of telethon, in order to run the old gateway.\nThank you very much!!!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":653,"Q_Id":56194596,"Users Score":1,"Answer":"That line is a message printed by pip telling you to consider upgrading pip itself. The version number you are looking for is printed below it:\n\n$ pip3 show telethon\nYou are using pip version 8.1.1, however version 19.0.3 is available.\nYou should consider upgrading via the 'pip install --upgrade pip' command.\nName: Telethon\nVersion: 1.6.2\nSummary: Full-featured Telegram client library for Python 3\nHome-page: https:\/\/github.com\/LonamiWebs\/Telethon\nAuthor: Lonami Exo\nAuthor-email: totufals@hotmail.com\nLicense: MIT\nLocation: \/usr\/local\/lib\/python3.7\/site-packages\nRequires: rsa, pyaes\nRequired-by:","Q_Score":0,"Tags":"python,pip,telethon","A_Id":56194768,"CreationDate":"2019-05-17T23:11:00.000","Title":"Old version of Telethon","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"My Telegram bot code was working fine for weeks and I didn't changed anything today suddenly I got [SSL: CERTIFICATE_VERIFY_FAILED] error and my bot code no longer working in my PC.\nI use Ubuntu 18.04 and I'm usng telepot library.\nWhat is wrong and how to fix it?\nEdit: I'm using getMe method and I don't know where is the certificate and how to renew it and I didn't import requests in my bot code. I'm using telepot API by importing telepot in my code.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1148,"Q_Id":56220056,"Users Score":1,"Answer":"Probably your certificate expired, that is why it worked fine earlier. Just renew it and all should be good. If you're using requests under the hood you can just pass verify=False to the post or get method but that is unwise.\nThe renew procedure depends on from where do you get your certificate. If your using letsencrypt for example with certbot. Issuing sudo certbot renew command from shell will suffice.","Q_Score":2,"Tags":"python,python-3.x,ssl,telegram-bot,telepot","A_Id":56220074,"CreationDate":"2019-05-20T11:26:00.000","Title":"\"SSL: CERTIFICATE_VERIFY_FAILED\" error in my telegram bot","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How do I add an additional CA (certificate authority) to the trust store used by my Python3 AWS Lambda function?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":6930,"Q_Id":56225178,"Users Score":5,"Answer":"If you only need a single CA, then get your crt file and encode it into a pem using the following command in linux:\n\nopenssl x509 -text -in \"{your CA}.crt\" > cacert.pem\n\nIf you need to add CA's to the default CA bundle, then copy python3.8\/site-packages\/certifi\/cacert.pem to your lambda folder. 
Then run this command for each crt:\n\nopenssl x509 -text -in \"{your CA}.crt\" >> cacert.pem\n\nAfter creating the pem file, deploy your lambda with the REQUESTS_CA_BUNDLE environment variable set to \/var\/task\/cacert.pem. \n\/var\/task is where AWS Lambda extracts your zipped up code to.","Q_Score":6,"Tags":"python,python-3.x,amazon-web-services,aws-lambda,ssl-certificate","A_Id":59638101,"CreationDate":"2019-05-20T16:51:00.000","Title":"Python AWS Lambda Certificates","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I built a crawler that gets the product information of a list of products entered by the user. Sometimes, the crawler freezes, especially if the list of products is long and if the crawler runs on headless mode.\nThe bug seems random and is not reproducible, which makes me think it is caused by the resource utilization of the website being crawled. \nSince this is a non-reproducible bug, I don't think I can fix it, but is there a way to detect that the crawler has frozen and try again?\nHere is some information about the crawler and the bug:\n\nThe crawler is built using Selenium and Python.\nThe bug occurs with different websites and products.\nThe bug occurs in the \"normal\" mode, but occurs more often in the headless mode.\n\nThanks!\nFelipe","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":239,"Q_Id":56247296,"Users Score":0,"Answer":"If the problem isn't related to the browser, it is because the code is busy on getting data in headless mode. If your code working in the normal mode instead of headless mode, you see only the working part.\nI assume you made a GUI. If it is so, you are trying to access GUI but the same program working on crawling. That's why GUI is freezing.\nYou can solve this by using the Threading library or any other multiprocessing method. This will allow you to run more than one process at the same time. So, you can freely use other functions on the GUI and crawl a website without freezing.","Q_Score":1,"Tags":"python,selenium,web-scraping,web-crawler,headless","A_Id":63178050,"CreationDate":"2019-05-21T22:57:00.000","Title":"Python-Selenium crawler freezes, especially in headless mode (non reproducible bug)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am connecting my slave via TCP\/IP, everything looks fine by using the Wireshark software I can validate that the CRC checksum always valid \u201cgood\u201d, but I am wondering how I can corrupt the CRC checksum so I can see like checksum \u201cInvalid\u201d. Any suggestion how can I get this done maybe python code or any other way if possible.\nThank you all \nTariq","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":406,"Q_Id":56268062,"Users Score":1,"Answer":"I think you use a library that computes CRC. 
You can form the Modbus packet without it if you want to simulate a bad CRC condition","Q_Score":1,"Tags":"python,tcp,checksum,crc,modbus-tcp","A_Id":56268121,"CreationDate":"2019-05-23T04:26:00.000","Title":"How corrupt checksum over TCP\/IP","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The user queries the Dialogflow agent through Google Assistant using voice commands. I want to add some data to that user query. Can we change the request parameters sent to the agent? If yes, where? The code for the Google Assistant library is in Python.\nI am working with Python 3.5 on a Raspberry Pi 3. I tried modifying the event.py file located at google\/library\/assistant\/, but I got the response from Google Assistant before the ON_RECOGNISING_SPEECH_FINISHED event.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":66,"Q_Id":56269802,"Users Score":1,"Answer":"No, if you're using the Google Assistant SDK, you interact with the agent as if it was any other Google Assistant surface. There is no way to add additional context that is exclusive to your device.","Q_Score":0,"Tags":"python-3.x,raspberry-pi3,dialogflow-es,google-assistant-sdk","A_Id":56278248,"CreationDate":"2019-05-23T07:08:00.000","Title":"change request format sent to dialogflow agent","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I noticed that the same transaction had a different transaction ID the second time I pulled it. Why is this the case? Is it because pending transactions have different transaction IDs than those same transactions once posted? Does anyone have recommendations for how I can identify unique transactions if the trx IDs are in fact changing?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":415,"Q_Id":56282490,"Users Score":0,"Answer":"Turns out that the transaction ID often does change.
When a transaction is posted (stops pending), the original transaction ID becomes the pending transaction ID, and a new transaction ID is assigned.","Q_Score":4,"Tags":"javascript,python,api,banking,plaid","A_Id":57262467,"CreationDate":"2019-05-23T20:27:00.000","Title":"How to identify Plaid transactions if transaction ID's change","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"With graph = nx.node_link_graph(json.loads(\"json_string\")) it is possible to load a graph that is represented in JSON format.\nNow my problem is that I already have a networkx graph in my program and only want to add JSON formated components dynamically during runtime.\nFor example somewhere the string ' {\"source\": 1, \"target\": 2, \"weight\": 5.5} ' is created and should then result in a new edge between node 1 and 2.\nWhat is the best way to realise this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":284,"Q_Id":56294819,"Users Score":0,"Answer":"The simpliest way you can add the new edge to the graph G if you have a dict like this:\nd = {\"source\": 1, \"target\": 2, \"weight\": 5.5} (you can convert it from string to dict with dict(s) or json.loads(s)) is:\nG.add_weighted_edges_from([(d['source'], d['target'], d['weight'])])","Q_Score":0,"Tags":"python,json,networkx","A_Id":56295146,"CreationDate":"2019-05-24T14:37:00.000","Title":"Is there a way to add single nodes\/edges in JSON format to networkx graph?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently building a media website using node js. I would like to be able to control Kodi, which is installed of the server computer, remotely from the website browser.How would I go about doing this? My first idea was \n\nto simply see if I could somehow pipe the entire Kodi GUI into the\nbrowser such that the full program stays on the server\nand just the GUI is piped to the browser, sending commands back to\nthe server;\n\nhowever, I could find little documentation on how to do that.\nSecond, I thought of making a script (eg Python) that would be able to control Kodi and just interface node js with the Python script, but again, \nI could find little documentation on that.\n Any help would be much appreciated. \nThank You!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":93,"Q_Id":56306073,"Users Score":0,"Answer":"Can't you just go to settings -> services -> control and then the 'remote control via http' settings? I use this to login to my local ip e.g. 192.168.1.150:8080 (you can set the port on this page) from my browser and I can do anything from there","Q_Score":0,"Tags":"python,node.js,kodi","A_Id":57015259,"CreationDate":"2019-05-25T15:12:00.000","Title":"Controlling Kodi from Browser","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to return some items from inside a spider before starting to parse requests. 
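To make the networkx answer above runnable end to end, here is a small sketch using `add_edge` with the JSON string from the question; it is equivalent to the `add_weighted_edges_from` call shown in the answer.

```python
import json
import networkx as nx

G = nx.Graph()
G.add_nodes_from([1, 2, 3])          # the graph that already exists at runtime

d = json.loads('{"source": 1, "target": 2, "weight": 5.5}')
G.add_edge(d["source"], d["target"], weight=d["weight"])

print(G.edges(data=True))            # one edge 1-2 carrying weight 5.5
```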
Because I need to make sure some parent items exist in database before parsing child items.\nI now yield them from the parse method first thing, and this seem to work fine. But I was wondering if there is a better way to do this?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":67,"Q_Id":56315488,"Users Score":1,"Answer":"Instead of yielding the items, write them into the database directly on the constructor of the pipeline where you add regular items to the database.","Q_Score":0,"Tags":"python,scrapy","A_Id":56322104,"CreationDate":"2019-05-26T16:42:00.000","Title":"Scrapy returning items within spider before first request\/parse","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Did anybody use to write services, connected with ITSMChangeManagement of OTRS? Is there any API to this thing? I need smth. like GenericTicketConnector. Thank you.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":63,"Q_Id":56339794,"Users Score":1,"Answer":"No, there is nothing with ITSMChangeManagement. Only Tickets, FAQ and ConfigItems can be used with the GenericInterface of the Community Edition","Q_Score":0,"Tags":"python,api,otrs,itsm","A_Id":56340530,"CreationDate":"2019-05-28T10:08:00.000","Title":"OTRS ITSMChange API service","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm currently trying to establish a connection between a chrome-extension and a NativeMessagingHost. Everything works fine on Windows, but it won't do on Linux (either arch, kali or ubuntu). \nHow it fails: \n\nCan't find manifest for native messaging host my_host.json\n\nMy current state:\n\nhave my host manifest under ~\/.config\/google-chrome-beta\/NativeMessagingHosts\/my_host.json\n\nin there: \"name\":\"my_host.json\" and the path to my python script which handles the messages, also the unique chrome-extension code unter allowed_origins\n\nin manifest.json of my extension given the permission for nativeMessaging\nfurthermore in popup.js: var port = chrome.runtime.connectNative(\"my_host.json\"); and the same name used in sendNativeMessage\n\nWhat I tried so far:\n\ntried with google-chrome-beta and -stable\ndeleted file endings e.g. my_host.json to my_host, or removing the python ending\neasier paths where my python script lays\nalso tried to put my_host into \/Library\/Google\/Chrome\/NativeMessagingHosts which typically should be the Mac path - but hey, may it worked (not..)\n\nI get no error starting the application, copied the message from terminal whilst starting chrome with logging.\nI pretty much went through the example google gave, adjusting paths etc.\nAlso went through differnt posts, but it seems no one has the same problem, or no one tries to do something similar.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":257,"Q_Id":56361604,"Users Score":0,"Answer":"So I figured out where the problem was: \nI changed the name of my host from my_host.json to com.my.host.json and set \"name\" to com.my.host (where I firstly had the ending .json as well, which probably caused the problem too). 
Furthermore I changed var port = chrome.runtime.connectNative(\"my_host.json\") in my js-file to [...](\"com.my.host\") where the ending .json was not right too. Everything works now. Thank you for your suggestions!","Q_Score":0,"Tags":"javascript,python,google-chrome-extension","A_Id":56376376,"CreationDate":"2019-05-29T13:18:00.000","Title":"Manifest for native messaging host not found","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am running into a limitation with an API I am working with. At the moment the max rows it can return is 10000. The goal is to access a month worth of data which sometimes can be up 200000 rows. This API allows filtering by start_at_row and row_limit. It also returns row_count in the response. I will need to make multiple requests in order to capture all 200000 rows. I'm stuck on figuring out how to trigger the next request after the first 10000 rows and how to pick at exactly at 10001 and run the next 10000 rows. Is there a common method used for this problem? Any advice is welcome. Let me know which pieces of information from my code I can provide that can be useful.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":89,"Q_Id":56370374,"Users Score":0,"Answer":"Update, I was able to figure this out on my own after a little trial and error and more research on Google.\nI added start_at_row and row_limit as global variables that equal to 1 and 2 respectively.\nFrom there I added a while loop(start_at_row <= row_limit), I then captured the row_count from the response given by the API and made it equal to role_limit. \nTo break the loop I made and an if statement with start_at_row > row_limit","Q_Score":1,"Tags":"xml,python-3.x,beautifulsoup","A_Id":56403222,"CreationDate":"2019-05-30T01:06:00.000","Title":"How to iterate through smaller chunks of data in API response","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I\u2019m writing a HTTP\/1.1 client that will be used against a variety of servers.\nHow can I decide a reasonable default keep-alive timeout value, as in, how long the client should keep an unused connection open before closing? Any value I think of seems extremely arbitrary.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":1106,"Q_Id":56373516,"Users Score":2,"Answer":"First note that that with HTTP keep alive both client and server can close an idle connection (i.e. no outstanding response, no unfinished request) at any time. This means especially that the client cannot make the server keep the connection open by enforcing some timeout, all what a client-side timeout does is limit how long the client will try to keep the connection open. The server might close the connection even before this client-side timeout is reached.\nBased on this there is no generic good value for the timeout but there actually does not need to be one. The timeout is essentially used to limit resources, i.e. how much idle connections will be open at the same time. If your specific use case will never visit the same site again anyway then using HTTP keep-alive would just be a waste of resources. 
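To make the pagination approach described in the self-answer above more concrete, here is a minimal sketch of the while-loop it outlines. The endpoint URL and the `rows`/`row_count` response fields are assumptions for illustration; the real API in that question returns XML, so the parsing step would differ:

```python
import requests

API_URL = "https://example.com/api/report"   # hypothetical endpoint
ROW_LIMIT = 10000                            # maximum rows the API returns per call

start_at_row = 1
row_count = None
all_rows = []

while row_count is None or start_at_row <= row_count:
    resp = requests.get(API_URL, params={"start_at_row": start_at_row,
                                         "row_limit": ROW_LIMIT})
    payload = resp.json()
    all_rows.extend(payload["rows"])         # hypothetical field holding this page's rows
    row_count = payload["row_count"]         # total rows reported by the API
    start_at_row += ROW_LIMIT                # next request picks up where this one ended

print(f"fetched {len(all_rows)} of {row_count} rows")
```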
If instead you don't know your specific usage pattern you could just place a limit on the number of open connections, i.e. close the longest unused connection if the limit is reached and a new connection is needed. It might make sense to have some upper limit timeout of 10..15 minutes anyway since usually after this time firewalls and NAT routers in between will have abandoned the connection state so the idle connection will no longer work for new requests anyway.\nBut in any case you also need to be sure that you detect if the server closes a connection and then discard this connection from the list of reusable connections. And if you use HTTP keep-alive you also need to be aware that the server might close the connection in the very moment you are trying to send a new request on an existing connection, i.e. you need to retry this request then on a newly created connection.","Q_Score":0,"Tags":"python-3.x,sockets,http,tcp,python-asyncio","A_Id":56374276,"CreationDate":"2019-05-30T07:18:00.000","Title":"HTTP\/1.1 client: How to decide good keep-alive timeout default?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to get a list of all the items available for purchase on a certain web page. However, the web page only loads 12 items at a time until the user scrolls down and then 12 more are loaded. Is there a way, in C# or Python, using any open source libraries, to be able to \"see\" all of the items available without physically going to the page and scrolling down?\nUsing Chrome's developer tools, I can only \"see\" the 12 items in the HTML window until I physically scroll down on the web page and it loads more.\nNOTE: I'm relatively new at C#\/Python web scraping, so I appreciate any in depth answers!\nEDIT: If I were to use something like Selenium, would it be possible to load everything programatically? If so, how?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":628,"Q_Id":56380988,"Users Score":0,"Answer":"The Web browser itself can obviously load more items, and a web browser is just a program, so yes, it's possible to write a program to do it.\nBut you'd essentially be writing a mini web browser, which is a lot of work.\nDoes the website itself offer a way to load more items at once? i.e. some websites offer a link or a dropdown menu in the corner of the page that allows you to increase the maximum number of items shown.","Q_Score":1,"Tags":"c#,python,html,web-scraping","A_Id":56381247,"CreationDate":"2019-05-30T15:14:00.000","Title":"How can I load all items on a web page when using an HTML parser?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"There is a table with a list of users and their email on a webpage. At the top of the page, is a search\/filter input field that allows the user to type an email or username and filter for the result as he\/she is typing. \nThe problem: However, when I use the send_keys() method instead of doing this manually, nothing is filtered in the table view.\nThis is happening on the Safari browser on the iPhone 7 Plus (real device, not simulator). 
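The connection-limit strategy suggested in the keep-alive answer above could be sketched roughly as follows. The class, the limits, and the idea of storing raw connection objects are illustrative assumptions, not part of any standard library API:

```python
import time
from collections import OrderedDict

class IdleConnectionPool:
    """Keep at most `max_idle` idle connections, evicting the least recently used,
    and drop anything that has been idle longer than `max_idle_time` seconds."""

    def __init__(self, max_idle=10, max_idle_time=15 * 60):
        self.max_idle = max_idle
        self.max_idle_time = max_idle_time
        self._idle = OrderedDict()            # (host, port) -> (connection, last_used)

    def put(self, key, conn):
        self._idle[key] = (conn, time.monotonic())
        self._idle.move_to_end(key)
        while len(self._idle) > self.max_idle:
            _, (old_conn, _) = self._idle.popitem(last=False)   # evict LRU connection
            old_conn.close()

    def get(self, key):
        item = self._idle.pop(key, None)
        if item is None:
            return None
        conn, last_used = item
        if time.monotonic() - last_used > self.max_idle_time:
            conn.close()                      # too old; NAT/firewall state is likely gone
            return None
        return conn
```

A caller would still need to handle the case where the returned connection was closed by the server in the meantime, by retrying the request on a fresh connection as the answer notes.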
Some other information:\n\niOS version: 12.2\nAppium version: 1.13.0-beta.3\nSelenium version: 2.53.1\nProgramming language: Python 2.7.15\n\nIn addition to send_keys(), i've tried to use set_value(), i've also tried to execute JS and setting the attribute value, and also tried to send key for each character (in a for loop with a delay between each character).\nI'm expecting for example, element.send_keys(\"test1000@test.com) to filter the table view on the web page so that the only user that is displayed has the associated test1000@test.com email as it does when I go through the site manually.\nIn actuality, send_keys() does not do that and nothing in the table view gets filtered.\nAny help or guidance would be appreciated!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":288,"Q_Id":56382223,"Users Score":0,"Answer":"Explicit wait for table to get populate in DOM\nsendKeys search String and additional key Tab\ntextbox.sendKeys(\"test1000@test.com\"+Keys.TAB)\nExplicit Wait for filter to get applied and table to get refreshed.\nfind elements for the newly populated table with applied filters.","Q_Score":0,"Tags":"selenium,appium,appium-ios,python-appium","A_Id":56395049,"CreationDate":"2019-05-30T16:37:00.000","Title":"Using send_keys() on a search\/filter input field does not filter results in table","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm new to web scraping and am trying to learn more. I know some websites load products on the back end before they make it available to the general public. Is there a way I can access that information using an HTML parser or any other library?\nI suspect the website developers use dynamic javascript to alter the information after loading. Or use different tags\/classes to hide the information?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":42,"Q_Id":56382627,"Users Score":0,"Answer":"I see two questions here: \n1)Can I access information on the webserver that isn't sent to the client page? \nNo. You can only scrape what exists on the page. Anything else would be illegally accessing a non-public server and goes beyond scraping into hacking.\n2) If the site loads asynchronously and\/or dynamically, can I access the content that loads after the main portion of the html?\nYes, using browser automation tools like selenium, you can approximate a user experiencing the site and wait for the full content to load before you scrape it. 
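A rough sketch of the wait / send_keys / wait sequence proposed in the answer above, using the Python Selenium bindings that the Appium driver builds on; the CSS selectors are hypothetical placeholders for the real page:

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def filter_by_email(driver, email):
    wait = WebDriverWait(driver, 10)
    # 1. Wait for the search box to be present in the DOM.
    box = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, "input.search")))
    # 2. Type the search string plus TAB so the field loses focus and the filter fires.
    box.send_keys(email + Keys.TAB)
    # 3. Wait for the filtered table rows to be populated before reading them.
    return wait.until(
        EC.presence_of_all_elements_located((By.CSS_SELECTOR, "table tbody tr")))
```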
This is different from simple requests\/beautifulsoup, which only gathers the HTML at the point when you send the request.","Q_Score":0,"Tags":"python,web-scraping,beautifulsoup","A_Id":56382871,"CreationDate":"2019-05-30T17:05:00.000","Title":"If a website loads a product on the back end but does not publish it for the public yet, can I access that information?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to redirect mysite.com\/index.php to mysite.com\/clientarea.php using .htaccess does not work for me help out.\nwith simple code.\nI have tried several time and finally the site is not available \n\nTry running Windows Network Diagnostics.\n DNS_PROBE_FINISHED_NXDOMAIN","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":46,"Q_Id":56420275,"Users Score":0,"Answer":"Based on that error, it appears you're redirecting to a different hostname than mysite.com - I assume unintentionally. \nIf you can post your .htaccess code, the solution may be easy to provide.","Q_Score":0,"Tags":"django,python-3.x,.htaccess,whmcs","A_Id":56420311,"CreationDate":"2019-06-03T01:49:00.000","Title":"How Can i redirect url to another url with the .htaccess in cpanel","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Appium is not finding element with MobileBy Accessibility ID and I'm calling the method from AppiumDriver class that inherits from the SeleniumDriver class. Any ideas? Thanks.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":151,"Q_Id":56421105,"Users Score":0,"Answer":"It ended up being, I had to use MobileBy.ID not MobileBy.Accessibility_ID and the element was found. The framework code was not an issue. Just using wrong locator type.","Q_Score":0,"Tags":"python,automation,appium","A_Id":56429709,"CreationDate":"2019-06-03T04:21:00.000","Title":"Appium is not finding element when using a method to find it from a python appium class that inherits from a selenium class","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have two simple lambda functions. Lambda 1 is invoking lambda 2 (both do a simple print for text).\nIf both lambdas are outside of a VPC then the invocation succeeds, however as soon as I set them both in to access a VPC (I need to test within a VPC as the full process will be wtihin a VPC) the invocation times out.\nDo I have to give my lambda access to the internet to be able invoke a second lambda within the same VPC?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":648,"Q_Id":56425420,"Users Score":1,"Answer":"If your lambda functions are inside a VPC you need to configure your both lambda functions into private subnet not public subnet. 
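For the earlier question about a page that only loads 12 items per scroll, the usual Selenium pattern hinted at in these answers is to keep scrolling and waiting until the document height stops growing. A minimal sketch (the pause length is an arbitrary assumption):

```python
import time

def load_all_items(driver, pause=2.0):
    """Scroll to the bottom repeatedly until the page height stops growing."""
    last_height = driver.execute_script("return document.body.scrollHeight")
    while True:
        driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        time.sleep(pause)                     # give the lazy-loaded items time to render
        new_height = driver.execute_script("return document.body.scrollHeight")
        if new_height == last_height:
            break
        last_height = new_height
    return driver.page_source                 # now contains every item that was loaded
```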
That is the AWS recommended way.","Q_Score":0,"Tags":"python,python-3.x,aws-lambda","A_Id":56448070,"CreationDate":"2019-06-03T10:27:00.000","Title":"Unable to invoke second lambda within VPC - Python AWS Lamba Function","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to extract\/sync data through Pardot API v4 into a local DB. Most APIs were fine, just used the query method with created_after search criteria. But the Visit API does not seem to support neither a generic query of all visit data, nor a created_after search criteria to retrieve new items. \nAs far as I can see I can only query Visits in the context of a Visitor or a Prospect.\nAny ideas why, and how could I implement synchronisation? (sorry, no access to Pardot DB...)\nI have been using pypardot4 python wrapper for convenience but would be happy to use the API natively if it makes any difference.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":185,"Q_Id":56494946,"Users Score":2,"Answer":"I managed to get a response from Pardot support, and they have confirmed that such response filtering is not available on the Visits API. I asked for a feature request, but hardly any chance to get enough up-votes to be considered :(","Q_Score":1,"Tags":"python-3.x,pardot","A_Id":58770292,"CreationDate":"2019-06-07T13:05:00.000","Title":"Pardot Visit query API - generic query not available","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python script that loops through all of the AWS accounts we have and lists the EC2 instances in each one. \nI want to turn it into an AWS Lambda function. But I can't figure out how to pull the AWS credentials that would allow me to list the servers in all the accounts.\nHow can I achieve this in AWS Lambda?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":316,"Q_Id":56496230,"Users Score":0,"Answer":"When you create lambda you have so specify a role\nIn IAM you can attach required permission to a lambda role. \nIf you want to use some specific set of credentials in a file, you can utilize AWS Systems Manager to retrieve credentials. \nThough, I would recommend role on lambda","Q_Score":0,"Tags":"python,aws-lambda","A_Id":56496798,"CreationDate":"2019-06-07T14:28:00.000","Title":"Loop through AWS Accounts in Lambda Python Function","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python script that loops through all of the AWS accounts we have and lists the EC2 instances in each one. \nI want to turn it into an AWS Lambda function. 
But I can't figure out how to pull the AWS credentials that would allow me to list the servers in all the accounts.\nHow can I achieve this in AWS Lambda?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":316,"Q_Id":56496230,"Users Score":1,"Answer":"Create a role with cross account permissions for ec2:ListInstances\nAttach the role to the lambda function","Q_Score":0,"Tags":"python,aws-lambda","A_Id":56501963,"CreationDate":"2019-06-07T14:28:00.000","Title":"Loop through AWS Accounts in Lambda Python Function","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm making an http request in my code and I am curious if making a HEAD request still makes the syn, syn-ack, and ack? I've never heard of HEAD until now.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":65,"Q_Id":56512169,"Users Score":1,"Answer":"\"Does making a HEAD request still require the 3 way handshake?\"\nIt depends. HEAD request is a concept in HTTP, the application layer; while \"3 way handshake\" is a concept in TCP, the transport layer.\nThus, whether HEAD request requires \"3 way handshake\" depends on whether it is using TCP. For HTTP\/1.1 and HTTP\/2, the answer is yes. For HTTP\/3, the answer is no, as HTTP\/3 is based on UDP.","Q_Score":0,"Tags":"http,python-requests,http-headers,httprequest","A_Id":56519591,"CreationDate":"2019-06-09T05:16:00.000","Title":"Does making a HEAD request still require the 3 way handshake?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I use the requests Module with python At the moment. I use \"r.status_code == 200\" to check if the response is valid. But for my project right now it gaves me false posetives. It would be better for me to check if a response is valid with a Keyword check on the sourcecode or something like this is this possible ?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1387,"Q_Id":56517575,"Users Score":0,"Answer":"If I understand your question correctly, its possible for your request to give you back empty or invalid data that is not useful to you. \nWithout knowing what this request return you could look for specific fields in the request, for example if the request returns a json look for specific fields using if json_file.get(\"field_name\", False)","Q_Score":0,"Tags":"python,request","A_Id":56517613,"CreationDate":"2019-06-09T19:03:00.000","Title":"Python requests check if success","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to build a network listener in python, which is able to capture the local network traffic from my own computer. \nI've tried with sockets, but i find it very hard to use this method, because i'm not on a Linux-machine. \nSo, is there another method i'm not aware of, now when it's about only monitoring my own traffic? 
:)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":159,"Q_Id":56538288,"Users Score":0,"Answer":"What you need is called sniffer, try scapy framework.","Q_Score":0,"Tags":"python-3.x,http,networking","A_Id":56566080,"CreationDate":"2019-06-11T07:08:00.000","Title":"How to make a network listener in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Whenever I run my tests on my computer, they work relatively fine. At the very least, selenium runs without problems.\nAs soon as I run the same tests on a docker container I start running in all kinds of errors that selenium started throwing, such as: \"Element not clickable at point...\", \"Element is not interactable...\", etc.\nNone of these happen when I run the tests on my computer normally.\nI have a Linux Debian 9 computer, docker 1.11, Chrome 72.0, chromedriver 2.41, selenium 3.12. Test are done using py.test and in headless chrome.\nMy Dockerfile is simple, installing all of the packages for python and putting my tests in there and running them. I run a custom-made Dockerfile and don't have the option to use the premade seleniumHQ ones.\nI have first tried running a demo test where I first encountered that problem. I managed to solve it by editing the test code to bypass the exception and trying again. After that succeeded, I tried running a few more complicated tests and kept running into different errors I didn't run into before.\nEvery solution I found was aimed at solving the thrown exception, but I suspect there's a deeper issue at hand and I can't figure out what it is.\nTo reiterate: running tests on my computer (both headless and otherwise) works like a charm, but running the same tests in a docker container fails with several selenium errors thrown. For some reason, the tests don't seem to run properly and selenium keeps throwing errors.\nI've been trying to solve it for a while now and can't seem to figure out what the problem is.\nI'd like to know WHY this problem occurs and how to solve the root cause of it. 
If it's a known issue and the solution is indeed simply to solve every exception as it comes, I'd like to know that too.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":617,"Q_Id":56546422,"Users Score":1,"Answer":"As far as I could find, and how I solved this, is to just follow the exceptions and fix them as they come.\nThe general problem areas I encountered were the click event on buttons\/textbox and clearing the text from a textbox.\nThe solution involved:\n\nA call to webdriver.execute_script('arguments[0].click();', element) to replace button click events.\nA call to webdriver.execute_script('arguments[0].focus();', element) to replace textbox click events.\nA call to webdriver.execute_script('arguments[0].value = \"\";', element) to replace textbox clears.\n\nFrom where I stand, these solved most if not all the sudden exceptions and the rest worked as intended.","Q_Score":0,"Tags":"python-3.x,selenium-webdriver,docker-container,web-testing","A_Id":56836785,"CreationDate":"2019-06-11T14:51:00.000","Title":"Running Selenium in container brings sudden selenium errors where there were none before","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What are the differences among the three choices in their usages and functions? Are there lists for their arguments?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":56547919,"Users Score":0,"Answer":"ChromeOptions are specific to Chromedriver. (you can pass prefs into the ChromeDriver constructor...) Capabilities are for Selenium WebDriver. (so more global... things like setting paths to driver, browser etc...)","Q_Score":0,"Tags":"python,selenium,webdriver,selenium-chromedriver","A_Id":56552660,"CreationDate":"2019-06-11T16:16:00.000","Title":"What are the differences among ChromeOptions, capabilities and preferences in Chromedriver?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The ACS url in the Google SSO SAML setup where Google is the Identity Provider has to start with https. Therefore, I've not been able to use a localhost url. Is there a way how I could test Google SSO SAML on a local server? What url (or other details) do I need to enter?","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":5382,"Q_Id":56581853,"Users Score":0,"Answer":"I had same problem , i ran my app on localhost with https using local iis with self signed certificate and it worked just fine.\nThis way its easy to debug saml response from google rather than using remote urls.","Q_Score":5,"Tags":"python,google-cloud-platform,single-sign-on,saml-2.0,google-sso","A_Id":59552333,"CreationDate":"2019-06-13T13:40:00.000","Title":"Test Google SSO SAML on Localhost","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I tried doing it by creating a css file style.css in the same folder, copied the source code of the bulma link provided and then linked it to my html doc. 
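The JavaScript fallbacks listed in the accepted answer above could be wrapped in small helpers like the sketch below; catching the broad WebDriverException is an assumption about which errors headless Chrome raises inside the container:

```python
from selenium.common.exceptions import WebDriverException

def safe_click(driver, element):
    """Try a normal click first; fall back to a JavaScript click if headless
    Chrome complains that the element is not clickable/interactable."""
    try:
        element.click()
    except WebDriverException:
        driver.execute_script("arguments[0].click();", element)

def clear_textbox(driver, element):
    # element.clear() can silently fail in some headless setups,
    # so reset the value via JavaScript as well.
    driver.execute_script("arguments[0].value = '';", element)
```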
But then, it shows no css features at all.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1985,"Q_Id":56595054,"Users Score":0,"Answer":"Although I didn't achieve what I was trying to do, I guess the only solution is either copying the exact design provided via Bulma link or ending up writing the whole css code all by own which I don't really prefer until essentially needed. So, I'd rather stick to the pre-defined design provided by Bulma.","Q_Score":1,"Tags":"html,css,django,python-3.x,bulma","A_Id":56610545,"CreationDate":"2019-06-14T09:15:00.000","Title":"Is there any way in which I can edit \"https:\/\/cdnjs.cloudflare.com\/ajax\/libs\/bulma\/0.7.5\/css\/bulma.css\" file?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am calling a web API in Python, getting data back in XML, converting the response into a dictionary using xmltodict, but for several elements, sometimes I get a dictionary (single element) and sometimes I get a list (multiple elements) in response.\nI first started to use \"if isinstance(..., dict):\" - that could solve my problem but is not so elegant and requires quite some lines of code in my case.\nI then found out \"force_list\" which I think is exactly what I need, but I need to apply it to several elements and I can't find the right syntax - I'm not even sure if that's possible.\nThe code I am trying to make work:\nresponse = xmltodict.parse(xml, force_list=({'Child'},{'Brother'}))\nWith force_list={'Child'} only, the code works as expected.\nWith the above code, I do not get any error message but when checking the result with the \"type\" function, I still have dictionaries where I would expect to get lists.\nI tried other syntax and got error messages.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":1419,"Q_Id":56613717,"Users Score":2,"Answer":"I think I found the right syntax (it seems to be working as I expected):\nresponse = xmltodict.parse(xml, force_list=('Child','Brother'))\nJust posting in case anyone would look for the same answer in the future.","Q_Score":2,"Tags":"python,xmltodict","A_Id":56636203,"CreationDate":"2019-06-15T19:53:00.000","Title":"Can I \"force_list\" more than one parameter in xmltodict?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I\u2019m trying to send a large parquet file to RDS Postgres using Lambda. When I try to test the lambda function. I\u2019m facing the below error:\nCalling the invoke API action failed with this message: Network Error\nI tried sending with limited rows, I haven\u2019t faced any issue but when I tried to send the whole file which is of 300 mb, then I\u2019m getting the above error.\nCan someone help me with this?","AnswerCount":4,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":7702,"Q_Id":56652783,"Users Score":0,"Answer":"I had this problem when the Lambda function was inside a VPC and couldn't mount my EFS volume via File System Config. 
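A small self-contained example of the force_list syntax confirmed in the answer above; the XML document and tag names are invented for illustration:

```python
import xmltodict

xml = """
<family>
  <Child>Ann</Child>
  <Brother>Bob</Brother>
</family>
"""

# Without force_list a single <Child> comes back as a plain value;
# forcing both tags guarantees a list even when only one element is present.
doc = xmltodict.parse(xml, force_list=('Child', 'Brother'))
print(type(doc['family']['Child']))    # <class 'list'>
print(doc['family']['Brother'])        # ['Bob']
```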
This was with a totally empty new function, no code yet.\nNo solution yet but that might get you started down the correct path.","Q_Score":10,"Tags":"python","A_Id":72070877,"CreationDate":"2019-06-18T15:46:00.000","Title":"Calling the invoke API action failed with this message: Network Error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I\u2019m trying to send a large parquet file to RDS Postgres using Lambda. When I try to test the lambda function. I\u2019m facing the below error:\nCalling the invoke API action failed with this message: Network Error\nI tried sending with limited rows, I haven\u2019t faced any issue but when I tried to send the whole file which is of 300 mb, then I\u2019m getting the above error.\nCan someone help me with this?","AnswerCount":4,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":7702,"Q_Id":56652783,"Users Score":0,"Answer":"Update your browser to latest version. Some browsers with additional built-in features may block necessary script on web page.","Q_Score":10,"Tags":"python","A_Id":63502397,"CreationDate":"2019-06-18T15:46:00.000","Title":"Calling the invoke API action failed with this message: Network Error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I\u2019m trying to send a large parquet file to RDS Postgres using Lambda. When I try to test the lambda function. I\u2019m facing the below error:\nCalling the invoke API action failed with this message: Network Error\nI tried sending with limited rows, I haven\u2019t faced any issue but when I tried to send the whole file which is of 300 mb, then I\u2019m getting the above error.\nCan someone help me with this?","AnswerCount":4,"Available Count":4,"Score":1.0,"is_accepted":false,"ViewCount":7702,"Q_Id":56652783,"Users Score":9,"Answer":"I got the same error, which was weird because it was working fine for very small files. I tried with a larger file (only 5mb), and got the error message.\nSolution: check the memory allocation for the lambda function. I was using the standard minimum (128mb). Changed it to 1gb, it worked fine. The error message is horrible.","Q_Score":10,"Tags":"python","A_Id":59071261,"CreationDate":"2019-06-18T15:46:00.000","Title":"Calling the invoke API action failed with this message: Network Error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I\u2019m trying to send a large parquet file to RDS Postgres using Lambda. When I try to test the lambda function. I\u2019m facing the below error:\nCalling the invoke API action failed with this message: Network Error\nI tried sending with limited rows, I haven\u2019t faced any issue but when I tried to send the whole file which is of 300 mb, then I\u2019m getting the above error.\nCan someone help me with this?","AnswerCount":4,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":7702,"Q_Id":56652783,"Users Score":0,"Answer":"I get this error every time I set up a Lambda, and then it disappears over the space of a day. 
Not sure why though.","Q_Score":10,"Tags":"python","A_Id":58645065,"CreationDate":"2019-06-18T15:46:00.000","Title":"Calling the invoke API action failed with this message: Network Error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am given a network packet whose last 64 bytes (128 hex characters) are the RSA-512 digital signature of the SHA-256 hash of the packet. I take a truncated version of this packet (everything except the last 64 bytes) and calculate the hash myself, which is working fine, however I need a way to get back the hash that generated the signature in the first place\nI have tried to do this in Python and have run into problems because I don't have the RSA private key, only the public key and the Digital Signature. What I need is a way to take the public key and signature and get the SHA-256 hash back from that to compare it to the hash I've generated. Is there a way to do this? Any crypto libraries would be fine. I am using hashlib to generate the hash","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":902,"Q_Id":56676828,"Users Score":0,"Answer":"The original hash was signed with the private key. To get the original hash, you need to decrypt the signature with the public key, not with the private key.","Q_Score":1,"Tags":"python,rsa,digital-signature","A_Id":56679386,"CreationDate":"2019-06-19T22:44:00.000","Title":"How to get the original hashed input to an RSA digital signature?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to be able to search for any user using facebook API v3.3 in python 3. \nI have written a function that can only return my details and that's fine, but now I want to search for any user and I am not succeeding so far, it seems as if in V3.3 I can only search for places and not users\n\nThe following function search and return a place, how can I modify it so that I can able to search for any Facebook users?\n\ndef search_friend():\n graph = facebook.GraphAPI(token)\n find_user = graph.search(q='Durban north beach',type='place')\n print(json.dumps(find_user, indent=4))","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":116,"Q_Id":56687213,"Users Score":1,"Answer":"You can not search for users any more, that part of the search functionality has been removed a while ago.\nPlus you would not be able to get any user info in the first place, unless the user in question logged in to your app first, and granted it permission to access at least their basic profile info.","Q_Score":0,"Tags":"python-3.x,facebook-graph-api","A_Id":56687408,"CreationDate":"2019-06-20T13:36:00.000","Title":"how can i search for facebook users ,using facebook API(V3.3) in python 3","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I automated the Test Suite using Robot Framework-Selenium base on PYTHON 27. This same Test Suite needs to execute on client side. \nBut Company do not want to share code base with client. 
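To illustrate the RSA answer above: for PKCS#1 v1.5 signatures (an assumption, since the question does not state the padding scheme), "decrypting" the signature with the public key recovers an encoded block whose last 32 bytes are the SHA-256 digest. A hedged sketch using pycryptodome:

```python
from Crypto.PublicKey import RSA

def recover_sha256_from_signature(signature: bytes, public_key_pem: bytes) -> bytes:
    key = RSA.import_key(public_key_pem)
    # "Decrypt" with the public exponent: em = signature^e mod n
    em = pow(int.from_bytes(signature, "big"), key.e, key.n)
    em_bytes = em.to_bytes(key.size_in_bytes(), "big")
    # For PKCS#1 v1.5 the encoded message ends with the DigestInfo,
    # whose final 32 bytes are the raw SHA-256 digest.
    return em_bytes[-32:]
```

The recovered digest can then be compared against the SHA-256 hash computed over the truncated packet.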
\nIs there anyway that I can create binary file of Robot Framework and share the same binary file with client instead of sharing a actually code ?\nlike in Java we create .jar file.\nKindly help me I am looking for solution from last three days.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":343,"Q_Id":56709077,"Users Score":0,"Answer":"I don't think there's anything you can do. Robot doesn't have the ability to parse binary files, so any solution would have to include converting your binary files to text files on the client machine.","Q_Score":1,"Tags":"python,robotframework,executable,robotium","A_Id":56709784,"CreationDate":"2019-06-21T19:00:00.000","Title":"How to hide robot framework code base and share only binary file to client?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"So i have this homework, i need to do a email client that sends emails, notifies when a new email arrives,etc.\nOne of my problems is verifying a domains reputation if the user writes an url in the body or subject of the email, if the domain can be danger i shouldn't send the email. I tried mywot.com api, but i can't get a key to try coding.\nI searched for other apis like domaintools, whoisxml, urlvoid but they have a ton of documentation, and i just get lost reading all of it, also they services are limited to free users.\nIs there another api that i can try? o there's another way to valid user urls?\nThanks for your answers.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":276,"Q_Id":56711961,"Users Score":0,"Answer":"most of them are free until a certain number of queries are hit (like 1000 month). \nFor a fully free API you'll probably need to implement it yourself, since indexing domain lists takes a lot of effort.\nCheck the free intervals of senderbase and virustotal","Q_Score":0,"Tags":"python,json,xml,url,dns","A_Id":61699316,"CreationDate":"2019-06-22T01:39:00.000","Title":"How to see Domains reputation with python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am running some crawls in order to test if the results I get deviate. For this effort I created two test suites, the first one I created with the requests and BeautifulSoup library, the other one is based on selenium. I would like to find out, if pages detect both bots in the same way. \nBut I am still unsure if I am right, by assuming that requests and BeautifulSoup are independent from Selenium. \nI hope its not a dump question, but I haven't find any proper answer yet (maybe because of the wrong keywords). However, any help would be appreciated. \nThanks in advance\nI checked the requests documentation. I wrote a mail to the developer, without any answer. And of course I checked on google. I found something about scrapy vs selenium but well... are requests and BeautyfulSoup related to scrapy?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2861,"Q_Id":56728377,"Users Score":1,"Answer":"The python requests module does not use Selenium, neither does BeautifulSoup. Both will run independent of a web browser. 
Both are pure python implementations.","Q_Score":1,"Tags":"python,selenium,python-requests","A_Id":56728393,"CreationDate":"2019-06-23T23:19:00.000","Title":"Does requests rely on selenium?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I wrote a web scraper in Python, which works very well on my Laptop at home. After deploying it to AWS EC2 the performance of the scraper deteriorates. Now I am confused about the performance of EC2 instances (even of the micro and small instances, see further details below).\nScraper in python:\nGenerally, inner loop of the scrapes does the following:\n(1) the scraper looks up urls on a site (20 per site, one site = one \"site_break\"). In a second step it (2) gets the soruce code of each url, in a third step it (3) extracts the necessary information into an dataframe and in the fourth step it (4) saves the dataframe as pkl.\nAfter all loops it opens and merges the dataframs and saves it as csv.\nThe crucial (most time consuming parts) are:\n(2) download of source codes (I\/O limited by download speed): the program fills the RAM with the source code\n(3) processing of the sources codes (CPU 100%)\nTo use the RAM fully and stick together similar processes, every loop consists of site_break = 100, i.e. 100 sites * 20 urls\/site = 2000 urls. This fills the RAM of my PC to 96% (see below). Since I have to wait for server responses in step 1 and step 2, I implemented threading with maxWorkers=15 (alternatively 20-35 with similar results). This implementation cuts the run time by 80%. I am sure I could get some other .% by implementing asyncio. Nevertheless, I want to start with the lean MVP. In the processor consuming step 3 I didn't implement multiprocessing (yet), because my goal was an cost efficient\/free implemenation on t2.micro (with just one processor).\nSpecification:\nHome PC: Intel Core i7-6500 CPU, 2.59 Ghz (2 Cores, 4 logical Processors), RAM 8.00 GiB, 64-bit, x64, 50Mbit\/s Download-rate (effectively up to 45 Mbit\/s), Python 3.7.3, conda env\nEC2 t2.micro: vCPUs = 1, RAM 1.0 GiB, Network Performance \"low to moderate\" (research in forums tell my this could be something above 50 Mbit), Ubuntu 18.04, Python 3.7.3, conda env\nEC2 t3a.small: vCPUs = 2, RAM 2.0 GiB, Network Performance \"low to moderate\" but another AWS site tells me: \"up to 5 Gbit\/s\", Ubuntu 18.04, Python 3.7.3, conda env\nSince the RAM of the t2.micro is just 1 GiB, I lowered the site_break from 100 to 25. Afterwards, the RAM still got full, so I decreased it in further steps from 25 to 15, 12, 10 and finally 5. For 12, 10 and especially for 5 it works pretty well:\nI needed 5:30min for on loop with site_break = 100 on my PC. t2.micro need 8-10sec for site_break = 5, which leads to 3:00min for analogous 100 sites, which satisfied me in the first moment.\nUnfortunately, the following issue appears: After 20-30 loops the performance plumments. The time for on loop increases rapidly from 8sec to over 2min. My first assumption was it to be the low RAM, during the small loops it doesn't seem run full. After stopping and cleaning the RAM, the performance drops after the second or third loop. 
If I start it a few hours later the first case (with drop after 20-30 loops) repeats.\nBecause I firstly thought it has to do with the RAM, i launched a second instance on t3a.small with more CPU, RAM and \"up to 5 Gbit\/a\" network performance. I sliced to looks to site_break = 25 and startet the script. I is still running with a constant speed of 1:39-1:55min per loop (which is half as fast as t2.micro in its best phase (10 sec for 5 => 50 sec for 25).\nParallely, I started the script from my home PC with site_break = 25 and it is constantly faster with 1:15-1:30min per loop. (Stopping the time manuall results in 10-15sec slower for downloading and 10-15 sec slower for processing).\nThis all confuses me.\nNow my questions:\n\nWhy does the t2.micro detetoriate after several loops and why does the performance vary so wildly?\nWhy is the t3a.small 50% slower than the t2.micro? I would assume that the \"bigger\" machine would be faster in any regard.\n\nThis lets me stuck:\n\nDon't want to use my home PC for regularly (daily scraping), since the connection aborts at 4am for a tiny period of time and leads to hanging up of the script). Moreover, I don't want the script run manually and the PC all the time and block my private internet stream.\n\nt2.micro: Is useless, because the performance after the deterioration is not acceptable.\n\nt3a.small: performance is 10-20% lower than private PC. I would expect it to be better somehow? This lets my doubt to scrape over an EC2. Moreover, I can't understand the lower performance in comparison to t2.micro at the beginning.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":830,"Q_Id":56734108,"Users Score":1,"Answer":"Why does the t2.micro detetoriate after several loops and why does the performance vary so wildly?\n\nIf your RAM is not getting full, then this is most likely because Amazon is limiting the resources your instance is consuming whether that is CPU or I\/O. Amazon will give you more compute and throughput for a while (to accommodate any short-term spikes) but you should not mistake that for baseline performance.\n\nWhy is the t3a.small 50% slower than the t2.micro? I would assume that the \"bigger\" machine would be faster in any regard.\n\nT3 instances are designed for applications with moderate CPU usage that experience temporary spikes in use. With t3 you are either paying a premium to be able to afford larger and more frequent spikes, or, you are getting less baseline performance (for the same price) to be able to afford larger and more frequent spikes. This does not match the web-scraping profile where you want constant CPU and I\/O.","Q_Score":1,"Tags":"python,performance,amazon-web-services,amazon-ec2,web-scraping","A_Id":64767158,"CreationDate":"2019-06-24T09:57:00.000","Title":"Web Scraping Performance issues EC2 vs Home PC","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I found upload_from_file and upload_from_filename, but is there a function or method to upload an entire folder to Cloud Storage via Python?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2959,"Q_Id":56759262,"Users Score":0,"Answer":"I don't think directly in the Python API, no, but there is in the commandline tool gsutil. 
You could do a system call from the python script to call out to the gsutil tool as long as you're authenticated on commandline in the shell you're calling the Python from.\nThe command would look something like:\ngsutil -m cp -r gs:\/\/","Q_Score":3,"Tags":"python,directory,google-cloud-platform,upload","A_Id":56762660,"CreationDate":"2019-06-25T17:31:00.000","Title":"Upload a folder to Google Cloud Storage with Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I found upload_from_file and upload_from_filename, but is there a function or method to upload an entire folder to Cloud Storage via Python?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2959,"Q_Id":56759262,"Users Score":0,"Answer":"Google Cloud Storage doesn't really have the concept of \"directories\", just binary blobs (that might have key names that sort of look like directories if you name them that way). So your current method in Python is appropriate.","Q_Score":3,"Tags":"python,directory,google-cloud-platform,upload","A_Id":56780840,"CreationDate":"2019-06-25T17:31:00.000","Title":"Upload a folder to Google Cloud Storage with Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So when I run my python selenium script through Jenkins, how should I write the driver = webdriver.Chrome()\nHow should I put the chrome webdriver EXE in jenkins?\nWhere should I put it?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":44,"Q_Id":56784344,"Users Score":0,"Answer":"If you have added your repository path in jenkins during job configuration, Jenkins will create a virtual copy of your workspace. So, as long as the webdriver file is somewhere in your project folder structure and as long as you are using relative path to reference it in your code, there shouldn't be any issues with respect to driver in invocation.\nYou question also depends on several params like:\n1. Whether you are using Maven to run the test\n2. Whether you are running tests on Jenkins locally or on a remote machine using Selenium Grid Architecture.","Q_Score":0,"Tags":"python,selenium,jenkins,webdriver","A_Id":56784544,"CreationDate":"2019-06-27T05:15:00.000","Title":"So when I run my python selenium script through jenkins, how should I write the 'driver = webdriver.Chrome()'?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Can anyone help me to grasp this paragraph :\n\u2022 JSON and XML have good support for Unicode character strings (i.e., humanreadable text), but they don\u2019t support binary strings (sequences of bytes without\na character encoding). Binary strings are a useful feature, so people get around\nthis limitation by encoding the binary data as text using Base64. 
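A minimal sketch of the system-call approach described above, using subprocess; the local folder and bucket names are hypothetical placeholders:

```python
import subprocess

local_dir = "data/exports"                       # hypothetical folder to upload
bucket_path = "gs://my-example-bucket/exports"   # hypothetical destination bucket path

# -m parallelises the copy, -r recurses into the folder.
subprocess.run(["gsutil", "-m", "cp", "-r", local_dir, bucket_path], check=True)
```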
The schema is\nthen used to indicate that the value should be interpreted as Base64-encoded.\nThis works, but it\u2019s somewhat hacky and increases the data size by 33%\nIf i understand well, for example i use a REST API on my equipement to get some information with python.\nFor each response, it's in JSON and it's in unicode format.\nSo OK\nBut i don't really grasp the story of binary string.\nIs it the fact that each character is not in UTF8 format ?\nWhy my equipement don't response in JSON encoded in byte and not in unicode ?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":677,"Q_Id":56790769,"Users Score":0,"Answer":"JSON and XML are text formats. They're designed to be readable by people, displayed and manipulated with typical text editors like vi, emacs, or edlin, using whatever character encoding scheme is convenient for the platform. All the character encoding schemes in use, however, have some data patterns that are not allowed to represent characters -- for instance, they may have a pattern that indicates \"end-of-string\".\nIf you want to include completely arbitrary data in JSON or XML, you need a way to encode that data in valid text strings and indicate the text is not the actual data you want to use. Base64 is one way of doing that encoding, and it is commonly used for this purpose. The disadvantages are the overhead of doing the encoding and decoding when writing or reading the data, the encoded text string is slightly larger than the original arbitrary binary, and that you have to remember to do the encoding and decoding.","Q_Score":0,"Tags":"python,json","A_Id":56791501,"CreationDate":"2019-06-27T12:15:00.000","Title":"JSON, response in unicode or binary","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Problem\nI have a Lambda function in Account 1, which retrieves EC2 instance status.\nSimilarly, I want to retrieve EC2 instance status in other 4 accounts.\nWhat I did\nI Created trust relationship with the other 4 account by updating the IAM role.\nQuestion:\nWill my python code (residing in my lambda function in account 1) is enough to retrieve ec2 instance status from the other 4 accounts? or should I do something more ?\nPlease suggest!\nThanks in advance.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":22,"Q_Id":56809483,"Users Score":1,"Answer":"Each AWS Account is separate. You cannot access details of one AWS Account from another AWS Account. 
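A short example of the Base64-in-JSON workaround described in that question, including the roughly 33% size overhead it mentions:

```python
import base64
import json

payload = b"\x00\xff\x10 raw bytes that are not valid UTF-8 text"

# Encode the raw bytes as Base64 text so they can travel inside a JSON string.
encoded = base64.b64encode(payload).decode("ascii")
message = json.dumps({"data": encoded, "encoding": "base64"})

# The receiver reverses the step; note len(encoded) is about 4/3 of len(payload).
decoded = base64.b64decode(json.loads(message)["data"])
assert decoded == payload
```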
However, you can temporarily assume an IAM Role from another account to gain access.\nInstead, you will need to:\n\nCreate an IAM Role for the central Lambda function (Lambda-Role) and grant it permission to call AssumeRole\nCreate an IAM Role in each account that you wish to access, grant it permission to call DescribeInstances and configure it to trust Lambda-Role\nThe Lambda function can then loop through each account and:\n\n\nCall AssumeRole for that account, which will return temporary credentials\nUse those credentials to call DescribeInstances","Q_Score":0,"Tags":"python-3.x,amazon-web-services,aws-lambda","A_Id":56813988,"CreationDate":"2019-06-28T15:23:00.000","Title":"Is my python code enough to retrieve ec2 instance status from other accounts?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to create a program that runs automatically every time I download a file from browser.\nFor example, when I download an image file from chrome, the program runs automatically and perform tasks. Is it possible?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":2023,"Q_Id":56815065,"Users Score":2,"Answer":"I think you need some kind of \"listening\" script running on background which will monitor files in download directory","Q_Score":2,"Tags":"python","A_Id":56815135,"CreationDate":"2019-06-29T04:26:00.000","Title":"How to make python program run automatically every time I download a file?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My setup:\nI am using python3 and the socket module to setup a socket using ipv4 and tcp, listening and receiving data via an open port. I have a simple server and client where the client sends a request and the server responses. \nMy problem:\nI did some research and many people suggest that an open port is like an open door so how can I lock it? My goal is to secure my server and my client and not the data that is transmitted (which means the data shouldn't be altered but it does not matter if somebody reads it). I just want to make sure that neither the server nor the client receives wrong data or can be hacked in any way. If both the server and the client are normal computers with build-in firewalls are those sufficient?\nQuestions:\n\nHow can I make sure that the data I transmit can't be altered?\nIs the firewall (normal firewall that every computer has built-in) of the server sufficient when listening, receiving and sending data via an open port? If not what can I do to make sure the server can't be hacked in any way (obviously not entirely but as good as possible)?\nSame as question 2. just for a client (which as far as I am concerned does use an open port or at least not like the server)\n\nPS: If possible using python.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":412,"Q_Id":56816388,"Users Score":0,"Answer":"A Basic level of security for the server side would be to send a random key along with the data for verification of trusted client. 
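A sketch of the loop described in the accepted answer above, using boto3; the account IDs and role name are placeholders, and the role in each target account must trust the Lambda's execution role:

```python
import boto3

ACCOUNTS = ["111111111111", "222222222222"]   # hypothetical target account IDs
ROLE_NAME = "ec2-describe-role"               # hypothetical role deployed in each account

sts = boto3.client("sts")

for account_id in ACCOUNTS:
    # 1. Assume the role in the target account to obtain temporary credentials.
    creds = sts.assume_role(
        RoleArn=f"arn:aws:iam::{account_id}:role/{ROLE_NAME}",
        RoleSessionName="lambda-ec2-status",
    )["Credentials"]

    # 2. Use those credentials to list instance states in that account.
    ec2 = boto3.client(
        "ec2",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    for reservation in ec2.describe_instances()["Reservations"]:
        for instance in reservation["Instances"]:
            print(account_id, instance["InstanceId"], instance["State"]["Name"])
```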
If the list of clients that are going to send data are known you can just whitelist the IP addresses which accept data only from a specific list of IP addresses.","Q_Score":1,"Tags":"python,sockets,security,tcp,server","A_Id":56816527,"CreationDate":"2019-06-29T09:15:00.000","Title":"How to secure a python socket listening, sending and receiving on an open port?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am creating a Discord bot that needs to check all messages to see if a certain string is in an embed message created by any other Discord bot. I know I can use message.content to get a string of the message a user has sent but how can I do something similar with bot embeds in Python?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":467,"Q_Id":56822722,"Users Score":0,"Answer":"Use message.embeds instead to get the embed string content","Q_Score":1,"Tags":"python,bots,embed,message,discord","A_Id":56823113,"CreationDate":"2019-06-30T05:53:00.000","Title":"Using a Discord bot, how do I get a string of an embed message from another bot","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to interact (i.e. clicking buttons within app, taking keyboard input and typing for me, etc.) with the Cisco AnyConnect Client, specifically. \nHowever, I would like to know if there is way that is viable with Python 3 to interact with any other Mac applications.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":58,"Q_Id":56839710,"Users Score":0,"Answer":"I was able to do this using PyAutoGUI like @Mark Setchell stated","Q_Score":0,"Tags":"python-3.x,macos,user-interface","A_Id":57814669,"CreationDate":"2019-07-01T17:09:00.000","Title":"Is there a way to interact with Mac Applications with Python 3?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking to use a public API running on a distant server from within my company. For security reasons, I am supposed to redirect all the traffic via the company's PROXY. Does anyone know how to do this in Python?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":312,"Q_Id":56841342,"Users Score":1,"Answer":"Set the HTTP_PROXY environment variable before starting your python script\ne.g. export HTTP_PROXY=http:\/\/proxy.host.com:8080","Q_Score":0,"Tags":"python,proxy,api","A_Id":56841343,"CreationDate":"2019-07-01T15:01:00.000","Title":"Configure proxy with python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking to use a public API running on a distant server from within my company. For security reasons, I am supposed to redirect all the traffic via the company's PROXY. 
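For the Discord question above about reading another bot's embeds, a minimal discord.py sketch of checking embed text in on_message; the target string and token are placeholders, and the explicit intents line assumes discord.py 2.x.

```python
import discord

TARGET = "some keyword"                      # placeholder string to look for
intents = discord.Intents.default()          # discord.py 2.x requires explicit intents
client = discord.Client(intents=intents)

@client.event
async def on_message(message):
    # Embed text is not in message.content; it lives in message.embeds
    for embed in message.embeds:
        parts = [embed.title, embed.description] + [f"{f.name} {f.value}" for f in embed.fields]
        if any(p and TARGET in str(p) for p in parts):
            print("Found the target string in an embed from", message.author)

client.run("YOUR_BOT_TOKEN")                 # placeholder token
```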
Does anyone know how to do this in Python?","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":312,"Q_Id":56841342,"Users Score":2,"Answer":"Directly in python you can do :\nos.environ[\"HTTP_PROXY\"] = http:\/\/proxy.host.com:8080.\nOr as it has been mentioned before launching by @hardillb on a terminal :\nexport HTTP_PROXY=http:\/\/proxy.host.com:8080","Q_Score":0,"Tags":"python,proxy,api","A_Id":56841409,"CreationDate":"2019-07-01T15:01:00.000","Title":"Configure proxy with python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am having issues accessing my Binance account information via API in Python. It always gives the APIError Exception but I am able to ping the exchange and get candlestick data successfully. I read through the API documentation and made sure that the API key is valid and I don't think I am missing anything.\nbinance_client = BinanceClient(api_key=api_key, api_secret=api_secret)\n print(binance_client.get_account(recvWindow=60000000))","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":4680,"Q_Id":56843717,"Users Score":0,"Answer":"maximum value for recvWindow is 60000.","Q_Score":1,"Tags":"python,binance","A_Id":65251985,"CreationDate":"2019-07-01T23:49:00.000","Title":"binance.exceptions.BinanceAPIException: APIError(code=-1022): Signature for this request is not valid","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working with python flask's requests module. I have installed requests module using :\npip install requests\nAnd verified that requests module exists when I run :\npip list\nBut when I run my python application , I receive import Error for requests module.\nI noticed that pip is installing the module in C:\\Users\\xx\\Documents\\Projects\\Python\\Python3REST\\lib\\site-packages\\ folder BUT the interpreter is looking for the module in C:\\Users\\xx\\Documents\\Projects\\Python\\Python3REST\\lib\\site-packages\\flask\\ folder.\nI have tried running the command :\npip install --install-option=\"Path to install in\" requests\nBut threw some other error. \nThe import error I am getting states :\nImportError: cannot import name 'requests' from 'flask'\n(C:\\Users\\xx\\Documents\\Projects\\Python\\Python3REST\\lib\\site-packages\\flask\\__init__.py)\nI am working in virtualenv in Windows 10. \nI appreciate any help I can get.\nThank you","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":5384,"Q_Id":56861605,"Users Score":1,"Answer":"what if you add that folder to your path? using sys.path.extend?","Q_Score":2,"Tags":"python,pip,virtualenv","A_Id":56861629,"CreationDate":"2019-07-03T00:52:00.000","Title":"pip list shows installed module but still getting import error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a rather complex Flask project structure set up that uses blueprints to separate different \"modules\" from one another, and then different routes in each module. 
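A small sketch tying together the two proxy answers above, showing both styles with the requests library; the proxy and API URLs are placeholders.

```python
import os
import requests

PROXY = "http://proxy.example.com:8080"   # placeholder corporate proxy URL

# Option 1: environment variables, picked up automatically by requests
os.environ["HTTP_PROXY"] = PROXY
os.environ["HTTPS_PROXY"] = PROXY

# Option 2: an explicit per-request mapping (overrides the environment)
proxies = {"http": PROXY, "https": PROXY}
response = requests.get("https://api.example.com/v1/data", proxies=proxies, timeout=10)
print(response.status_code)
```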
I now want to add socket functionalities to this project using the flask-socketio library. My goal is to assign each socket to it's own thread, managed by an object. The problem I am facing now is that I'm unsure how to properly separate the sockets from one another.\n(Please note that I am very deep in experimentation phase right now, I don't have a final concept yet and therefore no conclusive code snippets to show)\nFrom my understanding each socket has a unique socket-ID, which is stored in flask.request.sid while the context is within a socketio-event. That is great, because I can use this to match events to the open socket, meaning I know which \"disconnect\"-event belongs to which \"connect\"-event, since both have the same socket-ID if they are part of the same socket connection.\nNow the problems start though. I am attempting to find a way that based on which route the socket is called from the socket has different event handlers. Meaning the route ...\/admin\/a manages different events than ...\/admin\/b, because both routes lead to different logical parts of my web application. On an even larger scale, I only have one global socketio-object shared over all blueprints of my application. If I add an event handler to react to socket feedback of blueprint 1, I absolutely don't want blueprint 2 to be able to also trigger it with it's own sockets. \nWhat are the recommended ways of handling socket separation? Is that even something that's getting used in practise or do I have a fundamentally wrong understanding of how socket connections should be used in web applications?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":441,"Q_Id":56867887,"Users Score":2,"Answer":"If you have different logical groups of socket events that you want to be independent of each other, you can use a different namespace for each. In Socket.IO one or more namespaces can be multiplexed on the same physical transport.\nIf you prefer something simpler, just avoid conflicts in the name that you assign to your events.","Q_Score":1,"Tags":"python,flask,socket.io,flask-socketio","A_Id":56878579,"CreationDate":"2019-07-03T10:18:00.000","Title":"How to separate sockets for logically separated Flask routes?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I just wonder how apache server can know the domain you come from you can see that in Vhost configuration","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":19,"Q_Id":56875383,"Users Score":0,"Answer":"By a reverse DNS lookup of the IP; socket.gethostbyaddr(). 
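To illustrate the namespace suggestion in the Flask-SocketIO answer above, a minimal sketch; the namespace names are invented for the example and stand in for the two blueprints.

```python
from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app)

# Events registered under one namespace never trigger handlers of the other,
# even though both are multiplexed over the same Socket.IO machinery.
@socketio.on("status", namespace="/admin_a")
def handle_admin_a_status(data):
    emit("ack", {"from": "admin_a"})

@socketio.on("status", namespace="/admin_b")
def handle_admin_b_status(data):
    emit("ack", {"from": "admin_b"})

if __name__ == "__main__":
    socketio.run(app)
```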
\nResults vary; many IPs from consumer ISPs won't resolve to anything interesting, because of NAT and just not maintaining a generally informative reverse zone.","Q_Score":0,"Tags":"python","A_Id":56875761,"CreationDate":"2019-07-03T17:35:00.000","Title":"How can I find domain that has been used from a client to reach my server in python socket?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am getting only upto 10000 members when using telethon how to get more than 10000 \nI tried to run multiple times to check whether it is returning random 10000 members but still most of them are same only few changed that also not crossing two digits\nExpected greater than 10000\nbut actual is 10000","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":131,"Q_Id":56884669,"Users Score":0,"Answer":"there is no simple way. you can play with queries like 'a*', 'b*' and so on","Q_Score":0,"Tags":"python,telegram,telethon","A_Id":56884748,"CreationDate":"2019-07-04T09:21:00.000","Title":"how to get the memebrs of a telegram group greater than 10000","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to run a python script in Google Cloud which will download 50GB of data once a day to a storage bucket. That download might take longer than the timeout limit on the Google Cloud Functions which is set to 9 minutes.\nThe request to invoke the python function is triggered by HTTP.\nIs there a way around this problem ? I don't need to run a HTTP Restful service as this is called once a day from an external source. (Can't be scheduled) .\nThe whole premise is do download the big chuck of data directly to the cloud. \nThanks for any suggestions.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1242,"Q_Id":56888434,"Users Score":2,"Answer":"9 minutes is a hard limit for Cloud Functions that can't be exceeded. If you can't split up your work into smaller units, one for each function invocation, consider using a different product. Cloud Run limits to 15 minutes, and Compute Engine has no limit that would apply to you.","Q_Score":4,"Tags":"python,google-cloud-functions,long-running-processes","A_Id":56888658,"CreationDate":"2019-07-04T12:55:00.000","Title":"Long running python process with Google Cloud Functions","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"The title clearly says it, \nHow do I go about dragging the message author into the voice channel the bot is currently in?\nSay my bot is in a voice channel alone. A command is called to play sound to the author only. 
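For the reverse-DNS answer above (and the gethostbyaddr question later in this section), a small sketch of what the lookup returns and how it can fail; the hostname is a placeholder.

```python
import socket

ip = socket.gethostbyname("www.example.com")                  # forward lookup: name -> IP
try:
    hostname, aliases, addresses = socket.gethostbyaddr(ip)   # reverse lookup: IP -> PTR record
    print(hostname, aliases, addresses)
except socket.herror:
    # Many addresses simply have no useful PTR record
    print("no reverse DNS entry for", ip)
```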
\nBut the author isn't in a voice channel, so I can't use the move_to(*) method, hence the word drag.\nI scrounged the API reference for connections, but I can't seem to find any.\nIs it even possible to drag users into a voice channel?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":179,"Q_Id":56924550,"Users Score":2,"Answer":"Of course you cannot forcibly connect a client to a voice channel - this would directly violate user privacy. Your idea is akin to 'dragging people into phone calls' unexpectedly.","Q_Score":0,"Tags":"python,discord,discord.py,python-3.7,discord.py-rewrite","A_Id":57067105,"CreationDate":"2019-07-07T17:35:00.000","Title":"discord.py rewrite - dragging message author to a voice channel","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a list of about 100 HTML webpages (all have different structures, such as divs, anchors, classes, etc.) and I am trying to scrape the title of each page (where the title is under a certain div and class). To do this, I was using get requests and Beautifulsoup, however, this takes way to long (10 minutes every time I want to do it)!\nI used a timer to see what is taking the most time: it's the get requests. Apparently Python (3.7) executes the code one after another, and since each get request takes about 5-6 seconds, it's taking approximately 500-600 seconds to complete the ~100 requests. \nI've searched for ways to make these requests work faster and came across many different solutions. However, a common theme seemed to be that making my requests asynchronous (so all requests start at the same time) will solve the problem (by making it faster).\nThere were many possible solutions for doing this that I read online including: multithreading, using grequest, using Scrapy, parsing lxml, etc. However, I'm new to programming and am not skilled enough to learn and experiment with each way (in fact, I tried following the answers to similar questions on SO, but wasn't successful), so I am unsure what is the best route for me to take. \nI don't need anything fancy; all I want to do is extract the titles from the HTML documents as text and then print them out. I don't need to download any CSS files, images, media, etc. Also, I'm hoping to keep the code as simple\/bare as possible. How can I do this as fast as possible in Python? I'd appreciate it if someone could suggest the best path to take (i.e. using Scrapy), and give a short explanation of what I must do using that tool to get the results I'm hoping for. You don't have to write out the whole code for me. 
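The scraping question above asks how to overlap ~100 slow GET requests; one common approach (among those the asker mentions reading about) is a thread pool from the standard library. A minimal sketch with requests and BeautifulSoup follows; the URL list and the title selector are placeholders.

```python
from concurrent.futures import ThreadPoolExecutor

import requests
from bs4 import BeautifulSoup

URLS = ["https://example.com/page1", "https://example.com/page2"]  # placeholder list of ~100 URLs

def fetch_title(url):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    tag = soup.find("div", class_="title")          # placeholder selector; varies per page
    return url, tag.get_text(strip=True) if tag else None

# With ~10 workers the requests overlap instead of running one after another
with ThreadPoolExecutor(max_workers=10) as pool:
    for url, title in pool.map(fetch_title, URLS):
        print(url, "->", title)
```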
Thanks!","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":154,"Q_Id":56928294,"Users Score":1,"Answer":"One of the idea which I can suggest is taking all the urls in Csv and keep few headings like path,title div,body div, image div as per your requirement and keep adding the particular div(div class=\u201dtitle\u201d).\nEx:\n PATH TITLE DIV IMAGE DIV BODY DIV \nSimilarly, you can give all links in one csv file nd read it through python script so that all data is pulled.","Q_Score":1,"Tags":"python,html,parsing,web-scraping,scrapy","A_Id":56928682,"CreationDate":"2019-07-08T04:05:00.000","Title":"How to scrape many HTML documents quickly using Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"The above mentioned error occurred while trying to run a cloud function in Google Cloud Platform. The error occurred in its main.py written in python in the line \"storage_client=storage.Client()\" \nI have also checked the github repository for google-cloud-python\/storage\/google\/cloud\/storage\/_http.py line 33 where it is showing error but I have done nothing with those variables anywhere, I reckon\nAny help will be appreciated","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":369,"Q_Id":56932945,"Users Score":2,"Answer":"I just experienced this same issue-- \nShort answer-- upgrade the google-cloud-core package: e.g. in my case I had \ngoogle-cloud-core==0.29.1\nUpgrading to version 1.0.2 solved my issue:\npip3 install --upgrade google-cloud-core==1.0.2\nFor me, this arose from installing all my python packages from a requirements.txt file, which had explicit versions. 
Sometime later, I must have upgraded and the packages did not stay aligned.","Q_Score":1,"Tags":"google-cloud-platform,google-cloud-functions,google-cloud-storage,google-api-python-client","A_Id":57914005,"CreationDate":"2019-07-08T10:24:00.000","Title":"How to fix error \"File \"\/...\/google\/cloud\/storage\/_http.py\", line 33, in __init__ TypeError: __init__() takes 2 positional arguments but 3 were given\"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hello I am developing a web scraper and I am using in a particular website, this website has a lot of URLs, maybe more than 1.000.000, and for scraping and getting the information I have the following architecture.\nOne set to store the visited sites and another set to store the non-visited sites.\nFor scraping the website I am using multithreading with a limit of 2000 threads.\nThis architecture has a problem with a memory size and can never finish because the program exceeds the memory with the URLs\nBefore putting a URL in the set of non-visited, I check first if this site is in visited, if the site was visited then I will never store in the non-visited sites.\nFor doing this I am using python, I think that maybe a better approach would be storing all sites in a database, but I fear that this can be slow\nI can fix part of the problem by storing the set of visited URLs in a database like SQLite, but the problem is that the set of the non-visited URL is too big and exceeds all memory\nAny idea about how to improve this, with another tool, language, architecture, etc...?\nThanks","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":108,"Q_Id":56938689,"Users Score":1,"Answer":"At first, i never crawled pages using Python. My preferd language is c#. But python should be good, or better.\nOk, the first thing your detected is quiet important. Just operating on your memory will NOT work. Implementing a way to work on your harddrive is important. If you just want to work on memory, think about the size of the page.\nIn my opinion, you already got the best(or a good) architecture for webscraping\/crawling. You need some kind of list, which represents the urls you already visited and another list in which you could store the new urls your found. Just two lists is the simplest way you could go. Cause that means, you are not implementing some kind of strategy in crawling. If you are not looking for something like that, ok. But think about it, because that could optimize the usage of memory. Therefor you should look for something like deep and wide crawl. Or recursive crawl. Representing each branch as a own list, or a dimension of an array.\nFurther, what is the problem with storing your not visited urls in a database too? Cause you only need on each thread. 
If your problem with putting it in db is the fact, that it could need some time swiping through it, then you should think about using multiple tables for each part of the page.\nThat means, you could use one table for each substring in url:\nwwww.example.com\/\nwwww.example.com\/contact\/\nwwww.example.com\/download\/\nwwww.example.com\/content\/\nwwww.example.com\/support\/\nwwww.example.com\/news\/\nSo if your url is:\"wwww.example.com\/download\/sweetcats\/\", then you should put it in the table for wwww.example.com\/download\/.\nWhen you have a set of urls, then you have to look at first for the correct table. Afterwards you can swipe through the table.\nAnd at the end, i have just one question. Why are you not using a library or a framework which already supports these features? I think there should be something available for python.","Q_Score":0,"Tags":"python,performance,web-scraping,architecture","A_Id":56953619,"CreationDate":"2019-07-08T16:03:00.000","Title":"What is the best approach to scrape a big website?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Hello I am developing a web scraper and I am using in a particular website, this website has a lot of URLs, maybe more than 1.000.000, and for scraping and getting the information I have the following architecture.\nOne set to store the visited sites and another set to store the non-visited sites.\nFor scraping the website I am using multithreading with a limit of 2000 threads.\nThis architecture has a problem with a memory size and can never finish because the program exceeds the memory with the URLs\nBefore putting a URL in the set of non-visited, I check first if this site is in visited, if the site was visited then I will never store in the non-visited sites.\nFor doing this I am using python, I think that maybe a better approach would be storing all sites in a database, but I fear that this can be slow\nI can fix part of the problem by storing the set of visited URLs in a database like SQLite, but the problem is that the set of the non-visited URL is too big and exceeds all memory\nAny idea about how to improve this, with another tool, language, architecture, etc...?\nThanks","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":108,"Q_Id":56938689,"Users Score":1,"Answer":"2000 threads is too many. Even 1 may be too many. Your scraper will probably be thought of as a DOS (Denial Of Service) attach and your IP address will be blocked.\nEven if you are allowed in, 2000 is too many threads. You will bottleneck somewhere, and that chokepoint will probably lead to going slower than you could if you had some sane threading. Suggest trying 10. One way to look at it -- Each thread will flip-flop between fetching a URL (network intensive) and processing it (cpu intensive). So, 2 times the number of CPUs is another likely limit.\nYou need a database under the covers. This will let you top and restart the process. More importantly, it will let you fix bugs and release a new crawler without necessarily throwing away all the scraped info.\nThe database will not be the slow part. 
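A minimal sketch of the "database under the covers" idea from the answer above: a single SQLite table acting as the URL frontier, so neither the visited nor the unvisited set has to live in memory. The table and column names are invented for the example.

```python
import sqlite3

conn = sqlite3.connect("frontier.db")
conn.execute("""CREATE TABLE IF NOT EXISTS urls (
                    url     TEXT PRIMARY KEY,
                    visited INTEGER NOT NULL DEFAULT 0)""")

def add_url(url):
    # INSERT OR IGNORE makes the visited/unvisited check implicit: known URLs are skipped
    conn.execute("INSERT OR IGNORE INTO urls (url) VALUES (?)", (url,))
    conn.commit()

def next_unvisited():
    row = conn.execute("SELECT url FROM urls WHERE visited = 0 LIMIT 1").fetchone()
    return row[0] if row else None

def mark_visited(url):
    conn.execute("UPDATE urls SET visited = 1 WHERE url = ?", (url,))
    conn.commit()

add_url("https://example.com/")          # placeholder seed URL
while (url := next_unvisited()) is not None:
    mark_visited(url)                    # fetching/parsing would happen here, feeding add_url()
```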
The main steps:\n\nPick a page to go for (and lock it in the database to avoid redundancy).\nFetch the page (this is perhaps the slowest part)\nParse the page (or this could be the slowest)\nStore the results in the database\nRepeat until no further pages -- which may be never, since the pages will be changing out from under you.\n\n(I did this many years ago. I had a tiny 0.5GB machine. I quit after about a million analyzed pages. There were still about a million pages waiting to be scanned. And, yes, I was accused of a DOS attack.)","Q_Score":0,"Tags":"python,performance,web-scraping,architecture","A_Id":57539538,"CreationDate":"2019-07-08T16:03:00.000","Title":"What is the best approach to scrape a big website?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I need to scrape a web page, but the problem is when i click on the link on website, it works fine, but when i go through the link manually by typing url in browser, it gives Access Denied error, so may be they are validating referrer on their end, Can you please tell me how can i sort this issue out using selenium in python ? \nor any idea that can solve this issue? i am unable to scrape the page because its giving Access Denied error.\nPS. i am working with python3\nWaiting for help.\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":322,"Q_Id":56940886,"Users Score":0,"Answer":"I solved myself by using seleniumwire ;) selenium doesn't support headers, but seleniumwire supports, so that solved my issue.\nThanks","Q_Score":0,"Tags":"python","A_Id":56940887,"CreationDate":"2019-07-08T17:12:00.000","Title":"How to set Referrer in driver selenium python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was trying to using python requests and mechanize to gather information from a website. This process needs me to post some information then get the results from that website. I automate this process using for loop in Python. However, after ~500 queries, I was told that I am blocked due to high query rate. It takes about 1 sec to do each query. I was using some software online where they query multiple data without problems. Could anyone help me how to avoid this issue? Thanks!\nNo idea how to solve this.\n--- I am looping this process (by auto changing case number) and export data to csv....\nAfter some queries, I was told that my IP was blocked.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":46,"Q_Id":56943620,"Users Score":0,"Answer":"Optimum randomized delay time between requests. 
\nRandomized real user-agents for\neach request.\nEnabling cookies.\nUsing a working proxy pool and\nselecting a random proxy for each request.","Q_Score":0,"Tags":"python,web-scraping,python-requests,export-to-csv","A_Id":56954734,"CreationDate":"2019-07-08T23:22:00.000","Title":"While query data (web scraping) from a website with Python, how to avoid being blocked by the server?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to build REST API which will communicate with MYSQL DB The application will have some heavy processing after I fetch data from DB and return. Node.js being single threaded might have some issues i fell.\nI want to know if I should go with node.js or python is there any other technology I should be using ?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":294,"Q_Id":56945173,"Users Score":1,"Answer":"With my exp, python work with MySQL better than NodeJS in multi thread. But i think you should try some other solution if it's really heavy process, like using Spark for data processing.","Q_Score":0,"Tags":"python,node.js,rest,api,backend","A_Id":56945205,"CreationDate":"2019-07-09T04:03:00.000","Title":"Node.js, python for API","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently developing a proprietary PDF parser that can read multiple types of documents with various types of data. Before starting, I was thinking about if reading PowerPoint slides was possible. My employer uses presentation guidelines that requires imagery and background designs - is it possible to build a parser that can read the data from these PowerPoint PDFs without the slide decor getting in the way? \nSo the workflow would basically be this:\n\nAt the end of a project, the project report is delivered in the form of a presentation. \nThe presentation would be converted to PDF.\nThe PDF would be submitted to my application.\nThe application would read the slides and create a data-focused report for quick review.\n\nThe goal of the application is to cut down on the amount of reading that needs to be done by a significant amount as some of these presentation reports can be many pages long with not enough time in the day.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":201,"Q_Id":56975372,"Users Score":0,"Answer":"A PowerPoint PDF isn't a type of PDF.\nThere isn't going to be anything natively in the PDF that identifies elements on the page as being 'slide' graphics the originated from a PowerPoint file for example.\nYou could try building an algorithm that makes decision about content to drop from the created PDF but that would be tricky and seems like the wrong approach to me.\nA better approach would be to \"Export\" the PPT to text first, e.g. 
in Microsoft PowerPoint Export it to a RTF file so you get all of the text out and use that directly or then convert that to PDF.","Q_Score":0,"Tags":"python,parsing,pdf,pdf-scraping","A_Id":56994393,"CreationDate":"2019-07-10T16:55:00.000","Title":"Is it possible for a PDF data parser to read PowerPoint PDFs?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"By trying to find an optimization to my server on python, I have stumbled on a concept called select. By trying to find any code possible to use, no matter where I looked, Windows compatibility with this subject is hard to find.\nAny ideas how to program a TCP server with select on windows? I know about the idea of unblocking the sockets to maintain the compatibility with it. Any suggestions will be welcomed.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":98,"Q_Id":56979085,"Users Score":1,"Answer":"Using select() under Windows is 99% the same as it is under other OS's, with some minor variations. The minor variations (at least the ones I know about) are:\n\nUnder Windows, select() only works for real network sockets. In particular, don't bother trying to select() on stdin under Windows, as it won't work.\nUnder Windows, if you attempt a non-blocking TCP connection and the TCP connection fails asynchronously, you will get a notification of that failure via the third (\"exception\") fd_set only. (Under other OS's you will get notified that the failed-to-connect TCP-socket is ready-for-read\/write also)\nUnder Windows, select() will fail if you don't pass in at least one valid socket to it (so you can't use select([], [], [], timeoutInSeconds) as an alternative to time.sleep() like you can under some other OS's)\n\nOther than that select() for Windows is like select() for any other OS. (If your real question about how to use select() in general, you can find information about that using a web search)","Q_Score":0,"Tags":"python,windows,sockets,select","A_Id":56979188,"CreationDate":"2019-07-10T21:47:00.000","Title":"TCP Socket on Server Side Using Python with select on Windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently using python to write some appium test. Because I am behind a corporate firewall my traffic needs to go via a proxy. \nI have set my http_proxy and https_proxy variables, but it seems like this is not being picked up by python during execution.\nI tried the exact same test using javascript and node and the proxy get picked up and everything works so I am sure the problem is python not following the proxy settings.\nHow can I make sure python is using correct proxy settings?\nI am using python 2.7 on macos mojave\nThanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":154,"Q_Id":56987653,"Users Score":0,"Answer":"So I figured out that appium currently does not support options to provide a proxy when making a remote connection. 
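To accompany the select()-on-Windows answer above, a minimal non-blocking echo server sketch using select.select; it behaves the same on Windows as elsewhere, subject to the caveats listed (for example, at least one real socket must be passed in). The address and port are placeholders.

```python
import select
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 5050))       # placeholder address/port
server.listen()
server.setblocking(False)

sockets = [server]
while True:
    readable, _, errored = select.select(sockets, [], sockets, 1.0)
    for sock in readable:
        if sock is server:
            conn, _addr = server.accept()       # new client connection
            conn.setblocking(False)
            sockets.append(conn)
        else:
            data = sock.recv(4096)
            if data:
                sock.sendall(data)              # echo the data back
            else:
                sockets.remove(sock)            # client closed the connection
                sock.close()
    for sock in errored:
        if sock in sockets and sock is not server:
            sockets.remove(sock)
            sock.close()
```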
As temporary solution I modified the remote_connection module of selenium that appium inherits forcing it to use a proxy url for the connection.\nMy python knowledge is not that good but I think it shoudnt take much effort for someone to make a module that wraps\/override the appium webdriver remote connection to include a proxy option.","Q_Score":0,"Tags":"python","A_Id":57199327,"CreationDate":"2019-07-11T10:56:00.000","Title":"python ignores environmet proxy settings","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I wanted to get the waiting time of vehicles in SUMO and work it into the TraCI interface. For example I want to receive the getwaitingtime() of each vehicle Id within a certain area of the network.(meaning they are stopped or waiting in a queue). Then I want to add the total waiting time of Vehicles based on lane or each direction. After the total Time is added I want to assign this value to lets say X. and Use the value of X to perform some mathematical calculations and give decision to change the traffic light.\ngetwaitingtime(). VehID().","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":607,"Q_Id":56998904,"Users Score":0,"Answer":"When the vehicle is stopped it doesn't accumulate waiting time. Rather you can do this check using isStopped command. This will return True for each simulation step the vehicle is stopped or is in a stopped state.\nAs for the accumulation of waiting time, the waiting time counter is set to 0 each time the vehicle's speed is greater than 0.1 ms. So getWaitingTime might not give you an accurate measure of the total waiting time for a single vehicle. Use getAccumulatedWaitingTime to get the accumulated waiting time for the pre-defined or user-defined waiting time memory. This accumulated waiting time can be tested against the simulation time steps (aggregate) and then you can know for sure if the vehicle has been in the queue for a long time or not.","Q_Score":0,"Tags":"python,sumo,traffic-simulation","A_Id":57004916,"CreationDate":"2019-07-12T00:38:00.000","Title":"SUMO TraCi : How to assign the getwaitingtime() of a VehId and add that total Waiting time per lane?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have tried automating Facebook login using Selenium WebDriver. Can I do the same automatic login on Android applications instead of web pages? And if yes, what dependency do I need? 
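For the SUMO/TraCI question above, a minimal sketch of summing accumulated waiting time per lane on each simulation step, along the lines of the answer's getAccumulatedWaitingTime suggestion. The sumo binary name and config file are placeholders, and the traffic-light rule is only indicated in a comment.

```python
from collections import defaultdict

import traci

traci.start(["sumo", "-c", "my_network.sumocfg"])   # placeholder SUMO configuration

while traci.simulation.getMinExpectedNumber() > 0:
    traci.simulationStep()
    waiting_per_lane = defaultdict(float)
    for veh_id in traci.vehicle.getIDList():
        lane = traci.vehicle.getLaneID(veh_id)
        waiting_per_lane[lane] += traci.vehicle.getAccumulatedWaitingTime(veh_id)
    # waiting_per_lane[lane] is the "X" the question refers to; a signal decision
    # could be made here, e.g. via traci.trafficlight.setPhase(...)
traci.close()
```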
Or do I need some sort of API?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":82,"Q_Id":57042809,"Users Score":0,"Answer":"The answer is no, Selenium doesn't support mobile apps, but you can use Appium, a good free open-source tool.","Q_Score":0,"Tags":"python,python-3.x,python-2.7,selenium,selenium-webdriver","A_Id":57042967,"CreationDate":"2019-07-15T15:23:00.000","Title":"Selenium automatic login can be used on web pages; can it also be used on Android apps?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have tried automating Facebook login using Selenium WebDriver. Can I do the same automatic login on Android applications instead of web pages? And if yes, what dependency do I need? Or do I need some sort of API?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":82,"Q_Id":57042809,"Users Score":0,"Answer":"One option, and how I do it, is to run the Android app in an emulator such as the one in Android Studio on Windows, and then use a Windows automation tool like AppRobotic Personal to simulate user actions, allowing me to log in on the Android app.","Q_Score":0,"Tags":"python,python-3.x,python-2.7,selenium,selenium-webdriver","A_Id":57043197,"CreationDate":"2019-07-15T15:23:00.000","Title":"Selenium automatic login can be used on web pages; can it also be used on Android apps?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm using an rpyc server to get data using Selenium when a connection to a client is established. The problem is that the URL I'm trying to access occasionally prompts a reCaptcha that must be filled in order to access the data needed.\nI don't really need a way to automate its completion; what I do want is a way to stream the browser from the server to the client if it encounters a reCaptcha, in a manner that allows the user to interact with the browser and fill in the reCaptcha manually, and from there to let the server go on with the rest of its code.\nSomething similar to TeamViewer's functionality, to implement in my setup.\nI actually couldn't find any direction to follow on that subject yet, and couldn't figure out a method to try myself.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":70,"Q_Id":57053912,"Users Score":0,"Answer":"If you work with Selenium, then you have the opportunity to programmatically wait for elements or to detect elements. You could just have your program wait for the ReCaptcha element and make an output to the user in the console that he should solve the ReCaptcha. In the background your program already waits for the elements that appear when the ReCaptcha is solved. 
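A minimal Selenium sketch of the waiting pattern that answer describes: print a prompt, then block until an element that only exists once the reCAPTCHA is solved becomes present. The URL, locator and timeout are assumptions.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
driver.get("https://example.com/protected-page")      # placeholder URL

print("If a reCAPTCHA appeared, please solve it in the browser window...")
# Block for up to 10 minutes until an element that only shows up after the
# captcha is solved (placeholder id) is present, then carry on automatically.
WebDriverWait(driver, 600).until(
    EC.presence_of_element_located((By.ID, "results-table"))
)
print("Captcha solved (or never shown); continuing with scraping.")
```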
Once the user has solved the ReCaptcha, the program will automatically resume.","Q_Score":0,"Tags":"python,recaptcha,rpyc","A_Id":57054034,"CreationDate":"2019-07-16T09:15:00.000","Title":"Is it possible to let the client interact with recaptcha on the server side?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I made a chatbot in AWS Lex and deployed it to facebook. However, I want the chat to display a typing animation before the bot replies. Right now the bot just replies instantly. I want there to be a typing animation to make the bot been more human like. \nIs there some setting in FB Developer I can turn on? I can't seem to find it anywhere. All I see are things for the API call but I am not using any REST calls in my chatbot.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":25,"Q_Id":57096570,"Users Score":0,"Answer":"What if you wait 1 second before sending the message?","Q_Score":0,"Tags":"python,amazon-web-services","A_Id":57096707,"CreationDate":"2019-07-18T14:12:00.000","Title":"AWS Lex and Facebook typing animation","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to download files from a website using python requests module and beautifulsoup4 but the problem is that you have to wait for 5 seconds before the download button appears.\nI tried using requests.get('URL') to get the page and then parse it with beautifulsoup4 to get the download link but the problem is that you have to wait 5 seconds (if you were to open it with an actual browser) in order for the button to appear so when I pass the URL to requests.get() the initial response object doesn't have the button element I searched a lot on google but couldn't find any results that helped me.\nIs there a way to \"refresh\" the response object? or \"wait\"? that is to update it's contents after five seconds as if it were opened with a browser?\nI don't think this is possible with the requests module. What should I do?\nI'm running Windows10 64x \nI'm new so sorry if the formatting is bad. :(","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1237,"Q_Id":57108768,"Users Score":0,"Answer":"HTTP is stateless, every new request goes as a different request to the earlier one. We typically imeplement states in cookies, browser stoarges and so on. Being a plain HTTP client, there is no way for requests to refresh a request, and the next request will be a compleletly new request.\nWhat you're looking for is some client that understands JavaScript and can handle page update automatically. I suggest you to look at selenium which can do browser automation.","Q_Score":1,"Tags":"python,html,python-requests","A_Id":57108853,"CreationDate":"2019-07-19T08:46:00.000","Title":"Is there a way to \"refresh\" a request?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I wrote a Python script which scrapes a website and sends emails if a certain condition is met. 
It repeats itself every day in a loop.\nI converted the Python file to an EXE and it runs as an application on my computer. But I don't think this is the best solution to my needs since my computer isn't always on and connected to the internet.\nIs there a specific website I can host my Python code on which will allow it to always run?\nMore generally, I am trying to get the bigger picture of how this works. What do you actually have to do to have a Python script running on the cloud? Do you just upload it? What steps do you have to undertake?\nThanks in advance!","AnswerCount":3,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":1140,"Q_Id":57126697,"Users Score":3,"Answer":"well i think one of the best option is pythonanywhere.com there you can upload your python script(script.py) and then run it and then finish.\ni did this with my telegram bot","Q_Score":1,"Tags":"python,cloud,hosting","A_Id":63171389,"CreationDate":"2019-07-20T16:40:00.000","Title":"How to host a Python script on the cloud?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I wrote a Python script which scrapes a website and sends emails if a certain condition is met. It repeats itself every day in a loop.\nI converted the Python file to an EXE and it runs as an application on my computer. But I don't think this is the best solution to my needs since my computer isn't always on and connected to the internet.\nIs there a specific website I can host my Python code on which will allow it to always run?\nMore generally, I am trying to get the bigger picture of how this works. What do you actually have to do to have a Python script running on the cloud? Do you just upload it? What steps do you have to undertake?\nThanks in advance!","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":1140,"Q_Id":57126697,"Users Score":2,"Answer":"You can deploy your application using AWS Beanstalk. It will provide you with the whole python environment along with server configuration likely to be changed according to your needs. Its a PAAS offering from AWS cloud.","Q_Score":1,"Tags":"python,cloud,hosting","A_Id":57130698,"CreationDate":"2019-07-20T16:40:00.000","Title":"How to host a Python script on the cloud?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Some modules (for example lxml) return ModuleNotFoundError when I run code with them using python3 command. But when I use python3.6 command everything works well. 
Why it is so?\nP.S.\npython3 --version returns Python 3.7.2","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":47,"Q_Id":57180069,"Users Score":2,"Answer":"This is due to your environment variables pointing python to 3.7.2 you probably have python 3.6 installed too but your python is pointing to 3.7.2.\nYou can change it under the computer environment variables and changing to path of your python to your 3.6","Q_Score":0,"Tags":"python,python-3.x,python-3.6","A_Id":57180136,"CreationDate":"2019-07-24T09:51:00.000","Title":"ModuleNotFoundError when i use python3","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been successfully using the Python Tweepy library to download data (tweets) from Twitter. I had substantial help with my code using Tweepy. For the next stage of my research, I need to access the Premium Search API, which I cannot do using Tweepy. Twitter recommends using TwitterAPI for premium search, available from @geduldig on GitHub. The problem is I'm new to Python and it would be a steep learning curve for me to learn TwitterAPI. Am I able to use TwitterAPI just to access the premium search API, but use Tweepy for other tasks (implement search query, etc)?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":69,"Q_Id":57186757,"Users Score":0,"Answer":"I do not know specifics about the libraries you asked, but of course, you could use both the libraries in your program. \nYou need to spend a little bit more time grokking the TwitterAPI library. I do not think that amounts to a steep learning curve.","Q_Score":0,"Tags":"python,github,twitter,tweepy,twitterapi-python","A_Id":57186922,"CreationDate":"2019-07-24T15:45:00.000","Title":"Python Novice: where to start with Python and TwitterAPI","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python code in PyCharm in which i am using\nimport requests\nbut the terminal is showing me the following error:\n(venv) ahmad@Ahmad:~\/Desktop\/Spiders$ python test.py\nTraceback (most recent call last):\n File \"test.py\", line 1, in \n import requests\nModuleNotFoundError: No module named 'requests'\nBut I have installed pip and requests as well.","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":19489,"Q_Id":57212573,"Users Score":0,"Answer":"You can just found the icon of Search in right highest corner and search for 'import modules'. Then click on \"Project: [YOUR_PROJECT]\" choose 'Python Interpreter' and then click on 'plus' button. 
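For the python3-versus-python3.6 mix-up above (and the PyCharm import problem that follows), a two-line check that shows exactly which interpreter and module search path a script is using, which usually makes this kind of ModuleNotFoundError obvious.

```python
import sys

print(sys.executable)   # the interpreter actually running this script
print(sys.path)         # where it will look for modules such as lxml or requests
```

One way to keep the installation and the interpreter aligned is to install with `python3 -m pip install <package>`, which targets the same interpreter that the `python3` command runs.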
Thus you can install 'requests' module manually.\n(Some sites say that you can call Search by pressing Alt + Enter, but it doesn`t work in my case)","Q_Score":4,"Tags":"python,pip,python-requests","A_Id":68539185,"CreationDate":"2019-07-26T02:58:00.000","Title":"ModuleNotFoundError: No module named 'requests' in PyCharm","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python code in PyCharm in which i am using\nimport requests\nbut the terminal is showing me the following error:\n(venv) ahmad@Ahmad:~\/Desktop\/Spiders$ python test.py\nTraceback (most recent call last):\n File \"test.py\", line 1, in \n import requests\nModuleNotFoundError: No module named 'requests'\nBut I have installed pip and requests as well.","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":19489,"Q_Id":57212573,"Users Score":0,"Answer":"I had to go to the CLi (command line) and run \">pip install requests\" - this after trying unsuccessfully from the Pycharm 'Setting\/Interpreter' gui menu.","Q_Score":4,"Tags":"python,pip,python-requests","A_Id":69285809,"CreationDate":"2019-07-26T02:58:00.000","Title":"ModuleNotFoundError: No module named 'requests' in PyCharm","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python code in PyCharm in which i am using\nimport requests\nbut the terminal is showing me the following error:\n(venv) ahmad@Ahmad:~\/Desktop\/Spiders$ python test.py\nTraceback (most recent call last):\n File \"test.py\", line 1, in \n import requests\nModuleNotFoundError: No module named 'requests'\nBut I have installed pip and requests as well.","AnswerCount":4,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":19489,"Q_Id":57212573,"Users Score":8,"Answer":"you can use this : \njust go to file > sitting > project : name > project interpreter > add > search and install","Q_Score":4,"Tags":"python,pip,python-requests","A_Id":57226478,"CreationDate":"2019-07-26T02:58:00.000","Title":"ModuleNotFoundError: No module named 'requests' in PyCharm","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a list of url and many of them are invalid. When I use scrapy to crawl, the engine will automatically filter those urls with 404 status code, but some urls' status code aren't 404 and will be crawled so when I open it, it says something like there's nothing here or the domain has been changed, etc. Can someone let me know how to filter these types of invalid urls?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1284,"Q_Id":57216139,"Users Score":0,"Answer":"In your callback (e.g. 
parse) implement checks that detect those cases of 200 responses that are not valid, and exit the callback right away (return) when you detect one of those requests.","Q_Score":0,"Tags":"python,scrapy,web-crawler","A_Id":57308027,"CreationDate":"2019-07-26T08:32:00.000","Title":"How to check if a url is valid in Scrapy?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I tried to connect it with SVN in aws ec2. It is showing Network timeout. After that i tried scp to transfer again it is showing the same error connection refused. Then i tried AWS S3 ec2 copy command to copy the file from local machine to S3 bucket but again it is showing the same error.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":13,"Q_Id":57218711,"Users Score":0,"Answer":"The approach which I use to move code or files from local server to ec2 instance is git.\nIt maintains my file changes with proper commit messages as well as a proper set of revisions or iterations I have done until now.","Q_Score":0,"Tags":"python-3.x","A_Id":57218757,"CreationDate":"2019-07-26T11:01:00.000","Title":"AWS to local machine file sharing","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"My gethostbyaddr() is giving a different host name from what I need. The host name that I need does not even show up in the alias list.\nSo, I was confused with the weird hostnames that I get when i try gethostbyaddr for different websites. \nSo, I tried getting the ip of amazon using gethostbyname. Then I used the resulting IP in gethostbyaddr() but I did not get the hostname of amazon. I read the official documentation and it states that the alias list returned contains the alternative host names, but I still do not get www.amazon.com\nSo This is what I tried doing.\nsocket.gethostbyname('www.amazon.com')\nAnd my result was: '13.35.134.162'\nThen I input this IP:\nsocket.gethostbyaddr('13.35.134.162')\nBut my result is:\n('server-13-35-134-162.del54.r.cloudfront.net', [], ['13.35.134.162'])\nCan someone explain why 'www.amazon.com' is not displayed and what is this hostname that I get?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":343,"Q_Id":57230607,"Users Score":1,"Answer":"A website name is not equal to hostname. There is no 1:1 relationship in general. One computer can serve many websites. OTOH a busy website is served by many computers (so called load balancing). CDNs (content delivery networks) use some BGP-4 tricks (BGP-4 = an important routing protocol) to connect you to a server geographically near you - they run several \"clones\" of a webiste in different locations.\nWhat are your needs? 
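Following the Scrapy answer above (return from the callback early when a 200 response is not really valid), a minimal sketch; the start URL and the phrases used to detect a "nothing here" page are placeholders.

```python
import scrapy

class LinkCheckSpider(scrapy.Spider):
    name = "linkcheck"
    start_urls = ["https://example.com/page1"]                        # placeholder URL list
    INVALID_MARKERS = ("nothing here", "domain has been changed")     # placeholder phrases

    def parse(self, response):
        text = response.text.lower()
        if response.status != 200 or any(marker in text for marker in self.INVALID_MARKERS):
            return      # treat as an invalid URL and drop it
        yield {"url": response.url, "title": response.css("title::text").get()}
```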
If you want to be sure you are connected to the right website, rely on HTTPS certificates.","Q_Score":0,"Tags":"python,python-3.x,sockets","A_Id":57230747,"CreationDate":"2019-07-27T08:47:00.000","Title":"gethostbyaddr gives a different host name","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My gethostbyaddr() is giving a different host name from what I need. The host name that I need does not even show up in the alias list.\nSo, I was confused with the weird hostnames that I get when i try gethostbyaddr for different websites. \nSo, I tried getting the ip of amazon using gethostbyname. Then I used the resulting IP in gethostbyaddr() but I did not get the hostname of amazon. I read the official documentation and it states that the alias list returned contains the alternative host names, but I still do not get www.amazon.com\nSo This is what I tried doing.\nsocket.gethostbyname('www.amazon.com')\nAnd my result was: '13.35.134.162'\nThen I input this IP:\nsocket.gethostbyaddr('13.35.134.162')\nBut my result is:\n('server-13-35-134-162.del54.r.cloudfront.net', [], ['13.35.134.162'])\nCan someone explain why 'www.amazon.com' is not displayed and what is this hostname that I get?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":343,"Q_Id":57230607,"Users Score":1,"Answer":"This has nothing to do with python, but rather how DNS works. A single IP address can host many web sites and thus have many host names. As a result, the name to IP lookup is not always reversible.","Q_Score":0,"Tags":"python,python-3.x,sockets","A_Id":57230754,"CreationDate":"2019-07-27T08:47:00.000","Title":"gethostbyaddr gives a different host name","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to parse a page, keeping HTML and JS the same as in my own browser. Site must think, that I am logged using the same browser, I need to \"press\" some buttons using JS and find some elements.\nWhen using requests library or selenium.webdriver.Firefox(), site think I am from a new browser. But I think selenium must help.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":112,"Q_Id":57232605,"Users Score":0,"Answer":"Requests cannot process JavaScript, nor can it parse HTML and CSS to create a DOM. Requests is just a very nice abstraction around making HTTP requests to any server, but websites\/browsers aren't the only things that use HTTP.\nWhat you're looking for is a JavaScript engine along with an HTML and CSS parser so that it can create an actual DOM for the site and allow you to interact with it. Without these things, there'd be no way to tell what the DOM of the page would be, and so you wouldn't be able to click buttons on it and have the resulting JavaScript do what it should. \nSo what you're looking for is a web browser. There's just no way around it. Anything that does those things, is, by definition, a web browser. \nTo clarify from one of your comments, just because something has a GUI, that doesn't mean it isn't automatic. In fact, that's exactly what Selenium is for (i.e. automating the interactions with the GUI that is the web page). 
It's not meant to emulate user behavior exactly 1:1, and it's actually an abstraction around the WebDriver protocol, which is meant for writing automated tests. However, it does allow you to interact with the webpage in a way that approximates how a user would interact with it.\nYou may not want to see the GUI of the browser, but luckily, Chrome and Firefox have \"headless\" modes, and Selenium can control headless instances of those browsers. This would have the browser GUI be hidden while Selenium controls it, which sounds like what you're looking for.","Q_Score":0,"Tags":"python,selenium,parsing,web,browser","A_Id":57240490,"CreationDate":"2019-07-27T13:30:00.000","Title":"Navigate webpage as if from my browser (Python, selenium)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I built a python app that ran happily in its docker container until I added some http calls.. Using the library 'requests.' I did some research and alpine linux 3.7 has a library called 'py-requests'.. Only I can't seem to install this on my own system, to change my code to use it, and errors are thrown just leaving 'import requests' in my code. Py-pip doesn't seem to work either.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1115,"Q_Id":57261543,"Users Score":-1,"Answer":"I couldn't use pip install because I was blocked by a corporate firewall. I added --proxy mycompanyproxy.com:80 --trusted-host pypi.org --trusted-host files.pythonhosted.org to my dockerfile's pip install command and was able to get what I needed. Thanks for commenting, and sorry for the vagueness, I am a noob :)","Q_Score":0,"Tags":"python,docker,alpine","A_Id":57317442,"CreationDate":"2019-07-29T21:18:00.000","Title":"How to use py-requests in alpine linux docker?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to implement very simple authentication for updating values on my server. Here's the situation:\nI have a door sensor hooked up to a raspberry pi. Every time the sensor is triggered ('Opened' or 'Closed'), I send a POST request out to my Digital Ocean droplet at 'api.xxxxxx.com' which points to a restify server. The POST request body contains the sensor state, a time-stamp, and an API key. The RESTify server also has a file called 'constants.js' that contains the same API key. If the API key sent from the RPi is the same as the one in the constants file on my droplet, it allows values to update (latest state\/time). If not, it just sends back an error message.\nThe API key is a password sent through SHA3-256.\nIs this scheme okay for what I'm doing? The only thing I could think of is if someone found the endpoint, they might be able to spam requests to it, but nothing else. The API key (on my local raspberry pi and on the droplet) are kept in different files and excluded from git, so viewing git files would not reveal anything.\nI don't expect anyone to have access to my droplet or raspberry pi either, so if I set up SSH correctly I don't see how it (the API key in the files) could be leaked either.\nEDIT: Forgot to say that I'm using Python on the Raspberry Pi to send out POSTs. 
The droplet is running a RESTify server (JS).","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":48,"Q_Id":57262893,"Users Score":1,"Answer":"Well, you are vulnerable to network snooping. If anyone can snoop either of the network links, then they can steal the API key and are free to use your service with it. \nHTTPS on both links would prevent that. HTTPS could also prevent any sort of DNS hijack that could trick the Pi into sending the APIKey to a false host (thus stealing it that way).\nOther than that, your API key is your secret that controls access so as long as it is secured in storage at both ends and secured in transit and it's sufficiently hard to guess, you are OK.","Q_Score":0,"Tags":"javascript,python,node.js,authentication,restify","A_Id":57263175,"CreationDate":"2019-07-30T00:41:00.000","Title":"Very simple authentication on my own server, is this okay?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Trying to figure out how to make python play mp3s whenever a tag's text changes on an Online Fantasy Draft Board (ClickyDraft).\nI know how to scrape elements from a website with python & beautiful soup, and how to play mp3s. But how do you think can I have it detect when a certain element changes so it can play the appropriate mp3?\nI was thinking of having the program scrape the site every 0.5seconds to detect the changes,\nbut I read that that could cause problems? Is there any way of doing this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":40,"Q_Id":57291451,"Users Score":0,"Answer":"The only way is too scrape the site on a regular basis. 0.5s is too fast. I don't know how time sensitive this project is. But scraping every 1\/5\/10 minute is good enough. If you need it quicker, just get a proxy (plenty of free ones out there) and you can scrape the site more often.\nJust try respecting the site, Don't consume too much of the sites ressources by requesting every 0.5 seconds","Q_Score":1,"Tags":"python,html,beautifulsoup,mp3","A_Id":57292691,"CreationDate":"2019-07-31T13:05:00.000","Title":"Is it possible to write a Python web scraper that plays an mp3 whenever an element's text changes?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have 2 web elements with the same XPath and name but under the different navigation links. Both links point to the same page but with a different title. How should I access each of the links uniquely?\nI am a beginner to python selenium programming","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":23,"Q_Id":57300618,"Users Score":0,"Answer":"You have the answer in the question itself. You must try identifying the elements using title with combination of other attributes if they have unique titles.\nPS: Please(always) add the code you have tried and the relevant(in this case HTML) required files\/snippet which can help us understand and investigate better. 
Questions with enough information get faster replies :)","Q_Score":0,"Tags":"python-3.x,selenium-chromedriver","A_Id":57302283,"CreationDate":"2019-08-01T01:05:00.000","Title":"Web element with same name and xpath under two elements in the navigation pane","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I was learning how to play music using selenium so I wrote a program which would be used as a module to play music. Unfortunately I exited the python shell without exiting the headless browser and now the song is continuously playing.\nCould someone tell me how I can find the current headless browser and exit it?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":84,"Q_Id":57305632,"Users Score":1,"Answer":"You need to include in your script to stop the music before closing the session of your headless browser.","Q_Score":0,"Tags":"python-3.x,selenium-webdriver","A_Id":57306386,"CreationDate":"2019-08-01T09:16:00.000","Title":"Stop music from playing in headless browser","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I was learning how to play music using selenium so I wrote a program which would be used as a module to play music. Unfortunately I exited the python shell without exiting the headless browser and now the song is continuously playing.\nCould someone tell me how I can find the current headless browser and exit it?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":84,"Q_Id":57305632,"Users Score":1,"Answer":"If you are on a Linux box, You can easily find the process Id with ps aux| grep chrome command and Kill it. If you are on Windows kill the process via Task Manager","Q_Score":0,"Tags":"python-3.x,selenium-webdriver","A_Id":57308855,"CreationDate":"2019-08-01T09:16:00.000","Title":"Stop music from playing in headless browser","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to add a package to PyPi so I can install it with Pip. I am trying to add it using twine upload dist\/*. 
\nThis causes me to get multiple SSL errors such as raise SSLError(e, request=request) requests.exceptions.SSLError: HTTPSConnectionPool(host='upload.pypi.org', port=443): Max retries exceeded with url: \/legacy\/ (Caused by SSLError(SSLError(\"bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])\"))).\nI am using a school laptop and I presume that this is something my administrator has done; however, I can install stuff with pip by using pip3 install --trusted-host pypi.org --trusted-host files.pythonhosted.org.\n I was wondering if there was another way to add my package to pip?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1790,"Q_Id":57354747,"Users Score":5,"Answer":"My guess is your school has something in place where they are replacing the original cert with their own. You could maybe get around it using --cert and referencing the path for your school's cert, but I think an easier workaround is to copy the files to a non-school computer and upload from there.","Q_Score":5,"Tags":"python,ssl,pypi,twine","A_Id":57393456,"CreationDate":"2019-08-05T08:24:00.000","Title":"When adding package to PyPi SSL error occurs","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"All,\nI have googled this to death but I can't figure out how to parse this type of JSON packet.\n{\n\"recipients\":[\n\"\\\"Name 1\\\" \",\n\"\\\"Name 2\\\" \",\n\"\\\"Name 3\\\" \"\n]\n}\nAny help would be appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":12,"Q_Id":57364095,"Users Score":0,"Answer":"Very simple; for example:\nimport json\nj = json.loads('{\"recipients\": [\"Name 1\", \"Name 2\", \"Name 3\"]}')\nprint(j['recipients'])\n=> ['Name 1', 'Name 2', 'Name 3']","Q_Score":0,"Tags":"json,python-2.7","A_Id":57364203,"CreationDate":"2019-08-05T18:19:00.000","Title":"Assistance with parsing Json packet","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am in a situation where I have two endpoints I can ask for a value, and one may be faster than the other. The calls to the endpoints are blocking. I want to wait for one to complete and take that result without waiting for the other to complete.\nMy solution was to issue the requests in separate threads and have those threads set a flag to true when they complete. In the main thread, I continuously check the flags (I know it is a busy wait, but that is not my primary concern right now) and when one completes it takes that value and returns it as the result.\nThe issue I have is that I never clean up the other thread. I can't find any way to do it without using .join(), which would just block and defeat the purpose of this whole thing. So, how can I clean up that other, slower thread that is blocking without joining it from the main thread?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1503,"Q_Id":57377390,"Users Score":0,"Answer":"One bad and dirty solution is to implement a method for the threads which closes the socket that is blocking. 
Now you have to catch the exception in the main thread.","Q_Score":2,"Tags":"python,python-3.x,multithreading,asynchronous","A_Id":57377511,"CreationDate":"2019-08-06T13:39:00.000","Title":"Clean up a thread without .join() and without blocking the main thread","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am doing a small project involving web scraping basic html pages hosted on an internal server we use at work. Upon entry to these pages a popup . . . which looks exactly like a windows.prompt() asks for username and a password with a message like \"Enter credentials for Proxy: \". Is there a way to automatically inject and submit these values on the prompt box using JavaScript or even python?\nYes I do have access but every time I go to a different page it will re-prompt me and I am trying to do this for a very large amount of pages.\nI have already tried inspecting the page but there does not seem to be any element for a popup.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":86,"Q_Id":57401627,"Users Score":1,"Answer":"I suspect you may be getting the login prompt due to the web server's setup for the page and not due to anything included in the content of the page in question. You may need to adjust your app's code to handle the login behavior (setting cookies in your request header for example).","Q_Score":0,"Tags":"javascript,python,html","A_Id":57402523,"CreationDate":"2019-08-07T20:06:00.000","Title":"How to enter in values for a window.prompt() using javascript?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a large tar.gz archive file having nxml files and total size is around 5gb.\nMy aim is to extract files from it but, I do not have to extract all of them. I have to extract all those files whose name is greater than a threshold value.\nFor example:\nLet us consider 1000 is our threshold value. So\npath\/to\/file\/900.nxml will not be extracted but\npath\/to\/file\/1100.nxml will be extracted.\nSo my requirement is to make a conditional extraction of files from the archive.\nThanks","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":123,"Q_Id":57406452,"Users Score":1,"Answer":"You can also use --wildcards option of tar.\nFor example in the case when your threshold is 1000 you can use tar -xf tar.gz --wildcards path\/to\/files\/????*.nxml. The ? will match one character and using * will match any number of character. This pattern will look for any file name with 4 or more characters.\nHope this helps.","Q_Score":1,"Tags":"python,bash,gzip,python-2.x,tar","A_Id":57424232,"CreationDate":"2019-08-08T06:31:00.000","Title":"Conditional extraction of files from an Archive file","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am very new to python and cannot seem to figure out how to accomplish this task. 
I want to connect to a website and extract the certificate information such as issuer and expiration dates.\nI have looked all over, tried all kinds of steps but because I am new I am getting lost in the socket, wrapper etc.\nTo make matters worse, I am in a proxy environment and it seems to really complicate things.\nDoes anyone know how I could connect and extract the information while behind the proxy?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1079,"Q_Id":57432064,"Users Score":0,"Answer":"Python SSL lib don't deal with proxies.","Q_Score":0,"Tags":"python,ssl,python-requests,pyopenssl,m2crypto","A_Id":71436836,"CreationDate":"2019-08-09T14:23:00.000","Title":"Trying to extract Certificate information in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a program that imports the selenium package. It runs fine inside pyCharm but would not run on command prompt, saying \"No module named selenium\" (Windows 7). Is there an easy way to get all the setup in pyCharm and make them available in the command prompt?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":57433767,"Users Score":0,"Answer":"This is because you have imported the packages under a virtual environment(venv) . Therefore the package imports are valid only for the virtual environment. So in order to work it globally you have to install it via command prompt without activating a venv.","Q_Score":0,"Tags":"python,python-venv","A_Id":57433864,"CreationDate":"2019-08-09T16:08:00.000","Title":"Making Packages Available Outside PyCharm (Python3)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Does anyone know of a solution for checking if a tweet has replies or not, without checking the reply_count field of the JSON response? \nI'm building a crawler and already have a method for scraping a timeline for tweets as well as replies to tweets. In order to increase efficiency I want to find out if a tweet has any replies at all before calling my reply method. I have a standard developer account with Twitter so I do not have access to reply_count.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":123,"Q_Id":57436251,"Users Score":0,"Answer":"Looking for this as well. Only way I found was scraping the page (which is against the Terms of Service)\nreply-count-aria-${tweet.id_str}.*?(\\d+) replies","Q_Score":0,"Tags":"twitter,tweepy,twitterapi-python","A_Id":61215841,"CreationDate":"2019-08-09T19:44:00.000","Title":"Is there a workaround for non-premium Twitter developers for getting reply_count?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a conversation where the bot send out multiple replies.\nFor example:\nUser: \u201cHi\u201d\nBot: \u201cHello\u201d\nBot: \u201cHow can I help you?\u201d \nHere the bot sends out multiple messages. In interactive mode this is working fine. 
However, after I setup the REST API (following this How to serve chatbot from server?), the bot does not send out multiple messages for obvious reasons. How can I get this working?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":583,"Q_Id":57478153,"Users Score":0,"Answer":"you can use \\n\\n inside a text response to split it into two messages..\nUnfortunately it does not work with button response messages.\nHere You can use custom actions with as many dispatcher.utter_message() events as you want..","Q_Score":2,"Tags":"python,rasa-nlu,rasa-core,rasa","A_Id":71426079,"CreationDate":"2019-08-13T12:53:00.000","Title":"How to make the bot send out multiple messages in sequence over REST API? [Rasa]","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am receiving a SyntaxError: invalid syntax when using Import requests using python 3.7\nI have re-installed requests. \nImport requests\n... \n File \"\", line 10\n Import requests\n ^\nSyntaxError: invalid syntax\nI did not expect any problem. Using Automate The Boring Stuff with Python by Al Sweigart page 237","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":484,"Q_Id":57482476,"Users Score":0,"Answer":"I believe your issue is that you're using Import with a capital 'I' instead of import with a lowercase 'i'.","Q_Score":0,"Tags":"python-3.x,python-requests,installation,python-import","A_Id":57482527,"CreationDate":"2019-08-13T17:14:00.000","Title":"Syntax error on import requests with python 3.7","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to create a webserver on linux in python so that I can use it to get csv files from the linux machine onto a windows machine. I am quite unfamiliar with networking terminology so I would greatly appreciate it if the answer is a little detailed. I dont want to create a website, just the webserver to get the csv file requested.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":755,"Q_Id":57485604,"Users Score":0,"Answer":"You can create an simple file server, using flask, django or any other python web-server frameworks. There are just some small tips:\n\nYou should send csv files through HTTP POST request to web-server. I think multi-part\/form-data is appropriate for this. \nAn small\/simple Database with track files received and their locations on disk.\nYou can also have something to translate csv files to database records on demand. After that you will be able to do very useful things on them.\n\nFor any more questions, ask different questions to get appropriate\/expert answers. (welcome to SO)","Q_Score":0,"Tags":"python,webserver","A_Id":57485935,"CreationDate":"2019-08-13T21:35:00.000","Title":"How to make a python webserver that gives a requested csv file","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have made a pretty big spider that basically extracts the data from an amazon product page.\nThe problem is that sometimes, no data comes back when I extract. 
After that happens I check the URL that was processed and, following the xpath with a chrome tool, the data is in fact there.\nI know that what me and the Chrome tool sees is not the same as what the spider processes so, is it there any way to actually see the source code the spider is trying to extract from? and will the XPath I make with the chrome tool's help be trustworthy?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":497,"Q_Id":57486244,"Users Score":1,"Answer":"Check the view-source with (Ctrl-U in Chrome). Chrome tools will not always line up with the html source. Probably due to the JavaScript on the page.","Q_Score":0,"Tags":"python-3.x,web-scraping,scrapy","A_Id":57501420,"CreationDate":"2019-08-13T22:56:00.000","Title":"Checking source code in a scrapy response","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to send multiple, delayed responses from the webhook written in python, once an intent is been triggered by the user. First response I want immediately after triggering the intent and another response I want after some processing to be performed on top of the user utterance.\nFor example:\n\nUser : I want my account balance.\nBOT : Please tell your account number for details.\nUser : my account number is 218497234.\nBOT : Hold-on a bit we are fetching your details.\nBOT : Welcome John, your account balance is $70000.\n\nIn the above example, this is a bank-bot, which replies to user queries. Currently fetching-up account balance for a user supplying account number. The last two responses from the BOT are from the webhook when say \"account_balance_check\" intent is been triggered. First response is immediate, asking the user to be patient and wait for the account details , while the second response is after fetching the account details from a db using account number.\nAnother way could be to trigger response from the bot, without utterance from the user. In the above case, is there anyway, bot itself can send response to user after telling him to wait? Please note that after telling user to wait, we don't want user utterance to trigger second response.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":331,"Q_Id":57522331,"Users Score":1,"Answer":"Unfortunately, you cannot do that in Dialogflow because once you send the response then your agent will not be able to send the response without any user input or event call.\nHere is an answer if your process does not take a long time. Once you get the user request, send them the waiting message with the \"OK\" suggestion. Once the user clicks on the suggestion, you can show the response. 
Also, process the request with some API and save your data in a common file that you can access through both API and agent and then show the response to the user from the file.","Q_Score":2,"Tags":"python,dialogflow-es,dialogflow-es-fulfillment","A_Id":57523379,"CreationDate":"2019-08-16T09:40:00.000","Title":"How to send multiple delayed responses from Python Webhook once an intent is been triggered?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm developing a discord bot using discord.py (rewrite branch) for my servers and I need to invite the bot to multiple servers and use it simultaneously. \nMy question is: \nDo I need to set up a new thread for every server or does the bot queue events and handle them one by one? if it does queue them, should I just use that or use separate threads?\nSorry if this is a noobish question but I'm fairly new to discord.py and I don't really understand how it works just yet.\nThanks for reading","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":8081,"Q_Id":57550170,"Users Score":-1,"Answer":"Multiprocesses, threads or queues can all be used to approach this issue each with their respective advantages and disadvantages . Personally I would use threads as the events that need to take place on each server are independent of each other mostly.","Q_Score":3,"Tags":"python,discord.py-rewrite","A_Id":57550391,"CreationDate":"2019-08-19T03:32:00.000","Title":"How does a discord bot handle events from multiple servers","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We are using selenium python to automate web application. To launch firefox browser, we need to download geckodriver and place it in \/usr\/bin. but, we found that linux version geckodriver is not compatible with Solaris os. whenever I am running selenium python to run code on solaris v5.11 , we got an error like \"Bad System call(core dumped)\"\nsolaris 11.4\npython 2.7.14\nselenium 3.141.0\ngeckodriver 0.24.0\nplease help to resolve the issue","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":78,"Q_Id":57570736,"Users Score":1,"Answer":"Solaris & Linux use very different system calls, and binaries must be compiled specifically for each one - you cannot copy them across from one system to the other - so you will need to either compile geckodriver yourself or find a version already compiled for Solaris, not Linux.","Q_Score":0,"Tags":"python,selenium,solaris,selenium-firefoxdriver","A_Id":57576724,"CreationDate":"2019-08-20T09:37:00.000","Title":"Geckodriver for Solaris OS","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Scrapy gets 302 redirect to another link. In the link 'https:\/\/xxxxxx.queue-it.net?c.....com' Scrapy does not add the '\/'. It should be'https:\/\/xxxxxx.queue-it.net\/?c.....com'. \nI have tried adding '\/' in middleware.py. Under downloaderMiddleware function. But, it does not work. \nScrapy crawls when I manually add the redirect link with '\/'. 
However, it is not very dynamic.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":74,"Q_Id":57602863,"Users Score":1,"Answer":"Set 'dont_redirect': True in the specific request or disable redirect globally by setting REDIRECT_ENABLED setting to False.","Q_Score":0,"Tags":"python,web-scraping,scrapy","A_Id":57604799,"CreationDate":"2019-08-22T06:00:00.000","Title":"Scrapy get's redirected to follow 302 and it does not crawl the site","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using paramiko to create a SFTP server. I have succeeded in uploading and downloading files to and from server on client request.But, I need to send a file from server to client whenever I need without client request. So, instead of breaking my head on making server send a file to client I want to make both machines act as both server and client in different ports so that when I need to send a file from machine A to B I can just Upload it to the SFTP server running on that port. Is this hypothesis possible?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":450,"Q_Id":57615144,"Users Score":0,"Answer":"You already know that you cannot send a file from an server to a client:\nCan I send a file from SFTP Server to the Client without any request from it?\n(The question on Server Fault has been deleted)\n\nTo answer your port question:\nYou do not care about client's port. It is automatically assigned to any available port, without you ever needing to know its value. In general, that's true for any TCP\/IP connection, not only SFTP.\nSo you can just run SFTP server on both machines on the standard port 22. And use your client code on the other machine to connect to it.","Q_Score":0,"Tags":"python,windows,sftp,paramiko","A_Id":57616068,"CreationDate":"2019-08-22T18:23:00.000","Title":"Is it possible to run both SFTP server and client in a same machine on different ports?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My setup is flask-socketio with a flask-restful webserver.\nEventlet is installed, so in production mode, eventlet webserver is used.\nI understand flask-socketio and eventlet webserver themselves are event-loop based.\nDoes flask-socketio and eventlet webserver runs on the same eventloop (same thread) or in two different threads?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":692,"Q_Id":57619141,"Users Score":1,"Answer":"I think you are confusing the terminology.\nThe event loop is the task scheduler. This is provided by eventlet, and a single event loop is used for the whole application, including the Flask and the Flask-SocketIO parts.\nEach time a request arrives to the eventlet web server, it will allocate a new task for it. So basically each request (be it Flask or Flask-SocketIO, HTTP or WebSocket) will get its own task. Tasks are constantly being created and destroyed as requests are handled.\nWhen you use eventlet, tasks are not threads, they are greenlets, that is why I avoided calling them threads above and used the more generic \"task\" term. 
They behave like threads in many ways, but they are not.","Q_Score":0,"Tags":"python,flask,flask-restful,flask-socketio,eventlet","A_Id":57630398,"CreationDate":"2019-08-23T02:28:00.000","Title":"Flask-SocketIO with eventlet: Web and Websockets Thread","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to detect the drive failure in Datanode in a Hadoop Cluster. Cloudera Manager API don't have any specific API for that. CM API are only related to Name node or restart services. Are there any suggestions here? Thanks a lot!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":520,"Q_Id":57621257,"Users Score":1,"Answer":"If you have access to NameNode UI, the JMX page will give you this information. If you hit the JMX page directly it'll be a JSON formatted page, which can be parsed easily.\nWe use HortonWorks primarily, haven't touched Cloudera in a long time, but I assume that can be made available somehow.","Q_Score":0,"Tags":"python-3.x,hadoop,hadoop-yarn,cloudera,cloudera-manager","A_Id":57633508,"CreationDate":"2019-08-23T06:58:00.000","Title":"How to detect in Hadoop cluster if any Datanode drive (Storage) failed","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Seeking a bit of guidance on a general approach as to how one would automate the retrieval of data from a My Google Map. While I could easily export any given layer to KML\/KMZ, I'm looking for a way to do this within a larger script, that will automate the process. Preferably, where I wouldn't even have to log in to the map itself to complete the data pull.\nSo, what do you think the best approach is? Two possible options I'm considering are 1) using selenium\/beautiful soup to simulate page-clicks on Google Maps and export the KMZ or 2) making use of Python Google Maps API. Though, I'm not sure if this API makes it possible to download Google Maps layer via a script. \nTo be clear, the data is already in the map - I'm just looking for a way to export it. It could either be a KMZ export, or better yet, GeoJSON.\nAny thoughts or advice welcome! Thank you in advance.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":280,"Q_Id":57664000,"Users Score":0,"Answer":"I used my browser\u2019s inspection feature to figure out what was going on under the hood with the website I was interested in grabbing data from, which led me to this solution. \nI use Selenium to login and navigate said website, then transfer my cookies to Python\u2019s Requests package. I have Requests send a specific query to the server whose response is in the form of JSON. I was able to figure out what query to send and what form the response would be through the inspection feature previously stated. Once I have the response in JSON I use Python\u2019s JSON package to convert into a Python dict to use however I need. \nSounds like you might not necessarily need Selenium but it does sound like the Requests package would be useful to your use case. I think your first step is figuring out what form the server response is when you interact with the website naturally to get what you want. 
\nHopefully this helps to some degree!","Q_Score":2,"Tags":"python,selenium,google-maps,geospatial,kmz","A_Id":57664626,"CreationDate":"2019-08-26T19:41:00.000","Title":"How to automate pulling data (KMZ? JSON?) from My Google Maps","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am an extreme beginner with Python and its libraries and installation in general. I want to make an extremely simple google search web scraping tool. I was told to use Requests and BeautifulSoup. I have installed python3 on my Mac by using brew install python3 and I am wondering how to get those two libraries\nI googled around and many results said that by doing brew install python3 it will automatically install pip so I can use something like pip install requests but it says pip: command not found.\nby running python3 --version it says Python 3.7.4","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":165,"Q_Id":57665963,"Users Score":0,"Answer":"Since you're running with Python3, not Python (which usually refers to 2.7), you should try using pip3.\npip on the other hand, is the package installer for Python, not Python3.","Q_Score":0,"Tags":"python-3.x,pip","A_Id":57666013,"CreationDate":"2019-08-26T23:20:00.000","Title":"How to install stuff like Requests and BeautifulSoup to use in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"This is probably a really simple question, but I can't seem to find an answer online.\nI'm using a Google Cloud Function to generate a CSV file and store the file in a Google Storage bucket. I've got the code working on my local machine using a json service account.\nI'm wanting to push this code to a cloud function, however, I can't use the json service account file in the cloud environment - so how do I authenticate to my storage account in the cloud function?","AnswerCount":1,"Available Count":1,"Score":0.6640367703,"is_accepted":false,"ViewCount":89,"Q_Id":57720636,"Users Score":4,"Answer":"You don't need the json service account file in the cloud environment.\nIf the GCS bucket and GCF are in the same project, you can just directly access it.\nOtherwise, add your GCF default service account(Note: it's App Engine default service account ) to your GCS project's IAM and grant relative GSC permission.","Q_Score":0,"Tags":"python-3.x,google-cloud-platform,google-cloud-functions,google-cloud-storage","A_Id":57721212,"CreationDate":"2019-08-30T04:47:00.000","Title":"Authenticating Google Cloud Storage SDK in Cloud Functions","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have written a script in selenium python which is basically opening up a website and clicking on links in it and doing this thing multiple times..\nPurpose of the software was to increase traffic on the website but after script was made it has observed that is not posting real traffic on website while website is just taking it as a test and ignoring it.\nNow I am wondering whether it is basically possible with selenium or not? 
\nI have searched around and I suppose it is possible but don't know how. Do anyone know about this? Or is there any specific piece of code for this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1011,"Q_Id":57729315,"Users Score":0,"Answer":"It does create traffic, the problem is websites sometimes defends from bots and can guess if the income connection is a bot or not, maybe you should put some time.wait(seconds) between actions to deceive the website control and make it thinks you are a person","Q_Score":1,"Tags":"python,selenium,automation,web-traffic","A_Id":57729437,"CreationDate":"2019-08-30T15:12:00.000","Title":"Can selenium post real traffic on a website?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to download a file from onedrive or google drive using wget or a faster method!\nI've tried lots of code i could find on the internet but have not had any luck.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":153,"Q_Id":57733624,"Users Score":1,"Answer":"This has worked for me to download a non-protected-link file:\n\nGo to your OneDrive\nClick on the share link and copy. It will look like: https:\/\/...\/EvZJeK2tIMOs54OA?\nAppend download=1 after ? looking like https:\/\/...\/EvZJeK2tIMOs54OA?download=1\n\nNow you can use it with wget like:\nwget https:\/\/...\/EvZJeK2tIMOs54OA?download=1 -O .\nNote: The -O (capital O) is to define an output name otherwise it will have EvZJeK2tIMOs54OA?download=1 as the file name.","Q_Score":0,"Tags":"python","A_Id":71328453,"CreationDate":"2019-08-30T21:57:00.000","Title":"How to download file from one drive\/google drive","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How to set the source IP address for UDP multicast packages to something else than the interface IP?\nI am trying to write a small router that selectively routes UDP SSDP packages from one network to another. The plan is to do it in python, although I am flexible on that.\nIt seems fairly easy to route SSDP NOTIFY messages: I receive them on one interface and decide which interface to re-broadcast them on. However the protocol for M-SEARCH messages require that the source IP is set to the original source of the message as any service that chooses to respond will respond with a unicast message to the source IP and port.\nExample (heavily simplified):\n\nNetwork A: 192.168.10.0\/24\nNetwork B: 192.168.11.0\/24\nMy router application runs on a multihomed computer on 192.168.10.2 and 192.168.11.2.\n\nA client on network A with IP 192.168.10.10 sends an M-SEARCH message:\n\nSrc IP\/Port: 192.168.10.10 port 40000\nDst IP\/Port: 239.255.255.250 port 1900\n\nMy \"router application\" on 192.168.10.2 receives the packet and would like to rebroadcast it on network B. However I cannot find any method in the socket API that allows me to set the source IP address. 
Only to pick the source interface.\nThus the rebroadcasted packet now looks like this:\n\nSrc IP\/Port: 192.168.11.2 port xxxxx\nDst IP\/Port: 239.255.255.250 port 1900\n\nAnd now the receiving service is unable to unicast back to the client as the original IP and port are lost.","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":403,"Q_Id":57744381,"Users Score":3,"Answer":"\nHow to set the source IP address for UDP multicast packages to something else than the interface IP?\n\nThe only way I know of is to use a RAW socket and construct the IP headers manually. Note that use of RAW sockets is typically restricted to admin users on most platforms.\n\nHowever I cannot find any method in the socket API that allows me to set the source IP address. \n\nBecause there isn't one.\n\nAnd now the receiving service is unable to unicast back to the client as the original IP and port are lost.\n\nCan't you just have your router remember the original source when it receives the M-SEARCH request, and when it receives the unicast reply then forward it to the original requester? That is how most routers usually work.","Q_Score":0,"Tags":"python,sockets,multicast","A_Id":57752433,"CreationDate":"2019-09-01T08:26:00.000","Title":"Setting source address when sending UDP multicast messages","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to send Confidential emails via gmail, although I am not sure it is supported. Via the gmail API, the User.messages fields do not seem to indicate whether an email is confidential or not. I retrieved a confidential and regular email via the v1 gmail api get method and the messages are identical. Is there a way to send a confidential gmail email, either with the official api or other service?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":319,"Q_Id":57782111,"Users Score":1,"Answer":"To my knowledge you can't send messages with \"Confidential mode\" through the Gmail API. This is not currently supported.","Q_Score":0,"Tags":"python,google-api,gmail-api,google-api-python-client","A_Id":57782321,"CreationDate":"2019-09-04T05:41:00.000","Title":"Send Confidential email through gmail api","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need help to find the bottleneck with my scrapy\/python based scraper.\nWe are scraping products from Amazon (Italy at the moment) but we are struggling with overall request throughput.\nWe are using backconnect rotating proxies: StormProxies (50 threads plan) + Proxyrotator (100 threads) + TOR, but even with 250+ available threads we can scrape only 1\/2 URLs per second...\nWe are running it on an OVH dedicated server, 8 cores x 16GB RAM, with Redis, Celery and Docker as additional tools.\nI am an IT technician; the software is developed by my Indian dev, so if you need additional info or code just ask!\nThanks in advance","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":604,"Q_Id":57795692,"Users Score":0,"Answer":"Amazon, like other big websites, is using AI to detect the incoming requests. 
It means that if you are requesting a lot of traffic to the same \"logical\" order on the same website, they will detect you and try to ban you (even if you are using other proxies).\nTry to separate and do it on different servers. Instead of using only one big server, use several small servers.\nMy recommendations:\n\nUse proxies that are reliable (different providers)\nUse random time to request every URL (just to confuse the detection algorithm and to avoid being identified as the same user using different IP)\nUse different agents too (rotating)\nBe nice to the site you are going to scrape. See robots.txt file to know more\nIf you are scraping a list of elements in Amazon (for instance, the top products sell in books category), don't scrape it sequentially. Pick up randomly to scrape that list of products in order to be unpredictable\nUse a good structure to scrape (parallelizing). I suggest use:\n\n\nStore the URLs to scrape in Redis\nUse differents ScrapyD servers to run horizontal scraping\nUse monitoring system to review logs, to review errors and manage the servers\n\n\nI hope it helps!","Q_Score":0,"Tags":"python,web-scraping,scrapy,amazon","A_Id":59702840,"CreationDate":"2019-09-04T21:11:00.000","Title":"Scrapy extremely slow: probably bottleneck","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"We are using protobufs to model our networking software. There are many instances, like priorities, where 0 is a valid value. But, when we transport, the fields with 0 values are suppressed. \nIs there a way to change this behavior? That is, differentiate a filed with a valid value of 0, from a field which has not been set, which can be suppressed?\nOur client is gRPC-Java and server is gRPC-Python.\nThank you for your time.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":106,"Q_Id":57796407,"Users Score":1,"Answer":"You can use Protobuf version 2, which can distinguish whether the field has been set. However, gRPC recommend to use Protobuf version 3.\nAn alternative is to set the field to an invalid value, e.g. -1, if the field is NOT set.","Q_Score":1,"Tags":"protocol-buffers,grpc,grpc-java,grpc-python","A_Id":57798038,"CreationDate":"2019-09-04T22:41:00.000","Title":"Protobuf fields where a value of 0 is valid","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an url of tweet and want to get texts of tweet from the url in Python. But I have no idea. I searched about Tweepy, But I think It's for search, upload tweets not to get texts from url.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":183,"Q_Id":57824627,"Users Score":0,"Answer":"You could probably use Beautiful Soup to grab the info you need via cURL. They have decent documentation. 
Does Twitter have an API you could use?","Q_Score":0,"Tags":"python,twitter,tweets","A_Id":57824762,"CreationDate":"2019-09-06T15:13:00.000","Title":"How to get texts from an url of tweets in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have some kind of single page application which composes XHR requests on-the-fly. It is used to implement pagination for a list of links I want to click on using selenium.\nThe page only provides a Goto next page link. When clicking the next page link a javascript function creates a XHR request and updates the page content.\nNow when I click on one of the links in the list I get redirected to a new page (again through javascript with obfuscated request generation). Though this is exactly the behaviour I want, when going back to the previous page I have to start over from the beginning (i.e. starting at page 0 and click through to page n)\nThere are a few solutions which came to my mind:\n\nblock the second XHR request when clicking on the links in the list, store it and replay it later. This way I can skim through the pages but keep my links for replay later\nSomehow 'inject' the first XHR request which does the pagination in order to save myself from clicking through all the pages again \n\nI was also trying out some simple proxies but https is causing troubles for me and was wondering if there is any simple solution I might have missed.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":150,"Q_Id":57861058,"Users Score":0,"Answer":"browsermobproxy integrates easily and will allow you to capture all the requests made. It should also allow you to block certain calls from returning.\nIt does sound like you are scraping a site, so it might be worth parsing the data the XHR calls make and mimicking them.","Q_Score":0,"Tags":"python,selenium","A_Id":57914026,"CreationDate":"2019-09-09T20:53:00.000","Title":"Logging and potentially blocking XHR Requests by javascript using selenium","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"What's going to happen?\nI got a big amount of Scrapy crawlers, written in Python 2.7. I need to convert them all to support Python 3.\nThis transition must be done completely in 1 go. I can only start using the Python 3 crawlers once they're all ready for deployment. I cannot deploy 1 crawler at a time.\nWhat have I done\/what do I have?\nI currently have a remote branch, which is the master branch. Lets call that Remote-A. That is the branch that holds all the crawlers, and they get executed daily. This branch must remain functional.\nFor that remote branch, I have the local folder\/branch, where I fix bugs and create new ones. Lets call that Local-A. From the master, I push and pull from that.\nNow, as \"all operations must remain active\", I need a separate remote branch for Python 3 crawlers. That branch can be called Remote-B. I've created that manually, so the whole repository has two branches now: Master(Remote-A), and the one for Python 3 crawlers(Remote-B), which is a copy of the Master branch.\nI've also created a folder manually, and downloaded a zip from the Python 3 branch. That folder is called Local-B. 
My idea is to either delete all the (2.7) crawlers from Python 3 branch, or just start replacing them one by one.\nTo sum it up. I got Local A connected to Remote A. I also need Local B connected to Remote B. These two connections should not get mixed.\nMind you, I'm not very comfortable with GIT, and I'm the only responsible for the transition project, so I want everything to go smooth as silk. I know that it's easy to cause a lot of damage in GIT.\nSo, my workflow requires me to keep the running crawlers operational on daily basis, and work on upgrading the old ones to Python 3. How do I make the switches between A- and B-sides easy, without causing a havoc?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":107,"Q_Id":57869506,"Users Score":2,"Answer":"If I understand your question, I am afraid you are a bit confused about the concepts of branch and remote. A remote would be github, and also a local mirror of your git repo.\nYou can have as many mirrors as you like. All would contain all the branches.\nIn your case, and also if I understand your question, I would do as follows:\n\nhave a deploy branch (which what you seem to name as Remote-A - a name that is confusing to me): this branch should always be correct\nhave a python2 development branch (suggested name: py2dev), in case you need to perform modifications in the python2 deployed code before deploying the python3 code\nhave a python3 migration branch (suggested name: py3) where you would be migrating the python2 code\n\nWhile the python3 migration is not ready, you would develop on the py2dev branch in your computer. In case you want to publish some change, you would publish these changes in your remote (i.e. github), and then pull these changes from that remote (github) in your deployed repo.\nWhen the python3 migration is ready, you would push your changes again to a remote, and then fetch them, and do a git checkout py3 in the deployed server. If things go wrong, you can do a git checkout deploy and you would be safe again.\nThere are many git workflows. This is supposed to be a simple one.","Q_Score":0,"Tags":"python,git,github,scrapy","A_Id":57869932,"CreationDate":"2019-09-10T11:06:00.000","Title":"How do I switch between remote github branches and local branches\/folders with ease?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an API (in python) which has to alter files inside an EC2 instance that is already running. I'm searching on boto3 documentation, but could only find functions to start new EC2 instances, not to connect to an already existing one.\nI am currently thinking of replicating the APIs functions to alter the files in a script inside the EC2 instance, and having the API simply start that script on the EC2 instance by accessing it using some sort of SSH library. \nWould that be the correct approach, or is there some boto3 function (or in some of the other Amazon\/AWS libraries) that allows me to start a script inside existing instances?","AnswerCount":3,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":2456,"Q_Id":57896987,"Users Score":3,"Answer":"Unless you have a specific service running on that machine which allows you to modify mentioned files. I would make an attempt to log onto EC2 instance as to any other machine via network. 
\nYou can access EC2 machine via ssh with use of paramiko or pexpect libraries.","Q_Score":1,"Tags":"python,amazon-web-services,amazon-ec2,boto3","A_Id":57897084,"CreationDate":"2019-09-11T21:24:00.000","Title":"Running Python Script in an existing EC2 instance on AWS","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"According to performance it is more than obvious that web scraping with BautifulSoup is much faster than using a webdriver with Selenium. However I don't know any other way to get content from a dynamic web page. I thought the difference comes from the time needed for the browser to load elements but it is definitely more than that. Once the browser loads the page(5 seconds) all I had to do is to extract some tags from a table. It took about 3-4 minutes to extract 1016 records which is extremely slow in my opinion. I came to a conclusion that webdriver methods for finding elements such as find_elements_by_name are slow. Is find_elements_by.. from webdriver much slower than the find method in BeautifulSoup? And would it be faster if I get the whole html from the webdriver browser and then parse it with lxml and use the BeautifulSoup?","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":1214,"Q_Id":57938852,"Users Score":1,"Answer":"Look into 2 options:\n1) sometimes these dynamic pages do actually have the data within