[{"Question":"I was trying to scrape data from a website.\nThe code is working but the site blocks my IP address when I was trying to scrape all scrolling pages. Please let me know if there are any suggestions on how to solve this problem. Thanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":71,"Q_Id":67010968,"Users Score":0,"Answer":"You could use proxies.\nIP addresses can be bought very cheaply; then you can iterate through a list of IP addresses while simultaneously varying your browser and other user-agent parameters.","Q_Score":0,"Tags":"python,json,selenium,request","A_Id":67011055,"CreationDate":"2021-04-08T19:51:00.000","Title":"How to prevent IP blocking while scraping data","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a server. Client can send a path and server should cd to that path. But here is the thing. Imagine I have a test2 directory in test1 directory and the path to test1 directory is C:\\test1. The client can access test2 by cd test2 and \\test1\\test2 and if he wants to go back he can use \\test1 (I searched and found os.chdir but it needs the full path and I don't have it) and he shouldn't be free to send E:\\something or anything like that. Just the directories that are in test1. What do you suggest? What can I use to achieve this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":82,"Q_Id":67012663,"Users Score":0,"Answer":"You can store the default path as a kind of root path and use path.join(root, client_path); this way you have a complete path that has to start with C:\\test1.\nThe issue you have to overcome is deciding whether to join the current path or the root path with the client's command. 
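The proxy-rotation advice in the first answer can be sketched as a simple round-robin. The proxy URLs and user-agent strings below are placeholders (assumptions, not real endpoints); you would pass the returned dicts to your HTTP client, e.g. requests.get(url, proxies=..., headers=...):

```python
from itertools import cycle

# Placeholder pools -- real proxy endpoints and UA strings are up to you.
PROXIES = cycle([
    "http://proxy1.example:8080",
    "http://proxy2.example:8080",
    "http://proxy3.example:8080",
])
USER_AGENTS = cycle([
    "Mozilla/5.0 (X11; Linux x86_64)",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
])

def next_request_settings():
    """Rotate proxy and user agent on every request, so consecutive
    requests come from different apparent clients."""
    proxy = next(PROXIES)
    return {"http": proxy, "https": proxy}, {"User-Agent": next(USER_AGENTS)}
```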
I would first check if the directory exists in the current working directory; if not, I would try finding it in the \"root\" path","Q_Score":0,"Tags":"python,sockets,directory,cd","A_Id":67012716,"CreationDate":"2021-04-08T22:25:00.000","Title":"Change directory in a server without leaving the working directory","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Problem\nWhen trying to access and purchase a specific item from store X, which releases limited quantities randomly throughout the week, trying to load the page via the browser is essentially pointless. 99 out of 100 requests time out. By the time 1 page loads, the stock is sold out.\nQuestion\nWhat would be the fastest way to load these pages from a website -- one that is currently under high amounts of stress and timing out regularly -- programmatically, or even via the browser?\nFor example, is it better to send multiple requests and wait until a \"timed out\" response is received? Is it best to retry the request after X seconds has passed regardless? Etc, etc.\nTried\nI've tried both solutions above in browser without much luck, so I'm thinking of putting together a python or javascript solution in order to better my chances, but couldn't find an answer to my question via Google.\nEDIT:\nJust to clarify, the website in question doesn't sporadically time out -- it is strictly when new stock is released and the website is bombarded with visitors. Once stock is bought up, the site returns to normal. New stock releases last anywhere from 5 minutes to 25 minutes.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":48,"Q_Id":67030593,"Users Score":0,"Answer":"The best way is to inspect the website and find out how the HTTP queries are done. 
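Returning to the directory-confinement answer above: a minimal sketch (POSIX paths for illustration, though the question uses Windows paths, and the helper name is hypothetical) of joining the client's path against the current directory first and then the root, while rejecting anything that escapes the root:

```python
import os

def resolve_client_path(root, current, client_path):
    """Join client_path against the current directory, then the root,
    and refuse any result that resolves outside the root directory."""
    root = os.path.realpath(root)
    for base in (current, root):
        candidate = os.path.realpath(os.path.join(base, client_path))
        # The resolved candidate must still live under root.
        if os.path.commonpath([candidate, root]) == root:
            return candidate
    raise ValueError("path escapes the root directory")
```

Because os.path.realpath collapses `..` components before the commonpath check, inputs like `../../etc` cannot sneak past it.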
Maybe there is a special buy request you can reproduce directly. There is no fastest way: because you want to load from a server that is stressed, you will have the same 'luck' as others. You could decrease the ping of your internet connection, but it will do minimal good.","Q_Score":0,"Tags":"javascript,python,selenium,bots,puppeteer","A_Id":67034331,"CreationDate":"2021-04-10T03:42:00.000","Title":"Best way to increase odds of connecting to a server that is temporarily timing out due to an influx of visitors?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Problem\nWhen trying to access and purchase a specific item from store X, which releases limited quantities randomly throughout the week, trying to load the page via the browser is essentially pointless. 99 out of 100 requests time out. By the time 1 page loads, the stock is sold out.\nQuestion\nWhat would be the fastest way to load these pages from a website -- one that is currently under high amounts of stress and timing out regularly -- programmatically, or even via the browser?\nFor example, is it better to send multiple requests and wait until a \"timed out\" response is received? Is it best to retry the request after X seconds has passed regardless? Etc, etc.\nTried\nI've tried both solutions above in browser without much luck, so I'm thinking of putting together a python or javascript solution in order to better my chances, but couldn't find an answer to my question via Google.\nEDIT:\nJust to clarify, the website in question doesn't sporadically time out -- it is strictly when new stock is released and the website is bombarded with visitors. Once stock is bought up, the site returns to normal. 
New stock releases last anywhere from 5 minutes to 25 minutes.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":48,"Q_Id":67030593,"Users Score":0,"Answer":"There can be so many reasons why you are getting so many request timeouts from the server. It may be from your client application or from the server application settings (to reduce unfavorable request behaviours from certain clients) or a simple DNS resolution taking too long. One thing that is sure though is that bugging the server with so many requests at a time will definitely not guarantee you less timeouts, but may aggravate the situation.\nOne way you can solve the problem (if you don't have control of the server side) is to monitor the server application behaviour from your end for at least a day or two. A simple script that sends test requests at regular intervals might do the trick. You can measure parameters like request resolution time, frequency of failed requests, and type (cause) of failure (if that is deducible). These parameters can be measured over a given period of time (a day or two) to know statistically when it is more favourable to make request to the server. This \"profiling\" of the server may not always be accurate but can be done regularly with better thought out parameters to get better results. BTW... 
Enough data may even benefit from some AI data analytics :).","Q_Score":0,"Tags":"javascript,python,selenium,bots,puppeteer","A_Id":67034686,"CreationDate":"2021-04-10T03:42:00.000","Title":"Best way to increase odds of connecting to a server that is temporarily timing out due to an influx of visitors?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have been trying to scrape customer product reviews from Bestbuy for laptop products to analyze feedbacks but was unable to scrape it out. Any help here would be highly appreciated!!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":133,"Q_Id":67036047,"Users Score":0,"Answer":"If BestBuy is anything like EBay, they may have their front end code highly obfuscated. If this is the case it may be easier to use a headless selenium driver to extract reviews.","Q_Score":0,"Tags":"python,screen-scraping","A_Id":67036212,"CreationDate":"2021-04-10T15:22:00.000","Title":"How to Scrape Bestbuy for Customer product reviews using Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to load\/check the new content that has been loaded to a section of a page. Some pages update all the time, but the section that I want updates only once in couple hours or minute. Although, no one knows when there will be the new content uploaded to that section. This can happen 24\/7. What I want to accomplish is whenever there is a new content upload to that section, do something immediately(in this case, go into the link and load the page). 
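For the stressed-server question above, the usual compromise between the two retry strategies the asker mentions is capped exponential backoff with jitter: each failed attempt roughly doubles the maximum wait, with a random spread so many clients do not retry in lockstep. A hedged sketch (timings and names are illustrative only):

```python
import random
import time

def backoff_delays(attempts=6, base=0.5, cap=30.0, rng=None):
    """Yield capped exponential backoff delays with full jitter,
    so retries spread out instead of hammering a stressed server."""
    rng = rng or random.Random()
    for n in range(attempts):
        yield rng.uniform(0, min(cap, base * (2 ** n)))

def retry(fn, attempts=6, sleep=time.sleep):
    """Call fn until it succeeds or the attempts run out."""
    last_error = None
    for delay in backoff_delays(attempts):
        try:
            return fn()
        except Exception as exc:  # e.g. a timeout from the HTTP client
            last_error = exc
            sleep(delay)
    raise last_error
```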
The only thing I can think of as of now is checking that section of the page as frequently as possible, i.e. every 30 seconds or every minute. However, there are thousands of pages (~6000 roughly) that I want to check on. I don't think this is an ideal way to do it, let alone whether that's possible at the frequency I want.\nI'm just wondering if there is a way to do it without asking my bot to scrape every single minute for each page?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":40,"Q_Id":67040149,"Users Score":1,"Answer":"Nope, there is no magic spell here. Web pages do not have a \"notification\" option. If you want the info, you'll need to poll for the info. Yes, it's going to be wasteful, which is why you should ask yourself why you are doing this.","Q_Score":0,"Tags":"python,web-scraping,scrapy","A_Id":67040163,"CreationDate":"2021-04-10T23:22:00.000","Title":"Is there a way to monitor a page 24\/7 and when there is an update, load the new content","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am unfamiliar with the instabot.py python package. I am wondering if there are any security issues with this package like possibly getting information leaked. I am also wondering how the API works if there are a lot of people using this package. Wouldn't you need your own personal Instagram API token? I am confused by the whole concept and if anyone could explain even just a little bit it will be much appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":427,"Q_Id":67041693,"Users Score":0,"Answer":"Bots are now easily detected by Instagram. 
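If you do end up polling as the monitoring answer above suggests, you can at least make each poll cheap by storing one digest per page and comparing it on the next pass; a minimal sketch (the fetching itself is out of scope here, and the helper name is mine):

```python
import hashlib

def section_changed(section_bytes, previous_digest):
    """Return (changed?, new_digest) for one polled page section.
    Storing only a digest per page keeps state for ~6000 pages tiny."""
    digest = hashlib.sha256(section_bytes).hexdigest()
    return digest != previous_digest, digest
```

Where the server supports them, conditional requests (If-Modified-Since / If-None-Match headers) let the server answer 304 Not Modified and skip the body entirely, which reduces the waste further.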
Your account could be banned for 3 days, 7 days, 30 days, or permanently if Instagram detects too many attempts.\nUsually bots simulate a browser via Selenium and then create a \"browse like a human\" bot to create likes, follows, unfollows, etc.","Q_Score":0,"Tags":"python,api,instagram-api,facebook-access-token,instagram-graph-api","A_Id":67127196,"CreationDate":"2021-04-11T05:16:00.000","Title":"Is instabot.py safe to use?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am testing the Interactive Brokers Python API in a hobby project. I am using Interactive Brokers Gateway (rather than TWS). My project is a simple Django-based application. I can connect successfully and receive real-time data using the method reqMktData(). Everything is OK so far.\nBut when I refresh the page it shows 504 Not Connected, although IB Gateway shows there is a running connection. To stop this, during a page reload I am trying to disconnect the previous connection using the EClient disconnect() method available in the API, but it cannot disconnect the running connection.\nDoes anyone have any idea how I can disconnect a running connection in IB Gateway and start a new connection?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":742,"Q_Id":67044292,"Users Score":0,"Answer":"Changing my client Id seems to fix it for me. 
Maybe toggle back and forth between 2 of them?","Q_Score":2,"Tags":"python,interactive-brokers","A_Id":67148810,"CreationDate":"2021-04-11T11:13:00.000","Title":"Interactive Brokers API - How to disconnect an existing connection of IB Gateway and establish a new connection using Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can delete all of user's linked accounts without having to make multiple API calls for each account or unregistering the user? I currently using documented \/accounts\/ endpoint to delete each one however sending a separate requests for each deletion takes far too long for users with multiple accounts.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":94,"Q_Id":67091698,"Users Score":0,"Answer":"You can unregister the user which effectively deletes all of the user\u2019s accounts.","Q_Score":0,"Tags":"python-3.x,yodlee","A_Id":67121613,"CreationDate":"2021-04-14T12:23:00.000","Title":"Yodlee: How to delete every user's linked accounts in a single API call","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to scrape information off of a website (on Chrome) where I need to click on each subpage to get the information I need. After about 7, I get blocked by the website. I think if I was able to switch IPs either each time or once I get blocked, that would work.\nI am using Selenium to open the site and navigate to the subpages. 
I have tried using a try-catch block so far and a while loop, but I am getting errors I do not know how to address.\nDoes anyone have an alternative approach or previous success doing this?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":292,"Q_Id":67111892,"Users Score":0,"Answer":"You can use rotating proxies to change your IP per request or at a time interval. If you don't want to use any proxy, you can restart your router to get a new IP address from your ISP; but if you have a static IP from your ISP, your IP will stay the same even though you restart your router.","Q_Score":2,"Tags":"python,python-3.x,selenium-webdriver,web-scraping,ip-address","A_Id":67113586,"CreationDate":"2021-04-15T15:57:00.000","Title":"In python with selenium, how do you rotate IP addresses?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In my code I need to get only the main text, not the header or footer data. I also would like to filter out any html\/css\/js code that is received with the request. How would I do this? I have tried making a request with requests, looking through the data with Beautiful Soup and then printing the body content. The issue with this is that it is also picking up the footer and header contents. Thanks for any responses in advance!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":387,"Q_Id":67119560,"Users Score":1,"Answer":"Use the browser developer tools (usually F12) to find out what element contains the content you are looking for. Usually content other than headers and footers will be in
<article> or <main>
elements.\nYou can then use something like soup.article.get_text() to retrieve text from the containing element.","Q_Score":0,"Tags":"python,python-3.x,beautifulsoup,python-requests,request","A_Id":67120465,"CreationDate":"2021-04-16T05:26:00.000","Title":"Python beautiful soup get only body content without header or footer data","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"do someone know if there is a way to open a URL when someone write \/start or some command.\nI already tried with requests, but it hasnt worked.\nThanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":455,"Q_Id":67126382,"Users Score":0,"Answer":"You can redirect users to other groups\/chats by providing them with an invite link or the username of the group\/channel in the @username format. If you want that to happen on the press of a button, you can use InlineKeyboardButton by passing the invite link directly to the button, i.e. InlineKeyboardButton(text='some_text', url='http:\/\/t.me\/\u2026'). The URL will open (i.e. 
redirect the user to the target chat), when the button is clicked.","Q_Score":0,"Tags":"python,telegram,telegram-bot,python-telegram-bot,py-telegram-bot-api","A_Id":67128157,"CreationDate":"2021-04-16T13:45:00.000","Title":"Python Telegram Bot open URL\/join another Telegram Group after pressing telegram.KeyoardButton","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I can list the files in the source bucket but when I try to download them I am getting \"Client error 403\" , the source team has server side encryption AES256 enabled.\nSo when I try :\nclient.download_fileobj(bucket, file, f, ExtraArgs={\"ServerSideEncryption\": \"AES256\"})\nI am getting ValueError: Invalid extra_args key 'ServerSideEncryption', must be one of: VersionId, SSECustomerAlgorithm, SSECustomerKey, SSECustomerKeyMD5, RequestPayer\nHow can I fix this issue?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":465,"Q_Id":67129240,"Users Score":1,"Answer":"It should work without mentioning ExtraArgs={\"ServerSideEncryption\": \"AES256\"}.\nWhen SSE algorithm is AES256, you don't need to mention that while downloading object, only while uploading it.\nWhile downloading it, you need to make sure that the credentials, you are using to download the object, have access to the key that is used to encrypt the object.","Q_Score":0,"Tags":"python,amazon-s3,boto3,amazon-kms","A_Id":67163065,"CreationDate":"2021-04-16T16:49:00.000","Title":"Download files with server side encryption SSE AES256 using boto3","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I\u2019m relatively knew to the selenium package and 
have been using it for a couple of weeks. My current script uses selenium to scrape data, I analyze the data by running a few tests, and if there is a datastring that passes said tests python texts me using Twilio. I\u2019m currently using my mac to run all of this but I was looking to run this script every 5 minutes, headless, and on a platform such that I don\u2019t need to keep my computer on. I have been looking at some potential solutions and it seems as though running this on a headless Raspberry Pi is the right option. I was wondering if anyone sees any potential problems with doing so, as I haven\u2019t seen a thread with someone using Twilio? And, I\u2019ve encountered problems trying to set up a cron task to automate it on my mac because of selenium and was wondering if this will be possible on the pi (looking at the Raspberry Pi 4)? Sorry if this is a little long winded, appreciate the help.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":80,"Q_Id":67143080,"Users Score":0,"Answer":"Run the script through any CI\/CD tool like Jenkins, GoCD, or GitLab in a scheduled job, so that the script runs every 5 minutes on the specified agent node and you don't have to keep your computer on.","Q_Score":1,"Tags":"python,selenium,raspberry-pi,scheduled-tasks","A_Id":67144465,"CreationDate":"2021-04-17T21:23:00.000","Title":"Running Selenium Automated Scripts on Raspberry Pi 4","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Currently, I have two python bots running on a VDS, both of them using selenium and running headless chrome to get dynamically generated content. While there was only one script, there was no problem, but now it appears that the two scripts fight for the chrome process (or driver?) 
and only get it once the other one is done.\nHave to mention, that in both scripts, Webdriver is instantiated and closed within a function, that itself is ran inside a Process of multiprocessing python module.\nRunning in virtual environment didn't do anything, each script has their own file of chrome driver in their respective directories, and by using ps -a I found that there are two different processes of chromedriver running and closing, so I am positive that scripts aren't using the same chrome. \nSometimes, the error says \"session not started\" and sometimes \"window already closed\".\n\nMy question is - how do I properly configure everything, so that the scripts don't interfere with each other?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":47,"Q_Id":67144301,"Users Score":0,"Answer":"For anyone having the same problem - double-triple-quadriple-check that the function, that you're passing in the Process, is the one instantiating Webdriver. I can't believe this problem is fixed just like that.","Q_Score":2,"Tags":"python,selenium,selenium-webdriver,python-multiprocessing,ubuntu-18.04","A_Id":67184590,"CreationDate":"2021-04-18T00:50:00.000","Title":"How to manage several selenium scripts running at once on VDS?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Please can you provide an example of how this can used in a python notebook environment with docplex. I see examples with java on ATSP problem. The point is I do not know how to create these cuts upfront. Given a LP root node, I can generate the cut. So, \"add_user_cut_constraint(cut_ct, name=None)\" should in a way take in as input the root node. 
How do I retrieve that in a generic way?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":73,"Q_Id":67163382,"Users Score":0,"Answer":"Look at this code in my contribs repository:\nhttps:\/\/github.com\/PhilippeCouronne\/docplex_contribs\/blob\/master\/docplex_contribs\/src\/cut_callback.py\nIt is not a notebook, but you'll get the idea on how to interface callbacks with Docplex.","Q_Score":0,"Tags":"python,jupyter-notebook,docplex","A_Id":67165020,"CreationDate":"2021-04-19T13:54:00.000","Title":"Examples in docplex to use the user cut callback method technology","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In my project, I need to use two telegram bots, which are linked by one database. I faced the following difficulty: having received a photo_id from one bot, I can only use it in this bot, the other bot does not have access to the files.\nApiTelegramException: A request to the Telegram API was unsuccessful. Error code: 400. Description: Bad Request: wrong file identifier\/HTTP URL specified\nAt the same time, exactly the same line of code in the first bot successfully sends a photo\nIf I try to make an URL with this file, then it is downloaded, therefore, you will not be able to send a photo via the link.\nIs it possible to use documents received from another bot without saving them to the database?","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":335,"Q_Id":67164791,"Users Score":-1,"Answer":"Yes... well maybe. Are you trying to avoid writing to the database to avoid writing to the database specifically or are you trying to avoid any further file handling?\nIf you are trying to avoid the database because its a database - my proposal would need you to create a shared memory space to do so. 
Maybe others know of easier ways.\nTry memcache or redis. Both have python libraries, I'd personally go with redis due to my own experiences.\nI don't know what your architecture is like for checking for updates - I assume you have some kind of scheduler ongoing? In which case you check redis\/memcache for updates periodically, download\/retransmit the data if it exists, then clear it.","Q_Score":1,"Tags":"python,telegram","A_Id":67165602,"CreationDate":"2021-04-19T15:21:00.000","Title":"How to connect two or more telegram bots for file exchange","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What is the approach to correctly encoding and parsing variable length messages over TCP? Ex suppose we want to send a message which consists of a mix of string texts and a binary file.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":102,"Q_Id":67166352,"Users Score":1,"Answer":"It depends on the protocol you're implementing on top of TCP. Its specification will tell you the correct approach to use.\nIf you're designing the protocol, generally you just follow the design of whatever existing protocol is closest to what you're doing. Common schemes include:\n\nYou encode each message as text ending with a newline character. The receiver just reads blocks of data and searches them for newline characters.\n\nYou encode each message as a variable length block and send a 4-byte integer length (in network byte order) prior to each block. 
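The 4-byte length-prefix scheme just described can be sketched as follows (struct's \"!I\" format is a big-endian, i.e. network byte order, unsigned 32-bit integer; the function names are mine):

```python
import struct

def frame(payload: bytes) -> bytes:
    """Prefix the payload with its length as a 4-byte network-order integer."""
    return struct.pack("!I", len(payload)) + payload

def deframe(buffer: bytes):
    """Return (message, leftover) if a whole frame is buffered,
    otherwise (None, buffer) so the caller keeps reading."""
    if len(buffer) < 4:
        return None, buffer
    (length,) = struct.unpack("!I", buffer[:4])
    if len(buffer) < 4 + length:
        return None, buffer
    return buffer[4:4 + length], buffer[4 + length:]
```

The receiver loops: append received bytes to the buffer, call deframe until it returns None, and process each complete message.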
The receiver reads blocks of data, when it has 4 bytes, it determines the length of the message, when it has that many more bytes, it \"snips off\" the message and parses any leftover.\n\nYou encode a message in a format like XML or JSON.","Q_Score":0,"Tags":"python,tcp,client-server","A_Id":67166604,"CreationDate":"2021-04-19T17:05:00.000","Title":"TCP transfer of messages of variable length","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've got a selenium webscraper and I need for it to be able to access my file explorer directories and files. I want to essentially upload images from the file explorer to a website I've got opened up. But I don't know where to even begin. I've looked at the Windows API, the File System Access API, as well as the os module. I'm confused as to which one has the functionality I mentioned, if any of them. I'm working with Python at the moment, but I'm open to alternatives in other languages. If you can point me to anything that can help I'll be very grateful","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":67168949,"Users Score":0,"Answer":"Selenium does not control the windows interface (or file explorer) what you can do is locate the element in your website that receives the filepath for that which you wish to upload and send the path using send_keys(filepath).","Q_Score":2,"Tags":"python,python-3.x,selenium","A_Id":67169038,"CreationDate":"2021-04-19T20:29:00.000","Title":"How can I can delete\/move\/upload files within File Explorer? 
| Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So i have a hash that i want to pass in the body of get request i am currently using requests library from python, i am doing something like this in my code. I am using falcon framework.\nrequests.get(url, headers=head, data=json.dumps({\"user_id\": 436186}))\nis it the right approach to pass the data in body of get request? Because i am not able to hit the api and getting 400 from the other side i suspect it's because of data not being able to pass in the get request.\nOr is there some other library which has the support?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":287,"Q_Id":67175371,"Users Score":0,"Answer":"Basically i was mispelling one field of the header that why it was throwing 400 error the above code works fine for sending data in the body of get request, silly mistake indeed","Q_Score":1,"Tags":"python,web-frameworks,falcon","A_Id":67176172,"CreationDate":"2021-04-20T08:38:00.000","Title":"How to pass data in body of a get request in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a way to add a comment to an order's timeline via the REST API ?\nIf so, what's the scope access ? And how to do this.\nThanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":683,"Q_Id":67186251,"Users Score":2,"Answer":"You cannot add a comment to the timeline. You can see your App's interactions with an order on the timeline, exposed by Shopify, but you cannot inject stuff yourself. 
If you want to decorate an order with comments, you would add your comment to the order notes. That works fine, but as you can tell, it is not timestamped by Shopify, so it lacks an \"official\" standing... nonetheless. Just use Notes.","Q_Score":1,"Tags":"python,rest,shopify,shopify-api","A_Id":67187016,"CreationDate":"2021-04-20T20:56:00.000","Title":"Add comment to an order's timeline in Shopify","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"The site I'm scraping shows an exception almost every 20 minutes: 'the session is expired because of inactivity'.\nIs there a way to fix that?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":66,"Q_Id":67209576,"Users Score":0,"Answer":"The simplest solution is to click on some element on the page or perform some action with the Actions class, like hovering over some sensitive element, or even refresh the page.","Q_Score":0,"Tags":"python,selenium,webdriver","A_Id":67210531,"CreationDate":"2021-04-22T08:30:00.000","Title":"How to periodically relogin with selenium Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Pre-history:\nI send packets with a payload (an HTML page) encrypted by my VPN to a socket opened by my client's web browser.\nThe client receives the packets and, before Windows can process them, catches these packets (using Pydivert), decrypts the payload, and sends it back to the Windows network stack. 
But the web browser is not loading the page.\nSo, I tried to send some more data after the real page data, and the page was loaded, but with this new added data that was sent after the real page.\nI'm assuming that the web browser gets the page but waits for more data.\nSo, what I'm asking is: how do I tell the client that I will send an exact amount of bytes?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":41,"Q_Id":67212094,"Users Score":0,"Answer":"The answer was in the header of the HTTP packet: I was sending 1500 bytes, but the Content-Length header said that I would send 1505 bytes.","Q_Score":0,"Tags":"python,sockets,networking","A_Id":67355838,"CreationDate":"2021-04-22T11:11:00.000","Title":"How to tell a web browser, over an open socket, how many bytes I will send to it","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to send an XML message from my existing Python application to the existing SonicMQ JMS ESB broker deployed in our organization.\nI cannot seem to find any Python library to send JMS messages to SonicMQ. The only one I could find is Spring-Python, which seems to implement only the connection to WebSphereMQ.\nIs there any Python library to send JMS messages to SonicMQ?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":118,"Q_Id":67213784,"Users Score":1,"Answer":"Talking about Python and JMS is a bit of a red herring. JMS is a Java API standard into various messaging products. It's not a messaging product itself.\nThe way I'd think of this is...\n\nIs there a Python interface to the specific SonicMQ product? I have no background w\/ SonicMQ, but a quick Google search mainly brings up your question, so my bet is no.\nIs use of Jython an option, using its capability of calling Java APIs? 
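The Content-Length mismatch described in the socket answer above (a header promising 1505 bytes while only 1500 were sent) is exactly what makes a browser hang waiting for more data. A minimal sketch of building a response whose header is computed from the actual body:

```python
def http_response(body: bytes) -> bytes:
    """Build a minimal HTTP/1.1 response whose Content-Length matches the
    body exactly; a mismatch makes the browser wait for bytes that never come."""
    header = (
        "HTTP/1.1 200 OK\r\n"
        "Content-Type: text/html\r\n"
        f"Content-Length: {len(body)}\r\n"
        "\r\n"
    ).encode("ascii")
    return header + body
```

If the payload is modified in transit (e.g. decrypted and re-injected), the header must be recomputed from the bytes actually delivered.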
If so, I'd expect Jython could talk to any JMS implementation using that. So yes you should be able to use JMS API to use SonicMQ just like a Java app.","Q_Score":0,"Tags":"python,jms,sonicmq","A_Id":67324114,"CreationDate":"2021-04-22T12:59:00.000","Title":"Can I send JMS messages from Python to SonicMQ?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"When waiting for an event with discord.py you can use commands.Bot.wait_for('message', timeout=30, check=check) or something similar with 'reaction_add'.\nIs there a way to wait for a message OR reaction? The only way I can think of is starting another thread and running two commands.Bot.wait_for() at the same time, but that seems really scuffed.\nIf there is a method that allows you to wait for multiple types of events that'd be great to know. If anyone has any ideas please let me know.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":636,"Q_Id":67221036,"Users Score":0,"Answer":"Use commands.Bot.add_listener(function, 'on_message') to create a listener, and when the commands.Bot.wait_for passes or times out, remove the listener.","Q_Score":0,"Tags":"python,discord,discord.py","A_Id":67221239,"CreationDate":"2021-04-22T21:32:00.000","Title":"How do you wait for multiple types of events in discord.py?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was wondering for my project if Telegram could send me a delay or ban if I'm using two Telethon scripts each of them connecting to a different Telegram account in the same machine?\nThey will just be reading messages, nothing too fancy. 
At the moment one has been running without any issues.\nThank you","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":147,"Q_Id":67234344,"Users Score":1,"Answer":"There is no limit on the number of different accounts you can have on the same IP\/machine. Telegram uses sockets to connect so if a limit existed it would be related to the number of active connections your machine can handle.","Q_Score":1,"Tags":"python,telegram,telethon","A_Id":67234479,"CreationDate":"2021-04-23T17:17:00.000","Title":"Is there any potential problem when using two Telethon scripts for two different Telegram accounts in the same IP?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a project that just scrapes data from 3 devices (2xserial and 1xssh). I have this part implemented no problem.\nI am now heading towards the second part where I need be be able to send the data I need using protobuf to the clients computer where they will receive and display on their own client.\nThe customer has provided examples of their GRPC servers, and it's written in C#.\nCurrently, for security reasons, our system uses RedHat 8.3 and I am using a SSH Protocol Library called Paramiko for the SSH part. Paramiko is a Python library. Also the machine I am extracting data from only runs on Linux.\n\nHere are my main questions, and I apologize if I got nowhere.\n1.) The developer from the client side provided us with a VM that has a simulator and examples written in C# since their side was written in C#. He says that it's best to use the C# because all examples can be almost re-used as it's all written, etc. While I know it's possible to use C# in Linux these days, I've still have no experience doing so I don't know how complicated\/tedious this can get.\n2.) 
I write code in C# and wrap all the python code, which is also something I've never done, but I would be doing this all in RedHat.\n3.) I keep it in python because sending protobuf messages works across languages as long as it is sent properly. Also from the client side, I'm not sure if they will need to make adjustments if receiving protobuf messages written in Python(I don't think this is the case because it's just serialized messages, yea?).\nAny advice would be appreciated. I am looking to seek more knowledge outside my realm.\nCheers,\nZ","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":332,"Q_Id":67237330,"Users Score":0,"Answer":"If you're happy in Python, I would use option 3. The key thing is to either obtain their .proto schema, or if they've used code-first C# for their server: reverse-engineer the schema (or use tools that generate the schema from code). If you only have C# and don't know how to infer a .proto from that, I can probably help.\nThat said: if you want to learn some new bits, option 1 (using C# in your system) is also very viable.\nIMO option 2 is the worst of all worlds.","Q_Score":1,"Tags":"python,c#,protocol-buffers,grpc,redhat","A_Id":67240519,"CreationDate":"2021-04-23T22:05:00.000","Title":"GRPC: Sending messages from Python to C#, best method?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently trying to develop a tool to use Google Search Console API in order to get some data from my website.\nMy main goal is to get the 'Links Report' such as Top linking domains & Top linked pages in an automated way.\nI don't know if this is available via Google API. 
I have found nothing till now.\nIs it even possible to get such a list via the Google Search Console API?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":155,"Q_Id":67250516,"Users Score":0,"Answer":"I'm doing the same thing, hoping someone will answer that question","Q_Score":1,"Tags":"python-3.x,google-api,google-api-client,google-api-python-client,google-search-console","A_Id":67280565,"CreationDate":"2021-04-25T06:44:00.000","Title":"How to get links report with Google Search Console API?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Every time I use send_keys(Keys.RETURN) it does not confirm my input. There is just a \u25a1 symbol. How can I prevent that?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":26,"Q_Id":67253723,"Users Score":0,"Answer":"You should use send_keys(Keys.ENTER) instead","Q_Score":0,"Tags":"python,selenium,web-scraping","A_Id":67253875,"CreationDate":"2021-04-25T13:04:00.000","Title":"Python Selenium send_keys(Keys.RETURN) function does not work","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Can you advise me on analogs of the socket library in Python? The task is this: I need to write a very simple script with which I could execute remote commands in cmd windows. 
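(For context, here is a minimal sketch of the socket-library approach I mean: a tiny server that runs one received shell command and sends back the output. It is deliberately unauthenticated and for illustration only; never expose something like this on a real network.)

```python
import socket
import subprocess
import threading

def handle(conn):
    # Receive one command line, run it in a shell, send back its output.
    # WARNING: executing received text in a shell is dangerous; this is
    # illustration only, with no authentication or sandboxing.
    cmd = conn.recv(4096).decode().strip()
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    conn.sendall((result.stdout + result.stderr).encode())
    conn.close()

def serve_once(host="127.0.0.1"):
    # Bind an ephemeral port, serve exactly one client in the background,
    # and return the server socket plus the chosen port.
    srv = socket.socket()
    srv.bind((host, 0))
    srv.listen(1)
    threading.Thread(target=lambda: handle(srv.accept()[0]), daemon=True).start()
    return srv, srv.getsockname()[1]
```

A client would connect, send e.g. b"echo hello", and read the output back.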
I know how this can be implemented using the socket library, but I would like to know if there are any other libraries for such a case.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":33,"Q_Id":67272337,"Users Score":0,"Answer":"Sockets are a low-level mechanism by which two systems can communicate with each other. Your OS provides this mechanism; there are no analogs.\nThe following examples come from the application layer, and they work with sockets in their lower communication layers: a socket opened by your HTTP server, usually on port 80 or 443, or a websocket opened by your browser to communicate with your server. Or the DNS query that your browser executes when it tries to resolve a domain name, which also works with sockets between your PC and the DNS server.","Q_Score":0,"Tags":"python,sockets","A_Id":67279572,"CreationDate":"2021-04-26T19:00:00.000","Title":"analogs of socket in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to find a free cloud storage service with a free API that could help me back up some files automatically.\nI want to write a script (for example, in Python) to upload files automatically.\nI investigated OneDrive and GoogleDrive. 
OneDrive API is not free, GoogleDrive API is free while it need human interactive authorization before using API.\nFor now I'm simply using email SMTP protocol to send files as email attachments, but there's a max file size limition, which will fail me in the future, as my file size is growing.\nIs there any other recommendations ?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":500,"Q_Id":67275889,"Users Score":0,"Answer":"gdownload.py using Python3\n\n\n\n from apiclient.http import MediaIoBaseDownload\n from apiclient.discovery import build\n from httplib2 import Http\n from oauth2client import file, client, tools\n import io,os\n \n CLIENT_SECRET = 'client_secrets.json'\n SCOPES = ['https:\/\/www.googleapis.com\/auth\/admin.datatransfer','https:\/\/www.googleapis.com\/auth\/drive.appfolder','https:\/\/www.googleapis.com\/auth\/drive']\n \n store = file.Storage('tokenWrite.json')\n creds = store.get()\n if not creds or creds.invalid:\n flow = client.flow_from_clientsecrets(CLIENT_SECRET, SCOPES)\n flags = tools.argparser.parse_args(args=[])\n creds = tools.run_flow(flow, store, flags)\n DRIVE = build('drive', 'v2', http=creds.authorize(Http()))\n \n files = DRIVE.files().list().execute().get('items', [])\n \n def download_file(filename,file_id):\n #request = DRIVE.files().get(fileId=file_id)\n request = DRIVE.files().get_media(fileId=file_id)\n fh = io.BytesIO()\n downloader = MediaIoBaseDownload(fh, request,chunksize=-1)\n done = False\n while done is False:\n status, done = downloader.next_chunk()\n print(\"Download %d%%.\" % int(status.progress() * 100))\n fh.seek(0)\n f=open(filename,'wb')\n f.write(fh.read())\n f.close()\n \n rinput = vars(__builtins__).get('raw_input',input)\n fname=rinput('enter file name: ')\n for f in files:\n if f['title'].encode('utf-8')==fname:\n print('downloading...',f['title'])\n download_file(f['title'],f['id'])\n 
os._exit(0)","Q_Score":2,"Tags":"python,google-drive-api,backup,onedrive,cloud-storage","A_Id":67333345,"CreationDate":"2021-04-27T01:43:00.000","Title":"What cloud storage service allow developer upload\/download files with free API?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to make a bot with discord.py that can stream videos from mp4 into a voice channel. Is it possible? and if it is possible how would I be able to do it (and sorry if this is a stupid question i am a beginner)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":104,"Q_Id":67286450,"Users Score":0,"Answer":"At the moment it isn't possible, you can only stream audio.","Q_Score":1,"Tags":"python,discord.py","A_Id":67286558,"CreationDate":"2021-04-27T15:50:00.000","Title":"how to make a video stream bot for discord.py?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"my goal is to make a bot that is able to show youtube videos through screen sharing or camera. 
Does anyone know how to do it?\nI tried to find out how to do it but I have not managed to find something similar on internet, even on stack overflow.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":100,"Q_Id":67299823,"Users Score":0,"Answer":"It happens that zoom will share a video in the way that you want.","Q_Score":0,"Tags":"python,api,youtube","A_Id":67604712,"CreationDate":"2021-04-28T11:58:00.000","Title":"how to make a discord bot in python that can show a youtube video?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two USB to CAN devices (can0 and can1), they both are connected to a Linux machine which has socketcan installed in it. I have read the basics of CANopen protocol, i have not seen any example that can establish communication between two CANopen devices using Python CANopen library.\nI read in the documentation that each devices must have a .eds file, so I took a sample .eds file from the Python CANopen library from christiansandberg github and trying to establish communication and make them talk to each other using PDO's, but I could not able to do that.\nWe have a battery and wanted to communicate with it, the battery works on can-open protocol and they have provided the .eds file for the battery. I guess a usb2can device with the CANopen Python library can do the work. But I just don't know how to establish communication between the usb2can device and the battery. It would be helpful with any insights in framing the packets.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":760,"Q_Id":67318027,"Users Score":1,"Answer":"This is what you need to do:\n\nGet the necessary tools for CAN bus development. This means some manner of CAN listener in addition to your own application. 
It also means cables + terminating resistors. The easiest is to use DB9 dsub connectors. An oscilloscope is also highly recommended.\nRead the documentation about the device to figure out how to set node id and baudrate, or at least which default settings it uses.\nFind out which Device Profile the device uses, if any. The most common one is CiA 401 \"generic I\/O module\". In which case the default settings will be node id 1, baudrate 125kbps.\nYour application will need to act as NMT Master - network managing master - on this bus. That is, the node responsible for keeping track of all other nodes.\nIf the device is CANopen compliant and you've established which baudrate and node id it uses, you'll get a \"NMT bootup\" message at power up. Likely from node 1 unless you've changed the node id of the device.\nYou'll need to send a \"NMT start remote node\" message to the device to bring it from pre-operational to operational.\nDepending on what Device Profile the device uses, it may now respond with sending out all its enabled PDO data once, typically with everything set to zero.\nNow check the documentation of the device to find out which data that resides in which PDO. You'll need to match TPDO from the device with RPDO in your application and vice versa. They need to have the same COBID - CAN identifiers, but also the same size etc.\nCOBID is set in PDO communication settings in the Object Dictionary. If you need to change settings of the device, it needs to be done with SDO access of the device Object Dictionary.\nMore advanced options involve PDO mapping, where you can decide which parts of the data you are interested in that goes into which PDO. Not all devices support dynamic PDO mapping - it might use static PDO mapping in which case you can't change where the data comes out.\nOther misc useful stuff is the SAVE\/LOAD features of CANopen in case the device supports them. 
Then you can store your configuration permanently so that your application doesn't need to send SDOs at start-up for configuration every time the system is used.\nHeartbeat might be useful to enable to ensure that the device is up and running on regular basis. Your application will then act as Heartbeat consumer.","Q_Score":0,"Tags":"pdo,can-bus,canopen,python-can","A_Id":67330251,"CreationDate":"2021-04-29T13:06:00.000","Title":"CANopen protocol communication between two nodes using python canopen package","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"1: When it says 15 requests per 15 minute window, does this really mean I can only send 15 requests per 15 minutes?\n2: Do I really need to set up a Twitter bot to send basic requests like getting a list of a user's followers? Is there a way to get the data through a URL, like in most web APIs? I'm making software that will be used by other people, so it can't have a bot auth token in the code.\nI know I'm pretty much asking if what it blatantly says is true, but I'm just having trouble believing that the Twitter API is really this bad.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":47,"Q_Id":67336766,"Users Score":0,"Answer":"It sounds like you are specifically asking about the friends and followers endpoints. Yes, this is limited to 15 requests in a 15 minute window. Other endpoints \/ features have different rate limits.\n\nThe Twitter API requires authentication. You do not need to set up a \"bot\", but you will need a registered Twitter developer account, and a Twitter app, in order to use the API. 
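Beyond authentication, a well-behaved client should also respect the rate-limit headers Twitter returns on each v1.1 response (conventionally x-rate-limit-remaining and x-rate-limit-reset; treat the exact names as something to verify against the docs). As a rough sketch, a back-off helper might look like:

```python
import time

def seconds_until_retry(headers, now=None):
    """Given Twitter-v1.1-style rate-limit response headers, return how
    many seconds to wait before the next request (0 if quota remains)."""
    remaining = int(headers.get("x-rate-limit-remaining", "1"))
    if remaining > 0:
        return 0.0
    reset_epoch = int(headers.get("x-rate-limit-reset", "0"))  # epoch seconds
    now = time.time() if now is None else now
    return max(0.0, reset_epoch - now)

# Window exhausted, reset at epoch 1000, current time 400:
seconds_until_retry({"x-rate-limit-remaining": "0",
                     "x-rate-limit-reset": "1000"}, now=400)  # → 600
```

Calling time.sleep() with that value before retrying avoids hammering the 15-requests-per-15-minutes window.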
If your app will be used by other people, you would need to implement Sign-in with Twitter to enable them to authenticate with your app; you can then store their access token (until or unless they revoke it) to make requests on their behalf. This is pretty standard for any multi-user web app.","Q_Score":0,"Tags":"twitter,twitterapi-python","A_Id":67337161,"CreationDate":"2021-04-30T15:42:00.000","Title":"Questions about the Twitter API","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to grab some information from a site just for education purpose, however i cannot send requests because of the protection. I get The typical Checking-your-browser page shows up first and then i'm being redirected repeatedly.\nhow i can bypass this protection in python selenium?","AnswerCount":4,"Available Count":1,"Score":-0.049958375,"is_accepted":false,"ViewCount":14883,"Q_Id":67341346,"Users Score":-1,"Answer":"SOLUTION JULY 2021\njust add user agent argument in chrome options and set user agent to any value\nops = Options() ua='cat' ops.add_argument('--user-agent=%s' % ua) driver=uc.Chrome(executable_path=r\"C:\\chromedriver.exe\",chrome_options=ops)","Q_Score":12,"Tags":"python,selenium,selenium-chromedriver,cloudflare","A_Id":68390117,"CreationDate":"2021-04-30T22:59:00.000","Title":"How to bypass Cloudflare bot protection in selenium","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for a way for a bot to wait for a reply from a user after a command. 
For example, you first type \"\/ask\", then the bot waits for a plain message (not a command) from the user and after the user replies is stores his\/her reply in a variable\nI'm sure this is quite simple, but all the tutorials I've seen are in Russian and the documentation for python-telegram-api is very chaotic and I'm not the most advanced\nIf I'm dumb, sorry, just please help a fellow beginner out","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":782,"Q_Id":67344301,"Users Score":1,"Answer":"Okay, this was pointless. I thought you couldn't use arguments, but the post I read was 5 years old so... I'm stupid. I just used arguments instead, thanks for the help tho, really appreciate it","Q_Score":3,"Tags":"python,telegram-bot,python-telegram-bot","A_Id":67345719,"CreationDate":"2021-05-01T08:33:00.000","Title":"Telegram bot await reply from user after command Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a project in which the backend is written in FastAPI and the frontend uses React. My goal here is to add a component that will monitor some of the pc\/server's performances in real time. Right now the project is still under development in local environment so basically I'll need to fetch the CPU\/GPU usage and RAM (derived from my PC) from Python and then send them to my React app. My question here is, what is the cheapest way to accomplish this? Is setting an API and fetching a GET request every ten seconds a good approach or there're some better ones?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":293,"Q_Id":67348632,"Users Score":1,"Answer":"Explanation & Tips:\nI know EXACTLY what you're describing. I also made a mobile app using Flutter and Python. 
I have been trying to get multiple servers to host the API instead of one server. I personally think Node.Js is worth checking out since it allows clustering which is extremely powerful. If you want to stick with python, the best way to get memory usage in python is using psutil like this: memory = psutil.virtual_memory().percent, but for the CPU usage you would have to do some sort of caching or multi threading because you cannot get the CPU usage without a delay cpu = psutil.cpu_percent(interval=1). If you want your API to be fast then the periodic approach is bad, it will slow down your server, also if you do anything wrong on the client side, you could end up DDOSing your API, which is an embarrassing thing that I did when I first published my app. The best approach is to only call the API when it is needed, and for example, flutter has cached widgets which was very useful, because I would have to fetch that piece of data only once every few hours.\nKey Points:\n-Only call the API when it is crucial to do so.\n-Python cannot get the CPU usage in real-time.\n-Node performed better than my Flask API (not FastAPI).\n-Use client-side caching if possible.","Q_Score":0,"Tags":"python,reactjs,algorithm,fastapi","A_Id":67445140,"CreationDate":"2021-05-01T16:45:00.000","Title":"Proper way to show server's RAM, CPU and GPU usage on react app","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I create a graphviz graph with python automatically with python. Part of this automatically generated graphs have a repetition of the same node (actually for different purposes), so I want to show them separately. Is there some kind of configuration which may allow to do that. 
Manually, I can do that if I play with the dot file by adding some spaces to some node labels.\nI create the graph starting from definitions of the edges. I mean, what I have is a list of edges, not a list of nodes. Instead of a configuration, I would appreciate it if you could suggest a code snippet that does this. Simply adding a space to all new node repetitions in the edge list does not work, because some should not have that space.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":83,"Q_Id":67366867,"Users Score":1,"Answer":"I decided to add white spaces to the graph right from the start. So, they are defined as different nodes. When I need a node_name to query something from some dict, for example, I just trim the space. It worked quite fine.","Q_Score":1,"Tags":"python,graphviz,pygraphviz","A_Id":67368871,"CreationDate":"2021-05-03T09:53:00.000","Title":"How to add same node multiple times in different parts of the graph","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to AWS with Python. I came across boto3 initially; later someone suggested cdk. What is the difference between aws cdk and boto3?","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":2598,"Q_Id":67378945,"Users Score":1,"Answer":"You're creating an application that needs to use AWS services and resources. 
Should you use CDK or boto3?\nConsider if your application needs AWS services and resources at build time or run time.\nBuild time: you need the AWS resources to be available IN ORDER TO build the application.\nRun time: you need the AWS resources to be available via API call when your application is up and running.\nAWS CDK sets up the infrastructure your application needs in order to run.\nThe AWS SDK complements your application to provide business logic and to make services available through your application.\nAnother point to add is that AWS CDK manages the state of your deployed resources internally, thus allowing you to keep track of what has been deployed and to specify the desired state of your final deployed resources.\nOn the other hand, if you're using the AWS SDK, you have to store and manage the state of the deployed resources (deployed using the AWS SDK) yourself.","Q_Score":9,"Tags":"python,amazon-web-services,boto3,aws-cdk","A_Id":67465128,"CreationDate":"2021-05-04T04:47:00.000","Title":"Amazon cdk and boto3 difference","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to AWS with Python. I came across boto3 initially; later someone suggested cdk. 
What is the difference between aws cdk and boto3?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2598,"Q_Id":67378945,"Users Score":0,"Answer":"I am also new to AWS; here is my understanding of the relevant AWS services and boto3:\n\nAWS Cloud Development Kit (CDK) is a software library, available in different programming languages, to define and provision cloud infrastructure through AWS CloudFormation.\n\nBoto3 is a Python software development kit (SDK) to create, configure, and manage AWS services.\n\nAWS CloudFormation is a low-level service to create a collection of related AWS and third-party resources, and provision and manage them in an orderly and predictable fashion.\n\nAWS Elastic Beanstalk is a high-level service to deploy and run applications in the cloud easily, and sits on top of AWS CloudFormation.","Q_Score":9,"Tags":"python,amazon-web-services,boto3,aws-cdk","A_Id":69205573,"CreationDate":"2021-05-04T04:47:00.000","Title":"Amazon cdk and boto3 difference","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Selenium to automate a browser task through a Python script.\nThere is a text-box in my browser that I need to fill with info, but the XPath is formatted as below:\n\n\/\/*[@id=\"NameInputId14514271346457986\"]\n\nThe problem is that the number before the Id (14514271346457986) changes every time. 
Is there a way to refer to this XPath something like:\n\n\/\/*[@id.start-with=\"NameInputId\"]\n\nSorry if it is a dumb question - I started to using Selenium this week and I couldn't find this info on documentation.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":38,"Q_Id":67387966,"Users Score":1,"Answer":"Sure, you can use xpath like \/\/*[contains(@id,\"NameInputId\")] but I guess this possibly will not be an unique locator. In this case the xpath should be more complex to contain additional attributes or some parent element","Q_Score":1,"Tags":"python,selenium,xpath","A_Id":67388038,"CreationDate":"2021-05-04T15:44:00.000","Title":"Reference to XPath - is it possible?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am mapping the geographic data of different IP Addresses using 2 different APIs and python\n\nIpstack\nIPAPI\n\nHowever, some of the results are showing different locations for a given IP. 
How does this happen?\nWhat differences might I encounter when using 2 or more different methodologies?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":22,"Q_Id":67388402,"Users Score":0,"Answer":"I think it happened because these addresses use NAT (Network Address Translation): two different private IP addresses can share the same public IP to save IP addresses.\nYou can read more about NAT to understand it.","Q_Score":0,"Tags":"python,ip","A_Id":67388733,"CreationDate":"2021-05-04T16:14:00.000","Title":"Getting different locations as the source for a given IP when using 2 different APIs?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am mapping the geographic data of different IP Addresses using 2 different APIs and Python\n\nIpstack\nIPAPI\n\nHowever, some of the results are showing different locations for a given IP. How does this happen?\nWhat differences might I encounter when using 2 or more different methodologies?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":22,"Q_Id":67388402,"Users Score":0,"Answer":"These APIs work by having a massive database of IP addresses. When you send a request, they either look up the IP address that you sent in, or they guess an approximate location using similar IP addresses whose location they know. Because the two databases don't communicate, as two separate companies own them, it is natural to expect some variation. It's important to note, too, that the locations are only approximate. Sometimes they don't even get the city right, much less the address. 
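One way to quantify the disagreement between two providers is to compute the great-circle (haversine) distance between the coordinates each one reports; a small self-contained sketch:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance in kilometres between two (lat, lon) points,
    # using a mean Earth radius of 6371 km.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))
```

Distances of tens or even hundreds of kilometres between two providers' answers for the same IP are not unusual.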
ISP companies try as best they can to preserve the privacy of their customers.","Q_Score":0,"Tags":"python,ip","A_Id":67388769,"CreationDate":"2021-05-04T16:14:00.000","Title":"Getting different locations as the source for a given IP when using 2 different APIs?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I\u2019ve deployed my first application today utilizing plotly Dash and I\u2019m using Dash Auth as my authentication to login to the application.\nHowever, the way our system works, it\u2019s dependent on a \u2018health check\u2019 which requires a given URL from the dash app to return a 200 status code to ensure the site is running well.\nFrom my understanding, the Dash Auth throws a 401 error first to display the login page, then returns either 200 or 403 based on the input.\nThis initial 401 then crashes our system because it\u2019s expecting a 200 for the health check.\nMy question is, what link can I supply our system so that I get a 200 status code returned instead of the 401 while still using Dash Auth? The base path, like url.com\/ seems to throw the 401 still. Thank you!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":137,"Q_Id":67390972,"Users Score":1,"Answer":"You need to create a login page which can send 200 response.\nor else simple solution is to add 401 as a required success code in EC2 Target Groups as 401 means unauthorized error.\nProcess:\nEC2 -> Load balancing -> Target groups -> select target group -> health checks -> set Success Codes as 401.","Q_Score":0,"Tags":"python,plotly-dash,http-status-code-401,plotly-python","A_Id":70595319,"CreationDate":"2021-05-04T19:26:00.000","Title":"Plotly 'Dash Auth' Login Throws 401 Error. 
How to get 200?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I need to login to multiple accounts (same website) at once and I need to open multiple tabs.\nEach tab needs to be independent so that I can login to different accounts.\nIs there any way to achieve this using Selenium or any other automation tool?\nGreetings.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":140,"Q_Id":67416173,"Users Score":0,"Answer":"Yes indeed, you can, by using ChromeDriver and Selenium. Each webdriver.Chrome() instance is an independent browser session:\ndriver = webdriver.Chrome(executable_path=\"ChromeDriverExeLocation\")\ndriver1 = webdriver.Chrome(executable_path=\"ChromeDriverExeLocation\")\ndriver.get(\"googleloginform\")\ndriver1.get(\"googleloginform2\")","Q_Score":0,"Tags":"python,selenium,automation","A_Id":67428593,"CreationDate":"2021-05-06T10:19:00.000","Title":"how to open multi tabs in chrome with selenium with different session","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I created an Instagram bot to search for hashtags, like the posts, comment and follow the accounts from the posts, but it can't even get to the point of logging into the account. 
I have Instapy and Selenium installed, but I keep running into errors concerning the webdriver. Please help with how to solve the problem.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":37,"Q_Id":67422441,"Users Score":0,"Answer":"Post the error output; difficult to figure out otherwise.","Q_Score":0,"Tags":"python,instapy","A_Id":67422597,"CreationDate":"2021-05-06T16:37:00.000","Title":"what must be installed first for instapy to run without errors?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"from tweepy import *\nfrom tweepy.streaming import StreamListener\n------getting this error------\nUnused import api from wildcard importpylint(unused-wildcard-import)\nUnused import debug from wildcard importpylint(unused-wildcard-import)\nUnused import AppAuthHandler from wildcard importpylint(unused-wildcard-import)\nUnused import Cache from wildcard importpylint(unused-wildcard-import)\nUnused import FileCache from wildcard importpylint(unused-wildcard-import)\nUnused import MemoryCache from wildcard importpylint(unused-wildcard-import)\nUnused import Cursor from wildcard importpylint(unused-wildcard-import)\nUnused import RateLimitError from wildcard importpylint(unused-wildcard-import)\nUnused import TweepError from wildcard importpylint(unused-wildcard-import)\nUnused import DirectMessage from wildcard importpylint(unused-wildcard-import)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":63,"Q_Id":67425764,"Users Score":0,"Answer":"These are Pylint warnings, not errors.\nThe Tweepy import itself should still have been successful.","Q_Score":0,"Tags":"python,wildcard,tweepy,twitter-oauth,twitter-streaming-api","A_Id":67432558,"CreationDate":"2021-05-06T20:48:00.000","Title":"In Python during importing Tweepy library like | 
from tweepy import * getting error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking for an api that shows me the latest binance smart chain tokens, is there such a thing available or do I need to run the binance smart chain node for this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":176,"Q_Id":67426271,"Users Score":0,"Answer":"There is no such api, but you may want to track all new transactions in Binance Smart Chain and check if a new contract was created.","Q_Score":0,"Tags":"python,api,blockchain,binance","A_Id":70640270,"CreationDate":"2021-05-06T21:38:00.000","Title":"API showing new Binance Smart Chain Tokens","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to create a bot in Python with a referral feature, but I didn't find any method to create referral links for users in the Telegram docs. Can you suggest any alternative approach for it, 
so that we can create a link, and any new member can follow that link to start the bot, with the referral registered in the referrer's account.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1583,"Q_Id":67432314,"Users Score":0,"Answer":"Try using a redirect method in your webserver code:\n\nGenerate the referral link and save it in some storage.\nWhen a user opens the link, validate it and then redirect them to the bot.","Q_Score":1,"Tags":"python,telegram-bot,refer","A_Id":67433001,"CreationDate":"2021-05-07T09:27:00.000","Title":"is there any method or api available to create unique referral link for telegram bot with python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Basically, as the title says, after I turned my program into an exe it gives me the ModuleNotFoundError and says that I do not have selenium installed. When I try installing selenium with pip it tells me selenium is already installed. Sorry if this is similar to another question, I spent a while trying to find an answer and gave up.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":25,"Q_Id":67453338,"Users Score":0,"Answer":"Turns out PyCharm is just bad and I messed up my PATH and\/or was using a venv by accident. 
I had similar errors and reinstalling PyCharm fixed them.","Q_Score":0,"Tags":"python,pyinstaller","A_Id":67953219,"CreationDate":"2021-05-09T00:34:00.000","Title":"After using pyinstaller to make my program into an exe, selenium stops working","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am thinking of developing some automated tasks with Selenium; today I do it from my local computer, with a local browser. I'm looking to deploy this Python app on a webserver, and I need to know if there is any way to start a local Selenium browser from an app deployed on a webserver.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":20,"Q_Id":67453370,"Users Score":0,"Answer":"You can use any cloud computing company (Azure, Google Cloud, AWS - just to name a few). Most of them offer free trials with balance that can be sufficient for a few months depending upon what resources you need.\nOnce you have an account with one of these services you need to set up a virtual machine. I installed Ubuntu Linux and I used it with RDP to connect to a Desktop GUI interface. I installed the driver for the browser I wanted to use and just activated my script.\nThis took me around 2 hours to set up and it was my first time doing this ever.","Q_Score":0,"Tags":"python,selenium","A_Id":67454050,"CreationDate":"2021-05-09T00:41:00.000","Title":"its possible start a local selenium task, from a webserver?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have created a chatbot using nltk, keras and tkinter. And I have also created a website using Python and Flask. How can I integrate both of them? 
i.e. how can I make my chatbot run after the website is opened (run)?\nWhen I import chatgui.py (the chatbot file) and execute it in my main.py (the Python file that builds the website using the Flask framework), only one of them runs, not both.\nPlease suggest some ideas on how I can make both of them run.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":69,"Q_Id":67468883,"Users Score":0,"Answer":"When a client visits your site, the client computer will not execute Python code. Only the server side will execute the Python code, so the tkinter part of your app is not needed. The user's GUI is rendered with HTML\/JavaScript in their browser.\nThere are a lot of ways to go about it, but I think the most common approach would be to scrap the GUI part written with tkinter, and instead re-create a JavaScript-based GUI that will be served by Flask. Have chats instead pass between the client and server with JavaScript's fetch API.","Q_Score":0,"Tags":"python,flask,tkinter,keras,nltk","A_Id":67469041,"CreationDate":"2021-05-10T10:32:00.000","Title":"How to run a chatbot created using nltk , keras and tkinter on a website created using python and flask?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to check if my Discord bot has permission to join a voice channel before attempting to connect to it. I'm using the Python discord API. I tried passing in the Bot.user object to the VoiceChannel.permissions_for() function but it requires a Member object.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":287,"Q_Id":67475387,"Users Score":0,"Answer":"To get a member object for the bot user, you can use ctx.guild.me. 
It will return the member object if the command is called in a guild.","Q_Score":0,"Tags":"python,discord,discord.py","A_Id":67475999,"CreationDate":"2021-05-10T17:52:00.000","Title":"How to check if my bot has permission to view a voice channel before trying to join it?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to make an application that sends live values, like sensor data, to AWS CloudWatch. Is there any way to send live data to CloudWatch? I made a Python script that publishes a custom metric to AWS successfully. How can I change this value frequently?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":96,"Q_Id":67478768,"Users Score":1,"Answer":"How can I change this value frequently?\n\nYou can't change existing metric values as they are immutable. You can only add new ones based on the increasing values of your timestamps.","Q_Score":1,"Tags":"python-3.x,amazon-web-services,raspberry-pi,boto3","A_Id":67478821,"CreationDate":"2021-05-10T23:31:00.000","Title":"How to update metrics in aws cloud watch using boto3 through python code?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using the AWS EMR Cluster service.\nMachine learning tasks such as Spark builds are performed on the cluster, referring to model files in an S3 bucket.\nI am seeing a lot of HEAD and LIST requests to S3, and I am wondering if it is normal for AWS EMR to send this many LIST and HEAD requests for the S3 model files.\nSymptom: AWS EMR makes about 2.7 million HEAD and LIST requests per day to S3.","AnswerCount":1,"Available 
Count":1,"Score":0.0,"is_accepted":false,"ViewCount":182,"Q_Id":67479760,"Users Score":0,"Answer":"A lot of LIST\/HEAD requests get sent.\nThis is related to how directories are emulated in the hadoop\/spark\/hive S3 clients; every time a process looks to see if there's a directory on a path it will issue a LIST request, maybe a HEAD request first (to see if it's a file).\nThen there's the listing of the contents, more LIST requests, and finally reading the files. There'll be one HEAD request on every open() call to verify the file exists and to determine how long it is.\nFiles are read with GET requests. Every time there's a seek()\/buffer read on the input stream and the data isn't in a buffer the client has to do one of\n\nread to the end of the current ranged GET (assuming it's a ranged GET), discarding the data, then issue a new ranged GET\nabort() the HTTPS connection, negotiate a new one. Slow.\n\nOverall then, a lot of IO, especially if the application is inefficient about caching the output of directory listings, whether files exist, doing needless checks before operations (if fs.exists(path) fs.delete(path, false)) and the like.\nIf this is your code, try not to do that.\n(disclaimer: this is all guesses based on the experience of tuning the open source hive\/spark apps to work through the S3A connector. I'm assuming the same for EMR)","Q_Score":0,"Tags":"list,amazon-s3,python-requests,amazon-emr,head","A_Id":67485267,"CreationDate":"2021-05-11T02:25:00.000","Title":"I am wondering if it is normal for AWS EMR to send a lot of list and head requests for S3 model files","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I would like to scrape multiple URLs, but they are of a different nature, such as different company websites with different HTML backends. 
Is there a way to do it without coming up with customised code for each URL?\nUnderstand that I can put multiple URLs into a list and loop through them.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":32,"Q_Id":67484189,"Users Score":0,"Answer":"I fear not, but I am not an expert :-)\nI could imagine that it depends on the complexity of the structures. If you want to find the text \"Test\" on every website, I could imagine that soup.body.findAll(text='Test') would return all occurrences of \"Test\" on the website.\nI assume you're aware of how to loop through a list here, so that you'd loop through the list of URLs and for each check whether the searched string occurs (maybe you are looking for something else, e.g. an \"apply\" button or \"login\"?).\nAll the best,","Q_Score":1,"Tags":"python,web,web-scraping,beautifulsoup","A_Id":67484276,"CreationDate":"2021-05-11T09:39:00.000","Title":"Scraping Information from multiple URLS that are different in structure","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Situation: My AWS Lambda analyzes a given file and returns cleaned data.\nInput: path of the file given by the user\nOutput: data dictionary\nCurrently in my Lambda I:\n\nsave the file from the local PC to an S3 bucket\nload it from S3 into my Lambda\nanalyze the file\ndelete it from S3.\n\nCan I simplify the process by loading into the Lambda's \"cache memory\"?\n\nload it from the local PC into my Lambda\nanalyze the file","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":78,"Q_Id":67500929,"Users Score":0,"Answer":"First of all, you might be using the wrong pattern. 
Just upload the file to S3 using the AWS SDK and handle it in Lambda with an s3:ObjectCreated event.","Q_Score":0,"Tags":"python,amazon-web-services,aws-lambda","A_Id":67501770,"CreationDate":"2021-05-12T09:31:00.000","Title":"AWS Lambda Python: Can I load a file from local PC when calling my lambda?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I tried crawling a specific site using selenium and webdriver_manager.chrome, and my code crawled elements of that site completely. But after crawling, the following error message appears in the console window.\nERROR:gpu_init.cc(426) Passthrough is not supported, GL is disabled\nWhen I first found it, I unchecked hardware acceleration in Chrome, but it didn't solve the problem.","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":103063,"Q_Id":67501093,"Users Score":1,"Answer":"I got this error as a result of using NVIDIA's Quadro View, so for me the fix was to disable that.","Q_Score":35,"Tags":"python,selenium,selenium-webdriver,selenium-chromedriver","A_Id":68651358,"CreationDate":"2021-05-12T09:42:00.000","Title":"Passthrough is not supported, GL is disabled","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"When I use python3 -m http.server as far as I know it's supposed to create a webpage of my machine.\nI did that and I managed to open the page in the same machine, but I couldn't open it in another machine. 
It just shows me \"Unable to connect\".","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":38,"Q_Id":67517454,"Users Score":0,"Answer":"Okay, regardless of what you have tried so far, here is the whole process as I usually do it.\nGo to the directory whose content you want to serve.\nRun python3 -m http.server 8080.\nIn the browser of the same machine you can now use localhost:8080, but if you want to open it on another machine, make sure both machines are connected to the same network and note down the IP address of the first machine.\nThen in the second machine's browser enter the following:\nipaddressofyourmachine:8080\ne.g.: 192.168.1.2:8080","Q_Score":0,"Tags":"html,python-3.x,linux,http,command-line","A_Id":67517791,"CreationDate":"2021-05-13T10:12:00.000","Title":"I cannot use the page from python3 -m http.server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Firebase authentication to authenticate users. Whenever the user is logged in, I get the user's ID token with user.getIdToken(true) and set it in local storage. With that token in the authorization header, I am requesting my back-end API.\nOn the back-end side, I am using the Firebase Admin SDK to authenticate the request and the user with the client-side ID token passed in the request authorization header.\nThis works for a while. But after some time I get the error:\n\nExpiredIdTokenError: Token expired, 1620908095 < 1620915515\n\nI saw that Firebase refreshes the ID token on its own. But I don't think that's the case. 
I have looked through the developer tools network tab, and there's also an observer method to check whenever the token has changed => onIdTokenChanged(), but the token is never refreshed.\nI couldn't find any information in the Firebase docs either, and I was hoping you could help me:\n\nHow can I generate a token without an expiration limit, to last until signed out or at least for some more time (1 week maybe)?\nIf I cannot set the expiry limit of the token, what steps should I take so that I can send a valid unexpired token when I request data from my back-end? Do I have to call user.getIdToken(true) every time and get a fresh token before I request from my back-end API?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":164,"Q_Id":67521781,"Users Score":2,"Answer":"The idTokenChanged() observer is a bit misleading. It will fire when the token is refreshed, but the token is only refreshed automatically when you also use other Firebase products (like its database or file storage). In other cases, as you said, you should call user.getIdToken(), which will refresh an expired token for you if necessary, every time you call your API. You don't need to pass true into this method unless you want to have a completely fresh token every time (which you most likely don't need).\nTo my knowledge you cannot control the expiration of tokens generated with the client SDK; for that you would need to generate your own tokens on the server.","Q_Score":2,"Tags":"javascript,python,firebase,vue.js,firebase-authentication","A_Id":67521899,"CreationDate":"2021-05-13T15:18:00.000","Title":"How to handle expired user ID token in firebase?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am writing a trading bot in Python for the Binance platform. I have selected 300 cryptos. 
Binance has a websocket API for each pair. I am able to fetch price data for one pair. I need to fetch prices for 300 cryptos in parallel and do some calculations. The data is pushed every 100ms. Each pair has a different URL. So I guess I need to open 300 websocket connections in parallel.\nAll of this should be done under 100 ms and the data stored in a single list. I haven't used multiprocessing, multithreading, asyncio etc. so I have no idea how to do this in Python.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":162,"Q_Id":67537138,"Users Score":0,"Answer":"I would try the following:\n\nUse a *.yaml config file with all the URLs\nHave an empty global list in the main thread where you store the newest value (list) or values (dict) sent by each socket URL\nOpen 1 socket per URL in its own thread with threading.Thread\nCheck whether you have to ping from time to time so your client isn't disconnected.","Q_Score":0,"Tags":"python,python-3.x,multithreading,websocket,python-multiprocessing","A_Id":67694531,"CreationDate":"2021-05-14T15:45:00.000","Title":"Need help regarding parallelization for 300 websocket connections in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Maybe this question is very simple for many, but I am looking for the most relevant answer from the community.\nI am running my YouTube channel, and after uploading a video and getting the video link, it's very hard and time-consuming to post about the video manually on all social media (with relevant hashtags).\nI am looking for a Python script that automates this task; many commercial software\/websites offer such services, but I don't want to share my information with...\nLooking for the best response.\nThanks","AnswerCount":2,"Available 
Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":261,"Q_Id":67537678,"Users Score":1,"Answer":"That's quite hard to do; there are some automated systems that can do this, but they are not stable and, I guess, they are forbidden on some social media platforms.","Q_Score":0,"Tags":"python,python-3.x,automation,youtube,social-media","A_Id":67537765,"CreationDate":"2021-05-14T16:23:00.000","Title":"Automate Social Media Posts Using Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am running a Python app in a different network namespace and it opens a TCP connection to a websocket. The problem is that this connection has microfreezes. It runs fine for approximately a minute and then hangs for a second. I think it's a network namespace problem, because if I run it outside the namespace there is no problem.\nI monitored the TCP buffers with ss -tm and what I notice is that when the freeze starts, the buffers also start to fill up. They seem to be empty the rest of the time. Any help would be appreciated.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":50,"Q_Id":67539287,"Users Score":0,"Answer":"I found the problem: the app tried to connect to a localhost socket. 
It failed because that socket is open on the main IP and not in the network namespace.\nBecause the app is written in Python, it would hang while trying to connect.","Q_Score":0,"Tags":"python,linux,networking","A_Id":67540910,"CreationDate":"2021-05-14T18:25:00.000","Title":"network namespace TCP microfreeze","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am creating a bot for my server, but someone is removing reactions on my server, so I want to detect who is removing the reactions. But I have no idea how to do that, or even if it is possible or not.\nYour help is highly appreciated","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":523,"Q_Id":67545320,"Users Score":0,"Answer":"Unfortunately, the API does not give the information on who removed the reaction.","Q_Score":1,"Tags":"python,discord,discord.py","A_Id":67549401,"CreationDate":"2021-05-15T09:40:00.000","Title":"How to detect that who removed the reaction discord.py","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I currently have a socketio webserver using WSGI and I host it with gunicorn. I also have set up a javascript client which uses a web browser. I've managed to get these two to be able to communicate.\nI'm working on creating an information service which takes events which happen in a game from a log and then parses it through a separate Python script to create a readable GUI. But that bit doesn't really matter.\nOnce I have my string of text from my separate Python program, how do I send it to all clients connected to my webserver? 
The program isn't part of the socketIO server so, as far as I'm aware, it can't use emit().\nMy idea was to create a separate Python client which connects to the socketIO server and do it that way. I've illustrated the flow below:\n\nsocketIO python Client -- DATA --> socketIO server\nsocketIO server -- DATA --> ALL socketIO clients.\n\nI'm struggling to work out a way to perform this. Could anyone help or suggest a more efficient way? Let me know if my explanation is unclear.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":33,"Q_Id":67550430,"Users Score":0,"Answer":"Thanks Miguel, I hadn't seen that in the documentation. Despite reading it over and over, I was stuck for a while. My stupid mind didn't come across the thought that I'd need to install Redis-server...\nAll working now!\nLesson learned: check requirements for my program...","Q_Score":0,"Tags":"python-socketio","A_Id":67661308,"CreationDate":"2021-05-15T19:18:00.000","Title":"Sending data generated from a seperate python program to socketIO clients with a socketIO server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm kinda new to web security.\n\nDoes certbot automatically provide an RSA key to the server and encrypt\/decrypt the whole path (SSL) of the connection (from client to server and server to client) and make it HTTPS?\nDo I simply not bother about security while developing a web app and simply make my app in HTTP and use certbot to make it secure?\nDoes SSL protect against replication attacks?\nIf the answer to 1 is no, please suggest a python module to encrypt the Flask app.\n\nThanks.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":46,"Q_Id":67566716,"Users Score":0,"Answer":"Certbot provides an SSL certificate which can be used to 
encrypt your connection and register you with a third party (Let's Encrypt). When users connect to your app, they will first verify your server's identity with Let's Encrypt before encrypting your connection. If you're using a server like NGINX then the easiest way to use certbot will be with the nginx certbot plugin.\nHTTPS is not a security wildcard; it does provide encryption between two endpoints, preventing a host of MITM attacks, but there are many threat vectors to consider. Depending on how your app works you'll still need to worry about SQL injections, malicious file uploads, remote code execution etc. HTTPS has great added security benefits absolutely, but it shouldn't replace any other security methods.","Q_Score":0,"Tags":"python,flask,certificate,ssl-certificate,certbot","A_Id":67569062,"CreationDate":"2021-05-17T08:56:00.000","Title":"Does certbot automatically provide encryption?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a Dialogflow CX agent that returns a Custom payload. My client is a Python application using the dialogflowcx_v3beta1 SDK to call DetectIntent. The application needs to forward the custom payload in JSON format to another application, but I have been unable to find a way to convert the structured payload to JSON. 
There is no schema associated with the custom payload, which could be literally any valid JSON, and because it will simply be forwarded to another component, the application has no reason to interpret the content in any way.\nIs there a way to serialize the custom payload to JSON?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":194,"Q_Id":67571574,"Users Score":0,"Answer":"Unless you're asking a Python question, the \"CX Solution\" could be to use the Fulfillment text instead of the Custom Payload feature, and include the serialized JSON there.","Q_Score":0,"Tags":"python,dialogflow-cx","A_Id":67629482,"CreationDate":"2021-05-17T14:21:00.000","Title":"How can I get Custom Payload as string","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I copied the websocket request from the Chrome network tab and tried to use it as fetch, and it clearly is not working because fetch only supports HTTP(S).\nIs there a way I can connect to the WhatsApp websocket connection with either Python or JavaScript?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1839,"Q_Id":67578454,"Users Score":0,"Answer":"You need to use flask-socketio on the server and a socket library for JavaScript. The steps are to connect and then send data to the function mentioned. 
You need to read the documentation as it is pretty simple.","Q_Score":6,"Tags":"javascript,python,websocket,whatsapp","A_Id":68143084,"CreationDate":"2021-05-18T00:21:00.000","Title":"Get websocket connection to WhatsApp using Python or JavaScript?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have tried googling this and can't find much information on it.\nI simply just want to know, are there any drawbacks to accessing Binance via API from two devices?\nSpecifically my situation is that I have a trading bot on a VPS that runs 24\/7. While it is running, I would like to work on updates on my computer, which entails accessing the account and fetching market data.\nWill this interfere with my trade bot in any way?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":420,"Q_Id":67581122,"Users Score":0,"Answer":"Binance API keys do not have IP address or session limitations unless you impose them yourself in the settings.\nAlso, this is more of a question for Binance customer support, as Binance is a private venture and they can decide how you can use your API or not.","Q_Score":0,"Tags":"python,binance,binance-api-client","A_Id":67581989,"CreationDate":"2021-05-18T06:38:00.000","Title":"Accessing Binance with API from two devices at once","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Python to directly run a script that automatically replies to users' comments.\nI have a client secrets file after applying for a web application. 
However, when I run it to get credentials, it first asks me to Please visit this URL to authorize this application and then, when I click on it, it gives me this error:\nError 400: redirect_uri_mismatch The redirect URI in the request, urn:ietf:wg:oauth:2.0:oob, can only be used by a Client ID for native application. It is not allowed for the WEB client type. You can create a Client ID for native application at.\nWhat application type should I have applied for the OAuth in this case?\nI know that this issue could be related to the redirect URL. But because I am running this out of my script on my local computer, I am wondering what my URL should be.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":120,"Q_Id":67630391,"Users Score":0,"Answer":"Your issue above is precisely due to the redirect URI mismatch; the error response you got from the API is indicating this.\nTo fix your issue, you'll have to have the same redirect URI set on your project within the Google developers console and, at the same time, within your Python script.\nIf you are indeed running your application on your desktop (laptop) computer, then follow the error message's advice: within the Google developers console, set your project type to be of the Desktop kind.","Q_Score":0,"Tags":"python-3.x,youtube-api,youtube-data-api","A_Id":67632908,"CreationDate":"2021-05-21T03:17:00.000","Title":"YouTube Data API v3 OAuth setup from Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to get a complete list of files in a folder and all its subfolders regularly (daily or weekly) to check for changes.
The folder is located on a server that I access as a network share.\nThis folder currently contains about 250,000 subfolders and will continue to grow in the future.\nI do not have any access to the server other than the ability to mount the filesystem R\/W.\nThe way I currently retrieve the list of files is by using python's os.walk() function recursively on the folder. This is limited by the latency of the internet connection and currently takes about 4.5h to complete.\nA faster way to do this would be to create a file server-side containing the whole list of files, then transferring this file to my computer.\nIs there a way to request such a recursive listing of the files from the client side?\nA python solution would be perfect, but I am open to other solutions as well.\nMy script is currently run on Windows, but will probably move to a Linux server in the future; an OS-agnostic solution would be best.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":80,"Q_Id":67632192,"Users Score":1,"Answer":"You have provided the answer to your question:\n\nI do not have any access to the server other than the ability to mount the filesystem R\/W.\n\nNothing has to be added after that, since any server-side processing requires the ability to (directly or indirectly) launch a process on the server.\nIf you can collaborate with the server admins, you could ask them to periodically start a server-side script that would build a compressed archive (for example a zip file) containing the files you need, and move it to a specific location when done.
Then you would only download that compressed archive, saving a lot of network bandwidth.","Q_Score":0,"Tags":"python,smb,fileserver","A_Id":67632560,"CreationDate":"2021-05-21T07:01:00.000","Title":"Request recursive list of server files from client","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to connect two computers using the socket library with Python. One of the systems is my local system and the other is an instance in AWS. The one hosted in AWS has its own public address. And my local system only has a private IP address (192.168.10.1). I am able to establish a connection from my local system to the system in AWS. But not vice versa.\nIs it possible to connect from the AWS system to the local system (here the local system should be listening for incoming connections)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":28,"Q_Id":67635833,"Users Score":0,"Answer":"You need to configure your router to forward requests from AWS to the computer on your network. It would be good practice to set your local computer with a static IP address or use DHCP reservation to ensure that the address doesn't change.\nWarning: you will also need to ensure your connection is secure, most likely using a combination of authentication, authorisation and encryption.
Forwarding ports exposes your device to the entire world.","Q_Score":0,"Tags":"python,amazon-web-services,sockets","A_Id":67635927,"CreationDate":"2021-05-21T11:19:00.000","Title":"Connecting Server to Client using Socket Programming","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to establish a website connection (web login) via Python. The login appears to need 3 keys (next to password and username of course). 2 of them are handed over via get and the third one is a csrf-key. The csrf-key is not contained in the html body of the current page, nor is it in the link. (I checked this explicitly by using Ctrl+F)\nWhat other common ways are there to generate the csrf-key on the fly? (I explicitly checked by inspecting the post request that the csrf is included in the request, but I don't understand how it gets there)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":24,"Q_Id":67648915,"Users Score":0,"Answer":"The csrf key must be somewhere in the webpage you are trying to access.\nThe csrf key is not generated by the user; instead, it is a unique secret value generated by the server-side application and transmitted to the client.","Q_Score":0,"Tags":"python,html,authentication","A_Id":67659054,"CreationDate":"2021-05-22T11:22:00.000","Title":"csrf-key not as hidden input or as get: How can it be generated?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a blog and an application that gives the number of comments and posts on my blog by using my blog's API.\nThe issue I'm having is that I want to have my application receive new comments from my blog in
real-time.\nMy solution:\nI can have my application call the API every 30 seconds or so to check whether there is a response (i.e. whether there is a new comment).","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":20,"Q_Id":67660673,"Users Score":0,"Answer":"I think the best solution is to use something called Long Polling to get updates. It's a programming technique for handling requests while using fewer resources (such as CPU) over time. For a detailed solution for your case, search for\n\nLong Polling in Flask application","Q_Score":0,"Tags":"python,api,flask","A_Id":67660767,"CreationDate":"2021-05-23T14:12:00.000","Title":"How to get new (real time) comments from my blog?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Apologies in advance for the vague questions. I am trying to understand Websockets.\nIs the webserver a different process from the WebSocket server?\nIf I have one webpage that is being viewed by different client browsers, and I send new data via the socket server, do all the viewing clients get updates via a single message, or do I have to send one message per client?\nIf I have multiple pages receiving updates from sockets; do I need one socket server per page or can I use one socket server to send to multiple pages? E.G send \"YES\" to \/page1.html and send \"NO\" to \/page2.html using one socket server process?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":133,"Q_Id":67680598,"Users Score":1,"Answer":"The Websocket acts like a listener on the client side and like a sender on the server side.\nThe Websocket client is listening to the socket, and multiple clients can listen to the same socket.
This happens by connecting to a specific socket from the client side.\nTo distinguish which messages should be processed by a client and which should not, the socket could send an \"identifier\" in the package, which will be ignored by the pages that should do nothing with it.","Q_Score":1,"Tags":"python,websocket","A_Id":67694405,"CreationDate":"2021-05-25T01:26:00.000","Title":"Understanding Python Websockets","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to extract an image from a Tableau workbook using the REST API in Python.\nI am able to get a token for Tableau authentication and fetch the corresponding workbook and view id.\nI am extracting the image using an API call like:\nurl = \"http:\/\/tableau.xyz.com\/api\/3.7\/sites\/bfda4337123971272\/views\/b55-83e229905e17\/image?image-resolution=high\"\nI am able to save the response as a png image and get the output of the worksheet as an image, but when the worksheet has large detailed data with a long scroll bar, the image is not getting fully extracted. The response size is less than 50 MB, which is the maximum limit.\nCan someone please suggest if we need to add any other options to the GET URL in Python.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":74,"Q_Id":67690005,"Users Score":0,"Answer":"That\u2019s how requesting an image works here.\nIf your view is a worksheet instead of a dashboard or story, you can paginate it by putting a field on the pages shelf, and using the Page Setup menu command in Tableau Desktop to specify page formatting details.\nAnother option is to design your dashboard to fit on a page and then have a filter or parameter that allows you to paginate through data.
(You can pass a filter or parameter setting with the URL)\nTypically, you want to design a version of your Tableau view that is laid out well for printing. You don\u2019t typically get great results by simply printing a view that was designed to be used interactively on a computer screen. That\u2019s less effort than it sounds, because both views can use the same underlying data sources, and many of the same component views.","Q_Score":1,"Tags":"python,image,api,rest,tableau-api","A_Id":67696532,"CreationDate":"2021-05-25T14:25:00.000","Title":"Extract image Scrolldown issue using Tableau REST API in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"So I'm trying to import a taxii2client server but it's not recognising anything. I've checked and Python is correctly installed, I've tried an absolute path to the file location, I've tried to reinstall everything and I can't help but feel like I'm missing something obvious.\nPlease bear in mind that I am only just starting out with Python.\nThe only code I currently have in my program is just an import:\n\"from taxii2client.v20 import Server\"","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":57,"Q_Id":67703330,"Users Score":0,"Answer":"I'm guessing you haven't installed the module.\nYou can do this with pip in a terminal window:\npip install taxii2-client","Q_Score":0,"Tags":"python,taxii","A_Id":67703396,"CreationDate":"2021-05-26T10:43:00.000","Title":"Trying to connect to a taxii server, but i can't import anything","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Requests doesn't resolve nameservers via the proxies argument
given to it by default.\nHow can we make it do that?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":238,"Q_Id":67708048,"Users Score":1,"Answer":"Based on the requests documentation:\n\nUsing the scheme socks5 causes the DNS resolution to happen on the client, rather than on the proxy server. This is in line with curl, which uses the scheme to decide whether to do the DNS resolution on the client or proxy. If you want to resolve the domains on the proxy server, use socks5h as the scheme.\n\nHence we just have to set socks5h as the scheme of the proxy given to the proxies argument.","Q_Score":0,"Tags":"python,python-3.x,python-requests","A_Id":67708168,"CreationDate":"2021-05-26T15:26:00.000","Title":"How can we use proxy for connecting to dns server on requests?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to send simultaneous get requests with the Python requests module.\nWhile searching for a solution I've come across lots of different approaches, including grequests, gevent.monkey, requests futures, threading, multi-processing...\nI'm a little overwhelmed and not sure which one to pick, regarding speed and code readability.\nThe task is to download < 400 files as fast as possible, all from the same server. Ideally it should output the status for the downloads in the terminal, e.g.
print an error or success message per request.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":139,"Q_Id":67721519,"Users Score":0,"Answer":"I would use threading, as it is not necessary to run the downloads on multiple cores like multiprocessing does.\nSo write a function that calls requests.get() and then start it as a thread.\nBut remember that your internet connection has to be fast enough, otherwise it wouldn't be worth it.","Q_Score":1,"Tags":"python,multithreading,networking,download,python-requests","A_Id":67722464,"CreationDate":"2021-05-27T11:48:00.000","Title":"Best way to download files simultaneously with Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"My web product is on my server. It does not have a GPU. I need to run a few AI algorithms and display their output on my website. I want to run that code on another system of mine which has a GPU. Is that possible? If yes, can you please suggest how?\nEdit: GPU and CPU are on the same server. Right now the algorithms are not hosted on any server.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":61,"Q_Id":67725450,"Users Score":0,"Answer":"Yes, it is kind of possible; let me explain. First, you should have the Python script that you want to run on your computer with a GPU available. Second, you need to code a Python-based server that listens for incoming connections; when a connection reaches the server, the server executes the Python file, saves the output through piping and then returns the output to the connection. This can be deployed with a website easily.
You can use PHP sockets to send and receive data to your Python server.\nIdea: a Python server running on a specific port listens for incoming connections ==> a connection hits the server ==> the server runs the code that contains your AI, saves the output and sends it back to the client.","Q_Score":0,"Tags":"python-3.x,gpu,artificial-intelligence","A_Id":67735104,"CreationDate":"2021-05-27T15:34:00.000","Title":"Accessing python file from another computer","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to make a bot using discord.py which can temporarily create new sub-bots and then delete them?\nI can't find anything in the documentation. Is there something I'm overlooking, is it possible to implement with a workaround, or was discord.py not meant to be used like this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":67726234,"Users Score":0,"Answer":"No, you cannot do that, but you can use the webhook system to create a \"bot\" which you can then delete. It looks exactly like an unverified bot and is what bots like emoji.gg use.","Q_Score":0,"Tags":"python,discord","A_Id":68867176,"CreationDate":"2021-05-27T16:23:00.000","Title":"Sub bot generation in discord.py","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python script and I'm sending post requests to the same url with different data (different ids). I have to send requests for each id and check them continuously to see if there is a change.
I'm handling it by iterating an \"ids\" list with a for loop, sending a request for each id, and then iterating the list again and again.\nBut I want to check every one of them every 10 seconds max, and if I have 1000 ids in the list, it takes longer to get back to the first id. I can solve this by running 10 parallel scripts to check 100 ids with each script. Is there any alternative you would suggest? Thanks.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":36,"Q_Id":67727222,"Users Score":0,"Answer":"If you mean literally running 10 parallel scripts, then yes, you can improve upon this with multithreading from a single script.","Q_Score":0,"Tags":"python,python-3.x,python-requests","A_Id":67727248,"CreationDate":"2021-05-27T17:32:00.000","Title":"Concurrency for requests","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I try to use the runner.py example via the command line, I encounter the following error:\nC:\\Program Files (x86)\\Eclipse\\Sumo\\doc\\tutorial\\traci_tls>runner.py\nTraceback (most recent call last):\nFile \"C:\\Program Files (x86)\\Eclipse\\Sumo\\doc\\tutorial\\traci_tls\\runner.py\", line 122, in \ngenerate_routefile()\nFile \"C:\\Program Files (x86)\\Eclipse\\Sumo\\doc\\tutorial\\traci_tls\\runner.py\", line 47, in generate_routefile\nwith open(\"data\/cross.rou.xml\", \"w\") as routes:\nPermissionError: [Errno 13] Permission denied: 'data\/cross.rou.xml'\nWho knows how to solve this error?\nRegards, Ali","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":110,"Q_Id":67742345,"Users Score":0,"Answer":"Please make a local copy of the tutorial to your home directory before running it.
You probably do not have write access to the installation directory.","Q_Score":0,"Tags":"python,sumo","A_Id":67761086,"CreationDate":"2021-05-28T16:17:00.000","Title":"How to use runner.py in sumo via command line and solving its common error(permission denied)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am thinking of creating a web automation using Python: basically it will open a browser using Selenium webdriver, proceed to click on a few buttons, then, using the requests post method, fill up a form and then continue to use Selenium again. So in short I am asking if we are able to use both Selenium and Python requests interchangeably?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":327,"Q_Id":67744397,"Users Score":1,"Answer":"Of course you can! I use both libraries interchangeably in the same code file. It is very helpful.\nFor example, first I use the requests library to fetch the webpage; next I use Selenium whenever I have to change a specific parameter in the webpage (like selecting a radio button, inserting form credentials, etc.), and then, based on the complexity of the source code, I either use BeautifulSoup, or I continue using Selenium!","Q_Score":0,"Tags":"python,selenium,python-requests,http-post,webautomation","A_Id":67744746,"CreationDate":"2021-05-28T19:15:00.000","Title":"Is it possible to use selenium and requests at the same time?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using Visual Studio Community Edition to make a discord bot.
I am using discord.py (updated to the latest version) at the moment and whenever I use from discord.ext import commands I encounter the following error:\n\nunresolved import discord.ext.tasks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":48,"Q_Id":67764149,"Users Score":0,"Answer":"Check that the name of your file isn't the same as any module that you might be using. So don't name it discord.py or anything like that.","Q_Score":0,"Tags":"python,discord,discord.py,bots","A_Id":67764801,"CreationDate":"2021-05-30T17:44:00.000","Title":"'unresolved import discord.ext.tasks' issue","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to know if it is possible to capture a lot of actions as a flow and export it to Selenium, in order to repeat that flow.\nFor example, I need to uninstall, reinstall and configure a few applications several times each day; the process is always the same, and it's a long process, so in order to avoid navigating the code to capture all the IDs and classes, is there any way of doing that?\nKind regards.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":196,"Q_Id":67834541,"Users Score":0,"Answer":"Using pyautogui or something similar, you could record the location of each click and either use the color of certain pixels to initiate different stages, or wait x amount of time before clicking each saved point on screen.","Q_Score":0,"Tags":"python,selenium,google-chrome,firefox,automation","A_Id":67834640,"CreationDate":"2021-06-04T09:07:00.000","Title":"is there any way of capture a flow of actions with chrome and export to selenium?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System
Administration and DevOps":0,"Web Development":0},{"Question":"I've downloaded some tweets with Twython.\nI want to get\/access only the 'name' attribute from the 'user' object dictionary (e.g. {'id': 540179903, 'id_str': '540179903', 'name': 'Makis Voridis' etc.\nHow could I solve this??\nThanks!!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":33,"Q_Id":67835740,"Users Score":1,"Answer":"If it is a dictionary, you can simply access each key by doing tweet['name'], with tweet being your dictionary.","Q_Score":0,"Tags":"python,dataframe,twitter,twython","A_Id":67835882,"CreationDate":"2021-06-04T10:32:00.000","Title":"How can I get\/access the 'name' attribute from the user data dictionary in twitter in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing some Python x Selenium unit tests for the login functionality of a website. I have already written a unit test for a valid login, but I want to write one for the \"Remember Me\" functionality. I could easily just copy\/paste the login unit test code into the new one, but that would make a VERY long block of code.
I was wondering if there was any way to utilize another unit test's code for a separate unit test in order to save some room.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":67853737,"Users Score":0,"Answer":"It's not about reusing unit tests.\nYou should write your methods \/ classes in a way that they can be reused easily.\nThat way your login method will call the same helper methods that the \"remember me\" method uses, but with appropriate changes according to the differences in scenario flow.","Q_Score":1,"Tags":"python,selenium,unit-testing,selenium-webdriver","A_Id":67853938,"CreationDate":"2021-06-05T20:36:00.000","Title":"Is there a way to run a unit test inside a different unit test?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I got this Python script that removes duplicated files and sorts them into folders corresponding to their extension.\nWhat I want to do is: when someone uploads a folder to the website and then clicks a button, the Python script is called and starts; it creates a folder to store the sorted files for users to later download those files.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":76,"Q_Id":67854173,"Users Score":1,"Answer":"Since JavaScript is client-side, you'd need to set up some sort of backend that can handle the data sent by the user and then sort it.
moustafa linked a very good explanation; I recommend reading that and you should be sorted.","Q_Score":0,"Tags":"javascript,python","A_Id":67854236,"CreationDate":"2021-06-05T21:50:00.000","Title":"Using a python script by calling it through javascript","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to check if any files got uploaded in S3.\nIf there were no file uploads in the last 10 hours, I want to trigger a lambda that can be notified by SNS.\nHow can I trigger the lambda when there were no uploaded files in S3 in the last certain number of hours?\nHow could I trigger the lambda when there were no files uploaded in the last hour?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":56,"Q_Id":67872516,"Users Score":0,"Answer":"First, you need to know when the most recent object was uploaded to S3. That's not a trivial task for a bucket with lots of objects. You could configure S3 to trigger a Lambda function to run every time an object is uploaded and write a 'last-uploaded-time' timestamp to DynamoDB.\nNext, you need to schedule the checking of the 'last-uploaded-time' timestamp. You could do that with a CloudWatch\/EventBridge scheduled event. Run a Lambda function on a schedule, e.g. every hour.","Q_Score":0,"Tags":"python,boto3","A_Id":67872593,"CreationDate":"2021-06-07T13:26:00.000","Title":"How can I trigger lambda when there are no uploaded files in s3?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to find out when an alert happens so I can automatically accept it.
I've placed self.driver.switch_to.alert.accept() in various places in the code but I always get a selenium.common.exceptions.NoAlertPresentException. When I don't place it anywhere I get a selenium.common.exceptions.UnexpectedAlertPresentException. When I use expected_conditions I get a selenium.common.exceptions.TimeoutException. I don't know what to do at this point. Can anyone help?\nPython\/Django Backend. Using Selenium (Firefox)","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":47,"Q_Id":67878975,"Users Score":0,"Answer":"I don't think there's a method for that, but if you know it will happen, make a while loop with a try\/except block until the alert happens.","Q_Score":0,"Tags":"python,django,selenium,firefox","A_Id":67879012,"CreationDate":"2021-06-07T21:34:00.000","Title":"python selenium - how can i find out exactly when an alert happens?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two working discord bots and a test server. I want my first bot to add the second bot to the test server. How would I go about this? I have researched it and found nothing. Any help would be appreciated.
Thanks!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":98,"Q_Id":67911378,"Users Score":1,"Answer":"This isn\u2019t possible; bots can\u2019t run other bots\u2019 commands or add other bots to a server.","Q_Score":2,"Tags":"python,discord,discord.py","A_Id":68224980,"CreationDate":"2021-06-09T20:35:00.000","Title":"Is it possible for my discord bot to add another bot to its server?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hi, I was wondering if it is possible to integrate a Dialogflow CX agent with Skype? I've researched but have not found anything up to date. Thanks in advance.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":119,"Q_Id":67948034,"Users Score":0,"Answer":"As Skype is not a built-in integration for Dialogflow CX, you need to create your own integration. I recommend looking into how to create your own custom integration and then integrating it with Skype or any other service you want.","Q_Score":0,"Tags":"python,node.js,chatbot,skype,dialogflow-cx","A_Id":67980759,"CreationDate":"2021-06-12T10:42:00.000","Title":"Is there a way to integrate a Dialogflow CX agent with Skype?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Why do we need a client for API calls? What is a client and what is its use?\nThe question may not be great, but it is really confusing as a beginner.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":41,"Q_Id":67966847,"Users Score":0,"Answer":"A client is simply the party that calls an API, which exists on the server side.
Some languages require you to create a pre-defined client object to post your requests, but you are always the client when you send a request.","Q_Score":0,"Tags":"python,api,rest","A_Id":67967092,"CreationDate":"2021-06-14T08:09:00.000","Title":"Need of Client in API","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using a NodeJS server to catch a video stream through a WebRTC PeerConnection and I need to send it to a python script.\nI use NodeJS mainly because it's easy to use WebRTC in it and the package 'wrtc' supports RTCVideoSink and python's aiortc doesn't.\nI was thinking of using a named pipe with ffmpeg to stream the video stream but 3 questions arose :\n\nShould I use python instead of NodeJS and completely avoid the stream through a named pipe part ? (This means there is a way to extract individual frames from a MediaStreamTrack in python)\n\nIf I stick with the \"NodeJS - Python\" approach, how do I send the stream from one script to the other ? Named pipe ? Unix domain sockets ? And with FFMpeg ?\n\nFinally, for performance purpose I think that sending a stream and not each individual frames is better and simpler but is this true ?\n\n\nThanks all !","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":293,"Q_Id":68014505,"Users Score":0,"Answer":"Finally, I found that the MediaStreamTrack API of Python's aiortc has recv().\nIt's a Coroutine that returns the next frame. So I will just port my NodeJS script to python using this coroutine to replace RTCVideoSink. 
No piping whatsoever!","Q_Score":0,"Tags":"python,node.js,ffmpeg,webrtc,mkfifo","A_Id":68017298,"CreationDate":"2021-06-17T07:07:00.000","Title":"Sending video stream from NodeJS to python in real time","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to get live video from my server side by clicking a button in the GUI on the client side. How can it be done? I am not getting any ideas. Can anybody help me with the concept or by giving me some webpage address where I can get related code? Thanks in advance.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":45,"Q_Id":68032676,"Users Score":0,"Answer":"If you have a website or have hosted it on your LAN, you can pass the command argument:\nbtn.config(command=myfunc)","Q_Score":0,"Tags":"python,sockets,tkinter","A_Id":68032782,"CreationDate":"2021-06-18T09:28:00.000","Title":"How to trigger a server side function by pressing a button in the client side gui using tkinter python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to parse through an Excel sheet that has columns for the website name (column A), the number of visitors (F), a contact at that website's first name (B), one for last name (C), for email (E), and the date it was last modified (L).\nI want to write a Python script that goes through the sheet, looks at sites that have been modified in the last 3 months, and prints out the name of the website and an email.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":37,"Q_Id":68040964,"Users Score":1,"Answer":"It is pretty straightforward to do this. 
I think a little bit of googling can help you a lot. But in short, you need to use a library called Pandas, which is a really powerful tool for handling spreadsheets, datasets, and table-based files.\nThe Pandas documentation is very well written. You can use the tutorials provided within the documentation to work your way through the problem easily. However, I'll give you a brief overview of what you should do.\nFirst, open the spreadsheet (Excel file) inside Python using Pandas and load it into a data frame (read the docs and you'll understand).\nSecond, using one of the methods provided by Pandas called where (actually there are a couple of such methods), you can easily set a condition (like whether a date is older than some date) and get the masked data frame (which represents your spreadsheet) back from the method.","Q_Score":0,"Tags":"python,excel","A_Id":68041142,"CreationDate":"2021-06-18T20:13:00.000","Title":"Parsing Excel sheet based on date, number of visitors, and printing email","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In the Twilio docs, there is an option to set the state of a conversation to active, inactive, or closed. It says \"Be aware that closed Conversations do not count towards the Participant-per-Conversation limit.\" However, I am not sure if a closed conversation counts towards the channels per identity limit (1000 in total). Can anyone clarify this? 
Thank you.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":67,"Q_Id":68041712,"Users Score":1,"Answer":"Twilio developer evangelist here.\nI checked with the team, and closed conversations do not count towards the channels per identity limit.\nFurther, I'll work to clarify that in the docs.","Q_Score":0,"Tags":"python,twilio,twilio-conversations","A_Id":68158059,"CreationDate":"2021-06-18T21:40:00.000","Title":"Twilio Conversations - Do closed conversations count towards the channels per identity limit","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Selenoid (or just Selenium remote) with Python and want to use pyautogui with it.\nAre there any ways to do that?\nI will be thankful for any information!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":93,"Q_Id":68046157,"Users Score":0,"Answer":"You can look at Action Chains. They are part of Selenium.","Q_Score":0,"Tags":"python,selenium,pyautogui,selenoid,selenium-remotedriver","A_Id":68046218,"CreationDate":"2021-06-19T11:02:00.000","Title":"PyAutoGUI with Selenium remote?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to create a Discord bot that pings a certain user role when an embedded message contains a certain keyword. 
To avoid ping spam when multiple embedded messages with the keyword are posted, I would like the bot to ping once, then pause for X seconds, and then, if there's a new message, react to that and repeat the process.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":106,"Q_Id":68048699,"Users Score":0,"Answer":"In Python, you have to import the time module, then use time.sleep():\nimport time\ntime.sleep(seconds)","Q_Score":0,"Tags":"python,discord,discord.py","A_Id":68048776,"CreationDate":"2021-06-19T16:18:00.000","Title":"How to pause on_message for a certain amount of time after it reacts to a message (Discord Py)?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've decided to make a bot social experiment for a game I play where people can just play with the bot, but I've run into a problem. I tried to make it loop, but the code does not - it only goes through 2 level ids then ends the run. How do I make it loop forever instead of going twice then stopping? Could it be gd.py or on my end?\nimports: keep_alive, gd, time, random, os","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":46,"Q_Id":68049060,"Users Score":1,"Answer":"Do you need to know why it only iterates two times? Okay, according to your code it should run only 2 times. In for level in levelids: you tell the program to iterate through the ids in levelids = [10565740,3979721]. So the code runs through these two elements and ends the program.\nSo you need it to run forever? 
According to your code, if you want it to run indefinitely, then you need to add elements to the levelids list indefinitely.\nI feel like there is confusion between what you want your code to do and what the code does for you.","Q_Score":0,"Tags":"python","A_Id":68049139,"CreationDate":"2021-06-19T17:04:00.000","Title":"How would I make this loop, it stops after 2 iterations","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to download photos \/ images.\nI want to do that by searching for a subject and getting all the photos which show under Google search (photos tab).\nI tried to use the Google crawler (GoogleImageCrawler), but it seems that the photos I'm getting are different from the photos which I can see via Google search (photos tab).\nHow can I get and download photos (filtered by subject) directly from the Google search engine?","AnswerCount":2,"Available Count":2,"Score":-0.0996679946,"is_accepted":false,"ViewCount":77,"Q_Id":68117034,"Users Score":-1,"Answer":"Also, try \"pip install download\"\nfrom download import download\npath = download(url, file_path)","Q_Score":2,"Tags":"python,web-crawler,google-crawlers,google-image-search","A_Id":68117232,"CreationDate":"2021-06-24T13:47:00.000","Title":"Why google crawler doesn't get google search photos?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to download photos \/ images.\nI want to do that by searching for a subject and getting all the photos which show under Google search (photos tab).\nI tried to use the Google crawler (GoogleImageCrawler), but it seems that the photos I'm getting are different from the photos which I can see via Google search 
(photos tab).\nHow can I get and download photos (filtered by subject) directly from the Google search engine?","AnswerCount":2,"Available Count":2,"Score":-0.0996679946,"is_accepted":false,"ViewCount":77,"Q_Id":68117034,"Users Score":-1,"Answer":"Use the google_search_py package; install it from PyPI.org and try it.","Q_Score":2,"Tags":"python,web-crawler,google-crawlers,google-image-search","A_Id":68117187,"CreationDate":"2021-06-24T13:47:00.000","Title":"Why google crawler doesn't get google search photos?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a Python script that creates a new user and configures it; I want this to be run anytime a user SSHs into the server but the username isn't a valid one. How could I do this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":28,"Q_Id":68138743,"Users Score":0,"Answer":"That is an incredibly bad idea. How would they learn what password you assigned? Consider how easy it would be to write a denial-of-service attack to log in as millions of unknown users. Hackers do that EVERY DAY to any public-facing server. It is a much better idea to have a website where people register for a new username.","Q_Score":0,"Tags":"python,ssh","A_Id":68139097,"CreationDate":"2021-06-26T02:34:00.000","Title":"How to run a script if a user SSHs into a non-existent user?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I used Python to make a Discord bot for \"auto-fishing\" (Tatsu's fishing game) by sending \"t!fish\".\nBut when it sent the message, Tatsu didn't respond. 
Is it possible to make a bot act like a user?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":149,"Q_Id":68142414,"Users Score":0,"Answer":"The task you want your bot to do is impossible because the Discord API implicitly makes sure bots don't react to commands sent by other bots.\nIf they could, you would be able to spam other bots.\nThe only way you can get around this is by giving your bot a user TOKEN instead of a bot TOKEN, but then the user TOKEN you use will get banned.","Q_Score":0,"Tags":"python,discord,discord.py","A_Id":68142630,"CreationDate":"2021-06-26T12:35:00.000","Title":"How to make a Discord bot activate other bots?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm a bit of a novice to all of this, but I had written a simple web scraper in Python a few months ago that interfaced to Chrome using Selenium and chromedriver (it used to work with v90). I'd run this script every couple of weeks or so to get new data, but when I went to run it today it wouldn't work. I got a message that said \"chrome not reachable\". I can see where the chromedriver window launches (it says, \"this window being controlled by automated software\"), but my script cannot communicate with that window. It will eventually time out and throw the \"chrome not reachable\" error.\nI thought that this might have to do with the latest Chrome updates, so I updated my chromedriver version, but the issue persists. 
Has anyone seen this recently and do you know a workaround?\nI'm using:\n\nPython v3.9.4\nSelenium v3.141.0\n\nAnd I've tried:\n\nChromeDriver v92.0.4515.43\nChromeDriver v91.0.4472.101\nChromeDriver v90.0.4430.24\n\nThanks for any insight!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":92,"Q_Id":68145014,"Users Score":0,"Answer":"Well, I didn't change anything; I didn't reboot, I didn't alter my code, I didn't re-download the chromedriver, but today I ran my script and it all works as normal. I don't know what happened earlier.\nThis is not a great answer, but I don't want others to waste time trying to solve a non-existing problem. Thanks all for your help and insight.","Q_Score":0,"Tags":"python,selenium,web-scraping,selenium-chromedriver","A_Id":68168494,"CreationDate":"2021-06-26T17:57:00.000","Title":"Python script using Selenium\/Chromedriver stopped working","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I can retweet a tweet by using the tweepy library. But what I want to do is to quote a tweet. 
I can\u2019t find anything about it.","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":209,"Q_Id":68155458,"Users Score":3,"Answer":"To post a quote Tweet, you include the link to the original Tweet in the body of the new Tweet you are posting.","Q_Score":2,"Tags":"python,twitter","A_Id":68175974,"CreationDate":"2021-06-27T21:34:00.000","Title":"Quote a tweet with Twitter API by using Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm having an issue with a project showing these socket errors like it's still trying to connect 127.0.0.1 - - [29\/Jun\/2021 10:12:12] \"GET \/socket.io\/?EIO=3&transport=polling&t=NfOK-g6 HTTP\/1.1\" 404 - when I try to start\/debug the project. Now this project doesn't use any sockets, but another project in Pycharm I was using earlier did use sockets. It's like theres something cached or something left over affecting other projects once I'm done with the project that does use sockets. I've cleared the cache and restarting the laptop seems to fix it, only I don't want to be constantly restarting my laptop to get rid of the errors. Anyone ever have to deal with\/fix this before? If so how did you? Thanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":72,"Q_Id":68181489,"Users Score":0,"Answer":"Turned out to be a chrome tab that still had the other project running I had separated from the other chrome tabs. That tab even being open was causing this issue. 
Closed the tab and fixed it.","Q_Score":1,"Tags":"python,sockets,socket.io,pycharm","A_Id":68364043,"CreationDate":"2021-06-29T15:16:00.000","Title":"Why is Pycharm showing socket.io errors when starting a project that doesn't use sockets?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have credentials ('aws access key', 'aws secret key', and a path) for a dataset stored on AWS S3. I can access the data by using CyberDuck or FileZilla Pro.\nI would like to automate the data fetch stage and using Python\/Anaconda, which comes with boto2, for this purpose.\nI do not have a \"bucket\" name, just a path in the form of \/folder1\/folder2\/folder3 and I could not find a way to access the data without a \"bucket name\" with the API.\nIs there a way to access S3 programatically without having a \"bucket name\", i.e. with a path instead?\nThanks","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":167,"Q_Id":68188131,"Users Score":1,"Answer":"s3 does not have a typical native directory\/folder structure, instead, it is defined with keys. If the URL starts with s3:\/\/dir_name\/folder_name\/file_name, it means dir_name is nothing but a bucket name. 
If you are not sure about bucket name but have s3 access parameters and path, then you can\n\nList all the s3_buckets available -\ns3 = boto3.client('s3')\nresponse = s3.list_buckets()\n\nUse s3.client.head_object() method recursively for each bucket with your path as key.","Q_Score":0,"Tags":"python,amazon-s3,path,boto,bucket","A_Id":70870783,"CreationDate":"2021-06-30T03:38:00.000","Title":"python boto - AWS S3 access without a bucket name","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"from pytube import YouTube\n\nurl = 'https:\/\/youtu.be\/7hvcJK5-OLI'\nmy_video = YouTube(url)\n\nprint(\"=================== Video Title ====================\")\nprint(my_video.title) # This prints the title of the youtube video\n\nprint(\"=================== Thumbnail Image ====================\")\nprint(my_video.thumbnail_url) # This prints the thumbnail link in terminal\n\nprint(\"=================== Download Video ====================\")\n\nchoise = input(\"Do you want to download mp3 or mp4: \")\n\nif choise.lower() == 'mp4':\n my_video = my_video.streams.get_highest_resolution() # We set the resolusion of the video\n\nelif choise.lower() == 'mp3':\n my_video = my_video.streams.get_audio_only() # To get only audio we set this\n\nelse:\n print(\"Invalid option! \")\n exit()\n\nmy_video.download()\n\nThis is my code. 
I am trying to make a simple YouTube downloader for a project, but it keeps throwing HTTP Error 404 even though my URL is correct","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":396,"Q_Id":68188960,"Users Score":0,"Answer":"You just need to update pytube to the latest version, 10.8.5.\nJust run pip install --upgrade pytube","Q_Score":0,"Tags":"python,pytube","A_Id":68189000,"CreationDate":"2021-06-30T05:36:00.000","Title":"Pytube error: urllib.error.HTTPError: HTTP Error 404: Not Found","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"We are a marketing agency that wants all the ad campaign data (Facebook Ads, Google Ads, My Target) to be displayed in a dashboard (Grafana + Prometheus). We were looking for plugins that can extract the data into Prometheus and then get it visualized in Grafana. Did anyone find any plugins\/exporters or any solution that will work with minimum coding?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":76,"Q_Id":68193702,"Users Score":0,"Answer":"There is no such plugin\/exporter. 
These kinds of platforms keep correcting their results for up to two weeks after the fact.\nThe best way to visualize this kind of information in Grafana is to use a good persistence layer and store the data in a database like Postgres or MySQL.\nSo you need to write your own ETL processes or use third-party services.","Q_Score":0,"Tags":"python,database,export,prometheus,grafana","A_Id":68193797,"CreationDate":"2021-06-30T11:13:00.000","Title":"Export data from advertising platforms (FB Ads, Google Ads, My Target) into Prometheus to visualize in Grafana","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Hello, I have a Selenium script in Python which extracts data after logging in on a webpage. It takes around 50 seconds to execute, and I want to deploy that script as an API, but the API is timing out.\nAlternatively, we could save that data to a Google Sheet using the script.\nCan anyone suggest how I can do this, or any relevant content?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":68203428,"Users Score":0,"Answer":"Could you provide us a screenshot of the API timeout, or logs? Showing the Python code with the requests will also be helpful (sorry for answering instead of commenting; I don't have enough reputation points)","Q_Score":0,"Tags":"python,selenium,deployment","A_Id":68207129,"CreationDate":"2021-07-01T02:07:00.000","Title":"Deploy Scraping Scripts in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on a page\/project that would allow our customers to check a webpage to see if their network is ready to plug in a cloud-based device. 
The page should query a specific HTTPS URL, check whether it returns anything other than HTTP 200, then go to the next URL. At the end of the test, it should give a summary of what isn't reachable. Ideally, it should be purely HTML-based, but I'm struggling to find ways to do this. I also need it to check the certificate status of a URL to make sure the network isn't behind a firewall running SSL inspection. Is this possible just by using HTML?\nI can easily do this in python\/requests and could just send them a script to run on their computer, but the goal of this project is to make this as seamless as possible. We just send them a link, and a non-IT person could check whether the network is ready, then report to us which test failed.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":16,"Q_Id":68232721,"Users Score":0,"Answer":"You can't do these things with pure HTML. You need to use JavaScript, and you should know that you can't do network-layer operations with just application-layer technologies. So the simple answer is no, you can't do it just by using HTML.","Q_Score":0,"Tags":"python,html,curl,get","A_Id":68235262,"CreationDate":"2021-07-03T02:11:00.000","Title":"HTML based network testing tool","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I need to run some Python code on the AWS platform periodically (probably once a day). The program's job is to connect to S3, download some files from a bucket, do some calculations, and upload the results back to S3. This program runs for about 1 hour, so I cannot make use of a Lambda function, as it has a maximum execution time of 900s (15 mins).\nI am considering using EC2 for this task. I am planning to set up the Python code to run at startup and execute it as soon as the EC2 instance is powered on. 
It also shuts down the instance once the task is complete. The periodic restart of this EC2 instance will be handled by a Lambda function.\nThough this is not the best approach, I want to know of any alternatives within the AWS platform (services other than EC2) that would be better for this job.\nSince","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":276,"Q_Id":68234790,"Users Score":2,"Answer":"If you are looking for solutions other than Lambda and EC2 (which, depending on the scenario, fits), you could use ECS (Fargate).\nIt's a great choice for microservices or small tasks. You build a Docker image with your code (Python, Node, etc.), tag it, and then push the image to AWS ECR. Then you build a cluster for it and use CloudWatch to schedule the task, or you can call a task directly using either the CLI or another AWS resource.\n\nYou don't have time limitations like Lambda\nYou also don\u2019t have to set up the instance, because your dependencies are managed by the Dockerfile\nAnd, if needed, you can take advantage of the EBS volume attached to ECS (20-30GB root) and increase from that, with the possibility of working with EFS for tasks as well.\n\nI could point to other solutions, but they are way too complex for the task that you are planning, and the goal is always to use the right service for the job.\nHopefully this helps!","Q_Score":1,"Tags":"python,amazon-web-services,amazon-ec2,aws-lambda,job-scheduling","A_Id":68308309,"CreationDate":"2021-07-03T09:08:00.000","Title":"Run python code on AWS service periodically","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have been scraping emails from a shared mailbox using imap_tools. 
The script checks the mailbox as frequently as possible, and I use msgs = client.fetch(AND(seen=False)) to check only unread emails.\nEven though I check frequently, sometimes emails are not scraped because another user has already opened the email.\nIs there another way of checking for new emails, e.g. using UIDs?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":75,"Q_Id":68247797,"Users Score":0,"Answer":"Emails have a Message-ID header, which is constant.","Q_Score":0,"Tags":"python,imap-tools","A_Id":68264227,"CreationDate":"2021-07-04T18:41:00.000","Title":"Reading New Email Using UIDs with imap_tools","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to run Python in VS Code and it had been working for a few hours, but it suddenly stopped running, and now whenever I run it I get this error:\nError: Session cannot generate requests\nat w.executeCodeCell\nI am connecting to a Garmin account where I pull the sleeping data and try to plot it on a graph, which worked, but now it has stopped working and gives me the above error.\nHow would I fix this?","AnswerCount":3,"Available Count":3,"Score":0.1325487884,"is_accepted":false,"ViewCount":14705,"Q_Id":68259660,"Users Score":2,"Answer":"You might get this error because your script has run exit(). 
Remove that and rerun, and you should be fine.","Q_Score":18,"Tags":"python,visual-studio-code,garmin","A_Id":69681493,"CreationDate":"2021-07-05T16:55:00.000","Title":"VSCode fails to run python with this error: Error: Session cannot generate requests","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to run Python in VS Code and it had been working for a few hours, but it suddenly stopped running, and now whenever I run it I get this error:\nError: Session cannot generate requests\nat w.executeCodeCell\nI am connecting to a Garmin account where I pull the sleeping data and try to plot it on a graph, which worked, but now it has stopped working and gives me the above error.\nHow would I fix this?","AnswerCount":3,"Available Count":3,"Score":0.1325487884,"is_accepted":false,"ViewCount":14705,"Q_Id":68259660,"Users Score":2,"Answer":"Try to restart the kernel.\nEvery time I have faced this issue myself, it was because the kernel was losing the connection and had to be restarted.","Q_Score":18,"Tags":"python,visual-studio-code,garmin","A_Id":69502541,"CreationDate":"2021-07-05T16:55:00.000","Title":"VSCode fails to run python with this error: Error: Session cannot generate requests","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to run Python in VS Code and it had been working for a few hours, but it suddenly stopped running, and now whenever I run it I get this error:\nError: Session cannot generate requests\nat w.executeCodeCell\nI am connecting to a Garmin account where I pull the sleeping data and try to plot it on a graph, which worked, but now it has stopped working and gives me the 
above error.\nHow would I fix this?","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":14705,"Q_Id":68259660,"Users Score":0,"Answer":"Exit VS Code and reopen it.\nIt worked for me!","Q_Score":18,"Tags":"python,visual-studio-code,garmin","A_Id":70515365,"CreationDate":"2021-07-05T16:55:00.000","Title":"VSCode fails to run python with this error: Error: Session cannot generate requests","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to fetch the swipe-up count shown in Instagram Insights. Facebook is not providing the swipe-up count through their Graph API, so how can I get that data?\nScraping won't work (I already tried), and I want to fetch this data in Python or JavaScript.\nThanks in advance for the help","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":116,"Q_Id":68270054,"Users Score":2,"Answer":"For now, Facebook is not providing this data in the Graph API; it is only provided to influencers in Insights, so for now it is not possible to fetch, but you can get it by web scraping.\nFacebook may provide this data in the next version of the Graph API.","Q_Score":2,"Tags":"python,django,angular,typescript,facebook-insights","A_Id":69811687,"CreationDate":"2021-07-06T11:51:00.000","Title":"How to fetch swipe up count from Instagram story insights graph API?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is there a possibility to get a specific file from a specific node, executed from a spark-submit?\nMy first approach was getting the list of every node in my cluster using spark-submit via a socket; that was the first part. Now, I want to connect 
directly to a specific node to get a specific file; this file is not an HDFS file, it is a local file on that remote node.\nI cannot use FTP because I do not have those credentials; they perform a direct connection.\ntextFile is not working; I would like to specify the node name and the path of the file.\nE.g.\n\ntextFile(remoteNodeConnectedToMyCluster:\/\/\/path\/file.txt)\n\nI hope I have been clear.\nThanks in advance.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":135,"Q_Id":68295324,"Users Score":0,"Answer":"There is no way to accomplish that, short of installing a server (e.g. FTP, HTTP) on the node to serve the file or running a script on the node to copy it to a distributed file system (e.g. HDFS).\nNote that a properly specified URL would have the form protocol:\/\/host\/path\/to\/file.txt.","Q_Score":0,"Tags":"python,scala,apache-spark,pyspark,spark-submit","A_Id":68301275,"CreationDate":"2021-07-08T03:40:00.000","Title":"Read a remote file from a specific node in my cluster using Spark Submit","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have found how to get the offset and lag of a specific topic and consumer group, but how do I get the IP address of the consumers in the consumer group?\nJava or Python SDKs are both OK.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":89,"Q_Id":68310613,"Users Score":0,"Answer":"OK... finally I found describe_consumer_groups in KafkaAdminClient","Q_Score":0,"Tags":"java,python,apache-kafka","A_Id":68311004,"CreationDate":"2021-07-09T02:35:00.000","Title":"How to get Kafka consumers' IP address through SDK?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and 
DevOps":0,"Web Development":0},{"Question":"I'm trying to make a custom 'Share' button with telebot. Is there any option to handle InlineKeyboardButton when the switch_inline_query parameter is set? I want to know in which chat\/user the message was sent.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":133,"Q_Id":68315616,"Users Score":0,"Answer":"I made the 'share' button via a deep link with a unique 'start' parameter value.","Q_Score":0,"Tags":"python,telegram,telegram-bot,py-telegram-bot-api","A_Id":68573573,"CreationDate":"2021-07-09T10:55:00.000","Title":"Telebot InlineKeyboardButton how to handle switch_inline_query?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm sending a simple 'post' request through the 'requests' module. It works fine when I execute it directly through the Linux terminal. 
However, when I set it up through the crontab, the log is indicating and error.\n\nIf I execute the below through the terminal, it works fine.\n\n\n'\/usr\/bin\/python3.6 \/location\/sa\/tb\/uc\/md\/se\/sea.py'\n\n\nIf I setup the crontab as follows, I get an error.\n\n\n\n\n\n\n\n\n\n\n\n\/usr\/bin\/python3.6 \/location\/sa\/tb\/uc\/md\/se\/sea.py >> ~\/Test_log.log 2>&1\n\n\n\n\n\n\n\n\n\n\n\nBelow is the error message:\n\n\nFile\n\"\/usr\/local\/lib\/python3.6\/site-packages\/urllib3\/connectionpool.py\",\nline 600, in urlopen\nchunked=chunked) File \"\/usr\/local\/lib\/python3.6\/site-packages\/urllib3\/connectionpool.py\",\nline 343, in _make_request\nself._validate_conn(conn) File \"\/usr\/local\/lib\/python3.6\/site-packages\/urllib3\/connectionpool.py\",\nline 839, in validate_conn\nconn.connect() File \"\/usr\/local\/lib\/python3.6\/site-packages\/urllib3\/connection.py\", line\n344, in connect\nssl_context=context) File \"\/usr\/local\/lib\/python3.6\/site-packages\/urllib3\/util\/ssl.py\", line\n345, in ssl_wrap_socket\nreturn context.wrap_socket(sock, server_hostname=server_hostname) File \"\/usr\/lib64\/python3.6\/ssl.py\", line 365, in wrap_socket\n_context=self, _session=session) File \"\/usr\/lib64\/python3.6\/ssl.py\", line 776, in init\nself.do_handshake() File \"\/usr\/lib64\/python3.6\/ssl.py\", line 1036, in do_handshake\nself._sslobj.do_handshake() File \"\/usr\/lib64\/python3.6\/ssl.py\", line 648, in do_handshake\nself._sslobj.do_handshake() ConnectionResetError: [Errno 104] Connection reset by peer\n\nWhat did I try?\n\nTried adding absolute path inside the script.\n\nAdded a proxy to the headers, but no go.\n\n\nAny help would be appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":185,"Q_Id":68316997,"Users Score":0,"Answer":"Some servers don't start re-listen immediately (check_mk flag), while calling multiple requests from a single connection. 
One of the reasons is to avoid DoS attacks and to preserve service availability for all users.\nSince your crontab makes your script call the same API multiple times using a single connection, I'd suggest adding a short delay before making a request, e.g. add time.sleep(0.01) just before calling the API.","Q_Score":0,"Tags":"python,python-3.x,cron","A_Id":68354804,"CreationDate":"2021-07-09T12:42:00.000","Title":"ConnectionResetError while running through cron","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am able to access S3 via boto3 on an EC2 machine, but when I connect via PyCharm to the remote interpreter on that exact machine, I get Access Denied.\nI do not think it is the remote interpreter - when I connect to the host via the PyCharm terminal, I still get Access Denied. So it looks like there is some PyCharm-related issue that I am unable to identify.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":92,"Q_Id":68333881,"Users Score":0,"Answer":"A note for the next person looking at this: PyCharm connects to the interpreter via sftp, which does not source .profile. 
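The suggested pause can be folded into a small retry helper. A stdlib-only sketch (the `flaky()` stub below is hypothetical and stands in for the real API call):

```python
import time

def call_with_retry(fn, attempts=3, delay=0.5):
    """Call fn(), retrying after a short pause when the server
    resets the connection (the behaviour described above)."""
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionResetError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)  # give the server time to re-listen

# Stub demonstrating the behaviour: fails twice, then succeeds.
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionResetError(104, "Connection reset by peer")
    return "ok"

print(call_with_retry(flaky, attempts=5, delay=0.01))  # ok
```

In a cron job, a delay like this is cheap; the main cost is that a persistently broken connection only surfaces after all attempts are exhausted.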
so best to configure ~\/.aws\/credentials","Q_Score":1,"Tags":"python,amazon-web-services,pycharm,boto3","A_Id":68334494,"CreationDate":"2021-07-11T06:50:00.000","Title":"aws boto3 access denied when using remote interpreter in pycharm","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"When I create a local host from Nodejs or Python it starts\nlocal host:8000 or 3000\ni don't want 8000 or any number.\nHow can I get only local host?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":76,"Q_Id":68335092,"Users Score":0,"Answer":"Port Number Description: the number you see is the port number. it distinguishes between different applications in a single host (your computer for your case). for example let's say you have a website and a file server running on your computer. when you type localhost in your browser, it reaches your computer ip address but how does it know which application to request for data? webserver or file server? this is where the port number comes handy. 
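For reference, the credentials file suggested above has this shape; the key values below are placeholders, and region settings normally live in `~/.aws/config` rather than here:

```ini
; ~/.aws/credentials — read by boto3 regardless of shell profile,
; so it also works when PyCharm connects over sftp.
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

Because boto3 reads this file directly, it sidesteps the missing-environment problem entirely.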
When the webserver is running on port 9000, for example, and you type localhost:9000, you are telling your browser to go to your computer (localhost) and ask the webserver application (port 9000) for data.\nIf you type localhost without any port number, it connects to port 80 of the host by default (by convention), so if you run your application on port 80, you get what you wanted.\nExtra: there is also port 443, which is the default port for HTTPS requests, but I don't think you are using SSL right now.","Q_Score":0,"Tags":"python,node.js","A_Id":68335330,"CreationDate":"2021-07-11T09:48:00.000","Title":"Create local host without any number","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I create a local host from Nodejs or Python it starts\nlocal host:8000 or 3000\ni don't want 8000 or any number.\nHow can I get only local host?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":76,"Q_Id":68335092,"Users Score":1,"Answer":"This number is called the port number. Every URL, for example localhost, also has port 80 by default, but you don't need to type it, as an HTTP request connects to port 80 by default and HTTPS connects to port 443.\nTo achieve this, you can choose one of the following two ways.\n\nRun the Node.js\/Python app on port 80 or port 443 (if you have SSL certificates). 
This way you can access it via localhost without adding a port number to the URL.\n\nInstall a web server like Apache or Nginx and use its reverse proxy feature to achieve it.\n\n\nNOTE\n\nPoint 1: This is the easy way and does not need the knowledge required for point 2.\n\nPoint 2: You must have knowledge of web servers and the reverse proxy feature.","Q_Score":0,"Tags":"python,node.js","A_Id":68336495,"CreationDate":"2021-07-11T09:48:00.000","Title":"Create local host without any number","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any app which can help me access my android device remotely using some API or some Python package? I want to ring the phone, get phone battery information and send messages. Is this possible?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":228,"Q_Id":68341563,"Users Score":0,"Answer":"AirDroid allows you remote control with a browser.","Q_Score":0,"Tags":"python,android,api","A_Id":68341597,"CreationDate":"2021-07-12T03:22:00.000","Title":"Remotely Access your android device using python or api","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to list all the EC2 instances with their IAM roles attached using boto3 in python3. But I don't find any method to get the IAM role attached to an existing EC2 instance. Is there any method in boto3 to do that?\nWhen I describe an instance, it has a key named IamInstanceProfile, which contains the instance profile id and ARN of the IAM instance profile. I can't find the name of the IAM instance profile or any other info about IAM roles attached to it. 
I tried to use instance profile id to describe instance profile, But it seems to describe an instance profile, we need name of instance profile (not the id).\nCan someone help on this ? I might be missing something.\nThanks","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1002,"Q_Id":68347014,"Users Score":1,"Answer":"When we describe EC2 instance, We get IamInstanceProfile key which has Arn and id.\nArn has IamInstanceProfile name attached to it.\nArn': 'arn:aws:iam::1234567890:instance-profile\/instanceprofileOrRolename'\nThis name can be used for further operation like get role description or listing policies attached to role.\nThanks","Q_Score":1,"Tags":"python-3.x,amazon-web-services,amazon-ec2,boto3","A_Id":68347354,"CreationDate":"2021-07-12T11:59:00.000","Title":"is there a way to get name of IAM role attached to an EC2 instance with boto3?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to generate the signed url for the Google cloud storage object without expiration time. But when I am creating the signed url with V4 signing process, it is getting expired after seven days.\nIs there any alternative to achieve this?\nAlso, what was the expiration time of V2 signing process?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1088,"Q_Id":68379400,"Users Score":0,"Answer":"The intent with Signed URLs is to provide (time-)limited access to a Cloud Storage URL.\nTake away the time limitation and you may wish to consider just making the URL public.\nSigned URLs are accessible to anything that has the URL. So, if you're concerned with discovery of guessable URLs (e.g. 
my-bucket\/my-object-path) then you could consider obfuscating the object name, perhaps using base64-encoding to make the URL less-guessable although easily derivable:\nmy-bucket\/my-object-path --> my-bucket\/bXktb2JqZWN0LXBhdGg=","Q_Score":0,"Tags":"python,google-cloud-platform","A_Id":68381358,"CreationDate":"2021-07-14T13:52:00.000","Title":"Create the generated signed url without expiration time for Google cloud storage object","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is there a simple method or library to allow a websocket to drop certain messages if bandwidth doesn't allow? Or any one of the following?\n\nto measure the queue size of outgoing messages that haven't yet reached a particular client\nto measure the approximate bitrate that a client has been receiving recent messages at\nto measure the time that a particular write_message finished being transmitted to the client\n\nI'm using Tornado on the server side (tornado.websocket.WebSocketHandler) and vanilla JS on the client side. In my use case it's really only important that the server realize that a client is slow and throttle its messages (or use lossier compression) when it realizes that condition.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":84,"Q_Id":68384400,"Users Score":0,"Answer":"You can implement this on top of what you have by having the client confirm every message it gets and then use that information on the server to adapt the sending of messages to each client.\nThis is the only way you will know which outgoing messages haven't yet reached the client, be able to approximate bitrate or figure out the time it took for the message to reach the client. 
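The base64 obfuscation suggested above is one line of stdlib code; a small sketch using the object path from the answer:

```python
import base64

def obfuscate(path: str) -> str:
    # base64-encode the object path so the URL is less guessable
    # (trivially reversible — this is obfuscation, not security).
    return base64.b64encode(path.encode()).decode()

def deobfuscate(token: str) -> str:
    return base64.b64decode(token.encode()).decode()

print(obfuscate("my-object-path"))  # bXktb2JqZWN0LXBhdGg=
```

Note that standard base64 can emit `+` and `/`, which are awkward in URLs; `base64.urlsafe_b64encode` avoids that.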
You must consider that the message back to the server will also take time and that if you use timestamps on the client, they will likely not match your servers as clients have their time set incorrectly more often than not.","Q_Score":0,"Tags":"python,websocket,tornado","A_Id":68883748,"CreationDate":"2021-07-14T20:01:00.000","Title":"Allowing message dropping in websockets","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Built a slash command using python which has the following output\nComment : XXXXXX\nudpatedby: XXXXX\nThe comment is posted correctly, but i want to format it,\nI want Comment to be bold currently and response gets those details\ndata = {\"response_type\":\"in_channel\", \"text\":response}\nwondering if there is a way to format the text output in slack","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":61,"Q_Id":68387230,"Users Score":0,"Answer":"The entire response can be formatted by the following example: {\"response_type\":\"in_channel\", \"text\":\"\"+new_response+\"\"}\nwill return the values in a coded format","Q_Score":0,"Tags":"python-3.x,slack-commands","A_Id":68837957,"CreationDate":"2021-07-15T02:39:00.000","Title":"Slack Slash command Text formatting","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I suspect it may be rather kid question \u2013 but anyway.\nHow to open another Telegram chat or group or channel using pyTelegramBotAPI? I want to forward the user (not message, the user himself) to another channel if he clicks certain button.\nI saw content type migrate_to_chat_id in Message class declaration. Should I use it? 
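The Slack answer above can be made concrete: Slack's mrkdwn markup renders `*text*` as bold, so a slash-command payload that bolds the field labels might look like this (a sketch; the field values are the placeholders from the question):

```python
import json

comment = "XXXXXX"
updated_by = "XXXXX"

# Slack mrkdwn: *bold*, _italic_, `inline code`, ```code block```.
text = f"*Comment:* {comment}\n*Updated by:* {updated_by}"
data = {"response_type": "in_channel", "text": text}

print(json.dumps(data))
```

The `text` field of a slash-command response is treated as mrkdwn by default, so no extra flags are needed for simple bold/italic formatting.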
If so, how do I get the id of the channel I need? It won't send any message to my bot.\nI would rather use a \"t.me\/...\" url.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":116,"Q_Id":68395653,"Users Score":0,"Answer":"Partly solved.\nSpeaking about the buttons, it is indeed easy. You just use the named parameter url= in the InlineKeyboardButton() method.\nFor other cases — where you need to open another channel from a function depending on several conditions, for instance — I still don't know. Import requests and make a GET request? I suspect that something for it should already be in pyTelegramBotAPI, but searching in the lib files wasn't successful.","Q_Score":0,"Tags":"python,py-telegram-bot-api","A_Id":68426320,"CreationDate":"2021-07-15T14:23:00.000","Title":"Open another Telegram chat\/group\/channel using Bot","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How to make Selenium open a default folder when asked to choose a file?\nI'm trying to upload a file to a website using Selenium, but when I click to upload the file, the dialog always opens in my home directory.\nI want it to always open in a certain folder and choose the name of the file","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":45,"Q_Id":68400581,"Users Score":0,"Answer":"Get the id of the input field\nHTML code on the webpage:\n\nLike:\nupload = self.driver.find_element_by_id(\"fl_arquivoFormFile\")\nAnd:\nupload.send_keys(path_to_file) #\/home\/viper\/Documents\/file","Q_Score":0,"Tags":"python,selenium,selenium-webdriver","A_Id":68413102,"CreationDate":"2021-07-15T20:46:00.000","Title":"How to make Selenium open a default folder when asked to choose a file?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and 
APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am developing a python script that downloads some excel files from a web service. These two files are combined with another one stored in my computer locally to produce the final file. This final file is loaded to some database and PowerBI dashboard to finally visualize data.\nMy question is: How can I schedule this to run it daily if my computer is turned off? As I said, two files are web scraped (so no problem to schedule) but one file is stored locally.\nOne solution that comes to my mind: Store the local file in Google Drive\/OneDrive and download it with the API so my script is not dependent of my computer. But if this was the case, how can I schedule that? What service would you use? Heroku,...?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":196,"Q_Id":68459109,"Users Score":0,"Answer":"I am running the schedule package for exactly something like that.\nIt\u2019s easy to setup and works very well.","Q_Score":0,"Tags":"python,scheduled-tasks,data-pipeline","A_Id":68459239,"CreationDate":"2021-07-20T17:30:00.000","Title":"How can I schedule python script in the cloud?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was wondering if its possible to forward a call from our Twilio IVR to an outside number, then disconnect the Twilio side.\nThe reason for this is we are using a conferencing system to live stream an event. We do not want to incur Twilio charges from all the forwards staying the Twilio system.\nAt this point we have ruled out using a direct number for the live stream. 
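The schedule package mentioned above still needs a machine that stays on (a small always-on VM or a hosted scheduler). As an illustration of the repeating-job pattern it implements, here is a stdlib-only sketch using `sched`, compressed to fractions of a second so it terminates quickly:

```python
import sched
import time

def repeat(job, every_seconds, times):
    """Run `job` every `every_seconds`, `times` times — a stdlib
    (sched) stand-in for the third-party `schedule` package."""
    s = sched.scheduler(time.monotonic, time.sleep)

    def step(remaining):
        job()
        if remaining > 1:
            s.enter(every_seconds, 1, step, (remaining - 1,))

    s.enter(every_seconds, 1, step, (times,))
    s.run()  # blocks until everything scheduled has run

runs = []
repeat(lambda: runs.append(time.monotonic()), 0.01, 3)
print(len(runs))  # 3
```

For a real daily pipeline, the same structure applies with `every_seconds=86400`, though a host-level scheduler (cron, systemd timers, or a managed cloud scheduler) is usually more robust than a long-running Python process.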
We wanted it to route through our Main 1855 IVR number.\nHere is Twilio's response \"when sending the call out to another party, this will involve another leg to the call. If the call was to stay within Twilio and not head out to another party, this would be avoided.\"\nWe are looking for any other solution that could work for us.\nBTW we estimate over 1500 concurrent callers to the conference line.\nThank you","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":55,"Q_Id":68473977,"Users Score":1,"Answer":"It isn\u2019t possible. Twilio will remain in the call path. That is how CPaaS platforms work. Two call legs, and you independently control both via your application logic.","Q_Score":0,"Tags":"python,twilio,twilio-api,twilio-twiml,ivr","A_Id":68477996,"CreationDate":"2021-07-21T17:21:00.000","Title":"Twilio Forward calls to outside number, Then disconnect Twilio side","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a way to avoid loading certain elements in selenium? For example, as defined by an XPATH expression?\nMy goal is to avoid loading CAPTCHAs, which take an enormous amount of time to load, but which I do not need to solve or bypass. 
The goal is not to hide the element, but to avoid the network latency associated with loading the CAPTCHA, which is 10 times the page itself.\nI'm happy using selenium-wire to intercept requests if that is the necessary solution.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":52,"Q_Id":68475092,"Users Score":0,"Answer":"AFAIK this is not supported by Selenium.\nActually you are asking: how to prevent a web page from loading some \/ several web elements on it with Selenium.\nSelenium is actually used to mimic real user's actions performed via the GUI.\nSo user can click element, grab it's text, scroll the page etc. But regular user is not able to avoid web page from loading some elements on the page like CAPTCHA.\nEspecially the issue is with CAPTCHA. It is developed against automated tools like Selenium. So you can not bypass it and not prevent loading it with Selenium.","Q_Score":0,"Tags":"python,selenium","A_Id":68476721,"CreationDate":"2021-07-21T18:56:00.000","Title":"Avoid fetching certain element in Selenium - like a custom ad blocker","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"As above. 
Encountered these two issues.\nAn element could not be located on the page using the given search parameters.\nThe element does not exist in DOM anymore","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":287,"Q_Id":68483968,"Users Score":1,"Answer":"The first error\n\nAn element could not be located on the page using the given search parameters\n\nIndicates that Selenium could not find element matching the given locator on the page.\nWhile the second error\n\nThe element does not exist in DOM anymore\n\nIndicates that the element was on the page, but no more existing there.\nThis is actually a Selenium Stale Element error.\nStale means old, decayed, no longer fresh. Stale Element means an old element or no longer available element. Assume there is an element that is found on a web page referenced as a WebElement in WebDriver. If the DOM changes then the WebElement goes stale. If we try to interact with an element which is staled then the StaleElementReferenceException is thrown.","Q_Score":1,"Tags":"selenium,appium,python-appium,staleelementreferenceexception","A_Id":68484057,"CreationDate":"2021-07-22T11:24:00.000","Title":"(Appium-Python) Difference of \"Element could not be located on the page using the given search parameters\" & \"Element does not exist in DOM anymore\"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"When you do a request in python, you simply download the page and your connection is over.\nHowever, if you open it in your browser, for some websites the page content will automatically refresh. 
For example the stock prices on yahoo finance, or notifications on reddit.\nIs it possible to replicate this behaviour in python: automatic refresh without having to constantly manually re-download the same page entirely?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":73,"Q_Id":68500482,"Users Score":0,"Answer":"The results will be the same if you re-download the page. Don't make it harder than it has to be. If you are hell-bent, you'll need to use something like puppeteer, phantomjs, or selenium.","Q_Score":0,"Tags":"python","A_Id":68500538,"CreationDate":"2021-07-23T14:04:00.000","Title":"Open a web page and let it refresh automatically in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to send an email, which is done with HTML and CSS, with Selenium, it appears that it can't get the page itself, only the text, or the code, so is there a way to copy the page.\nTried:\n\nSending Keys, (Keys.CONTROL + 'A').... 
(Keys.CONTROL + 'C') and assigning it to a var but didn't get what I want.\n\nConstructing a field in the beginning and manually copy the content and paste it in the field, then getting its value, but same thing, it only gets the text.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":167,"Q_Id":68520449,"Users Score":0,"Answer":"I found it,\nthe Sending Keys,\n(Keys.CONTROL + 'A')\n(Keys.CONTROL + 'C') worked but I just needed to send (Keys.CONTROL + 'V') Not assigning it to a var","Q_Score":0,"Tags":"python,selenium","A_Id":68520691,"CreationDate":"2021-07-25T16:17:00.000","Title":"How to copy HTML content with Selenium","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to change the outgoing smtp ip address, i succeeded to change ip address using source_address=(host,port)\nexample : smtpserver = smtplib.SMTP(\"smtp.gmail.com\", 587,source_address=('185.193.157.60',12323)\nBut i can't find how to add username and password of the proxy ( if the proxy requires username and password )\nI tried : smtpserver = smtplib.SMTP(\"smtp.gmail.com\", 587,source_address=('185.193.157.60',12323, 'username', 'password')\nBut it didn't work","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":68521874,"Users Score":0,"Answer":"From the docs...\nSMTP.login(user, password, *, initial_response_ok=True)","Q_Score":0,"Tags":"python,smtp","A_Id":68521903,"CreationDate":"2021-07-25T19:26:00.000","Title":"SMTP Outgoing IP ( source ip )","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm creating a script that downloads and shifts pdfs into different 
specific directories based on a search. I have code that generates the folder and subfolders recursively, I simply need to be able to download the pdfs into that file. I'm wondering how I can dynamically change the download location before I download each file in Selenium without having to start a new driver session. I could use os commands to move the files, but their names are a convoluted mess so having them go directly into the specified folder is preferable. Thank you!","AnswerCount":2,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":54,"Q_Id":68537232,"Users Score":-2,"Answer":"try this\nit will\npipq10\n'''''()'''''\n()qlqst\n(0)piyQ","Q_Score":0,"Tags":"python,selenium,directory,download","A_Id":68543001,"CreationDate":"2021-07-26T23:12:00.000","Title":"Selenium Chrome Changing Directory","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to get the 'id' of LinkedIn profiles using Python.\nBy ID, I mean from https:\/\/www.linkedin.com\/in\/adigup21\/, it should get adigup21.\nI am using this trick ID = (link.lstrip(\"https:\/\/www.linkedin.com\/in\/\").rstrip('\/'))\nBut for some cases, it misses out on characters or is blank (I always make sure the format is same and good)\nIs there any accurate alternative present for this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":41,"Q_Id":68540749,"Users Score":2,"Answer":"link.rstrip('\/').split('\/').pop()\nrstrip removes the (optional) final slash, split makes an array out of the slash-separated parts, pop extracts the last element.\nBTW, this is just a hack. 
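The rstrip/split/pop recipe above can be made runnable with proper URL parsing from the stdlib, along with a demonstration of why lstrip misbehaves (the second profile URL is a hypothetical example):

```python
from urllib.parse import urlparse

def linkedin_id(link: str) -> str:
    # Parse the URL properly, then take the last path segment.
    path = urlparse(link).path        # e.g. "/in/adigup21/"
    return path.rstrip("/").split("/")[-1]

print(linkedin_id("https://www.linkedin.com/in/adigup21/"))  # adigup21

# The lstrip() approach fails because lstrip treats its argument as a
# *set of characters* to strip, not a prefix — so profiles beginning
# with letters found in "https://www.linkedin.com/in/" lose characters:
print("https://www.linkedin.com/in/nina-doe/"
      .lstrip("https://www.linkedin.com/in/").rstrip("/"))
```

This is why the answer calls the lstrip version a hack: it only works when the profile id happens not to start with any character from the prefix.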
Manipulating URL elements is best done with URL parsing, along the lines of\npth=urllib.parse.urlparse(link).path\nOne can then do the rstrip\/split\/pop thing on pth.","Q_Score":0,"Tags":"python,string","A_Id":68540838,"CreationDate":"2021-07-27T07:38:00.000","Title":"Unable to get data from string in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am creating an API in PHP which passes some parameters to Python and generates a report in PDF format.\nIn the report_gen.py file I have code to get the parameters and generate the PDF, which works successfully through the command line, but when I run the PHP file from the browser or Postman it does not work.\nI want to run my PHP file through a cronjob so it is automatically called at a specific time.\n$data = 'Basic Will Johnyy Willson Smith';\n$command = escapeshellcmd(\"python3 report_gen.py $data\");\n$output = shell_exec($command);\necho $output;","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":167,"Q_Id":68543088,"Users Score":0,"Answer":"There can be (at least) two problems.\nFirst you need to check whether the server may use another environment. I believe that you did not use the webserver user for your local checks, so I would try to use absolute paths. You also have to make sure that the Python environment (whatever you have configured) is the same as your local user's.\nSecond, you need to make sure that you have all rights to run the file (the user that the webserver runs as needs at least read rights on report_gen.py).","Q_Score":0,"Tags":"python,php,shell-exec","A_Id":68543527,"CreationDate":"2021-07-27T10:17:00.000","Title":"By running python script with php - working in console but not in web browser","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am currently creating a Discord bot in Java and decided to write a script in Python as it had libraries that proved very useful for web scraping. By using Jython, I was able to run the script. The bot successfully came online; however, it resulted in the following:\nImportError: No module named praw\nWhen I run the Python script by itself in another IDE, it works perfectly. I was wondering if I have to add praw as a dependency in Gradle? Suggestions are appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":59,"Q_Id":68547463,"Users Score":0,"Answer":"If you're using Python 3 you need to use pip3 when installing packages\n$ pip3 install praw","Q_Score":0,"Tags":"java,python,intellij-idea,praw","A_Id":68547661,"CreationDate":"2021-07-27T15:11:00.000","Title":"Using Python Libraries in Java (IntelliJ)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a Python script that goes into my Gmail inbox and looks for certain emails based on the subject line. 
I would like to automate this process; however, my credentials seem to expire on a weekly basis. Whenever they expire and I run my script, it opens up my browser and prompts me to authorize my app. Is there any way this can be bypassed so that I can automate my script and not have to constantly authenticate my app through the browser?\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":352,"Q_Id":68552628,"Users Score":0,"Answer":"If your app is in the testing phase the refresh token will expire after a week. If you want your refresh token to last longer you will need to set your application to production and go through the verification process.\nIf this is a Google Workspace email account, you should consider using a service account for authorization and setting up domain-wide delegation.","Q_Score":0,"Tags":"python,google-api,google-oauth,google-api-client","A_Id":68562159,"CreationDate":"2021-07-27T23:04:00.000","Title":"How to bypass the Google API Website authentication flow","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm pretty sure this question has been asked before multiple times; however, the solutions are normally about using npm, which afaik isn't applicable to Python scripts. So the problem is I hit the package size limit when trying to upload a package that contains a Chromium binary, which by itself exceeds the limit, let alone other libraries and the code itself. If I understood correctly, Lambda layers won't help either as a singular file's size is already more than the allowed limit. 
Is there a workaround to such an issue?\nNote: the package contains the Selenium library, ChromeDriver and an unpacked Linux Chromium version","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":206,"Q_Id":68558255,"Users Score":0,"Answer":"I ran into the same issue you did when trying to create a Chromium layer. I found out that you can load a zip file that exceeds the size limit into a layer if you use S3 as the source, so I used that method, which did work. By all means use Docker if you are familiar with it, but after struggling with the initial issue I chose the easier S3 option.\nEdit: removed reference to the console since most here should be comfortable using the CLI.","Q_Score":0,"Tags":"python,amazon-web-services,selenium,aws-lambda,chromium","A_Id":68615621,"CreationDate":"2021-07-28T09:59:00.000","Title":"Is there a way to bypass AWS Lambda package size limit?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to convert xml into JSON but I get this error\n\nxml.parsers.expat.ExpatError: reference to invalid character number: line 84, column 19\n\nbecause that node has \nThere are multiple nodes like this.\nI am using the xmltodict library to do this.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":118,"Q_Id":68572263,"Users Score":2,"Answer":"Repairing broken XML (or any other broken files, e.g. Excel files or PDF files) is always best done by fixing the software that produced the broken data in the first place. 
Anyone generating XML is doing so for a reason and should be prepared to fix the bugs in their code; and if they aren't prepared to fix bugs in their code, you should ask yourself whether it's a good idea to continue depending on them as a supplier.\nIf you do have to attempt a repair yourself, the first thing to remember is that the data is not XML, so XML tools are no use to you; you need to get in at a lower level (sometimes even the binary level).\nSometimes a simple regular expression replace will do the job: here, for example, you could try replacing  by at which point you have well-formed XML and can start using XML tools to process it. But to do this you always need a good knowledge of the exact nature of the corruptions in your data. For example, this particular replacement won't work if the bad data is in an attribute.","Q_Score":0,"Tags":"python,xml","A_Id":68587592,"CreationDate":"2021-07-29T08:01:00.000","Title":"How to remove invalid character from xml in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I send mail with a header like this message['Reply-To'] = '' (Python), it works fine on localhost. When I click Reply in Outlook on that received mail, the To field is empty. When I send the same mail from production via the company SMTP server, the mail also contains an empty Reply-To header; however, if I click Reply in Outlook, the address from which the mail was received is prefilled in the To field.\nIs there a bug in the company SMTP server, or why does it work only on localhost?\nThank you.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":164,"Q_Id":68596347,"Users Score":1,"Answer":"If Reply-To is empty, Outlook would default to the sender address. 
IMHO that is how it is supposed to work.","Q_Score":0,"Tags":"python,outlook,smtplib","A_Id":68596545,"CreationDate":"2021-07-30T19:42:00.000","Title":"Is it possible to force empty Reply-To header in mail?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am preparing automation for Google Cloud. I am using native Python modules. All my code is stored on a GIT repository. I am using PyCharm, and I added file source in PyCharm in order to use GIT stored files (Settings\/Project\/Project Structure). I added GIT files as sources root. Once I run my code I am still receiving error message like this: ImportError: cannot import name 'resource_manager' from 'google.cloud' (unknown location). On my laptop I have installed required modules for Google automation: google-api-python-client, oauth2client, google-cloud-resource-manager. Rest of the modules work fine, I am able to import custom modules. I have installed Python 3.9, pip v. 21.2.2, google-cloud-resource-manager 1.0.2","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":212,"Q_Id":68631430,"Users Score":1,"Answer":"Try using the following version google-cloud-resource-manager==0.30.0","Q_Score":1,"Tags":"python,google-cloud-platform,google-api-python-client","A_Id":68832043,"CreationDate":"2021-08-03T06:40:00.000","Title":"Google Cloud - resource_manager Python module issue","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using the pyTelegrambotAPI to build a bot to carry out transactions. 
In the course of carrying out the transaction, I ask the user for their password to authenticate them but I don\u2019t want the password message to show in the chat. This is why I want to hide the password message or, if possible, hash out the password.\nI\u2019ve searched everywhere and checked the documentation but I can\u2019t seem to find anything.\nI\u2019ll appreciate your solutions. Thanks.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":310,"Q_Id":68632874,"Users Score":1,"Answer":"Hello! Unfortunately, there is no way to modify the message of the user who messages you. What you could do, for example, is save the message just sent to a database, and then proceed with deleting the message containing the password. Considering the fact that I can't see the bot source code, I can't help you more than that!\nHave a nice day and happy coding!","Q_Score":0,"Tags":"python,telegram-bot,password-hash,py-telegram-bot-api","A_Id":68655903,"CreationDate":"2021-08-03T08:41:00.000","Title":"How can I hide a message sent from a user to my telegram bot","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to download sftp files from server A and save them on server B using Python. Both servers are Linux machines. I tried sftp.get(), but that only works between the server and the local machine. So far I have not seen any solution online. Is it possible to move the files between two servers? Please help","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":52,"Q_Id":68637176,"Users Score":0,"Answer":"I'm not a Python developer, but I know how sftp works, and I don't think you can do what you are trying to do. 
The best way I can think of is to do an sftp.get() to download the file from Server A to your local machine, and then an sftp.put() to upload it to server B","Q_Score":0,"Tags":"python-3.x,linux,sftp,remote-server","A_Id":68637286,"CreationDate":"2021-08-03T13:42:00.000","Title":"Transferring SFTP files from one remote server(linux) to another remote server(linux) using python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am fairly new to using sockets, and this will probably have a simple answer that I am overlooking, but an hour of agonizing has not yielded results, so... what the heck.\nHow do I receive for .sendall() in the python socket module? By this I mean how do I receive data from a socket without a buffer? Is there a simple solution for this, like some sort of conn.recvall() function, or do I have to write out the logic to do this? If I do have to write logic for it, then how should I do it? Should I just keep using .recv() with some arbitrary buffer size, or do I have to split the inputs into segments before sending? Which is more efficient, or better? Is there a smarter way to go about it?\nThanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":71,"Q_Id":68688695,"Users Score":0,"Answer":"send and sendall will chop your buffer into pieces for sending over the network. It's important to remember that TCP is a streaming protocol, not a packet protocol. If you send 1,024 bytes, it might be received by the other end as 1,024 bytes, or as one of 256 and one of 768, or one of 1,000 and one of 24. The receiver needs to know when the transmission is complete. Sometimes it's a fixed-size buffer, sometimes you'll send a byte count first, sometimes you use a special termination character, sometimes you wait for a timeout. 
The receiver just needs to keep calling .recv until it knows it's done.\nSome of the higher-level Python packages (like twisted (which I recommend)) can handle that for you.","Q_Score":0,"Tags":"python,sockets,python-sockets","A_Id":68688718,"CreationDate":"2021-08-07T01:01:00.000","Title":"How to receive for sendall in python socket module","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I'm working in Linux and I need to install parse for python3 but always get the same error: ImportError: No module named parse. I tried it:\n\nfrom urllib.parse import urlparse\nfrom parser import *\ntry:\nfrom urllib.parse import urlparse\nexcept ImportError:\nfrom urlparse import urlparse (but as I know it's only for python2; I work on python3).\nAlso tried to do pip install parse but it had no result. Before that I had the error \u201cNameError: global name 'parse' is not defined\u201d.\nPlease can you help me, what should I do? I found that some people have the same problem but their resolutions don't help me","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":90,"Q_Id":68692943,"Users Score":1,"Answer":"urllib is in the standard library, no need to install it. It works ok for me in python 3.x. Probably you have named your script (the .py file you are running) urllib. This is a common mistake; rename it to something else and then it works.\nIt could happen even if you just have a python file named urllib in your directory... because when you run your script, python will automatically add its directory to sys.path (where python searches for modules\/packages). 
So it gets reached sooner than the original urllib, which is in the standard library.\nSearch for that file in your directory and delete it.","Q_Score":0,"Tags":"python-3.x","A_Id":68692985,"CreationDate":"2021-08-07T13:48:00.000","Title":"How can I install parse for python3 if I get importError?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I read shortened urls with python?\nI have a list of shortened urls and I would like to get the real url or at least read the content of the page\n{'urlshortened': 3, 'urlshortened': 3, 'urlshortened': 3,}","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":33,"Q_Id":68705386,"Users Score":1,"Answer":"You can use the requests library and then use bs4 to get the page content; you should learn more about web scraping","Q_Score":0,"Tags":"python,python-3.x,list,nltk,url-shortener","A_Id":68705421,"CreationDate":"2021-08-08T23:16:00.000","Title":"How can I read shortened urls with python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a cookie I got from a website. 
How do I log in with that cookie using requests?\ncookie=\"sb=Vma2X7D6JF_aBy6ESWdwm-OL; datr=Vma2X2YjSxJ-JzCD368WGfmL; locale=vi_VN; wd=1366x657; c_user=100029745455196;\"\nHow can I log in with requests?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":28,"Q_Id":68706287,"Users Score":0,"Answer":"No, you can't log in with just a cookie; you need to log in with credentials: username & password.\nIf you have logged in with credentials, you MAY use that cookie to bypass the login step.","Q_Score":0,"Tags":"cookies,python-requests","A_Id":68711448,"CreationDate":"2021-08-09T02:45:00.000","Title":"How to login to a website that uses cookies?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using the paramiko library to connect to a specialized environment. It's based on Linux, but when we SSH in it provides its own shell. We can type help to get a list of all commands that are supported in that session.\nI am using paramiko with python2.7 to provide a CLI client (it automates a few things) that connects to the host and lets us run the supported commands. Now I would like to provide tab-completion in the client CLI. I am not sure how this can be done. I am thinking there would be some support or some specialized character that can be sent to get back a response, but I am not sure how it can be accomplished.\nI am hoping to avoid sending a help command, parsing the list of commands supported, and then providing local tab-completion based on the list of commands. I want a more generic and dynamic solution.\nAny or all ideas are welcome.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":48,"Q_Id":68725304,"Users Score":1,"Answer":"You can try simulating the partial input and the Tab key press and parsing the results, undoing the simulated input afterwards. 
But that is not a good idea. You will end up having to re-implement terminal emulation, which is an insane task. Without a full terminal implementation, you can never be sure that you won't get an output that you are unable to parse.\nThe shell is a black box with input and output. It should only be used as such. You should never try to \"understand\" its output.\nUsing the help command is a way more reliable solution.","Q_Score":0,"Tags":"python-2.7,paramiko,tab-completion","A_Id":68730120,"CreationDate":"2021-08-10T10:41:00.000","Title":"Tab completion over ssh library","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a string variable containing the HTML of an email.\nThe email contains Google drive attachments(links)\nI extract the attachment ID of the same and the Google drive attachments are always in the following format:\n'https:\/\/drive.google.com\/file\/d\/123456789\/view?\/usp=drive_web'\n123456789 being the file ID which I am trying to extract\nWhen there is only one attachment, I extract the ID using the below code:\nhtml_string.split('attachment_1<\/a>random HTML text with multiple '\/'attachment_2<\/a>\nNeed to extract the following list :\n['123456789','987654321']\nUsing a code that would work for any number of attachments","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":55,"Q_Id":68741760,"Users Score":0,"Answer":"Once you fix your html_string.split call, you'll find it is returning a list whose elements (except for the first) each start with one of the numbers you want.","Q_Score":0,"Tags":"python,string,split","A_Id":68741918,"CreationDate":"2021-08-11T12:10:00.000","Title":"Find list of strings between 2 substring combinations in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop 
Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm writing a web crawler which extracts cleaned news article text and metadata using the Diffbot API. It also logs article title and text changes if the source was modified since the last extraction. I need some automatic way to distinguish between erased and changed articles: news portals mostly don't return 404 or other error codes when a post is deleted; often they send 200 and a page with a caption like \"Sorry, the article you looking for was removed\". So, I need a tool or approach to detect that kind of situation; preferably it should be something written in Python or something with a web API. I am totally confused and have no idea where even to begin, so any reasonable suggestions are widely appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":24,"Q_Id":68746689,"Users Score":0,"Answer":"You can:\n\nset a minimum length of an article to expect and treat any short text as a removed one\ncompare the Diffbot URI (a unique string) across two articles of the same URL to notice that their body has changed\n\nThese two in tandem should provide you with the diffing capability you seek.","Q_Score":0,"Tags":"python,web-scraping,diffbot","A_Id":68779089,"CreationDate":"2021-08-11T17:42:00.000","Title":"How to distinguish between removed and modified news article during crawling web pages?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a python script that runs locally via a scheduled task each day. Most of the time, this is fine -- except when I'm on vacation and the computer it runs on needs to be manually restarted. 
Or when my internet\/power is down.\nI am interested in putting it on some kind of rented server time. I'm a total newbie at this (having never had a production-type process like this). I was unable to find any tutorials that seemed to address this type of use case. How would I install my python environment and any config, data files, or programs that the script needs (e.g., it does some web scraping and uses headless chrome w\/a defined user profile)?\nGiven the nature of the program, is it possible to do this, or would I need to get a dedicated server whose environment can be better set up for my specific needs? The process runs for about 20 seconds a day.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":139,"Q_Id":68763848,"Users Score":1,"Answer":"Setting up a whole dedicated server for 20s worth of work is really a suboptimal thing to do. I see a few options:\n\nGet a cloud-based VM that gets spun up and down only to run your process. That's relatively easy to automate on Azure, GCP and AWS.\nDockerize the application, along with the whole environment, and run it as an image on the cloud - e.g. on a service like Beanstalk (AWS) or App Service (Azure) - this is more complex, but should be cheaper as it consumes fewer resources\nGet a dedicated VM (droplet?) on a service like Digital Ocean, Heroku or pythonanywhere.com - dependent upon the specifics of your script, it may be quite easy and cheap to set up. 
This is the easiest and most flexible solution for a newbie I think, but it really depends on your script - you might hit some limitations.\n\nIn terms of setting up your environment - there are multiple options, with the most often used being:\n\npyenv (my preferred option)\nanaconda (quite easy to use)\nvirtualenv \/ venv\n\nTo efficiently recreate your environment, you'll need to come up with a list of dependencies (libraries your script uses).\nA summary of the steps:\n\nrun $pip freeze > requirements.txt locally\nmanually edit the requirements.txt file by removing all packages that are not used by your script\ncreate a new virtual environment via pyenv, anaconda or venv and activate it wherever you want to run the script\ncopy your script & requirements.txt to the new location\nrun $pip install -r requirements.txt to install the libraries\nensure the script works as expected in its new location\nset up the cron job","Q_Score":2,"Tags":"python,remote-server","A_Id":68764033,"CreationDate":"2021-08-12T20:32:00.000","Title":"Automate daily python process on remote server for improved reliability","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to create a 1v1-style bot. For this Discord Bot, I need to know if two users are friends in Discord. I looked through the Docs and saw that there's a command to find mutual friends of someone. So then, I thought I could send a friend request to both users (as a bot), and then I could find mutual friends of each other to determine if the two users are friends. However, I couldn't send a friend request or verify mutual friends through the code (403 Forbidden: Endpoint not accessible by a bot user). 
Do I have to create my own friending system, or is there any way that I can (legally) figure out if two users are Discord friends?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":392,"Q_Id":68765449,"Users Score":2,"Answer":"There is no way you can know if User1 is friends with User2; this would be a breach of user privacy. Discord would not allow you to do so.","Q_Score":0,"Tags":"python,discord,discord.py","A_Id":68765566,"CreationDate":"2021-08-13T00:14:00.000","Title":"Discord.py Bot - Mutual Friends","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What is the difference between import library as lib and from library import (....)? I saw people use from library import (....) and import library as lib, and wonder which one is the best practice.\nCan you help me? Thank you.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":82,"Q_Id":68766584,"Users Score":2,"Answer":"There is no functional difference between the two, but for aesthetics, readability and maintainability reasons there are subtle pros and cons to both.\nSome common considerations:\n\nIf a name in the imported module is referenced a lot, it may appear verbose to have the name prepended with its module name every time it is used, e.g. having to repeatedly write lib.func instead of just func. In this case it would make the code look cleaner to import the name from the module so that the name can be used without the module name. 
For example, if you have a complicated formula such as y = sqrt(cos(x)) - sqrt(sin(x)), you don't want to make the code look more complicated than it already is with y = math.sqrt(math.cos(x)) - math.sqrt(math.sin(x)).\nIf there are a large number of names one wishes to use from an imported module, it would appear too verbose to exhaustively list all those names with a from lib import a, b, c, d... statement. For example, it is common to just import ast for syntax tree traversal since many cases of ast involve references to well over 10 node types, so a simple import ast is usually preferred over an excessively long statement of from ast import Module, Expression, FunctionDef, Call, Assign, Subscript, ....\nThe long statement above also showcases its maintainability issue. If, over time, the logics of the code involves more node types, one would have to add the newly referenced node types to the long list of names imported from the module. Conversely, if one of the names imported from the module becomes unused over time, one should remove it from that long list. None of these would be an issue when you use import ast.\nImporting names from a module pollutes the namespace of the current module, increasing the likelihood of collision with local names or names imported from other modules. This is especially likely when the language of the imported name is a generic one. For example, it is discouraged to do from re import search because search is such a commonly used name, and there may very well be a local variable named search or a function named search imported from another module to cause name collisions.\nAn additional point to the example above is that writing search(...) makes your code less readable than writing re.search(...) also because search is too generic a term. A call to re.search makes it clear that you're performing a regex search while a call to search looks ambiguous. 
So use from lib import a only when a itself is a specific term, such as from math import sqrt.","Q_Score":0,"Tags":"python","A_Id":68767003,"CreationDate":"2021-08-13T03:58:00.000","Title":"import library different method","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I created a discord.py bot. Soon after I created it, I wanted to make it look decent by adding a Discord user profile banner.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":268,"Q_Id":68813816,"Users Score":2,"Answer":"Banners for bots aren't supported by Discord yet. They are only available to normal users with an active Nitro subscription.","Q_Score":0,"Tags":"python,discord.py","A_Id":68814502,"CreationDate":"2021-08-17T08:09:00.000","Title":"How to add Discord user profile banner in my discord.py bot","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have created a website that scrapes multiple hockey websites for game scores. It runs perfectly on my local server and I am now in the process of trying to deploy. I have tried using pythonanywhere.com but selenium does not seem to be working on it. For anyone who has deployed a website that uses selenium\/webdriver, what is the easiest\/best platform to deploy a website like this (it does not have to be free like pythonanywhere, as long as it is not too expensive, lol!). Thanks in advance","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":186,"Q_Id":68836594,"Users Score":0,"Answer":"You can use AWS, GCP, or DigitalOcean Linux servers. 
In this case, you first have to install Chrome on Linux and then put the relevant version of ChromeDriver in your project directory. Make sure to check the Chrome version first and then put the matching ChromeDriver on your machine.","Q_Score":0,"Tags":"python,selenium-webdriver,web-deployment","A_Id":68837385,"CreationDate":"2021-08-18T17:05:00.000","Title":"Deploying web scraping website with selenium chrome","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"So I am making a small app which searches the web via a keyword in the PyWebIO input() function and then shows results in put_table(). After the results are ready I need to start another search, and instead of updating the page with F5 or the browser, I want to create a put_button() which will update or get me back to input(). Can anyone help, please?\nI'm just not sure what I should put in the onclick= attribute in this case.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1171,"Q_Id":68851167,"Users Score":0,"Answer":"from pywebio.session import run_js\nput_button(\"ReUpload_images\", onclick=lambda: run_js('window.location.reload()'))","Q_Score":1,"Tags":"python,button,web-applications,refresh","A_Id":70953706,"CreationDate":"2021-08-19T16:03:00.000","Title":"PyWebIO button to refresh the page","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Ok, so I am trying to create a bot that displays information through an api, that uses https:\/\/[PROJECT_ID].firebaseio\/.json?shallow=trye&download=myfilename.txt , but it doesn't show all the information in the firebase, for instance this request will bring up one set of info, but 
then I change .json to something like \"music.json\" and then it gives entirely new data separate from the other; does anyone know how I could get all the json file names to make this process a lot easier?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":61,"Q_Id":68851922,"Users Score":0,"Answer":"Downloading the JSON from a given path in the database should give all data under that path. I just tested this on a database of my own, and got the entire database.\nIf you're only seeing some of the data, it might be caused by how you process the resulting JSON file.","Q_Score":0,"Tags":"python,database,firebase,api,python-requests","A_Id":68852078,"CreationDate":"2021-08-19T17:00:00.000","Title":"Python Requests Firebase - how to find all firebase json filenames?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to figure out whether it is possible to send a response back to the user from my python flask app to twilio, which is integrated with dialogflow.\nUsing Twilio and python only, and using a messaging response, I can send images back to the user.\nHowever, once I link the integration with dialogflow, I'm not sure what to pass back to dialogflow for it to recognise the image link.\nCurrently I am using fulfillment text to send text from python to Twilio\/dialogflow.\nPlease help.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":72,"Q_Id":68853865,"Users Score":0,"Answer":"Thanks for your reply. I have managed to solve this issue. 
For others experiencing a similar issue: the Client.messages.create function allows you to attach a file using the mediaUrl parameter.","Q_Score":0,"Tags":"python,twilio,dialogflow-es","A_Id":68919702,"CreationDate":"2021-08-19T19:55:00.000","Title":"Dialogflow - sending media from python flask app to twilio","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Hi there fellow programmers.\nI've been learning Python for a few months and I've built some automation scripts for WhatsApp Web tasks, like sending messages and media, extracting contacts from groups, etc.\nI did some research here on StackOverFlow, on YouTube and Google, but couldn't find any specific resources on how to build a user interface in Python to manage Selenium tasks.\nThe question is whether it's possible to join my Selenium scripts into a piece of software where I can control and choose functionalities via a user interface.\nI know there are libraries for building user interfaces like Tkinter and PySimpleGUI, but I think it would be good for me and anyone experiencing the same issue if an experienced programmer just pointed out an effective path to solve this issue.\nMy goal is to make it possible for an average user, with no programming background, to enjoy the ease of doing repetitive tasks on WhatsApp Web by using all the power that Python and Selenium have to offer.\nThank you.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":110,"Q_Id":68854026,"Users Score":0,"Answer":"It is definitely possible to make a nice and user-friendly GUI for your Selenium scripts!\nI'm using Python with Selenium and PySimpleGUI (the tkinter version). 
I like PySimpleGUI because it makes it possible to have a much shorter, easily readable code (no extensive boilerplates needed): it is possible to use a lot of defaults and only specify those variables that really need to be set or changed. On PySimpleGUI's GitHub, there are quite many useful examples to get you started as well as to learn some more advanced tricks.\nWhere PySimpleGUI didn't allow to access some specific features (which was just a couple of times for our tasks, e.g., concerning the WM attributes), we were able to access the underlying tkinter structures.","Q_Score":0,"Tags":"python,selenium,user-interface,whatsapp,user-experience","A_Id":68914060,"CreationDate":"2021-08-19T20:11:00.000","Title":"Is it possible to add an user interface to a Selenium based application in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hi there fellow programmers.\nI've been learning Python for a few months and I've built some automation scripts for WhatsApp Web tasks, like sending messages and media, extracting contacts from groups, etc.\nI did some research here on StackOverFlow, on YouTube and Google, but couldn't find any specific resources on how to build an User Interface in Python to manage Selenium tasks.\nThe question is if it's possible to join my Selenium scripts into a Software where I can control and choose functionalities via an User Interface.\nI know there are libraries for building User Interfaces like Tkinker and PySimpleGui, but I think it would be good for me and anyone experiencing the same issue if an experienced programmer just pointed an effective path to solve this issue.\nMy goal is to make possible for an average user, with no programming background, to enjoy the easiness of doing repetitive tasks on WhatsApp Web by using all the power that Python and 
Selenium has to offer.\nThank you.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":110,"Q_Id":68854026,"Users Score":0,"Answer":"You could use tkinter, with buttons and text fields controlling specific routines of the Selenium application.\nPygame could also work.","Q_Score":0,"Tags":"python,selenium,user-interface,whatsapp,user-experience","A_Id":68854423,"CreationDate":"2021-08-19T20:11:00.000","Title":"Is it possible to add an user interface to a Selenium based application in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm making an app that cleans-etc-search a CSV that is updated daily on a website. I was using an EC2 to download the file using python pandas.read_csv(url) to an EBS, but now I want to make the app serverless.\nI want to automate the download from 'https:\/\/sam.gov\/api\/prod\/fileextractservices\/v1\/api\/download\/Contract%20Opportunities\/datagov\/ContractOpportunitiesFullCSV.csv?privacy=Public' and upload it to S3 serverless. I'm not sure if it is possible to do it serverless. Is there a better way to do it?\nThe file size is about 500 MB.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":241,"Q_Id":68877139,"Users Score":1,"Answer":"A lambda is exactly what you would want to use for this kind of scenario. 
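A hedged sketch of what that Lambda's body might look like: stream the daily CSV straight into S3 so the 500 MB file never has to fit on local disk. The bucket name is a placeholder, and `boto3` (which the Lambda runtime provides) is imported inside the handler so the module loads without it:

```python
# Hypothetical Lambda body: fetch the CSV over HTTP and stream it into S3.
# BUCKET and CSV_URL are placeholders, not real resources.
import urllib.request

BUCKET = "my-contract-csv-bucket"  # placeholder bucket name
CSV_URL = "https://example.com/ContractOpportunitiesFullCSV.csv"  # placeholder

def lambda_handler(event, context):
    import boto3  # available in the AWS Lambda Python runtime
    s3 = boto3.client("s3")
    with urllib.request.urlopen(CSV_URL) as body:
        # upload_fileobj reads from any file-like object and uploads in
        # multipart chunks, so the whole file never sits in memory at once.
        s3.upload_fileobj(body, BUCKET, "ContractOpportunitiesFullCSV.csv")
    return {"status": "uploaded"}
```

The function's timeout and memory would need to be raised above the defaults for a download of this size.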
Do the following:\n\nCreate the S3 bucket\nWrite the lambda function\nConfigure an IAM role to give lambda permission to write to the S3 bucket\nConfigure an EventBridge task to trigger the lambda function daily","Q_Score":0,"Tags":"python-3.x,amazon-web-services,amazon-s3,aws-lambda,aws-serverless","A_Id":68877185,"CreationDate":"2021-08-21T22:17:00.000","Title":"Is there a way to download a csv file from a website and upload it directly to Amazon S3 using Lambda?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Does the Xpath change if the content inside the XPath changes?\nI.e. the website changes the text in the XPath from 'supports' to 'support'. Would the XPath change even if the text change or will it stay the same?","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":317,"Q_Id":68883977,"Users Score":1,"Answer":"XPath is a syntax for locating an element on the page based on its attributes, such as tag name, class name, id or href values.\nAn element can also be located relative to other elements.\nSo, if you are locating the element based on its tag name and class name (for example), changing the element's text content will not affect selecting it with the previously created XPath locator.\nHowever, if you are locating the element based on its text content and that text changes, the XPath locator will obviously no longer find the element, since no element with the old text is present on the page any more.","Q_Score":1,"Tags":"python,selenium,xpath,bots","A_Id":68884204,"CreationDate":"2021-08-22T18:13:00.000","Title":"Does an XPath change if the content inside it changes?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and 
Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Does the Xpath change if the content inside the XPath changes?\nI.e. the website changes the text in the XPath from 'supports' to 'support'. Would the XPath change even if the text change or will it stay the same?","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":317,"Q_Id":68883977,"Users Score":1,"Answer":"You make the common mistake of thinking that every element has \"an XPath\". Not so - there are any number of XPath expressions that will select a particular element. Just as you might be John Smith, Mary Smith's husband, Pete Smith's second son, Susan Smith's dad, or the guy wearing red trainers, so elements can be identified in XPath by any number of their distinguishing characteristics: and any particular XPath expression will continue to select that element so long as those characteristics don't change.","Q_Score":1,"Tags":"python,selenium,xpath,bots","A_Id":68888609,"CreationDate":"2021-08-22T18:13:00.000","Title":"Does an XPath change if the content inside it changes?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Does the Xpath change if the content inside the XPath changes?\nI.e. the website changes the text in the XPath from 'supports' to 'support'. Would the XPath change even if the text change or will it stay the same?","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":317,"Q_Id":68883977,"Users Score":1,"Answer":"See, it's bad practice to have an XPath with hardcoded text, because if you are viewing the website in English then the XPath (let's say) \/\/div[text()='support'] represents at least one node in the DOM, whereas the same website in another language (let's say German) will have some different text for support, right? 
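The distinction the accepted answer draws can be demonstrated with the standard library's limited XPath support in `xml.etree.ElementTree`: an attribute-based locator survives a text change, while a text-based one does not. The tiny HTML fragment here is invented for illustration:

```python
# Demonstrating attribute-based vs text-based XPath locators on a toy page.
import xml.etree.ElementTree as ET

page = ET.fromstring("<html><body><div class='faq'>supports</div></body></html>")

by_attr = page.find(".//div[@class='faq']")   # locate by an attribute
by_text = page.find(".//div[.='supports']")   # locate by text content
assert by_attr is not None and by_text is not None

by_attr.text = "support"  # the site changes the wording

assert page.find(".//div[@class='faq']") is not None  # still found
assert page.find(".//div[.='supports']") is None      # text locator now fails
```

Selenium's `find_element(By.XPATH, ...)` behaves the same way with the equivalent expressions; ElementTree is just used here because it runs without a browser.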
so your automation scripts will fail because there is no common XPath locator.\nHaving said that, XPath is the least preferred locator after ID, class name, tag name, CSS selector, link text and partial link text; if none of those work, you will obviously have to use XPath. XPath also comes in handy if you want to move upward in the DOM.","Q_Score":1,"Tags":"python,selenium,xpath,bots","A_Id":68890470,"CreationDate":"2021-08-22T18:13:00.000","Title":"Does an XPath change if the content inside it changes?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I've looked around and although there are a bunch of similarly phrased questions, I haven't found one that addresses my question. I don't really want to trawl through Stack Overflow, so here's to hoping this isn't a duplicate.\nSo I coded a Discord Embed that requires pinging to work. The text is displaying as a discord ping should look with the light blue background and such, but there is no ping and users simply get a new message notification instead of a ping. This is the case for role mentions as well as user mentions. For user mentions I used author.mention and for role mentions I used the ID. Does anyone know how I can change this \"setting?\"\nOne possible workaround that I have thought up is that I could ping the needed parties and then instantly delete the ping right before sending the embed, but for my peace of mind I would prefer if the ping was the one which is displayed in the embed.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1739,"Q_Id":68885347,"Users Score":0,"Answer":"As far as I know, you can't do a \"ping\" in an embed, at least not what you would call a ping. To ping people you have to do it in a normal message. 
You could send that ping message just before the embed and then delete it, or simply leave it in place.","Q_Score":0,"Tags":"python,discord.py,embed","A_Id":68910098,"CreationDate":"2021-08-22T21:35:00.000","Title":"How to ping people\/roles inside a Discord Embed?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am creating a glue job(Python shell) to export data from redshift and store it in S3. But how would I automate\/trigger the file in S3 to download to the local network drive so the 3rd party vendor will pick it up.\nWithout using the glue, I can create a python utility that runs on local server to extract data from redshift as a file and save it in local network drive, but I wanted to implement this framework on cloud to avoid having dependency on local server.\nAWS cli sync function won't help as once the vendor picks up the file, I should not put it again in the local folder.\nPlease suggest good alternatives.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":421,"Q_Id":68897012,"Users Score":1,"Answer":"If the interface team can use S3 API or CLI to get objects from S3 to put on the SFTP server, granting them S3 access through an IAM user or role would probably be the simplest solution. The interface team could write a script that periodically gets the list of S3 objects created after a specified date and copies them to the SFTP server.\nIf they can't use S3 API or CLI, you could use signed URLs. You'd still need to communicate the S3 object URLs to the interface team. A queue would be a good solution for that. But if they can use an AWS SQS client, I think it's likely they could just use the S3 API to find new objects and retrieve them.\nIt's not clear to me who controls the SFTP server, whether it's your interface team or the 3rd party vendor. 
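The signed-URL option mentioned in that S3 answer can be sketched with boto3's `generate_presigned_url`; the bucket, key, and expiry below are placeholders, and `boto3` is imported lazily inside the helper:

```python
# Sketch: hand the interface team a time-limited download link for each new
# object, instead of giving them IAM credentials. Names are placeholders.

def presign_download(bucket, key, expires_seconds=3600):
    import boto3  # lazy import; only needed when actually presigning
    s3 = boto3.client("s3")
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires_seconds,  # link stops working after this many seconds
    )
```

Each URL produced this way could then be pushed onto the queue (e.g. SQS) that the answer suggests for notifying the other team.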
If you can push files to the SFTP server yourself, you could create a S3 event notification that runs a Lambda function to copy the object to the SFTP server every time a new object is created in the S3 bucket.","Q_Score":0,"Tags":"python,amazon-web-services,amazon-s3,aws-glue","A_Id":68915684,"CreationDate":"2021-08-23T17:54:00.000","Title":"download file from s3 to local automatically","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For the Google Drive Python API, in all the tutorials I have seen, they require\nusers to create a project in their Google Dashboard, before obtaining a client ID and a client secret json file. I've been researching both the default Google Drive API and the pydrive module.\nIs there a way for users to simply login to their Google Account with username and password,\nwithout having to create a project? So once they login to their Google Account, they are free to\naccess all files in their Google Drive?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":230,"Q_Id":68914837,"Users Score":0,"Answer":"It's not possible to use the Drive API without creating a GCP project for the application. Otherwise Google has no idea what application is requesting access, and what scope of account access it should have.\nUsing simply a username and password to log in is not possible. 
You need to create a project and use OAuth.","Q_Score":0,"Tags":"python,google-drive-api,pydrive","A_Id":68918319,"CreationDate":"2021-08-24T22:43:00.000","Title":"Google Drive Python API without Creating Project","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I do not have experience in programming. Now I try to learn React. I found different APIs provided by companies for free. But this way I can practice only GET requests. Because, no one wants me to delete, add or edit on their servers :)\nSo my question is:\nHow front end developers can practice DELETE, POST and PUT requests?","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":351,"Q_Id":68922428,"Users Score":-1,"Answer":"My suggestion is that you download a local server from MySql and then use your local server along with your IDE and make the requests in your own database","Q_Score":2,"Tags":"reactjs,python-requests,axios,frontend,rest","A_Id":68922463,"CreationDate":"2021-08-25T11:52:00.000","Title":"Where can I practice DELETE, PUT and POST requests. Do some free API exist that allow to do this?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm extremely new to Elasticsearch and I can't seem to find an answer that will help me to get Python to detect if the data from the documents I have in an s3 bucket are already uploaded in elasticsearch. My goal is to have it see if the data from the s3 bucket is already in there, if it is then skip it, and move onto the next one until it finds a document that has data not uploaded yet. 
Can someone help me, please?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":34,"Q_Id":68926282,"Users Score":1,"Answer":"I think the easiest way would be to use DynamoDB to store that kind of information. So each file that you upload to ES, gets a record in DDB. Thus you can always verify if the file had been uploaded to ES, by checking for the presence\/absence of records in DDB.","Q_Score":0,"Tags":"python,amazon-web-services,elasticsearch,amazon-s3","A_Id":68931012,"CreationDate":"2021-08-25T16:03:00.000","Title":"Elasticsearch and S3 bucket: how do I get Python to detect if data from s3 bucket are already in elasticsearch?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently doing a simulation in which each second has 200k points. I want to send this in real-time as much as possible with very minimal delay. The problem is sending 1 packet in lorawan has delay and some packets are not sending which natural.\nMy question is, How can I send this 200k points into single packet. For example, after 1 second I will send all data (200k points) into the network, in a packet.\nBTW, I am using python.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":68,"Q_Id":68943507,"Users Score":0,"Answer":"The use case you have is not one for LoRaWAN. It is for low data, low need applications over wide areas. 200k points (which I must assume by the name is not a single byte per unit) every second is a datatransfer of at least 720MB. 
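As a sanity check on that 720 MB figure from the LoRaWAN answer: even under the very conservative assumption of a single byte per point, 200k points per second adds up to 720 MB per hour.

```python
# Back-of-envelope check of the data rate quoted above.
points_per_second = 200_000
bytes_per_point = 1            # conservative; real points are surely larger
seconds_per_hour = 3600

bytes_per_hour = points_per_second * bytes_per_point * seconds_per_hour
print(bytes_per_hour / 1_000_000)  # prints 720.0 (megabytes per hour)
```

For comparison, a single LoRaWAN uplink payload is limited to somewhere between roughly 51 and 242 bytes depending on region and data rate, which is why the answer rules the technology out for this use case.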
That is way too much.\nThis is never going to work; you need to move to WiFi\/Bluetooth for those kinds of transfers, but your range is going to decrease dramatically.","Q_Score":0,"Tags":"python,compression,packet,lorawan","A_Id":68965249,"CreationDate":"2021-08-26T18:12:00.000","Title":"How to send thousands of data points in lorawan into single packet?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am attempting to download a small image file (e.g. https:\/\/cdn4.telesco.pe\/file\/some_long_string.jpg) in the shortest possible time.\nMy machine pings 200ms to the server, but I'm unable to achieve better than 650ms.\nWhat's the science behind fast-downloading of a single file? What are the factors? Is a multipart download possible?\nI find many resources for parallelizing downloads of multiple files, but nothing on optimizing for download-time on a single file.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":24,"Q_Id":68952284,"Users Score":0,"Answer":"It is not so easy to compare those two types of response time...\nThe command-line \"machine ping\" is a much more \"low-level\" and fast type of response in the network architecture between two devices, computers or servers.\nWith a Python script that asks for a file on a remote webserver you have much more \"overhead\" in the request, where every layer consumes some milliseconds: the speed of your local Python runtime, the operating systems of your machine and the remote server (win\/osx\/linux), the webserver in use and its configuration (apache\/iis\/nginx), etc.","Q_Score":0,"Tags":"python,curl,download,get,wget","A_Id":68953684,"CreationDate":"2021-08-27T10:59:00.000","Title":"How to minimize download time for a single .jpg file download?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and 
Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Lets say that the PUSH socket is sending messages every 1 second, and the PULL socket is receiving messages every 10 seconds.\nSo, in 100 seconds, the PUSH socket has sent 100 messages, while the PULL socket has only received 10.\nNow, what happens if the PUSH socket dies, and the PULL socket keeps running?\nWill it still receive messages?\nAlso, is there a limit to the messages that the PUSH socket with hold with nobody receiving it?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":74,"Q_Id":68996196,"Users Score":0,"Answer":"You'll find it all in the documentation, but I think the answers are as follows:\nTo understand what will happen when the PUSH socket dies, it helps to understand a bit about how ZMQ works. When you write a message to the PUSH socket, you're actually asking a ZMQ management thread to buffer and transfer that message. That talks (using the ZMTP protocol) to the ZMQ management thread in the client which buffers it. The client's management thread will then inform the client application that a message has been received. The client ZMQ management thread keeps the message in its buffer until the application has read the message.\nThe size of the buffers are not infinite. For a constant stream of messages, at some point the client management thread's message buffer can fill up, meaning that it refuses to take messages from the sender, meaning that the sender's management thread's buffers start to fill up. Eventually, it will refuse to take messages from the sending application, and a zmq_send() blocks.\nYou can alter the size of these buffers by setting high water marks on sockets, but by default they grow on demand (I think). 
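Setting the high water marks mentioned above can be sketched with pyzmq (an assumption based on the question's tags). For what it's worth, in ZeroMQ 3.x and later the defaults are finite, 1000 messages per direction, after which a PUSH send blocks:

```python
# Sketch: cap the per-socket queues with pyzmq's high-water-mark setting.
# The endpoint string is illustrative; pyzmq is imported lazily.

def make_push(ctx, endpoint="tcp://127.0.0.1:5557", hwm=1000):
    import zmq
    sock = ctx.socket(zmq.PUSH)
    sock.set_hwm(hwm)      # caps the sender-side queue; sends block when full
    sock.bind(endpoint)
    return sock

def make_pull(ctx, endpoint="tcp://127.0.0.1:5557", hwm=1000):
    import zmq
    sock = ctx.socket(zmq.PULL)
    sock.set_hwm(hwm)      # caps the receiver-side queue
    sock.connect(endpoint)
    return sock
```

`set_hwm` sets both `SNDHWM` and `RCVHWM`; they can also be set individually with `setsockopt` when the two directions should differ.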
That means that messages can be accumulated until memory is exhausted, but regardless if all the buffers are full the PUSH socket write blocks.\nSo, as long as the management threads are cooperating to keep the client end buffer containing at least something, messages are flowing at the peak rate of the combined applications (1 every 10 seconds in your example). The issue is what do these threads decide to do if the client application isn't reading at the rate the sending application is writing?\nI believe that the policy changed between ZMQ version 3 and 4. I think that in v3, they were biased to accumulate messages in the sending end. But in v4 I think they switched, and messages would be accumulated in the client end buffers.\nThis means that, so long as the management thread buffers haven't filled up, for version 4 if the PUSH end dies then all of the messages sent but not yet read have been transferred across the network and are waiting in the PULL end's management thread buffers, and can be read. 
Whereas in version 3, there'd be more messages kept in the PUSH end management thread buffers, and they've not been sent when the PUSH end dies.\nI may have got that version 3, 4 thing the wrong way round.","Q_Score":0,"Tags":"python,sockets,message-queue,zeromq,pyzmq","A_Id":69049816,"CreationDate":"2021-08-31T09:27:00.000","Title":"What happens when Python ZMQ PULL socket is receiving messages at a different speed than the PUSH socket?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"ImportError: pycurl: libcurl link-time ssl backends (schannel) do not include compile-time ssl backend (openssl)\ni use win10 + py3.9 + pycurl-7.44.1-cp39-cp39-win_amd64.whl + i can't use import ,please help me","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":324,"Q_Id":69001495,"Users Score":0,"Answer":"pycurl version\npip install pycurl==7.43.0.5","Q_Score":1,"Tags":"python,python-3.x,xml,pycurl,pyspider","A_Id":69068217,"CreationDate":"2021-08-31T15:31:00.000","Title":"libcurl link-time ssl backends (schannel) do not include","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to do some text mining of trending topics from El Salvador, but this country does not have a WOEID active code.\nAny other alternatives?\nThanks in advance for your support.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":19,"Q_Id":69017380,"Users Score":0,"Answer":"try using API.trends_closest(lat, long). 
I'm not sure this will work.","Q_Score":0,"Tags":"python,twitter","A_Id":69040591,"CreationDate":"2021-09-01T16:16:00.000","Title":"Twitter trending topics for a country without WOEID","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to deploy a simple web app consisting front end reactjs and backend python for the api routes\nI want to allow api reqeusts by reactjs only, that will be deployed on the same server\nIn other words I do not want others to be able to e.g use postman to call the api\nIs this possible? I do not wish to add authentication at this point as it's a really small project","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":345,"Q_Id":69056323,"Users Score":1,"Answer":"One option is to have flask listen on 127.0.0.1 and then only local users will be able to connect to the api. 
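A minimal sketch of that first option: bind Flask to the loopback interface so only processes on the same machine can reach the API. The route and port are invented for illustration:

```python
# Sketch: a Flask app that is only reachable from the local machine.

def create_app():
    from flask import Flask, jsonify  # lazy import; assumes Flask is installed
    app = Flask(__name__)

    @app.route("/api/ping")
    def ping():
        return jsonify(ok=True)

    return app

def serve():
    # host="127.0.0.1" is actually app.run's default, but being explicit makes
    # the intent clear; "0.0.0.0" would expose the API to the whole network.
    create_app().run(host="127.0.0.1", port=5000)
```

The same idea applies unchanged when running behind gunicorn or uwsgi: bind the server to `127.0.0.1` (or a unix socket, as the answer's second option describes) rather than all interfaces.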
Another option, you can bind the flask app on a unix socket file (instead of a ip:port) and set the file permissions for that socket so that only the users you want will be able to access it.","Q_Score":0,"Tags":"python,flask,flask-restful","A_Id":69056606,"CreationDate":"2021-09-04T14:42:00.000","Title":"python flask limit connection to requests from local network only?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm sorry that this has already been asked, but I have not found anything helpful.\nI am trying to create a discord bot and am following YouTube videos.\nWhen I try to run my code, I keep getting the error:\nModuleNotFoundError: No module named 'discord'\ndespite having installed discord and discord.py.\nI checked where it has been asked before and some people have said that using older Python versions makes it work, but newer ones don't. Why not? 
How can I use an older Python version without ruining the latest one?\nThank you in advance.","AnswerCount":2,"Available Count":2,"Score":-0.0996679946,"is_accepted":false,"ViewCount":106,"Q_Id":69058866,"Users Score":-1,"Answer":"Did you tried pip install discord and pip install discord.py?","Q_Score":0,"Tags":"python,discord","A_Id":69062399,"CreationDate":"2021-09-04T20:49:00.000","Title":"ModuleNotFoundError: No module named 'discord' \/ how to switch python versions","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm sorry that this has already been asked, but I have not found anything helpful.\nI am trying to create a discord bot and am following YouTube videos.\nWhen I try to run my code, I keep getting the error:\nModuleNotFoundError: No module named 'discord'\ndespite having installed discord and discord.py.\nI checked where it has been asked before and some people have said that using older Python versions makes it work, but newer ones don't. Why not? 
How can I use an older Python version without ruining the latest one?\nThank you in advance.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":106,"Q_Id":69058866,"Users Score":0,"Answer":"This happened to me and none of my modules worked.\nUninstall Python and then reinstall it, so that it is updated.","Q_Score":0,"Tags":"python,discord","A_Id":69062447,"CreationDate":"2021-09-04T20:49:00.000","Title":"ModuleNotFoundError: No module named 'discord' \/ how to switch python versions","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Have been successfully controlling an Arduino Uno board with Python serial libraries on my own computer but would like to use Google Colaboratory so my grandson can plug and play without having libraries installed on his computer. Can serial libraries be imported into the Colaboratory notebook env?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":426,"Q_Id":69060072,"Users Score":0,"Answer":"You can easily install any package in Jupyter notebooks by inserting a bang command such as !pip install pyserial.\nUnfortunately, this isn't really what you're looking for. Google Colab runs remotely from your computer (this is in large part why it's popular, as it can use Google's beefier hardware). This means you would be installing pyserial into a server computer that has no local access to your Arduino.\nI would recommend exporting your code as a notebook and getting him set up locally on his computer with the Jupyter environment. Obviously this is not as plug and play as you want. 
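A gentler alternative to the reinstall suggested in the ModuleNotFoundError answer above is to give the bot its own virtual environment, so one interpreter's packages cannot shadow another's. A sketch using the stdlib `venv` module; the directory name "botenv" is arbitrary:

```python
# Sketch: create an isolated environment and return the path to its
# interpreter, into which discord.py can then be installed.
import sys
import venv
from pathlib import Path

def make_bot_env(env_dir="botenv", with_pip=True):
    venv.create(env_dir, with_pip=with_pip)  # build the isolated environment
    sub = "Scripts/python.exe" if sys.platform == "win32" else "bin/python"
    return Path(env_dir) / sub

# Afterwards, install into that exact interpreter, e.g.:
#   <env_dir>/bin/python -m pip install -U discord.py
```

Running the bot with the environment's interpreter guarantees the `discord` module it installed is the one that gets imported, regardless of which other Python versions are on the machine.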
I can't really think of an easy way to get as plug in play as you want with configurable scripts.","Q_Score":0,"Tags":"python-3.x,pyserial,arduino-uno","A_Id":69063442,"CreationDate":"2021-09-05T01:48:00.000","Title":"Are python serial communications libraries available in the google colaboratory online dev environment?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm currently trying to create a spider which crawls each result and takes some info from each of them. The only problem is that I don't know how to find the URL that I'm currently on (I need to retrieve that too).\nIs there any way to do that?\nI know how to do that using Selenium and Scrapy-Selenium, but I'm only using a simple CrawlSpider for this project.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":31,"Q_Id":69063931,"Users Score":0,"Answer":"You can use:\ncurrent_url = response.request.url","Q_Score":0,"Tags":"python,scrapy","A_Id":69064957,"CreationDate":"2021-09-05T13:44:00.000","Title":"Is there any way to find the URL that you are currently scraping?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to write a code which collects data from website using REST API.\nThere is an authentication and I correctly send POST request (let's name it LogIn) with credentials and correct response. Then, I would like to call another POST command (let's name it GetData), but I get \"Unauthorized: Access is denied due to invalid credentials.\"\nI'm using python requests.session() to keep all cookies and I've noticed something wired. 
If I log in the web browser and replace the ASP.NET_SessionId cookie in my python GetData request (like that: self.session.cookies.update({'ASP.NET_SessionId': 'XXX'})) the response is correct.\nI've checked and the python LogIn request also generates an ASP.NET_SessionId cookie, but somehow it is not valid, and only if I copy it from the browser is it correct.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":55,"Q_Id":69074443,"Users Score":0,"Answer":"You are not keeping\/sending the authentication cookie from the login to the next call.\nWhen you call the login page, the login is answered with an authentication cookie, and maybe with a session cookie.\nOn the next call you have to provide these two cookies to be able to get the data; one of them assures that you have the permissions to read it.","Q_Score":0,"Tags":"python,asp.net,api,python-requests,rest","A_Id":69075954,"CreationDate":"2021-09-06T12:19:00.000","Title":"ASP.NET_SessionId access denied","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to write a code which collects data from website using REST API.\nThere is an authentication and I correctly send POST request (let's name it LogIn) with credentials and correct response. Then, I would like to call another POST command (let's name it GetData), but I get \"Unauthorized: Access is denied due to invalid credentials.\"\nI'm using python requests.session() to keep all cookies and I've noticed something weird. 
If I log in the web browser and replace ASP.NET_SessionId cookie in my python GetData request (like that: self.session.cookies.update({'ASP.NET_SessionId': 'XXX'})) the response is correct.\nI've checked and python LogIn request also generates ASP.NET_SessionId cookie but somehow it is not valid and only if I copy it from browser it is correct.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":55,"Q_Id":69074443,"Users Score":0,"Answer":"Is it a feasible short-term workaround to manually append headers from the Login response onto subsequent requests, rather than using requests.session()?","Q_Score":0,"Tags":"python,asp.net,api,python-requests,rest","A_Id":69075385,"CreationDate":"2021-09-06T12:19:00.000","Title":"ASP.NET_SessionId access denied","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a S3 bucket and some Python code, the code read all the available files for the current day and download them to s3 (it reads the files from FTP in an ascending order, based on the datetime in the filename when the file gets uploaded to FTP), so for example I have downloaded file 1 and file 2 in the last run and uploaded them to S3, now I know FTP has a new file file 3 available, then a new run will download files in the following order: file1 file2 and file3 and upload all the files again in the same order to the same S3 path (file1 and file2 gets overwritten, and new file file 3 will also be uploaded to s3).\nMy question is what's the easiest way to identify the newly-uploaded file file3 in Python?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":77,"Q_Id":69116144,"Users Score":0,"Answer":"The easiest way I can think of to see the difference between 'updated' files and newly created files is simply doing a try\/except 
GetObject before the PutObject. This is preferred over first doing the PutObject then trying to figure out what changed, since S3 has no easy way of retrieving objects by 'Modified date' or similar.\nSo if your question was about checking which files were already present in S3 before uploading, try doing the GetObject first :).","Q_Score":0,"Tags":"python,python-3.x,amazon-web-services,amazon-s3,ftp","A_Id":69116592,"CreationDate":"2021-09-09T10:01:00.000","Title":"What's the easiest way to get the latest uploaded file in S3 (when other existing files get overwritten) - Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on a bot in discordpy. I have a discord server for only people in school. Everyone has a school email, and I want to make a bot that can send a code to verify them.\nWhere I am stuck is that when the bot asks for an email, it won't let me DM the bot back. In discord I'm being told that my message can't be delivered, because we don't share any mutual servers, etc. There are no errors in the python log. Also, when a new person joins, they are banned right away so they can't read or write anything. All of the verification is happening in DMs. The person is only unbanned after they've been verified.\nI have tried googling everywhere, but I just cannot find an answer. If there is one that already exists, could you please point me to it?\nIs there a solution to this? Or should I try something else, like blocking the user from reading all channels except one?\nThank you so much for your help! It really means a lot to me. ;D","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":210,"Q_Id":69144082,"Users Score":0,"Answer":"It's simply something that you aren't allowed to do in Discord.
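The try/except pattern recommended in the S3 answer above might look like the sketch below. The helper names are my own, the client is passed in so any boto3 S3 client (or a test stub) works, and a HEAD request stands in for the full GetObject since only existence matters here.

```python
def object_exists(s3_client, bucket, key):
    """HEAD the key; a 404 error means it is not in the bucket yet."""
    try:
        s3_client.head_object(Bucket=bucket, Key=key)
        return True
    except Exception as exc:  # botocore's ClientError carries .response
        code = getattr(exc, "response", {}).get("Error", {}).get("Code")
        if code in ("404", "NoSuchKey"):
            return False
        raise


def split_new_and_updated(s3_client, bucket, filenames):
    """Partition the FTP listing into keys new to S3 and keys being overwritten."""
    new, updated = [], []
    for name in filenames:
        (updated if object_exists(s3_client, bucket, name) else new).append(name)
    return new, updated
```

Running the check before each PutObject tells you which uploads are genuinely new, which is exactly the file3 case in the question.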
It doesn't depend on the Discord API.\n\nOr should I block the user to see all the channels except one?\n\nThis may be the best alternative.","Q_Score":0,"Tags":"python,discord,discord.py","A_Id":69144104,"CreationDate":"2021-09-11T14:54:00.000","Title":"Message isn't delivered when I try to DM my bot in discord","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on a bot in discordpy. I have a discord server for only people in school. Everyone has a school email, and I want to make a bot that can send a code to verify them.\nWhere I am stuck is that when the bot asks for an email, it won't let me DM the bot back. In discord I'm being told that my message can't be delivered, because we don't share any mutual servers, etc. There are no errors in the python log. Also, when a new person joins, they are banned right away so they can't read or write anything. All of the verification is happening in DMs. The person is only unbanned after they've been verified.\nI have tried googling everywhere, but I just cannot find an answer. If there is one that already exists, could you please point me to it?\nIs there a solution to this? Or should I try something else, like blocking the user from reading all channels except one?\nThank you so much for your help! It really means a lot to me. ;D","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":210,"Q_Id":69144082,"Users Score":0,"Answer":"What I did for my school server is to have all the channels be locked, then send a verification DM. They can't see the students and they don't have permission to make a new invite, so the user has to complete the verification. After that, send the replies to a mod-only channel for approval.\nIf they have their DMs closed then... 
That's on them.","Q_Score":0,"Tags":"python,discord,discord.py","A_Id":69147964,"CreationDate":"2021-09-11T14:54:00.000","Title":"Message isn't delivered when I try to DM my bot in discord","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Does anybody know if the socketio library is compatible with Twilio media stream?\nI have a sanic blueprint application and want to retrieve audio inside it. I built a websocket async server with socketio and attached the application to it.\nI'm getting \"31920 Stream - WebSocket - Handshake Error\" which may suggest that socketio is not compatible.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":59,"Q_Id":69179511,"Users Score":0,"Answer":"FYI I just got a confirmation from Twilio Voice Support Team that SocketIO is not compatible with Streams and will not work.","Q_Score":0,"Tags":"python,server,socket.io,twilio,twilio-twiml","A_Id":69189691,"CreationDate":"2021-09-14T14:20:00.000","Title":"Twilio Media Stream and websocket server socketio python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a project in which through the boto3 SDK I need to obtain the information from Alternate Contacts and Contact Information.\nIs there a method that does this with boto3? Thanks!","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":303,"Q_Id":69194410,"Users Score":0,"Answer":"Yeah I've run into that problem many times before. When dealing with large-scale organizations this is kind of a bottleneck sometimes. 
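The verification flow in the accepted Discord answer above needs two small building blocks: a securely generated code to email, and a check that the address belongs to the school. A sketch using only the standard library; SCHOOL_DOMAIN is a placeholder, and wiring this into discord.py events is omitted.

```python
import re
import secrets

SCHOOL_DOMAIN = "example.edu"  # assumption: replace with the real school domain


def make_verification_code():
    """6-digit, zero-padded code from a cryptographically secure source."""
    return f"{secrets.randbelow(10**6):06d}"


def is_school_email(address, domain=SCHOOL_DOMAIN):
    """Minimal sanity check: one '@', non-empty local part, exact domain."""
    match = re.fullmatch(r"[^@\s]+@([^@\s]+)", address)
    return bool(match) and match.group(1).lower() == domain
```

The bot would email the code, then compare the member's DM reply against it before lifting the restrictions.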
It's currently not possible to automate easily.\nSome corporations I've seen get around this by tagging accounts with 'BillingContact', 'TechnicalContact', etc. and building their own logic around those tags using Lambdas. This doesn't help in letting account owners receive messages from AWS directly, but it gives some possibilities to email account owners using your own logic to have some form of governance.","Q_Score":0,"Tags":"python,amazon-web-services,boto3,aws-billing","A_Id":69199196,"CreationDate":"2021-09-15T13:46:00.000","Title":"How to get Contact Information and Alternate Contact in AWS\/Billing using boto3?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is there a way to have an AWS Lex chatbot send a follow-up response to a user if the user has not replied after a certain amount of time?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":58,"Q_Id":69214293,"Users Score":0,"Answer":"Lex responds to direct inputs. Lex, by itself, will not prompt the user to engage if the user does not respond within a specified timeframe.\nThat is custom logic that you would need to implement outside of Lex.","Q_Score":0,"Tags":"python-3.x,aws-lambda,amazon-lex","A_Id":69222864,"CreationDate":"2021-09-16T19:43:00.000","Title":"AWS Lex Chatbot - Send user a follow up message if no response detected","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to scrape Instagram posts using selenium without using an API. 
The problem I run into is that Instagram will automatically redirect me to\nhttps:\/\/www.instagram.com\/accounts\/login\/\ninstead of grabbing the link of the post I want. Is there a way to stop Instagram from redirecting me on selenium?\nhere is an example of a json link I am trying to go to instead of the redirected account logins:\nhttps:\/\/www.instagram.com\/kingjames\/?__a=1\nI dont want to have to log into an Instagram account.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":173,"Q_Id":69231057,"Users Score":0,"Answer":"It probably does not work properly because Instagram does not like datacenter ip ranges and it throws 302 redirect to login in this case. Try residential proxies.","Q_Score":0,"Tags":"python,selenium,selenium-webdriver,selenium-chromedriver,instagram","A_Id":69337699,"CreationDate":"2021-09-18T02:08:00.000","Title":"Is there a way to stop selenium from being redirected to login? Scraping Instagram with no api","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I made a FastAPI API that is used by some other back-end and not directly by a front-end. I'm trying to add a security layer with oAuth but all I'd need is a unique access token that this other API would use every time it wanted to consume mine. I haven't seen anything like this on the documentation and I was wandering if this is possible without having to define a user-login model.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":31,"Q_Id":69250702,"Users Score":0,"Answer":"I figured that instead of using any oAuth library I would just implement this solution myself by adding a function to every endpoint I wanted to secure that requires a pre made unique token. 
By using hashlib library I can choose any hashing algorithm I want to check if the hashed string matches the defined token so the actual password doesn't live in the code.","Q_Score":0,"Tags":"python,oauth-2.0,fastapi","A_Id":69255594,"CreationDate":"2021-09-20T07:42:00.000","Title":"How to create an oAuth unique token for back-end connection","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using the latest version telethon 1.23. when i connected everything is good. But as soon as I start to download contact avatars (using download_profile_photo ) after the third or fifth count, the account number goes to the ban by telegram. The user has been deleted\/deactivated (caused by GetDialogsRequest).\nPlease help!!!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":273,"Q_Id":69264742,"Users Score":0,"Answer":"I'm not the only one with this error.\nAfter the release of update 1.24 everything worked.","Q_Score":1,"Tags":"python,telethon","A_Id":70820581,"CreationDate":"2021-09-21T07:08:00.000","Title":"The user has been deleted\/deactivated (caused by GetDialogsRequest)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"hope you're having a good day.\nI am using UDP to send packets of data from a microcontroller to a python server on my PC (I could not get the speeds I needed over TCP).\nI want to send a repeat request to the microcontroller if a packet is dropped (assuming this is the easiest method for error correction over UDP)?\nI am aware UDP uses checksum and if this is incorrect it will be dropped by the receiver. 
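The hashlib check described in the answer above can be done in a few lines; the FastAPI dependency wiring is omitted here. hmac.compare_digest keeps the comparison constant-time, and the literal token below exists only so the sketch is self-contained.

```python
import hashlib
import hmac

# In deployment only this digest is stored (e.g. in config or an env var);
# the plaintext token never lives in the code. The literal is demo-only.
EXPECTED_DIGEST = hashlib.sha256(b"super-secret-token").hexdigest()


def token_is_valid(presented_token, expected_digest=EXPECTED_DIGEST):
    """Hash the presented token and compare digests in constant time."""
    digest = hashlib.sha256(presented_token.encode()).hexdigest()
    return hmac.compare_digest(digest, expected_digest)
```

Each protected endpoint then calls `token_is_valid` on the token the caller sends and returns 401 on failure.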
Is there a flag or equivalent in python socket so that when a packet is dropped, I can ask the microcontroller to send the packet again?\nThanks in advance for your time,\nWill","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":74,"Q_Id":69334590,"Users Score":1,"Answer":"UDP is an unreliable protocol. It has no mechanism to \"ask for sending again\" if a packet gets dropped because of the wrong checksum. It has also no mechanism to detect duplicate packets, reordering or lost packets. There is no kind of flag which could be switch on to get a reliable transport based on UDP.\nIf you need reliable transport either use TCP or implement you own custom reliability layer on top of UDP.","Q_Score":0,"Tags":"python,sockets,error-handling,udp","A_Id":69334742,"CreationDate":"2021-09-26T11:46:00.000","Title":"UDP Incorrect checksum triggers repeat request instead of dropping packet","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any way to detect if a link is invalid in python webbot? I need to tell the user that the link was invalid but I don't how how to detect it.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":48,"Q_Id":69336015,"Users Score":0,"Answer":"You could try sending an HTTP request, opening the result, and have a list of known error codes, 404, etc. You can easily implement this in Python and is efficient and quick. 
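Since, as the answer above notes, UDP gives you nothing beyond dropping bad-checksum packets, a custom reliability layer has to add its own sequence numbers. A receiver-side sketch that detects gaps so the application can ask the microcontroller to resend them; sender-side timers and the resend request itself are left out.

```python
import struct

HEADER = struct.Struct("!I")  # 4-byte big-endian sequence number


def make_packet(seq, payload):
    """Prefix the payload with its sequence number."""
    return HEADER.pack(seq) + payload


class ReorderBuffer:
    """Track received sequence numbers and report the gaps."""

    def __init__(self):
        self.received = {}

    def accept(self, packet):
        seq, = HEADER.unpack_from(packet)
        self.received[seq] = packet[HEADER.size:]
        # Every sequence number below the highest seen that has not
        # arrived yet is a candidate for a resend request.
        missing = [s for s in range(max(self.received) + 1)
                   if s not in self.received]
        return seq, missing

    def payload(self):
        """Reassembled data; only complete once accept() reports no gaps."""
        return b"".join(self.received[s] for s in sorted(self.received))
```

The `missing` list is what the receiver would send back over the socket as its repeat request.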
Be warned that SOMETIMES (quite rarely) a website might detect your scraper and artificially return an Error Code to confuse you.","Q_Score":0,"Tags":"python,webbot","A_Id":69336351,"CreationDate":"2021-09-26T14:45:00.000","Title":"Problem with detecting if link is invalid","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have three docker containers. First container is a Wordpress container which starts a wordpress instance. Second container is a selenium server container created from selenium\/standalone-chrome image which is supposed to do some initial setup of the wordpress via UI interactions. I have a python script that has all the commands to send to the selenium server. I am running this script in a python container as the third container. All the containers are spawned using docker-compose and are in the same network, so that communication can happen.\nOnce the python container is finished running the script it exits, however the selenium server and the wordpress container keep running. Once I am done with the script, I want to stop the selenium server container as well but keep the wordpress container running.\nI had a thought to run a script inside the python container as entrypoint which first executes the script and then issues a command to stop the other container but for that I guess the python container should also have docker available inside it. So, I think this will not work. 
Is there a simple way to achieve this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":94,"Q_Id":69357877,"Users Score":0,"Answer":"The command\ndocker ps --filter=name='my-container'\nwill show you if the container of interest is still there.\nFor example, docker ps shows many containers, but you can filter:\ndocker ps --filter=name='cadvisor'\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n984f35929991 google\/cadvisor \"\/usr\/bin\/cadvisor -\u2026\" 3 years ago Up 2 hours 0.0.0.0:1234->8080\/tcp cadvisor\nA script can therefore test the presence of both containers, or only one, and do a\ndocker stop xxx\nwhen needed.","Q_Score":0,"Tags":"python,docker,selenium,docker-compose","A_Id":69358516,"CreationDate":"2021-09-28T07:59:00.000","Title":"Is it possible to stop a docker container after other docker container inside the same network exits?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to do a button click or copy-paste with \"requests\" like selenium,\nbut as far as I know that can't be done with requests.\nIs there no way?\nSelenium is too slow...","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":46,"Q_Id":69370289,"Users Score":0,"Answer":"requests is read-only. It does not construct the DOM. 
If the button click triggers a new request, then you can formulate your own GET or POST request that simulates it.","Q_Score":0,"Tags":"python","A_Id":69370325,"CreationDate":"2021-09-29T04:08:00.000","Title":"button click or copy paste with Python's requests","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to figure out how to send an entire address balance in a post EIP-1559 transaction (essentially emptying the wallet). Before the London fork, I could get the actual value as Total balance - (gasPrice * gas), but now it's impossible to know the exact remaining balance after the transaction fees because the base fee is not known beforehand.\nIs there an algorithm that would get me as close to the actual balance without going over? My end goal is to minimize the remaining Ether balance, which is essentially going to be wasted. Any suggestions would be highly appreciated!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":427,"Q_Id":69371882,"Users Score":2,"Answer":"This can be done by setting the 'Max Fee' and the 'Max Priority Fee' to the same value. This will then use a deterministic amount of gas. Just be sure to set it high enough - comfortably well over and above the estimated 'Base Fee' to ensure it does not get stuck.","Q_Score":2,"Tags":"python,ethereum,web3,web3py","A_Id":69986683,"CreationDate":"2021-09-29T07:10:00.000","Title":"Sending entire Ethereum address balance in post EIP-1559 world","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to access data from Outlook and can download files from exchangelib with a password. 
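Formulating your own request, as the answer above suggests, usually starts by copying the URL and form fields the button submits from the browser's network tab. A standard-library sketch of building (not sending) such a POST; the URL and field names are placeholders.

```python
from urllib.parse import urlencode
from urllib.request import Request


def build_button_request(url, form_fields):
    """Reproduce the POST a button click would fire, ready for urlopen()."""
    body = urlencode(form_fields).encode()
    return Request(
        url,
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )


req = build_button_request("https://example.com/submit", {"key": "value"})
```

Passing `req` to urllib.request.urlopen (or making the equivalent call with requests.post) performs the "click" without a browser, which is why this is so much faster than Selenium.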
But I'd like access without a password. Do we've any alternate for this stuff?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":69,"Q_Id":69386156,"Users Score":0,"Answer":"You cannot use exchangelib to connect to the Exchange server without credentials of some sort, but exchangelib supports a variety of auth methods, and not all credentials contain a password. OAuth uses tokens, Kerberos and SSPI use a security context already available in your Windows session, certificate-based auth uses an on-disk file AFAIK, etc.","Q_Score":0,"Tags":"python-3.x,outlook,exchangelib","A_Id":69389331,"CreationDate":"2021-09-30T04:28:00.000","Title":"Is possible to retrieve data and download files from outlook without password using ExChangeLib (EWS)?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to access data from Outlook and can download files from exchangelib with a password. But I'd like access without a password. Do we've any alternate for this stuff?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":69,"Q_Id":69386156,"Users Score":0,"Answer":"An alternative to \"this stuff\"? Meaning an alternative to security? 
No.\nThe best you can do without credentials is to use Outlook Object Model on a machine where Outlook is already installed and configured to access the folders and messages from a mailbox in the configured local profile.","Q_Score":0,"Tags":"python-3.x,outlook,exchangelib","A_Id":69386500,"CreationDate":"2021-09-30T04:28:00.000","Title":"Is possible to retrieve data and download files from outlook without password using ExChangeLib (EWS)?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Workflow:\n\nWebsite:\n\ndoesn't have an API\nrequires login\nclicking on a button to download a file\n\nIs a Javascript button\n\n\n\n\nsave file to a download location\n\nQuestion:\n\nIs there a way to do this through python?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":201,"Q_Id":69389235,"Users Score":1,"Answer":"I suggest you to use Selenium to mimic the browser environment.\n\nTry to login by giving your credentials using selenium.\nfind the button tag and use the html-tag-id to click on it.\nTry to find the download location of the file \/ try to download it directly using the button","Q_Score":0,"Tags":"python,browser","A_Id":69389288,"CreationDate":"2021-09-30T08:55:00.000","Title":"python - how can I use web browser click and download a file","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a lambda function where, after computation is finished, some calls are made to store metadata on S3 and DynamoDB.\nThe S3 upload step is the biggest bottleneck in the function, so I'm wondering if there is a way to \"fire-and-forget\" these calls so I don't have do wait for 
them before the function returns.\nCurrently I'm running all the upload calls in parallel using asyncio, but the boto3\/S3 put_object call is still a big bottleneck.\nI tried using asyncio.create_task to run coroutines without waiting for them to finish, but as expected, I get a bunch of Task was destroyed but it is pending! errors and the uploads don't actually go through.\nIf there was a way to do this, we could save a lot on billing since, as I said, S3 is the biggest bottleneck. Is this possible or do I have to deal with the S3 upload times?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":115,"Q_Id":69399774,"Users Score":1,"Answer":"If there was a way to do this,\n\nSadly there is not, unless you are going to use another Lambda function to do the upload for you. This way your main function would delegate the time-consuming file processing and upload to a second function in an asynchronous way. Your main function can then return immediately to the caller, and the second function does the heavy work in the background.\nEither way, you will have to pay for the first or second function's execution time.","Q_Score":1,"Tags":"python,amazon-s3,aws-lambda,boto3,python-asyncio","A_Id":69399858,"CreationDate":"2021-09-30T23:27:00.000","Title":"Fire-and-forget upload to S3 from a Lambda function","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I know that I can extract elements using CSS selectors with libs such as bs4, but I have a problem where I don't know the names of the CSS classes used to style the elements I need to extract; I only know that all these elements have a common rule applied to them (\"position:fixed;\" in my case). 
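The hand-off the accepted answer describes could look like the sketch below. The Lambda client is injected rather than created with boto3.client here, and the second function's name and the payload shape are assumptions.

```python
import json


def fire_and_forget_upload(lambda_client, function_name, bucket, key, body):
    """Queue the slow S3 upload on a second Lambda and return immediately.

    InvocationType="Event" makes invoke() return as soon as the event is
    accepted (HTTP 202) instead of waiting for the function to finish.
    """
    return lambda_client.invoke(
        FunctionName=function_name,
        InvocationType="Event",
        Payload=json.dumps({"bucket": bucket, "key": key, "body": body}),
    )
```

As the answer points out, you still pay for the second function's execution, but the first function stops billing as soon as it returns.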
Is there any convenient way(some library) that I can use to do this?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":47,"Q_Id":69423235,"Users Score":0,"Answer":"Selenium is a very good python library for automation, but it allows for web scraping and selecting multiple using their CSS selectors and more. This should allow you to store all the elements with the\n\n(\"position:fixed;\" in my case)\n\nUsing selenium you would be able to extract the data, then you could manipulate the data however you would want. There are many ways to parse the data itself, so please be more specific on that part.\nIf in doubt send the website link as a comment and ill check the CSS selectors for you.\nHope this has solved the issue.","Q_Score":0,"Tags":"python,parsing","A_Id":69423303,"CreationDate":"2021-10-03T08:27:00.000","Title":"Is there any convenient way to parse html elements by ccs styles applied to them in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I wanted to test a registration form with a lot of text boxes. 
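Besides Selenium, if the rule you know appears as an inline style attribute, the standard library's html.parser can already filter elements by it; rules applied from an external stylesheet, however, need a rendering engine to resolve, which is where Selenium comes in. A sketch:

```python
from html.parser import HTMLParser


class StyleFilter(HTMLParser):
    """Collect tags whose inline style attribute contains a given rule."""

    def __init__(self, rule):
        super().__init__()
        self.rule = rule.replace(" ", "")  # normalize whitespace in the rule
        self.matches = []

    def handle_starttag(self, tag, attrs):
        style = dict(attrs).get("style") or ""
        if self.rule in style.replace(" ", ""):
            self.matches.append((tag, dict(attrs)))


def find_by_style(html, rule="position:fixed;"):
    parser = StyleFilter(rule)
    parser.feed(html)
    return parser.matches
```

Each match carries the full attribute dict, so class names or ids discovered this way can feed a second, more targeted pass with bs4.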
Instead of manually using the .sendkeys() to each text box, is there a way to automatically input texts into each and every textbox in the page?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":56,"Q_Id":69424101,"Users Score":0,"Answer":"You may use page factory to store the locators and use from there every time or you can try the Data\/Text driven framework also.","Q_Score":0,"Tags":"python,selenium,automated-tests","A_Id":69424337,"CreationDate":"2021-10-03T10:47:00.000","Title":"Is there a way to automatically input text in the different text box using Python Selenium?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to launch multiple selenium sessions and only one of them make visible. An user can interact with this webdriver window, and I want to retranslate all his actions on other sessions.\nHow can I do that on python?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":45,"Q_Id":69455662,"Users Score":0,"Answer":"No.\nSelenium can control only those sessions, which have been initialized by its own.\nYou cannot retranslate test actions for more that 1 session.\nThe only similar thing you can do is to create few (as much as you need) tests, which will contain the same copy-pasted steps inside itself and run each of them in a separate window.","Q_Score":0,"Tags":"python,selenium,multiprocessing","A_Id":69456467,"CreationDate":"2021-10-05T18:39:00.000","Title":"Can I copy actions from selenium session to other sessions?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for a method to estimate of a server member's 
original join date.\nThe problem with just getting member.joined_at is that if a member leaves and rejoins, it resets this date. So the best alternative seems to be getting the date of the oldest message sent by a member.\nHowever, member.history(limit=1, oldest_first=True) seems to just return the oldest message in the member's DM.\nIs there any way in the API to find a member's oldest message in a server? This seems to be something only available to users via the search bar.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":126,"Q_Id":69469041,"Users Score":2,"Answer":"Unfortunately Discord is channel-based, which means you have to search for messages in a certain channel such as a DMChannel or a TextChannel.\nIn this case you will need to loop over every visible TextChannel in your server and search from the beginning of each channel, which is costly in resources.\nThe Discord API does not expose the client's search functionality, so that loop will be your only way. I believe bigger bots simply keep a database that stores the first time they see someone join the server, to avoid this problem altogether.","Q_Score":0,"Tags":"python,discord.py","A_Id":69469084,"CreationDate":"2021-10-06T16:04:00.000","Title":"discord.py find first message of a member in a guild","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to make a scripted download\/install of the latest version of python.\nFor Golang I can use the following URL to determine the newest version of Golang:\nhttps:\/\/golang.org\/VERSION?m=text and then download it.\nIs there a similar URL or some other way to get the latest version of Python3?\nI do not want to hardcode the version number... 
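The database the discord.py answer above alludes to — remember the first time the bot ever sees a member join — takes only a few lines with sqlite3. A sketch; hooking record_join up to an on_member_join listener is left out.

```python
import sqlite3


def open_store(path=":memory:"):
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS first_seen (
               member_id INTEGER PRIMARY KEY,
               joined_at TEXT NOT NULL)"""
    )
    return conn


def record_join(conn, member_id, joined_at):
    # INSERT OR IGNORE keeps the first recorded date even if the member
    # leaves and rejoins later.
    conn.execute("INSERT OR IGNORE INTO first_seen VALUES (?, ?)",
                 (member_id, joined_at))
    conn.commit()


def original_join_date(conn, member_id):
    row = conn.execute("SELECT joined_at FROM first_seen WHERE member_id = ?",
                       (member_id,)).fetchone()
    return row[0] if row else None
```

Because the primary key is the member id, a rejoin never overwrites the original date, which is exactly the property member.joined_at lacks.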
my script should simply install\/update the Python3 installation in the target directory.","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":23,"Q_Id":69483895,"Users Score":-1,"Answer":"Hi you could use https:\/\/www.python.org\/downloads\/release\/python-x\nand then enter the python Version you want without the dots.\nFor Python 3.8.10 the link would look like this:\nhttps:\/\/www.python.org\/downloads\/release\/python-3810\/","Q_Score":0,"Tags":"python-3.x","A_Id":69483975,"CreationDate":"2021-10-07T15:31:00.000","Title":"Scripted Python Install for Latest Version","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Telethon client is not connecting using string session after some use.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":51,"Q_Id":69497064,"Users Score":0,"Answer":"You might have terminated the session from settings>devices.\nAnd provide more details so we can understand better what's the problem","Q_Score":0,"Tags":"python-3.x,telegram,telethon","A_Id":69616302,"CreationDate":"2021-10-08T13:57:00.000","Title":"Telethon client is not connecting after using some times","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What is the most effective way to determine all edges in a graph that can be removed so that there will not arise new bridges in a given graph.\nCurrently I always copy the graph, remove an edge and check whether the bridges and number of bridges change. I repeat this for every edge and then return the list of edges. 
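The URL scheme the answer describes — the version with the dots removed appended to the release path — fits in a one-line helper. Note it still requires knowing the version up front; as far as I know python.org offers no plain-text latest-version endpoint like the Golang one quoted in the question.

```python
def release_url(version):
    """Build the python.org release-page URL for a given version string."""
    slug = version.replace(".", "")
    return f"https://www.python.org/downloads/release/python-{slug}/"
```

A script could then fetch this page and follow the installer link for the target platform.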
Choose one in the list and remove it.\nWhen I want to remove several edges I repeat the whole process several times.\nWhat would be a better\/more efficient approach. (I always need the whole candidate list.)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":206,"Q_Id":69516360,"Users Score":0,"Answer":"I think you are looking for the edges that when removed give rise to all k-edge components with k=2.\n\nSearch for all k-edge components in G using nx.algorithms.connectivity.edge_kcomponents.k_edge_components (with k=2). This will yield all vertex sets resulting from cuts with a cut size of 2.\n\nInduce a subgraph for each vertex set, and combine subgraphs using nx.compose into a graph H.\n\nDetermine the difference between G and H with nx.difference(G, H).","Q_Score":0,"Tags":"python,networkx","A_Id":69532157,"CreationDate":"2021-10-10T15:06:00.000","Title":"NetworkX Graph removing edges whose removal will not add bridges to the graph","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to create a browser which is chromium based and do not require any kind of dependencies or installation to launch, and do not allow browser specific functionalities like reloading, open in new tab, back and next button, developer tools and also want to hide the address bar.\nWhile looking out for options we have already tried JavaFx web view with JxBrowser, it serves the purpose, but firstly it is dependent on Java and also the size of JRE files is about 170 MB which is large for our requirements, so was looking out for option for which size should be less than 50 MB.\nSo, please let know what would be best approach for creating this browser.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":70,"Q_Id":69525990,"Users 
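The question's brute-force loop (remove an edge, recompute bridges, compare) can at least reuse a linear-time bridge finder — Tarjan's disc/low method — so each candidate check is O(V+E). A self-contained sketch over an adjacency-dict graph, independent of networkx; the recursive DFS assumes the graph fits in Python's recursion limit, and the k-edge-components route in the answer scales better for large graphs.

```python
def find_bridges(adj):
    """Tarjan-style DFS: an edge (u, v) is a bridge iff low[v] > disc[u]."""
    disc, low, bridges = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:
                low[u] = min(low[u], disc[v])
            else:
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:
                    bridges.add(frozenset((u, v)))

    for node in adj:
        if node not in disc:
            dfs(node, None)
    return bridges


def safely_removable_edges(adj):
    """Edges whose removal introduces no bridge that was not already there."""
    before = find_bridges(adj)
    candidates = {frozenset((u, v)) for u in adj for v in adj[u]}
    result = []
    for edge in candidates:
        u, v = tuple(edge)
        trimmed = {n: list(nbrs) for n, nbrs in adj.items()}
        trimmed[u].remove(v)
        trimmed[v].remove(u)
        if not (find_bridges(trimmed) - before):
            result.append((u, v))
    return result
```

The whole candidate list asked for in the question is exactly the return value of `safely_removable_edges`.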
Score":0,"Answer":"With jlink and jpackage you can reduce the size of the Java solution.","Q_Score":0,"Tags":"python,java,visual-c++,chromium,chromium-embedded","A_Id":69528632,"CreationDate":"2021-10-11T12:10:00.000","Title":"How to build our custom chromium based lightweight browser with does not require installation to launch","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"After I added my bot written discord-interactions and discord.py to the server, the available slash-commands stopped showing, although they were previously available on the same server. Also commands are available in direct messages. In server permissions are allowed to use Slash-commands. How can I fix this problem?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3175,"Q_Id":69554579,"Users Score":0,"Answer":"Slash commands, if not explicitly registered to a group of servers will take roughly an hour to register globally. I suggest waiting and coming back if it still doesn't work.","Q_Score":0,"Tags":"python,discord.py","A_Id":69555775,"CreationDate":"2021-10-13T11:20:00.000","Title":"Slash-commands are not available on the discord server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am making a series of API calls using Python similar to the following:\nresponse = requests.post('https:\/\/httpbin.org\/post', data = {'key':'value'})\nWhen my API call is successful, I am able to view the cookies using response.cookies giving me the cookies in the following type: requests.cookies.RequestsCookieJar.\nI then want to store these cookies in MacOS Keychain so that I can use them later. 
I am doing this with keyring similar to the following:\nkeyring.set_password(\"test\", \"test\", cookies)\nHowever, Keychain requires the storage type to be text (UTF-8 encoding). How can I serialize the cookies so they can be stored? And how can I repackage the cookies for a future request after retrieving them as text?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":163,"Q_Id":69563346,"Users Score":0,"Answer":"To store the cookies, it might be as simple as using cookies = json.dumps(dict(cookies)) to convert the RequestsCookieJar to a dictionary and then a string (readable as JSON). That will likely satisfy the keyring storage type requirement.\nLikewise, to convert this JSON string back to a dictionary for a future request, you can load the cookies like this: cookies = json.loads(cookies)","Q_Score":0,"Tags":"python,cookies,serialization,python-keyring","A_Id":69564911,"CreationDate":"2021-10-13T23:11:00.000","Title":"How can I serialize a Python request's cookies for UTF-8 storage?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Python project and I wonder if I should use:\nhttps:\/\/canary.discord.com\/api\/v9\/\nor\nhttps:\/\/discord.com\/api\/v9\/\nDoes it make a difference? I don't know which one to use.\nI know that both work, but I don't know the difference.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":40,"Q_Id":69567283,"Users Score":0,"Answer":"Canary is Discord's alpha testing program. ... The Canary Build's purpose is to allow users to help Discord test new features. Bugs on the Canary build are supposed to be reported on the Discord Testers server. Unlike PTB or Stable, Canary's icon is orange instead of purple.\nSo if you are not interested in the testing program. 
You should use https:\/\/discord.com\/api\/v9\/","Q_Score":0,"Tags":"python,api,discord","A_Id":69567376,"CreationDate":"2021-10-14T08:15:00.000","Title":"Discord API Links which one?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am confused between REST API and Websocket, although WS is faster than REST API in terms of data fetching. My application is to buy a coin (any item) on any exchange (Binance, Kucoin, Coinbase) as fast as I can. In the documentation, every exchange provides endpoints for placing an order (buying the coin) only for the REST API. They only provide a coin price stream for Websockets.\nIs it possible to buy or post something using Websockets, or do we have to use the REST API for that purpose?\nThanks in advance","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":192,"Q_Id":69570811,"Users Score":1,"Answer":"I don't think you need a websocket to place an order; even if you are developing a trading bot, placing an order will not happen often.\nSo I suggest you use the REST API.\nAlso, I think the only really useful thing to use a websocket for is to get K lines.","Q_Score":0,"Tags":"python,rest,websocket,binance,binance-api-client","A_Id":69604388,"CreationDate":"2021-10-14T12:34:00.000","Title":"REST API vs Websocket for buying an asset","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have code using selenium that successfully exports an xls file from my desired website. Once the xls file is downloaded I am unsure how to use selenium to open the file. 
Once this data gets opened and downloaded I would like it to flow automatically into Power BI.\nAny thoughts on how to achieve this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":69640850,"Users Score":0,"Answer":"You don't need Selenium. It's plain vanilla to directly access a web-based Excel file with Power BI. This is what Power BI is made for.\nHowever, if you already have an Excel file locally, I'd suggest you upload it to OneDrive and import it into Power BI from there. And once your report is in the Online service you can directly refresh it via schedule or trigger.\nImporting the Excel file from disk is not the recommended way of doing things unless it's a one-off.","Q_Score":0,"Tags":"python,selenium,web-scraping,xpath,powerbi","A_Id":69641166,"CreationDate":"2021-10-20T05:41:00.000","Title":"Trying to Automate Opening of Excel File and then flowing that data into Power BI (Used Selenium to web scrape exported xls file)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I recently changed my OS to Windows 11, and since then I've not been able to run Selenium.\nWhen running, it displays this as the output: \"selenium.common.exceptions.WebDriverException: Message: unknown error: cannot connect to chrome at localhost:1111 from chrome not reachable\"\nI used \"chrome.exe --remote-debugging-port=1111\" to open Chrome using CMD but it displays \"'chrome.exe' is not recognized as an internal or external command, operable program or batch file.\" instead of asking for admin rights and then opening as usual.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":54,"Q_Id":69655459,"Users Score":0,"Answer":"I managed to fix Chrome's error. 
I added the Chrome application path to the PATH environment variable and it worked.\nAdding C:\\Program Files\\Google\\Chrome\\Application\\ fixed the CMD error.","Q_Score":0,"Tags":"python,selenium,google-chrome,automation,webdriver","A_Id":69663237,"CreationDate":"2021-10-21T03:19:00.000","Title":"Python Selenium Chrome not running after OS upgrade","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a problem with importing telebot in pyTelegramAPI.\nMy error is this:\nTraceback (most recent call last):\nFile \"\/home\/yaser\/Desktop\/pyhton codes\/Insta-tel-bot\/insta\/telegram.py\", line 2, in \nimport telebot\nModuleNotFoundError: No module named 'telebot'\nI have Python 3.9.7 and pip 21.\n\nI didn't install telebot.\nCan you help me, please?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":223,"Q_Id":69680189,"Users Score":0,"Answer":"What platform do you use for Python?\nIf you use PyCharm you can install modules in the packages section.","Q_Score":0,"Tags":"python","A_Id":71482639,"CreationDate":"2021-10-22T16:22:00.000","Title":"Can't import telebot from pyTelegramAPI","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm currently developing a Telegram bot using python\/Pytelegrambotapi.\nOn my local machine, if the bot is started and left for some time without any requests, it terminates and throws the error\nrequests.exceptions.ReadTimeout: HTTPSConnectionPool(host='api.telegram.org', port=443): Read timed out. 
(read timeout=25)\nCan someone help me handle this error?\nThanks in advance","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":248,"Q_Id":69688314,"Users Score":0,"Answer":"Sometimes network or Telegram issues happen.\nHandle exceptions or use bot.infinity_polling() instead of bot.polling()\nedit: Try this function with these parameters (you can adjust later as you'd like)\n\nbot.infinity_polling(timeout=10, long_polling_timeout = 5)","Q_Score":0,"Tags":"python,bots,telegram,telegram-bot,py-telegram-bot-api","A_Id":69688351,"CreationDate":"2021-10-23T12:48:00.000","Title":"Pytelegrambotapi Throws ReadTimeout error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to Python and Selenium and working on Windows.\nI am currently taking a course on Udemy about Selenium and Python in order to create an automated script, aka a web bot.\nSteps in the lecture:\n\nCreate a virtual environment and install pip in it\n\nActivating the virtual environment\n\nRunning Python.exe\n\nTypes 'from selenium import webdriver'\n\nTypes browser = webdriver.Chrome()\n\nChrome browser opens. But at the same time, it gives this error message:\nDevTools listening on ws:\/\/127.0.0.1:57721\/devtools\/browser\/714e788a-2c6a-452b-b89f-403520a5ab75\n\n\n\n\n\n[17820:23420:1024\/073800.031:ERROR:chrome_browser_main_extra_parts_metrics.cc(230)] crbug.com\/1216328: Checking Bluetooth availability started. 
Please report if there is no report that this ends.\n[17820:23420:1024\/073800.032:ERROR:chrome_browser_main_extra_parts_metrics.cc(233)] crbug.com\/1216328: Checking Bluetooth availability ended.\n[17820:20848:1024\/073800.033:ERROR:usb_descriptors.cc(160)] Device descriptor parsing error.\n[17820:20848:1024\/073800.034:ERROR:device_event_log_impl.cc(214)] [07:38:00.034] USB: usb_device_win.cc:93 Failed to read descriptors from \\?\\usb#vid_046d&pid_c332#198b38733838#{a5dcbf10-6530-11d2-901f-00c04fb951ed}.\n[17820:23420:1024\/073800.033:ERROR:chrome_browser_main_extra_parts_metrics.cc(236)] crbug.com\/1216328: Checking default browser status started. Please report if there is no report that this ends.\n[17820:20848:1024\/073800.038:ERROR:device_event_log_impl.cc(214)] [07:38:00.038] USB: usb_device_handle_win.cc:1048 Failed to read descriptor from node connection: A device attached to the system is not functioning. (0x1F)\n[17820:20848:1024\/073800.039:ERROR:device_event_log_impl.cc(214)] [07:38:00.038] USB: usb_device_handle_win.cc:1048 Failed to read descriptor from node connection: A device attached to the system is not functioning. 
(0x1F)\n[17820:23420:1024\/073800.324:ERROR:chrome_browser_main_extra_parts_metrics.cc(240)] crbug.com\/1216328: Checking default browser status ended.\n[12304:24324:1024\/073956.924:ERROR:gpu_init.cc(453)] Passthrough is not supported, GL is disabled, ANGLE is\n\n\n\n\nI am still able to browser.get('webpages') but it gives more error messages like\nTraceback (most recent call last):\nFile \"\", line 1, in \nFile \"C:\\Users\\Jonathan\\venvs\\automation\\lib\\site-packages\\selenium\\webdriver\\remote\\webdriver.py\", line 430, in get\nself.execute(Command.GET, {'url': url})\nFile \"C:\\Users\\Jonathan\\venvs\\automation\\lib\\site-packages\\selenium\\webdriver\\remote\\webdriver.py\", line 418, in execute\nself.error_handler.check_response(response)\nFile \"C:\\Users\\Jonathan\\venvs\\automation\\lib\\site-packages\\selenium\\webdriver\\remote\\errorhandler.py\", line 243, in check_response\nraise exception_class(message, screen, stacktrace)\nselenium.common.exceptions.WebDriverException: Message: unknown error: net::ERR_NAME_NOT_RESOLVED\n(Session info: chrome=95.0.4638.54)\nStacktrace:\nBacktrace:\nOrdinal0 [0x00E43AB3+2505395]\nOrdinal0 [0x00DDAE41+2076225]\nOrdinal0 [0x00CE2498+1057944]\nOrdinal0 [0x00CDF0A1+1044641]\nOrdinal0 [0x00CD52C2+1004226]\nOrdinal0 [0x00CD5EC2+1007298]\nOrdinal0 [0x00CD550A+1004810]\nOrdinal0 [0x00CD4BC8+1002440]\nOrdinal0 [0x00CD3D5D+998749]\nOrdinal0 [0x00CD4016+999446]\nOrdinal0 [0x00CE3A6A+1063530]\nOrdinal0 [0x00D356ED+1398509]\nOrdinal0 [0x00D259F3+1333747]\nOrdinal0 [0x00D35168+1397096]\nOrdinal0 [0x00D258BB+1333435]\nOrdinal0 [0x00D023E4+1188836]\nOrdinal0 [0x00D0323F+1192511]\nGetHandleVerifier [0x00FCCB36+1554566]\nGetHandleVerifier [0x01074A0C+2242396]\nGetHandleVerifier [0x00ED0E0B+523099]\nGetHandleVerifier [0x00ECFEB0+519168]\nOrdinal0 [0x00DE02FD+2097917]\nOrdinal0 [0x00DE4388+2114440]\nOrdinal0 [0x00DE44C2+2114754]\nOrdinal0 [0x00DEE041+2154561]\nBaseThreadInitThunk [0x76F3FA29+25]\nRtlGetAppContainerNamedObjectPath 
[0x77107A9E+286]\nRtlGetAppContainerNamedObjectPath [0x77107A6E+238]\n\nI would really appreciate it if someone could explain what I can do to resolve this error. Thanks!!","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2110,"Q_Id":69692994,"Users Score":0,"Answer":"These are not error messages.\nThe 'Traceback' is a listing of the function calls made in your code at a specific point.\nThey should not interfere with your code.\nAn error would be explicitly stated, e.g.:\nAttributeError\nImportError\nIndexError\nKeyError\nNameError\nSyntaxError\nTypeError\nValueError","Q_Score":0,"Tags":"python,selenium,selenium-webdriver","A_Id":69697549,"CreationDate":"2021-10-24T00:17:00.000","Title":"I need help regarding Selenium and chrome webdrivers. An error pops up in my CommandPrompt and i am not sure how to solve it","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to Python and Selenium and working on Windows.\nI am currently taking a course on Udemy about Selenium and Python in order to create an automated script, aka a web bot.\nSteps in the lecture:\n\nCreate a virtual environment and install pip in it\n\nActivating the virtual environment\n\nRunning Python.exe\n\nTypes 'from selenium import webdriver'\n\nTypes browser = webdriver.Chrome()\n\nChrome browser opens. But at the same time, it gives this error message:\nDevTools listening on ws:\/\/127.0.0.1:57721\/devtools\/browser\/714e788a-2c6a-452b-b89f-403520a5ab75\n\n\n\n\n\n[17820:23420:1024\/073800.031:ERROR:chrome_browser_main_extra_parts_metrics.cc(230)] crbug.com\/1216328: Checking Bluetooth availability started. 
Please report if there is no report that this ends.\n[17820:23420:1024\/073800.032:ERROR:chrome_browser_main_extra_parts_metrics.cc(233)] crbug.com\/1216328: Checking Bluetooth availability ended.\n[17820:20848:1024\/073800.033:ERROR:usb_descriptors.cc(160)] Device descriptor parsing error.\n[17820:20848:1024\/073800.034:ERROR:device_event_log_impl.cc(214)] [07:38:00.034] USB: usb_device_win.cc:93 Failed to read descriptors from \\?\\usb#vid_046d&pid_c332#198b38733838#{a5dcbf10-6530-11d2-901f-00c04fb951ed}.\n[17820:23420:1024\/073800.033:ERROR:chrome_browser_main_extra_parts_metrics.cc(236)] crbug.com\/1216328: Checking default browser status started. Please report if there is no report that this ends.\n[17820:20848:1024\/073800.038:ERROR:device_event_log_impl.cc(214)] [07:38:00.038] USB: usb_device_handle_win.cc:1048 Failed to read descriptor from node connection: A device attached to the system is not functioning. (0x1F)\n[17820:20848:1024\/073800.039:ERROR:device_event_log_impl.cc(214)] [07:38:00.038] USB: usb_device_handle_win.cc:1048 Failed to read descriptor from node connection: A device attached to the system is not functioning. 
(0x1F)\n[17820:23420:1024\/073800.324:ERROR:chrome_browser_main_extra_parts_metrics.cc(240)] crbug.com\/1216328: Checking default browser status ended.\n[12304:24324:1024\/073956.924:ERROR:gpu_init.cc(453)] Passthrough is not supported, GL is disabled, ANGLE is\n\n\n\n\nI am still able to browser.get('webpages') but it gives more error messages like\nTraceback (most recent call last):\nFile \"\", line 1, in \nFile \"C:\\Users\\Jonathan\\venvs\\automation\\lib\\site-packages\\selenium\\webdriver\\remote\\webdriver.py\", line 430, in get\nself.execute(Command.GET, {'url': url})\nFile \"C:\\Users\\Jonathan\\venvs\\automation\\lib\\site-packages\\selenium\\webdriver\\remote\\webdriver.py\", line 418, in execute\nself.error_handler.check_response(response)\nFile \"C:\\Users\\Jonathan\\venvs\\automation\\lib\\site-packages\\selenium\\webdriver\\remote\\errorhandler.py\", line 243, in check_response\nraise exception_class(message, screen, stacktrace)\nselenium.common.exceptions.WebDriverException: Message: unknown error: net::ERR_NAME_NOT_RESOLVED\n(Session info: chrome=95.0.4638.54)\nStacktrace:\nBacktrace:\nOrdinal0 [0x00E43AB3+2505395]\nOrdinal0 [0x00DDAE41+2076225]\nOrdinal0 [0x00CE2498+1057944]\nOrdinal0 [0x00CDF0A1+1044641]\nOrdinal0 [0x00CD52C2+1004226]\nOrdinal0 [0x00CD5EC2+1007298]\nOrdinal0 [0x00CD550A+1004810]\nOrdinal0 [0x00CD4BC8+1002440]\nOrdinal0 [0x00CD3D5D+998749]\nOrdinal0 [0x00CD4016+999446]\nOrdinal0 [0x00CE3A6A+1063530]\nOrdinal0 [0x00D356ED+1398509]\nOrdinal0 [0x00D259F3+1333747]\nOrdinal0 [0x00D35168+1397096]\nOrdinal0 [0x00D258BB+1333435]\nOrdinal0 [0x00D023E4+1188836]\nOrdinal0 [0x00D0323F+1192511]\nGetHandleVerifier [0x00FCCB36+1554566]\nGetHandleVerifier [0x01074A0C+2242396]\nGetHandleVerifier [0x00ED0E0B+523099]\nGetHandleVerifier [0x00ECFEB0+519168]\nOrdinal0 [0x00DE02FD+2097917]\nOrdinal0 [0x00DE4388+2114440]\nOrdinal0 [0x00DE44C2+2114754]\nOrdinal0 [0x00DEE041+2154561]\nBaseThreadInitThunk [0x76F3FA29+25]\nRtlGetAppContainerNamedObjectPath 
[0x77107A9E+286]\nRtlGetAppContainerNamedObjectPath [0x77107A6E+238]\n\nI would really appreciate it if someone could explain what I can do to resolve this error. Thanks!!","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":2110,"Q_Id":69692994,"Users Score":0,"Answer":"Those are not error messages. I get them all the time and nothing goes wrong.","Q_Score":0,"Tags":"python,selenium,selenium-webdriver","A_Id":69695944,"CreationDate":"2021-10-24T00:17:00.000","Title":"I need help regarding Selenium and chrome webdrivers. An error pops up in my CommandPrompt and i am not sure how to solve it","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using the Python requests module to hit a REST API. I have to use SSL for security measures.\nI see that I can set\nrequests.get(url,verify=\/path\/ca\/bundle\/) \nHowever I am confused as to what needs to be passed as CA_BUNDLE.\nI get the server certificate using\ncert = ssl.get_server_certificate((server,port))\nCan someone let me know how I should use this certificate in my request? Should I convert the cert to an X509\/.pem\/.der\/.crt file?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":124,"Q_Id":69702941,"Users Score":0,"Answer":"Solved it. 
Apparently I needed to get the entire certificate chain and create a CA bundle out of it.","Q_Score":0,"Tags":"python,ssl,python-requests,pyopenssl","A_Id":69791202,"CreationDate":"2021-10-25T04:40:00.000","Title":"Python Request: SSL Verify","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to upload some files to SharePoint via the Office365 REST Python client.\nIn the documentation on GitHub, I found two examples:\n\none for larger files where this is executed:\n\n uploaded_file = target_folder.files.create_upload_session(local_path, size_chunk, print_upload_progress).execute_query()\n\none for small files :\ntarget_file = target_folder.upload_file(name, file_content).execute_query() .\n\nIn my case, I want to be able to upload files that are small and also files that are very large.\nFor testing, I wanted to see if the method for larger files works with smaller files too.\nWith a small file, with size_chunk set to 1 MB, the upload completed, but the uploaded file was empty (0 b), so I lost my content while uploading.\nI wanted to know if someone knows how we can do something more generic for whatever size of file. Also I don't understand what the size chunk is for in the larger-files case. 
Do you know how one should choose it?\nThank you so much!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":225,"Q_Id":69720779,"Users Score":0,"Answer":"This problem is solved by installing office365-rest-python-client instead of office365-rest-client.","Q_Score":0,"Tags":"python,rest,office365api,sharepoint-rest-api,office365-rest-client","A_Id":69992999,"CreationDate":"2021-10-26T09:41:00.000","Title":"Uploading files of different sizes on sharepoint office365 REST Python Client","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is Pyautogui detectable by websites? (I know Selenium can be detectable) I'm going to use it to click and move the mouse. I would like to know if some kind of script can detect clicks and movements made by Pyautogui, I don't know much about web programming and I'm learning to do automations.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":493,"Q_Id":69746707,"Users Score":2,"Answer":"No, as far as I know, there is no way for a website to detect the device or software (such as PyAutoGUI) which is making inputs, however, a site could detect robotic mouse movement etc., and you will not be able to pass CAPTCHAs.","Q_Score":1,"Tags":"python,pyautogui","A_Id":69746784,"CreationDate":"2021-10-27T23:47:00.000","Title":"Is Pyautogui detectable by web scripts?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I was wondering if there is any way to make decisions based on webhook responses in Mautic. To elaborate, I post a request via webhook and the corresponding API responds with an error (e.g. 411). 
I want to create a campaign that has a block depending on the response of that webhook: if it receives 200, decision 1 is made, and if it receives 411, another decision is made.\nHow can I implement this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":40,"Q_Id":69787645,"Users Score":0,"Answer":"I don't think such a feature exists by default, but there can be alternative ways to do this.\n\nSending an API call from the system in question (like, on success, make a call to set some tag or field); the same goes for errors. However, this is practical only as long as you have control over that system.\nCreate a custom campaign Decision node which will listen to the responses of the webhook (in this case you will need to send the webhook using the campaign only). Again, the blocker here is that you will need to know how to code a custom campaign decision or will need to look for a developer who can do it.","Q_Score":0,"Tags":"python,mautic","A_Id":70380681,"CreationDate":"2021-10-31T14:42:00.000","Title":"make a decision based on response of webhook in mautic","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have an open web tab in my app and now I want to use Selenium to do several tasks. How can I define its driver? I know how to define a driver and open a webpage, but I don't know how to do it for an already open page. 
Can you help me?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":21,"Q_Id":69793608,"Users Score":0,"Answer":"Finally,\nyou can get its HTML code with\n\ndriver.page_source","Q_Score":0,"Tags":"python,android,html,selenium,driver","A_Id":69952442,"CreationDate":"2021-11-01T07:08:00.000","Title":"how define driver in python for an open net page","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"After a new update of python-engineio I am receiving \"Unexpected error \"packet is too large\", closing connection socketio flask\" in the python-engineio logs when I try to send a large amount of data.\nIt was working fine a few days ago.\nFor the server I am using: flask_socketio\nFor the client I am using: python_socketio[client]\nAny help is appreciated.\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":80,"Q_Id":69804803,"Users Score":0,"Answer":"This is a security related change, to prevent DoS attacks. Set the max_http_buffer_size argument in your SocketIO constructor to your desired maximum size. The default is 1MB, same as the reference JavaScript implementation.","Q_Score":0,"Tags":"python,flask,websocket,socket.io,flask-socketio","A_Id":69813919,"CreationDate":"2021-11-02T02:37:00.000","Title":"Unexpected error \"packet is too large\", closing connection socketio flask","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to retrieve the date and time of a message, and the date is correct, but the time is about 6 hours off. 
How can I fix this?\nIt is 5pm currently, but this line of code returns 23:00\n\nmsgDate = update.message.date\n\nEdit: it is returning the minutes properly, so it's close; I'm just not sure what to do about the hours.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":176,"Q_Id":69818062,"Users Score":1,"Answer":"It seems as though the function built into python-telegram-bot uses the UTC timezone, so yes, it does return the proper date and time, just converted into UTC.","Q_Score":0,"Tags":"python,telegram,python-telegram-bot","A_Id":69820400,"CreationDate":"2021-11-02T23:32:00.000","Title":"Python-telegram-bot api 'update.message.date' is returning the wrong time. how do i fix this?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a spider that crawls for contact details from given url(s).\nIt works fine, but some of the data it's collecting comes from CSS rules on the pages; for example some <\/svg> attributes may appear as valid numbers. 
Or some image mappings in <\/script> like 404_static_desk_1920-w375@1x.jpg may appear as valid email addresses.\nHow can I make Scrapy ignore certain tags and totally ignore HTML attributes?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":42,"Q_Id":69836420,"Users Score":0,"Answer":"If you're using a CSS selector you can add :not(svg):not(script) at the end of your selector.\nFor example, if you want to select all elements except svg and script: *:not(svg):not(script)\nIf you edit your question and add an example of how you extract the data I could help you further.","Q_Score":0,"Tags":"python,scrapy,web-crawler","A_Id":69838712,"CreationDate":"2021-11-04T08:41:00.000","Title":"How to make Python Scrapy skip css rules and html attributes","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have Python code which requests a REST API. The API has more than 5000 pages, so I tried to request it, but I always get an error at around the 2000th request.\nThe error is: \"df = pd.json_normalize(json_data[\"items\"])\nKeyError: 'items'\"\nHow can I solve this problem?\nP.S. Locally, the code works fine.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":41,"Q_Id":69846637,"Users Score":0,"Answer":"I found a solution for this problem. Like @elad said, this is not an Airflow error. Due to the Airflow web service and scheduler, the system was working a little slower, so my token expired while the Airflow tasks were running. I reorganized my code and generated the token in a loop with specific conditions such as try-except. 
Thanks for everything !","Q_Score":0,"Tags":"python,api,airflow","A_Id":69879207,"CreationDate":"2021-11-04T22:37:00.000","Title":"Why airflow is returning error while requesting Rest API?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"23:56 ~ $ python3.8 connect.py Traceback (most recent call last):\nFile \"connect.py\", line 7, in \nfrom binance.client import Client File \"\/home\/utkandD\/.local\/lib\/python3.8\/site-packages\/binance\/init.py\",\nline 10, in \nfrom binance.depthcache import DepthCacheManager, OptionsDepthCacheManager, ThreadedDepthCacheManager # noqa File\n\"\/home\/utkandD\/.local\/lib\/python3.8\/site-packages\/binance\/depthcache.py\",\nline 7, in \nfrom .streams import BinanceSocketManager File \"\/home\/utkandD\/.local\/lib\/python3.8\/site-packages\/binance\/streams.py\",\nline 13, in \nfrom websockets.exceptions import ConnectionClosedError ModuleNotFoundError: No module named 'websockets.exceptions';\n'websockets' is not a package","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":137,"Q_Id":69870762,"Users Score":1,"Answer":"That looks like you have something called \"websockets\" on your Python path (probably websockets.py) before the websockets module, so your import fails because you are trying to import from the wrong thing.","Q_Score":0,"Tags":"pythonanywhere","A_Id":69872029,"CreationDate":"2021-11-07T08:13:00.000","Title":"\"ModuleNotFoundError: No module named 'websockets.exceptions'; 'websockets' is not a package\" pythonanywhere i get such error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web 
Development":0},{"Question":"api.telegram.org\/bot\/getWebhookInfo --> {\"ok\":true,\"result\":\"url\":\"\",\"has_custom_certificate\":false,\"pending_update_count\":0}}\napi.telegram.org\/bot\/getUpdates --> {\"ok\":false,\"error_code\":409,\"description\":\"Conflict: terminated by other getUpdates request; make sure that only one bot instance is running\"}\nI get these answers from API. How to fix them?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":718,"Q_Id":69880043,"Users Score":0,"Answer":"At each given time only one process may call getUpdates. You apparently have another process doing that.","Q_Score":0,"Tags":"python,telegram-bot","A_Id":69880193,"CreationDate":"2021-11-08T07:43:00.000","Title":"Error 409. Conflict: terminated by other getUpdates request; make sure that only one bot instance is running","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using the libraries http.server and http.client in order to build a server\/client structure. The server is always on and the client makes requests.\nWhen the server is called, it calls a scraper that returns a dictionary and currently I send it like this:\nself.wfile.write(str(dictionary).encode(\"utf-8\"))\nHowever the client receives a String that I have to parse. Is it possible that the server could send a dictionary? 
(Or even better, a json)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":126,"Q_Id":69883139,"Users Score":2,"Answer":"You should try to use json.dumps({\"a\": 1, \"b\": 2}) and send your JSON object instead of encoding it as utf-8.","Q_Score":1,"Tags":"python,json,dictionary,server,client","A_Id":69883181,"CreationDate":"2021-11-08T12:02:00.000","Title":"How to send a dictionary with python http server\/client","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have to parse and validate HIPAA 834 EDI file and generate 997 response with the success or error message\nSample 834 EDI File:\nISA00 00 30261401960 30261105741 2105250609*^005011891712750T*:~\nGSBE161401960Facets202105250609171275X005010X220A1~\nST8340001*005010X220A1~\nREF3800417558~\nQTYDT958~\nQTY**1381~\nQTY**1381~\nN1INHealthPLAN FI161105741~\nINSY18030XNA**ACNN~\nF0F951747732~\nREF1L00417558~\nREF170001~\nREFDX0001~\nDTP336D8*20040202~\nPERIP**EMmvastola@wscschools.orgHP7169543565~\nN3*130 Rosewood Dr.~\nN4West SenecaNY*14224~\nDMGD819810817MM~\nHD024**HLTCPO1Y000*FAM~\nINSY18030XNA**ACNN~\nDTP303D8*20200701~\nINSN01030XNA**NN~\nREF0F951747732~\nREF1L00417558~\nREF170001~\nREFDX0001~\nNM1IL1TestmemberJessica***34962703984~\nN3*130 Rosewood Dr.~\nN4West SenecaNY*14224~\nDMGD819820720*F~\nHD024**HLTCPO1Y000*FAM~\nDTP303D8*20200701~\nDTP348D8*20200701~\nINSN19030XNA**FN~\nREF0F951747732~\nREF1L00417558~\nREF170001~ REFDX0001~\nNM1IL1testySofia***34992599285~\nN3*130 Rosewood Dr.~\nN4West SenecaNY*14224~\nDMGD820120524*F~\nHD030**HLTCPO1Y000*FAM~\nDTP303D8*20200701~\nDTP348D8*20200701~\nSE470001~\nGE1171275~\nIEA1189171275~\nplease help me out to resolve the issue, I'm not understanding how to use pyx12 parser library which is available in python or implement the code using 
pyx12","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":225,"Q_Id":69927141,"Users Score":0,"Answer":"It looks like you've lost a bit of your formatting.\nThe ~ is the segment terminator. The * character is the element separator. So you should first split by the segment end, read each segment in, and then parse each element.\nThere is the concept of qualifiers, backed by a dictionary. The N1 segment typically holds names, with N3 and N4 being address segments. The first element in the N1 segment (N101) is a qualifier. It describes the data in the N102 using a code value. The 834 has many of these.\nREF170001~\nREFDX0001~\nIn this case 17 and DX describe what value is in the REF02.\nYou should download a tool like EDI Notepad so that you can understand what each element is, what it means and how to parse it into something your application can understand.","Q_Score":0,"Tags":"python,edi,x12","A_Id":69928443,"CreationDate":"2021-11-11T10:58:00.000","Title":"How to parse HIPAA 834 EDI File and generate 997 using python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a kibana dashboard where there are some links provided for the user to click on. The link calls a flask service, which does some processing and redirects an URL using flask's redirect API, so that Kibana dashboard shows the processed values. Now, the flask is replaced with seldon core for predictions. Is there any way to redirect an URL, like it can be done in flask?\nThanks in advance.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":55,"Q_Id":69927182,"Users Score":0,"Answer":"Have you completely removed the flask app? 
If not, you can use Flask to handle all interactions with the front end and just use Seldon for the predictions.\nThis is a micro-services approach. You can receive the request as you used to in Flask and then, from within Flask, call the Seldon micro-service, get the prediction and then redirect to the results page with the new results from within Flask.\nThis is good because if you change your prediction logic or tools in the future you won't have to redo all this work. You'll just change the function that Flask calls to get the results. Also, your front end and back end will be agnostic of the implementation details of the prediction method.","Q_Score":0,"Tags":"python,flask,seldon-core","A_Id":69927340,"CreationDate":"2021-11-11T11:01:00.000","Title":"Flask redirect alternative in Seldon core","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I try to read two one billion record files from a Unix server for file comparison.\nI tried in python with paramiko package but it is very slow to connect and read the Unix files. That's why I chose Java.\nIn Java when I read the file I am facing memory issues and performance issues.\nMy requirement: first read all records from Unix server file1, then read second file records from Unix server and finally compare the two files.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":49,"Q_Id":69946752,"Users Score":0,"Answer":"As you are working in UNIX, I would advise you to sort the files and use the diff command-line tool: UNIX command-line tools are quite powerful. 
Please show us an excerpt of the files (you might need cut or awk scripts too).","Q_Score":0,"Tags":"python,java,unix","A_Id":69970573,"CreationDate":"2021-11-12T17:20:00.000","Title":"How to handle two one billion record for file comparison","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I try to read two one billion record files from a Unix server for file comparison.\nI tried in python with paramiko package but it is very slow to connect and read the Unix files. That's why I chose Java.\nIn Java when I read the file I am facing memory issues and performance issues.\nMy requirement: first read all records from Unix server file1, then read second file records from Unix server and finally compare the two files.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":49,"Q_Id":69946752,"Users Score":0,"Answer":"It sounds like you want to process huge files. The rule of thumb is that they will exceed your RAM, so never hope to read them all in at once.\nInstead, try to read meaningful chunks, process them, then forget them.\nMeaningful chunks could be characters, words, lines, expressions, objects.","Q_Score":0,"Tags":"python,java,unix","A_Id":69947114,"CreationDate":"2021-11-12T17:20:00.000","Title":"How to handle two one billion record for file comparison","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm making a cryptocoin app with my friends and i used to make app with python until i got several error on buildozer and python-for-android for days. I can't build apk so i decided to switch to kotlin and curious about something. 
This app will run with help of a python bot which is running on a server. Can i connect a python server with kotlin client and dump json data sended from python server?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":66,"Q_Id":69950684,"Users Score":3,"Answer":"Yes you can. You can consume APIs with Android Retrofit library. In Google official docs you find a codelab on how to use it.","Q_Score":1,"Tags":"python,kotlin,sockets,server","A_Id":69953177,"CreationDate":"2021-11-13T01:15:00.000","Title":"Can i connect a python server with a kotlin client?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"as you can read in the title I have to build a reliable P2P data transfer Using UDP(I'm a student), so what I asking you guys is to not give me the code, actually, I hate copying and pasting so much, I feel pain in doing it, I'm asking for help to tell me what tool do I need, I'm familiar with Javascript and Java and Python, feel free to help me if you know any of these languages.\nthe reliability part will be achieved through checksum and ACKs(Acknowledgements), so I have to Implement them:\n-I know how checksum can be calculated.\n-ACKs can be Implemented in the way of Stop-and-wait protocol(I think it's the simplest one), if you know another protocol, that is okay.\nI'm really lost, I don't know from where I begin if you have some code examples, please share a link that would be helpful, so I can build an idea from where do I start.\nthanks.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":90,"Q_Id":69979530,"Users Score":0,"Answer":"Here are some reliable UDP examples in github like kcp, quic, utp. 
You can read them and get the essentials from them.\nAs for how to do it, here are some small suggestions:\n\nYou need ACKs to check if any packet\/datagram is lost, and then retransmit it\nYou need to use FEC\nA congestion algorithm can be considered later.\n\nWhen you build your own code, set a small goal at every step; don't build a complex protocol at the very beginning.","Q_Score":0,"Tags":"javascript,python,java,udp,p2p","A_Id":69985643,"CreationDate":"2021-11-15T19:00:00.000","Title":"how can I build reliable P2P data transfer using UDP protocol","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an idea to make a twitter bot that blocks annoying accounts:\nmy question is can I block accounts on behalf of other users?\nI am using tweepy\nI already have tokens for my account","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":50,"Q_Id":69987869,"Users Score":1,"Answer":"Yes, if you have access tokens for the other user. 
You can get these by using the sign-in with Twitter API, and have the other user go through that process to authorize your app.","Q_Score":0,"Tags":"python,twitter,bots,tweepy","A_Id":69991606,"CreationDate":"2021-11-16T10:56:00.000","Title":"how to block twitter accounts on behalf of others with tweepy?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to migrate the Google Drive files of an account \"xxxxxxxx@gmail.com\" to another Google Workspace account \"xxxxxxx@mycompany.com\"\nWhat is the best way to achieve this and maintain all the metadata of the original file ?\nmetadata = shared users and folder structure.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":67,"Q_Id":69990516,"Users Score":1,"Answer":"You will need to do a file.list to list all of the files in your current drive account.\nThen you will need to do a file.get on each of the files. The main issue will be the file metadata. 
As not all file metadata is writeable, if you do a pure file.get with fields = * you will have a problem using that metadata for the file.create on the workspace account.\nYou will need to go through each of the metadata items and remove the ones that are not writeable so that you can upload them.","Q_Score":0,"Tags":"python,api,google-drive-api,migration,workspace","A_Id":69990862,"CreationDate":"2021-11-16T14:02:00.000","Title":"Migrate\/Move Google Drive files to new organization account","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I scraping some pages and these pages check my IP if it is a vpn or proxy (fake IP) if it is found fake the site is blocking my request please if there is a way to change my IP every x time with real IP Without using vpn or proxy or restart router\nNote: I am using a Python script for this process","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":351,"Q_Id":69996722,"Users Score":1,"Answer":"Your IP address is fixed by your internet service provider; if you reset your home router, you can sometimes get another IP address, depending on various internal factors.\nSome websites block by the User-Agent, the IP geolocation of your request, or by rate limit, 
but if you are sure it is blocked by IP, the only way to swap your IP address is through VPN tunneling or ProxyMesh.","Q_Score":0,"Tags":"python,proxy,ip-address,vpn","A_Id":69997096,"CreationDate":"2021-11-16T22:20:00.000","Title":"I want to change my ip address without using vpn or proxy","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to extract project relevant information via web scraping using Python+ Spacy and then building a table of projects with few attributes , example phrases that are of interest for me are:\n\nThe last is the 300-MW Hardin Solar III Energy Center in Roundhead, Marion, and McDonald townships in Hardin County.\nIn July, OPSB approved the 577-MW Fox Squirrel Solar Farm in Madison County.\nSan Diego agency seeking developers for pumped storage energy project.\nThe $52.5m royalty revenue-based royalty investment includes the 151MW Old Settler wind farm\n\nHere I have highlighted different types of information that I'm interested in , I need to end up with a table with following columns :\n{project name} , {Location} ,{company}, {Capacity} , {start date} , {end Date} , {$investment} , {fuelType}\nI'm using Spacy , but looking at the dependency tree I couldn't find any common rule , so if I use matchers I will end up with 10's of them , and they will not capture every possible information in text, is there a systematic approach that can help me achieve even a part of this task (EX: Extract capacity and assign it to the proper project name)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":195,"Q_Id":70010363,"Users Score":0,"Answer":"You should be able to handle this with spaCy. 
You'll want a different strategy depending on what label you're using.\n\nLocation, dates, dollars: You should be able to use the default NER pipeline to get these.\nCapacity, fuel type: You can write a simple Matcher (not DependencyMatcher) for these.\nCompany: You can use the default NER or train a custom one for this.\nProject Name: I don't understand this from your examples. \"pumped storage energy project\" could be found using a Matcher or DependencyMatcher, I guess, but is hard. What are other project name examples?\n\nA bigger problem you have is that it sounds like you want a nice neat table, but there's no guarantee your information is structured like that. What if an article mentions that a company is building two plants in the same sentence? How do you deal with multiple values? That's not a problem a library can solve for you - you have to look at your data and decide whether that doesn't happen, so you can ignore it, or what you'll do when it does happen.","Q_Score":1,"Tags":"python,nlp,spacy,information-retrieval,information-extraction","A_Id":70014635,"CreationDate":"2021-11-17T19:22:00.000","Title":"Information extraction with Spacy with context awareness","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:749)\nDuring handling of the above exception, another exception occurred:\nurllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='api.wandb.ai', port=443): Max retries exceeded with url: \/graphql (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation\nof protocol (_ssl.c:749)'),))\nI've already installed ndg-httpsclient, pyopenssl, pyasn1, and it didn't work for me.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1243,"Q_Id":70015637,"Users Score":0,"Answer":"OK, 
disconnecting the VPN connection on my device can fix the problem.","Q_Score":0,"Tags":"python,ssl","A_Id":70015638,"CreationDate":"2021-11-18T06:55:00.000","Title":"requests.exceptions.SSLError: HTTPSConnectionPool(host='api.wandb.ai', port=443)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using the Google OR Tools vehicle routing implementation and am trying to incorporate traffic times into my time matrix by using the Google Maps API. However, the Google Maps API has limitations on how big of time matrices it can build, how many requests can be done in certain amounts of time, etc.\nI know that the Google OR Tools VRP expects this time matrix, but I don't need the travel times between all combinations of all origins and destinations. For example, I am inputting pickup\/dropoff pairs, for which it does not make sense to calculate the travel time from each dropoff to its assigned pickup. Additionally, perhaps I could also not calculate the travel time between locations that are far away (I'd establish some maximum distance) from one another. It would reduce the computational complexity to not have to call the API for these combinations and instead have certain constants as placeholders in the time matrix for these combinations.\nCan this routing model be run in loops, such that for the first iteration I only calculate the travel times between the most likely assignments and inside each loop each driver gets assigned a pickup\/dropoff pair and then in the next loop the travel times between already made assignments don't need to be calculated anymore? I don't even know if this would change the computation time.\nHas anyone else had this problem before? 
I'd be interested in hearing any advice and\/or additional heuristics to use.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":136,"Q_Id":70029533,"Users Score":0,"Answer":"The VRP travel matrix required as input holds all the possible distances between visit locations. You can reduce the complexity of the problem by assuming the distance from A to B is equal to that from B to A; this will also reduce the calls to the Google Maps API. Note that the travel matrix shape must always be symmetrical.\nThe distance between locations is required for the VRP solving heuristic to find the next optimal nodes to be visited.\nIf you are certain that there are some locations that will not be visited after visiting some location, you can set the distance between those locations to Big M (i.e., sys.maxsize). However, be careful with the direction constraints (pickup-dropoff constraints): if you set Big M between two locations that are linked by such a constraint, the solver will definitely fail.","Q_Score":1,"Tags":"python,google-maps-api-3,or-tools","A_Id":70172002,"CreationDate":"2021-11-19T03:31:00.000","Title":"Google OR Tools VRP with time windows and Google Maps API","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to open discord and join a voice channel with a voice command. All I find is about bots, while I'm trying to do it with the user, me, in this case. Opening discord is not a problem, but I have no idea of how to do the voice channel thing, I'm still a beginner.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":56,"Q_Id":70043346,"Users Score":1,"Answer":"First of all, Discord does not allow you to execute client-side commands directly or, more specifically, to pull users into voice channels. 
Making a user force-join a voice channel via a command would be a serious security exploit.\nWhat you can do to make users join channels is to let them join a waiting room of some sort, then pull them to channels from there.\nNow, as far as I understand, you want to join a specific channel via a voice command yourself. In that case I would suggest not using the Discord API. I would implement a web scraper (in this case something like a web scraper would suffice, since Discord is basically running its website as an app; press Ctrl+Shift+I and you will understand what I mean) to target the text containing the voice channel name I want to join. I would get that name from voice recognition, then get that text's position on screen and click it. You could use pyautogui for that purpose.\nTo be fair, this is not a beginner project at all; however, with sufficient research and work you can make it.\nCheers","Q_Score":0,"Tags":"python,discord","A_Id":70046782,"CreationDate":"2021-11-20T04:57:00.000","Title":"Open Discord and make user join a specific voice channel","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to get the location geotag from an already posted Instagram post using the Basic Display Instagram API?\nI can't find any endpoints that let me get the location. Am I missing something or is that something we just cannot do?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":233,"Q_Id":70049191,"Users Score":0,"Answer":"This can be done by opening the post, right-clicking on top of the post, ticking Properties, and then Location. You will get a map with location coordinates in decimal degrees which you can use to pinpoint that spot on Google Maps or another mapping service. 
APIs calculate this too but APIs are for developers only so it is not accessible to an average Instagram user unless you're building an app related to it.","Q_Score":0,"Tags":"python,instagram-api","A_Id":70061412,"CreationDate":"2021-11-20T19:27:00.000","Title":"How to get the location geotag from an Instagram post using the API","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"This is probably a dumb question, but I just want to make sure with the below.\nI am currently using the requests library in python. I am using this to call an external API hosted on Azure cloud.\nIf I use the requests library from a virtual machine, and the requests library sends to URL: https:\/\/api-management-example\/run, does that mean my communication to this API, as well as the entire payload I send through is secure? I have seen in my Python site-packages in my virtual environment, there is a cacert.pem file. Do I need to update that at all? Do I need to do anything else on my end to ensure the communication is secure, or the fact that I am calling the HTTPS URL means it is secure?\nAny information\/guidance would be much appreciated.\nThanks,","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":626,"Q_Id":70052068,"Users Score":2,"Answer":"Post requests are more secure because they can carry data in an encrypted form as a message body. Whereas GET requests append the parameters in the URL, which is also visible in the browser history, SSL\/TLS and HTTPS connections encrypt the GET parameters as well. 
If you are not using HTTPS or SSL\/TLS connections, then POST requests are preferable for security.\nA dictionary object can be used to send the data, as key-value pairs, as a second parameter to the post method.\n\nThe HTTPS protocol is safe provided you have a valid SSL certificate on your API. If you want to be extra safe, you can implement end-to-end encryption\/cryptography: basically taking your so-called plaintext and converting it to scrambled text, called ciphertext.","Q_Score":1,"Tags":"python,python-requests,python-requests-html","A_Id":70052118,"CreationDate":"2021-11-21T05:31:00.000","Title":"Python - Requests Library - How to ensure HTTPS requests","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am gonna have an open source app in which it needs to send some data to an fastapi python api, how can i make it so that only the app can make requests to the api and not some random person abusing the api endpoint?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":106,"Q_Id":70061034,"Users Score":1,"Answer":"There are so many ways to do that, and some of the techniques don't even touch the API endpoint.\n\nIP Restriction: You can restrict, at the cloud provider, which IPs can call the API. You can even have multiple IPs.\nAPI KEY: You can provide an API KEY to the API client. 
When a request comes along with the provided key, you work on it; otherwise, ignore it.\n\nThe IP method is much better than handling this on the application end.","Q_Score":0,"Tags":"python,api,fastapi","A_Id":70061121,"CreationDate":"2021-11-22T04:54:00.000","Title":"Only let one app have access to api endpoint?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am able to login and receving SAMLResponse through HTTP-Redirect binding and also I can able to decrypt using privatekey and able to retrive claims.\nMy question is still do we need to verify saml response(ADFS)? if its how to do that\ndo I need to use IP(identity provider) public key ? will it available in IP(Metadata)?\nI have SAML response in the following request parameter\nSAMLResponse = base64(deflate(data))\nsignature = hashvalue\nsigAlg = sha256\nhow to validate?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":111,"Q_Id":70076829,"Users Score":0,"Answer":"Yes, you also need to verify the digital signature of the SAML response. This is because the encryption is done using your public key, which is available to anyone that has access to your metadata, and so does not give any assurance that the response was produced by your IdP.\nTo verify that your IdP is the one that produced the SAMLResponse, you verify the digital signature of the SAMLResponse using the IdP public key. 
This is typically available in the IdP metadata.","Q_Score":0,"Tags":"python,saml,adfs","A_Id":70077533,"CreationDate":"2021-11-23T07:21:00.000","Title":"How to verify ADFS encrypted SAML Response (Assertion)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"so I have a server and did an api for it so I can update patch files to my server, however now when I update the some batch files in the server, I always have to stop running the server and than run it again to see the changes, I was wondering what can I do so that my server restart it's self","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":62,"Q_Id":70091885,"Users Score":0,"Answer":"Yes, you can.\nHave the requests API send JSON like {'do': 'refresh_server'}, then call exit() and run the file again using the os module.\nEdit: This is a solution for Windows.","Q_Score":0,"Tags":"python,api","A_Id":70091948,"CreationDate":"2021-11-24T07:05:00.000","Title":"How can I restart my server using a request api","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a python script whose boto3 operations\/function calls must be restricted to a single IAM user which has extremely limited access. My understanding is that the execution of the script depends on the configured profile for AWS CLI. Would that sort of restriction have to done inside the script?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":40,"Q_Id":70120741,"Users Score":2,"Answer":"The script could be created as an AWS Lambda function. 
Only the single IAM user should then be given access to execute that function.\nAnother script can be written to invoke that Lambda (boto3.client(\"lambda\").invoke()). Anyone can execute that script, but anyone other than the right user will get an AccessPermissions error.\nNote:\n\nThere are limitations on the execution time\/memory allocation for AWS Lambdas, which might make this a bad solution for your current script. That really depends on what exactly your script does.","Q_Score":0,"Tags":"python,amazon-web-services,boto3","A_Id":70122608,"CreationDate":"2021-11-26T07:19:00.000","Title":"Running a python script on AWS which is executable by a single IAM user only","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"import websocket\nSOCKET = 'wss:\/\/socket.coinex.com\/'  # This is the problem\ndef on_open(ws):\n    print('opened connection')\ndef on_close(ws):\n    print('closed connection')\ndef on_message(ws, message):\n    print('received message')\nws = websocket.WebSocketApp(SOCKET, on_open=on_open, on_close=on_close, on_message=on_message)\nws.run_forever()","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":224,"Q_Id":70123734,"Users Score":0,"Answer":"Are you sending request parameters?\nMaybe that's the problem; also, the websocket stream is limited to only one market subscription.","Q_Score":0,"Tags":"python,websocket,cryptography,bots,trading","A_Id":70592961,"CreationDate":"2021-11-26T11:31:00.000","Title":"I`m trying to build crypto trading bot in coinex, but i can`t find websocket stream link for coinex(for accessing to candle sticks)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I
am having trouble authenticating against a web service that has OAuth provided by Google.\nBasically, I want to log in with my Google account to a web page to do some scraping on it.\nAs the web service is not mine, I don't have the app secret_key, only the clientID, redirect_URL and scope, which I could recover by looking at the parameters of the request method used while being logged in.\nOnce authenticated, the web page only requires a cookie named SID (Session ID, I would guess) to answer back as an authenticated user. There is no Bearer token, just the SID cookie.\nIs it possible to automate this type of authentication? I've read many related topics, but they all need the secret_key, which I don't have because I'm not the owner of the app.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":108,"Q_Id":70128506,"Users Score":0,"Answer":"(Cannot comment due to rep)\nYes, what you're asking is possible. You could theoretically follow and match all the requests to authenticate yourself successfully, get the SID and perform scraping, albeit this would be a very difficult task for some basic web scraping; it's like programming a full-blown scientific calculator to do 5 + 5. What you are asking is a really difficult task: you're going to run into all sorts of security issues and be asked for phone\/authenticator app\/email verification when attempting to log in to your account with Python requests, and then you'd need to keep track of those security cookies and keep them updated; it's a real mess and would be extremely difficult for anyone.\nI think the better method would be to manually authenticate yourself, get the SID cookie and hard-code that into your scraper within the cookie HTTP header.\nI understand this brings up the concern of what to do when the SID cookie expires.
Since you haven't said the site, it would be hard for me to imagine a site that makes you authenticate yourself with Google often rather than having its own internal SID\/JWT refreshing system to keep you logged in.\nMy recommendations would be:\n\nCheck the expiration of the SID cookie; if it's viable to manually copy-and-paste it after authenticating yourself, do that.\nIf the SIDs expire soon, check if there's an API request anywhere to get yourself a new SID (without going through OAuth again). In your Network panel, look for the set-cookie response header setting a new SID; you might need to change and keep track of these inside your program, but it'll be much easier than writing a program to log in to Google.\nIf there's no way to refresh the SID, they expire often, and manually getting a new cookie every 30 minutes isn't feasible for your long-term web scraping, I'd recommend looking into doing this with Puppeteer\/Chromium, as it'll be much easier than doing it via Python HTTP requests.","Q_Score":0,"Tags":"python,oauth,requests-oauthlib","A_Id":70145552,"CreationDate":"2021-11-26T18:07:00.000","Title":"How to login by oauth to third party app with python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a UDP server in Python that I'm testing out by sending packets with netcat -u server_ip server_port\nOn the UDP server, I can receive the packets with\ndata,addrport = socket.recvfrom(some_number) \u2014 I can read the data received and see the other socket's address and port with addrport.\nBut if I try to use socket.getpeername() on the same variable instead, it gives the OSError: [Errno 107] Transport endpoint is not connected error.\nWhat causes this?
I'm confused, as my netcat terminal doesn't close after sending, which I assume means it's already connected to my UDP socket.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":142,"Q_Id":70144941,"Users Score":1,"Answer":"I can receive the packets with data,addrport = socket.recvfrom(some_number)\n\nrecvfrom means that you are working with an unconnected UDP socket, i.e. the case where a single socket could receive packets from various sources and also send data to various sources using sendto. getpeername instead expects a connected socket, i.e. one which will only receive data from a single source (using recv, not recvfrom) and only send to a single source (using send, not sendto). This is the case with established TCP sockets (the ones returned by accept) but also with UDP sockets which are explicitly connected by calling connect.","Q_Score":0,"Tags":"python,sockets,select,udp","A_Id":70145212,"CreationDate":"2021-11-28T15:38:00.000","Title":"python UDP socket - recvfrom works, but getpeername() gives \"Transport endpoint is not connected\" error?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have written this to upload 10000 photos to Instagram, one each hour, and whenever I run it I get these errors\n\nINFO - Instabot version: 0.117.0 Started\nINFO - Not yet logged in starting: PRE-LOGIN FLOW!\nERROR - Request returns 429 error!\nWARNING - That means 'too many requests'. I'll go to sleep for 5\nminutes.\n\nThis is my code; am I doing anything wrong?
Can someone please point it out and explain?\n\nfrom instabot import Bot\nimport time\nbot = Bot()\nimage = 1\nbot.login(username=\"username\", password=\"password\")\nwhile image < 10000:\n    photo = str(image)\n    bot.upload_photo(f\"{photo}.png\")\n    time.sleep(3600)\n    image += 1","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1424,"Q_Id":70148664,"Users Score":1,"Answer":"You just need to go to api.py, a file in the InstaBot library.\nIf you're using the VS Code editor, just ctrl+click on the last link shown in the error logs in the VS Code terminal.\nThen comment out the complete chunk of code from lines 559 to 585 (the complete if block).\nNow you're good to go.","Q_Score":1,"Tags":"python,instagram,instapy","A_Id":72203259,"CreationDate":"2021-11-29T00:32:00.000","Title":"instabot ERROR, Why am I getting these errors and how to fix please?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am struggling with the basic understanding of classes and esp. inherited classes (if that is the right term).\nMy child class is supposed to use functions from the parent, but super() seems to init the parent class while I want to use its functions directly instead.\nWhat I plan to do:\nMy program will be a web scraper. It is supposed to scrape different webpages which always return the same data structure.\nSuch a use case could be:\nProduct title will be searched in multiple markets.
Amazon.com, Ebay.com, craiglist.com and Aliexpress, while each of the pages will give me their cheapest price.\nThe parent is the selenium class handling the browser, scraping, exceptions etc.\nNow for each market I would implement a separate file\/class that manages the specific xpath etc. to find, and pass the command to the parent class to execute.\nThis would give me, for example, the following file\/class structure:\n\nSelenium\n\namazon_com\nebay_com\ncraiglist_com\naliexpress_cn\n\n\n\nHowever, each of the sub classes\/files would need the functions of the selenium class.\nInitially I had one long list of functions within the selenium class. However, due to an ever-growing number of markets to check and constantly changing class names etc., I would much prefer to separate it into several files (potentially ending up in the ballpark of 50 sub classes...).\nSince I even struggle to explain my problem, I believe I have overlooked a very simple solution or missed a design concept entirely. Can someone point me to some good reading source I could use to learn and crack that nut?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":39,"Q_Id":70149403,"Users Score":0,"Answer":"To call the function foo on a parent class, you can do super().foo()","Q_Score":1,"Tags":"python,selenium,class","A_Id":70149420,"CreationDate":"2021-11-29T03:21:00.000","Title":"python outsource code into class but using parent functions","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I am working on a small project that is primarily web scraping. I have one small problem though. I need to check if a specific element of the website has a background color of red, or green. To be able to see the element, I need to insert some text into a text field first...
How would I go about that?\nI am scraping with BeautifulSoup right now.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":67,"Q_Id":70152698,"Users Score":0,"Answer":"Giving the exact code would help us solve the problem.\nIf you want to interact with the webpage, you should use packages like Selenium. That will help you add text to the text field. After performing this operation, you may see whether the class\/style of the required item changed.","Q_Score":0,"Tags":"python,web-scraping","A_Id":70153308,"CreationDate":"2021-11-29T09:55:00.000","Title":"Insert text into text field with python, for web scraping","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"This isn't absolutely necessary, but it would make the code a lot shorter and presumably quicker. I would like to perform, using Selenium, the same actions on web elements that normally belong to the same class, but if one of the elements is clicked, its class is dynamically changed to designate that it's \"the active\" element. I basically can only locate these elements by class name (XPath).
Is there a way to store elements that belong to different classes in the same variable, to then perform manipulations on that variable?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":44,"Q_Id":70173413,"Users Score":1,"Answer":"If using XPath is the only option, you could take advantage of using logical operators in your selectors for your elements.\nFor example, the following XPath:\n\/\/div[starts-with(@class, 'myclass')]|\/\/div[starts-with(@class, 'myclass active')]\nThe above XPath says 'find all div tags which have a class which starts with 'myclass' OR all div tags which start with 'myclass active'.\nOther XPath expressions like contains can be used to match text in any attribute on an element\/collection of elements. Be careful when choosing the selector, though, to ensure that only the elements you want selected are selected.","Q_Score":0,"Tags":"python,selenium,selenium-chromedriver","A_Id":70174982,"CreationDate":"2021-11-30T17:13:00.000","Title":"Is there a way to contain web elements with different attributes in the same variable?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"We are using Python Robot Framework automation. Is it possible to download and install the Edge driver automatically while the script runs, based on the version passed as a parameter? We are using selenium 3.14, robot framework selenium library 4.5.0 and python 3.7","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":143,"Q_Id":70187322,"Users Score":0,"Answer":"No, that is not possible.
Installation of any webdriver must be done before running tests, because it requires the PATH environment variable to be prepared and proper execution permissions for the webdriver.","Q_Score":0,"Tags":"python,selenium,robotframework","A_Id":70189461,"CreationDate":"2021-12-01T15:54:00.000","Title":"edge driver is not getting installed using robotframework","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I would like to import youtube-dl into my code for a discord.py bot.\nI have not understood how to install the youtube-dl package on my PC. (I downloaded the .exe file etc., but I don't understand.)\nThanks in advance for any help given.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":476,"Q_Id":70205067,"Users Score":0,"Answer":"You need to execute pip install youtube-dl in your terminal.\nAfter that, you can import the module with import youtube_dl (note the underscore; a hyphen is not valid in a Python import).","Q_Score":0,"Tags":"python,discord.py,youtube-dl","A_Id":70205118,"CreationDate":"2021-12-02T18:58:00.000","Title":"installing youtube-dl for import youtube_dl in my code","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using the command: pyinstaller -w -F main.py\nAnd I got this message when I tried to run the exe file:\n\nTraceback (most recent call last):\nFile \"main.py\", line 1, in \nFile \"PyInstaller\\loader\\pyimod03_importers.py\", line 476, in exec_module\nFile \"ms_word.py\", line 2, in \nFile \"PyInstaller\\loader\\pyimod03_importers.py\", line 476, in exec_module\nFile \"docx\\__init__.py\", line 3, in \nFile \"PyInstaller\\loader\\pyimod03_importers.py\", line 476, in exec_module\nFile \"docx\\api.py\",
line 14, in \nFile \"PyInstaller\\loader\\pyimod03_importers.py\", line 476, in exec_module\nFile \"docx\\package.py\", line 9, in \nFile \"PyInstaller\\loader\\pyimod03_importers.py\", line 476, in exec_module\nFile \"docx\\opc\\package.py\", line 9, in \nFile \"PyInstaller\\loader\\pyimod03_importers.py\", line 476, in exec_module\nFile \"docx\\opc\\part.py\", line 12, in \nFile \"PyInstaller\\loader\\pyimod03_importers.py\", line 476, in exec_module\nFile \"docx\\opc\\oxml.py\", line 12, in \nFile \"src\\lxml\\etree.pyx\", line 74, in init lxml.etree\nImportError: cannot import name _elementpath\n\nIt seems to be related to the docx import, since this error doesn't appear when I disable docx in my code.\nDoes anybody know how to solve this problem?\nThank you.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":641,"Q_Id":70239932,"Users Score":0,"Answer":"I encountered the same problem when using PyCharm with Python 3.8.10. I tried to solve it but found no solution. However, the same program, when used with system interpreter version 3.9.6 and packed with pyinstaller from the system cmd, produced no errors. In general, it seems that when packed in PyCharm it produces many errors (even when the user's Python version is the same as the system's Python version).\nBefore encountering this error, I got stuck in the pythoncom module isn't in frozen sys.path error.\nJust add --hidden-import lxml._elementpath to the pyinstaller ... line.","Q_Score":2,"Tags":"python,pyinstaller,python-docx","A_Id":70508049,"CreationDate":"2021-12-06T01:09:00.000","Title":"pyinstaller: ImportError: cannot import name _elementpath","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have attempted to configure Google API restrictions for the Distance Matrix API.
I am using this API from my personal computer only at this time. Eventually, I may also call this from an AWS EC2 instance. I am using Python to connect to the API. I have added my IPv4 addresses in the format 00.000.000.00 and 00.000.000.00\/32. I have added IPv6 in the format 0000:000::\/64 and 0000:000::1. I have also tried 0000:000:0000::\/64 and 0000:000:0000::1. My IP changes each time I connect to the internet via my cable provider. I assume they use an IP range. I cannot figure out how to specify this range other than the attempts above. When I use the specific IPv6 address (31 digits) from each login, the API works - until I log off. That is not very scalable for repeated usage. The error returned unless I enter the specific IPv6 address is:\n\n\"error_message\" : \"This IP, site or mobile application is not authorized to use this API key. Request received from IP address 0000:000:0000:0000:0000:0000:0000:0000, with empty referer\"","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":46,"Q_Id":70253438,"Users Score":0,"Answer":"As per Google support, API restrictions do not support dynamic IP addresses. The solution is to have a static IP.
Either restrict the key by HTTP referrer instead of IP address, or restrict the app by setting daily usage limits.","Q_Score":0,"Tags":"python-3.x,google-api,google-distancematrix-api","A_Id":70308754,"CreationDate":"2021-12-07T00:05:00.000","Title":"Configuring Google API Key Restrictions","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have created a Knative service [gRPC server] in an AKS cluster, and I have exposed the service using the Istio gateway's private static IP.\nAfter using the command kubectl get ksvc I got an address sample-app.test.10.153.157.156.sslio.io\nWhen I try to use this address in the Python client, it throws an error saying failed to connect addresses, but if I try to hit the service using\ncurl sample-app.test.10.153.157.156.sslio.io I am able to hit the service. I don't know what I am missing here.. please suggest..","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":100,"Q_Id":70272070,"Users Score":0,"Answer":"GRPC uses HTTP\/2. You may need to explicitly name your port h2c. I'm assuming that you've tested the container locally without Knative in the path and have been able to make a grpc call in that case.","Q_Score":2,"Tags":"grpc-python,knative,istio-gateway,knative-serving","A_Id":70274050,"CreationDate":"2021-12-08T08:38:00.000","Title":"how to call knative service [grpc server] by using a python client","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I've been thinking of the lightest way possible to run multiple different instances of a selenium process through the browser.
Is there a way to \"automate\" a process perhaps through the source code only, without having to use additional resources to run other images, videos etc.?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":35,"Q_Id":70284193,"Users Score":0,"Answer":"Selenium headless browsing is the lightest way to automate selenium webpages.\nA headless browser is a term used to define browser simulation programs that do not have a GUI. These programs execute like any other browser but do not display any UI. In headless browsers, when Selenium tests run, they execute in the background. Almost all modern browsers provide the capabilities to run them in a headless mode.","Q_Score":0,"Tags":"python,selenium,automation","A_Id":70284225,"CreationDate":"2021-12-09T02:54:00.000","Title":"What's the lightest way to automate Selenium webpage navigation?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Using the Python library requests and uploading larger files, I will get the error RemoteDisconnected('Remote end closed connection without response').\nHowever, it will work if I change the default User-Agent of the library to something like \"Mozilla\/5.0\".\nDoes anybody know the reason for this behaviour?\nEdit: Only happens with Property X-Explode-Archive: true","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":37,"Q_Id":70292340,"Users Score":0,"Answer":"Is there any specific pattern of timeouts that you could highlight in this case?\nFor example: it times out after 60 seconds every time (or something of that sort)?\nI would suggest checking the logs from all the components configured with the Artifactory instance, like the reverse proxy and the embedded Tomcat too.
As the issue is specific to large-sized files, correlate the timeout pattern with the timeouts configured on all the entities, which would give us a hint towards this issue.","Q_Score":0,"Tags":"python-requests,artifactory","A_Id":70299814,"CreationDate":"2021-12-09T15:17:00.000","Title":"Uploading larger files with User-Agent python-requests\/2.2.1 results in RemoteDisconnected","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We are using Symphony in our company and I am trying to send a message to the alert bot in Symphony.\nSomeone sent me a small Python script which already does this, using the socket library.\nThey send the message as socket.send(msg) using import socket in their script.\nQuestion is: what is socket.send comparable with in kdb? It's not an HTTP post, so it's not .Q.hp .. Is this similar -> {h:hopen hsym`$\"host:port\";h\"someMessageCompatibleWithSymbphonyBot\";hclose h}\nUPDATE: I have been told that my kdb message is not pure tcp.
Can anyone point me in the right direction?","AnswerCount":1,"Available Count":1,"Score":0.761594156,"is_accepted":false,"ViewCount":98,"Q_Id":70297951,"Users Score":5,"Answer":"hopen works for kdb-to-kdb, not kdb-to-other, so yes, in that sense it's not pure tcp.\nNormally when kdb needs to communicate with another system by tcp, you would use some sort of middleman library to handle the communication layer.\nIn theory you could use the python script\/package in your kdb instance if you use one of the numerous kdb<>python interfaces (pyq, embedpy, qpython etc.)","Q_Score":3,"Tags":"python,python-3.x,kdb","A_Id":70298183,"CreationDate":"2021-12-09T23:20:00.000","Title":"the equivalent of python socket.send() in kdb","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to unit test methods in fileA.py. fileA.py contains imports to firebasehandler.py where I'm setting up a connection to Firebase. The methods I'm trying to test have no relation or need at all with anything from firebasehandler.py, but when running the tests I don't want to go through the credentials checking phase.
What can I do to skip that import when running the unittests?","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":25,"Q_Id":70332628,"Users Score":-1,"Answer":"I guess you can mock the imported object or method from fileA.py in your UT.","Q_Score":0,"Tags":"python,unit-testing","A_Id":70332707,"CreationDate":"2021-12-13T09:44:00.000","Title":"How to skip imports that are not needed while unittesting python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Could you help me with the following?\nThe paramiko 16.1.0 library is in [python default] \/usr\/local\/lib\/python2.7\/site-packages\nBut I can not upgrade the paramiko 16.1.0 library to the current paramiko 2.8.0 library, so I had to download the paramiko 2.8.0 library into my path \/home\/pylib\nPlease let me know how I can force Python to use \/home\/pylib\/paramiko [the paramiko 2.8.0 library] in my Python code.\nNote: for now I am stuck with Python 2.7; I can not update PYTHONPATH","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":38,"Q_Id":70381193,"Users Score":1,"Answer":"It's good practice to use virtual environments to avoid conflicts like this.\nWhat is a virtual environment?\nA virtual environment is basically like creating a fresh installation of Python for only one project. This allows you to easily have two different versions of a library installed for different projects.\nHow do I use virtual environments?\nInstall the virtualenv package: pip2 install virtualenv\nGo to the root of your project: cd path\/to\/project\/root\nCreate a virtualenv: virtualenv -p \/usr\/bin\/python2 venv\nActivate the environment: .
venv\/bin\/activate\nInstall the package version you want: pip2 install paramiko==2.8.0\nRun your program: python something.py\nTo exit the virtual environment use: deactivate\nNext time you want to run your program, make sure you activate the environment with . venv\/bin\/activate first. None of the other steps need to be repeated.","Q_Score":1,"Tags":"python","A_Id":70381623,"CreationDate":"2021-12-16T14:59:00.000","Title":"force python to use my downloaded (to path : \/home\/pytlib) Paramiko library","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am a newbie to network programming in Python. I would like to know if there is any way that we can code in Python to detect this kind of scan. I would like to build an open source project by using the method that you might suggest.\nThanks in advance!!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":389,"Q_Id":70384109,"Users Score":0,"Answer":"Unfortunately there is no actual way to achieve this, since port scanning has no standard protocol which can be used to identify it; it is just like a regular socket connection, be it a client connection to fetch a web page, for example. (It can be a port scanner for port 80 or an actual client who wants a specific page.)\nYou can develop an algorithm that checks the number of requests received to, say,
100 random ports, and if at least x of them point to those random ports within a time range, it can possibly be a port scanner.\nBe aware this will not always work, since port scanning doesn't always mean all ports; it can also be a range of ports, a specific port and so on.","Q_Score":1,"Tags":"python,python-3.x,port,port-scanning","A_Id":70384388,"CreationDate":"2021-12-16T18:43:00.000","Title":"How to detect TCP Port Scan using python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to build a React app which sends an image to a Rest API and returns a processed image.\nWhat is the best way to send images through a Rest API?\nMy current assumption is using \"base64\" encoding to send images as strings, but the size of my images will be around 5-10MB and I don't think base64 will cut it.\nPlease help me out here. I am building the front-end using ReactJS & NodeJS; the Rest API will be built using Python Flask or FastAPI.","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":432,"Q_Id":70393592,"Users Score":3,"Answer":"You shouldn't be sending the images this way at all. The rough approach might be to upload images to some storage (S3 or whatever), then use the API just to communicate the reference to that image (id, URI).
Basically, you just need to send the info about who uploaded the image (user id) and where it is stored (filesystem path of the image, S3 reference, etc.), then you'll be able to relate the two entities and handle the image processing separately.","Q_Score":1,"Tags":"javascript,python,reactjs","A_Id":70393681,"CreationDate":"2021-12-17T13:00:00.000","Title":"What is best way to send images to Rest API?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"After developing a Python CLI app, I realized that it's time for it to have an Electron frontend.\nHow can the Electron app communicate with the Python app in response to a user action on the UI?\nUpdate: Is it typical for the Python CLI app to be converted into a long-running server using something like asyncio, and is Kafka for IPC overkill?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":171,"Q_Id":70395354,"Users Score":1,"Answer":"It depends, but in general it should be possible to use the existing CLI for IPC. You can spawn the CLI app as a subprocess from Electron and communicate with it through standard text streams. Of course, this method is simplistic and works properly only if the GUI \"owns\" a CLI instance and the CLI doesn't need to live longer than the GUI. Also, things become more complicated if either app must be a singleton (e.g. a second GUI instance must connect to the same one CLI instance).
In such cases, a server makes sense.","Q_Score":0,"Tags":"javascript,python,node.js,python-3.x,electron","A_Id":70395575,"CreationDate":"2021-12-17T15:26:00.000","Title":"Communication between Electron Frontend and Python Backend","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So, I've built this program using Python that is currently running in the terminal.\nMy goal is to eventually design the application in a modern way (like Discord, Slack, or any other 2021 desktop app), but I'm not really sure what to use.\nThe thing is, I know React\/Electron would be the best way to build\/design a desktop application like Discord, Teams etc. However, I'm looking to keep my Python as some sort of backend, while using, let's say, Electron as the front end.\nHow can I keep my Python functions, while designing a modern GUI\/front end?\nThanks for the advice","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":164,"Q_Id":70448013,"Users Score":0,"Answer":"You could use the Tkinter Python module, although it is not too much like React.","Q_Score":0,"Tags":"python,user-interface,desktop-application","A_Id":70448055,"CreationDate":"2021-12-22T11:02:00.000","Title":"Best way to build a desktop app while keeping my python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I just started learning how to code (in Python) and I was wondering how I can randomly ask questions for a quiz without the answer that follows.\nFor example, I'd want the robot to ask 'What's the capital of France?'
but without it saying 'Paris'?\nquestions = [(\"What's the capital of France?\", \"Paris\"), (\"Who painted the Mona Lisa?\", \"Da Vinci\")]\nTy :)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":30,"Q_Id":70476222,"Users Score":0,"Answer":"random.choice will just return a tuple (since those are the items in your list). So you can access just the first element while printing by doing [0].\nFor example, print(random.choice(questions)[0]).\nIn the larger program you'd want to assign the tuple to a variable, so that later you fetch the answer for the same question (by using [1]) instead of randomly selecting again.","Q_Score":0,"Tags":"python,string,discord,tuples","A_Id":70476254,"CreationDate":"2021-12-24T20:05:00.000","Title":"How can i random.choice a question without the answer for a discord robot quiz game?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a long text. How do I get only \"Brandshubs\" from the HTML below?\noutput = Brandshubs3.7","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":37,"Q_Id":70483920,"Users Score":1,"Answer":"Just use re.search('Brandshubs', s) or the findall function in the Python re library.\nBut you will always get the string Brandshubs; that's not meaningful, so I guess you want to check whether it exists or count the occurrences?
For that you can check\/count the return values of these functions directly","Q_Score":0,"Tags":"python,python-3.x","A_Id":70484175,"CreationDate":"2021-12-26T02:44:00.000","Title":"how to find out a specific text from a long text in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My python program uses an input variable (a number 1-10) each day. Preferably, I want to create a simple website with only 1 input field (number) and a button which then executes the python script. Is there any easy way to do so? I don't have any experience with making websites.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":33,"Q_Id":70496171,"Users Score":0,"Answer":"Your requirements look like you need to create a simple website. I feel you can use python frameworks like Flask or Django for this task. There are tons of tutorials available which can help you complete the challenge","Q_Score":0,"Tags":"python,web","A_Id":70496299,"CreationDate":"2021-12-27T13:33:00.000","Title":"Creating website with button that executes python script","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm very new to python and programming in general, and I'm looking to make a discord bot that has a lot of hand-written chat lines to randomly pick from and send back to the user. Making a really huge variable full of a list of sentences seems like a bad idea. Is there a way that I can store the chatlines in a different file and have the bot pick from the lines in that file? 
Or is there anything else that would be better, and how would I do it?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":49,"Q_Id":70501825,"Users Score":0,"Answer":"You can store your data in a file, for example named response.txt,\nand retrieve it in the discord bot file as open(\"response.txt\").readlines()","Q_Score":0,"Tags":"python,discord,discord.py","A_Id":70501865,"CreationDate":"2021-12-28T01:00:00.000","Title":"discord.py: too big variable?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm very new to python and programming in general, and I'm looking to make a discord bot that has a lot of hand-written chat lines to randomly pick from and send back to the user. Making a really huge variable full of a list of sentences seems like a bad idea. Is there a way that I can store the chatlines in a different file and have the bot pick from the lines in that file? Or is there anything else that would be better, and how would I do it?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":49,"Q_Id":70501825,"Users Score":0,"Answer":"I'll interpret this question as \"how large a variable is too large\", to which the answer is pretty simple. A variable is too large when it becomes a problem. So, how can a variable become a problem? The big one is that the machine could possibly run out of memory, and an OOM killer (out-of-memory killer) or similar will stop your program. How would you know if your variable is causing these issues? Pretty simple, your program crashes.\nIf the variable is static (with a size fully known at compile-time or prior to interpretation), you can calculate how much RAM it will take. (This is a bit finicky with Python, so it might be easier to load it up at runtime and figure it out with a profiler.) 
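One rough way to do that runtime check, as a sketch (sys.getsizeof only counts the shallow size, so the container and its items are summed separately; the list here is a stand-in for the real data):

```python
import sys

# stand-in for the real list of chat lines
lines = ['chat line number %d' % i for i in range(100_000)]

# shallow size of the list object plus the size of each string it holds
total_bytes = sys.getsizeof(lines) + sum(sys.getsizeof(line) for line in lines)

print('approx. %.1f MB' % (total_bytes / 1024 / 1024))
```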
If it's more than ~500 megabytes, you should be concerned. Over a gigabyte, and you'll probably want to reconsider your approach[^0]. So, what do you do then?\nAs suggested by @FishballNooodles, you can store your data line-by-line in a file and read the lines to an array. Unfortunately, the code they've provided still reads the entire thing into memory. If you use the code they're providing, you've got a few options, non-exhaustively listed below.\n\nConsume a random number of newlines from the file when you need a line of text. You would look at one character at a time, compare it to \\n, and read the line if you've encountered the requested number of newlines. This is O(n) worst case with respect to the number of lines in the file.\n\nRather than storing the text you need at a given index, store its location in a file. Then, you can seek to the location (which is probably O(1)), and read the text. This requires an O(n) construction cost at the start of the program, but would work much better at runtime.\n\nUse an actual database. It's usually better not to reinvent the wheel. If you're just storing plain text, this is probably overkill, but don't discount it.\n\n\n[^0]: These numbers are actually just random. 
If you control the server environment on which you run the code, then you can probably come up with some more precise signposts.","Q_Score":0,"Tags":"python,discord,discord.py","A_Id":70501924,"CreationDate":"2021-12-28T01:00:00.000","Title":"discord.py: too big variable?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hello, I am new to python and I wanted to know how I can load an image from a directory on the computer into an HTML page using python?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":63,"Q_Id":70503799,"Users Score":0,"Answer":"Can you add more details to your question, please? It is unclear what the aim is here.","Q_Score":0,"Tags":"python,html,css,django","A_Id":70503838,"CreationDate":"2021-12-28T07:08:00.000","Title":"How to load an image file from my local hard drive to html page using python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm using pusher for a python-vuejs app.\nI have a function that sends data to the pusher, the data content is {'message':{'value':id_value}}\nThe function is executed via a rest API POST request. When I trigger the function and send the data to the pusher on the page with the url host\/data-url, the pusher console shows the correct information the first time.\nWhen I execute the POST request again (without refreshing the page), the data is gotten twice (gotten means that it is physically there, not just a pusher console output); if I do the request again it is gotten 3 times and so on.\nDoes anyone have any idea on how to initialize pusher after each request or something, because if I refresh the page and send the 
data, it works again and I get it only once.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":127,"Q_Id":70509966,"Users Score":0,"Answer":"I figured it out, in case anyone has the same problem.\n1. You should subscribe the moment you call the page.\n2. If you are triggering the channel.bind with a button click (or any event launcher), make sure to unbind before the click and not after.","Q_Score":0,"Tags":"python-3.x,vue.js,pusher","A_Id":70521237,"CreationDate":"2021-12-28T16:43:00.000","Title":"Pusher showing already sent data","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to schedule a spider from scrapyd-client by giving the command line arguments as well.\ne.g: scrapy crawl spider_name -a person=\"John\" -a location=\"porto\" -o local.csv\nThe above command works well when running the spider directly from scrapy, but it does not work when running it from the rest API using scrapyd-client.\nBasically the question is how to send scrapy's command line arguments like (-a, -o) in scrapyd-client?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":79,"Q_Id":70528192,"Users Score":0,"Answer":"Use the flag -d instead of the -a flag","Q_Score":0,"Tags":"python,scrapy,scrapyd-deploy","A_Id":70530856,"CreationDate":"2021-12-30T06:33:00.000","Title":"How to send scrapy command line argument to scrapyd-client","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I did some research but I couldn't find any way to get the most popular rune for a specific champion. There are many rune websites that do this but I don't know whether they use their own rune pages or they use an api. 
So I need a way to get the most popular rune for a specific champion in League of Legends. Is it possible to do this via the riot api or another python package?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":120,"Q_Id":70548584,"Users Score":1,"Answer":"No, it's not possible to filter by runes in the API itself.\nYou CAN fetch as many matches as you can and analyze them for the specific champion and his runes. Then you have a small amount of data to work with and read out popular \/ good runes.\nThis is how the rune websites work too.\nTo answer your question: I don't think there is a library for this, but you can easily program a match fetcher by yourself and save the queries in a database.","Q_Score":0,"Tags":"python,riot-games-api","A_Id":70579897,"CreationDate":"2022-01-01T11:33:00.000","Title":"How can I get most chosen League of Legends rune for specific champion with riot api?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to send requests to a deployed app on a cloud run with python, but inside the test file, I don't want to hardcode the endpoint; how can I get the URL of the deployed app with a python script inside the test file so that I can send requests to that URL?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":151,"Q_Id":70580631,"Users Score":2,"Answer":"You can use gcloud to fetch the url of the service like this\n\ngcloud run services describe SERVICE_NAME\n--format=\"value(status.url)\"","Q_Score":0,"Tags":"python,google-cloud-run","A_Id":70581001,"CreationDate":"2022-01-04T14:50:00.000","Title":"Retrieve endpoint url of deployed app from google cloud run with Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python 
Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I scrape a website monthly for products' prices and send an email if there are any changes in the prices?\nIs there any way to do that automatically using Python? What libraries should I use?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":97,"Q_Id":70581782,"Users Score":0,"Answer":"You can use crontab or airflow and set your scraper to run every month.","Q_Score":0,"Tags":"python,email,web-scraping,beautifulsoup","A_Id":70581831,"CreationDate":"2022-01-04T16:12:00.000","Title":"Scrape Webpages at regular intervals automatically with python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I would like to know if I can create a python API that receives a username and password from a Google user and has access to Google services for that user.\nex:\nThe user gives me their google email and password and I can move files between the user's GCP buckets.\nIf there is no way to do this directly, I would like to know ways to perform this operation without a frontend, only with the user's email and password","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":41,"Q_Id":70600993,"Users Score":1,"Answer":"It is not possible to retrieve the password for a Google Account via an API.\nYour question:\n\nI would like to know if I can create a python API that receives\na username and password from a Google user and has access to Google\nservices for that user.\n\nYes, you could create a Python API that asks the user to enter their username and password. That is easy to do via an HTTP form.\nHowever, even with the username and password, Google security would quickly block the account. 
All authentication is browser-based for user identities and requires a human to interact with the browser.","Q_Score":0,"Tags":"python,authentication,google-cloud-platform","A_Id":70601166,"CreationDate":"2022-01-05T23:57:00.000","Title":"Python API using User Account to use User GCP Services (Storage, Engines, etc)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am making a python request to a server using the requests package. When I analyzed the log, I saw https:\/\/www.xxxxxxxxx.com:443 \"GET \/ HTTP\/1.1\" 200 499. I am confused about the status code.\nWhat are 200 and 499? Which one is the actual status code?","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":219,"Q_Id":70602249,"Users Score":0,"Answer":"499 represents the Content-Length; it is not the status code.","Q_Score":0,"Tags":"http,web,https,python-requests","A_Id":71571232,"CreationDate":"2022-01-06T03:51:00.000","Title":"what is 499 in http response GET \/ HTTP\/1.1\" 200 499","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am making a python request to a server using the requests package. When I analyzed the log, I saw https:\/\/www.xxxxxxxxx.com:443 \"GET \/ HTTP\/1.1\" 200 499. I am confused about the status code.\nWhat are 200 and 499? Which one is the actual status code?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":219,"Q_Id":70602249,"Users Score":0,"Answer":"HTTP 200 means the request was successful. 
When you make an HTTP request you usually want a response code of 200.\nHTTP error 499 simply means that the client shut off in the middle of processing the request through the server. The 499 error code suggests that something happened on the client side, which is why the request could not be completed.","Q_Score":0,"Tags":"http,web,https,python-requests","A_Id":70602274,"CreationDate":"2022-01-06T03:51:00.000","Title":"what is 499 in http response GET \/ HTTP\/1.1\" 200 499","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to get the list of user accounts who have liked a tweet.\nReading the twitter api documentation, it only returns up to 100 accounts. My question is: Is there another way to get more than 100 accounts with another method?\nThank you","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":55,"Q_Id":70627509,"Users Score":0,"Answer":"There is no way to get this information on arbitrary past Tweets. If you are the owner of an account you can track the likes using the Account Activity API after you post a Tweet, and can then keep track of the like actions. This would be the way to get the account information for likes on Tweets, beyond the API limit of 100 (which exists in both v1.1 and v2).","Q_Score":0,"Tags":"python,api,web,web-scraping,twitter","A_Id":70628144,"CreationDate":"2022-01-07T21:45:00.000","Title":"Getting more than 100 users accounts who have liked a tweet","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have python code on my desktop where I call Google APIs to get some data. 
I am trying to deploy it on AWS Lambda to run it periodically but I am running into some issues. Below are the steps I followed:\n\nDownloaded the google package using pip3 install google-api-python-client -t . Zipped this folder and uploaded it to a layer in AWS Lambda\nLinked the layer with my function, but when I try to execute the lambda function, I get the following error:\n\n\"errorMessage\": \"Unable to import module 'lambda_function': No module named 'googleapiclient'\",\nIn my code I have the following import statement:\nfrom googleapiclient.discovery import build\nPlease let me know if I am missing something and how to debug this.\nRegards,\nDbeings","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":291,"Q_Id":70643567,"Users Score":0,"Answer":"You typically receive this error when your Lambda environment can't find the specified library in the Python code. This is because Lambda isn't prepackaged with all third party python libraries.\nIn your local environment, \"googleapiclient\" is compiled and available during runtime, but it's not available in the Lambda runtime.\nTo resolve this error, create a deployment package (pre-compiled\/zip) or Lambda layer that includes the libraries that you want to use in your Python code for Lambda.\nbest of luck","Q_Score":0,"Tags":"amazon-web-services,aws-lambda,google-api,google-api-python-client","A_Id":70643777,"CreationDate":"2022-01-09T16:58:00.000","Title":"Google API Python Client Call with AWS Lambda","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to create an xlsx from a template exported from Microsoft dynamics NAV, so I can upload my file to the system.\nI am able to recreate and fill the template using the library xlsxwriter, but unfortunately I have figured out that the template file 
also has an attached XML source code file (visible in the developer tab in Excel).\nI can easily modify the XML file to match what I want, but I can't seem to find a way to add the XML source code to the xlsx file.\nI have searched for \"python adding xlsx xml source\" but it doesn't seem to give me anything I can use.\nAny help would be greatly appreciated.\nBest regards\nMartin","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":48,"Q_Id":70667250,"Users Score":2,"Answer":"An xlsx file is basically a zip archive. Open it as an archive and you'll probably be able to find the XML file and modify it. \u2013 Mak Sim","Q_Score":1,"Tags":"python,xlsx,xlsxwriter,dynamics-nav","A_Id":70687607,"CreationDate":"2022-01-11T13:00:00.000","Title":"Adding XML Source to xlsx file in python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to edit the python request to add TLS settings (by TLS settings I mean TLS fingerprinting, JA3).","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":311,"Q_Id":70672970,"Users Score":0,"Answer":"The JA3 fingerprint is based on the ciphers and their order and the various TLS extensions and their order. While the ciphers and their order can be changed, features like the TLS extension order are not accessible from Python. 
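The cipher part can be adjusted through the standard library's ssl module, e.g. (a sketch; the cipher string is only an example, and mounting the context onto requests would additionally need a custom HTTPAdapter):

```python
import ssl

# Build a TLS context with a restricted, explicitly ordered cipher list.
# 'ECDHE+AESGCM:ECDHE+CHACHA20' is an example OpenSSL cipher string.
ctx = ssl.create_default_context()
ctx.set_ciphers('ECDHE+AESGCM:ECDHE+CHACHA20')

# List the ciphers the context will offer, in order:
for cipher in ctx.get_ciphers():
    print(cipher['name'])
```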
This means there is no way to emulate a specific JA3 fingerprint from Python, and thus also not from requests.","Q_Score":0,"Tags":"python,python-3.x,ssl,python-requests,tls1.2","A_Id":70673166,"CreationDate":"2022-01-11T20:21:00.000","Title":"How to edit request in python to add TLS settings?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a method in graph-tool for checking whether two nodes are connected (as first neighbours) or not without having to iterate?\nFor example, something like graph_tool.is_connected(v,u) which returns a boolean depending on whether or not v and u are connected vertices. Something like a function to check just whether a certain edge exists.\nThanks","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":121,"Q_Id":70679146,"Users Score":1,"Answer":"It is solved by checking the result of the function g.edge(v,u). If add_missing=False it just returns None whenever the edge does not exist. Thanks to @NerdOnTour for the comment","Q_Score":0,"Tags":"python,graph-tool,complex-networks","A_Id":70679684,"CreationDate":"2022-01-12T09:29:00.000","Title":"Check whether two vertices are connected using graph-tool on Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Using boto3's list_accounts I was able to get the Joined Timestamp; however, this time I want to capture the closed timestamp of all accounts in my AWS Organization that are in closed status. Can someone tell me if there is a Boto3 function available to fetch this data? 
TIA","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":38,"Q_Id":70696073,"Users Score":1,"Answer":"This is not possible. Whether an account is closed or not has nothing to do with the organization, and therefore you can't use boto3 (organizations) to get that info the way you got the joined timestamp with list_accounts. With list_accounts you just see the timestamp when you joined (this is info related to the organization); you cannot see the timestamp of when the account was closed (this is info related to the account itself).","Q_Score":0,"Tags":"python,amazon-web-services,aws-lambda,boto3","A_Id":70696272,"CreationDate":"2022-01-13T11:47:00.000","Title":"How can I capture the closed timestamp of an AWS Account using Boto3","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am a Senior High School IEA student and I am trying to develop a discord bot for a project. It is in my interest for the bot to have robust functions and have the capability to play music. I've glanced across multiple tutorials on youtube and I still find myself scratching my head.\nWhat's the difference between discord.py and discord.py rewrite?\nAre you supposed to pip install discord.py[voice] separately from those two?\nIs discord.py[voice] compatible with discord.py rewrite? Or is there a different version of voice on rewrite?\nAlso, is there a specific IDE that you particularly recommend using for developing a discord bot?\nSome clarity would be nice!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":302,"Q_Id":70698875,"Users Score":1,"Answer":"discord.py is now up to date, so it will work fine for the project. 
I believe (I haven't worked with it before) discord.py[voice] is just a superset of discord.py with audio functionality, so installing it should leave you all set for your project.\nAs for an IDE, I recommend visual studio code or pycharm, but this part isn't really important; just pick an editor that you like, get used to it, and it will get the job done fine.\nAlso, it's worth noting that mainline development for discord.py has recently ended. This shouldn't affect your project because it's a very recent change, but keep this in mind in the future in case things start to become incompatible.","Q_Score":0,"Tags":"python,discord,discord.py","A_Id":71302949,"CreationDate":"2022-01-13T15:13:00.000","Title":"What is the difference between discord.py and discord.py rewrite? And are they both compatible with discord.py[voice]?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hi, I have a Lambda function used as a webhook. The webhook may be called multiple times simultaneously with the same data. In the lambda function, I check whether the transaction record is present in DynamoDB or not. If it's present in the db, the Lambda simply returns; otherwise it executes further. The problem is that while checking if a record is in the db, the Lambda gets called again and that check fails because the previous transaction is still not inserted in the db, and the transaction can get executed multiple times.\nMy question is how to handle this situation. 
Will SQS be helpful in this situation?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":112,"Q_Id":70730423,"Users Score":0,"Answer":"\"If it's present in the db the lambda simply returns, otherwise it executes further\": given that, it should be possible to use a FIFO queue and use some \"key\" from the data as the deduplication id. That would mean duplicate messages never make it to your logic, and then you would also need\ndynamodb's \"strongly consistent\" option.","Q_Score":0,"Tags":"python,aws-lambda,amazon-dynamodb,amazon-sqs,atomic","A_Id":70734230,"CreationDate":"2022-01-16T13:22:00.000","Title":"Prevent duplicate DynamoDB transaction","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm busy programming a personal browser in python for day-to-day use and potentially to replace other browsers on my rig, i.e. Firefox and Chrome.\nI'd just gotten the basic framework down and opened google on it to test, when the first site I opened flooded me with ads, which got me wondering:\nAre there any potential security threats I'm opening myself up to by using a homemade browser, and what kind of prevention measures would one of these established companies put into place to protect their users? 
Above and beyond personal AV software of course.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":40,"Q_Id":70771907,"Users Score":0,"Answer":"There is a module called 'woob' (web outside of browser), of which it is said: \"Its nice project for non-sensitive data but a big fat watering hole target for financial info.\" Any module you depend on, like the requests module, could be updated with theft features, and that's the problem: the use of another module that you have no control over for sensitive data.\nThe measures you could take are to verify that your current modules are intact and secure, and to monitor any updates or prevent updates.","Q_Score":0,"Tags":"python,security,browser,ads","A_Id":70772621,"CreationDate":"2022-01-19T13:57:00.000","Title":"Potential Security Risk of Browser Coded in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am building a Web app and I need to access an external API which only accepts a URL to my image in order to process it. These images will be taken from the app, therefore I am looking for a way to generate a URL based on them.\nIs there any way to do that directly from python, generating the link?\nEdit: The web app will only be local for now, so these images are only local, on my computer.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":75,"Q_Id":70786473,"Users Score":0,"Answer":"The external API will not be able to access your local images even if you host them using a local webserver. 
Instead, you can upload the images to storage services like AWS S3 or Azure Blob storage, and then provide those image URLs to the external API.\nThis is the simplest approach if you don't have many images.\nIf there are a lot of images, then instead of uploading each one of them, you may want to run a local webserver [e.g. nodejs, etc.] and then use ngrok tunnels so that you can get a public URL that you can give to the external API.","Q_Score":0,"Tags":"python,image,api,url,web-applications","A_Id":70787275,"CreationDate":"2022-01-20T12:42:00.000","Title":"How can I convert a local image to a URL for accessing external APIs?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"When I run import requests I receive an error: ImportError: No module named requests. I have looked at other responses to similar questions and tried nearly everything and nothing is working. I'm using MacOS and my project is on my desktop in a folder with a single file with the one line of code I wrote above. 
When I run pip3 list, the requests package is installed.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":52,"Q_Id":70796540,"Users Score":0,"Answer":"I understand you, python can be a mess.\nMacOS has a default installation of python version 2.XX.\nWhen you type 'python' in the terminal it executes the preinstalled python v2.\nNow, since you are using pip3, I assume you've installed the latest python 3.XX, and\npip3 will install modules for python 3 only, not for the default python 2.\nSo when you are running your script.py,\ninstead of\npython script.py\ntry\npython3 script.py","Q_Score":0,"Tags":"python","A_Id":70796610,"CreationDate":"2022-01-21T04:48:00.000","Title":"Receiving a 'ImportError: No module named requests' error in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hi, how can I get an element by attribute and the attribute value in Python Selenium?\nFor example, I have class=\"class1 class2 class3\".\nNow I want to get the element with the attribute class that carries the classes \"class1 class2 class3\".\nIs this possible?\nIf I use xpath, I always need to add the element type, input, option,...\nI try to avoid the element type since it varies sometimes.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":77,"Q_Id":70804166,"Users Score":0,"Answer":"The CSS selectors would be formatted like this:\n'[attribute]'\n'[attribute=\"value\"]'\nFor example, the selector for the input field on google.com would be:\n'input[name=\"q\"]'","Q_Score":0,"Tags":"python,selenium","A_Id":70804353,"CreationDate":"2022-01-21T16:03:00.000","Title":"Python - Selenium get Element by attribute and the full attribute value","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and 
APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"is it possible to mention roles or people on a specified date in Discord?\nExample of a command: .date 24\/01\/2022 19:00 @role and then the role will be mentioned on 24th of January at 19:00?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":82,"Q_Id":70816472,"Users Score":1,"Answer":"It's definitely possible.\nYou're going to want to utilize the datetime module (or whichever time module you prefer)\nA couple things to consider:\n\nUnless you go with a separate db, memory will be wiped any time the\nbot is disconnected or restarted and your reminder countdown clock\nprocess will stop.\nIf you go with a separate db, you're going to want to store the user\nID, channel ID, and datetime object from the command (so your bot\nknows who to reply to, how to reply, and when)\n\nI would explore a tasks.loop to check against the datetime column in the db every \"x\" seconds and if the current datetime == the reminder time datetime, then a message is sent to the user ID in the specified channel ID.","Q_Score":0,"Tags":"python,discord.py","A_Id":70841003,"CreationDate":"2022-01-22T19:38:00.000","Title":"Mention people at a specified date Discord py","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Scenario: Lets say I have a REST API written in Python (using Flask maybe) that has a global variable stored. The API has two endpoints, one that reads the variable and returns it and the other one that writes it. 
Now, I have two clients that at the same time call both endpoints (one calls the read endpoint, one the write).\nI know that in Python multiple threads will not actually run concurrently (due to the GIL), but there are some I\/O operations that behave asynchronously, so would this scenario cause any conflict? And how does it behave? I'm assuming that the request that \"wins the race\" will hold the other request (is that right)?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":54,"Q_Id":70831421,"Users Score":1,"Answer":"In short: you should rethink your REST API design and implement some kind of FIFO queue.\nYou have two endpoints (W for writing and R for reading). Let's say the global variable has some value V0 in the beginning. If client A reads from R while at the same time client B writes to W, two things can happen.\n\nThe read request is faster. Client A will read V0.\nThe write request is faster. Client A will read V1.\n\nYou won't run into an inconsistent memory state due to the GIL you mentioned, but which of the cases above happens is completely unpredictable. One time the read request could be slightly faster and the other time the write request could be slightly faster. Much of the request handling is done in your operating system (e.g. address resolution or TCP connection management). Also the requests may traverse other machines like routers or switches in your network. All these things are completely out of your control and could delay the read request slightly more than the write request or the other way around. So no matter how many threads you run your REST server with, the return value is almost unpredictable.\nIf you really need ordered read\/write interaction, you can make the resource a FIFO queue. So each time any client reads, it will pop the first element from the queue. Each time any client writes it will push that element to the end of the queue. 
If you do this, you are guaranteed to not lose any data due to overwriting and also you read the data in the same order that it is written.","Q_Score":0,"Tags":"python,multithreading,rest","A_Id":70831893,"CreationDate":"2022-01-24T09:14:00.000","Title":"How Python handles asynchronous REST requests","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I wanted to make a python program that can read data online (could be JSON), and also capable of writing\/updating that data. One of my option here is to use google spreadsheets API, but I also want to know if there are any other good alternatives (free or not).\nI was planning to make an online a list of dictionaries that contain codes, and then the python program would then write that code as 'used' in the online data after being used.\nI'm a beginner, so I don't know where to start from here.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":54,"Q_Id":70833238,"Users Score":2,"Answer":"You can also use AWS or Firebase. All of them are easy to integrate into your code.","Q_Score":0,"Tags":"python,json,database,csv","A_Id":70833271,"CreationDate":"2022-01-24T11:36:00.000","Title":"Where can I store, read, and write data online using python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have written a Python script using Selenium to automate a workflow on a third-party website. It works fine on my machine.\nBut when I try to run the same script on a GCP instance, I get Cloudflare's 1020 Access Denied error. 
I am using headless Google Chrome as the Selenium webdriver.\nI am guessing the website owner has put a blanket firewall restriction on GCP instance external IPs.\nMy questions:\n\nDoes my assumption make sense? Is it even possible to put such a restriction?\nHow do I bypass the firewall? What if I set a static IP on the GCP instance? Or is there some way to use a VPN through headless Chrome?\nWould changing the cloud provider help? Is there any less well-known cloud provider which won't be blocked?\n\nAny other suggestions?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":204,"Q_Id":70835801,"Users Score":0,"Answer":"Yes, the Cloudflare firewall can block IP ranges amongst other options, so this is entirely possible.\nNot sure you should ask how to circumvent security. A static IP might work or it might not; it depends entirely on the unknown restrictions set by the website operator. Again, a VPN may or may not work depending on what restrictions the website operator set up.\nSince we can't know what restrictions are in place, another cloud provider might work or it might not. 
It could also stop working if the website operator decides to block that IP range as well.\n\nThe only way to be sure is to ask the website operator.","Q_Score":0,"Tags":"python,selenium,google-cloud-platform,firewall,cloudflare","A_Id":70836559,"CreationDate":"2022-01-24T14:53:00.000","Title":"Cloudflare gives access denied when accessing a website from GCP instance","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm having a problem using tweepy in VS Code; it keeps reporting a missing import of tweepy and I don't know why.\nPowerShell shows that the requirement is already satisfied, and I can see tweepy in VS Code if I search for it, so what is going on?\nPS C:\\Windows\\System32> pip install tweepy\nRequirement already satisfied: tweepy in c:\\users\\arthu\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\\localcache\\local-packages\\python39\\site-packages (4.5.0)\nRequirement already satisfied: requests-oauthlib<2,>=1.0.0 in c:\\users\\arthu\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\\localcache\\local-packages\\python39\\site-packages (from tweepy) (1.3.0)\nRequirement already satisfied: requests<3,>=2.27.0 in c:\\users\\arthu\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\\localcache\\local-packages\\python39\\site-packages (from tweepy) (2.27.1)\nRequirement already satisfied: idna<4,>=2.5 in c:\\users\\arthu\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\\localcache\\local-packages\\python39\\site-packages (from requests<3,>=2.27.0->tweepy) (2.5)\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in c:\\users\\arthu\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\\localcache\\local-packages\\python39\\site-packages (from 
requests<3,>=2.27.0->tweepy) (1.26.8)\nRequirement already satisfied: charset-normalizer~=2.0.0 in c:\\users\\arthu\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\\localcache\\local-packages\\python39\\site-packages (from requests<3,>=2.27.0->tweepy) (2.0.10)\nRequirement already satisfied: certifi>=2017.4.17 in c:\\users\\arthu\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\\localcache\\local-packages\\python39\\site-packages (from requests<3,>=2.27.0->tweepy) (2021.10.8)\nRequirement already satisfied: oauthlib>=3.0.0 in c:\\users\\arthu\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\\localcache\\local-packages\\python39\\site-packages (from requests-oauthlib<2,>=1.0.0->tweepy) (3.1.1)\nPS C:\\Windows\\System32>","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":163,"Q_Id":70854486,"Users Score":0,"Answer":"I had the same problem using tweepy in VS Code.\nJust to be sure, I installed tweepy with pip install tweepy in the shell terminal, in the VS Code terminal, and in Ubuntu as well. After that I closed VS Code and opened it again and it worked; hopefully it works for you too.","Q_Score":0,"Tags":"python,visual-studio-code,tweepy","A_Id":72203262,"CreationDate":"2022-01-25T19:22:00.000","Title":"missing import for tweepy on vscode","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm grabbing a list from a website using their API and saving it as the variable \"playlistNames\". In a later function when I call \"playlistNames\" to manipulate the data, is it making another API call? 
or is the data just stored locally in the \"playlistNames\" variable?\nSorry for such a silly question, I can't seem to google this properly.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":39,"Q_Id":70873223,"Users Score":0,"Answer":"If you are running the entire script where you wrote the API request and execution passes through that line, it will call the API each time you run it. Otherwise it will read from the saved variable, assuming you are on the same kernel.","Q_Score":0,"Tags":"python","A_Id":70873276,"CreationDate":"2022-01-27T04:07:00.000","Title":"If I pull data from an API and save it in a variable, does it hit the API every time I call that variable?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"As said in the title, it doesn't work: the bot doesn't respond and no error shows up. It works on my PC.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":70916764,"Users Score":0,"Answer":"The bot isn't working for me either. Better to just program your own, based on the code on Discord.py.","Q_Score":0,"Tags":"python,repl.it","A_Id":70992100,"CreationDate":"2022-01-30T16:45:00.000","Title":"Cogs don't work on repl.it discord.py (i used the method with flask to keep it always on)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am facing a strange issue: the Click Button \"locator\" is not working in the IE browser when I am executing from my laptop (i.e., not connected to an external monitor). 
It identifies the element and moves to the next step, but does not click it.\nThe catch is, if I connect my laptop to a monitor, the code works for IE as well. It fails only when I am running the script from the laptop alone. The same code works fine in Chrome on the laptop.\nHas anyone faced the same issue? Do I need to change any resolution setting?\nI am using Robot Framework + Python + Selenium. Sample code given below -\n${btn_Login} \/\/*[@id=\"btnLogin\"]\nClick Button ${btn_Login}","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":30,"Q_Id":70935378,"Users Score":2,"Answer":"It runs in IE after changing Scale and layout to 100% under the display settings.","Q_Score":0,"Tags":"python-3.x,selenium-webdriver,robotframework","A_Id":70938914,"CreationDate":"2022-02-01T03:49:00.000","Title":"Click button is not working for ie browser while executing from Laptop","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a lambda function lambda1 that gets triggered by an API call and computes the parameters for another job downstream that will be handled by a different function lambda2.\nThe resources required to complete the downstream job are not available immediately and will become available at some future time datetime1 which is also calculated by lambda1.\nHow do I make lambda1 schedule a message in an SNS topic that will be sent out at datetime1 instead of going out immediately? 
The message sent out at the correct time will then trigger lambda2, which will find all the resources in place and execute correctly.\nIs there a better way of doing this than SNS?\nBoth lambda1 and lambda2 are written in Python 3.8","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":121,"Q_Id":70936176,"Users Score":1,"Answer":"You would be better off using AWS Step Functions. Step Functions are generally used for orchestrating jobs with multiple Lambda functions involved, and they support the wait state that you need to run a job at a specific time.\nBasically, you will create multiple states. One of the states will be a wait state where you will input the wait condition (the timestamp at which it will stop waiting). This is what you will send from Lambda1. The next state would be a task state, which will be your Lambda2.","Q_Score":0,"Tags":"python-3.x,amazon-web-services,aws-lambda,amazon-sns","A_Id":70937211,"CreationDate":"2022-02-01T06:05:00.000","Title":"AWS Send SNS message from lambda at a specified time","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm writing a script to automate a task on Chrome. I'm using Python and Selenium. The problem is that the task requires clicking on a button in a Chrome extension popup to finish the task, but I can't invoke the click() method on a Chrome extension popup using Selenium.\nDo you have any idea how we can solve such a problem?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":47,"Q_Id":70937640,"Users Score":0,"Answer":"Alright, since no one answered my question I will answer how I solved this issue.\nAll I did was use the Pynput library to interact with the Chrome extension by simulating keyboard presses.\nTo toggle between pages and elements I had to use the Tab 
button, which converts to\nkeyboard = Controller() keyboard.press(Key.tab) keyboard.release(Key.tab)\nin Python code, and added some delay between each activity so it can run smoothly without any issues.\nIt might not be the best solution, but at least it works now.","Q_Score":0,"Tags":"python,selenium,automation,automated-tests","A_Id":71080715,"CreationDate":"2022-02-01T08:45:00.000","Title":"can't interact with popup chrome extention using Selenium in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"When using Selenium, I often use driver.find_element(By.XPATH, ) rather than driver.find_element(By.CSS_SELECTOR, ). I find it easy to copy the XPATH rather than understanding the HTML structure of the website.\nBut I had a little problem. Recently I noticed that most of my scripts using XPATH don't work because the XPATH tends to change. Is there a way to fix this problem? And is there a difference between xpath and full xpath?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":85,"Q_Id":70942965,"Users Score":1,"Answer":"This is a basic problem with screen scraping. The information on an HTML page is designed for human users, not for software access, and it will change over time based on the perceived needs of human users, ignoring the needs of screen scrapers.\nYou haven't said what you're using Selenium for. The two main uses are (a) software testing (to check that your software is displaying the screen correctly) and (b) scraping data from third-party web sites. 
The strategy for solving the problem is different for the two cases.\nFor testing, try to test as much of the functionality of your application as possible using unit tests that don't rely on looking at the HTML; only look at the HTML where you actually need to test the user interface. For those tests, you're going to have to face the fact that when the HTML changes, the tests have to change.\nFor extracting data from third-party web sites, use a published API to the data in preference to screen-scraping if you possibly can - even if you have to pay for access, it will be cheaper in the long run. Scraping the data off HTML pages is inefficient and it leaves you completely exposed to unannounced changes in the screen appearance.\nHaving said that, there are ways of writing XPath that make it more resilient to such changes. But only if you guess correctly what aspects of the page are likely to change, and what's likely to remain stable. It's not a difference between \"xpath\" and \"full xpath\" as you suggest; rather, there are different ways of writing XPath expressions to make them resilient to changes in the HTML. Clearly, for example, \/\/tr[td[1]='London']\/td[2] is more likely to keep working than \/\/div[3]\/div[1]\/table[9]\/tbody\/tr[43]\/td[2].\nBut the best advice is, if you want to write an application that's resilient to change, steer clear of screen scraping entirely.","Q_Score":1,"Tags":"python,selenium,xpath,css-selectors","A_Id":70945336,"CreationDate":"2022-02-01T15:16:00.000","Title":"XPATHs tend to change with time, making finding elements by XPATH not useful","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When using Selenium, I often use driver.find_element(By.XPATH, ) rather than driver.find_element(By.CSS_SELECTOR, ). 
I find it easy to copy the XPATH rather than understanding the HTML structure of the website.\nBut I had a little problem. Recently I noticed that most of my scripts using XPATH don't work because the XPATH tends to change. Is there a way to fix this problem? And is there a difference between xpath and full xpath?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":85,"Q_Id":70942965,"Users Score":1,"Answer":"You have to learn how to create correct locators.\nAutomatically generated locators are extremely fragile, which makes them almost useless - and this applies to both automatically created XPath and CSS Selector locators.\nCreating good locators will make your code much more stable, but still, any Selenium-based code needs maintenance after changes introduced by front-end developers, since they are changing the page structure and elements on the page.\nAs for XPaths, generally there are relative and absolute XPaths.\nAn absolute XPath defines a full, explicit path from the page top to the specific element node, while a relative XPath defines a short unique locator for some element node.","Q_Score":1,"Tags":"python,selenium,xpath,css-selectors","A_Id":70943168,"CreationDate":"2022-02-01T15:16:00.000","Title":"XPATHs tend to change with time, making finding elements by XPATH not useful","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I try to edit the roles of a Member through the member.edit(roles = list_of_roles) command. For most users this works just fine, but for some I get the error discord.ext.commands.errors.CommandInvokeError: Command raised an exception: Forbidden: 403 Forbidden (error code: 50013): Missing Permissions, although the bot HAS the highest role on the Server AND admin rights. 
(No, the role that I try to remove\/add is NOT higher than or at the same level as the bot role.) Is there anything I might have missed? I don't understand why it can't remove the roles from some members (the members don't have admin rights).","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":128,"Q_Id":70945500,"Users Score":0,"Answer":"After many hours I found out you can't remove the booster role. This is why it doesn't work how it should.","Q_Score":1,"Tags":"python,discord,discord.py","A_Id":70957201,"CreationDate":"2022-02-01T18:14:00.000","Title":"Discord.py missing permissions error with permissions","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How do I use SAML with TThreadedSelectorServer? I really cannot find any articles on it.\nIn my understanding, TThreadedSelectorServer is an advanced server based on NIO, so I want to use SAML with it","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":15,"Q_Id":70952669,"Users Score":0,"Answer":"I think you're misunderstanding what Thrift does: it's a way to create server-side stubs, but you still must write the logic yourself, including handling SAML.\nIf you have a more specific question, I think it'd be easier to offer advice on how to get started","Q_Score":0,"Tags":"python,java,thrift,thrift-protocol","A_Id":71100988,"CreationDate":"2022-02-02T08:45:00.000","Title":"How TThreadedSelectorServer support saml \uff1f","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to scrape an HTML element in a webpage. 
The contents of this element are generated by JavaScript and thus cannot be scraped by simply running a GET request:\nresponse = requests.get(url).\nI read in other posts that Selenium can be used to solve this issue, but it requires an actual browser installed and the use of the corresponding driver. This code is meant to be run on different machines that frequently change, and so I cannot write it so that it only works if a particular browser is installed.\nIf there is a way to scrape the JavaScript content without relying on a particular browser then that is what I'm looking for, no matter the module.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":413,"Q_Id":70990207,"Users Score":3,"Answer":"Aside from automating a browser, your other 2 options are as follows:\n\nTry to find the backend query that loads the data via JavaScript. There is no guarantee that it will exist, but open your browser's Developer Tools - Network tab - Fetch\/XHR and then refresh the page; hopefully you'll see requests to a backend API that loads the data you want. If you do find a request, click on it and explore the endpoint, the headers, and possibly the payload that is sent to get the response you are looking for; these can all be recreated in Python using requests to that hidden endpoint.\n\nThe other possibility is that the data is hidden in the HTML within a script tag, possibly as JSON... Open the Elements tab of your developer tools where you can see the HTML of the page, right click on the tag and click \"expand recursively\"; this will open every tag (it might take a second) and you'll be able to scroll down and search for the data you want. Ignore the regular HTML tags; we know it is loaded by JavaScript, so look through any \"script\" tag. 
If you do find it, then you can hopefully extract it in your script with a combination of Beautiful Soup (to get the script tag) and string slicing (to get out just the JSON).\n\n\nIf neither of those produces results, then try the requests_html package, and specifically the \"render\" method. It automatically installs a headless browser when you first run the render method in your script.\nWhat site is it? Perhaps I can offer more help if I can see it.","Q_Score":1,"Tags":"javascript,python,selenium,web-scraping","A_Id":70990553,"CreationDate":"2022-02-04T17:05:00.000","Title":"Python Scraping JavaScript page without the need of an installed browser","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I created a Telegram bot that forwards video files when the user selects Start. And I want them to be deleted automatically after 1 hour. 
Can anyone help me?\ncontext.bot.sendDocument(chat_id=update.message.chat_id, document='https:\/\/t.me\/mychanel\/2',caption=\"1\")\ncontext.bot.sendDocument(chat_id=update.message.chat_id, document='https:\/\/t.me\/mychanel\/3',caption=\"2\")\ncontext.bot.sendDocument(chat_id=update.message.chat_id, document='https:\/\/t.me\/mychanel\/4',caption=\"3\")\nHow can I auto-delete those?\u261d\u261d\u261d\u261d","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":76,"Q_Id":70997774,"Users Score":0,"Answer":"You can do\nmessage = context.bot.sendDocument(chat_id=update.message.chat_id, document='https:\/\/t.me\/mychanel\/2',caption=\"1\")\nand save the message.message_id with a timestamp in a data structure (a list or dict); after that you can schedule a periodic thread that calls\nbot.delete_message(chat_id, message_id) on the expired videos","Q_Score":0,"Tags":"telegram,telegram-bot,python-telegram-bot","A_Id":71002656,"CreationDate":"2022-02-05T11:56:00.000","Title":"deleted automatically telegram bot","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to implement Selenium in the backend code of a website we\u2019re building?\nI am building a website that scrapes data from several websites using Python as a backend, so I would like to know.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":25,"Q_Id":71007476,"Users Score":0,"Answer":"Scrape the data and store it in a database. 
And then display that scraped data from the database.\nRun scraping on a schedule to update your data.","Q_Score":0,"Tags":"python,selenium","A_Id":71008371,"CreationDate":"2022-02-06T13:10:00.000","Title":"Implement the Selenium library Python for a website backend","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have more than 10TB of content in my Google shared drive and I want to copy all that content to my OneDrive account. I tried mover.io but it's not working for me. What should I do? I also tried Google Colab but I couldn't find good Python code for it.\nIt's been two days and I couldn't find anything that works properly and fast.\nI don't want to use any 3rd-party mover like multi-cloud ...","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":100,"Q_Id":71017262,"Users Score":0,"Answer":"Create a temporary RDP through a GitHub Action; in that RDP you will get a high-speed internet connection, so download the files there and upload them to OneDrive.\nYou can also use rabb.it","Q_Score":0,"Tags":"javascript,python,google-drive-api,onedrive,movefile","A_Id":71017428,"CreationDate":"2022-02-07T10:48:00.000","Title":"Transfer files between google-drive and one drive","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Telethon to scrape members from a group. I can filter out active and non-active members, but when adding members to another group I mostly get UserPrivacyRestrictedError.\nBecause of that I usually get PeerFloodError after a few requests. 
Is there a way to get participants who do not have privacy settings enabled?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":118,"Q_Id":71018733,"Users Score":0,"Answer":"No. You can only find that out by trying to perform that action.","Q_Score":0,"Tags":"python,bots,telegram,telethon","A_Id":71039307,"CreationDate":"2022-02-07T12:40:00.000","Title":"Telethon check if user has privacy settings enabled","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How do we determine if a website uses an API or not? For example, if I want to scrape data from a particular website to create a database, should I check whether that website allows me to scrape data, or whether it offers an API? How can I check it?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":27,"Q_Id":71019392,"Users Score":1,"Answer":"The website owner's wishes should be put in a file called robots.txt. If you use a scraping framework like scrapy, reading and respecting this file is typically built into the framework. If not, you must parse and respect this file manually, which can be some work. 
Alternatively, read it manually if you're just scraping a handful of sites.\nAs an example, check out https:\/\/stackoverflow.com\/robots.txt (Stack Overflow's robots.txt file).","Q_Score":0,"Tags":"python,web-scraping","A_Id":71019613,"CreationDate":"2022-02-07T13:29:00.000","Title":"How do we determine if a website uses an API or not?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am completely lost because I have no idea anymore what I am doing wrong.\nI want to make a simple POST request to a certain address. I visited the website using Firefox, opened its developer tools, copied the POST request as cURL and executed it in the terminal. The received response has status 200 but its body is unreadable, like \"\ufffd\ufffd\ufffd\ufffd\ufffd\ufffdq9i\".\nBut when I use Postman->Import->cURL and execute the request it works?! Also status 200, but this time the body contains properly readable HTML just as expected.\nSo I thought maybe it's because Postman adjusted the request. So I opened the code panel to the right side of the program and exported Postman's request again as cURL, python - http.client and python - request, but none of them are working?! Again I just receive an unreadable body. How on earth can this happen?\nI'm using the same machine for all requests, and there is no VPN or anything, so it cannot be related to the IP address. There is no authentication or anything.\nThere is just maybe one hint I noticed: The response received in Postman is exactly one byte shorter than the one received in cURL or python. Could this be the problem? 
Is Postman handling the response's body differently?\nI appreciate any help a lot!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":142,"Q_Id":71019617,"Users Score":1,"Answer":"cURL is displaying the raw body of the response, while Postman and Firefox process the response. In your case, I suspect that you request a compressed response with a header like Accept-Encoding: gzip, deflate. If you remove that header, you will get the uncompressed response.\nIf there is no such header in your request, it would be good to see the request you are trying to execute.","Q_Score":0,"Tags":"python,curl,python-requests,postman,http.client","A_Id":71022073,"CreationDate":"2022-02-07T13:46:00.000","Title":"Postman POST request works but doesnt when exported to cURL \/ request \/ http.request","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a scrapy spider that scrapes product information from Amazon based on the product link.\nI want to deploy this project with streamlit, take the product link as web input, and return the product information as output data on the web.\nI don't know a lot about deployment, so can anyone help me with that?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":89,"Q_Id":71022331,"Users Score":1,"Answer":"You can create a public repository on GitHub with streamlit and connect your account with OAuth. 
Then you can deploy it on the Streamlit servers after signing in on the Streamlit website.","Q_Score":3,"Tags":"python,web-scraping,scrapy,streamlit","A_Id":71022799,"CreationDate":"2022-02-07T16:56:00.000","Title":"Deploy Scrapy Project with Streamlit","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using TCP sockets for server and client communication. The clients are multiple Raspberry Pis and the server is on Windows. My clients only connect to the server when they want to send a message; each client connects, sends, receives, and disconnects.\n\nBut my question is how do I communicate if there are 50 to 100 clients.\nIs it possible for all clients to connect to the server at the same time? If not, how many clients can connect to the server, and what does that depend on?\nCan anyone show a simple Python example of a TCP socket using multithreading to handle multiple clients?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":24,"Q_Id":71025136,"Users Score":0,"Answer":"It can be done.\nTCP is actually handled by your OS, not by userspace code. So the best thing you can do is to kindly ask your OS about it via the (system) calls it exposes.\nAs such, the OS knows about and is prepared for parallel calls from many user-space threads. It is protected by an appropriate locking mechanism to ensure serializability.\nWhat you should be concerned about is performance and error handling. What do you do if the Windows server you have does not accept a TCP connection? What would you do if it did accept, but TCP retransmission happens over and over again and you can't pass a message? What do you do if the Windows service forcibly terminates the connection? There are many questions to ask in distributed systems.\nPerformance is another story.
You haven't mentioned it, so I won't spend much time on it. Let me know if I should though, and I'll update the post.\n\nNow about Python specifics. Due to the \"global interpreter lock\" in CPython, multithreading with its Thread class does not make your code truly parallel; at best you get single-threaded multitasking, a.k.a. the simplest possible concurrency. That shouldn't be a big problem for you, since TCP is inherently IO-bound.\nThe simplest thing you can do is to spawn a dedicated Thread per session. That kinda implies long-lived sessions; otherwise it is a resource waster and a performance killer. Again, you did not mention the requirements, so I won't suggest anything.","Q_Score":1,"Tags":"python,multithreading,tcpsocket","A_Id":71025265,"CreationDate":"2022-02-07T20:43:00.000","Title":"How do I handle 50 clients to server using multithreading","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to communicate with a vehicle control unit (VCU) over can. I have figured out the commands (index, data and frequency) and can verify the functionality through PCanView on Windows. Now I am using Nvidia Xavier system with python-can library to send the same commands, and I can verify the commands with candump. However when I power the vehicle engine on while sending these commands, the canbus freezes (this is when the VCU starts expecting the can commands I am sending, it goes into fault state if it doesn't receive the data it expects)\nI have successfully used python-can in the past to talk to other can devices and I am confident about the correctness of the code itself.\nHardware connection is fine too, because I can receive non-VCU messages from the vehicle. I can also receive VCU messages after I restart the canbus.\nWhat could be causing the bus to freeze?
And is there a way to prevent it? (By setting some config in the socket-can layer itself?)\nPlease note that restarting the bus will not fix the problem as the vehicle cannot recover once it goes into fault without a restart.\nAny help will be appreciated!","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":77,"Q_Id":71026164,"Users Score":0,"Answer":"Ok, it turns out it was a hardware problem. The length of CAN cables was a bit too much. The bus receives a lot of data transmission when the vehicle is turned on and the CAN cable was flooded with data. I still don't know the mechanics of the fault but decreasing the cable length made it all work.","Q_Score":0,"Tags":"linux-kernel,can-bus,socketcan,python-can","A_Id":71134010,"CreationDate":"2022-02-07T22:21:00.000","Title":"Canbus freezes - how to ignore error frames?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to communicate with a vehicle control unit (VCU) over can. I have figured out the commands (index, data and frequency) and can verify the functionality through PCanView on Windows. Now I am using Nvidia Xavier system with python-can library to send the same commands, and I can verify the commands with candump. However when I power the vehicle engine on while sending these commands, the canbus freezes (this is when the VCU starts expecting the can commands I am sending, it goes into fault state if it doesn't receive the data it expects)\nI have successfully used python-can in the past to talk to other can devices and I am confident about the correctness of the code itself.\nHardware connection is fine too, because I can receive non-VCU messages from the vehicle. I can also receive VCU messages after I restart the canbus.\nWhat could be causing the bus to freeze? 
And is there a way to prevent it? (By setting some config in the socket-can layer itself?)\nPlease note that restarting the bus will not fix the problem as the vehicle cannot recover once it goes into fault without a restart.\nAny help will be appreciated!","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":77,"Q_Id":71026164,"Users Score":0,"Answer":"The cable length could be the reason, but take care about the bus topology and especially where the CAN terminations are located.","Q_Score":0,"Tags":"linux-kernel,can-bus,socketcan,python-can","A_Id":71430529,"CreationDate":"2022-02-07T22:21:00.000","Title":"Canbus freezes - how to ignore error frames?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a database of thousands of files online, and I want to check what their status is (e.g. if the file exists, if it sends us to a 404, etc.) and update this in my database.\nI've used urllib.request to download files to a python script. However, obviously downloading terabytes of files is going to take a long time. Parallelizing the process would help, but ultimately I just don't want to download all the data, just check the status. Is there an ideal way to check (using urllib or another package) the HTTP response code of a certain URL?\nAdditionally, if I can get the file size from the server (which would be in the HTTP response), then I can also update this in my database.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":34,"Q_Id":71040199,"Users Score":2,"Answer":"If your web server is standards-based, you can use a HEAD request instead of a GET. 
It returns the same status without actually fetching the page.","Q_Score":0,"Tags":"python,python-3.x,http,urllib","A_Id":71040218,"CreationDate":"2022-02-08T20:02:00.000","Title":"How to check HTTP status of a file online without fully downloading the file?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hi I am using the google calendar API for a school project and my tokens have expired and I don't know how to refresh them. Please help!!!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":28,"Q_Id":71055074,"Users Score":0,"Answer":"Generally, when a token expires, make a new one using the same process and replace the old one in your code with the new one.","Q_Score":1,"Tags":"python,api,calendar","A_Id":71055412,"CreationDate":"2022-02-09T18:46:00.000","Title":"Refresh token for google calendar API","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to save mail via Python and get this error\n\n-2147352567, 'Exception occurred.', (4096, 'Microsoft Outlook', 'Unable to write to file: C:\\...\\ docs. Right-click the folder containing the file you want to write to, and then select 'Properties' from the menu and check your permissions for this folder.'\n\nAccount has all the permissions and access to the folder.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":85,"Q_Id":71069259,"Users Score":1,"Answer":"Please show your code. Make sure you pass a fully qualified file name that includes both the path and the file name, not just a path or a file name. 
It looks like you are only passing the path (C:\\docs).","Q_Score":0,"Tags":"python,file-io,outlook,win32com","A_Id":71070610,"CreationDate":"2022-02-10T17:01:00.000","Title":"Error saving e-mail from outlook via Python win32com.client: Unable to write to file","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to save mail via Python and get this error\n\n-2147352567, 'Exception occurred.', (4096, 'Microsoft Outlook', 'Unable to write to file: C:\\...\\ docs. Right-click the folder containing the file you want to write to, and then select 'Properties' from the menu and check your permissions for this folder.'\n\nThe account has all the permissions and access to the folder.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":85,"Q_Id":71069259,"Users Score":0,"Answer":"The system drive C:\\ requires admin privileges for writing. Try choosing another drive or folder.","Q_Score":0,"Tags":"python,file-io,outlook,win32com","A_Id":71069323,"CreationDate":"2022-02-10T17:01:00.000","Title":"Error saving e-mail from outlook via Python win32com.client: Unable to write to file","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to scrape a website that requires me to first fill out certain dropdowns. However, most of the dropdown selections are hidden and only appear in the DOM tree when I scroll down WITHIN the dropdown.
Is there a solution I can use to somehow mimic a scroll wheel, or are there other libraries that could complement Selenium?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":57,"Q_Id":71080322,"Users Score":0,"Answer":"requests and BeautifulSoup are two Python libraries that can assist with scraping data. They allow you to fetch the URL and parse the HTML.\nIn order to inspect a specific part of a website, you just need to right-click the item you want to scrape and choose Inspect. This will reveal the hidden paths you speak of to that specific tag.","Q_Score":1,"Tags":"python,selenium,web-scraping","A_Id":71104959,"CreationDate":"2022-02-11T12:46:00.000","Title":"how to scrollIntoView() inside a specific dropdown(div) in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I keep the Telegram auto-message bot I made with the Python API running when I turn off my computer?\nOtherwise, the terminal exits and the bot closes automatically.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":45,"Q_Id":71082428,"Users Score":0,"Answer":"Are you asking why the bot no longer functions when your computer turns off and the Python file stops running?\nYou would need either a server or another device to run it, should you wish to turn your computer off.\nI'm not sure if I misunderstood this or not.","Q_Score":0,"Tags":"python,telegram,telethon,py-telegram-bot-api","A_Id":71082488,"CreationDate":"2022-02-11T15:17:00.000","Title":"How can I use the Telegram automatic message bot when I am not there?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and
DevOps":0,"Web Development":0},{"Question":"I need to use a function from a Node.js library (import { signLimitOrder } from '@sorare\/crypto';) from a Python script.\nIs there a way to do such a thing?\nI am trying to use the subprocess Python library in order to launch a node command, but I am not sure if that is the best approach.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":36,"Q_Id":71102628,"Users Score":1,"Answer":"Sure, the subprocess module will work.\nOr you could run a REST \/ gRPC server from Node and communicate with that.","Q_Score":0,"Tags":"python,node.js,subprocess","A_Id":71102780,"CreationDate":"2022-02-13T16:22:00.000","Title":"How to use a function from a nodejs module inside a Python script","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm making a music bot for my Discord server and I want it to run 24\/7 on repl.it. When I run it on my computer I add executable=\".\/ffmpeg.exe\" to the from_probe function. However, Replit doesn't support executable files, so I need to find another way to make this work. I tried installing the ffmpeg package, and I also looked for tutorials on how to use ffmpeg-python with youtube_dl. None of these worked. If you need some additional info, just ask me in the comment section.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":160,"Q_Id":71104685,"Users Score":0,"Answer":"It's not possible on Replit.
FFmpeg was working on Replit before but does not work now; you could possibly find another module to play music.","Q_Score":0,"Tags":"python,ffmpeg,discord.py,youtube-dl,replit","A_Id":71295381,"CreationDate":"2022-02-13T20:42:00.000","Title":"How to run a music bot on replit (discord.py)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How are you guys?\nSo to get straight to the point: I wanted to create a Discord bot using Python, but when I try to install the discord module by typing pip install discord, since I'm using a PC without admin rights I get an error saying that it can't launch this program (pip).\nSo I'm here to ask if anyone knows how I can install the Discord module in Python without pip?\nThanks in advance!\n~Sami","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":72,"Q_Id":71164527,"Users Score":0,"Answer":"It depends on which IDE you are using.\nFor example, you can download PyCharm if you don't already have it, and you can install the package from PyCharm itself.\nYou can do that by going to File \u279c Settings \u279c Project \u279c Python Interpreter \u279c, clicking the + sign above the package list, and there you will be able to search for any package you want, like discord.py.","Q_Score":0,"Tags":"python,module,pip,discord,discord.py","A_Id":71164657,"CreationDate":"2022-02-17T19:48:00.000","Title":"How can I add the Discord module in python without using pip?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've written a Python program that reads data from GCP, GKE etc., compiles some of it into a spreadsheet and replaces
placeholder text in a slide for reporting purposes.\nI have several functions that call the slide text-replacing function (which uses the replaceAllText method of the Slides API). It currently makes around 70 calls to that function, and so 70 requests.\nI know that I can use the batch.add() method to make fewer requests, but here is my problem: the replacing function is called a lot, so I tried to use a global variable for my batch object definition: batch_slide = SLIDE_SERVICE.new_batch_http_request()\nIn the replacing function I used the global keyword in front of the variable. In the main function I execute the batch when I need to, but now the result in the slide is chaotic.\nWhen I used one request per replacement, everything was perfect, every field replaced correctly, but now with one batch it's not. It seems random: as I run it multiple times, the replaced fields change every time.\nI put wait times after the execution to maybe give it time to replace before doing other stuff, but that doesn't seem to fix it. I haven't yet found how to inspect my batch_slide object; it's not iterable.\nAny help is appreciated; ask for more details if needed.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":21,"Q_Id":71172115,"Users Score":0,"Answer":"So for anyone wanting to do something similar, here is how I did it.\nI used a global variable for the request instead of the batch object, and when needed I call a batch execute with the big request.\nThe request will maybe take a bit more time to process, but you only make one this way, so there is no risk of timeouts.
Just stack the request list.","Q_Score":0,"Tags":"python,google-api,google-slides-api","A_Id":71555659,"CreationDate":"2022-02-18T10:42:00.000","Title":"Google SLIDE API global batch.add request Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am making a Discord bot in discord.py that doesn't react to commands, simply providing information. client = commands.Bot() throws an error if I don't have command_prefix = '' in it. Is there a way to bypass this?","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":97,"Q_Id":71215506,"Users Score":0,"Answer":"You should use client = discord.Client(), since the Bot class adds the command functionality, which you don't want. This is the intended way, and you can still access the other functionality.","Q_Score":0,"Tags":"python,discord,discord.py","A_Id":71223652,"CreationDate":"2022-02-22T03:19:00.000","Title":"Can I make a discord bot with discord.py without a prefix?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using coc.nvim in neovim together with Pylint.\nIf I try to import my own module e.g. src.reverse_linked_list or an installed module like selenium, CoC displays the error message\n[pylint E0401] [E] Unable to import 'xxxxx' (import-error)\nI double checked that __init__.py is in my directories.\nRunning the code does not lead to any errors.\nDoes anyone know how to fix this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":161,"Q_Id":71291601,"Users Score":0,"Answer":"The Python path needs to be the same when running pylint inside neovim as when running the code.
The import using src.reverse_linked_list is suspicious in this regard; src is not generally part of the import path.","Q_Score":0,"Tags":"python,pylint,neovim","A_Id":71292862,"CreationDate":"2022-02-28T07:10:00.000","Title":"PyLint not recognizing modules but code runs fine","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose I have sent a POST request from React to a Django REST API and that request is long-running. How can I find out what percentage of it has been processed and send that to the frontend without sending the real response?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":58,"Q_Id":71294020,"Users Score":1,"Answer":"There are two broad ways to approach this.\n\n(which I would recommend starting with): Break the request up. The initial request doesn't start the work; it sends a message to an async task queue (such as Celery) to do the work. The response to the initial request is the ID of the Celery task that was spawned. The frontend can now use that task ID to poll the backend periodically to check if the task is finished and grab the results when they are ready.\n\nWebSockets, wherein the connection to the backend is kept open across many requests, and either side can initiate sending data.
I wouldn't recommend this to start with, since it's not really how Django is built, but with a higher level of investment it will give an even smoother experience.","Q_Score":0,"Tags":"python,django,api,django-rest-framework","A_Id":71294294,"CreationDate":"2022-02-28T11:02:00.000","Title":"How to send partial status of request to frontend by django python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a Windows batch file that says:\n\"C:\\Users\\\\Anaconda3\\Scripts\\pdoc3.exe\" \nIn the past, this launched the pdoc web server and a browser window. However, I just switched Python distributions (now using pdoc 0.9.2 on Anaconda). Now, this same batch file (just with the executable path updated) dumps part of my documentation to the console and returns without launching a web server. However, my other batch file:\n\"C:\\Users\\\\Anaconda3\\Scripts\\pdoc3.exe\" -o .\/docs \nworks fine.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":38,"Q_Id":71298913,"Users Score":1,"Answer":"OK, this is kinda dumb, but I had installed pdoc3 instead of pdoc by mistake. Removing pdoc3 and installing pdoc solved the problem.\n(I had to use pip instead of conda.
The provided install string conda install -c auto pdoc didn't work.)","Q_Score":0,"Tags":"python,batch-file,webserver","A_Id":71344298,"CreationDate":"2022-02-28T17:38:00.000","Title":"Python: pdoc doesn't launch browser","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to connect to the Microsoft Graph API.\nI have provided the relevant Graph API credentials for authentication.\nIt doesn't seem to connect and returns the following error:\nrequests.exceptions.ConnectionError: HTTPSConnectionPool(host='login.microsoftonline.com', port=443): Max retries exceeded with url: \/---(tenant-id)---\/oauth2\/v2.0\/token (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 11001] getaddrinfo failed')) \nOne thing to note is that this is in a corporate environment. I have set the proxy to allow connections to the address, but it still fails to connect. Any help would be highly appreciated. Thanks.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":631,"Q_Id":71311755,"Users Score":0,"Answer":"Make sure you imported requests. Are you using code like the below to establish a new connection?\nimport requests\npage = requests.get('https:\/\/login.microsoftonline.com')","Q_Score":0,"Tags":"python,python-requests,microsoft-graph-api","A_Id":71319435,"CreationDate":"2022-03-01T16:35:00.000","Title":"Graph API - Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm building a basic Instagram bot, and it got me wondering: how could I get it to auto-run at a certain time, or when a certain condition is satisfied (e.g.
when there's a new file in a folder)?\nHelp much appreciated.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":26,"Q_Id":71342166,"Users Score":1,"Answer":"Make a while loop and check your condition each time. I suggest you put in some kind of sleep (e.g. asyncio.sleep() or time.sleep()), and if the condition is true then run the bot.","Q_Score":0,"Tags":"python,selenium,selenium-webdriver,automation,bots","A_Id":71343448,"CreationDate":"2022-03-03T18:37:00.000","Title":"How can I get a bot to auto-run at a certain time, or when a certain condition is satisfied?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Do I have to read data from a TCP socket with a fixed buffer size? I know the data packets are separated with \\n, so is there any way to get data from the server until I hit \\n?\nPython's socket package has a recv() method accepting the buffer size as the only parameter.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":53,"Q_Id":71342495,"Users Score":0,"Answer":"No; TCP doesn\u2019t provide any message-framing functionality.
recv() will always just give you as many bytes as it can (up to the limit you specified), and then it is up to your code to handle those bytes appropriately, regardless of how many (or how few) were passed to you by any given recv() call.\nFor example, you might add the received bytes to the end of a buffer, then search the buffer for the first newline character; if you find one, remove it and whatever precedes it from the buffer and handle the removed text; repeat until no more newlines are found in the buffer.","Q_Score":0,"Tags":"python,sockets,tcp,buffer,tcpclient","A_Id":71344054,"CreationDate":"2022-03-03T19:05:00.000","Title":"Receiving TCP socket data by buffer","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have created a testing Telegram bot with some commands, but the username is not what I want.\nI found that the username of my bot cannot be changed, so I need to create a new Telegram bot.\nIs there any method to copy all the existing commands of the old bot to the new bot, instead of creating all the commands again in the new bot?\nOr is there any method to change the username of the old bot?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":71351039,"Users Score":0,"Answer":"From my experience, unfortunately I say no: it is not possible to copy commands from another bot, and it is not possible to change the username of the bot; it is only possible to change its name.\nThis answer refers to the Bot API 5.7 (the latest release at the moment).","Q_Score":0,"Tags":"telegram,telegram-bot,python-telegram-bot,py-telegram-bot-api","A_Id":71353921,"CreationDate":"2022-03-04T11:54:00.000","Title":"Is it possible to copy all the commands of a previous telegram bot to a new bot","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop
Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose you want to call some API (open without any KEY needed) which has some rate limit per second (10 requests max per second). Suppose now, I am calling a function (with multiprocessing.Pool) which request data from API and process it. Is there some way to switch ip in order to not get blocked? Maybe a list of ip\/ proxies. Can somebody tell me a way to get this done?\nThanks!","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":85,"Q_Id":71364551,"Users Score":0,"Answer":"There certainly are a lot of hacks you can use to get around rate limiting, but you should take a moment and ask yourself 'should I?' You'd likely be violating the terms of service of the service, and in some jurisdictions you could be opening yourself to legal action if you are caught. Additionally, many services implement smart bot detection which can identify bot behavior from request patterns and block multiple IPs. In extreme cases, I've seen teams block the IP range for an entire country to punish botting.","Q_Score":0,"Tags":"python,proxy,multiprocessing","A_Id":71364665,"CreationDate":"2022-03-05T17:51:00.000","Title":"Python Multiprocessing Rate Limit","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose you want to call some API (open without any KEY needed) which has some rate limit per second (10 requests max per second). Suppose now, I am calling a function (with multiprocessing.Pool) which request data from API and process it. Is there some way to switch ip in order to not get blocked? Maybe a list of ip\/ proxies. 
Can somebody tell me a way to get this done?\nThanks!","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":85,"Q_Id":71364551,"Users Score":0,"Answer":"You unfortunately can't retrieve data from an API and do multiprocessing with different IPs, because the API will already have your IP assigned to its request. Some web pages also return HTTP 429 errors, which means it is possible to get timed out for sending too many requests.","Q_Score":0,"Tags":"python,proxy,multiprocessing","A_Id":71364643,"CreationDate":"2022-03-05T17:51:00.000","Title":"Python Multiprocessing Rate Limit","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to use the simple-salesforce Python package to make SOQL calls, but the Salesforce link I am trying to connect to is a non-prod environment (uat.lightning.force.com). However, I keep running into an \"AttributeError\" exception.\nDo you guys know if there is a workaround?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":52,"Q_Id":71404929,"Users Score":0,"Answer":"Don't use the lightning domain. Use the generic test.salesforce.com (unless the admin blocked it; you can check in Setup -> My Domain).
Or productiondomain--uat.my.salesforce.com.\nIf you still get errors - check login history and post your connection code?","Q_Score":0,"Tags":"python,salesforce,simple-salesforce","A_Id":71405365,"CreationDate":"2022-03-09T05:59:00.000","Title":"Use simple Salesforce package to make SOQL calls using custom salesforce URL as a parameter","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any way to signup to Instagram with Python selenium\/requests or anything else? I've used selenium before but can't even select drop down lists in HTML for date of birth!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":31,"Q_Id":71405293,"Users Score":0,"Answer":"I will give you a hint, first click on the dropdown list with:\ndriver.find_element(By.XPATH, 'dropdown list xpath').click()\nand then you will be able to see the date html and select\/click on it, repeat for day month and year\nIf you can't find the dropdown list html, inspect element, click on the mouse on a box icon and click on the dropdown list, it will automatically show you the html code position. Then you can also right-click, copy and then copy XPATH to directly get the xpath.","Q_Score":0,"Tags":"python,selenium,instagram","A_Id":71431640,"CreationDate":"2022-03-09T06:45:00.000","Title":"sign up in Instagram with python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have deployed the python flask based app in aws. It is running fine on http:\/\/. I need to convert this to https. I have sent request for admin to enable port 443 for https. 
Will that automatically make my app use https, or do I need to install or set up something else to make it happen? Please help. Thanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":26,"Q_Id":71409245,"Users Score":0,"Answer":"You have multiple choices for this:\nUse ACM (easiest?!):\nIf you're using AWS load balancers, you can create a certificate using the ACM service, assign it to your load balancer, and modify your Target Groups in the EC2 panel.\nIf you are using CloudFront, you can also configure your SSL\/TLS there (not changing the load balancer and target groups). It will work as an upper layer.\nUse other certificate providers than AWS ACM:\nYou can set up something like Let's Encrypt or use Cloudflare services.\nNote: it really depends on how your cloud stack currently is; you may only be deploying on an EC2 server with Nginx configured and have everything else done outside of AWS with other services, or you can have a Let's Encrypt certificate on your ALB.\nThis post just gives you some keywords; you can search and find exact instructions\/tutorials for each solution.","Q_Score":0,"Tags":"python-3.x,amazon-web-services,api,flask","A_Id":71413553,"CreationDate":"2022-03-09T12:18:00.000","Title":"How to convert http to https api url deployed in aws","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"The behavior is random.\nI made pcaps with Wireshark and tcpdump. Both show the packet lengths correctly.\nWhen I do sock.recv, I randomly receive data from 2 consecutive packets. The behaviour is rare: out of approximately 100 packets, 1-2 recv calls contain data from 2 consecutive packets.\nThe packets are sent very fast. Some are received below 1 ms apart. 
However, this is not a good indicator, because other packets received with a similar time difference are read correctly.\nThe socket is AF_INET, SOCK_STREAM, non-blocking, and it is implemented using selectors.\nThe socket is a client.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":30,"Q_Id":71413110,"Users Score":0,"Answer":"As @jasonharper says, TCP is a protocol that provides you with a stream of bytes. It doesn't provide you with packets.\nIf you have a protocol that runs over TCP, and that has a notion of individual packets, there is no guarantee that a single chunk of bytes delivered on a TCP socket will begin at the beginning of a higher-level packet or will end at the end of the same higher-level packet. A packet may be broken up between two chunks, and a chunk may include bits of more than one packet. The only guarantee you get from TCP is that the data you get is received in the order in which it's transmitted.\nAs noted in the comment, protocols that run atop TCP generally either use some form of terminator to mark the end of a packet or put a byte count at the beginning of a packet to indicate how many bytes are in the packet.","Q_Score":1,"Tags":"python","A_Id":71431069,"CreationDate":"2022-03-09T16:54:00.000","Title":"Why does Python sock.recv receives data from 2 different packets?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hi, I was trying to create a Bybit trading bot, and when I started testing it, it stopped working and keeps throwing the same error (10003): Invalid API key. I have checked the key several times and it was correct. 
Do you know what could be the reason for that?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":197,"Q_Id":71451240,"Users Score":0,"Answer":"While generating the API key, there is an option at the bottom for whitelisted IP addresses which would be allowed to access the server with the generated API keys. It is possible that you might not have whitelisted your IP address.","Q_Score":0,"Tags":"python,api,pycrypto,bybit","A_Id":71716514,"CreationDate":"2022-03-12T16:17:00.000","Title":"Bybit API Python Invalid API key","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Can you help me? I have a problem with my Python project: I have to start a socket that listens as a server at startup, but then, when I want, I have to be able to connect to administer the server. How can I create a socket that starts at startup and still be able to interact with the program when I want?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":16,"Q_Id":71466647,"Users Score":0,"Answer":"I just want to ask if there's a method to start the socket.bind at startup and the actual interactive program after.","Q_Score":0,"Tags":"python,python-3.x,sockets,server,startup","A_Id":71473132,"CreationDate":"2022-03-14T10:58:00.000","Title":"Python run socket at startup and the rest of the program when i want","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to convert a ROS Image to a stream (not a video format, e.g. mp4) and send it to the Janus WebRTC server with Python. 
The reason why I need a stream is that the params of 'MediaPlayer' in the aiortc module, which helps communicate with the Janus (WebRTC) server, can only receive a video format or a stream format (e.g. \/dev\/video0), and I should not save it as a video format.\nSo what I am thinking now is converting the ROS image to a GStreamer stream.\nHow can I convert it? Or is there any good solution?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":62,"Q_Id":71476335,"Users Score":0,"Answer":"Try using web_video_server or jpeg_streamer.","Q_Score":0,"Tags":"python,stream,webrtc,ros,janus","A_Id":71643230,"CreationDate":"2022-03-15T02:24:00.000","Title":"How can I convert Ros images to stream?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to create my first Twitter bot using Python.\nIts job is to check Twitter for new tweets that meet a certain condition, and then to retweet that post.\nHowever, the program keeps finding old posts that meet that condition, but I'm only interested in tweets posted AFTER the bot starts.\nIs there something I can do about this?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":113,"Q_Id":71483626,"Users Score":1,"Answer":"Streaming with Twitter API v1.1 or v2 will only return new real-time Tweets.","Q_Score":0,"Tags":"python,twitter,bots,streaming,tweepy","A_Id":71486570,"CreationDate":"2022-03-15T14:05:00.000","Title":"How can I stream only NEW tweets with tweepy","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to create my first Twitter bot using Python.\nIts job is to check Twitter for new tweets that meet a certain 
condition, and then to retweet that post.\nHowever, the program keeps finding old posts that meet that condition, but I'm only interested in tweets posted AFTER the bot starts.\nIs there something I can do about this?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":113,"Q_Id":71483626,"Users Score":0,"Answer":"I found a way to do this.\n1a. I used the datetime module to get my current system time.\n1b. I converted this to a string and pulled only the date and time.\n2a. I collected the tweet timestamp for each tweet, \"tweet.created_at\".\n2b. I converted this to a string and pulled only the date and time.\n3. For each tweet that was found by the Stream service, I checked to see if it was older than my current date.\n4. If it was, I skipped it and went to the next.\nFrom my search, I think this is the only way to do this.","Q_Score":0,"Tags":"python,twitter,bots,streaming,tweepy","A_Id":71485554,"CreationDate":"2022-03-15T14:05:00.000","Title":"How can I stream only NEW tweets with tweepy","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to regex a list of URLs.\nThe link format looks like this:\nhttps:\/\/sales-office.ae\/axcapital\/damaclagoons\/?cm_id=14981686043_130222322842_553881409427_kwd-1434230410787_m__g_&gclid=Cj0KCQiAxc6PBhCEARIsAH8Hff2k3IHDPpViVTzUfxx4NRD-fSsfWkCDT-ywLPY2C6OrdTP36x431QsaAt2dEALw_wcB\nThe part I need:\nhttps:\/\/sales-office.ae\/axcapital\/damaclagoons\/\nI used to use this:\nre.findall(':\/\/([\\w\\-\\.]+)', URL)\nHowever, it gets me this:\nsales-office.ae\nCan you help, please?","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":45,"Q_Id":71488504,"Users Score":0,"Answer":"Based on your example, this looks like it would 
work:\n\\w+:\/\/\\S+\\.\\w+\\\/\\S+\\\/","Q_Score":0,"Tags":"python,regex","A_Id":71488596,"CreationDate":"2022-03-15T20:19:00.000","Title":"how to regex this link?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I try to intercept requests with seleniumwire.\nIf I don't use the --user-data-dir option, everything is fine. All requests are shown by driver.requests.\nBut I need to parse some sites with authentication, so I provide a profile with remembered accounts in the --user-data-dir option. But in this case HTTPS requests are not intercepted.\nThe driver.requests command shows only requests to google-ads and some other trash.\nSo how do I intercept HTTPS requests while providing a profile?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":23,"Q_Id":71539741,"Users Score":0,"Answer":"I had to disable all proxy extensions:\noptions.add_argument(\"--disable-extensions\")","Q_Score":0,"Tags":"python-3.x,selenium,https,selenium-chromedriver,seleniumwire","A_Id":71563645,"CreationDate":"2022-03-19T16:03:00.000","Title":"Seleniumwire dont intercept HTTPS if user-data-dir is defined","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"In Python 3.9 I wrote a TCP server that never calls receive(), and a client that sends 1KB chunks to the server. Beforehand I'm setting send and receive buffer sizes in the KB range.\nMy expectation was to be able to send (send-buffer + receive-buffer) bytes before send() would block. 
However:\n\nOn Windows 10: send() consistently blocks only after (2 x send-buffer + receive-buffer) bytes.\nOn Raspberry Pi Debian GNU\/Linux 11 (bullseye):\n\nSetting buffer sizes (with setsockopt) results in twice the buffer size (as reported by getsockopt).\nsend() blocks after roughly (send-buffer + 2 x receive-buffer) bytes with respect to the buffer sizes set with setsockopt.\n\n\n\nQuestions: Where does the \"excess\" data go? How come the implementations behave so differently?\nAll tests were done on the same machine (win->win, raspi->raspi) with various send\/receive buffer sizes in the range 5 - 50 KB.","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":34,"Q_Id":71541204,"Users Score":-1,"Answer":"TCP is a byte stream, there is no 1:1 relationship between sends and reads. send() copies data from the sender's buffer into a local kernel buffer, which is then transmitted to the remote peer in the background, where it is received into a kernel buffer, and finally copied by receive() into the receiver's buffer. send() will not block as long as the local kernel still has buffer space available. In the background, the sending kernel will transmit buffered data as long as the receiving kernel still has buffer space available. receive() will block only when the receiving kernel has no data available.","Q_Score":0,"Tags":"python,sockets,tcp,buffer","A_Id":71541508,"CreationDate":"2022-03-19T19:07:00.000","Title":"How many bytes can be send() over tcp without ever receive(), before send() blocks -- dependent on buffer sizes?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"In my company intranet, any request to an external website X on the Internet will be redirected to an internal page containing a button that I have to click on. 
Then the external website X on the Internet will be loaded.\nI want to write a program that automatically clicks this button for me (so I don't have to click it manually). After that, the program will make the browser redirect to a re-configured website Y (not X) for the purpose of security testing.\nI don't have much experience with Python, so I would be really thankful if someone could tell me how I can write such a program.\nMany thanks","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":44,"Q_Id":71624216,"Users Score":0,"Answer":"Python has the Selenium and BS4 libraries to help you out, but if you are not experienced with Python, you might as well pick up Node.js and Puppeteer; it's far superior in my opinion.","Q_Score":0,"Tags":"python,python-3.x,google-chrome,browser,microsoft-edge","A_Id":71624264,"CreationDate":"2022-03-25T23:48:00.000","Title":"How to write a python program to interact with the browser?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a project to convert Python code that's using the soon-to-be-discontinued AdWords API to the new version, Google Ads API v10.\nI have a query that needs a few metrics, but when I use the main customer ID that works to connect, I get a REQUESTED_METRICS_FOR_MANAGER error saying I need to \"issue separate requests against each client account under the manager account\".\nHow do I generate a client account to do this? 
I haven't seen any examples of this step.\nThanks much!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":107,"Q_Id":71670081,"Users Score":0,"Answer":"The account IDs I needed were in a local MySQL database that a colleague on the project guided me to look at.\nSo in general: check with your management and colleagues ...","Q_Score":0,"Tags":"python-3.x,google-ads-api","A_Id":71683533,"CreationDate":"2022-03-30T00:02:00.000","Title":"Google ads api v10: how to generate \"client account\" to use to get metrics?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm making a program that does face detection. When it detects an unknown face, it saves the frame and sends a text to a list of phone numbers. The issue I'm facing is that I would like to attach the jpg file that is saved (of that frame) to the text. However, the way I'm sending the text, it needs to be a URL. I haven't done much with cloud hosting, which I think would be the easiest way to get a URL for it. What would be a good place to do that that's free? Also, it does have to be an http\/https URL; I tried using the local file URL, but it does actually have to be a web one. Would cloud hosting be the easy option, or would it be easier to just host a local HTML site that holds the image? And how would I go about hosting it in Python if that is the easier option?\nI've looked for cloud hosting options, but I'm not really sure what to use that's free, and how to go about doing it from Python.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":16,"Q_Id":71680816,"Users Score":0,"Answer":"I believe that recommending services is not allowed on this site, but you do need to have it hosted online, because when you send an image it usually doesn't send the actual file. 
It just tells the recipient where to look for the file, so both you and the receiver need to have access to the site. A way that I know could work is getting a Firebase sub-domain; that would probably be more work than needed, but it should be a solid solution if you aren't planning on sending over 10 of them a day or having 50 people look at them.","Q_Score":0,"Tags":"python,cloud,web-hosting,detection,face","A_Id":71680925,"CreationDate":"2022-03-30T16:24:00.000","Title":"Getting a URL from a JPG file in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am very new to this and I have tried to look for the answer to this but was unable to find any.\nI am using Selenium+chromedriver, trying to monitor some items I am interested in.\nExample:\na page with 20 items in a list.\nCode:\n#list of items on the page\nsearch_area = driver.find_elements_by_xpath(\"\/\/li[@data-testid='test']\")\nsearch_area[19].find_element_by_xpath(\"\/\/p[@class='sc-hKwDye name']\").text\n\nthis returns the name of item[0]\n\nsearch_area[19].find_element_by_css_selector('.name').text\n\nthis returns the name of item[19]\n\nWhy is the XPath looking at the parent HTML?\nI want the XPath to return the name of the item within the WebElement\/list item. Is it possible?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":71714931,"Users Score":0,"Answer":"Found the answer: add a . 
in front.\nHope this is gonna help someone new like me in the future.\nFrom:\nsearch_area[19].find_element_by_xpath(\"\/\/p[@class='sc-hKwDye name']\").text\nto:\nsearch_area[19].find_element_by_xpath(\".\/\/p[@class='sc-hKwDye name']\").text","Q_Score":0,"Tags":"python,selenium,xpath","A_Id":71715485,"CreationDate":"2022-04-02T04:40:00.000","Title":"xpath to check only within WebElement in Selenium \/ Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am very new to this and I have tried to look for the answer to this but was unable to find any.\nI am using Selenium+chromedriver, trying to monitor some items I am interested in.\nExample:\na page with 20 items in a list.\nCode:\n#list of items on the page\nsearch_area = driver.find_elements_by_xpath(\"\/\/li[@data-testid='test']\")\nsearch_area[19].find_element_by_xpath(\"\/\/p[@class='sc-hKwDye name']\").text\n\nthis returns the name of item[0]\n\nsearch_area[19].find_element_by_css_selector('.name').text\n\nthis returns the name of item[19]\n\nWhy is the XPath looking at the parent HTML?\nI want the XPath to return the name of the item within the WebElement\/list item. Is it possible?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":71714931,"Users Score":0,"Answer":"What you are passing in find_element_by_xpath(\"\/\/p[@class='sc-hKwDye name']\") is a relative XPath. 
You can pass the full XPath to get the desired result.","Q_Score":0,"Tags":"python,selenium,xpath","A_Id":71716673,"CreationDate":"2022-04-02T04:40:00.000","Title":"xpath to check only within WebElement in Selenium \/ Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am sending and receiving packets between two boards (a Jetson and a Pi). I tried using TCP, then UDP; theoretically UDP is faster, but I want to verify this with numbers. I want to be able to run my scripts and send and receive my packets while also calculating the latency. I will later study the effect of using RF modules instead of direct cables between the two boards on the latency (this is another reason why I want the numbers).\nWhat is the right way to tackle this?\nI tried sending the timestamps to get the difference, but their times are not synced. I read about NTP and iperf, but I am not sure how they can be run within my scripts. iperf measures the traffic, but how can that be accurate if your real TCP or UDP application is not running with real packets being exchanged?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":77,"Q_Id":71723297,"Users Score":0,"Answer":"It is provably impossible to measure (with 100% accuracy) the latency, since there is no global clock. NTP estimates it by presuming the upstream and downstream delays are equal (but actually upstream buffer delay\/jitter is often greater).\nUDP is only \"faster\" because it does not use acks and has lower overhead. This \"faster\" is not latency. 
Datacom \"speed\" is a combo of latency, BW, serialization delay (time to \"clock out\" data), buffer delay, packet overhead, and sometimes processing delay and\/or protocol overhead.","Q_Score":0,"Tags":"python,tcp,udp,ntp","A_Id":71924115,"CreationDate":"2022-04-03T05:42:00.000","Title":"What is the right way to measure server\/client latency (TCP & UDP)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am sending and receiving packets between two boards (a Jetson and a Pi). I tried using TCP, then UDP; theoretically UDP is faster, but I want to verify this with numbers. I want to be able to run my scripts and send and receive my packets while also calculating the latency. I will later study the effect of using RF modules instead of direct cables between the two boards on the latency (this is another reason why I want the numbers).\nWhat is the right way to tackle this?\nI tried sending the timestamps to get the difference, but their times are not synced. I read about NTP and iperf, but I am not sure how they can be run within my scripts. iperf measures the traffic, but how can that be accurate if your real TCP or UDP application is not running with real packets being exchanged?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":77,"Q_Id":71723297,"Users Score":0,"Answer":"While getting one-way latency can be rather difficult and depends on very well synchronized clocks, you could make the simplifying assumption that the latency in one direction is the same as in the other (and no, that isn't always the case) and measure round-trip time and divide by two. 
Ping would be one way to do that; netperf and a \"TCP_RR\" test would be another.\nDepending on the network\/link speed, the packet size and the CPU \"horsepower,\" much if not most of the latency is in the packet processing overhead on either side. You can get an idea of that with the service demand figures netperf will report if you have it include CPU utilization. (N.B. - netperf assumes it is the only thing meaningfully consuming CPU on either end at the time of the test.)","Q_Score":0,"Tags":"python,tcp,udp,ntp","A_Id":72481274,"CreationDate":"2022-04-03T05:42:00.000","Title":"What is the right way to measure server\/client latency (TCP & UDP)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Is it possible?\nI want to create a command for my Discord bot that makes the bot go offline for everyone. I want to type that command in any Discord server and then the bot just goes offline. This is much easier than going into your files or terminal to press Ctrl+C. Is a command like this even possible to make? And if so, can I please know how to add that command? I want to add some admin commands to my bot and this is definitely a great addition to the list. Thanks -","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":121,"Q_Id":71752045,"Users Score":0,"Answer":"If I'm understanding what you're trying to do here, you can just add a command that exits the bot.py program by making a call to something like sys.exit(). 
This would just terminate the program running the bot, making the bot go offline.","Q_Score":0,"Tags":"python,discord,discord.py,command,bots","A_Id":71752227,"CreationDate":"2022-04-05T12:45:00.000","Title":"How do you create a command that makes the bot go OFFLINE in discord.py","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm making a bot in discord.py, and I'm getting to the stage of making more complex commands and making my easier commands better and more flexible.\nMy most recent endeavor is my \/say command. I'm making it able to do TTS messages and spoilers, but I can't find anything on the latter. Is it as simple as the TTS, or is it more complex?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":69,"Q_Id":71769943,"Users Score":1,"Answer":"You can mark your text as a spoiler by putting || around it. Example: \u201c||hi||\u201d","Q_Score":0,"Tags":"python,discord,discord.py","A_Id":71770036,"CreationDate":"2022-04-06T15:55:00.000","Title":"Spoiler messages in discord.py","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am getting the following error whenever I try to execute pip install [any package name]:\n\nERROR: Could not install packages due to an OSError:\nHTTPSConnectionPool(host='files.pythonhosted.org', port=443): Max\nretries exceeded with url:\n\/packages\/31\/58\/d97b7af5302e63bfb3881931fda4aaacbbdcbc31257f983c06703d304c1e\/streamlit_chat-0.0.2.1-py3-none-any.whl\n(Caused by\nConnectTimeoutError(, 'Connection to files.pythonhosted.org\ntimed out. 
(connect timeout=15)'))\n\nI have already tried the following solutions that I found on Stack Overflow, but they don't work and I get the same error:\n\npip install --trusted-host=pypi.python.org --trusted-host=pypi.org --trusted-host=files.pythonhosted.org --upgrade --proxy=http:\/\/127.0.0.1:3128 [package name]\n\npip install --trusted-host pypi.python.org --trusted-host files.pythonhosted.org --trusted-host pypi.org [package name]\n\n\nI use a Windows system and Sublime as my regular coding environment.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":223,"Q_Id":71788177,"Users Score":0,"Answer":"It can happen due to network firewall settings; try connecting to another network and then run the command. Worked for me.","Q_Score":0,"Tags":"python,pip","A_Id":72456643,"CreationDate":"2022-04-07T19:53:00.000","Title":"Pip Install Package Error: Could not install packages due to an OSError","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've tried to install jaxlib on Win10. 
It seems like jaxlib is installed, but when I run Spyder and write 'import jax', it says\n\nmodule 'jaxlib.xla_extension.jax_jit' has no attribute\n'set_enable_x64_cpp_flag'\n\nI have Python 3.10 and CUDA version 11.6.\nCould you please help me with it?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":168,"Q_Id":71825432,"Users Score":0,"Answer":"I just upgraded JAX by writing\n\npip install --upgrade jax jaxlib\n\nat the Anaconda command prompt, and the problem is resolved.","Q_Score":0,"Tags":"python,jax","A_Id":71826049,"CreationDate":"2022-04-11T09:22:00.000","Title":"Module 'jaxlib.xla_extension.jax_jit' has no attribute 'set_enable_x64_cpp_flag'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently trying to write a Python script that will open my company's inventory system, which is a link in Google Chrome, sign in, and then click the Save as Excel button that is posted on top of a specific page. This will hopefully automate the process of opening the link, navigating over to the tab, clicking export, then exporting this data daily.\nAny idea of where to start? I was thinking maybe I can get this done using web scraping, but I'm not sure with the login details needed. Also, how can I export this file once in? Just need some ideas to start me on this journey. 
Any and all help is appreciated!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":22,"Q_Id":71829379,"Users Score":0,"Answer":"Simply start with Selenium Python automation.\n\nDivide your whole process into smaller tasks and write Python code\nfor each task :)","Q_Score":0,"Tags":"python,html,web-scraping","A_Id":71829997,"CreationDate":"2022-04-11T14:15:00.000","Title":"Exporting Excel Data from Webpage","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a C++ game which sends a Python-SocketIO request to a server, which loads the requested JSON data into memory for reference and then sends portions of it to the client as necessary. Most of the previous answers here detail that the server has to repeatedly search the database, when in this case all of the data is stored in memory after the first time and is released after the client disconnects.\nI don't want to have a large influx of memory usage whenever a new client joins; however, most of what I have seen points away from using small files (50-100kB absolute maximum) and instead toward using large files, which would cause the large memory usage I'm trying to avoid.\nMy question is this: would it still be beneficial to use one large file, or should I use the smaller files, both from an organization standpoint and from a performance one?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":61,"Q_Id":71835065,"Users Score":0,"Answer":"Is it better to have one large file or many smaller files for data storage?\n\nBoth can potentially be better. Each has its advantages and disadvantages. Which is better depends on the details of the use case. 
It's quite possible that the best way may be something in between, such as a few medium-sized files.\nRegarding performance, the most accurate way to verify which is best is to try out each of them and measure.","Q_Score":0,"Tags":"python,c++,file,server,network-programming","A_Id":71835136,"CreationDate":"2022-04-11T22:37:00.000","Title":"Is it better to have one large file or many smaller files for data storage?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I created a bot to send automatic WhatsApp messages. But I have a question: I want to hide the window after the QR code is scanned. Is this possible without using the --headless option?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":71849936,"Users Score":0,"Answer":"No, you can't hide the window once you have initialized the Browsing Context in the normal (headed) mode within the same session. Nor can you add the --headless option for the same ongoing session.\nA cleaner way would be to call driver.quit() within the tearDown(){} method to close and destroy the current ChromeDriver and Chrome Browser instances gracefully, and then spawn a new set of ChromeDriver and Chrome Browser instances with the new set of configurations.","Q_Score":0,"Tags":"python,selenium,selenium-webdriver","A_Id":71850352,"CreationDate":"2022-04-12T22:46:00.000","Title":"Selenium: Can I hide window after login screen","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to block a Twitter user using Python scripting and the Tweepy API. I am able to extract users, IDs, followers and tweets with no problem. 
When I try to call api.create_block(screen_name = '')\nI get an exception\n401 Unauthorized\n(and that user is not blocked). I have been googling but only found old posts referring to my Windows time being out of sync. I synced the time with no improvement. I also tried blocking by ID, but no luck.\nCan anybody help?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":40,"Q_Id":71860303,"Users Score":0,"Answer":"I was using OAuth 2.0, which does not allow separate read or write permissions. I tried with refreshed credentials but still got the same error.\nI then enabled OAuth 1.0a and regenerated the credentials (again), and now I can block users. Thanks!","Q_Score":0,"Tags":"python,block,tweepy,http-status-code-401,unauthorized","A_Id":71874964,"CreationDate":"2022-04-13T15:45:00.000","Title":"Python Tweepy block user returns 401 Unauthorized","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was able to use selenium to log into a scheduling website and click through to the list of clients. Every client can be clicked on to gather info about how many appointments they have left. What I want to do now is loop through all the clients, clicking on them and collecting whatever info I need in an array or similar (a problem for later).\nAs of right now my main question is just about clicking on one client and then clicking on the next one until the list is complete. I can figure out the rest later.\nHow do I go about doing this?
In previous questions I see that many people already have the list of URLs ready; here I obviously don't.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":32,"Q_Id":71875624,"Users Score":1,"Answer":"You can first fetch all the links you want to click on by using the\nfind_elements method.\nThen you will need a loop; pseudo code would be\nfor link in links:\n    link.click()\n    # do your work\n    driver.back()\nYou may come across a stale element exception here; if you do, you will need to re-fetch the elements after navigating back.\nHope this helps.","Q_Score":0,"Tags":"python,selenium,automation","A_Id":71879672,"CreationDate":"2022-04-14T17:33:00.000","Title":"Looping through a list of clicks in selenium python and gathering info","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am running an Istio setup where my Python Flask service is running behind Gunicorn.\nWhen debugging the logs from the service, the Flask service successfully executes the API call while the calling client receives a 503 error from the REST call. I suspect this might be some issue with the sidecar proxy or the Gunicorn server processing the request.\nAlso, I am hitting the service directly from another pod in the namespace and hence not going through the ingress gateway and VirtualService","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":65,"Q_Id":71900310,"Users Score":0,"Answer":"In my case it was the SSL between Google Front End (ILB) and the Istio service mesh. Somehow the connection between GFE and the Istio gateway over TLS was not reliable.
I converted that from HTTP2 (https) to HTTP and it started working.\nI will debug later why HTTPS between those two was not working, but moving from HTTP2 to HTTP solved my issue.","Q_Score":0,"Tags":"python,gunicorn,istio","A_Id":71993063,"CreationDate":"2022-04-17T08:15:00.000","Title":"Istio - Gunicorn - Python getting 503 upstream connect error or disconnect\/reset before headers. reset reason: connection failure","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I tried to extract data from the site below with the Selenium library but unfortunately I got the following error. Is there anyone who can help me?\nfrom selenium import webdriver\nurl = 'https:\/\/www150.statcan.gc.ca\/n1\/pub\/71-607-x\/2021004\/exp-eng.htm?r1=(1)&r2=0&r3=0&r4=12&r5=0&r7=0&r8=2022-02-01&r9=2022-02-011'\ndriver = webdriver.Chrome()\ndriver.get(url)\ntable = driver.find_elements_by_class_name('data')\nfor table1 in table:\n    value = table1.find_element_by_xpath('.\/\/*[@id=\"report_table\"]\/tbody\/tr[1]\/td[6]').text\n    quantity = table1.find_element_by_xpath('.\/\/*[@id=\"report_table\"]\/tbody\/tr[1]\/td[7]').text\n    unit = table1.find_element_by_xpath('.\/\/*[@id=\"report_table\"]\/tbody\/tr[1]\/td[8]').text\n    print(value, quantity, unit)\nThe error is:\nPS C:\\Users\\asus\\Desktop\\home> [840:7000:0417\/201023.654:ERROR:device_event_log_impl.cc(214)] [20:10:23.656] USB: usb_device_handle_win.cc:1049 Failed to read descriptor from node connection: A device attached to the system is not functioning. (0x1F)\n[840:7000:0417\/201023.654:ERROR:device_event_log_impl.cc(214)] [20:10:23.658] USB: usb_device_handle_win.cc:1049 Failed to read descriptor from node connection: A device attached to the system is not functioning.
(0x1F)\n[840:7000:0417\/201023.654:ERROR:device_event_log_impl.cc(214)] [20:10:23.659] USB: usb_device_handle_win.cc:1049 Failed to read descriptor from node connection: A device attached to the system is not functioning. (0x1F)\n[840:7000:0417\/201023.659:ERROR:device_event_log_impl.cc(214)] [20:10:23.660] USB: usb_device_handle_win.cc:1049 Failed to read descriptor from node connection: A device attached to the system is not functioning. (0x1F)\n[840:16136:0417\/201113.924:ERROR:util.cc(126)] Can't create base directory: C:\\Program Files\\Google\\GoogleUpdater","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1594,"Q_Id":71903319,"Users Score":0,"Answer":"I think you should link to your webDriver using its path if you haven't already\n\ndriver = webdriver.Chrome(\"your WebDriver Path\")","Q_Score":1,"Tags":"python,selenium-webdriver,web-scraping","A_Id":71903600,"CreationDate":"2022-04-17T15:51:00.000","Title":"error in webscraping by python with selenium library","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm getting an error \"Unable to determine handler from trigger event\" when hitting an AWS API Gateway endpoint. 
The endpoint triggers a lambda, which is using FastAPI and Mangum.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":60,"Q_Id":71920070,"Users Score":0,"Answer":"Lambda integration in the GET method on API Gateway needs to have \"Use Lambda Proxy integration\" ticked on.","Q_Score":0,"Tags":"python,aws-api-gateway,fastapi","A_Id":71920071,"CreationDate":"2022-04-19T05:27:00.000","Title":"Mangum \"Unable to determine handler from trigger event\"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"How can I quickly download audio from YouTube by URL or ID and send it to Telegram bot? I've been using youtube-dl to download audio, save it on hosting and after that send it to user. It takes 1-2 minutes to do that. But other bots (like this one @LyBot) do this with the speed of light. How do they do this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":45,"Q_Id":71950060,"Users Score":2,"Answer":"As it says in their documentation \"I send audio instantly if it has already been downloaded through me earlier. 
Otherwise, the download usually takes no more than 10 seconds.\"\nThey probably store the file the first time it's downloaded by any user so that it can be served instantly for subsequent requests.","Q_Score":1,"Tags":"python,youtube,telegram,telegram-bot","A_Id":71950107,"CreationDate":"2022-04-21T06:51:00.000","Title":"How can I quickly download and send audio from youtube?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"When I am running the test case it will open the browser and the process will continue... My task is to record browser performance from the developer tools while the test case runs and download it as a JSON file. Can anyone help me?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":15,"Q_Id":71995824,"Users Score":0,"Answer":"There are no details to the setup, but in general:\n\nTake the current time after the test setup\nRun the test\nTake the current time before the test teardown\nCompute the difference between the times and export it to JSON, a database or a monitoring solution.\n\nYou could write an abstract test case which implements this and use it for all the tests that require time measurements.","Q_Score":0,"Tags":"javascript,python,html","A_Id":71996678,"CreationDate":"2022-04-25T07:23:00.000","Title":"how to fetch browser performance when we run test cases using python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have Python code that depends on specific libraries like Selenium and interaction with Google Chrome to extract data from the web.\nMy code works fine, but I need a lot of records to do analysis, so I can't leave my computer on to run the
script for a month.\nThat's why I thought of running the script in a cloud service like AWS, but I don't have a clear idea of how to do it, because I need the script to not stop,\nand I would rather not have to pay for it (or at least not that much money).\nThat said, my code opens a website, looks for specific text data and saves it in a CSV document.\nI thank you in advance for the help","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":27,"Q_Id":71998750,"Users Score":1,"Answer":"You will have to check the terms of each cloud service, as many do have downtime\/restarts on their free tiers.\nThe kind of task you're describing shouldn't be very resource hungry, so you may be better off setting up your own server using a Raspberry Pi or similar.","Q_Score":0,"Tags":"python,selenium,web-scraping,automation,cloud","A_Id":71998861,"CreationDate":"2022-04-25T11:23:00.000","Title":"Run Python code in the cloud without stopping","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have an FTP server that stores a very heavy text file (over 250G) and I need to run some queries on it, i.e. parse it and extract some specific data. Is there a way in Python to interact with it without the need to download it all? I am aware of the ftplib package but couldn't get it to work for this specific task. I guess what I would like to do is: connect to the FTP server, open a text file, run the queries, save the output of the queries, close the file and disconnect from FTP.\nPS. I checked the forum for a possible duplicate but couldn't find anything that could answer my question.
However, apologies if it's been asked before.\nMany thanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":32,"Q_Id":72011236,"Users Score":0,"Answer":"This is not a question about Python or the library you are using.\nIn general, the FTP protocol cannot be used for such operations.\nSo if you do not have any other way to access the server (like SSH shell access) and the FTP server does not have any very special and rare proprietary capabilities, you cannot do it.","Q_Score":1,"Tags":"python,ftp,ftplib","A_Id":72011749,"CreationDate":"2022-04-26T09:06:00.000","Title":"Querying\/parsing file contents on FTP server in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to find a button on a website; however, this button won't always be there and there are multiple buttons that are identical in every way. The xpath of the buttons is:\n\/\/*[@id=\"inspire\"]\/div[3]\/main\/div\/div\/div[3]\/div[i]\/div\/div\/div[2]\/button\/div\nwhere i is the i'th button. To find the correct button, however, I have to check the text in the first element (\/div[1]) of the \/div list before the \/button (where you can see \/div[2]). This text is specific per button. I have a specific string I'm looking for in these div[1]'s, and I only need the button above this specific string.\n(I have already checked the string is indeed on the page, so the button does exist every time I get to this step.
I just need to find the button that is underneath it.)\nThanks in advance!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":72012670,"Users Score":0,"Answer":"You can use elems = elem.find_elements_by_id(\"id\")\nThen use a loop to separate all elements that have the same locator.","Q_Score":0,"Tags":"python,selenium,findelement","A_Id":72012790,"CreationDate":"2022-04-26T10:50:00.000","Title":"Finding identical buttons with selenium python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to create an MS Teams chat-bot without using the MS Bot Framework.\nHowever, in the official documentation, there was only an example using the MS Bot Framework.\nI want to handle message processing through FastAPI and my own AI logic.\nIs there a guide for proper usage?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":63,"Q_Id":72093935,"Users Score":1,"Answer":"If you're only looking to send the odd notification to a user you can do this via Graph as Conrad said (there are reasons why this might not be the best solution and taking the bot approach is usually the better option - being able to send a notification from an \"app\" rather than a user, with rich adaptive cards, for example).\nThe main thing to remember here is that a \"bot\" is just an API endpoint, and the \"bot framework SDK\" largely exists to simplify the process of parsing and processing the messages that are sent to that endpoint (and some additional complexities around auth with the Azure bot service, etc).
The interaction between Teams and your bot is also not request\/response; it's actually request\/request, and the SDK does a reasonable job of abstracting this so you don't have to worry about it.\nHaving said that, as long as you have an API endpoint that will accept the messages being sent from Teams (and proxied through the bot service) you don't have to use the SDK; plus it's all open source, so you can inspect the framework to see what it's doing... I'd highly recommend using it though, as it really does make your life a lot easier and some of those message structures aren't very well documented... the bottom line is that it's not trivial, but it is possible!","Q_Score":0,"Tags":"python,microsoft-teams,fastapi","A_Id":72223142,"CreationDate":"2022-05-03T01:16:00.000","Title":"Can I create a Teams bot without using Bot Framework?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using VS Code and Anaconda3.\nCurrently trying to install ChromeDriver_Binary but, when I try to execute code, I get this error:\n\nselenium.common.exceptions.SessionNotCreatedException: Message: session not created: This version of ChromeDriver only supports Chrome version 102\nCurrent browser version is 100.0.4896.127 with binary path C:\\Program Files (x86)\\Google\\Chrome\\Application\\chrome.exe","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":8234,"Q_Id":72111139,"Users Score":0,"Answer":"The Chrome browser and the chromedriver.exe (path provided in the project) versions should match.","Q_Score":4,"Tags":"python,selenium-chromedriver","A_Id":72436083,"CreationDate":"2022-05-04T10:03:00.000","Title":"This version of ChromeDriver only supports Chrome version 102","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop
Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"As the title says, I've just upgraded to Ubuntu 22.04 LTS and my previously working setup now says ImportError: libssl.so.1.1: cannot open shared object file: No such file or directory when starting Jupyter, and equivalently throws Could not fetch URL https:\/\/pypi.org\/simple\/jupyter\/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: \/simple\/jupyter\/ (Caused by SSLError(\"Can't connect to HTTPS URL because the SSL module is not available.\")) - skipping whenever trying to use pip.\nLibssl is actually available at \/usr\/lib\/x86_64-linux-gnu\/libssl.so.1.1. I could change LD_LIBRARY_PATH but this seems to be a workaround.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":429,"Q_Id":72119046,"Users Score":0,"Answer":"I resolved this problem by reinstalling the environment.\nI use pipenv and pyenv. I removed the pipenv environment and the Python version using pyenv. 
Then reinstalled both.","Q_Score":0,"Tags":"python,ubuntu,libssl,ubuntu-22.04","A_Id":72216913,"CreationDate":"2022-05-04T20:20:00.000","Title":"libssl not found by Python on Ubuntu 22.04","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to implement the Cognito Hosted UI for some users in my Django application.\nI have successfully been able to redirect the users to the desired url for authentication using the following:\nreturn redirect(https:\/\/....amazoncognito.com\/oauth2\/authorize?client_id=....redirect_uri=localhost).\nI am able to successfully authenticate and redirect back to my localhost where the url in the browser is localhost\/?code=xyz. I do not understand how I can retrieve this code xyz back in python to perform next steps? I see that in the Django Terminal that it reads the required code. This is what the terminal shows:\n[04\/May\/2022 16:08:15] \"POST \/login HTTP\/1.1\" 302 0\n[04\/May\/2022 12:09:04] \"GET \/?code=xyz HTTP\/1.1\" 200 8737\nI just do not know how to get this code xyz in my views.py so that I can continue the login. 
I tried variations of request.GET that did not work.\nAny help is appreciated!!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":18,"Q_Id":72119094,"Users Score":0,"Answer":"I just figured it out 5 days later (what 5 days of not looking at your code can do!)\nrequest.GET.get('code') gives back the 'xyz' that shows up in the URL in the browser.","Q_Score":2,"Tags":"python,django,authentication,amazon-cognito","A_Id":72190667,"CreationDate":"2022-05-04T20:25:00.000","Title":"How to get the code returned by Cognito Hosted UI Authentication in Django?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Hi, I have a simple question: how can I join a new Telegram channel with only its public ID in Telethon via Python?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":72152091,"Users Score":0,"Answer":"No, you can't. You can join a public channel by its username.","Q_Score":0,"Tags":"python,telethon","A_Id":72474778,"CreationDate":"2022-05-07T11:36:00.000","Title":"How to join a telegram channel with only channel id in telethon?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"This issue is coming up right now; why, I don't know. But I was not facing this issue 2\/3 days ago.\nThis error comes up when my 'import requests' line starts running.
I have tried every single solution on the internet but nothing seems to work.\n\"C:\\Program Files\\Python310\\python.exe\"\n\"E:\/IT Vedant\/Rough Work\/1mg.py\"\nTraceback (most recent call last):\nFile \"E:\\IT Vedant\\Rough Work\\1mg.py\", line 2, in <module>\nimport requests\nFile \"C:\\Program Files\\Python310\\lib\\site-packages\\requests\\__init__.py\", line 58, in <module>\nfrom . import utils\nFile \"C:\\Program Files\\Python310\\lib\\site-packages\\requests\\utils.py\", line 26, in <module>\nfrom .compat import parse_http_list as parse_list_header\nFile \"C:\\Program Files\\Python310\\lib\\site-packages\\requests\\compat.py\", line 7, in <module>\nfrom .packages import chardet\nFile \"C:\\Program Files\\Python310\\lib\\site-packages\\requests\\packages\\__init__.py\", line 3, in <module>\nfrom . import urllib3\nFile \"C:\\Program Files\\Python310\\lib\\site-packages\\requests\\packages\\urllib3\\__init__.py\", line 10, in <module>\nfrom .connectionpool import (\nFile \"C:\\Program Files\\Python310\\lib\\site-packages\\requests\\packages\\urllib3\\connectionpool.py\", line 38, in <module>\nfrom .response import HTTPResponse\nFile \"C:\\Program Files\\Python310\\lib\\site-packages\\requests\\packages\\urllib3\\response.py\", line 5, in <module>\nfrom ._collections import HTTPHeaderDict\nFile \"C:\\Program Files\\Python310\\lib\\site-packages\\requests\\packages\\urllib3\\_collections.py\", line 1, in <module>\nfrom collections import Mapping, MutableMapping\nImportError: cannot import name 'Mapping' from 'collections' (C:\\Program Files\\Python310\\lib\\collections\\__init__.py)\nProcess finished with exit code 1","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":171,"Q_Id":72167725,"Users Score":1,"Answer":"You are using Python 3.10.\nTry changing\nfrom collections import Mapping\nto\nfrom collections.abc import Mapping","Q_Score":0,"Tags":"python,python-requests,importerror","A_Id":72167833,"CreationDate":"2022-05-09T06:25:00.000","Title":"Python : ImportError: cannot import name 'Mapping' from
'collections' (C:\\Program Files\\Python310\\lib\\collections\\__init__.py)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to create a directory on my remote system using a Python script or by socket programming. I have the remote system's username, password and IP address. I am able to do this on my local machine but not on the remote one. Please help!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":21,"Q_Id":72168693,"Users Score":0,"Answer":"To create a directory on a remote machine, you will have to first connect to it. Telnet and SSH are used to connect to remote machines. Obviously, the Telnet or SSH service should be running on the remote machine, otherwise you won't be able to connect. Since in the case of Telnet data is transferred in plain text, it's better to use the SSH protocol.\nOnce connected to the remote machine using SSH, you will be able to execute commands on the remote machine.\nNow, since you want to do everything in Python, you will have to write a complete SSH client in Python, which is great for learning, because you will learn about socket programming and cryptography.\nIf you are in a hurry, you can use a good SSH library.\nIf you are getting a network connection error, please check whether SSH is installed on the remote machine or not.
If yes, then check firewall settings.","Q_Score":0,"Tags":"python,directory,remote-access,networkcredentials,azure-pipeline-python-script-task","A_Id":72314904,"CreationDate":"2022-05-09T07:58:00.000","Title":"How to create directory in remote system using python script","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Many answers about using the Youtube Data API v3 to get the thumbnail of a playlist; and many answers for how to get or set the thumbnail of a video.\nBut none about how to set the thumbnail of a playlist. The documentation shows nothing and no searches into the API documentation or Stack Overflow find the question or its answers.\nI tried using the thumbnails().set() method for setting thumbnails to videos, but that returns a permission denied error indicating that using playlist IDs in its request for video IDs is not a good monkey patch.\nPlease help.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":36,"Q_Id":72230074,"Users Score":0,"Answer":"My original question is invalid.\nYouTube allows users to set thumbnails of videos to be any properly formatted image. YouTube then only allows users to set a specific video thumbnail as the playlist thumbnail. 
It is not possible to set an image as the thumbnail of the playlist unless it is a thumbnail of a video in that playlist; and in that use case the video is associated to the playlist for the representative thumbnail.","Q_Score":1,"Tags":"youtube-api,youtube-data-api,google-api-python-client","A_Id":72244432,"CreationDate":"2022-05-13T13:04:00.000","Title":"Q: set thumbnail of a playlist?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is there a way to set timeout when specific element in webpage loads\nNotice: I am talking about loading webpage using driver.get() method\nI tried setting page loads timeout to 10s for example and check whether element is present but if it is not present i'll have to load it from start.\nEdit:\nClearly said that I don't want to load full url\nI want driver.get() to load url until element found and then stop loading more from url\nIn your examples you used simply driver.get() method which will load full url and then execute next command. One way is to use driver.set_page_load_timeout()","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":44,"Q_Id":72257278,"Users Score":0,"Answer":"webdriver will wait for a page to load by default. It does not wait for loading inside frames or for ajax requests. It means when you use .get('url'), your browser will wait until the page is completely loaded and then go to the next command in the code. 
But when you are posting an ajax request, webdriver does not wait and it's your responsibility to wait an appropriate amount of time for the page or a part of page to load; so there is a module named expected_conditions.","Q_Score":0,"Tags":"python,selenium","A_Id":72257755,"CreationDate":"2022-05-16T09:51:00.000","Title":"How to load page until specific element found selenium python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to access a website which requires username and password\ncode snippet\n...\nimport urllib3\nhttp = urllib3.PoolManager()\nurl = 'http:\/\/192.168.1.1'\nheaders = urllib3.make_headers(basic_auth='root:admin')\nr = http.request('GET', url, headers=headers)\n...\nI get response 200 OK , even if I pass wrong credentials\nplease advice","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":21,"Q_Id":72271239,"Users Score":2,"Answer":"You get 200 OK because you're making a http GET request.\nTo authenticate yourself with the credentials make a http POST request.","Q_Score":0,"Tags":"python,authentication,urllib3","A_Id":72271314,"CreationDate":"2022-05-17T09:09:00.000","Title":"Basic Authentication does not work with urllib3","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need reflection, vision and documentation on my problem.\nI wrote a python script to calculate something from an API and export the result in a CSV file. 
Then, I use a JavaScript script to display the data from this CSV file on a website I am building.\nI need to have the latest data available for my website, so I opened a VM instance on Google Cloud Platform (Google Compute Engine) and set up a crontab job to run my Python script automatically. The calculation is now executed every day and the result is exported to the CSV file, but stored on this VM instance.\nHere is my goal: How can I get my CSV file onto my website? The CSV is always on the virtual machine and I do not know how to make my JavaScript script communicate with the VM. Do I have to communicate directly with the VM? Do I have to go through another step before (server, API, etc.)?\nI cannot find a specific solution for my problem on the internet.\nThanks in advance.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":64,"Q_Id":72308508,"Users Score":0,"Answer":"How can I get my CSV file on my website?\n\nBy making your Python script output the CSV into your website's root folder.\nFor example, if you're running Apache, chances are your root folder is somewhere in \/var\/www\/html\/...\nIf the CSV is generated on another machine (not the one with your website), then I would host it and make the server hosting your website fetch it via a cronjob.\nBasically:\nIf your CSV is generated on the same machine as the website that will use it - simply output it to the website's folder.\nIf your CSV is generated on another machine, make it publicly accessible and have your website's machine's cronjob fetch that CSV a few minutes after it's generated.","Q_Score":0,"Tags":"javascript,python,python-3.x,csv,google-cloud-platform","A_Id":72326689,"CreationDate":"2022-05-19T17:05:00.000","Title":"I'm looking to connect a Google VM to a website","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web
Development":0},{"Question":"I ran pip install python-binance.\nfrom binance.client import Client\nand get the error\nModuleNotFoundError: No module named 'binance'\nI have renamed the binance.py file because some people have said that solved it for them, but I am still getting the error.\nI tried a bunch of other things like uninstalling and reinstalling.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":33,"Q_Id":72311889,"Users Score":1,"Answer":"Found the solution to this problem.\nIn the Python interpreter settings I had the wrong package: I had 'binance', which I removed, and added the 'python-binance' package.","Q_Score":1,"Tags":"python,binance,modulenotfounderror,binance-api-client","A_Id":72312284,"CreationDate":"2022-05-19T22:55:00.000","Title":"ModuleNotFoundError: No module named 'binance'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I installed the selenium package via pip and again in the PyCharm IDE.\nWhy do we add the selenium package under the project settings? What is the difference between these two?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":31,"Q_Id":72324852,"Users Score":1,"Answer":"PyCharm calls pip internally.
The only difference would be that Pycharm allows you to set project specific interpreters, which you'd need to first manually activate in the terminal in order to install packages to them, rather than your default\/system interpreter","Q_Score":0,"Tags":"python,pip,pycharm","A_Id":72337649,"CreationDate":"2022-05-20T21:17:00.000","Title":"Why do we install a library via pip and again add it in pycharm under project settings?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to make a GUI that takes your Spotify playlist, analyzes it and guesses your age. I'm looking thru their API tutorial and the only way to get the user's playlist is to have them connect using authorization. Is there a way to use the copied link that is available from the share playlist option instead? That seems much more accessible than having them sign into their account each time.\nUsing python and Tkinter for this project.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":19,"Q_Id":72326518,"Users Score":0,"Answer":"Unless these playlists are public, no, they always will need to AUTH.","Q_Score":0,"Tags":"python,tkinter,spotify","A_Id":72326568,"CreationDate":"2022-05-21T03:48:00.000","Title":"Copy URL from Spotify Playlist","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Only a quick simple question. Just coded my discord bot using pycharm. Followed a video on it and got it now running 24\/7. However.. 
my question is if I want to add, replace, or remove a command do I have to use both replit and pycharm when adding removing or replacing a command or can I just stick to using replit?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":21,"Q_Id":72326718,"Users Score":0,"Answer":"You can just edit the .py file on Replit and reboot the bot to have the change applied.","Q_Score":0,"Tags":"python,pycharm,replit","A_Id":72384114,"CreationDate":"2022-05-21T04:40:00.000","Title":"Adding commands for discord bot in replit","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I've been trying to make a program using Python that refreshes my Discord user token once every five minutes, but most of the tutorials online are about refreshing your Oauth2 access token, so I am currently very confused. Can anyone help me on the modules and functions to use, or are the Oauth2 access token and user token same things. Thanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":26,"Q_Id":72341799,"Users Score":0,"Answer":"Your Discord token changes when you change your account password, so you would need to change your password every 5 minutes using some API calls and then get the current token from another specific API call. 
You can find the API call URLs and the data you need to POST \/ GET in the browser dev console.","Q_Score":0,"Tags":"python,discord,python-requests-html","A_Id":72364161,"CreationDate":"2022-05-22T22:49:00.000","Title":"How to refresh a Discord user token with python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Error:\n\ngoogle.auth.exceptions.RefreshError: ('invalid_grant: Token has been expired or revoked.', {'error': 'invalid_grant', 'error_description': 'Token has been expired or revoked.'})\n\nHowever, another app I use, with a different account, never runs into any issues. I use the same Python OAuth Quickstart for both.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":33,"Q_Id":72370166,"Users Score":0,"Answer":"Token has been expired or revoked.\n\nBasically means just that: either the user has revoked your access or Google has. Users can remove your access directly in their Google account whenever they want to.\nGoogle-expired tokens\n\nIf you are using a Gmail scope and the user changes their password, your refresh token will probably be revoked.\nIf your app is still in testing and the refresh token is more than seven days old, the user's consent will be removed and the refresh tokens will be revoked.\nIf the refresh token has not been used in more than six months, it will be revoked.\nIf the user authorizes your app you get a refresh token; if they do it again you get another refresh token, and both will work. You can have up to 50 outstanding refresh tokens for a user; if you request another beyond that, the first one will be expired. 
Ensure you are always storing the most recent refresh token.\n\nNo matter the cause, your application should be configured to request authorization from the user again if the refresh token has expired.","Q_Score":1,"Tags":"python-3.x,gmail,google-oauth,gmail-api,google-api-python-client","A_Id":72373650,"CreationDate":"2022-05-24T22:37:00.000","Title":"I've been working with the Google API. Sometimes my refresh token refreshes and other times it fails and causes a 'RefreshError.' Why? How to fix?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've built a web application which uses spotify api and it uses client id and client secret which is present in main.py i.e on the backend side of my application to fetch data regarding songs. Now I want to deploy the app on heroku and want to know whether it will be safe to deploy it like this or should I move client id and secret somewhere else.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":18,"Q_Id":72380893,"Users Score":0,"Answer":"Don't include the client id and the client secret in the code. Just add a comment there instead, so each user can add their own secrets.","Q_Score":0,"Tags":"python,api,heroku","A_Id":72381040,"CreationDate":"2022-05-25T16:00:00.000","Title":"How to safely store client secret?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I\u2019m writing a code that reads data from an OPC server. 
This is a summary:\n\nurl = opc.tcpXXXXX\nclient = Client(url)\nclient.connect()\n\nAnd then in a while loop I want to read data from the server for several days:\n\nWhile True:\nData1 =client.get_node(\u201cns=4;i=3\u201d)\nData1_Val = Data1.get_value()\n#write it to sql table\ntime.sleep(120)\n\nI\u2019m reading 20 nodes the same way in the same while loop.\nAt first, all works fine. But, after a while the script would still be running but without any data acquisition! What I mean is that after about 2 hours, I will no longer get data from the server.\nWhat could possibly be the problem?\nThanks!","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":34,"Q_Id":72382790,"Users Score":1,"Answer":"That will be really difficult to say with so little information\u2026\nAnyway, if you want to read the values of a bunch of nodes from the server frequently, you should use an OPC UA subscription.\nThis will be more efficient and you shouldn\u2019t miss any values","Q_Score":1,"Tags":"python,opc-ua","A_Id":72389207,"CreationDate":"2022-05-25T18:48:00.000","Title":"Python OPC-UA client stops reading after a while","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I\u2019m writing a code that reads data from an OPC server. This is a summary:\n\nurl = opc.tcpXXXXX\nclient = Client(url)\nclient.connect()\n\nAnd then in a while loop I want to read data from the server for several days:\n\nWhile True:\nData1 =client.get_node(\u201cns=4;i=3\u201d)\nData1_Val = Data1.get_value()\n#write it to sql table\ntime.sleep(120)\n\nI\u2019m reading 20 nodes the same way in the same while loop.\nAt first, all works fine. But, after a while the script would still be running but without any data acquisition! 
What I mean is that after about 2 hours, I will no longer get data from the server.\nWhat could possibly be the problem?\nThanks!","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":34,"Q_Id":72382790,"Users Score":1,"Answer":"What is your OPC server?\n1) For example, the KEPServerEX OPC server runtime is limited to 2 hours in the free version; after that it stops reading data and publishing to the OPC server. Other OPC servers may be like this too.\n2) OPC servers include config settings such as connection time or keep-alive time. Check your server settings.\n3) Some OPC servers need a certificate; if you connect without a certificate the server will close the session after a while for security.\n4) Sometimes a request fails because the server cannot read the data you asked for. This error can lead to being logged out, and it can also come from the configuration settings of the server.\nYou should check these; if you can say which OPC server you use, I can look it up","Q_Score":1,"Tags":"python,opc-ua","A_Id":72431835,"CreationDate":"2022-05-25T18:48:00.000","Title":"Python OPC-UA client stops reading after a while","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We have an application where the client needs to request some information stored on DynamoDB filtering by date. We have the service created with API Gateway and a Lambda retrieving the corresponding records on DynamoDB, so we have to retrieve all the necessary records in <30segs.\nThe volume of the records keeps increasing and we have thought in the following:\n\nThe client will ask for records in a concrete order (0-100, 100-200, 200-300, etc...) 
in order to display them on a concrete page on the frontend.\nThe backend will handle requests (and therefore search on DynamoDB) for that concrete order of records (0-100, 100-200, etc...)\n\nIs there any way on DynamoDB to get the records from a concrete position to a concrete position? Or the only way is to retrieve all the records for that date range and then send the concrete positions to the client?\nThank you in advance,\nBest regards.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":25,"Q_Id":72388767,"Users Score":1,"Answer":"You don\u2019t specify a schema so I\u2019m going to give you one. :)\nSet up a sort key that\u2019s the position number. Then you can efficiently retrieve by position number range.\nOr, if you want to use timestamps instead of ordinals, just pass the client the sort key starting point for their next request and use it as the lower value for the next query.\nThere\u2019s no way to efficiently find the Nth item in an item collection.","Q_Score":0,"Tags":"python,lambda,amazon-dynamodb,aws-api-gateway","A_Id":72388956,"CreationDate":"2022-05-26T08:14:00.000","Title":"Retrieve records in DynamoDB by position","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"My pip has suddenly stopped working. I get the following error:\n\nTraceback (most recent call last):\nImportError: cannot import name 'ssl_match_hostname' from partially initialized module 'pip._vendor.urllib3.packages' (most likely due to a circular import) (c:\\users\\ed\\onedrive\\software dev\\own projects\\market analysis\\venv\\lib\\site-packages\\pip\\_vendor\\urllib3\\packages\\__init__.py)\n\nThis is run within a virtual environment and other virtual environments are working fine. 
I can't quite remember but I think I did a pip update a few days ago and haven't used it since.\nAre there any solutions to regain use of pip?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":72404757,"Users Score":0,"Answer":"You can try to uninstall and then reinstall pip, that would clear up anything (hopefully) broken or corrupted.","Q_Score":0,"Tags":"python,pip,virtualenv","A_Id":72404893,"CreationDate":"2022-05-27T11:30:00.000","Title":"pip cannot import name 'ssl_match_hostname' error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Basically my scenario is that I have the webpage open and want to copy some of the text from the website that is open on my screen ( there is a whole login process every time ) . For Security reasons, I do not want to have to continuously login to the webpage and for that reason, requests are not suitable. I also do not want to use selenium as it will open up a new browser when I wish to use my existing one. My question is with my browser already open on the page I want info from, is there some sort of script I can make that will retrieve certain information on the page for me and save it somewhere (almost like a macro but it's able to retrieve certain elements) . 
Is this a possibility?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":26,"Q_Id":72414411,"Users Score":0,"Answer":"I'm not sure if I understood the question correctly.\nOne way might be to download the entire .html and process the respective data \"locally\" after downloading the .html.","Q_Score":0,"Tags":"python","A_Id":72414437,"CreationDate":"2022-05-28T09:19:00.000","Title":"Scrape website data without BS or selenium (Python)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"A couple of months ago Selenium worked flawlessly, but when I ran my program this morning it was not working. The current Chrome browser version is 102.0.5005.63. I downloaded the the latest version of ChromeDriver (ChromeDriver 102.0.5005.61) and restarted the computer. I am still getting the same error message:\n\"This version of ChromeDriver only supports Chrome version 100\nCurrent browser version is 102.0.5005.63 with binary path...[insert path here]\"\nI have placed the ChromeDriver on my desktop, in the folder that the Python file is contained in, and in the binary path where Chrome is stored. Nothing changes; I always get the same error message.\nDoes anyone have any insight into this?\nThank you","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":966,"Q_Id":72419875,"Users Score":1,"Answer":"I came accross the exact same issue but with nodejs. 
The way I fixed it was deleting the node_modules folder and then reinstalling all dependencies; maybe you can do something similar to see if it works","Q_Score":0,"Tags":"python,selenium-chromedriver","A_Id":72446065,"CreationDate":"2022-05-28T23:18:00.000","Title":"ChromeDriver Not Working Even After Updating to Latest Version","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have to crawl several URLs in order to extract contact information (HTML). Some URLs could not exist.","AnswerCount":1,"Available Count":1,"Score":-0.3799489623,"is_accepted":false,"ViewCount":25,"Q_Id":72437019,"Users Score":-2,"Answer":"You can use the requests module:\ntry each URL; if it returns 404, it is not valid","Q_Score":0,"Tags":"python,html,beautifulsoup,scrapy,web-crawler","A_Id":72437072,"CreationDate":"2022-05-30T15:53:00.000","Title":"How to crawl URLs listed in a CSV with Python and export the contact data in another CSV? Thanks","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"if I send a really huge number of requests thru python for scrapping more than 1.5 million data from a website that has a google tracker, does google track this traffic? 
can the website owner reverse the request to detect who am I even though I'm using VPN?\nany suggestions?\nthanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":19,"Q_Id":72493879,"Users Score":0,"Answer":"It really depends on whether you render the responses.\nIf you only fetch the body and don't render\/execute JS in it, then you won't trigger any JS libraries loaded to the page, that would include any front-end driven analytics.\nThat said, however, there's still backend (the web server) where each of your requests will likely be logged, including ip, useragent, referrer and about twenty more dimensions, potentially.\nShort answer: yes, if the owner makes a real effort, theoretically, it is possible to get to you even if you use VPN. To be honest, VPN is a very poor tool to hide your identity\/ip. Mostly it only helps with trivial scripted detections. You want to read more on digital forensics before doing anything inappropriate online. It will quickly show how many great ways there are to find a person on the web.","Q_Score":0,"Tags":"python,google-analytics,python-requests","A_Id":72494706,"CreationDate":"2022-06-03T18:40:00.000","Title":"does google tracker detect python requests?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to use slack web client API to pull slack messages including threads for a specific date. The conversations.history API only returns the parent message in the case of threaded messages. 
There is conversations.replies API that returns the message threads, but it requires ts of the parent message to be passed in the request so it will only return conversations related to one thread.\nIs there a way to pull all message history including replies for a data range rather than having to combine a call to conversations.history API and then multiple calls to conversations.replies for each message with thread_ts?\nThis approach of combining both APIs won't work if the reply was posted on the specific date we want to pull, but the root thread message was posted on an older date. The root message won't be returned in conversations.history and hence we won't be able to get that particular message in the thread using conversations.replies.\nIt's strange that Slack doesn't provide such API to pull all messages including threaded ones.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":407,"Q_Id":72512171,"Users Score":0,"Answer":"Unfortunately, there is no way to capture all threads in a workspace with a single API call. The conversations.history is already a very data-heavy method. Most developers calling this method don't need thread information and including that in the reply would be a bit of an overkill. Calling conversations.replies should return all the replies corresponding to that parent message regardless of the date it was posted in unless otherwise specified using the latest or oldest parameters.","Q_Score":0,"Tags":"python,slack,slack-api","A_Id":72520801,"CreationDate":"2022-06-06T01:19:00.000","Title":"Slack - Get messages & threads for date range via Slack WebClient","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was wondering, is there any API to use in order to get the latest version number of the browsers. 
(Chrome, Firefox, Opera, Safari)\nI have tried the web scraping methods in python in order to get the stable versions of each browser from Wikipedia. However, I am looking for a more efficient way to check the client browser.\nI would appreciate it if someone helps me with this issue.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":130,"Q_Id":72512655,"Users Score":0,"Answer":"Browsers usually set their version in the HTTP User-Agent header. You could check that against the list of versions your application supports.","Q_Score":0,"Tags":"python,api","A_Id":72513384,"CreationDate":"2022-06-06T03:06:00.000","Title":"Get the Latest version of browsers","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to download content from a website which has a sort of paywall.\nYou have a number of free articles you can read and then it requires a subscription for you to read more.\nHowever, if you open the link in incognito mode, you can read one more article for each incognito window you open.\nSo I am trying to download some pages from this site using Python's requests library.\nI request the URL and then parse the result using Bs4. 
However it only works for the first page in the list, the following ones don't have content but have instead the message with \"buy a subscription etc.\".\nHow to avoid this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":357,"Q_Id":72531871,"Users Score":0,"Answer":"I think you can try to turn off javascript in the browser, it may work, but not 100%.","Q_Score":0,"Tags":"web-scraping,python-requests","A_Id":72531900,"CreationDate":"2022-06-07T13:15:00.000","Title":"How to avoid this sort of paywall when scraping with Python requests?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Essentially I am trying to query the NIST API using a CPE, to collect all the vulnerabilities about that CPE using python. The nist documentation says if you have an api key (which i do) you get 100 requests in a rolling 60 second window, but when i only make 24 requests using aiohttp, asyncrnously i get a 403 forbidden error, with the text stating Request forbidden by administrative rules. The documentation also states you should \"sleep\" your scripts for six seconds between requests. So my question is how many requests should i send as those statements seem to be contradicting one another. And why am i getting this error ?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1061,"Q_Id":72549122,"Users Score":0,"Answer":"The public rate limit is 5 requests in a rolling 30 second window. That means you need to have a sleep time of 6 secs between requests.\nIn the case you have an API key, you are allowed to make 50 requests in a rolling 30 second window. That is approx. 2 requests per second without needing to sleep. 
If you plan to make more requests in the allowed 30 second window, then you may have to implement an appropriate sleep time.","Q_Score":1,"Tags":"python,api,rate-limiting","A_Id":76265336,"CreationDate":"2022-06-08T16:21:00.000","Title":"Rate limits for NIST API","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Im trying to run a script on a AWS EC2 Windows 2019 instance but its not running even tho the library is installed. The lib in question is \"discord-py-slash-command\" but other libs like requests install and run normally (they are imported earlier in the file)\nI tried pip install discord-py-slash-command I tried python -m pip install discord-py-slash-command all successfully install the library but the script doesnt recognize it. I also reinstalled Python with all extra add-ons and I still have no idea what to do next","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":85,"Q_Id":72549244,"Users Score":0,"Answer":"Please check your Interpreter again. 
Your project interpreter may be not the python version that you Installed.\nIn Pycharm, you can simply check it by pressing Ctrl+Alt+S > Project > Python Interpreter","Q_Score":0,"Tags":"python,pip","A_Id":72549452,"CreationDate":"2022-06-08T16:30:00.000","Title":"Python not recognizing an installed library?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Environment\nPythonnet version: 3.0.0a2 PRE-RELEASE\nPython version: 3.10.5\nOperating System: Windows 10\n.NET Runtime: .Net core 6.0 and 5.0\nDetails\nHave created a simple program to\n\nAdd 2 numbers\nRead XML from File\nConvert Base64 Encode\nUsed PythonNet CLR to import the dll and access all the above methods\n\nOn .NET core 6.0:\n\nAdd 2 numbers worked like charm\nRead XML and Covert Base64 threw error\nSystem.TypeLoadException: Could not load type 'System.Text.Encoding' from assembly 'System.Text.Encoding, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'.\nat DllExport.XMLReader(String filePath)\n\nOn .NET core 5.0 :\nAdd 2 numbers worked\nRead XML worked\nConvert Base64 did not work and threw error\nSystem.TypeLoadException: Could not load type 'System.Convert' from assembly 'System.Runtime, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'.\nat ReusableLibariesConsole.Program.Base64_Encode(Byte[] data)\nWe have set the .NET version to 2.0\nthen all 3 errors disappeared however further methods such as Encryption did not work again\nSystem.TypeLoadException: Could not load type 'System.Security.Cryptography.PasswordDeriveBytes' from assembly 'System.Security.Cryptography.Csp, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'.\nat DllExport.EncryptionManagerClass.Encrypt(String inputData, String password, Int32 bits)\nWe tried lot of fixes such as setting CPU to x64, changing 
target framework and nothing worked.\nPlease let us know if you need further information","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":94,"Q_Id":72572500,"Users Score":0,"Answer":"I happened to have been battling with a similar issue about a week ago, with the error: System.TypeLoadException: Could not load type 'System.Data.DataSet' from assembly 'System.Data.Common', Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a\nTo resolve this I downgraded the .NET version to 4.8","Q_Score":1,"Tags":"python,c#,.net","A_Id":72938493,"CreationDate":"2022-06-10T10:06:00.000","Title":"Load C# from Python: Unable to Load the .NET dependencies while accessing the methods","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm reading file paths from an XML file with Python and they print with substituted characters:\n\nfile:\/\/\/Volumes\/Storage\/Music\/Media\/Bjo%CC%88rk\/Post\/02%20Hyperballad.m4a\n\nI'm guessing this has to do with their encoding that's maybe not UTF-8?\nIs there an easy way to get a cleaned up string, like this:\n\nfile:\/\/\/Volumes\/Storage\/Music\/Media\/Bj\u00f6rk\/Post\/02 Hyperballad.m4a\n\nThanks.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":21,"Q_Id":72605656,"Users Score":0,"Answer":"urllib.parse.unquote() did the trick.","Q_Score":0,"Tags":"string,python-3.10","A_Id":72607790,"CreationDate":"2022-06-13T15:51:00.000","Title":"Replace substituted characters in file paths strings from XML file in Python 3?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I'm working on a project where I should work with 
Netmiko. I had installed the module using the pip3 install netmiko command and it installed successfully. But, when I try to import the netmiko module in the python3 console, it throws an error saying \"Getting ModuleNotFoundError: No module named 'importlib.resources'\".\nThe next step I took was trying to install pip install importlib-resources and still faced the same issue.\nSorry guys I'm a newbie, need some help with this one.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":3046,"Q_Id":72640384,"Users Score":1,"Answer":"You have to change the import in the file reporting the error from importlib.resources to importlib_resources.\nI had the same problem and found that solution online; it worked for me.","Q_Score":1,"Tags":"linux,pip,python-importlib,netmiko","A_Id":72745806,"CreationDate":"2022-06-16T04:14:00.000","Title":"Getting ModuleNotFoundError: No module named 'importlib.resources'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"How to fetch Twitter data from multiple Tweet IDs using Twitter API?\nIt is showing\n{\"errors\":[{\"message\":\"You currently have Essential access which includes access to Twitter API v2 endpoints only. If you need access to this endpoint, you\u2019ll need to apply for Elevated access via the Developer Portal. 
You can learn more here: https:\/\/developer.twitter.com\/en\/docs\/twitter-api\/getting-started\/about-twitter-api#v2-access-leve\",\"code\":453}]}\nIs there any way to fetch with only having essential access ?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":72680181,"Users Score":0,"Answer":"As said try applying for elevated access via the Developer portal.\nAs soon as you get the access, you'll be able to fetch the data from Twitter API's itself.","Q_Score":1,"Tags":"python,twitter,twitterapi-python","A_Id":72680197,"CreationDate":"2022-06-19T20:35:00.000","Title":"Fetch Twitter ID","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The script is crashing on Chrome version 103 only it's still running on 102. In one instance, it ran but crashed while executing a loop for its fifth time. It is selenium + python. What should I do?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1490,"Q_Id":72719298,"Users Score":0,"Answer":"New Answer\nChrome Version 103.0.5060.134, selenium is now working as normal\nOld answer\nI actually:\n\nUninstalled Google Chrome\nRestarted the computer\nInstalled an older version of Chrome(whilst the Internet is OFF)\nRun MS config & disable Chrome auto-update\nRestart the computer.\n\nNow it runs just FINE.\nNB: This can make your computer vulnerable to attacks so don't use your main PC. 
Use Isolated Environments\n.","Q_Score":0,"Tags":"selenium-webdriver-python","A_Id":72878108,"CreationDate":"2022-06-22T16:53:00.000","Title":"Selenium for Chrome version 103 is just crashing, I know it's not my code because whilst its still crashing at one time it still ran","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have hundreds of files in my local directory and I want to treat them as parts of a single file and upload them to s3, I want them as a single file in s3. ideally, I want to use s3fs or boto3 to accomplish this but any other approach is also good.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":203,"Q_Id":72722412,"Users Score":0,"Answer":"There is no provided command that does that, so your options are:\n\nCombine the files together on your computer (eg using cat) and then upload a single file using boto3, or\nIn your Python code, successively read the contents of each file and load it into a large string, then provide that string as the Body for the boto3 upload (but that might cause problems if the combined size is huge)","Q_Score":0,"Tags":"python,amazon-web-services,amazon-s3,upload,multipart-upload","A_Id":72722633,"CreationDate":"2022-06-22T21:53:00.000","Title":"how to upload multiple files from local to one file in s3","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What I want to do - Now I want to crawl the contents (similar to stock prices of companies) in a website. The value of each element (i.e. stock price) is updated every 1s. 
However, this web is a lazy-loaded page, so only 5 elements are visible at a time, meanwhile, I need to collect all data from ~200 elements.\nWhat I tried - I use Python Splinter to get the data in the div.class of the elements, however, only 5-10 elements surrounding the current view appear in the HTML code. I tried scrolling down the browser, then I can get the next elements (stock prices of next companies), but the information of the prior elements is no longer available. This process (scrolling down and get new data) is too slow and when I can finish getting all 200 elements, the first element's value was changed several times.\nSo, can you suggest some approaches to handle this issue? Is there any way to force the browser to load all contents instead of lazy-loading?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":83,"Q_Id":72729352,"Users Score":0,"Answer":"there is not the one right way. It depends on how is the website working in background. Normaly there are two options if its a lazy loaded page.\n\nSelenium. It executes all js scripts and \"merges\" all requests from the background to a complete page like a normal webbrowser.\n\nAccess the API. In this case you dont have to care for the ui and dynamicly hidden elements. The API gives you access to all data on the webpage, often more than displayed.\n\n\nIn your case, if there is an update every second it sounds like a\nstream connection (maybe webstream). 
So try to figure out how the\nwebsite gets its data and then try to scrape the api endpoint directly.\nWhat page is it?","Q_Score":0,"Tags":"python,selenium,dynamic,lazy-loading,splinter","A_Id":72787548,"CreationDate":"2022-06-23T11:26:00.000","Title":"In Python Splinter\/Selenium, how to load all contents in a lazy-load web page","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is it possible to create a python script to take screenshots of multiple open tabs? If anyone can point me in the right direction","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":32,"Q_Id":72732928,"Users Score":0,"Answer":"You have to switch to each window and take screenshot of each window. All window in one screenshot is not possible. Use window handle to switch to each window and capture screenshot of each window.","Q_Score":0,"Tags":"python,selenium","A_Id":72733000,"CreationDate":"2022-06-23T15:39:00.000","Title":"Taking screenshots of multiple open browser tabs using Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to create a python script to take screenshots of multiple open tabs? If anyone can point me in the right direction","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":32,"Q_Id":72732928,"Users Score":0,"Answer":"Something like this would switch to different tabs (x being the iterator). 
0 Would be the first tab and then 1,2 so on.\ndriver.switch_to.window(driver.window_handles[x])","Q_Score":0,"Tags":"python,selenium","A_Id":72733353,"CreationDate":"2022-06-23T15:39:00.000","Title":"Taking screenshots of multiple open browser tabs using Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have tried using the command line to run the import, I have reinstalled python, I've checked my interpreter, I have spent hours of searching, and nothing works. I have tried using pip install -U discord-py-interactions and pip install -U discord-py-slash-command but neither of them have worked. It just keeps saying ModuleNotFoundError: No module named 'discord_slash'. Does anyone have any idea what's going on? Thanks for any help.","AnswerCount":1,"Available Count":1,"Score":-0.3799489623,"is_accepted":false,"ViewCount":506,"Q_Id":72748847,"Users Score":-2,"Answer":"Try with pip install discord-py-slash-command","Q_Score":0,"Tags":"python,discord,discord.py","A_Id":74963103,"CreationDate":"2022-06-24T19:55:00.000","Title":"discord-py-slash-command import problems, cant use discord_slash","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a turtle RDF file that contains all information related to an ontology and some instances and an N3 file that has different rules.\nMy goal is to execute the N3 rules on top of the Turtle file. 
Is it possible to use RDFLib (Python library) or any other library to do this task?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":27,"Q_Id":72774873,"Users Score":1,"Answer":"I think so using RDFLib: just parse the Turtle data first and then the N3 file afterwards into the same Graph. Be sure to use a formula-aware store. The default \"memory\" store is formula-aware.","Q_Score":0,"Tags":"python,rdflib,turtle-rdf,n3","A_Id":72785866,"CreationDate":"2022-06-27T15:46:00.000","Title":"Execute N3 rules on top of a Turtle file using Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"While importing googletrans I am getting this error: AttributeError: module 'httpcore' has no attribute 'SyncHTTPTransport","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":711,"Q_Id":72796594,"Users Score":2,"Answer":"googletrans==3.0.0 uses very old httpx (0.13.3) and httpcore versions. You just need to update httpx and httpcore to the latest version and go to the googletrans source directory in Python310\/Lib\/site-packages. In the file client.py, change 'httpcore.SyncHTTPTransport' to 'httpcore.AsyncHTTPProxy'. And done. As a bonus, the updated httpx supports async, a concurrency model that is far more efficient than multi-threading; it can provide significant performance benefits and enables the use of long-lived network connections such as WebSockets.\nIf you get the error 'Nonetype'...group. 
Try: pip install googletrans==4.0.0-rc1","Q_Score":1,"Tags":"python,api","A_Id":73005547,"CreationDate":"2022-06-29T06:17:00.000","Title":"AttributeError: module 'httpcore' has no attribute 'SyncHTTPTransport","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I came across an issue with performance testing of payments-related endpoints.\nBasically I want to test some endpoints that make request themselves to a 3rd-party providers' API.\nIs it possible from Locust's tests level to mock those 3rd-party API for the endpoints I intend to actually test (so without interference with the tested endpoints)?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":220,"Q_Id":72802596,"Users Score":0,"Answer":"I actually skipped the most important part of the issue, namely I am testing the endpoints from outside of the repo containing them (basically my load test repo calls my app repo). 
I ended up mocking the provider inside of the app repo, which I initially intended to avoid but turned out to be only reasonable solution at the moment.","Q_Score":0,"Tags":"python,testing,performance-testing,load-testing,locust","A_Id":73148442,"CreationDate":"2022-06-29T13:52:00.000","Title":"Performance testing with Locust with mocking 3rd party API","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I came across an issue with performance testing of payments-related endpoints.\nBasically I want to test some endpoints that make request themselves to a 3rd-party providers' API.\nIs it possible from Locust's tests level to mock those 3rd-party API for the endpoints I intend to actually test (so without interference with the tested endpoints)?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":220,"Q_Id":72802596,"Users Score":1,"Answer":"If I understand correctly, you have a service you'd like to load\/performance test but that service calls out to a third-party. But when you do your testing, you don't want to actually make any calls to the third-party service?\nLocust is used for simulating client behavior. You can define that client behavior to be whatever you want; typically it's primary use case is for making http calls but almost any task can be done.\nIf it's your client that makes a request to your service and then makes a separate request to the other third-party service for payment processing, yes, you could define some sort of mocking behavior in Locust to make a real call to your service and then mock out a payment call. 
But if it's your service that takes a client call and then makes its own call to the third-party payment service, no, Locust can't do anything about that.\nFor that scenario, you'd be best off making your own simple mock\/proxy service of the third-party service. It would take a request from your service, do basic validation to ensure things are coming in as expected, and then just return some canned response that looks like what your service would expect from the third-party. But this would be something you'd have to host yourself and have a method of telling your service to point to this mock service instead (DNS setting, environment variable, etc.). Then you could use Locust to simulate your client behavior as normal and you can test your service in an isolated manner without making any actual calls to the third-party service.","Q_Score":0,"Tags":"python,testing,performance-testing,load-testing,locust","A_Id":72803668,"CreationDate":"2022-06-29T13:52:00.000","Title":"Performance testing with Locust with mocking 3rd party API","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Im new using Selenium Webdriver and i couldn't automate a confirmation javascript onclick button, these button can't inspect to see the xpath;\nthe html button is:\n

<h1> tags which section the web page. Some of these will look like <h1><img ... \/>Text<\/h1>, and some will lack the img element but are otherwise identical.\nSuppose I start with a string called name and a BeautifulSoup object called soup. This BeautifulSoup object contains several <h1> tags as described above, each of which is followed by more HTML code. Suppose further that no two <h1> elements contain identical text.\nI'd like to compile a function which does the following:\n\nSearches the BeautifulSoup object for a <h1> element which contains a string that, excluding the content, exactly matches the input string name.\nIf it's not the last <h1> tag in the BeautifulSoup object, return everything from that <h1> tag until the next <h1> tag. The latter tag shouldn't be included in the return, but the former tag can be optionally included or excluded. If it is the last <h1> tag, return everything from that tag to the end of the object.\n\nI'm only just learning BeautifulSoup. I know how to use .find() or .find_all() to track down which <h1>
tag matches, but I don't know how to return all the following blocks as well.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":22,"Q_Id":74745976,"Users Score":0,"Answer":"Actually, when you crawl data using BeautifulSoup. All HTML tags will be written down. Just use a loop to find that checks for your image's existence or not.","Q_Score":0,"Tags":"python,beautifulsoup","A_Id":74746022,"CreationDate":"2022-12-09T16:45:00.000","Title":"BeautifulSoup in Python: Get series of tags where first exactly matches input","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to replicate a login to a page with python and the requests module but I need a token bearer.\nThis site doesn't require a login password connection but just an event code (wooclap.com)\nI cannot find when the token is recovered by looking at header and json responses.\nIf you can help me\nThanks","AnswerCount":3,"Available Count":2,"Score":-0.1325487884,"is_accepted":false,"ViewCount":270,"Q_Id":74772283,"Users Score":-2,"Answer":"To send a GET request with a Bearer Token authorization header using Python, you need to make an HTTP GET request and provide your Bearer Token with the Authorization: Bearer {token} HTTP header","Q_Score":0,"Tags":"python,web,token","A_Id":74772334,"CreationDate":"2022-12-12T14:05:00.000","Title":"How get bearer token with requests from python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to replicate a login to a page with python and the requests module but I need a token bearer.\nThis site doesn't require a login password connection but just an event code 
(wooclap.com)\nI cannot find when the token is recovered by looking at header and json responses.\nIf you can help me\nThanks","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":270,"Q_Id":74772283,"Users Score":0,"Answer":"once you put in a event code check the network tab on you're chrome console there should be a request wich returns the token. either in the reponse header or the json,","Q_Score":0,"Tags":"python,web,token","A_Id":74772388,"CreationDate":"2022-12-12T14:05:00.000","Title":"How get bearer token with requests from python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Selenium project which is working on a recent version of Firefox. Now I need to make it work on Firefox 47. What would be the best way to do that? Will it work if I use an older version of geckodriver?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":86,"Q_Id":74775001,"Users Score":0,"Answer":"It depends mostly on geckodriver version. 
So, if you can get geckodriver version compatible to the old Firefox version you want to use - this should work.","Q_Score":0,"Tags":"python,selenium,firefox,geckodriver,selenium-firefoxdriver","A_Id":74775038,"CreationDate":"2022-12-12T17:28:00.000","Title":"How can I use Selenium 4.7.2 (Python) on an old Firefox version (Firefox 47)?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I just want to read a excel file in Sharepoint, but with no authentication.\nRead a sharepoint excel file using the file link, without a authentication.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":55,"Q_Id":74816606,"Users Score":0,"Answer":"If you don't need to authenticate (or program the authentication in), to download, you can try requests.get(url=\"link\")\nor you could use selenium, to browse the website, and download the file.\nAnd then you can open it with pandas.","Q_Score":0,"Tags":"python,excel,pandas,authentication,sharepoint","A_Id":74817350,"CreationDate":"2022-12-15T19:28:00.000","Title":"Python - how to read Sharepoint excel sheet without authentication","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm currently working on a project that integrate Zoho people calendar with google calendar and update all the leave events accordingly, I'm being able to do it using my personal account but my question is can I do it with a Google workspace super admin account? 
like, can I access all the primary calendars of all employees under our workspace organisation and update event using google calendar api by creating google calendar api credentials using the super admin account?\nI haven't tested it using workspace account cause I'm not getting permission to test it using our company account. Even though I have created google calendar credentials using my official gmail account and I'm being able to fetch all my calendars including my primary one and update events of my choice using gale calendar api.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":94,"Q_Id":74820923,"Users Score":0,"Answer":"Yes, you should be able to access your user's calendars via API and to do so you will need to:\n\nUse a service account\nAllow domain wide delegation to the service account (this way it\nwill have the permissions to impersonate your organization users)\nImplement impersonation in your code\n\nBasically the service account will impersonate the users that you specify and do the changes in their calendar on their behalf","Q_Score":0,"Tags":"python,google-calendar-api,google-workspace","A_Id":74835955,"CreationDate":"2022-12-16T06:32:00.000","Title":"Can I access all the users google calendar under my Google workspace organisation?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a list of incomplete (e.g. missing the city field) or incorrect (e.g. spelling mistakes) human-readable addresses to which my company car needs to go in the future. So for one wrong address, there could be different possible address guesses that are 1000 km away from each other. 
So for now it's impossible for me to geocode the addresses into coordinates (longitude and latitude).\nWhat I have is a list of GPS coordinates that my car returned for the trips in the past. Every time the company car goes on a trip, it returns its GPS locations every 10 minutes. Also there are only a certain number of regions and cities for my company car to go, so let's assume almost all of the incomplete or incorrect addresses are in the same cities that my car went before.\nI was suggested to use The Place Autocomplete service to guess the addresses. But since I have the historical data, I was wondering there is an algorithm for the guessing.\nThank you.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":30,"Q_Id":74832932,"Users Score":0,"Answer":"There exists many algorithms which compare two strings (e.g. two addresses) and output a similarity rate. For instance : Predict function of T-SQL.","Q_Score":0,"Tags":"python,algorithm,machine-learning,artificial-intelligence,graph-theory","A_Id":74856678,"CreationDate":"2022-12-17T09:05:00.000","Title":"Guessing the correct address based on an incorrect or incomplete address and historical address values?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was running into this error while using Tweepy with OAuth 2.0 User Auth with PKCE when I tried to refresh the token via the User Handler. It was being returned when trying to call the non-production Tweepy refresh function on an expired token.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":103,"Q_Id":74843168,"Users Score":0,"Answer":"In my case, the issue was happening because I was trying to refresh the same token more than once. 
The solution is to refresh the expired token once and save the new token to a variable or your database until it needs to be refreshed again. Once refreshed once, the old token is no longer valid.","Q_Score":0,"Tags":"python,django,oauth-2.0,tweepy,twitter-oauth","A_Id":74843169,"CreationDate":"2022-12-18T17:18:00.000","Title":"Twitter OAuth 2.0 Refresh Token Error - (invalid_request) Value passed for the token was invalid","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to run my script for selenium\/chromedrive but keep getting the error below.\nSelenium ver 4.72\nChrome Browser version:Version 108.0.5359.125 (Official Build) (64-bit)\nChromeDriver version: ChromeDriver 108.0.5359.71\n\nMessage: unknown error: Chrome failed to start: exited normally.\n(unknown error: DevToolsActivePort file doesn't exist)\n(The process started from chrome location C:\/Program Files\/Google\/Chrome\/Application\/chrome.exe is no longer running, so ChromeDriver is assuming that Chrome has crashed.)\n\n\nScript:\nfrom selenium import webdriver\nfrom selenium.webdriver.common.keys import Keys\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.chrome.service import Service\noptions = Options()\noptions.binary_location = \"C:\/Program Files\/Google\/Chrome\/Application\/chrome.exe\"\noptions.add_argument(\"--no-sandbox\")\ns = Service(executable_path=r'C:\/Bin\/chromedriver.exe')\ndriver = webdriver.Chrome(service=s, options=options)\ndriver.get(\"https:\/\/www.walmart.com\")\n\nThanks you for any help\nI have also tried Selenium manager but no good. I'm at my wits end\nI think it might be because my chrome is installed in Application folder instead of user\/appdata? Not too sure. What is the default location for Chrome? 
I've tried uninstialling chrome\/and appdata and reinistalling but it puts me back at that folder.","AnswerCount":1,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1213,"Q_Id":74859440,"Users Score":0,"Answer":"I got same error.\nIf you dont want to use option --headless\nYou Should try this,\nBefore executing the code, close all the chrome browser windows and try. \u2013","Q_Score":1,"Tags":"python,selenium,web-scraping,selenium-chromedriver,undetected-chromedriver","A_Id":75845623,"CreationDate":"2022-12-20T06:51:00.000","Title":"Selenium\/chrome driver keeps crashing \"Chrome failed to start: exited normally\" and \"DevToolsActivePort file doesn't exist\"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to run my script for selenium\/chromedrive but keep getting the error below.\nSelenium ver 4.72\nChrome Browser version:Version 108.0.5359.125 (Official Build) (64-bit)\nChromeDriver version: ChromeDriver 108.0.5359.71\n\nMessage: unknown error: Chrome failed to start: exited normally.\n(unknown error: DevToolsActivePort file doesn't exist)\n(The process started from chrome location C:\/Program Files\/Google\/Chrome\/Application\/chrome.exe is no longer running, so ChromeDriver is assuming that Chrome has crashed.)\n\n\nScript:\nfrom selenium import webdriver\nfrom selenium.webdriver.common.keys import Keys\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.chrome.service import Service\noptions = Options()\noptions.binary_location = \"C:\/Program Files\/Google\/Chrome\/Application\/chrome.exe\"\noptions.add_argument(\"--no-sandbox\")\ns = Service(executable_path=r'C:\/Bin\/chromedriver.exe')\ndriver = webdriver.Chrome(service=s, 
options=options)\ndriver.get(\"https:\/\/www.walmart.com\")\n\nThanks you for any help\nI have also tried Selenium manager but no good. I'm at my wits end\nI think it might be because my chrome is installed in Application folder instead of user\/appdata? Not too sure. What is the default location for Chrome? I've tried uninstialling chrome\/and appdata and reinistalling but it puts me back at that folder.","AnswerCount":1,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1213,"Q_Id":74859440,"Users Score":0,"Answer":"Same here - Bug still there in chrome\/chromedriver V110.XXXX @ 14 Feb2 023\nDisable the option --headless, that seems to work.","Q_Score":1,"Tags":"python,selenium,web-scraping,selenium-chromedriver,undetected-chromedriver","A_Id":75449047,"CreationDate":"2022-12-20T06:51:00.000","Title":"Selenium\/chrome driver keeps crashing \"Chrome failed to start: exited normally\" and \"DevToolsActivePort file doesn't exist\"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a RHEL 7 Linux server using Apache 2.4 as the httpd daemon. One of the pages served by Apache is a simple https form that is generated using Python 3.11. 
Currently, the form is submitting and being processed properly, but we have no way to track where the form was submitted from.\nIdeally, there would be a field for users to enter their user name, but we have no way of validating if the user name is valid or not.\nI would like to add a hidden field to the form that would contain one of the following:\n\nUser name used to log into the clients computer from where the form was submitted.\nComputer name of the clients computer from where the form was submitted.\nIP address of the clients computer from where the from was submitted.\n\nI do not care if this data is discovered by Python while the page is being generated, or by a client side script embedded in the generated web page.\nThe majority of users will be using Windows 10 and Chrome or Edge as their browser, but there will be Apple and Linux users and other browsers as well.\nIs this possible? If so, how?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":15,"Q_Id":74880709,"Users Score":0,"Answer":"Would you like every website to have access to your local user- or computer-name, or other local information available to your browser? While there is an aweful lot of information available to webapps, this privacy invasion is not.\nThe server will have a record about the sending IP address though, naturally - even without it being part of a form.\nAs to the \"how\": The Python script that processes the submitted form does have access to the request parameters, and with it typically the remote IP address. What you do with it (e.g. save it) is yours. 
You'll obviously also find the remote IP address in the Apache logs - but there it's disassociated with the actual form submission.","Q_Score":0,"Tags":"python,apache,authentication,https,rhel","A_Id":74887410,"CreationDate":"2022-12-21T19:06:00.000","Title":"How to identify the user or machine making an HTTP request to an Apache web server on RHEL 7 using server side Python or client side script?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have created an environment and have python and all other packages installed in it. Openssl is also available when I check using conda list. But unfortunately, I realized pytorch is missing when I check the list of installed packages. When I try to download the pytorch I get the following error.\nCondaSSLError: Encountered an SSL error. Most likely a certificate verification issue.\nException: HTTPSConnectionPool(host='repo.anaconda.com', port=443): Max retries exceeded with url: \/pkgs\/main\/win-64\/current_repodata.json (Caused by SSLError(\"Can't connect to HTTPS URL because the SSL module is not available.\"))","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":103,"Q_Id":74882814,"Users Score":0,"Answer":"I think this problem is related to ssl updates\nrun the below code in terminal and try again;\n\nconda update --all --no-deps certifi","Q_Score":0,"Tags":"python,pytorch,openssl","A_Id":74882885,"CreationDate":"2022-12-21T23:39:00.000","Title":"Error with openssl when trying to install pytorch","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"If I continue to use the latest version of Selenium V3. 
What is the latest version of Chromedriver and Chrome Browser that I can use?\nI don't want to use Selenium 4 for now.\nThx","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":14,"Q_Id":74886349,"Users Score":0,"Answer":"You can use any version of the Chrome browser with the matching Chromedriver version, whether the latest or an older one. You can use both Selenium 3 and Selenium 4; there is no limitation or any kind of enforcement to use Selenium 4, at least for now.","Q_Score":0,"Tags":"python-3.x,selenium,selenium-chromedriver","A_Id":74886438,"CreationDate":"2022-12-22T09:12:00.000","Title":"Use the last V3 Selenium: Last Chromedriver and last Chrome Browser","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Background:\nI'm working on a discord bot that uses requests. The requests are async, so I'm using the library\nasgiref.sync\n(I know I obviously can't use this function for async functions.)\nI implemented sync_to_async into all the requests and things that may take long to process. The function doesn't produce any error. However, I'm not sure if this actually does anything.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":108,"Q_Id":74911643,"Users Score":1,"Answer":"Async does not magically make your program run in parallel. Within an async function there need to be certain points where the function \"gives up control\", like when waiting for a response from a remote server.\nIf the function you 'converted' to async is either (1) CPU-bound or (2) not giving up control by invoking an await statement somewhere, or (3) both, then the function will simply run to completion without yielding.\nTo put it another way: async is cooperative multitasking. 
Async functions must, at certain points, \"hand over\" control to the async loop to enable others to run.","Q_Score":1,"Tags":"python,asynchronous","A_Id":74912089,"CreationDate":"2022-12-25T05:26:00.000","Title":"Can I use sync_to_async for any function in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"How do I solve the following problem when I run the code?\nservice = service(executable_path=ChromeDriverManager().install())\nTypeError: 'module' object is not callable","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":66,"Q_Id":74917517,"Users Score":0,"Answer":"I think there are a few problems in your code:\n\nYou import the Service module like this:\nfrom selenium import webdriver\nimport Service\nimport ChromeDriverManager\n\n\nbut actually you have to import it like this:\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom webdriver_manager.chrome import ChromeDriverManager\n\nUpdate your modules to the latest version\n\nDon't use module names as variables","Q_Score":0,"Tags":"python,selenium,selenium-chromedriver","A_Id":74917609,"CreationDate":"2022-12-26T05:34:00.000","Title":"TypeError: 'module' object is not callable","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to start a new chat in Telegram with a Python script, but I know it is possible with neither Telegram's Bot nor its API.\nI don't want the user to start the chat with a BOT first!\nHowever I was wondering if you can achieve this in another way. 
I mean, when you create a new chat with the Telegram application, there will be, somehow, an endpoint which handles this request.\nWhy is it impossible to create a Python script which emulates this action?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":55,"Q_Id":74923330,"Users Score":0,"Answer":"I don't want the user to start the chat with a BOT first!\n\nThis is not possible. A Bot can never start a chat; that has to be done by the user themselves.\nAfter the user has started a conversation with the Bot, you can send anything to the user until the user stops the conversation.","Q_Score":0,"Tags":"python,telegram","A_Id":74931072,"CreationDate":"2022-12-26T19:12:00.000","Title":"Telegram start new chat with Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm building a web scraper. When testing in a venv -> [SSL: CERTIFICATE_VERIFY_FAILED]\nBut when I'm testing in an ipython shell -> perfectly good\nI'm wondering what the root problem is?\nThanks for your help!`from urllib.request import urlopen\nfrom bs4 import BeautifulSoup\nimport subprocess\nhtml = urlopen('http:\/\/www.pythonscraping.com\/pages\/page3.html')\nbs = BeautifulSoup(html, 'lxml')\nhtml_tags = bs.prettify()`","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":48,"Q_Id":74929059,"Users Score":0,"Answer":"I think I understand why it can lead to this problem!\nIf you create a virtual environment using python3 -m venv .venv, it is isolated from the system Python. The machine's IP address still does not change, but the environment's dependencies are cut off from the system ones -> this can lead to the situation above.\nBut when you create a venv with conda, it is set up in more detail -> so it might not run into the problem.\nIf something here is wrong, I'm happy to get replies about this one!\nPS: the Python Automation Cookbook helped me to understand this problem deeply, and ChatGPT: network dependencies and libs in a venv.","Q_Score":0,"Tags":"python,ssl-certificate,urllib,urlopen","A_Id":75230551,"CreationDate":"2022-12-27T12:06:00.000","Title":"urllib.request.urlopen error: certificate verify failed on python Virtual Environment","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I was trying to pull incremental data from the commercetools API using QueryPredicates by sending the last modified datetime, but the API is giving full data.\nhttps:\/\/api.{region}.commercetools.com\/{projectkey}\/categories?where=lastModifiedAt > 2022-04-06T00:46:32.037Z\nIn the documentation, it is mentioned as below:\nInclude a time range, for example lastModifiedAt > $1-week-ago and ... (replace $1-week-ago with an actual date)\nI tried sending the last modified datetime as mentioned in the documentation and was expecting it to return data newer than that, but it's not working","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":39,"Q_Id":74937037,"Users Score":3,"Answer":"The datetime string must be enclosed in double quotes. 
And the whole predicate must be urlencoded.\nlastModifiedAt%20%3E%20%222022-04-06T00%3A46%3A32.037Z%22","Q_Score":0,"Tags":"python,json,commercetools","A_Id":74937541,"CreationDate":"2022-12-28T06:42:00.000","Title":"Commercetool Incremental Pull","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using Christofides algorithm to calculate a solution for a Traveling Salesman Problem. The implementation is the one integrated in networkx library for Python.\nThe algorithm accepts an undirected networkx graph and returns a list of nodes in the order of the TSP solution. I'm not sure if I understand the algorithm correctly yet, so I don't really know yet how it determines the starting node for the calculated solution.\nSo, my assumption is: the solution is considered circular so that the Salesman returns to his starting node once he visited all nodes. end is now considered the node the Salesman visits last before returning to the start node. 
The start node of the returned solution is random.\nHence, I understand (correct me if I'm wrong) that for each TSP solution (order of list of nodes) with N nodes that is considered circular like that, there are N actual solutions where each node could be the starting node with the following route left unchanged.\nA-B-C-D-E-F-G-H->A could also be D-E-F-G-H-A-B-C->D and would still be a valid route and basically the same solution only with a different starting node.\nI need to find that one particular solution of all possible starting nodes of the returned order that has the greatest distance between end and start - assuming that that isn't already guaranteed to be the solution that networkx.algorithms.approximation.christofides returns.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":32,"Q_Id":74963650,"Users Score":0,"Answer":"After reading up a bit more on Christofides, it seems like, due to the minimum spanning tree that's generated as first step, the desired result of the first and last node visited being those along the path that are the furthest apart, is already the case.","Q_Score":0,"Tags":"python,networkx,traveling-salesman","A_Id":75038663,"CreationDate":"2022-12-30T16:20:00.000","Title":"Christofides TSP; let start and end node be those that are the farthest apart","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using a python script that login to AWS account with an IAM user and MFA (multi-factor authentication) enabled. The script runs continuously and does some operations (IoT, fetching data from devices etc etc).\nAs mentioned, the account needs an MFA code while starting the script, and it does perfectly. 
But the problem is that the script fails after 36 hours because the token expires.\nCan we increase the session token expiration time, or automate this task so it doesn't ask for the MFA code again and again?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":56,"Q_Id":74968390,"Users Score":0,"Answer":"Unfortunately not, the value can range from 900 seconds (15 minutes) to 129600 seconds (36 hours). If you are using root user credentials, then the range is from 900 seconds (15 minutes) to 3600 seconds (1 hour).","Q_Score":0,"Tags":"python-3.x,amazon-ec2,boto3,multi-factor-authentication","A_Id":74968968,"CreationDate":"2022-12-31T08:26:00.000","Title":"Increase aws session token expiration time","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I tried to fix this problem for hours but I can't solve it. I did read through some similar questions but they couldn't help me.\nI want to use the Selectolax HTMLParser module inside my AWS Lambda function.\nI import the module from a Layer like this:\nfrom selectolax.parser import HTMLParser\nI always get the error:\n\"errorMessage\": \"cannot import name 'parser' from partially initialized module 'selectolax' (most likely due to a circular import)\nThe problem does not lie in the name of my function\/file; I called it \"Test123\". 
As Selectolax is a public module, I was afraid to change something after installing it with pip.\nI reinstalled the package at least 3 times and uploaded it again as a layer.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":33,"Q_Id":74975839,"Users Score":0,"Answer":"Reinstalling the package with an older version (0.3.11) solved the problem.","Q_Score":0,"Tags":"python,lambda,module,html-parsing","A_Id":74980226,"CreationDate":"2023-01-01T16:30:00.000","Title":"Lambda Selectolax Import partially initialized module 'selectolax'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have an API (a Django app) that lots of people use, and I want this API to handle millions of requests.\n\nHow can I make it distributed so the API can handle many requests?\nShould I put the producer and consumer in one file?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":40,"Q_Id":74982240,"Users Score":0,"Answer":"You need an HTTP load balancer, not Kafka, to scale incoming API requests.\nOnce a request is made, you can produce Kafka events, or try to do something with a consumer as long as you aren't blocking the HTTP response. File organization doesn't really matter, but yes: one producer instance per app, while multiple consumer threads can be started independently, as needed","Q_Score":0,"Tags":"python,django,apache-kafka","A_Id":74985196,"CreationDate":"2023-01-02T12:06:00.000","Title":"Django-kafka. 
Distributed requests to an endpoint to handle millions of requests","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I would like to know if there is a way for FastAPI to receive a URL of a file as a parameter and save this file to disk? I know it is possible with the requests library using requests.get() method, but is it possible with FastAPI to receive the URL and save it directly?\nI tried using file: UploadFile = File(...), but then it doesn't download the file when the URL is sent.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":55,"Q_Id":74986488,"Users Score":1,"Answer":"I don't believe so. I've come across this before and was unable to find a solution (and ended up using requests like you mentioned), but seeing this I wanted to check again more thoroughly.\nReviewing the uvicorn and fastapi repositories by searching the code itself, I see no functions\/code that reference requests or urllib (they do use urllib.parse\/quote, etc though) that would be 2 likely suspects to build requests. They do use httpx.AsyncClient, but only in tests. 
I would expect to see some use of these libraries in the main uvicorn\/fastapi libraries if they had code to make external requests.\nSeeing the above, I actually think I will change my code to use httpx.AsyncClient anyways since it is already a dependency.","Q_Score":1,"Tags":"python,download,fastapi,starlette","A_Id":74986708,"CreationDate":"2023-01-02T19:50:00.000","Title":"How to receive URL File as parameter and save it to disk using FastAPI?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I use an API that has ~30 endpoints and I have settings how often I need to send request to each endpoint. For some endpoints it's seconds and for some hours. I want to implement python app that will call each API endpoint (and execute some code) after every N seconds where N can be different for each endpoint. If one call is still in progress when second one kicks in, then that one should be added to queue (or something similar) and executed after the first one finishes.\nWhat would be the correct way to implement this using python?\nI have some experience with RabbitMQ but I think that might be overkill for this problem.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":32,"Q_Id":75016129,"Users Score":0,"Answer":"You could build your code in this way:\n\nstore somewhere the URL, method and parameters for each type of query. A dictionary would be nice: {\"query1\": {\"url\":\"\/a\",\"method\":\"GET\",\"parameters\":None} , \"query2\": {\"url\":\"\/b\", \"method\":\"GET\",\"parameters\":\"c\"}} but you can do this any way you want, including a database if needed.\n\nstore somewhere a relationship between query type and interval. 
Again, you could do this with a case statement, or with a dict (maybe the same one you previously used), or an interval column in a database.\n\nEvery N seconds, push the corresponding query entry to a queue (queue.put)\n\na worker using an HTTP client library such as requests runs continuously: it removes an element from the queue, runs the HTTP request, and when it gets a result it takes the next element.\n\n\nOf course if your code is going to be distributed across multiple nodes for scalability or high availability, you will need a distributed queue such as RabbitMQ, Ray or similar.","Q_Score":1,"Tags":"python,python-3.x","A_Id":75016339,"CreationDate":"2023-01-05T09:14:00.000","Title":"Sending requests to different API endpoints every N seconds","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to open files from a webpage. For example, when we try to download a torrent file it redirects us to the uTorrent app, which continues the work. I also want to open a local file somehow using OS software, like a video file using PotPlayer. Is there any possible solution for me, like making an autorun on the PC to run that? 
Anything it may be, please help me.\ud83d\ude14\ud83d\ude14\nI searched and found a solution to open a software using a protocol, but in this way I cannot open a file in that software.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":44,"Q_Id":75031941,"Users Score":0,"Answer":"The link acts as a magnet, so your torrent application is opened; maybe remove the torrent association for some time till you finish the project. I know how to open an image from local files in HTML, but it will only be visible to you. You can do audio and video files also using ","Q_Score":0,"Tags":"javascript,python,html,protocols","A_Id":75032911,"CreationDate":"2023-01-06T14:03:00.000","Title":"Cannot open a local file from webpage","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I want to open files from a webpage. For example, when we try to download a torrent file it redirects us to the uTorrent app, which continues the work. I also want to open a local file somehow using OS software, like a video file using PotPlayer. Is there any possible solution for me, like making an autorun on the PC to run that? 
Anything it may be, please help me.\ud83d\ude14\ud83d\ude14\nI searched and found a solution to open a software using a protocol, but in this way I cannot open a file in that software.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":44,"Q_Id":75031941,"Users Score":0,"Answer":"Opening a specific file in a specific software would usually depend on passing some URL parameters to the protocol-URL of the app (e.g., opening a file in VSCode would use a URL like vscode:\/\/\/Users\/me\/file.html, but this functionality would have to be explicitly handled by the app itself though, so the solution for each app would be different).\nOtherwise, if the app doesn't support opening a specific file itself through a URL, you'd have to use some scripting software (e.g. AppleScript if you're on macOS) to dynamically click\/open certain programs on a user's computer.","Q_Score":0,"Tags":"javascript,python,html,protocols","A_Id":75031988,"CreationDate":"2023-01-06T14:03:00.000","Title":"Cannot open a local file from webpage","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have pyautogui code that repeats an order to click on a webpage, but sometimes that webpage freezes and does not load; how could I detect that?\n\nThe webpage is not in Selenium, and Chrome has been opened by pyautogui too.\n\nUpdate 1:\nI have just realised that the website will realise that I have been on the website for a long time, so it will not load certain elements. 
This usually happens every 20 minutes.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":33,"Q_Id":75045356,"Users Score":1,"Answer":"I finally solved the problem by simply reloading the page every 20 minutes.","Q_Score":1,"Tags":"python,pyautogui","A_Id":75046354,"CreationDate":"2023-01-08T03:40:00.000","Title":"Pyautogui inactivity detection","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I can't find how to set up or change the Webhook through the API.\nIs it possible to change it, set it when I am buying a number, or select one Webhook URL for all numbers?\nI tried to find this info in the documentation but nothing there was helpful to me","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":73,"Q_Id":75076691,"Users Score":0,"Answer":"You will have to log into your Twilio console.\nFrom the Develop tab, select Phone Numbers, then Manage > Active Numbers.\nYou can set the default Webhook (and a back-up alternate Webhook) by clicking on the desired number and entering it under the respective Phone or (if available) SMS fields. You will likely have to set the Webhook (takes 2 seconds) for each phone number purchased, as the default is the Twilio Demo URL (which replies back with Hi or something).\nThe nature of a Webhook should allow any change in functionality to be done externally (on your end) through your Webhook script's functionality, and thus dynamically changing the Webhook URL through the API on a case-by-case basis is discouraged and frankly should not be necessary. 
Someone may correct me if mistaken.","Q_Score":0,"Tags":"twilio,webhooks,twilio-api,twilio-python","A_Id":75084856,"CreationDate":"2023-01-10T22:57:00.000","Title":"WebHook in Twilio API","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Title is all.\nI think, as a result, they are the same function.\nAre \"driver.refresh()\" and \"driver.get(current_url)\" perfectly the same?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":75078506,"Users Score":0,"Answer":"Refresh is the same as a browser refresh. It just reloads the same URL. The get('url') option is the equivalent of typing out a URL in the URL bar and pressing enter. Selenium waits for the website to be loaded before executing the next script.","Q_Score":0,"Tags":"python,selenium","A_Id":75083476,"CreationDate":"2023-01-11T04:59:00.000","Title":"Are \"driver.refresh()\" and \"driver.get(current_url)\" the same?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to install and use python on Windows 11 for purposes of Meraki API calls. 
I have installed Python version 3.11 and am now trying to run\npip install --upgrade requests\npip install --upgrade meraki\nbut these commands return the following error:\nWARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))': \/simple\/requests\/\nERROR: Could not find a version that satisfies the requirement requests (from versions: none)\nERROR: No matching distribution found for requests\nWARNING: There was an error checking the latest version of pip.\nI don't think the firewall is blocking it but I am not sure what I need to look for in the firewall - does anyone know the addresses that need to be unblocked?\nOr is there another reason for this error?\nThanks!\nI tried adding a firewall rule but didn't know what I needed to add.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":49,"Q_Id":75093960,"Users Score":0,"Answer":"Try to use another pip index-url\nFor example:\npip install --upgrade requests -i https:\/\/pypi.tuna.tsinghua.edu.cn\/simple\/ --trusted-host pypi.tuna.tsinghua.edu.cn","Q_Score":0,"Tags":"python,python-3.x,meraki-api","A_Id":75107000,"CreationDate":"2023-01-12T09:16:00.000","Title":"How to fix Python PIP update failing","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Selenium find-element line: element.find_elements(By.XPATH, '.\/following::input')\nBut somehow it takes way too long to search for all the following elements on the page (around 2\/3 of a second!)\nIs there a way around it (or have I done something wrong)?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":24,"Q_Id":75108348,"Users Score":0,"Answer":"Just make it like this and it will sort out the 
problem (it will search for the first 10 following elements instead of all of them)\n\nelement.find_elements(By.XPATH, '.\/\/following::input[position()<=10]')","Q_Score":0,"Tags":"python,selenium,xpath","A_Id":75108472,"CreationDate":"2023-01-13T11:03:00.000","Title":"Python \/ Selenium - searching following elements takes too long","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm creating a Discord bot in Python and I would like to make my bot command the music bot to play music. For example, I want my bot to write \/play prompet:[SONG_NAME] in a chat room and let it be recognized and played by the other music bot. If someone has an idea to make it work, please help!\nI've been trying to just write a string with my own bot, \"\/play prompet:[SONG_NAME]\", but the other bot is not reacting.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":49,"Q_Id":75110954,"Users Score":0,"Answer":"You can't do this. Discord.py by default doesn't invoke commands on messages of other bots, unless you override on_message and call process_commands without checking the message author.\nConsequently, if the bot is not yours and you cannot control it, there's nothing you can do about it. 
If the other bot allows it then it will work without you having to do anything.\nInvoking slash commands from chat will never work, as they're not made to be called by bots.","Q_Score":0,"Tags":"python,discord,discord.py,bots","A_Id":75111110,"CreationDate":"2023-01-13T15:06:00.000","Title":"How to make a discord bot use other discord bot commands?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to access a site that asks to verify that a human is accessing it; the tool used by the site is Cloudflare.\nI use the user-agent to access sites and so far I haven't had any problems, but with the current site I'm facing this barrier. One detail: I configured a 100-second sleep so I could do the verification manually, and even so the site recognizes that the webdriver is a robot.\noptions.add_argument('--user-agent=\"Mozilla\/5.0 (Windows Phone 10.0; Android 4.2.1; Microsoft; Lumia 640 XL LTE) AppleWebKit\/537.36 (KHTML, like Gecko) Chrome\/42.0.2311.135 Mobile Safari\/537.36 Edge\/12.10166\"')","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":48,"Q_Id":75121677,"Users Score":0,"Answer":"Maybe changing your public IP address would work. I had this issue before and struggled with headers and drivers.\nBut this varies from website to website.","Q_Score":0,"Tags":"python,selenium","A_Id":75273894,"CreationDate":"2023-01-14T22:26:00.000","Title":"How not to be detected by browser using selenium?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is it possible to download a video with controlslist=\"nodownload\" and if so how? 
There is a poster tag and a src tag with urls, but when I tried to open them it only said Bad URL hash.\nthe whole thing looks like this: click<\/a>\nAnd in developer tools, if you inspect that button, it renders:\nclick\/a>\nThis works perfectly. I can click the button and it takes me to the product. However, I am using SEMrush to audit my website, and it is raising the issue that this button is a \"Broken internal link\". The URL that it gives that shows it as broken is:\nhttps:\/\/www.example.com\/indProduct\/productdb/new\nHow do I fix this? The website works completely fine. There are no broken links anywhere. But for some reason, SEMrush logs that as a broken internal link...","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":32,"Q_Id":76616691,"Users Score":0,"Answer":"Some servers will actually decode URLs thus making the URL encoding pointless in the main URL. You can pass your product name in a GET parameter or you can replace the forward slash with a different character and then replace it back when you read it.","Q_Score":1,"Tags":"javascript,python,html,flask,urlencode","A_Id":76627413,"CreationDate":"2023-07-05T02:01:00.000","Title":"URLencoded shows as broken internal link","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1}]