Dataset schema (one row per Stack Overflow answer; the eight topic columns are 0/1 flags):

- Question (stringlengths): 28 to 6.1k
- Answer (stringlengths): 14 to 7k
- Title (stringlengths): 15 to 149
- Tags (stringlengths): 6 to 90
- CreationDate (stringlengths): 23 to 23
- is_accepted (bool): 2 classes
- Q_Id (int64): 337 to 51.9M
- A_Id (int64): 635 to 72.5M
- Score (float64): -1 to 1.2
- Users Score (int64): -8 to 412
- Q_Score (int64): 0 to 1.53k
- ViewCount (int64): 13 to 1.34M
- AnswerCount (int64): 1 to 28
- Available Count (int64): 1 to 12
- Web Development (int64): 0 to 1
- Data Science and Machine Learning (int64): 0 to 1
- Database and SQL (int64): 0 to 1
- GUI and Desktop Applications (int64): 0 to 1
- Networking and APIs (int64): 1 to 1
- Other (int64): 0 to 1
- Python Basics and Environment (int64): 0 to 1
- System Administration and DevOps (int64): 0 to 1

Sample rows:

Title: I am getting No module named 'oauth2._compat'
Tags: python-3.x, oauth-2.0 | Created: 2018-08-04T23:55:00.000 | Q_Id: 51,690,431 | Q_Score: 0 | Views: 266 | Answers: 1 | Available: 1
Topics: Other, Networking and APIs

Question: I had problems importing the oauth2 package. In the __init__.py file, this line fails to execute: from ._compat import PY3. I don't know why installing and running oauth2 is such a mess.

Answer (A_Id: 51,692,171, not accepted, Users Score: 1, Score: 0.197375): This worked for me: from oauth2._compat import PY3. The error you were getting suggests that you were trying to import __main__.compat instead of oauth2._compat.

Title: Jasmin HttpConnector for MT
Tags: python, sms-gateway, jasmin-sms | Created: 2018-08-06T05:19:00.000 | Q_Id: 51,700,964 | Q_Score: 3 | Views: 1,158 | Answers: 2 | Available: 2
Topics: Other, Networking and APIs

Question: Is there any way to implement an HTTP connector for sending messages (MT) in Jasmin? According to the documentation, Jasmin's HTTP API supports the SMPP connector only. Update 1: more information on the scenario: I have 4 SMS providers that I need to integrate with Jasmin. One of them uses the SMPP protocol and works fine with Jasmin's SMPP connector. The other 3 use HTTP (call a URL with params to send an SMS). I want to use HTTP with Jasmin so I can still benefit from its routing and other features.

Answer (A_Id: 57,054,186, not accepted, Users Score: 1, Score: 0.099668): Jasmin only supports HTTP client connectors for MO (mobile-originated) messages. Having found myself in the same scenario, the simplest solution I found was to write an SMPP-to-HTTP service that Jasmin connects to, which relays MT messages via HTTP. Hope that helps.
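
The HTTP leg of such a relay amounts to mapping each MT message onto the provider's "call a URL with params" interface. A minimal sketch with requests; the parameter names are hypothetical and must be swapped for your provider's actual API:

```python
import requests

def forward_mt(provider_url, api_key, msisdn, text):
    """Relay one MT message to an HTTP SMS provider (param names are made up)."""
    resp = requests.get(
        provider_url,
        params={'key': api_key, 'to': msisdn, 'message': text},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.text  # provider's status / message ID
```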

Title: Jasmin HttpConnector for MT
(second answer to the question above, Q_Id: 51,700,964)

Answer (A_Id: 57,823,022, not accepted, Users Score: 0, Score: 0): Here is an overview of adding HTTP MT support to Jasmin:
1. Add a connector class and a manager for the HTTP MT connector.
2. Add a router manager.
3. Modify the SMPP protocol module and detach the HTTP MT call from it before it is dispatched to the SMPP queue. Detach after the router has selected your custom connector and the user's balance has been deducted from the account, but before the transaction is queued.
4. By "detach" I mean: use your own RabbitMQ queue and publish your transaction to it.
5. Create a subscriber for RabbitMQ and send responses back as required; see the sketch after this list.
Using this method returns the same message IDs and responses as SMPP. For more details or help, please comment.
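
The "create a subscriber for RabbitMQ" step might look like this with pika 1.x; the queue name and the handling inside the callback are made up for illustration:

```python
import pika

def on_message(channel, method, properties, body):
    # Hypothetical: relay the detached MT transaction over HTTP here,
    # then acknowledge so the message leaves the queue
    print('got MT transaction:', body)
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='http_mt')
channel.basic_consume(queue='http_mt', on_message_callback=on_message)
channel.start_consuming()
```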

Title: client.accept_invite won't work, account tries to join but instead it just hangs
Tags: python, discord, discord.py | Created: 2018-08-06T13:32:00.000 | Q_Id: 51,708,875 | Q_Score: 0 | Views: 437 | Answers: 1 | Available: 1
Topics: Networking and APIs

Question: I've tried client.accept_invite("<invite link or invite ID>") (tried both, many times), but it doesn't seem to work. When I put the invite code in, the script just keeps hanging and nothing happens. On top of that, every time I try to join the invite link I need to reconfirm the account's email; after I reconfirm it, I try again, it asks me to confirm again, and so on. I've read the Discord API docs and they still list accept_invite, so I don't see why this won't work. I've tried logging into the account with a token and with email+password; both give the same result: no error, just hanging, and the email needs to be reconfirmed. Any help would be appreciated.

Answer (A_Id: 51,708,927, accepted, Users Score: 1, Score: 1.2): Client.accept_invite() is (a) deprecated and (b) intended for user accounts. Additionally, logging in with email+password flags your account and could result in punishment (that's the last I heard, at least). Don't use it; instead, make a bot account and ask users to invite your bot through its OAuth URL. Each time you use an endpoint you shouldn't (like the one used to log in with username and password), Discord unverifies and flags your account.
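
A minimal bot-account sketch along the lines the answer suggests, assuming the discord.py 1.x ("rewrite") API; the token is a placeholder for one issued via the Discord developer portal:

```python
import discord

client = discord.Client()

@client.event
async def on_ready():
    print(f'Logged in as {client.user}')

@client.event
async def on_message(message):
    if message.author == client.user:
        return  # ignore our own messages
    if message.content == '!ping':
        await message.channel.send('pong')

# A bot token, never email+password
client.run('BOT_TOKEN')
```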
1
0
I'm new to backend and I've written a python script which imports libraries like Flask, sqlalchemy. From taking help from here and there I've been successfully able to get JSON as response to a get call using localhost or http://127.0.0.1/. Now that this is done I want to take this action on a live server so right now I've hostgator and I've created a folder there so it'll be like mydomain.com/api/. Now my questions is that do I need to place an index.html in this folder which makes a call to run myscript.py or I can directly call mydomain.com/api/myscript.py and it'll return the JSON? My script is basically a recommendation model that returns recommendations to users upon request.
false
51,715,736
0
0
0
0
It's possible to make a request via PHP with the curl_* functions, or you can do the same in JavaScript (e.g. within your HTML file) using AJAX (XMLHttpRequest).
0
378
0
1
2018-08-06T21:15:00.000
python,json,flask,flask-sqlalchemy
How to Call API written in Python using HTML or PHP
1
1
2
51,715,768
0
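
Note that the Flask app has to be running as a server process on the host; the PHP or AJAX call then simply requests its URL. A sketch of the kind of endpoint such a call would hit, with a hypothetical route and placeholder result:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route('/api/recommend')
def recommend():
    # Hypothetical: look up recommendations for the given user ID
    user_id = request.args.get('user_id', type=int)
    recommendations = [{'item': 'example', 'score': 0.9}]  # placeholder
    return jsonify(user=user_id, recommendations=recommendations)

if __name__ == '__main__':
    app.run()
```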

Title: How to communicate between electron app and opened other browser?
Tags: python, vue.js, electron | Created: 2018-08-08T20:25:00.000 | Q_Id: 51,755,103 | Q_Score: 0 | Views: 77 | Answers: 1 | Available: 1
Topics: Networking and APIs

Question: I want to communicate between an Electron app and another, separately opened browser. I have to send params from Electron to the other browser, or from the browser to the Electron app, but I haven't found a way. Please help!

Answer (A_Id: 51,757,041, not accepted, Users Score: 1, Score: 0.197375): This does not seem very realistic, but I have a few ideas. Electron -> browser: run a web server that you can open in the browser, or write a browser extension. Browser -> Electron: register a special URL scheme like the ones Apple uses for the App Store (e.g. itmss://itunes.apple.com/de/store?...). For security reasons it is not possible to "talk" to a browser and access other webpages' information; otherwise it would be easy to leak cookies and personal data.
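
Since the question is tagged python, here is one way the "web server" idea could be sketched: a tiny local HTTP bridge that either side POSTs params to and the other side polls. The endpoints are made up for illustration:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
inbox = []  # params waiting to be picked up by the other side

@app.route('/send', methods=['POST'])
def send():
    inbox.append(request.get_json())  # Electron or the browser POSTs here
    return jsonify(status='ok')

@app.route('/poll')
def poll():
    pending, inbox[:] = list(inbox), []  # drain whatever has arrived
    return jsonify(messages=pending)

if __name__ == '__main__':
    app.run(port=5000)
```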

Title: Selenium calling class elements doesn't work
Tags: python, python-3.x, selenium | Created: 2018-08-08T21:30:00.000 | Q_Id: 51,755,916 | Q_Score: 1 | Views: 116 | Answers: 1 | Available: 1
Topics: Networking and APIs

Question: I'm trying to push a button on a soccer bookmaker's web page using Selenium's ChromeDriver. My problem is that nothing happens when I call driver.find_elements_by_class_name('c-events__item c-events__item_col'). Fixed: I was trying to get the info from a class named 'c-events__more c-events__more_bets js-showMoreBets'. find_elements_by_class_name() cannot handle spaces, since it treats them as compound classes; I used a CSS selector instead and it works like a charm now: driver.find_elements_by_css_selector('.c-events__item.c-events__item_col')

Answer (A_Id: 51,764,087, not accepted, Users Score: 0, Score: 0): find_elements_by_class_name() cannot handle spaces, since it treats them as compound classes. Use a CSS selector instead and it works like a charm: driver.find_elements_by_css_selector('.c-events__item.c-events__item_col')
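
A minimal illustration of the difference, using the Selenium 3-era finder methods from the question (the URL is a placeholder):

```python
from selenium import webdriver

driver = webdriver.Chrome()
driver.get('https://example.com')  # placeholder URL

# Fails: find_elements_by_class_name takes a single class;
# a space makes Selenium treat it as a compound selector
# driver.find_elements_by_class_name('c-events__item c-events__item_col')

# Works: chain both classes in one CSS selector
items = driver.find_elements_by_css_selector('.c-events__item.c-events__item_col')
print(len(items))
driver.quit()
```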

Title: Does Twitter's changes to their public APIs affect Tweepy?
Tags: python, twitter, tweepy | Created: 2018-08-09T17:55:00.000 | Q_Id: 51,773,215 | Q_Score: 0 | Views: 90 | Answers: 1 | Available: 1
Topics: Other, Networking and APIs

Question: Twitter announced that Site Streams, User Streams, and the legacy Direct Message endpoints, originally slated for retirement on June 19th 2018, will be deprecated on Wednesday, August 16, 2018, which provides 3 months from the release of the Account Activity API for migration. I am wondering whether those API changes affect the tweepy.Stream class.

Answer (A_Id: 51,773,664, not accepted, Users Score: 1, Score: 0): "I am wondering whether those API changes affect the tweepy.Stream class." Yes. Tweepy is not a special case: on August 16, those streaming APIs will be shut off, and any code in Tweepy that interacted with them will no longer function.

Title: Can I get a list of all urls on my site from the Google Analytics API?
Tags: python, google-analytics, web-crawler, google-analytics-api | Created: 2018-08-11T14:25:00.000 | Q_Id: 51,800,600 | Q_Score: 0 | Views: 346 | Answers: 1 | Available: 1
Topics: Web Development, Networking and APIs

Question: I have a site, www.domain.com, and want to get all of the URLs from my entire website and how many times each has been clicked on, from the Google Analytics API. I am especially interested in some of my external links (the ones that don't contain www.mydomain.com). I will then match this against all of the links on my site (I need to get these from somewhere, so I may scrape my own site). I am using Python and want to do this programmatically. Does anyone know how?

Answer (A_Id: 51,801,255, accepted, Users Score: 1, Score: 1.2): For "all of the urls from my entire website and how many times they have been clicked on", I guess you need the Page dimension and the Pageviews metric. For "I am especially interested in some of my external links": you can get a list of external links if you track them as events. Also try a crawler, for example Screaming Frog; it collects internal and external links and is free for up to 500 pages.
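
A sketch of pulling the Page dimension with the Pageviews metric from the Reporting API v4, assuming a service-account key file and a view ID (both placeholders):

```python
from googleapiclient.discovery import build
from oauth2client.service_account import ServiceAccountCredentials

SCOPES = ['https://www.googleapis.com/auth/analytics.readonly']
creds = ServiceAccountCredentials.from_json_keyfile_name('key.json', SCOPES)
analytics = build('analyticsreporting', 'v4', credentials=creds)

response = analytics.reports().batchGet(body={
    'reportRequests': [{
        'viewId': 'VIEW_ID',  # placeholder
        'dateRanges': [{'startDate': '30daysAgo', 'endDate': 'today'}],
        'dimensions': [{'name': 'ga:pagePath'}],
        'metrics': [{'expression': 'ga:pageviews'}],
    }]
}).execute()

for row in response['reports'][0]['data'].get('rows', []):
    print(row['dimensions'][0], row['metrics'][0]['values'][0])
```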

Title: Using selenium to write to Chrome console
Tags: python, selenium | Created: 2018-08-11T23:24:00.000 | Q_Id: 51,804,531 | Q_Score: 1 | Views: 169 | Answers: 1 | Available: 1
Topics: Networking and APIs

Question: I'm wondering whether it is possible to use Selenium to write to the Chrome console from Python. I do not mean using send_keys to press F12, open the console, and then send more keys to type into it. If this isn't possible, are there any libraries or APIs that would let me do this? Thanks!

Answer (A_Id: 51,804,569, accepted, Users Score: 1, Score: 1.2): Look at the problem from another angle: use browser.execute_script and pass console.log("whatever you want to write in the console"). That will print the sentence in the console. You can of course adapt it to your own needs, but you get the idea.
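
Concretely, that might look like this (the URL is a placeholder); the script runs in the page's JavaScript context, so the message lands in Chrome's console:

```python
from selenium import webdriver

driver = webdriver.Chrome()
driver.get('https://example.com')  # placeholder URL
driver.execute_script('console.log("written from Selenium");')
```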

Title: (AWS) What happens to a python script without enough CPU?
Tags: python, amazon-s3, amazon-ec2 | Created: 2018-08-13T02:25:00.000 | Q_Id: 51,814,464 | Q_Score: 0 | Views: 56 | Answers: 1 | Available: 1
Topics: Other, Networking and APIs

Question: My small AWS EC2 instance runs two Python scripts: one receives JSON messages over a WebSocket (~2 msg/ms) and writes them to a CSV file, and one compresses and uploads the CSVs. After testing, the data recorded by the EC2 instance (~2.4 GB/day) is sparser than when recorded on my own computer (~5 GB/day). Monitoring shows the EC2 instance consumed all of its CPU credits and is operating at baseline power. My question: does the instance drop messages because it cannot write them fast enough? Thank you to anyone who can provide insight!

Answer (A_Id: 51,814,600, accepted, Users Score: 1, Score: 1.2): It depends on the WebSocket server. If your first script cannot keep up with the message generation rate on the server side, the TCP receive buffer will fill up and the server will slow down on sending packets. Assuming a near-constant message production rate, unprocessed messages will pile up on the server, and the server could be coded to let them accumulate or to eventually drop them. Even if the server never dropped a message, without enough computational power your instance would never catch up - on 8/15 it could still be dealing with messages from 8/10 - so an instance upgrade is needed. Does the data rate vary greatly throughout the day (e.g. many more messages in the evening rush around 20:00)? If so, data loss may have occurred during those periods. But is Python really that slow? 5 GB/day is less than 100 KB per second, and even a fraction of one modern CPU core can easily handle that. Perhaps you should stress-test your scripts and optimize them (reduce small disk writes, etc.).
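
On the "reduce small disk writes" point, a sketch of buffering incoming messages and flushing them in batches instead of writing each one individually (the batch size is arbitrary):

```python
import csv

def write_batched(messages, path, batch_size=1000):
    """Append rows to a CSV, flushing in batches to cut per-message writes."""
    with open(path, 'a', newline='') as f:
        writer = csv.writer(f)
        batch = []
        for msg in messages:
            batch.append(msg)
            if len(batch) >= batch_size:
                writer.writerows(batch)
                batch = []
        if batch:
            writer.writerows(batch)
```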

Title: Python: What happens if script stops while requests.get() is executing?
Tags: python, http, python-requests, python-requests-html | Created: 2018-08-13T23:12:00.000 | Q_Id: 51,831,726 | Q_Score: 15 | Views: 993 | Answers: 2 | Available: 1
Topics: Networking and APIs

Question: I know that requests.get() provides an HTTP interface so the programmer can make various requests of an HTTP server. That tells me that somewhere a port must be opened so the request can happen. Given that, what happens if the script is stopped (say, by a KeyboardInterrupt, so the machine executing the script remains connected to the internet) before the request is answered or complete? Does the port/connection remain open, or is it closed automatically?

Answer (A_Id: 51,929,463, not accepted, Users Score: 1, Score: 0.099668): At a much lower level, when a program exits, the OS kernel closes all file descriptors opened by that program. These include network sockets.

Title: Best way to make a web interface for python script
Tags: python, web, interface | Created: 2018-08-14T18:57:00.000 | Q_Id: 51,848,143 | Q_Score: 0 | Views: 2,380 | Answers: 2 | Available: 2
Topics: Web Development, Networking and APIs

Question: I've made a small Python script to scrape the web. I would like to make a nice, simple web interface where the user can enter data to search for and have the results displayed as a list. I understand there are many different ways to do this, but I don't know which would be best in my case. I would like something really simple and light, running locally with as few dependencies as possible. So far I've been thinking about: a Node.js server displaying content and executing the script; a Python web framework (web.py, Flask, Django..?); or a local web server (XAMPP) and CGI. Note that I don't know much about web development in Python, but I'm somewhat used to Node.js. What would you recommend? Thanks, Victor

Answer (A_Id: 51,848,568, not accepted, Users Score: 0, Score: 0): Socket.IO makes it very easy to send data between websites and scripts. The website connects to the Socket.IO server, and inside the server the Python script can be executed.

Title: Best way to make a web interface for python script
(second answer to the question above, Q_Id: 51,848,143)

Answer (A_Id: 51,848,647, accepted, Users Score: 1, Score: 1.2): Personally I prefer gevent, bottle or Flask, plus a front-end framework like Bootstrap or Framework7. Gevent easily makes the app asynchronous and has WebSockets built right in, and bottle is the easiest (and fastest) way to build a web app or API.
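
To show how small that can be with bottle, a sketch where my_scraper stands in for the existing scraping script:

```python
from bottle import Bottle, request, template

app = Bottle()

def my_scraper(query):
    # Stand-in for the existing scraping function
    return [f'result for {query!r} #{i}' for i in range(3)]

@app.route('/')
def index():
    return '<form action="/search"><input name="q"><button>Search</button></form>'

@app.route('/search')
def search():
    results = my_scraper(request.query.q)
    tpl = """
<ul>
% for r in results:
  <li>{{r}}</li>
% end
</ul>
"""
    return template(tpl, results=results)

app.run(host='localhost', port=8080)
```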

Title: Learning the Route Structure of a Website
Tags: html, python-3.x, routing | Created: 2018-08-14T19:07:00.000 | Q_Id: 51,848,283 | Q_Score: 0 | Views: 35 | Answers: 1 | Available: 1
Topics: Web Development, Networking and APIs

Question: I found a website I want to send POST requests to, and then get the HTML of the following GET request. Is there any way to find out how these routes are organised? I want to perform multiple searches through a form on the site while looping through an array, using Python.

Answer (A_Id: 51,849,147, not accepted, Users Score: 0, Score: 0): There is a good chance the website has an API available for you to use. In the developer tools, look at the Network tab for AJAX requests or XMLHttpRequests as a hint; then make use of a requests-type package and go from there. There is also a good chance the website follows a RESTful architecture for its routes; there are interesting reads on the first results page of Google for this topic, most likely Stack Overflow answers. However, if your question is really why a website would not completely expose its backend, you might not have considered the consequences.
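
Once the Network tab reveals the request method, URL, and body fields, replaying the form in a loop is straightforward. A sketch with a hypothetical endpoint and field name:

```python
import requests

session = requests.Session()
terms = ['alpha', 'beta', 'gamma']

for term in terms:
    # Endpoint and field name are hypothetical -- copy the real ones
    # from the form submission shown in the browser's Network tab
    resp = session.post('https://example.com/search', data={'q': term})
    resp.raise_for_status()
    print(term, len(resp.text))
```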

Title: Identifying parameters in HTTP request
Tags: python, api, http, python-requests | Created: 2018-08-15T17:19:00.000 | Q_Id: 51,863,463 | Q_Score: 0 | Views: 119 | Answers: 1 | Available: 1
Topics: Networking and APIs

Question: I am fairly proficient in Python and have started exploring the requests library to formulate simple HTTP requests. I have also looked at Session objects, which let me log in to a website and - using the session key - continue to interact with it through my account. Here comes my problem: I am trying to build a simple API client in Python to perform actions I can already do via the website, but I don't know how certain HTTP requests need to look in order to implement them via the requests library. In general, when I know how to perform a task via the website, how can I identify: the type of HTTP request (GET or POST will suffice in my case); the URL, i.e. where the resource is located on the server; and the body parameters I need to specify for the request to succeed?

Answer (A_Id: 51,863,598, accepted, Users Score: 0, Score: 1.2): This has nothing to do with Python, but you can use a network proxy to examine your requests: download a proxy such as Burp Suite; set up your browser to route all traffic through it (the default is localhost:8080); deactivate packet interception (in the Proxy tab); browse to your target website normally; then examine the request history in Burp Suite. You will find all the information you need.

Title: networkx shortest path from a path to vertex
Tags: python-3.x, networkx | Created: 2018-08-15T18:12:00.000 | Q_Id: 51,864,195 | Q_Score: 2 | Views: 191 | Answers: 2 | Available: 2
Topics: Networking and APIs

Question: I am using networkx to calculate shortest paths between vertices with Dijkstra's algorithm. I have a case where I want to connect three different vertices (for example A, B and C) in an undirected graph. First I find the shortest path from A to B; then I want to find the shortest path from that A-B path to C. What I have tried so far: calculate the shortest path length from every node on the A-B path to C, then take the shortest path from the node that gives the minimum length. This is computationally intensive, as the path may contain 200 to 300 nodes. Can anyone give me a hint on how to improve this approach, or an easier way to find the shortest path from already existing edges to a target?

Answer (A_Id: 51,881,211, not accepted, Users Score: 0, Score: 0): Find the shortest path X = A to B, the shortest path Y = A to C, and the shortest path Z = B to C, then combine the paths. shortest_path(G, source, target), imported from networkx, returns the shortest path between two vertices. If it is implemented with Dijkstra's algorithm, its efficiency is asymptotically equivalent to Dijkstra's algorithm.

Title: networkx shortest path from a path to vertex
(second answer to the question above, Q_Id: 51,864,195)

Answer (A_Id: 51,864,929, accepted, Users Score: 1, Score: 1.2): Add a new node, 'auxiliary', to your graph. For each node u on the A-B path, add an edge from u to 'auxiliary'. Find the shortest path from C to 'auxiliary', then truncate it by removing the final node 'auxiliary'. What remains is the shortest path from C to the A-B path. More generally, this approach works whenever you want the shortest path from a node to a set of nodes, and (with a bit of generalization) it finds the shortest path from one set of nodes to another.
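
A sketch of that construction with networkx; the zero-weight edges into the auxiliary node keep it from distorting path lengths, and the node name is an arbitrary sentinel:

```python
import networkx as nx

def shortest_path_to_path(G, path_nodes, source, weight='weight'):
    """Shortest path from `source` to any node on an existing path."""
    H = G.copy()
    aux = '_aux'  # arbitrary sentinel node
    for u in path_nodes:
        H.add_edge(u, aux, **{weight: 0})
    sp = nx.shortest_path(H, source=source, target=aux, weight=weight)
    return sp[:-1]  # drop the auxiliary node

# Usage: route C onto the existing A-B path
# ab_path = nx.shortest_path(G, 'A', 'B', weight='weight')
# print(shortest_path_to_path(G, ab_path, 'C'))
```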

Title: Data ingestion using kafka from crawlers
Tags: python, apache-kafka, web-crawler, kafka-producer-api | Created: 2018-08-16T11:57:00.000 | Q_Id: 51,876,515 | Q_Score: 1 | Views: 1,991 | Answers: 1 | Available: 1
Topics: Web Development, System Administration and DevOps, Networking and APIs

Question: I am trying to use Kafka for data ingestion, but being new to this I am pretty confused. I have multiple crawlers that extract data from the web for me. Now, the issue is: I want to ingest that extracted data into Hadoop using Kafka, without any middle scripts/service files. Is it possible?

Answer (A_Id: 51,887,428, accepted, Users Score: 2, Score: 1.2): "Without any middle scripts/service file. Is it possible?" Unfortunately, no. You need some service that writes into Kafka (your scraper). Whether you produce HTTP links into Kafka (and then write an intermediate consumer/producer that generates the scraped results) or produce only the final scraped results is up to you. You also need a second service consuming those topics that writes to HDFS. This could be Kafka Connect (via Confluent's HDFS Connector library), or PySpark (code you'd have to write yourself), or other options that involve "middle scripts/services". If you'd like to combine both options, I suggest looking at Apache NiFi or StreamSets, which can perform HTTP lookups, (X)HTML parsing, and Kafka+HDFS connectors, all configured via a centralized GUI. Note: I believe any Python code would have to be rewritten in a JVM language to support major custom parsing logic in this pipeline.
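
For the "your scraper writes into Kafka" half, a producer sketch with the kafka-python client; the topic name and record fields are made up:

```python
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers='localhost:9092',
    value_serializer=lambda v: json.dumps(v).encode('utf-8'),
)

# Hypothetical scraped record; each crawler result becomes one message
record = {'url': 'http://example.com', 'title': 'Example', 'body': '...'}
producer.send('scraped-pages', record)
producer.flush()
```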

Title: Python - make a search and retrieve a set amount of images from a search engine
Tags: python-3.x | Created: 2018-08-16T17:04:00.000 | Q_Id: 51,882,099 | Q_Score: 1 | Views: 33 | Answers: 2 | Available: 1
Topics: Web Development, Networking and APIs

Question: I would like to get images from a search engine to run some automated tests, without having to go online and pick them by hand. I found an old example from 5 years ago (ajax.googleapis.com/ajax/services/search/images), which sadly no longer works. What is the current method to do this in Python 3? Ideally I would like to pass a string with the search name and retrieve a set amount of images at full size. I don't really mind which search engine is used; I just want to be sure it will stay supported for the time being. I would also like to avoid Selenium; I plan to run this without any UI or browser, entirely from the terminal.

Answer (A_Id: 51,900,744, not accepted, Users Score: 0, Score: 0): I found a fairly good solution using BeautifulSoup. It does not work on Google, where I get a 403, but by faking the header in the request it is sometimes possible to get data; I will have to experiment with other sites. So far the workflow is: search in the browser to get the URL to pass to BeautifulSoup; once I have the URL in code, replace the query part with a variable so I can pass it programmatically; then parse BeautifulSoup's output to extract the links to the images, and retrieve them with requests. I wish there were a public API that also exposed parameters like picture size, but I found nothing that currently works.
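
A sketch of that workflow; the search URL is a placeholder for one captured from a browser session, and the User-Agent fakes the header as described:

```python
import requests
from bs4 import BeautifulSoup

headers = {'User-Agent': 'Mozilla/5.0'}  # faked browser header
query = 'kittens'
url = f'https://example-images.test/search?q={query}'  # placeholder URL

soup = BeautifulSoup(requests.get(url, headers=headers).text, 'html.parser')
links = [img['src'] for img in soup.find_all('img')
         if img.get('src', '').startswith('http')]

for i, link in enumerate(links[:10]):  # retrieve a set amount of images
    with open(f'image_{i}.jpg', 'wb') as f:
        f.write(requests.get(link, headers=headers).content)
```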

Title: Web2py error: (ImportError('No module named version',), )
Tags: python, oauth-2.0, web2py | Created: 2018-08-16T21:22:00.000 | Q_Id: 51,885,506 | Q_Score: 0 | Views: 132 | Answers: 1 | Available: 1
Topics: Web Development, Networking and APIs

Question: When running the commands in IDLE, I can get an API call to work using oauth2. But when trying to use the same lines of code in web2py, I got an error that the oauth2 module wasn't found. So I installed oauth2 into the modules folder of the web2py project files, which changed the error from "oauth2 module not found" to (ImportError('No module named version',), ). Any ideas for a fix would be appreciated.

Answer (A_Id: 51,901,525, not accepted, Users Score: 0, Score: 0): As an update, I figured out that the issue was that web2py doesn't support Python 3 at this point in time.

Title: find token between two delimiters - discord emotes
Tags: python, python-3.x, discord | Created: 2018-08-17T02:02:00.000 | Q_Id: 51,887,465 | Q_Score: 0 | Views: 76 | Answers: 1 | Available: 1
Topics: Networking and APIs

Question: I am trying to recognise Discord emotes. They always sit between two colons and contain no spaces, e.g. :smile:. I know how to split strings on delimiters, but how do I extract only the tokens that sit between exactly two colons and contain no space? Thanks in advance!

Answer (A_Id: 51,902,676, accepted, Users Score: 0, Score: 1.2): Thanks to @G_M I found the following solution: regex = re.compile(r':[A-Za-z0-9]+:') followed by result = regex.findall(message.content). This gives a list of all the emotes within a message, independent of where they appear in it.
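
Self-contained, that is:

```python
import re

regex = re.compile(r':[A-Za-z0-9]+:')
print(regex.findall('Hello :smile: and :wave:!'))  # [':smile:', ':wave:']
```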

Title: Changing Security Groups for EC2 Instance using Boto3
Tags: python, amazon-web-services, amazon-ec2, boto3 | Created: 2018-08-17T15:26:00.000 | Q_Id: 51,898,575 | Q_Score: 0 | Views: 868 | Answers: 1 | Available: 1
Topics: Web Development, Networking and APIs

Question: I tried finding this in the AWS documentation but couldn't. Is there a way to apply a security group "sg-123" to a running EC2 instance using Boto3? Thanks!

Answer (A_Id: 51,899,113, not accepted, Users Score: 1, Score: 0.197375): OK, found the answer: response = instance.modify_attribute(Groups=['sg-123'])
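
In context (the instance ID and group ID are placeholders; note that Groups replaces the instance's entire security-group list, so include any groups you want to keep):

```python
import boto3

ec2 = boto3.resource('ec2')
instance = ec2.Instance('i-0123456789abcdef0')  # placeholder instance ID
instance.modify_attribute(Groups=['sg-123'])
```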

Title: Google Static Map API - image map with 25+ points
Tags: python, python-3.x, python-imaging-library, google-static-maps | Created: 2018-08-17T19:13:00.000 | Q_Id: 51,901,602 | Q_Score: 0 | Views: 240 | Answers: 1 | Available: 1
Topics: Web Development, Networking and APIs

Question: I am trying to create a map of a track with 30 or so points using the Google Static Maps API. The problem is that it supports only origin + 23 waypoints + end, even with a premium key. So my idea is to split the route into two and then merge the two images. The problem is that since the two parts of the track contain different points, the map canvases won't match, which makes the merge (almost?) impossible. Does anybody have an idea how to solve this? I am helpless. Thank you very much for any ideas. :)

Answer (A_Id: 52,975,017, accepted, Users Score: 0, Score: 1.2): There is no good way to do this, to the best of my knowledge. When solving the problem, we went with multiple maps in the end. Everything is solvable, yet this was not worth the time investment. This is technically not an answer but information for anybody attempting it. If you manage to solve it, please post your solution here and I will mark it.

Title: How to guarantee server identity?
Tags: python, node.js, security, ssl, websocket | Created: 2018-08-18T21:55:00.000 | Q_Id: 51,912,845 | Q_Score: 0 | Views: 67 | Answers: 1 | Available: 1
Topics: Networking and APIs

Question: There are 3 servers on the consumer's side (A <-> B <-> C), 2 of which are our apps (B (Node.js) and C (Python)). All communication among the servers is handled by WebSockets with TLS. C is not connected to A. I don't want a consumer to be able to run a fake B or C server and connect it to the real C or B server. A is a client; B is a server for A and a separate server, on a different port, for C; C is a client. There is also an external license server, D, which is reached via the Internet. So what is the best way to guarantee server identity?

Answer (A_Id: 51,913,061, not accepted, Users Score: 1, Score: 0.197375): If these are programs that your customer runs on machines they own and host, and the customer is the attacker you're trying to protect against, then there really is no way to do this. You can write a simple challenge-response protocol, where B and C only have to know each other's public keys for each to verify that the other knows its own private key. But where are you going to store that private key? Somewhere the program can access it - which means somewhere your customer can access it. Of course you can try to obfuscate the keys. This works great if nobody cares about cracking your system - but then, if nobody cares, you don't have to do anything. If cracking your system would be valuable to someone, they'll find a way to dig out the keys. At that point you're in an arms race: every time someone finds your keys, you have to think of a new way to obfuscate them and start all over. And even that won't work unless you can require your customers to stay up to date on all of their servers; otherwise, once I've dug the keys out of your version 1.3.4, I can just keep running version 1.3.4 of server C with my own fake server B. If you can force the servers to communicate with another server that you run and control out on the internet, you can make the handshake much harder to crack (and make it harder to stick with old versions), but it's still nowhere near impossible - and really, that just shifts the attack surface to modifying the servers so they don't do the upstream check. What you're describing is essentially the equivalent of Windows Activation or online game verification, which, of course, people have cracked. There are still plenty of cases where it's worth doing something: a small amount of obfuscation is often worth it to deter casual hackery; paying for a commercial solution means you're using "industry standard" protection, so you can invoke the DMCA against anyone who attacks it, defend yourself if partners sue you over lost data, and so on; and if $200K/year of engineering can knock even 10% off $200M/year of content losses, the effort pays for itself. But at this point we're talking about business issues, so you'll probably get better advice from a security consultant who can ask the right questions about your business than from a generic answer on a programming site.
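
For reference, the challenge-response idea can be as small as the HMAC-based sketch below (a shared-secret variant rather than the public-key one described above, and with the same caveat: the secret ships with the program, so the customer can extract it):

```python
import hashlib
import hmac
import os

SECRET = b'embedded-secret'  # extractable by anyone who owns the binary

def make_challenge():
    return os.urandom(16)

def respond(challenge, secret=SECRET):
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(challenge, response, secret=SECRET):
    # compare_digest avoids leaking information through timing
    return hmac.compare_digest(respond(challenge, secret), response)

# B sends a challenge; C proves it knows the secret
c = make_challenge()
assert verify(c, respond(c))
```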

Title: How to automate a Telegram bot?
Tags: python-3.x, automation, telegram, telegram-bot, scrape | Created: 2018-08-19T10:53:00.000 | Q_Id: 51,916,924 | Q_Score: 0 | Views: 177 | Answers: 1 | Available: 1
Topics: Other, Networking and APIs

Question: I recently joined a Telegram bot that requires user interaction every few hours. Basically I log into the bot, press a button, check the text, and then press another button. Is it possible to automate such a task? Thank you.

Answer (A_Id: 51,965,732, not accepted, Users Score: 1, Score: 0.197375): The Telegram Bot API doesn't allow bots to interact with other bots, so a bot won't help with this task. The only way is to use the Telegram Core API (the API used by Telegram clients), build a custom Telegram client, and perform the task through it.
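
One way to act as a regular client is an MTProto library such as Telethon (not named in the answer, so treat this as one possible route); api_id and api_hash come from my.telegram.org, and the values here are placeholders:

```python
from telethon.sync import TelegramClient

api_id = 12345            # placeholder
api_hash = '0123abcd...'  # placeholder

with TelegramClient('session', api_id, api_hash) as client:
    client.send_message('TargetBot', '/start')         # poke the bot
    latest = client.get_messages('TargetBot', limit=1)
    print(latest[0].text)                              # read its reply
```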

Title: Regex to Catch Url
Tags: python, regex, python-3.x, http, regex-group | Created: 2018-08-19T16:59:00.000 | Q_Id: 51,919,931 | Q_Score: 1 | Views: 348 | Answers: 2 | Available: 1
Topics: Networking and APIs

Question: I have a URL, http://200.73.81.212/.CREDIT-UNION/update.php, and none of the regular expressions I've found or developed myself work on it. I'm working on a phishing-mail dataset with lots of strange hyperlinks. This is one of my attempts: https?:\/\/([a-zA-z0-9]+.)+)|(www.[a-zA-Z0-9]+.([a-zA-Z0-9]+\.[a-zA-Z0-9]+)+)(((/[\.A-Za-z0-9]+))+/? - of course, no success. I work in Python. EDIT: I need a regex that catches this kind of URL and also ordinary hyperlinks such as https://cnn.com/ and www.foxnews.com/story/122345678. Any thoughts?

Answer (A_Id: 52,027,694, not accepted, Users Score: 0, Score: 0): While @datawrestler's answer works for the original question, I had to extend it to catch a wider group of URLs (I've edited the question). This regex seems to work for the task: r'(https?://www\.[a-zA-Z0-9]+(\.[a-zA-Z0-9]+)+(/[a-zA-Z0-9.@-]+){0,20})|(https?://[a-zA-Z0-9]+(\.[a-zA-Z0-9]+)+(/[a-zA-Z0-9.@-]+){0,20})|(www\.[a-zA-Z0-9]+(\.[a-zA-Z0-9]+)+(/[a-zA-Z0-9.@-]+){0,20})'. Three alternatives: https?://www.domain, https?://domain, and www.domain.
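
A quick self-contained check of that pattern against the URLs from the question:

```python
import re

regex = re.compile(
    r'(https?://www\.[a-zA-Z0-9]+(\.[a-zA-Z0-9]+)+(/[a-zA-Z0-9.@-]+){0,20})|'
    r'(https?://[a-zA-Z0-9]+(\.[a-zA-Z0-9]+)+(/[a-zA-Z0-9.@-]+){0,20})|'
    r'(www\.[a-zA-Z0-9]+(\.[a-zA-Z0-9]+)+(/[a-zA-Z0-9.@-]+){0,20})'
)

tests = [
    'http://200.73.81.212/.CREDIT-UNION/update.php',
    'https://cnn.com/',
    'www.foxnews.com/story/122345678',
]
for t in tests:
    m = regex.search(t)
    print(m.group(0) if m else 'no match')
```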