Explaining Text classifier outcomes using LIME | by Maha Amami | Towards Data Science
In the previous post on leveraging explainability in real-world applications, I gave a brief introduction to XAI (eXplainable AI), the motivation behind it, and the application of explainable models in real-life scenarios.

In this post, I will introduce LIME, one of the most popular local explanation methods, and show how to apply it to detect the terms that make a question on the Quora platform insincere.

The authors of [1] proposed LIME, an algorithm that explains individual predictions of any classifier or regressor in a faithful and intelligible way by approximating them locally with an interpretable model. For instance, suppose an ML model predicts that a patient has the flu from a set of features (sneeze, weight, headache, no fatigue, and age). LIME highlights the symptoms in the patient's history that led to the prediction (the most important features): sneeze and headache are portrayed as contributing to the flu prediction, while no fatigue is evidence against it. With these explanations, a doctor can make an informed decision about whether to trust the model's prediction.

Explaining a prediction means presenting textual or visual artifacts that provide a qualitative understanding of the relationship between the instance's components (e.g., words in text, patches in an image) and the model's prediction [1].

LIME is a local surrogate model: a trained model used to approximate the predictions of the underlying black-box model. Instead of using the original training data, LIME feeds perturbed variations of an instance into the machine learning model and observes what happens to the predictions. In other words, LIME generates a new dataset consisting of perturbed samples and the corresponding predictions of the black-box model. On this new dataset, LIME then trains an interpretable model (e.g., Lasso, a decision tree, ...), which is weighted by the proximity of the sampled instances to the instance of interest.

The dataset of the Quora Insincere Questions Classification task can be downloaded from this link. The training data includes the question that was asked and whether it was identified as insincere. Let us look at two questions of this dataset and the corresponding classes (1 for an insincere question, 0 for a sincere one):

Insincere question: Why does Trump believe everything that Putin tells him? Is he a communist, or plain stupid?

Sincere question: Can the strong correlation between latitude and prosperity be partially explained by another one (if proven to exist) between favourable ambient temperatures and brain enthropy?

The preprocessing step consists of splitting the data into train and validation sets, then vectorizing the questions into tf-idf vectors. The black-box model is a logistic regression model that takes the tf-idf vectors as input.

It is now time to apply the LimeTextExplainer function to generate local explanations for predictions. The function needs as parameters the question to explain (of index 130609), the predicted label of the question generated by the black-box model (the logistic regression), and the number of features used for the explanation.
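The original post's code snippets are not reproduced in this copy; the following is a minimal sketch of the pipeline described above, assuming scikit-learn for the tf-idf vectorizer and logistic regression and the lime package for the explainer (variable names and data loading are illustrative, not the author's code):

```python
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# black-box model: tf-idf vectors fed into a logistic regression
vectorizer = TfidfVectorizer()
classifier = LogisticRegression()
pipeline = make_pipeline(vectorizer, classifier)
pipeline.fit(train_questions, train_labels)  # lists built from the Quora training CSV (loading omitted)

# explain a single validation question (e.g., the one at index 130609)
class_names = ["sincere", "insincere"]
explainer = LimeTextExplainer(class_names=class_names)
explanation = explainer.explain_instance(
    val_questions[idx],          # the question to explain
    pipeline.predict_proba,      # the black-box prediction function
    num_features=10,             # number of features used for the explanation
)
print(explanation.as_list())     # weighted features
explanation.as_pyplot_figure()   # bar plot of the explanation
```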
The result of the above code is the following:

```
Question: When will Quora stop so many utterly stupid questions being asked here, primarily by the unintelligent that insist on walking this earth?
Probability (Insincere) = 0.745825811972627
Probability (Sincere) = 0.254174188027373
True Class is: insincere
```

The classifier got this example right (it predicted insincere). The explanation is presented below as a list of weighted features:

```
[('stupid', 0.3704823331676872), ('earth', 0.11362862926025367), ('Quora', 0.10379246842323496), ('insist', 0.09548389743268501), ('primarily', -0.07151150302754253), ('questions', 0.07000885924524448), ('utterly', 0.040867838409334646), ('asked', -0.036054558321806804), ('unintelligent', 0.017247304068062203), ('walking', -0.004154838656529393)]
```

These weighted features form a linear model, which approximates the behavior of the logistic regression classifier in the vicinity of the test example. Roughly, if we remove 'stupid' and 'earth' from the question, the prediction should move towards the opposite class (Sincere) by about 0.48 (the sum of the weights of the two features). Let's see if this is the case. The result is:

```
Original prediction: 0.745825811972627
Prediction after removing some features: 0.33715161522095155
Difference: -0.40867419675167543
```

As expected, the class is now sincere after removing the words 'earth' and 'stupid' from the instance vocabulary.

The results can be shown in LIME with different types of visualization. Notice that, for each class, the words on the right side of the line are positive and the words on the left side are negative. Thus, 'stupid' is positive for insincere, but negative for sincere. You can also get a bar plot of the explanations (for example, via the explanation's as_pyplot_figure() call).

LIME is able to explain the predictions of any type of classifier (SVM, neural nets, ...) locally. In this post, I applied it to the Quora questions dataset to explain what makes a question insincere on Quora, but it can also be applied to image and structured-data classifiers. You can access more code and examples by following this link.

If you have a question, feel free to comment below or ask it via email or LinkedIn, and I'll answer it. The entire code is posted on my GitHub profile at this link.

I will continue to post about XAI and other fun topics. Stay tuned!!

[1] Ribeiro, M. T., Singh, S., & Guestrin, C. (2016, August). "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135–1144).

LIME code: https://github.com/marcotcr/lime
[ { "code": null, "e": 404, "s": 171, "text": "In the previous post on leveraging explainability in real-world applications, I gave a brief introduction to XAI (eXplainability in AI), the motivation behind it, and the application of explainable models in the real-life scenarios." }, { "code": null, "e": 595, "s": 404, "text": "In this post, I will provide an introduction to LIME one of the most famous local explainable models and how to apply it to detect terms that make a question in the Quora platform insincere." }, { "code": null, "e": 809, "s": 595, "text": "The authors in [1] proposed LIME that is an algorithm explaining individual predictions of any classifier or regressor in a faithful and intelligible way, by approximating them locally with an interpretable model." }, { "code": null, "e": 1285, "s": 809, "text": "For instance, an ML model predicts that a patient has the flu using a set of features (sneeze, weight, headache, no fatigue, and age), and LIME highlights the symptoms in the patient’s history that led to the prediction (the most important features). Sneeze and headache are portrayed as contributing to the flu prediction, while no fatigue is evidence against it. With these explanations, a doctor can make an informed decision about whether to trust the model’s prediction." }, { "code": null, "e": 1518, "s": 1285, "text": "Explaining a prediction is presenting textual or visual artifacts that provide qualitative understanding of the relationship between the instance’s components (e.g. words in text, patches in an image) and the model’s prediction [1]." }, { "code": null, "e": 1890, "s": 1518, "text": "LIME is a local surrogate model, which means that it is a trained model used to approximate the predictions of the underlying black-box model. But, it comes with the idea to generate variations of the data into the machine learning model and tests what happens to the predictions, using this perturbated data as a training set instead of using the original training data." }, { "code": null, "e": 2209, "s": 1890, "text": "In other words, LIME generates a new dataset consisting of permuted samples and the corresponding predictions of the black-box model. On this new dataset, LIME then trains an interpretable model (e.g., Lasso, decision tree, ...), which is weighted by the proximity of the sampled instances to the instance of interest." }, { "code": null, "e": 2410, "s": 2209, "text": "The dataset of the Quora Insincere Questions Classification task could be downloaded from this link. The training data includes the question that was asked, and whether it was identified as insincere." }, { "code": null, "e": 2528, "s": 2410, "text": "Let us look at two questions of this dataset and the corresponding classes (1 for insincere, 0 for sincere question):" }, { "code": null, "e": 2640, "s": 2528, "text": "Insincere question: Why does Trump believe everything that Putin tells him? Is he a communist, or plain stupid?" }, { "code": null, "e": 2836, "s": 2640, "text": "Sincere question: Can the strong correlation between latitude and prosperity be partially explained by another one (if proven to exist) between favourable ambient temperatures and brain enthropy?" }, { "code": null, "e": 2973, "s": 2836, "text": "The preprocessing step consists of splitting the data to train and validation sets, then to vectorizing the questions to tf-idf vectors." }, { "code": null, "e": 3060, "s": 2973, "text": "The black box model is a logistic regression model having as input the tf-idf vectors." 
}, { "code": null, "e": 3382, "s": 3060, "text": "It is time now to apply LimeTextExplainer function to generate local explanations for predictions. The function needs as parameters the question to explain (of index 130609), the predicted label of the question generated from the black box model (the logistic regression), and the number of features used for explanation." }, { "code": null, "e": 3429, "s": 3382, "text": "The result of the above code is the following:" }, { "code": null, "e": 3685, "s": 3429, "text": "Question: When will Quora stop so many utterly stupid questions being asked here, primarily by the unintelligent that insist on walking this earth?Probability (Insincere) = 0.745825811972627Probability (Sincere) = 0.254174188027373True Class is: insincere" }, { "code": null, "e": 3847, "s": 3685, "text": "The classifier got this example right (it predicted insincere).The explanation is presented below as a list of weighted features using the following instruction:" }, { "code": null, "e": 3862, "s": 3847, "text": "The result is:" }, { "code": null, "e": 4211, "s": 3862, "text": "[('stupid', 0.3704823331676872), ('earth', 0.11362862926025367), ('Quora', 0.10379246842323496), ('insist', 0.09548389743268501), ('primarily', -0.07151150302754253), ('questions', 0.07000885924524448), ('utterly', 0.040867838409334646), ('asked', -0.036054558321806804), ('unintelligent', 0.017247304068062203), ('walking', -0.004154838656529393)]" }, { "code": null, "e": 4577, "s": 4211, "text": "These weighted features are a linear model, which approximates the behavior of the logistic regression classifier in the vicinity of the test example. Roughly, if we remove ‘stupid’ and ‘earth’ from the question, the prediction should move towards the opposite class (Sincere) by about 0.48 (the sum of the weights for both features). Let’s see if this is the case." }, { "code": null, "e": 4592, "s": 4577, "text": "The result is:" }, { "code": null, "e": 4723, "s": 4592, "text": "Original prediction: 0.745825811972627Prediction after removing some features: 0.33715161522095155Difference: -0.40867419675167543" }, { "code": null, "e": 4839, "s": 4723, "text": "As expected, the class is now sincere after removing the words, ‘earth’, and ‘stupid’ from the instance vocabulary." }, { "code": null, "e": 4909, "s": 4839, "text": "The results can be shown in LIME in different types of visualization." }, { "code": null, "e": 5104, "s": 4909, "text": "Notice that for each class, the words on the right side on the line are positive, and the words on the left side are negative. Thus, ‘stupid’ is positive for insincere, but negative for sincere." }, { "code": null, "e": 5174, "s": 5104, "text": "You can also get a bar plot of the explanations using the code below:" }, { "code": null, "e": 5522, "s": 5174, "text": "LIME is able to explain the predictions of any type of classifier (SVM, neural nets, ...) locally. In this post, I applied it to the Quora questions dataset to explain what makes a question insincere in Quora, but it can also be integrated into images and structured data classifiers. You can access to more codes and examples following this link." }, { "code": null, "e": 5626, "s": 5522, "text": "If you have a question, feel free to comment below or ask it via email or Linkedin. And I’ll answer it." }, { "code": null, "e": 5687, "s": 5626, "text": "The entire code is posted in my GITHUB profile in this link." }, { "code": null, "e": 5758, "s": 5687, "text": "I will continue to post about XAI and other funny topics. 
Stay tuned!!" }, { "code": null, "e": 6012, "s": 5758, "text": "[1] Ribeiro, M. T., Singh, S., & Guestrin, C. (2016, August). “ Why should I trust you?” Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining (pp. 1135–1144)." } ]
CONCAT() function in MySQL - GeeksforGeeks
07 Oct, 2020

The CONCAT() function in MySQL is used to concatenate the given arguments. It may have one or more arguments. If all arguments are nonbinary strings, the result is a nonbinary string. If the arguments include any binary strings, the result is a binary string. If a numeric argument is given, it is converted to its equivalent nonbinary string form.

Syntax :

```sql
CONCAT(str1, str2, ...)
```

Parameters : This function accepts N arguments. str1, str2, str3, ... : the input strings which we want to concatenate.

Returns : It returns a new string after concatenating all input strings. If any of the input strings is NULL, it returns NULL.

Example-1 : Concatenating 3 strings using the CONCAT function.

```sql
SELECT CONCAT('geeks', 'for', 'geeks') AS ConcatenatedString ;
```

Output : 'geeksforgeeks'

Example-2 : Concatenating numeric values using the CONCAT function.

```sql
SELECT CONCAT(19, 10, 5.60) AS ConcatenatedNumber ;
```

Output :

Example-3 : Concatenating strings which include a NULL string using the CONCAT function.

```sql
SELECT CONCAT('geeks', 'for', 'geeks', NULL) AS ConcatenatedString ;
```

Output : NULL (since one of the arguments is NULL)

Example-4 : In this example we are going to concatenate strings between columns of a table. To demonstrate, create a table named Student.

```sql
CREATE TABLE Student(
  StudentId INT AUTO_INCREMENT,
  FirstName VARCHAR(100) NOT NULL,
  LastName VARCHAR(100) NOT NULL,
  Class VARCHAR(20) NOT NULL,
  City VARCHAR(20) NOT NULL,
  State VARCHAR(20) NOT NULL,
  PinNo INT NOT NULL,
  PRIMARY KEY(StudentId)
);
```

Now insert some data into the Student table :

```sql
INSERT INTO Student(FirstName, LastName, Class, City, State, PinNo)
VALUES
('Sayantan', 'Maity', 'X', 'Kolkata', 'WestBengal', 700001),
('Nitin', 'Shah', 'XI', 'Jalpaiguri', 'WestBengal', 735102),
('Aniket', 'Sharma', 'XI', 'Midnapore', 'WestBengal', 721211),
('Abdur', 'Ali', 'X', 'Malda', 'WestBengal', 732101),
('Sanjoy', 'Sharama', 'X', 'Kolkata', 'WestBengal', 700004);
```

So, the Student table is :

```sql
SELECT * FROM Student;
```

Now, we will concatenate FirstName and LastName to get FullName, and City, State and PinNo to get Address, using the CONCAT function.

```sql
SELECT
  StudentId, FirstName, LastName,
  CONCAT(FirstName, ' ', LastName) AS FullName,
  CONCAT(City, ' ', State, ' ', PinNo) AS Address
FROM Student;
```

Output :
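As noted above, a single NULL argument makes the entire CONCAT() result NULL. A small supplementary example (not part of the original article) showing how IFNULL() can be used to fall back to an empty string in that case:

```sql
-- Wrap the nullable value so the concatenation still succeeds
SELECT CONCAT('geeks', 'for', 'geeks', IFNULL(NULL, '')) AS ConcatenatedString ;
-- Returns 'geeksforgeeks' instead of NULL
```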
[ { "code": null, "e": 23901, "s": 23873, "text": "\n07 Oct, 2020" }, { "code": null, "e": 24252, "s": 23901, "text": "CONCAT() function in MySQL is used to concatenating the given arguments. It may have one or more arguments. If all arguments are nonbinary strings, the result is a nonbinary string. If the arguments include any binary strings, the result is a binary string. If a numeric argument is given then it is converted to its equivalent nonbinary string form." }, { "code": null, "e": 24261, "s": 24252, "text": "Syntax :" }, { "code": null, "e": 24286, "s": 24261, "text": "CONCAT(str1, str2, ...)\n" }, { "code": null, "e": 24332, "s": 24286, "text": "Parameter : This method accepts N argument. " }, { "code": null, "e": 24400, "s": 24332, "text": "str1, str2.str3.... : The input sting which we want to concatenate." }, { "code": null, "e": 24529, "s": 24400, "text": "Returns : It returns a new string after concatenating all input string. If any of the input string is NULL then it returns NULL." }, { "code": null, "e": 24586, "s": 24529, "text": "Example-1 :Concatenating 3 string using CONCAT Function." }, { "code": null, "e": 24650, "s": 24586, "text": "SELECT CONCAT('geeks', 'for', 'geeks') AS ConcatenatedString ;\n" }, { "code": null, "e": 24659, "s": 24650, "text": "Output :" }, { "code": null, "e": 24722, "s": 24659, "text": "Example-2 :Concatenating numeric string using CONCAT Function." }, { "code": null, "e": 24775, "s": 24722, "text": "SELECT CONCAT(19, 10, 5.60) AS ConcatenatedNumber ;\n" }, { "code": null, "e": 24784, "s": 24775, "text": "Output :" }, { "code": null, "e": 24868, "s": 24784, "text": "Example-3 :Concatenating string which includes a NULL String using CONCAT Function." }, { "code": null, "e": 24938, "s": 24868, "text": "SELECT CONCAT('geeks', 'for', 'geeks', NULL) AS ConcatenatedString ;\n" }, { "code": null, "e": 24947, "s": 24938, "text": "Output :" }, { "code": null, "e": 25081, "s": 24947, "text": "Example-4 :In this example we are going to concatenate string between column of a table. To demonstrate create a table named Student." }, { "code": null, "e": 25334, "s": 25081, "text": "CREATE TABLE Student(\n\nStudentId INT AUTO_INCREMENT, \nFirstName VARCHAR(100) NOT NULL,\nLastName VARCHAR(100) NOT NULL,\nClass VARCHAR(20) NOT NULL,\nCity VARCHAR(20) NOT NULL,\nState VARCHAR(20) NOT NULL,\nPinNo INT NOT NULL,\nPRIMARY KEY(StudentId )\n\n);\n" }, { "code": null, "e": 25381, "s": 25334, "text": "Now inserting some data to the Student table :" }, { "code": null, "e": 25766, "s": 25381, "text": "INSERT INTO \nStudent(FirstName, LastName, Class, City, State, PinNo )\nVALUES\n('Sayantan', 'Maity', 'X', 'Kolkata', 'WestBengal', 700001 ),\n('Nitin', 'Shah', 'XI', 'Jalpaiguri', 'WestBengal', 735102 ),\n('Aniket', 'Sharma', 'XI', 'Midnapore', 'WestBengal', 721211 ),\n('Abdur', 'Ali', 'X', 'Malda', 'WestBengal', 732101 ),\n('Sanjoy', 'Sharama', 'X', 'Kolkata', 'WestBengal', 700004 ) ;\n" }, { "code": null, "e": 25793, "s": 25766, "text": "So, the Student table is :" }, { "code": null, "e": 25818, "s": 25793, "text": "Select * From Student ;\n" }, { "code": null, "e": 25946, "s": 25818, "text": "Now, we will concatenate FirstName and LastName to get FullName and City, State and PinNo to get Address using CONCAT Function." 
}, { "code": null, "e": 26117, "s": 25946, "text": "Select \n StudentId, FirstName, LastName, \n CONCAT(FirstName, ' ', LastName) AS FullName,\n CONCAT(City, ' ', State, ' ', PinNO) AS Address\n\n FROM Student; \n" }, { "code": null, "e": 26126, "s": 26117, "text": "Output :" }, { "code": null, "e": 26135, "s": 26126, "text": "DBMS-SQL" }, { "code": null, "e": 26141, "s": 26135, "text": "mysql" }, { "code": null, "e": 26145, "s": 26141, "text": "SQL" }, { "code": null, "e": 26149, "s": 26145, "text": "SQL" }, { "code": null, "e": 26247, "s": 26149, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 26256, "s": 26247, "text": "Comments" }, { "code": null, "e": 26269, "s": 26256, "text": "Old Comments" }, { "code": null, "e": 26335, "s": 26269, "text": "How to Update Multiple Columns in Single Update Statement in SQL?" }, { "code": null, "e": 26367, "s": 26335, "text": "What is Temporary Table in SQL?" }, { "code": null, "e": 26445, "s": 26367, "text": "SQL Query to Find the Name of a Person Whose Name Starts with Specific Letter" }, { "code": null, "e": 26462, "s": 26445, "text": "SQL using Python" }, { "code": null, "e": 26477, "s": 26462, "text": "SQL | Subquery" }, { "code": null, "e": 26543, "s": 26477, "text": "How to Write a SQL Query For a Specific Date Range and Date Time?" }, { "code": null, "e": 26579, "s": 26543, "text": "SQL Query to Convert VARCHAR to INT" }, { "code": null, "e": 26614, "s": 26579, "text": "SQL Query to Delete Duplicate Rows" }, { "code": null, "e": 26645, "s": 26614, "text": "SQL Query to Compare Two Dates" } ]
Common string operations in Python
The string module in Python's standard library provides the following useful constants, classes, and a helper function called capwords().

```python
>>> import string
>>> string.ascii_letters
'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'
>>> string.ascii_lowercase
'abcdefghijklmnopqrstuvwxyz'
>>> string.ascii_uppercase
'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
>>> string.digits
'0123456789'
>>> string.hexdigits
'0123456789abcdefABCDEF'
>>> string.octdigits
'01234567'
>>> string.printable
'0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~ \t\n\r\x0b\x0c'
>>> string.punctuation
'!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~'
>>> string.whitespace
' \t\n\r\x0b\x0c'
```

The capwords() function performs the following:

- Splits the given string argument into words using str.split().
- Capitalizes each word using str.capitalize().
- Joins the capitalized words using str.join().

```python
>>> text='All animals are equal. Some are more equal'
>>> string.capwords(text)
'All Animals Are Equal. Some Are More Equal'
```

Python's built-in str class has a format() method with which strings can be formatted. A Formatter object behaves similarly. This may be useful for writing a customized formatter class by subclassing the Formatter class.

```python
>>> from string import Formatter
>>> f=Formatter()
>>> f.format('name:{name}, age:{age}, marks:{marks}', name='Rahul', age=30, marks=50)
'name:Rahul, age:30, marks:50'
```

The Template class is used to create a string template. It proves useful for simpler string substitutions.

```python
>>> from string import Template
>>> text='My name is $name. I am $age years old'
>>> t=Template(text)
>>> t.substitute(name='Rahul', age=30)
'My name is Rahul. I am 30 years old'
```
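As a small illustration of the subclassing idea mentioned above (my own sketch, not from the original article), here is a hypothetical Formatter subclass that upper-cases every substituted value:

```python
from string import Formatter

class UpperFormatter(Formatter):
    # override format_field to post-process every substituted value
    def format_field(self, value, format_spec):
        result = super().format_field(value, format_spec)
        return result.upper()

f = UpperFormatter()
print(f.format('name:{name}, age:{age}', name='Rahul', age=30))
# name:RAHUL, age:30
```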
[ { "code": null, "e": 1194, "s": 1062, "text": "The string module in Python’s standard library provides following useful constants, classes and a helper function called capwords()" }, { "code": null, "e": 1748, "s": 1194, "text": ">>> import string\n>>> string.ascii_letters\n'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'\n>>> string.ascii_lowercase\n'abcdefghijklmnopqrstuvwxyz'\n>>> string.ascii_uppercase\n'ABCDEFGHIJKLMNOPQRSTUVWXYZ'\n>>> string.digits\n'0123456789'\n>>> string.hexdigits\n'0123456789abcdefABCDEF'\n>>> string.octdigits\n'01234567'\n>>> string.printable\n'0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!\"#$%&\\'()*+,-./:;<=>?@[\\\\]^_`{|}~ \\t\\n\\r\\x0b\\x0c'\n>>> string.punctuation\n'!\"#$%&\\'()*+,-./:;<=>?@[\\\\]^_`{|}~'\n>>> string.whitespace\n' \\t\\n\\r\\x0b\\x0c'" }, { "code": null, "e": 1783, "s": 1748, "text": "This function performs following −" }, { "code": null, "e": 1846, "s": 1783, "text": "Splits the given string argument into words using str.split()." }, { "code": null, "e": 1909, "s": 1846, "text": "Splits the given string argument into words using str.split()." }, { "code": null, "e": 1954, "s": 1909, "text": "Capitalizes each word using str.capitalize()" }, { "code": null, "e": 1999, "s": 1954, "text": "Capitalizes each word using str.capitalize()" }, { "code": null, "e": 2049, "s": 1999, "text": "and joins the capitalized words using str.join()." }, { "code": null, "e": 2099, "s": 2049, "text": "and joins the capitalized words using str.join()." }, { "code": null, "e": 2224, "s": 2099, "text": ">>> text='All animals are equal. Some are more equal'\n>>> string.capwords(text)\n'All Animals Are Equal. Some Are More Equal'" }, { "code": null, "e": 2437, "s": 2224, "text": "Python’s built-in str class has format() method using which string can be formatted. Formatter object behaves similarly. This may be useful to write customized formatter class by subclassing this Formatter class." }, { "code": null, "e": 2605, "s": 2437, "text": ">>> from string import Formatter\n>>> f=Formatter()\n>>> f.format('name:{name}, age:{age}, marks:{marks}', name='Rahul', age=30, marks=50)\n'name:Rahul, age:30, marks:50'" }, { "code": null, "e": 2704, "s": 2605, "text": "This class is used to create a string template. It proves useful for simpler string substitutions." }, { "code": null, "e": 2883, "s": 2704, "text": ">>> from string import Template\n>>> text='My name is $name. I am $age years old'\n>>> t=Template(text)\n>>> t.substitute(name='Rahul', age=30)\n'My name is Rahul. I am 30 years old'" } ]
What is the MySQL SELECT INTO Equivalent?
The SELECT INTO equivalent is the CREATE TABLE ... AS SELECT statement. The syntax is as follows −

```sql
CREATE TABLE yourNewTableName AS SELECT * FROM yourTableName;
```

To understand the above concept, let us create a table. The query to create a table is as follows −

```sql
mysql> create table selectIntoEquivalentDemo
   -> (
   -> ClientId int NOT NULL AUTO_INCREMENT PRIMARY KEY,
   -> ClientName varchar(20),
   -> ClientAge int
   -> );
Query OK, 0 rows affected (0.71 sec)
```

Insert some records in the table using the insert command. The query is as follows −

```sql
mysql> insert into selectIntoEquivalentDemo(ClientName,ClientAge) values('Larry',34);
Query OK, 1 row affected (0.13 sec)
mysql> insert into selectIntoEquivalentDemo(ClientName,ClientAge) values('Maxwell',44);
Query OK, 1 row affected (0.06 sec)
mysql> insert into selectIntoEquivalentDemo(ClientName,ClientAge) values('Bob',38);
Query OK, 1 row affected (0.07 sec)
mysql> insert into selectIntoEquivalentDemo(ClientName,ClientAge) values('David',39);
Query OK, 1 row affected (0.09 sec)
```

Display all records from the table using a select statement. The query is as follows −

```sql
mysql> select * from selectIntoEquivalentDemo;
```

Here is the output −

```
+----------+------------+-----------+
| ClientId | ClientName | ClientAge |
+----------+------------+-----------+
|        1 | Larry      |        34 |
|        2 | Maxwell    |        44 |
|        3 | Bob        |        38 |
|        4 | David      |        39 |
+----------+------------+-----------+
4 rows in set (0.00 sec)
```

The following is the SELECT INTO equivalent query in MySQL −

```sql
mysql> create table Client_information AS select * from selectIntoEquivalentDemo;
Query OK, 4 rows affected (0.57 sec)
Records: 4  Duplicates: 0  Warnings: 0
```

Now let us check the records of the new table. The query is as follows −

```sql
mysql> select * from Client_information;
```

Here is the output −

```
+----------+------------+-----------+
| ClientId | ClientName | ClientAge |
+----------+------------+-----------+
|        1 | Larry      |        34 |
|        2 | Maxwell    |        44 |
|        3 | Bob        |        38 |
|        4 | David      |        39 |
+----------+------------+-----------+
4 rows in set (0.00 sec)
```
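The same statement also accepts a column list and a WHERE clause, so only part of the source table is copied. A small supplementary sketch (not from the original article):

```sql
-- Copy only the clients older than 38 into a new table
CREATE TABLE SeniorClients AS
SELECT ClientId, ClientName, ClientAge
FROM selectIntoEquivalentDemo
WHERE ClientAge > 38;
```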
[ { "code": null, "e": 1153, "s": 1062, "text": "The SELECT INTO equivalent is CREATE TABLE AS SELECT statement. The syntax is as follows −" }, { "code": null, "e": 1214, "s": 1153, "text": "CREATE TABLE yourNewTableName AS SELECT *FROM yourTableName;" }, { "code": null, "e": 1314, "s": 1214, "text": "To understand the above concept, let us create a table. The query to create a table is as follows −" }, { "code": null, "e": 1519, "s": 1314, "text": "mysql> create table selectIntoEquivalentDemo\n -> (\n -> ClientId int NOT NULL AUTO_INCREMENT PRIMARY KEY,\n -> ClientName varchar(20),\n -> ClientAge int\n -> );\nQuery OK, 0 rows affected (0.71 sec)" }, { "code": null, "e": 1600, "s": 1519, "text": "Insert some records in the table using insert command. The query is as follows −" }, { "code": null, "e": 2088, "s": 1600, "text": "mysql> insert into selectIntoEquivalentDemo(ClientName,ClientAge) values('Larry',34);\nQuery OK, 1 row affected (0.13 sec)\nmysql> insert into selectIntoEquivalentDemo(ClientName,ClientAge) values('Maxwell',44);\nQuery OK, 1 row affected (0.06 sec)\nmysql> insert into selectIntoEquivalentDemo(ClientName,ClientAge) values('Bob',38);\nQuery OK, 1 row affected (0.07 sec)\nmysql> insert into selectIntoEquivalentDemo(ClientName,ClientAge) values('David',39);\nQuery OK, 1 row affected (0.09 sec)" }, { "code": null, "e": 2173, "s": 2088, "text": "Display all records from the table using select statement. The query is as follows −" }, { "code": null, "e": 2218, "s": 2173, "text": "mysql> select *from selectIntoEquivalentDemo" }, { "code": null, "e": 2239, "s": 2218, "text": "Here is the output −" }, { "code": null, "e": 2568, "s": 2239, "text": "+----------+------------+-----------+\n| ClientId | ClientName | ClientAge |\n+----------+------------+-----------+\n| 1 | Larry | 34 |\n| 2 | Maxwell | 44 |\n| 3 | Bob | 38 |\n| 4 | David | 39 |\n+----------+------------+-----------+\n4 rows in set (0.00 sec)" }, { "code": null, "e": 2632, "s": 2568, "text": "The following is the query of SELECT INTO equivalent in MySQL −" }, { "code": null, "e": 2787, "s": 2632, "text": "mysql> create table Client_information AS select *from selectIntoEquivalentDemo;\nQuery OK, 4 rows affected (0.57 sec)\nRecords: 4 Duplicates: 0 Warnings: 0" }, { "code": null, "e": 2868, "s": 2787, "text": "Now let us check the table records from the new table. The query is as follows −" }, { "code": null, "e": 2908, "s": 2868, "text": "mysql> select *from Client_information;" }, { "code": null, "e": 2929, "s": 2908, "text": "Here is the output −" }, { "code": null, "e": 3258, "s": 2929, "text": "+----------+------------+-----------+\n| ClientId | ClientName | ClientAge |\n+----------+------------+-----------+\n| 1 | Larry | 34 |\n| 2 | Maxwell | 44 |\n| 3 | Bob | 38 |\n| 4 | David | 39 |\n+----------+------------+-----------+\n4 rows in set (0.00 sec)" } ]
HTML - <video> Tag
The HTML <video> tag is used to embed video into your web page; it can take several video sources.

```html
<!DOCTYPE html>
<html>

   <head>
      <title>HTML video Tag</title>
   </head>

   <body>
      <p>Run your first program using an Online Compiler (compileonline.com)</p>
      <br />

      <video width = "500" height = "300" controls>
         <source src = "/html/compileonline.mp4" type = "video/mp4">
         This browser doesn't support video tag.
      </video>
   </body>

</html>
```

This will produce the following result −

Run your first program using an Online Compiler (compileonline.com)

This tag supports all the global attributes described in the HTML Attribute Reference.

The HTML <video> tag also supports additional attributes such as src, poster, autoplay, controls, loop, muted, preload, width and height.

This tag supports all the event attributes described in the HTML Events Reference.
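As an illustrative sketch of some of those attributes (the file paths are placeholders, not from the original page):

```html
<!-- A muted, looping video that starts automatically and shows a poster image while loading -->
<video width = "500" height = "300" controls autoplay muted loop
       poster = "/html/poster.jpg" preload = "auto">
   <source src = "/html/sample.mp4" type = "video/mp4">
   This browser doesn't support video tag.
</video>
```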
[ { "code": null, "e": 2468, "s": 2374, "text": "The HTML <video> tag is used to embed video into your web page, it has several video sources." }, { "code": null, "e": 2866, "s": 2468, "text": "<!DOCTYPE html>\n<html>\n\n <head>\n <title>HTML video Tag</title>\n </head>\n\n <body>\n <p>Run your first program using an Online Compiler (compileonline.com)</p>\n <br />\n \n <video width = \"500\" height = \"300\" controls>\n <source src = \"/html/compileonline.mp4\" type = \"video/mp4\">\n This browser doesn't support video tag.\n </video>\n </body>\n\n</html>" }, { "code": null, "e": 2907, "s": 2866, "text": "This will produce the following result −" }, { "code": null, "e": 2975, "s": 2907, "text": "Run your first program using an Online Compiler (compileonline.com)" }, { "code": null, "e": 3059, "s": 2975, "text": "This tag supports all the global attributes described in − HTML Attribute Reference" }, { "code": null, "e": 3132, "s": 3059, "text": "The HTML <video> tag also supports the following additional attributes −" }, { "code": null, "e": 3212, "s": 3132, "text": "This tag supports all the event attributes described in − HTML Events Reference" }, { "code": null, "e": 3245, "s": 3212, "text": "\n 19 Lectures \n 2 hours \n" }, { "code": null, "e": 3259, "s": 3245, "text": " Anadi Sharma" }, { "code": null, "e": 3294, "s": 3259, "text": "\n 16 Lectures \n 1.5 hours \n" }, { "code": null, "e": 3308, "s": 3294, "text": " Anadi Sharma" }, { "code": null, "e": 3343, "s": 3308, "text": "\n 18 Lectures \n 1.5 hours \n" }, { "code": null, "e": 3360, "s": 3343, "text": " Frahaan Hussain" }, { "code": null, "e": 3395, "s": 3360, "text": "\n 57 Lectures \n 5.5 hours \n" }, { "code": null, "e": 3426, "s": 3395, "text": " DigiFisk (Programming Is Fun)" }, { "code": null, "e": 3459, "s": 3426, "text": "\n 54 Lectures \n 6 hours \n" }, { "code": null, "e": 3490, "s": 3459, "text": " DigiFisk (Programming Is Fun)" }, { "code": null, "e": 3525, "s": 3490, "text": "\n 45 Lectures \n 5.5 hours \n" }, { "code": null, "e": 3556, "s": 3525, "text": " DigiFisk (Programming Is Fun)" }, { "code": null, "e": 3563, "s": 3556, "text": " Print" }, { "code": null, "e": 3574, "s": 3563, "text": " Add Notes" } ]
ReactJS – Cleaning up with useEffect hook
In this article, we are going to see how to clean up the subscriptions set up in the useEffect hook in a functional component.

Once effects are created, they need to be cleaned up before the component gets removed from the DOM. For this, a cleanup function is returned from the effect, so that the previous effect of the same component is removed before the hook runs again.

```jsx
useEffect(()=>{
   return ()=>{}
}
,[]);
```

In this example, we will build a React application which displays the coordinates of the mouse pointer when it is moved over the screen. To illustrate the difference, we will write the code both without and with the cleanup function.

Example

App.jsx

```jsx
import React, { useEffect, useState } from 'react';

function App() {
   return (
      <div className="App">
         <Comp />
      </div>
   );
}

function Comp() {

   useEffect(() => {
      document.addEventListener('mousemove', mouseHandler);
   }, []);

   const mouseHandler = (e) => {
      console.log(e.clientX, e.clientY);
   };

   return (
      <div>
         <h1>Tutorialspoint</h1>
      </div>
   );
}
export default App;
```

In the above example, we are not removing the previous useEffect hook's listener, which keeps affecting the data returned by this hook.

This will produce the following result.

Example

App.jsx

```jsx
import React, { useEffect, useState } from 'react';

function App() {
   return (
      <div className="App">
         <Comp />
      </div>
   );
}

function Comp() {

   useEffect(() => {
      document.addEventListener('mousemove', mouseHandler);
      return () => {
         document.removeEventListener('mousemove', mouseHandler);
      };
   }, []);

   const mouseHandler = (e) => {
      console.log(e.clientX, e.clientY);
   };
   return (
      <div>
         <h1>Tutorialspoint</h1>
      </div>
   );
}
export default App;
```

In the above example, the useEffect hook is called with a cleanup function, and thus the effect of this hook gets removed every time the component is destroyed.

This will produce the following result.
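The same pattern applies to any kind of subscription. As a small supplementary sketch (not part of the original article), here is how a timer set up in useEffect would be cleaned up:

```jsx
import React, { useEffect, useState } from 'react';

function Clock() {
   const [now, setNow] = useState(new Date());

   useEffect(() => {
      // start the subscription (a timer in this case)
      const id = setInterval(() => setNow(new Date()), 1000);
      // clean it up when the component unmounts
      return () => clearInterval(id);
   }, []);

   return <h1>{now.toLocaleTimeString()}</h1>;
}
export default Clock;
```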
[ { "code": null, "e": 1191, "s": 1062, "text": "In this article, we are going to see how to clean up the subscriptions set up in the useEffect hook in the functional component." }, { "code": null, "e": 1443, "s": 1191, "text": "Once the effects are created, then they are needed to be cleaned up before the component gets removed from the DOM. For this, cleaning up effect is used to remove the previous useEffect hook’s effect before using this hook of the same component again." }, { "code": null, "e": 1484, "s": 1443, "text": "useEffect(()=>{\n return ()=>{}\n}\n,[]);" }, { "code": null, "e": 1709, "s": 1484, "text": "In this example, we will build a React application which displays the coordinates of the mouse pointer when it is moved over the screen. For this to implement, we will write the code with both cleanup effects and without it." }, { "code": null, "e": 1717, "s": 1709, "text": "Example" }, { "code": null, "e": 1725, "s": 1717, "text": "App.jsx" }, { "code": null, "e": 2163, "s": 1725, "text": "import React, { useEffect, useState } from 'react';\n\nfunction App() {\n return (\n <div className=\"App\">\n <Comp />\n </div>\n );\n}\n\nfunction Comp() {\n\n useEffect(() => {\n document.addEventListener('mousemove', mouseHandler);\n }, []);\n\n const mouseHandler = (e) => {\n console.log(e.clientX, e.clientY);\n };\n\n return (\n <div>\n <h1>Tutorialspoint</h1>\n </div>\n );\n}\nexport default App;" }, { "code": null, "e": 2296, "s": 2163, "text": "In the above example, we are not removing the previous useEffect hook’s data, which is affecting the new data returned by this hook." }, { "code": null, "e": 2336, "s": 2296, "text": "This will produce the following result." }, { "code": null, "e": 2344, "s": 2336, "text": "Example" }, { "code": null, "e": 2352, "s": 2344, "text": "App.jsx" }, { "code": null, "e": 2888, "s": 2352, "text": "import React, { useEffect, useState } from 'react';\n\nfunction App() {\n return (\n <div className=\"App\">\n <Comp />\n </div>\n );\n}\n\nfunction Comp() {\n\n useEffect(() => {\n document.addEventListener('mousemove', mouseHandler);\n return () => {\n document.removeEventListener('mousemove', mouseHandler);\n };\n }, []);\n\n const mouseHandler = (e) => {\n console.log(e.clientX, e.clientY);\n };\n return (\n <div>\n <h1>Tutorialspoint</h1>\n </div>\n );\n}\nexport default App;" }, { "code": null, "e": 3051, "s": 2888, "text": "In the above example, useEffect hook is called with the cleanup effect and thus, the effect of this hook will get removed every time the component gets destroyed." }, { "code": null, "e": 3091, "s": 3051, "text": "This will produce the following result." } ]
Testing Glue Pyspark jobs. How to configure your Glue PySpark job... | by Vincent Claes | Towards Data Science
A typical use case for a Glue job is:

- you read data from S3;
- you do some transformations on that data;
- you dump the transformed data back to S3.

When writing a PySpark job, you write your code and tests in Python and you use the PySpark library to execute your code on a Spark cluster. But how do I let both Python and Spark communicate with the same mocked S3 bucket?

In this article, I'll show you how you can set up a mocked S3 bucket that you can access from your Python process as well as from the Spark cluster.

We are using Glue 1.0, which means Python 3.6.8, Spark/PySpark 2.4.3 and Hadoop 2.8.5. Make sure:

- you have Python 3.6.8 installed;
- you have Java JDK 8 installed;
- you have Spark 2.4.3 for Hadoop 2.7 installed.

Note: Glue uses Hadoop 2.8.5, but for simplicity we use Hadoop 2.7 because it's shipped with Spark 2.4.3.

```
pipenv --python 3.6
pipenv install moto[server]
pipenv install boto3
pipenv install pyspark==2.4.3
```

If you have followed the above steps, you should be able to successfully run the following script [1][2][3]:

```python
import os
import signal
import subprocess

import boto3
from pyspark.sql import DataFrame
from pyspark.sql import SparkSession

# start moto server, by default it runs on localhost on port 5000.
process = subprocess.Popen(
    "moto_server s3", stdout=subprocess.PIPE,
    shell=True, preexec_fn=os.setsid
)

# create an s3 connection that points to the moto server.
s3_conn = boto3.resource(
    "s3", endpoint_url="http://127.0.0.1:5000"
)
# create an S3 bucket.
s3_conn.create_bucket(Bucket="bucket")

# configure pyspark to use hadoop-aws module.
# notice that we reference the hadoop version we installed.
os.environ[
    "PYSPARK_SUBMIT_ARGS"
] = '--packages "org.apache.hadoop:hadoop-aws:2.7.3" pyspark-shell'

# get the spark session object and hadoop configuration.
spark = SparkSession.builder.getOrCreate()
hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()

# mock the aws credentials to access s3.
hadoop_conf.set("fs.s3a.access.key", "dummy-value")
hadoop_conf.set("fs.s3a.secret.key", "dummy-value")
# we point s3a to our moto server.
hadoop_conf.set("fs.s3a.endpoint", "http://127.0.0.1:5000")
# we need to configure hadoop to use s3a.
hadoop_conf.set("fs.s3.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")

# create a pyspark dataframe.
values = [("k1", 1), ("k2", 2)]
columns = ["key", "value"]
df = spark.createDataFrame(values, columns)
# write the dataframe as csv to s3.
df.write.csv("s3://bucket/source.csv")

# read the dataset from s3
df = spark.read.csv("s3://bucket/source.csv")
# assert df is a DataFrame
assert isinstance(df, DataFrame)

# shut down the moto server.
os.killpg(os.getpgid(process.pid), signal.SIGTERM)

print("yeeey, the test ran without errors.")
```

Copy and paste the above code to a file called "pyspark-mocked-s3.py" and execute:

```
pipenv shell
python pyspark-mocked-s3.py
```

The output will look like:

```
(glue-test-1) bash-3.2$ python pyspark-mocked-s3.py
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
...
127.0.0.1 - - [28/Nov/2019 20:54:59] "HEAD /bucket/source.csv/part-00005-0f74bb8c-599f-4511-8bcf-8665c6c77cc3-c000.csv HTTP/1.1" 200 -
127.0.0.1 - - [28/Nov/2019 20:54:59] "GET /bucket/source.csv/part-00005-0f74bb8c-599f-4511-8bcf-8665c6c77cc3-c000.csv HTTP/1.1" 206 -
yeeey, the test ran without errors.
```

The principles shown in the above script are applied in a more structured way in my repo testing-glue-pyspark-jobs. In this repo, you will find a Python file, test_glue_job.py. This file is an example of a test case for a Glue PySpark job.
It combines the above logic with the principles outlined in an article I wrote about testing serverless services. Have a look at the test case and follow the steps in the readme to run the test. For convenience, a sketch of such a test case is included after the references below.

Good luck!

References:

[1] https://stackoverflow.com/a/50242383/17711552
[2] https://gist.github.com/tobilg/e03dbc474ba976b9f2353
[3] https://github.com/spulec/moto/issues/1543#issuecomment-429000739
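The actual test_glue_job.py from the repo is not reproduced in this copy; the following is a rough, hypothetical pytest-style sketch of the same idea: start moto as a fixture, point both boto3 and Spark at it, then assert on the output.

```python
import os
import signal
import subprocess

import boto3
import pytest
from pyspark.sql import SparkSession


@pytest.fixture(scope="session")
def mocked_s3():
    # start the moto server and create the bucket the job expects
    process = subprocess.Popen(
        "moto_server s3", stdout=subprocess.PIPE, shell=True, preexec_fn=os.setsid
    )
    s3 = boto3.resource("s3", endpoint_url="http://127.0.0.1:5000")
    s3.create_bucket(Bucket="bucket")
    yield s3
    os.killpg(os.getpgid(process.pid), signal.SIGTERM)


@pytest.fixture(scope="session")
def spark(mocked_s3):
    # point s3a at the moto server, exactly as in the script above
    os.environ["PYSPARK_SUBMIT_ARGS"] = (
        '--packages "org.apache.hadoop:hadoop-aws:2.7.3" pyspark-shell'
    )
    spark = SparkSession.builder.getOrCreate()
    conf = spark.sparkContext._jsc.hadoopConfiguration()
    conf.set("fs.s3a.access.key", "dummy-value")
    conf.set("fs.s3a.secret.key", "dummy-value")
    conf.set("fs.s3a.endpoint", "http://127.0.0.1:5000")
    conf.set("fs.s3.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
    return spark


def test_job_reads_and_writes_s3(spark):
    df = spark.createDataFrame([("k1", 1), ("k2", 2)], ["key", "value"])
    df.write.csv("s3://bucket/source.csv")
    # here you would call your Glue job's transform instead of reading back directly
    result = spark.read.csv("s3://bucket/source.csv")
    assert result.count() == 2
```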
[ { "code": null, "e": 210, "s": 172, "text": "A typical use case for a Glue job is;" }, { "code": null, "e": 233, "s": 210, "text": "you read data from S3;" }, { "code": null, "e": 275, "s": 233, "text": "you do some transformations on that data;" }, { "code": null, "e": 317, "s": 275, "text": "you dump the transformed data back to S3." }, { "code": null, "e": 541, "s": 317, "text": "When writing a PySpark job, you write your code and tests in Python and you use the PySpark library to execute your code on a Spark cluster. But how do I let both Python and Spark communicate with the same mocked S3 Bucket?" }, { "code": null, "e": 689, "s": 541, "text": "In this article, I’ll show you how you can setup a mocked S3 bucket that you can access from your python process as well as from the Spark cluster." }, { "code": null, "e": 786, "s": 689, "text": "We are using Glue 1.0, which means Python 3.6.8, Spark/PySpark 2.4.3 and Hadoop 2.8.5.make sure;" }, { "code": null, "e": 819, "s": 786, "text": "you have python 3.6.8 installed;" }, { "code": null, "e": 850, "s": 819, "text": "you have java jdk 8 installed;" }, { "code": null, "e": 897, "s": 850, "text": "you have spark 2.4.3 for hadoop 2.7 installed." }, { "code": null, "e": 1003, "s": 897, "text": "note: Glue uses Hadoop 2.8.5, but for simplicity we use Hadoop 2.7 because it’s shipped with Spark 2.4.3." }, { "code": null, "e": 1099, "s": 1003, "text": "pipenv --python 3.6pipenv install moto[server]pipenv install boto3pipenv install pyspark==2.4.3" }, { "code": null, "e": 1204, "s": 1099, "text": "If you have followed the above steps, you should be able to run successfully the following script: 1 2 3" }, { "code": null, "e": 2854, "s": 1204, "text": "import osimport signalimport subprocessimport boto3from pyspark.sql import DataFramefrom pyspark.sql import SparkSession# start moto server, by default it runs on localhost on port 5000.process = subprocess.Popen( \"moto_server s3\", stdout=subprocess.PIPE, shell=True, preexec_fn=os.setsid)# create an s3 connection that points to the moto server. 
s3_conn = boto3.resource( \"s3\", endpoint_url=\"http://127.0.0.1:5000\")# create an S3 bucket.s3_conn.create_bucket(Bucket=\"bucket\")# configure pyspark to use hadoop-aws module.# notice that we reference the hadoop version we installed.os.environ[ \"PYSPARK_SUBMIT_ARGS\"] = '--packages \"org.apache.hadoop:hadoop-aws:2.7.3\" pyspark-shell'# get the spark session object and hadoop configuration.spark = SparkSession.builder.getOrCreate()hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()# mock the aws credentials to access s3.hadoop_conf.set(\"fs.s3a.access.key\", \"dummy-value\")hadoop_conf.set(\"fs.s3a.secret.key\", \"dummy-value\")# we point s3a to our moto server.hadoop_conf.set(\"fs.s3a.endpoint\", \"http://127.0.0.1:5000\")# we need to configure hadoop to use s3a.hadoop_conf.set(\"fs.s3.impl\", \"org.apache.hadoop.fs.s3a.S3AFileSystem\")# create a pyspark dataframe.values = [(\"k1\", 1), (\"k2\", 2)]columns = [\"key\", \"value\"]df = spark.createDataFrame(values, columns)# write the dataframe as csv to s3.df.write.csv(\"s3://bucket/source.csv\")# read the dataset from s3df = spark.read.csv(\"s3://bucket/source.csv\")# assert df is a DataFrameassert isinstance(df, DataFrame)# shut down the moto server.os.killpg(os.getpgid(process.pid), signal.SIGTERM)print(\"yeeey, the test ran without errors.\")" }, { "code": null, "e": 2937, "s": 2854, "text": "Copy and paste the above code to a file called “pyspark-mocked-s3.py” and execute:" }, { "code": null, "e": 2977, "s": 2937, "text": "pipenv shellpython pyspark-mocked-s3.py" }, { "code": null, "e": 3004, "s": 2977, "text": "The output will look like:" }, { "code": null, "e": 3419, "s": 3004, "text": "(glue-test-1) bash-3.2$ python pyspark-mocked-s3.py* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)...127.0.0.1 - - [28/Nov/2019 20:54:59] \"HEAD /bucket/source.csv/part-00005-0f74bb8c-599f-4511-8bcf-8665c6c77cc3-c000.csv HTTP/1.1\" 200 -127.0.0.1 - - [28/Nov/2019 20:54:59] \"GET /bucket/source.csv/part-00005-0f74bb8c-599f-4511-8bcf-8665c6c77cc3-c000.csv HTTP/1.1\" 206 -yeeey, the test ran without errors." }, { "code": null, "e": 3536, "s": 3419, "text": "The principles showed in the above script are applied in a more structured way in my repo testing-glue-pyspark-jobs." }, { "code": null, "e": 3906, "s": 3536, "text": "In this repo, you will find a Python file, test_glue_job.py. This file is an example of a test case for a Glue PySpark job. It combines the above logic with the principles outlined in an article I wrote about testing serverless services. Have a look at the test case and follow the steps in the readme to run the test. For convenience, I have added the test case below." }, { "code": null, "e": 3917, "s": 3906, "text": "Good luck!" } ]
Java Examples - Extending an Array
How to extend an array after initialization?

The following example shows how to extend an array after initialization by creating a new array.

```java
public class Main {
   public static void main(String[] args) {
      String[] names = new String[] { "A", "B", "C" };
      String[] extended = new String[5];
      extended[3] = "D";
      extended[4] = "E";
      System.arraycopy(names, 0, extended, 0, names.length);

      for (String str : extended){
         System.out.println(str);
      }
   }
}
```

The above code sample will produce the following result.

```
A
B
C
D
E
```

The following is another example of array expansion.

```java
public class Main {
   public void extendArraySize() {
      String[] names = new String[] {"Sai", "Ram", "Krishna"};
      String[] extended = new String[5];
      extended[3] = "Prasad";
      extended[4] = "Mammahe";
      System.arraycopy(names, 0, extended, 0, names.length);

      for (String str : extended) System.out.println(str);
   }
   public static void main(String[] args) {
      new Main().extendArraySize();
   }
}
```

The above code sample will produce the following result.

```
Sai
Ram
Krishna
Prasad
Mammahe
```
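As a supplementary sketch (not part of the original example), java.util.Arrays.copyOf achieves the same extension in a single call:

```java
import java.util.Arrays;

public class CopyOfExample {
   public static void main(String[] args) {
      String[] names = new String[] { "A", "B", "C" };
      // copyOf allocates a new array of length 5 and copies the old elements;
      // the remaining slots are initialized to null.
      String[] extended = Arrays.copyOf(names, 5);
      extended[3] = "D";
      extended[4] = "E";

      for (String str : extended) {
         System.out.println(str);
      }
   }
}
```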
[ { "code": null, "e": 2113, "s": 2068, "text": "How to extend an array after initialisation?" }, { "code": null, "e": 2207, "s": 2113, "text": "Following example shows how to extend an array after initialization by creating an new array." }, { "code": null, "e": 2569, "s": 2207, "text": "public class Main {\n public static void main(String[] args) {\n String[] names = new String[] { \"A\", \"B\", \"C\" };\n String[] extended = new String[5];\n extended[3] = \"D\";\n extended[4] = \"E\";\n System.arraycopy(names, 0, extended, 0, names.length);\n \n for (String str : extended){\n System.out.println(str);\n }\n }\n}" }, { "code": null, "e": 2626, "s": 2569, "text": "The above code sample will produce the following result." }, { "code": null, "e": 2637, "s": 2626, "text": "A\nB\nC\nD\nE\n" }, { "code": null, "e": 2700, "s": 2637, "text": "The following is an another Sample example of arrays expansion" }, { "code": null, "e": 3140, "s": 2700, "text": "public class Main {\n public void extendArraySize() {\n String[] names = new String[] {\"Sai\", \"Ram\", \"Krishna\"};\n String[] extended = new String[5];\n extended[3] = \"Prasad\";\n extended[4] = \"Mammahe\";\n System.arraycopy(names, 0, extended, 0, names.length);\n \n for (String str : extended) System.out.println(str);\n } \n public static void main(String[] args) {\n new Main().extendArraySize();\n }\n}" }, { "code": null, "e": 3197, "s": 3140, "text": "The above code sample will produce the following result." }, { "code": null, "e": 3229, "s": 3197, "text": "Sai\nRam\nKrishna\nPrasad\nMammahe\n" }, { "code": null, "e": 3236, "s": 3229, "text": " Print" }, { "code": null, "e": 3247, "s": 3236, "text": " Add Notes" } ]
Real-time pose estimation web application | Towards Data Science
I think we can all agree that 2020 was an insane year. To keep my sanity straight, I decided to revive an old project that I worked on a long time back with Omer Mintz on pose estimation using PoseNet.

While reviving this project, what I wanted to achieve became clear to me: a pose and action estimation web application that relies on machine learning capabilities for "learning" new actions without compromising on performance.

The results? Well, you can see for yourself. The code is also shared on this Git Repository.

We used the data output provided by the PoseNet pre-trained model and applied some data engineering. With the help of some exploratory data analysis, we found out that a KNN algorithm can classify the results very well. The end result — a system that estimates the exercise a participant is doing.

The requirements:

- A web application that knows how to estimate what pose a participant is found at (stand, squat, pushup).
- Count how many repeats a participant has done.
- High performance — minimal delay should be found between rendering cycles; application interactivity should not be affected.
- Easily extendable — can learn new actions with minimal change.
- Text to speech — bonus.

The tech stack:

- Python using TensorFlow and NumPy — we need a way to apply EDA and train the model.
- React — for rendering and an interactive web application.
- TensorFlow.js — to run the trained models and run ML algorithms in the browser.
- Canvas — image rendering and modification.
- Web Workers — for performance, in order not to overload the main thread.

For the sake of pose detection, I've used the pre-trained model of PoseNet based on the ResNet50 architecture. This pre-trained model allows us to capture the human parts from an image, which, later on, will be used to estimate the actions.

PoseNet is a pre-trained model for pose estimation, found under computer vision. The PoseNet model detects human figures in images and videos and provides the ability to determine the different parts of the human(s) found in a frame.

The PoseNet library handles the following:

- Data pre-processing (crop & resize, scale the pixel values)
- Applying the model on given data using TensorFlow
- Decoding key points from the result
- Calculating the confidence score for each part and the entire pose

The PoseNet model takes a processed camera image as the input. For better performance we will work with frames of 224 x 224 pixels; this will allow us to handle and process less data. A reminder: the PoseNet library will apply another resizing (as mentioned in the previous section).
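As a rough sketch of how the PoseNet library is typically driven from TensorFlow.js (illustrative only; the exact values used in this project are listed in the configuration section below), the call returns the pose object described next:

```javascript
import * as posenet from '@tensorflow-models/posenet';

async function estimate(video) {
  // load the pre-trained model (configuration values are discussed below)
  const net = await posenet.load({
    architecture: 'ResNet50',
    outputStride: 16,
    quantBytes: 4,
    inputResolution: { width: 224, height: 224 },
  });

  // estimate a single pose for the current frame
  const pose = await net.estimateSinglePose(video, { flipHorizontal: false });
  return pose; // { score, keypoints: [...] }, see the structure below
}
```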
The output is an object with:

- score — an overall confidence score of the pose
- keypoints — a list of 17 elements, each element determining the result of a different keypoint (part), identified with the x & y positions, part name, and a score

```
{
  score: float;
  keypoints: Array<{   // Array of the 17 keypoints identified
    position: {x: float, y: float};
    part: EBodyParts;  // the keys of the enum
    score: float;
  }>
}

enum EBodyParts {
  nose,
  leftEye,
  rightEye,
  leftEar,
  rightEar,
  leftShoulder,
  rightShoulder,
  leftElbow,
  rightElbow,
  leftWrist,
  rightWrist,
  leftHip,
  rightHip,
  leftKnee,
  rightKnee,
  leftAnkle,
  rightAnkle
}
```

The configuration I used for PoseNet was:

```
architecture: 'ResNet50'
outputStride: 16
quantBytes: 4
inputResolution: {width: 224, height: 224}
```

Architecture — ResNet50 or MobileNet v1.

Output stride — The output stride determines how much we're scaling down the output relative to the input image size. It affects the size of the layers and the model outputs. The higher the output stride, the smaller the resolution of the layers in the network and the outputs, and correspondingly their accuracy. In this implementation, the output stride can have values of 8, 16, or 32. In other words, an output stride of 32 will result in the fastest performance but lowest accuracy, while 8 will result in the highest accuracy but slowest performance.

Resolution = ((InputImageSize - 1) / OutputStride) + 1

In my configuration: Resolution = ((224 - 1) / 16) + 1 = 14.9375

PoseNet allows us to use one of two model architectures:

- MobileNet v1
- ResNet50

The official PoseNet documentation mentions that MobileNet v1 is smaller and faster with lower accuracy, while the ResNet50 architecture is larger and slower but more accurate. To better understand the difference between the two, I highly recommend reviewing these two articles: MobileNet v1 architecture and ResNet50 architecture.

To transform those key points (X and Y coordinates) into an action, we will need to apply more statistical power. For that case, I decided to proceed with a clustering algorithm; more precisely, the KNN algorithm.

"Information is the oil of the 21st century, and analytics is the combustion engine," said Peter Sondergaard, SVP at Gartner, in 2011.

We are surrounded by data platforms. Data is just lying there, waiting for us to pick it up, clean it, and use it. Of course, this task of "picking it up" and "cleaning it" is not that simple; engineers and data scientists strive for good data to train their models on. I like to compare it to a beach treasure hunt using a metal detector: you have many metal things around, but only in rare cases will you find a real treasure.

My beach was YouTube. More specifically, personal training videos on YouTube where you can train along with the trainers in the same pose. So many poses; now all that is needed is to break the videos down into frames and categorize them into the correct pose (stand, squat, push-up, push-down, etc.).
In order to break the videos down into frames, I used the following simple Python code:

```python
import os
import cv2

def video_to_frames(video_path: str, destination: str):
    # make sure the output and destination directories exist
    if not os.path.exists(os.path.join(os.getcwd(), 'output')):
        os.mkdir(os.path.join(os.getcwd(), 'output'))
    if not os.path.exists(destination):
        os.mkdir(destination)

    vid = cv2.VideoCapture(video_path)  # open the video file
    success, image = vid.read()  # read the first frame
    count = 0
    while success:  # in case there are more frames - proceed
        # write the frame to the destination directory
        cv2.imwrite(os.path.join(destination, f'frame{count}.jpg'), image)
        success, image = vid.read()  # read the next frame
        count += 1
```

After we extracted the frames, we can get our hands dirty with some categorization work. This effort mainly requires moving files to the correct directory for the pose: is it a "squat" or a "stand" position? Now that we have our training set completely ready, it's time to train our model.

After categorizing the images, we are now able to proceed with the model training phase. But first, we need to think about how to handle the data. We know we have a classification problem where we have a set of features we want to map to a single class. The options are:

1. Deep learning classification: Using deep learning for classification is the trend now; we can set up training and test sets to identify the pose. Something like the YOLO model could help us identify whether an image shows a squat, a stand, a push-up, etc. The main problem here is that it requires tons of images to train on and very high compute power, and it will probably lead us to low prediction confidence (for both F1 and accuracy scores).

2. Machine learning clustering algorithm on top of the PoseNet outcome: We already have a very solid model to identify the different body parts of a participant. In that case, we can take an image and convert it to a tabular model. The raw X and Y positions of body parts are not that helpful, but they are still something to begin with.

We will proceed with option number 2. Now we need to prepare our features for the clustering algorithm. That means, instead of working with the X and Y positions of different body parts, we need to have angles, as sketched below.
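As a minimal sketch of that kind of feature engineering (my own illustrative helper, not code from the project), the angle at a joint can be computed from three keypoints with basic trigonometry:

```python
import math

def vertex_angle(a, b, c):
    """Angle (in degrees) at vertex b, formed by the segments b->a and b->c.
    Each point is an (x, y) tuple taken from PoseNet keypoint positions."""
    ang_ab = math.atan2(a[1] - b[1], a[0] - b[0])
    ang_cb = math.atan2(c[1] - b[1], c[0] - b[0])
    angle = abs(math.degrees(ang_ab - ang_cb))
    return 360 - angle if angle > 180 else angle

# e.g., a left knee angle from the left hip, left knee, and left ankle positions
left_knee_angle = vertex_angle((120, 200), (125, 260), (122, 330))
print(left_knee_angle)
```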
That required reviving basic trigonometry formulas from the back of my mind in order to:

Convert x & y points to lines

Calculate the vertex angles of those lines:

Left armpit angle — using the left shoulder, left elbow, and left hip
Right armpit angle — using the right shoulder, right elbow, and right hip
Left shoulder angle — using the left shoulder, right shoulder, and left hip
Right shoulder angle — using the right shoulder, left shoulder, and right hip
Left elbow angle — using the left elbow, left shoulder, and left wrist
Right elbow angle — using the right elbow, right shoulder, and right wrist
Left hip angle — using the left hip, right hip, and left shoulder
Right hip angle — using the right hip, left hip, and right shoulder
Left groin angle — using the left hip, left knee, and left ankle
Right groin angle — using the right hip, right knee, and right ankle
Left knee angle — using the left knee, left ankle, and left hip
Right knee angle — using the right knee, right ankle, and right hip

Calculate the slope of a person's pose in radians; this will help us to identify whether the person is in a vertical or a horizontal position.

This allowed us to get this data set.

After we got the dataset ready, we can do some analysis using PCA to get a visualization of the principal components. This will help us be more confident about the success rate of the classification procedure and identify which algorithm will fit best.

Here is the Google Colab project of the PCA; thanks to Ethel V for helping set it up and fine-tune the features.

As we can see, the clusters are pretty obvious to classify (except squats and standing, where there is some work to be done). I decided to go with KNN to apply the classification.

KNN — k-nearest neighbors, a supervised statistical algorithm that fits both classification and regression problems and is used in machine learning.

1. Load the data and initialize K, the number of neighbors.
2. For each example in the data:
   - Calculate the distance between the current record from the dataset and the query example.
   - Add the distance and the index of the example to a collection.
3. Sort the collection of distances and indices from smallest to largest by their distances.
4. Pick the first K entries from the sorted collection.
5. Get the labels of the selected K entries.
6. Result: in the case of regression, return the mean of the K labels; in the case of classification, return the mode of the K labels.

KNN fits our needs thanks to:

Clear grouping of classes — In most cases, we can identify the group of classes very easily.
Some outliers require more complex handling — KNN handles more complex, especially non-linear, data structures better than other classification algorithms (like SVM).
A small number of records in our dataset — a neural-network-based solution would require far more training data than we have.
Computational complexity — KNN requires less computational power for training/evaluation than a neural network. Also, because we have a small number of classes to pick from and only a few hundred records in the dataset, we should not suffer a major decrease in performance.

Using KNN we will classify the right action using the angles and slope (a minimal sketch of this step is shown below). Later on, using the web application, we will use the different action combinations to identify which exercise is performed by the participant.

We will review this and more in the next chapter.

Hope you enjoyed this article, stay tuned😊
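Here is the minimal KNN sketch referred to above. The file name, column names, and dataset shape are assumptions for illustration only; the real feature set is the angles-and-slope dataset described in the article.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical export of the angle/slope features with a 'pose' label column
df = pd.read_csv('poses.csv')
X = df.drop(columns=['pose'])   # e.g. left_armpit_angle, ..., right_knee_angle, slope
y = df['pose']                  # e.g. 'stand', 'squat', 'pushup'

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print(knn.score(X_test, y_test))  # accuracy on the held-out frames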
Understanding the Confusion Matrix and How to Implement it in Python | by Terence Shin | Towards Data Science
Introduction
What is a Confusion Matrix?
Confusion Matrix Metrics
Example of a 2x2 Matrix
Python Code

Be sure to subscribe here or to my personal newsletter to never miss another article on data science guides, tricks and tips, life lessons, and more!

Anyone can build a machine learning (ML) model with a few lines of code, but building a good machine learning model is a whole other story.

What do I mean by a GOOD machine learning model?

It depends, but generally, you'll evaluate your machine learning model based on some predetermined metrics that you decide to use. When it comes to building classification models, you'll most likely use a confusion matrix and related metrics to evaluate your model. Confusion matrices are not just useful in model evaluation but also in model monitoring and model management!

Don't worry, we're not talking about linear algebra matrices here!

In this article, we'll cover what a confusion matrix is, some key terms and metrics, an example of a 2x2 matrix, and all of the related Python code!

With that said, let's dive into it!

A confusion matrix, also known as an error matrix, is a summarized table used to assess the performance of a classification model. The numbers of correct and incorrect predictions are summarized with count values and broken down by each class.

Below is an image of the structure of a 2x2 confusion matrix. To give an example, let's say that there were ten instances where a classification model predicted 'Yes' and the actual value was also 'Yes'. Then the number ten would go in the top-left corner, in the True Positive quadrant. This leads us to some key terms:

Positive (P): Observation is positive (e.g. is a dog).
Negative (N): Observation is not positive (e.g. is not a dog).
True Positive (TP): Outcome where the model correctly predicts the positive class.
True Negative (TN): Outcome where the model correctly predicts the negative class.
False Positive (FP): Also called a type 1 error, an outcome where the model incorrectly predicts the positive class when it is actually negative.
False Negative (FN): Also called a type 2 error, an outcome where the model incorrectly predicts the negative class when it is actually positive.

Be sure to subscribe here or to my personal newsletter to never miss another article on data science guides, tricks and tips, life lessons, and more!

Now that you understand the general structure of a confusion matrix as well as the associated key terms, we can dive into some of the main metrics that you can calculate from a confusion matrix.

Note: this list is not exhaustive — if you want to see all of the metrics that you can calculate, check out Wikipedia's page here.

Accuracy is simply equal to the proportion of predictions that the model classified correctly.

Precision is also known as positive predictive value and is the proportion of relevant instances among the retrieved instances. In other words, it answers the question "What proportion of positive identifications was actually correct?"

Recall, also known as the sensitivity, hit rate, or true positive rate (TPR), is the proportion of the total amount of relevant instances that were actually retrieved. It answers the question "What proportion of actual positives was identified correctly?"

To really hit it home, the diagram below is a great way to remember the difference between precision and recall (it certainly helped me)!
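Since the diagram itself is not reproduced here, a small numeric sketch may help as well. The counts below are made up purely for illustration:

# A hypothetical classifier evaluated on 100 examples
TP, FP, FN, TN = 30, 10, 5, 55

accuracy = (TP + TN) / (TP + TN + FP + FN)   # 0.85: share of all predictions that were right
precision = TP / (TP + FP)                   # 0.75: of everything flagged positive, how much really was positive
recall = TP / (TP + FN)                      # ~0.857: of all real positives, how many were found

print(accuracy, precision, recall)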
Specificity, also known as the true negative rate (TNR), measures the proportion of actual negatives that are correctly identified as such. It can be thought of as the recall of the negative class.

The F1 score is a measure of a test's accuracy — it is the harmonic mean of precision and recall. It can have a maximum score of 1 (perfect precision and recall) and a minimum of 0. Overall, it is a measure of the preciseness and robustness of your model.

If this still isn't making sense to you, it will after we take a look at the example below.

Imagine that we created a machine learning model that predicts whether a patient has cancer or not. The table on the left shows twelve predictions that the model made as well as the actual result of each patient. With our paired data, you can then fill out the confusion matrix using the structure that I showed above.

Once this is filled in, we can learn a number of things about our model:

Our model predicted that 4/12 (red + yellow) patients had cancer when there were actually 3/12 (red + blue) patients with cancer
Our model has an accuracy of 9/12 or 75% ((red + green)/(total))
The recall of our model is equal to 2/(2+1) = 66%

In reality, you would want the recall of a cancer detection model to be as close to 100% as possible. It's far worse if a patient with cancer is diagnosed as cancer-free, as opposed to a cancer-free patient being diagnosed with cancer, only to realize later with more testing that he/she doesn't have it.

Below is a summary of the code that you need to calculate the metrics above:

# Confusion Matrix
from sklearn.metrics import confusion_matrix
confusion_matrix(y_true, y_pred)

# Accuracy
from sklearn.metrics import accuracy_score
accuracy_score(y_true, y_pred)

# Recall
from sklearn.metrics import recall_score
recall_score(y_true, y_pred, average=None)

# Precision
from sklearn.metrics import precision_score
precision_score(y_true, y_pred, average=None)

There are three ways you can calculate the F1 score in Python:

# Method 1: sklearn
from sklearn.metrics import f1_score
f1_score(y_true, y_pred, average=None)

# Method 2: Manual Calculation
F1 = 2 * (precision * recall) / (precision + recall)

# Method 3: Classification report [BONUS]
from sklearn.metrics import classification_report
print(classification_report(y_true, y_pred, target_names=target_names))

Now that you know what a confusion matrix is as well as its associated metrics, you can effectively evaluate your classification ML models. This is also essential to understand even after you finish developing your ML model, as you'll be leveraging these metrics in the model monitoring and model management stages of the machine learning life cycle.

Be sure to subscribe here or to my personal newsletter to never miss another article on data science guides, tricks and tips, life lessons, and more!

If you like my work and want to support me...

The BEST way to support me is by following me on Medium here.
Be one of the FIRST to follow me on Twitter here. I'll be posting lots of updates and interesting stuff here!
Also, be one of the FIRST to subscribe to my new YouTube channel here!
Follow me on LinkedIn here.
Sign up on my email list here.
Check out my website, terenceshin.com.
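To tie the cancer example to the sklearn calls above, here is one possible reconstruction of the twelve patients. The per-patient ordering is assumed, since the original table is only shown as an image; only the counts (3 actual positives, 4 predicted positives, 9 correct predictions) follow the text.

from sklearn.metrics import accuracy_score, confusion_matrix, recall_score

# 1 = has cancer, 0 = cancer-free; the ordering is illustrative
y_true = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]   # 3 of the 12 patients actually have cancer
y_pred = [1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0]   # the model flags 4 of the 12 as having cancer

print(confusion_matrix(y_true, y_pred))  # [[7 2] [1 2]] -> TN=7, FP=2, FN=1, TP=2
print(accuracy_score(y_true, y_pred))    # 0.75, the 9/12 from the example
print(recall_score(y_true, y_pred))      # ~0.667, the 2/(2+1) from the example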
How to reset or change the MySQL root password?
To reset or change the root password, first we need to open MySQL in Administrator mode. There, we can see all the users as well as their hosts. The following is the query −

mysql> USE mysql;
Database changed
mysql> SELECT user,host from user;

Here is the output.

+------------------+-----------+
| user             | host      |
+------------------+-----------+
| John             | %         |
| Mac              | %         |
| Manish           | %         |
| mysql.infoschema | %         |
| mysql.session    | %         |
| mysql.sys        | %         |
| root             | %         |
| am               | localhost |
+------------------+-----------+
8 rows in set (0.00 sec)

Now, let us see the query to change the password.

mysql> ALTER USER 'root'@'%' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.13 sec)

mysql> ALTER USER 'Manish'@'%' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.14 sec)

As you can see above, we altered the password for the root account and, in the same way, for the user 'Manish'. The ALTER USER query works in MySQL 5.7.6 and higher versions.

To check whether the password has been reset or changed, open the command prompt and navigate to the directory where the MySQL bin folder is present. Let us first try to log in with the old password.

As you can see in the above screenshot, logging in with the previous password, which has been reset or changed, won't work. Now, we will try to open MySQL with the new password, i.e. '123456', and it works.
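If you prefer to verify the new password from a script instead of the command prompt, a quick check with the mysql-connector-python package could look like the following. This snippet is an illustration added here, not part of the original steps; it assumes the package is installed and the server is running locally.

import mysql.connector

# Connecting with the newly set password; an exception here means the change did not take effect
cnx = mysql.connector.connect(host='localhost', user='root', password='123456')
print('Connected as root with the new password')
cnx.close()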
Hazelcast - First Application
Hazelcast can be run in isolation (single node), or multiple nodes can be run to form a cluster. Let us first try starting a single instance.

Now, let us try creating and using a single-instance Hazelcast cluster. For that, we will create the SingleInstanceHazelcastExample.java file.

package com.example.demo;

import java.util.Map;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class SingleInstanceHazelcastExample {
   public static void main(String... args){
      //initialize hazelcast server/instance
      HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance();
      System.out.println("Hello world");

      // perform a graceful shutdown
      hazelcast.shutdown();
   }
}

Now let's compile the code and execute it −

mvn clean install
java -cp target/demo-0.0.1-SNAPSHOT.jar com.example.demo.SingleInstanceHazelcastExample

If you execute the above code, the output would be −

Hello world

However, more importantly, you will also notice log lines from Hazelcast which signify that Hazelcast has started. Since we are running this code only once, i.e., in a single JVM, we would only have one member in our cluster.

Jan 30, 2021 10:26:51 AM com.hazelcast.config.XmlConfigLocator
INFO: Loading 'hazelcast-default.xml' from classpath.
Jan 30, 2021 10:26:51 AM com.hazelcast.instance.AddressPicker
INFO: [LOCAL] [dev] [3.12.12] Prefer IPv4 stack is true.
Jan 30, 2021 10:26:52 AM com.hazelcast.instance.AddressPicker
INFO: [LOCAL] [dev] [3.12.12] Picked [localhost]:5701, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5701], bind any local is true
Jan 30, 2021 10:26:52 AM com.hazelcast.system
...

Members {size:1, ver:1} [
   Member [localhost]:5701 - 9b764311-9f74-40e5-8a0a-85193bce227b this
]

Jan 30, 2021 10:26:56 AM com.hazelcast.core.LifecycleService
INFO: [localhost]:5701 [dev] [3.12.12] [localhost]:5701 is STARTED
...

You will also notice log lines from Hazelcast at the end which signify that Hazelcast was shut down:

INFO: [localhost]:5701 [dev] [3.12.12] Hazelcast Shutdown is completed in 784 ms.
Jan 30, 2021 10:26:57 AM com.hazelcast.core.LifecycleService
INFO: [localhost]:5701 [dev] [3.12.12] [localhost]:5701 is SHUTDOWN

Now, let's create the MultiInstanceHazelcastExample.java file (as below) which would be used for a multi-instance cluster.

package com.example.demo;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class MultiInstanceHazelcastExample {
   public static void main(String... args) throws InterruptedException{
      //initialize hazelcast server/instance
      HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance();

      //print the socket address of this member and also the size of the cluster
      System.out.println(String.format("[%s]: No. of hazelcast members: %s",
         hazelcast.getCluster().getLocalMember().getSocketAddress(),
         hazelcast.getCluster().getMembers().size()));

      // wait for the member to join
      Thread.sleep(30000);

      //perform a graceful shutdown
      hazelcast.shutdown();
   }
}

Let's execute the following command on two different shells −

java -cp .\target\demo-0.0.1-SNAPSHOT.jar com.example.demo.MultiInstanceHazelcastExample

You would notice on the 1st shell that a Hazelcast instance has been started and a member has been assigned. Note the last line of the output, which says that there is a single member using port 5701.
Jan 30, 2021 12:20:21 PM com.hazelcast.internal.cluster.ClusterService
INFO: [localhost]:5701 [dev] [3.12.12]
Members {size:1, ver:1} [
   Member [localhost]:5701 - b0d5607b-47ab-47a2-b0eb-6c17c031fc2f this
]
Jan 30, 2021 12:20:21 PM com.hazelcast.core.LifecycleService
INFO: [localhost]:5701 [dev] [3.12.12] [localhost]:5701 is STARTED
[/localhost:5701]: No. of hazelcast members: 1

You would notice on the 2nd shell that a Hazelcast instance has joined the 1st instance. Note the last line of the output, which says that there are now two members using port 5702.

INFO: [localhost]:5702 [dev] [3.12.12]
Members {size:2, ver:2} [
   Member [localhost]:5701 - b0d5607b-47ab-47a2-b0eb-6c17c031fc2f
   Member [localhost]:5702 - 037b5fd9-1a1e-46f2-ae59-14c7b9724ec6 this
]
Jan 30, 2021 12:20:46 PM com.hazelcast.core.LifecycleService
INFO: [localhost]:5702 [dev] [3.12.12] [localhost]:5702 is STARTED
[/localhost:5702]: No. of hazelcast members: 2
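The examples above are in Java, but as a quick cross-check you can also talk to the running cluster from Python using the hazelcast-python-client package. This block is an addition for illustration and assumes a client version compatible with your Hazelcast server; adjust the member addresses to match your setup.

import hazelcast

# Connect to the two members started above
client = hazelcast.HazelcastClient(cluster_members=["localhost:5701", "localhost:5702"])

# A simple distributed-map round trip proves the cluster is reachable
greetings = client.get_map("greetings").blocking()
greetings.put("hello", "world")
print(greetings.get("hello"))  # world

client.shutdown()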
LISP - Do Construct
The do construct is also used for performing iteration in LISP. It provides a structured form of iteration.

The syntax for the do statement −

(do ((variable1 value1 updated-value1)
   (variable2 value2 updated-value2)
   (variable3 value3 updated-value3)
   ...)
   (test return-value)
   (s-expressions)
)

The initial values of each variable are evaluated and bound to the respective variables. The updated value in each clause corresponds to an optional update statement that specifies how the values of the variables will be updated with each iteration.

After each iteration, the test is evaluated, and if it returns a non-nil or true value, the return-value is evaluated and returned.

The last s-expression(s) are optional. If present, they are executed after every iteration, until the test returns true.

Create a new source code file named main.lisp and type the following code in it −

(do ((x 0 (+ 2 x))
   (y 20 ( - y 2)))
   ((= x y)(- x y))
   (format t "~% x = ~d y = ~d" x y)
)

When you click the Execute button, or type Ctrl+E, LISP executes it immediately and the result returned is −

x = 0 y = 20
x = 2 y = 18
x = 4 y = 16
x = 6 y = 14
x = 8 y = 12
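For readers more familiar with Python than with Lisp, the loop above behaves roughly like the following sketch. This is an added illustration, not part of the original tutorial:

x, y = 0, 20
while x != y:                      # the do form's end test, checked before each pass
    print(f" x = {x} y = {y}")     # the body, like the format call above
    x, y = x + 2, y - 2            # the update expressions
result = x - y                     # the do form's return value, 0 here
print(result)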
A basic Python Programming Challenge - GeeksforGeeks
22 Jul, 2021

Heya guys! I am back with another article, following my previous article on secure coding. This time we are not going to go into any theoretical stuff. Some months ago, I wrote a program in Python for my students so that they could practice basic BODMAS questions. The purpose was that the program should generate a random set of questions (the number of questions to be entered by the user) and then check whether the entered answer is correct or not. Now, obviously it was quite easy for me to code. But the thing was, I had to ensure that 5/2 = 2.5 is treated as just as correct as 2.500. So, I just couldn't go and match two strings. I had to come up with a different solution. Just to have fun and see if any of my students or volunteers could come up with a vulnerability in the program, I specifically wrote a weak program. Now, I have modified the program to make it easier for you all to identify the mistakes and the vulnerabilities in it.

Now, here is what I want you to do:

Don't look at the code. Just compile it, run it and see if you can figure out the vulnerabilities in the code.
If you can't figure out the vulnerabilities in step 1, or even if you did, go and take a look at the program code and try to figure out what are the things you missed!

Once you are done, please comment what you think are the vulnerabilities in the code and how you will correct them!

Here we go!!

Given Input:

3
6
-1

Program for the small basic Python challenge

## Note: This program has been modified a bit for
## GeeksForGeeks article
import random,operator
print ('===========================================')

def randomCalc(i,j):
    ops = {'+':operator.add,
           '-':operator.sub,
           '*':operator.mul,
           '/':operator.truediv }
    num = [1,2,3,4]
    num1,num2 = num[i],num[j]
    op = (list(ops.keys()))[i]
    answer = round(ops.get(op)(num1,num2),3)
    print('What is {} {} {}?\n'.format(num1, op, num2))
    return answer

def askQuestion(i):
    answer = randomCalc(i,i+1)
    guess = float(input())
    return guess == answer,answer

def quiz(numOfQues):
    print('\nWelcome. This is a '+str(numOfQues)+' question math quiz.')
    print('Your answer should be correct to three decimal places.\n')
    score = 0
    for i in range(numOfQues):
        correct,ans = askQuestion(i)
        if correct:
            score += 1
            print('Correct!\n')
        else:
            print('Incorrect! The correct answer is ' + str(ans)+'\n')
    return ('Your score was {}/'+str(numOfQues)).format(score)

# Driver Code
print (quiz(3))

===========================================

Welcome. This is a 3 question math quiz.
Your answer should be correct to three decimal places.

What is 1 + 2?

Correct!

What is 2 * 3?

Correct!

What is 3 - 4?

Correct!

Your score was 3/3

Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above!!

About the author:

Vishwesh Shrimali is an Undergraduate Mechanical Engineering student at BITS Pilani. He fulfils just about all the requirements not taught in his branch: white hat hacker, network security operator, and ex-competitive programmer. As a firm believer in the power of Python, the majority of his work has been in the same language. Whenever he gets some time apart from programming, attending classes, and watching CSI Cyber, he goes for a long walk and plays guitar in silence.
His motto of life is – “Enjoy your life, ‘cause it’s worth enjoying!”

If you also wish to showcase your blog here, please see GBlog for guest blog writing on GeeksforGeeks.
Chef - Test Kitchen Setup
Test Kitchen is Chef’s integrated testing framework. It enables writing test recipes, which will run on the VMs once they are instantiated and converged using the cookbook. The test recipes run on that VM and can verify whether everything works as expected. ChefSpec only simulates a Chef run; Test Kitchen boots up a real node and runs Chef on it.

Step 1 − Install the Test Kitchen Ruby gem and the kitchen-vagrant gem, which lets Test Kitchen use Vagrant to spin up test instances.

$ gem install kitchen
$ gem install kitchen-vagrant

Step 2 − Set up Test Kitchen. This is done by creating .kitchen.yml in the cookbook directory.

driver_plugin: vagrant
driver_config:
   require_chef_omnibus: true
platforms:
   - name: ubuntu-12.04
   driver_config:
      box: opscode-ubuntu-12.04
      box_url: https://opscode-vm.s3.amazonaws.com/vagrant/opscode_ubuntu-12.04_provisionerless.box
suites:
   - name: default
run_list:
   - recipe[minitest-handler]
   - recipe[my_cookbook_test]
attributes: { my_cookbook: { greeting: 'Ohai, Minitest!'} }

In the above file, one part states that Vagrant needs to spin up the VMs, and that you want Omnibus to install Chef on the target node. The second part defines the platforms on which you want to test the cookbooks. Vagrant always creates and destroys new instances, so you do not have to worry about side effects from the VMs you spin up with the Vagrantfile.

Test Kitchen can be considered a temporary environment, similar to production, in which cookbooks can be run and tested. With Test Kitchen, one can make sure that a given piece of code works before it is actually deployed to the testing, preproduction, and production environments. Many organizations follow this practice as a standard step before putting cookbooks into an actual working environment.

Following are the steps involved in the Test Kitchen workflow.

Use the following command to create a cookbook.

$ chef generate cookbook motd_rhel
Installing Cookbook Gems:

Compiling Cookbooks...
Recipe: code_generator::cookbook * directory[C:/chef/cookbooks/motd_rhel] action create - create new directory C:/chef/cookbooks/motd_rhel * template[C:/chef/cookbooks/motd_rhel/metadata.rb] action create_if_missing - create new file C:/chef/cookbooks/motd_rhel/metadata.rb - update content in file C:/chef/cookbooks/motd_rhel/metadata.rb from none to d6fcc2 (diff output suppressed by config) * template[C:/chef/cookbooks/motd_rhel/README.md] action create_if_missing - create new file C:/chef/cookbooks/motd_rhel/README.md - update content in file C:/chef/cookbooks/motd_rhel/README.md from none to 50deab (diff output suppressed by config) * cookbook_file[C:/chef/cookbooks/motd_rhel/chefignore] action create - create new file C:/chef/cookbooks/motd_rhel/chefignore - update content in file C:/chef/cookbooks/motd_rhel/chefignore from none to 15fac5 (diff output suppressed by config) * cookbook_file[C:/chef/cookbooks/motd_rhel/Berksfile] action create_if_missing - create new file C:/chef/cookbooks/motd_rhel/Berksfile - update content in file C:/chef/cookbooks/motd_rhel/Berksfile from none to 9f08dc (diff output suppressed by config) * template[C:/chef/cookbooks/motd_rhel/.kitchen.yml] action create_if_missing - create new file C:/chef/cookbooks/motd_rhel/.kitchen.yml - update content in file C:/chef/cookbooks/motd_rhel/.kitchen.yml from none to 49b92b (diff output suppressed by config) * directory[C:/chef/cookbooks/motd_rhel/test/integration/default/serverspec] action create - create new directory C:/chef/cookbooks/motd_rhel/test/integration/default/serverspec * directory[C:/chef/cookbooks/motd_rhel/test/integration/helpers/serverspec] action create - create new directory C:/chef/cookbooks/motd_rhel/test/integration/helpers/serverspec * cookbook_file [C:/chef/cookbooks/motd_rhel/test/integration/helpers/serverspec/spec_helper.rb] action create_if_missing - create new file C:/chef/cookbooks/motd_rhel/test/integration/helpers/serverspec/spec_helper.rb - update content in file C:/chef/cookbooks/motd_rhel/test/integration/helpers/serverspec/spec_helper.rb from none to d85df4 (diff output suppressed by config) * template [C:/chef/cookbooks/motd_rhel/test/integration/default/serverspec/defaul t_spec.rb] action create_if_missing - create new file C:/chef/cookbooks/motd_rhel/test/integration/default/serverspec/default_spec.rb - update content in file C:/chef/cookbooks/motd_rhel/test/integration/default/serverspec/default_spec.rb from none to 3fbdbd (diff output suppressed by config) * directory[C:/chef/cookbooks/motd_rhel/spec/unit/recipes] action create - create new directory C:/chef/cookbooks/motd_rhel/spec/unit/recipes * cookbook_file [C:/chef/cookbooks/motd_rhel/spec/spec_helper.rb] action create_if_missing - create new file C:/chef/cookbooks/motd_rhel/spec/spec_helper.rb - update content in file C:/chef/cookbooks/motd_rhel/spec/spec_helper.rb from none to 587075 (diff output suppressed by config) * template [C:/chef/cookbooks/motd_rhel/spec/unit/recipes/default_spec.rb] action create_if_missing - create new file C:/chef/cookbooks/motd_rhel/spec/unit/recipes/default_spec.rb - update content in file C:/chef/cookbooks/motd_rhel/spec/unit/recipes/default_spec.rb from none to ff3b17 (diff output suppressed by config) * directory[C:/chef/cookbooks/motd_rhel/recipes] action create - create new directory C:/chef/cookbooks/motd_rhel/recipes * template[C:/chef/cookbooks/motd_rhel/recipes/default.rb] action create_if_missing - create new file C:/chef/cookbooks/motd_rhel/recipes/default.rb - update content in 
file C:/chef/cookbooks/motd_rhel/recipes/default.rb from none to c4b029 (diff output suppressed by config)
 * execute[initialize-git] action run
 - execute git init .
 * cookbook_file[C:/chef/cookbooks/motd_rhel/.gitignore] action create
 - create new file C:/chef/cookbooks/motd_rhel/.gitignore
 - update content in file C:/chef/cookbooks/motd_rhel/.gitignore from none to 33d469 (diff output suppressed by config)
 * execute[git-add-new-files] action run
 - execute git add .
 * execute[git-commit-new-files] action run
 - execute git commit -m "Add generated cookbook content"

The above command also generates the cookbook directory structure. The generated .kitchen.yml file looks as follows.

driver:
   name: vagrant
provisioner:
   name: chef_zero
# verifier:
#   name: inspec
#   format: doc
platforms:
   - name: ubuntu-14.04
suites:
   - name: default
     run_list:
        - recipe[motd_rhel::default]
     attributes:

Drivers − It specifies the software which manages the machine.

Provisioner − It specifies how Chef runs. We are using chef_zero because it mimics a Chef server environment on the local machine, which allows working with node attributes and Chef server specifications.

Platform − This specifies the target operating system.

Suites − It defines what you want to apply to the virtual environment. Here you can define multiple suites; this is where you define the run list, which specifies which recipes to run and in which sequence.

$ kitchen list
Instance Driver Provisioner Verifier Transport Last Action
ubuntu-1404 Vagrant ChefZero Busser Ssh <Not Created>

$ kitchen create
-----> Starting Kitchen (v1.4.2)
-----> Creating <default-centos-72>...
 Bringing machine 'default' up with 'virtualbox' provider...
 ==> default: Box 'opscode-centos-7.2' could not be found. Attempting to find and install...
 default: Box Provider: virtualbox
 default: Box Version: >= 0
 ==> default: Box file was not detected as metadata. Adding it directly...
 ==> default: Adding box 'opscode-centos-7.2' (v0) for provider: virtualbox
 default: Downloading: https://opscode-vmbento.s3.amazonaws.com/vagrant/virtualbox/opscode_centos-7.1_chefprovisionerless.box[...]
 Vagrant instance <default-centos-72> created.
 Finished creating <default-centos-72> (3m12.01s).
-----> Kitchen is finished. (3m12.60s)

$ kitchen converge
-----> Converging <default-centos-72>...
 Preparing files for transfer
 Preparing dna.json
 Resolving cookbook dependencies with Berkshelf 4.0.1...
 Removing non-cookbook files before transfer
 Preparing validation.pem
 Preparing client.rb
-----> Chef Omnibus installation detected (install only if missing)
 Transferring files to <default-centos-72>
 Starting Chef Client, version 12.6.0
 resolving cookbooks for run list: ["motd_rhel::default"]
 Synchronizing Cookbooks: - motd_rhel (0.1.0)
 Compiling Cookbooks... Converging 1 resources
 Recipe: motd_rhel::default (up to date)
 Running handlers: Running handlers complete
 Chef Client finished, 0/1 resources updated in 01 seconds
 Finished converging <default-centos-72> (0m3.57s).
-----> Kitchen is finished. (0m4.55s)

kitchen login is used to test whether the testing VM is provisioned correctly.

$ kitchen login
Last login: Thu Jan 30 19:02:14 2017 from 10.0.2.2
hostname: default-centos-72
fqdn: default-centos-72
memory: 244180kB
cpu count: 1

$ exit
Logout
Connection to 127.0.0.1 closed.

$ kitchen destroy
-----> Starting Kitchen (v1.4.2)
-----> Destroying <default-centos-72>...
 ==> default: Forcing shutdown of VM...
 ==> default: Destroying VM and associated drives...
 Vagrant instance <default-centos-72> destroyed.
 Finished destroying <default-centos-72> (0m4.94s).
-----> Kitchen is finished. (0m5.93s)
[ { "code": null, "e": 2633, "s": 2380, "text": "Test Kitchen is Chef’s integrated testing framework. It enables writing test recipes, which will run on the VMs once they are instantiated and converged using the cookbook. The test recipes run on that VM and can verify if everything works as expected." }, { "code": null, "e": 2741, "s": 2633, "text": "ChefSpec is something which only simulates a Chef run. Test kitchen boots up real node and runs Chef on it." }, { "code": null, "e": 2869, "s": 2741, "text": "Step 1 − Install test kitchen Ruby gem and test kitchen vagrant gem to enable test kitchen to use vagrant for spinning up test." }, { "code": null, "e": 2924, "s": 2869, "text": "$ gem install kitchen \n$ gem install kitchen-vagrant \n" }, { "code": null, "e": 3023, "s": 2924, "text": "Step 2 − Set up test kitchen. This can be done by creating .kitchen.yml in the cookbook directory." }, { "code": null, "e": 3455, "s": 3023, "text": "driver_plugin: vagrant \ndriver_config: \n require_chef_omnibus: true \nplatforms: \n - name: ubuntu-12.04 \n driver_config: \n box: opscode-ubuntu-12.04 \n box_url: https://opscode-vm.s3.amazonaws.com/vagrant/opscode_ \n ubuntu-12.04_provisionerless.box \nsuites: \n - name: default \nrun_list: \n - recipe[minitest-handler] \n - recipe[my_cookbook_test] \nattributes: { my_cookbook: { greeting: 'Ohai, Minitest!'} } " }, { "code": null, "e": 3602, "s": 3455, "text": "In the above code, one part defines that vagrant needs to spin up the VMs and it defines that you want Omnibus to install Chef on the target node." }, { "code": null, "e": 3823, "s": 3602, "text": "The second part defines which platform you want to test the cookbooks. Vagrant will always create and destroy new instances. You do not have to fear about the side effects with vagrant VMs you spin up using Vagrant file." }, { "code": null, "e": 4289, "s": 3823, "text": "Test kitchen can be considered as a temporary environment that helps to run and test cookbooks in a temporary environment that is similar to production. With test kitchen on, one can make sure that the given piece of code is working, before it is actually getting deployed on to testing, preproduction, and production environment. This feature of test kitchen is followed by many organizations as a set before putting the cookbooks in an actual working environment." }, { "code": null, "e": 4348, "s": 4289, "text": "Following are the steps involved in Test Kitchen Workflow." }, { "code": null, "e": 4393, "s": 4348, "text": "Use the following code to create a cookbook." }, { "code": null, "e": 9170, "s": 4393, "text": "$ chef generate cookbook motd_rhel \nInstalling Cookbook Gems: \n\nCompiling Cookbooks... 
\nRecipe: code_generator::cookbook\n * directory[C:/chef/cookbooks/motd_rhel] action create\n - create new directory C:/chef/cookbooks/motd_rhel\n \n * template[C:/chef/cookbooks/motd_rhel/metadata.rb] action create_if_missing\n - create new file C:/chef/cookbooks/motd_rhel/metadata.rb\n - update content in file C:/chef/cookbooks/motd_rhel/metadata.rb from none to \n d6fcc2 (diff output suppressed by config)\n \n * template[C:/chef/cookbooks/motd_rhel/README.md] action create_if_missing\n - create new file C:/chef/cookbooks/motd_rhel/README.md\n - update content in file C:/chef/cookbooks/motd_rhel/README.md from none to 50deab\n (diff output suppressed by config)\n \n * cookbook_file[C:/chef/cookbooks/motd_rhel/chefignore] action create\n - create new file C:/chef/cookbooks/motd_rhel/chefignore\n - update content in file C:/chef/cookbooks/motd_rhel/chefignore from none to 15fac5\n (diff output suppressed by config)\n \n * cookbook_file[C:/chef/cookbooks/motd_rhel/Berksfile] action create_if_missing\n - create new file C:/chef/cookbooks/motd_rhel/Berksfile\n - update content in file C:/chef/cookbooks/motd_rhel/Berksfile from none to 9f08dc\n (diff output suppressed by config)\n \n * template[C:/chef/cookbooks/motd_rhel/.kitchen.yml] action create_if_missing\n - create new file C:/chef/cookbooks/motd_rhel/.kitchen.yml\n - update content in file C:/chef/cookbooks/motd_rhel/.kitchen.yml\n from none to 49b92b (diff output suppressed by config)\n \n * directory[C:/chef/cookbooks/motd_rhel/test/integration/default/serverspec]\n action create \n - create new directory \n C:/chef/cookbooks/motd_rhel/test/integration/default/serverspec\n \n * directory[C:/chef/cookbooks/motd_rhel/test/integration/helpers/serverspec]\n action create \n - create new directory \n C:/chef/cookbooks/motd_rhel/test/integration/helpers/serverspec\n \n * cookbook_file\n [C:/chef/cookbooks/motd_rhel/test/integration/helpers/serverspec/spec_helper.rb]\n action create_if_missing\n - create new file \n C:/chef/cookbooks/motd_rhel/test/integration/helpers/serverspec/spec_helper.rb\n - update content in file\n C:/chef/cookbooks/motd_rhel/test/integration/helpers/serverspec/spec_helper.rb\n from none to d85df4 (diff output suppressed by config)\n \n * template\n [C:/chef/cookbooks/motd_rhel/test/integration/default/serverspec/defaul t_spec.rb]\n action create_if_missing\n - create new file\n C:/chef/cookbooks/motd_rhel/test/integration/default/serverspec/default_spec.rb\n - update content in file\n C:/chef/cookbooks/motd_rhel/test/integration/default/serverspec/default_spec.rb\n from none to 3fbdbd (diff output suppressed by config)\n \n * directory[C:/chef/cookbooks/motd_rhel/spec/unit/recipes] action create\n - create new directory C:/chef/cookbooks/motd_rhel/spec/unit/recipes\n \n * cookbook_file\n [C:/chef/cookbooks/motd_rhel/spec/spec_helper.rb] action create_if_missing\n - create new file C:/chef/cookbooks/motd_rhel/spec/spec_helper.rb\n - update content in file\n C:/chef/cookbooks/motd_rhel/spec/spec_helper.rb from none to 587075\n (diff output suppressed by config)\n \n * template\n [C:/chef/cookbooks/motd_rhel/spec/unit/recipes/default_spec.rb]\n action create_if_missing\n - create new file C:/chef/cookbooks/motd_rhel/spec/unit/recipes/default_spec.rb\n - update content in file\n C:/chef/cookbooks/motd_rhel/spec/unit/recipes/default_spec.rb\n from none to ff3b17 (diff output suppressed by config)\n \n * directory[C:/chef/cookbooks/motd_rhel/recipes] action create\n - create new directory 
C:/chef/cookbooks/motd_rhel/recipes\n \n * template[C:/chef/cookbooks/motd_rhel/recipes/default.rb] action create_if_missing\n - create new file C:/chef/cookbooks/motd_rhel/recipes/default.rb\n - update content in file\n C:/chef/cookbooks/motd_rhel/recipes/default.rb from none to c4b029\n (diff output suppressed by config) \n \n * execute[initialize-git] action run \n - execute git init . \n \n * cookbook_file[C:/chef/cookbooks/motd_rhel/.gitignore] action create\n - create new file C:/chef/cookbooks/motd_rhel/.gitignore\n - update content in file C:/chef/cookbooks/motd_rhel/.gitignore from none to 33d469\n (diff output suppressed by config)\n \n * execute[git-add-new-files] action run\n - execute git add .\n \n * execute[git-commit-new-files] action run \n - execute git commit -m \"Add generated cookbook content\" " }, { "code": null, "e": 9246, "s": 9170, "text": "Following is the Created Cookbook Structure as an output of the above code." }, { "code": null, "e": 9483, "s": 9246, "text": "driver: \n name: vagrant \nprovisioner: \n name: chef_zero \n# verifier: \n# name: inspec \n# format: doc \nplatforms: \n - name: ubuntu-14.04 \nsuites: \n - name: default \n run_list: \n - recipe[motd_rhel::default] \n attributes: " }, { "code": null, "e": 9546, "s": 9483, "text": "Drivers − It specifies the software which manages the machine." }, { "code": null, "e": 9776, "s": 9546, "text": "Provisioner − It provides specification on how Chef runs. We are using chef_zero because it enables to mimic a Chef server environment on the local machine. This allows to work with node attributes and Chef server specifications." }, { "code": null, "e": 9831, "s": 9776, "text": "Platform − This specifies the target operating system." }, { "code": null, "e": 10065, "s": 9831, "text": "Suites − It defines what one wants to apply on the virtual environment. Here, you define multiple definition. It is the location where you define the run list, which specifies which recipe to run and in which sequence we need to run." }, { "code": null, "e": 10211, "s": 10065, "text": "$ kitchen list \nInstance Driver Provisioner Verifier Transport Last Action \nubuntu-1404 Vagrant ChefZero Busser Ssh <Not Created> \n" }, { "code": null, "e": 11037, "s": 10211, "text": "$ kitchen create\n-----> Starting Kitchen (v1.4.2)\n-----> Creating <default-centos-72>...\n Bringing machine 'default' up with 'virtualbox' provider...\n ==> default: Box 'opscode-centos-7.2' could not be found.\n Attempting to find and install...\n default: Box Provider: virtualbox\n default: Box Version: >= 0\n ==> default: Box file was not detected as metadata. Adding it directly...\n ==> default: Adding box 'opscode-centos-7.2' (v0) for provider: virtualbox\n default: Downloading:\n https://opscode-vmbento.s3.amazonaws.com/vagrant/virtualbox/\n opscode_centos-7.1_chefprovisionerless.box[...]\n Vagrant instance <default-centos-72> created.\n Finished creating <default-centos-72> (3m12.01s).\n -----> Kitchen is finished. (3m12.60s)\n" }, { "code": null, "e": 12039, "s": 11037, "text": "$ kitchen converge \n-----> Converging <default-centos-72>... 
\n Preparing files for transfer \n Preparing dna.json \n Resolving cookbook dependencies with Berkshelf 4.0.1...\n Removing non-cookbook files before transfer \n Preparing validation.pem \n Preparing client.rb \n-----> Chef Omnibus installation detected (install only if missing) \n Transferring files to <default-centos-72> \n Starting Chef Client, version 12.6.0 \n resolving cookbooks for run list: [\"motd_rhel::default\"]\n Synchronizing Cookbooks: - motd_rhel (0.1.0) \n Compiling Cookbooks... Converging 1 resources \n Recipe: motd_rhel::default (up to date) \n Running handlers: Running handlers complete \n Chef Client finished, 0/1 resources updated in 01 seconds \n Finished converging <default-centos-72> (0m3.57s). \n -----> Kitchen is finished. (0m4.55s) \n" }, { "code": null, "e": 12113, "s": 12039, "text": "Kitchen login is used to test if the testing VM is provisioned correctly." }, { "code": null, "e": 12276, "s": 12113, "text": "$ kitchen login \nLast login: Thu Jan 30 19:02:14 2017 from 10.0.2.2 \nhostname: default-centos-72 \nfqdn: default-centos-72 \nmemory: 244180kBcpu count: 1 \n" }, { "code": null, "e": 12326, "s": 12276, "text": "$ exit \nLogout \nConnection to 127.0.0.1 closed. \n" }, { "code": null, "e": 12694, "s": 12326, "text": "$ Kitchen destroy \n-----> Starting Kitchen (v1.4.2) \n-----> Destroying <default-centos-72>... \n ==> default: Forcing shutdown of VM... \n ==> default: Destroying VM and associated drives... \n Vagrant instance <default-centos-72> destroyed. \n Finished destroying <default-centos-72> (0m4.94s). \n-----> Kitchen is finished. (0m5.93s) \n" }, { "code": null, "e": 12701, "s": 12694, "text": " Print" }, { "code": null, "e": 12712, "s": 12701, "text": " Add Notes" } ]
Pascal - Arithmetic Operators
The following table shows all the arithmetic operators supported by Pascal. Assume variable A holds 10 and variable B holds 20, then −

+ (addition): A + B gives 30
- (subtraction): A - B gives -10
* (multiplication): A * B gives 200
/ (real division): B / A gives 2.0
div (integer division): B div A gives 2
mod (modulus, the remainder of integer division): B mod A gives 0

The following example illustrates the arithmetic operators −

program calculator;
var
a, b, c : integer;
d : real;

begin
   a := 21;
   b := 10;
   c := a + b;
   writeln(' Line 1 - Value of c is ', c );
   c := a - b;
   writeln('Line 2 - Value of c is ', c );
   c := a * b;
   writeln('Line 3 - Value of c is ', c );
   d := a / b;
   writeln('Line 4 - Value of d is ', d:3:2 );
   c := a mod b;
   writeln('Line 5 - Value of c is ', c );
   c := a div b;
   writeln('Line 6 - Value of c is ', c );
end.

Please note that Pascal is a very strongly typed programming language, so it would give an error if you try to store the result of a real division in an integer variable. When the above code is compiled and executed, it produces the following result:

Line 1 - Value of c is 31
Line 2 - Value of c is 11
Line 3 - Value of c is 210
Line 4 - Value of d is 2.10
Line 5 - Value of c is 1
Line 6 - Value of c is 2
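For readers more familiar with Python (this comparison is an addition, not part of the Pascal tutorial), the div and mod operators behave like Python's // and % for the positive values used here, while / performs real division in both languages:

# Python analogue of the Pascal example above (illustrative only)
a, b = 21, 10
print(a + b, a - b, a * b)   # 31 11 210
print(a / b)                 # 2.1 -> real division, like Pascal's /
print(a % b)                 # 1   -> like Pascal's mod
print(a // b)                # 2   -> like Pascal's div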
[ { "code": null, "e": 2214, "s": 2083, "text": "Following table shows all the arithmetic operators supported by Pascal. Assume variable A holds 10 and variable B holds 20, then −" }, { "code": null, "e": 2275, "s": 2214, "text": "The following example illustrates the arithmetic operators −" }, { "code": null, "e": 2742, "s": 2275, "text": "program calculator;\nvar\na,b,c : integer;\nd: real;\n\nbegin\n a:=21;\n b:=10;\n c := a + b;\n \n writeln(' Line 1 - Value of c is ', c );\n c := a - b;\n \n writeln('Line 2 - Value of c is ', c );\n c := a * b;\n \n writeln('Line 3 - Value of c is ', c );\n d := a / b;\n \n writeln('Line 4 - Value of d is ', d:3:2 );\n c := a mod b;\n \n writeln('Line 5 - Value of c is ' , c );\n c := a div b;\n \n writeln('Line 6 - Value of c is ', c );\nend." }, { "code": null, "e": 2992, "s": 2742, "text": "Please note that Pascal is very strongly typed programming language, so it would give an error if you try to store the results of a division in an integer type variable. When the above code is compiled and executed, it produces the following result:" }, { "code": null, "e": 3150, "s": 2992, "text": "Line 1 - Value of c is 31\nLine 2 - Value of c is 11\nLine 3 - Value of c is 210\nLine 4 - Value of d is 2.10\nLine 5 - Value of c is 1\nLine 6 - Value of c is 2\n" }, { "code": null, "e": 3185, "s": 3150, "text": "\n 94 Lectures \n 8.5 hours \n" }, { "code": null, "e": 3208, "s": 3185, "text": " Stone River ELearning" }, { "code": null, "e": 3215, "s": 3208, "text": " Print" }, { "code": null, "e": 3226, "s": 3215, "text": " Add Notes" } ]
SAS - DO WHILE Loop
The DO WHILE loop uses a WHILE condition that is evaluated at the top of the loop. The SAS statements inside the loop are executed repeatedly as long as the condition is true, i.e. until the condition becomes false.

Syntax:

DO WHILE (variable condition);
. . . SAS statements . . . ;
END;

Example:

DATA MYDATA;
SUM = 0;
VAR = 1;
DO WHILE(VAR < 6);
   SUM = SUM + VAR;
   VAR + 1;
END;
PROC PRINT;
RUN;

When the above code is executed, it produces the following result: the printed dataset contains a single observation with SUM = 15 and VAR = 6.
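For comparison only (this snippet is an addition, not part of the SAS tutorial), the same top-tested loop can be sketched in Python; as with DO WHILE, the condition is checked before every pass:

# Python analogue of the SAS example above (illustrative only)
total = 0
var = 1
while var < 6:      # condition is tested before each iteration
    total += var    # same accumulation as SUM = SUM + VAR
    var += 1        # same increment as VAR + 1
print(total, var)   # prints: 15 6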
[ { "code": null, "e": 2710, "s": 2583, "text": "This DO WHILE loop uses a WHILE condition. The SAS statements are repeatedly executed until the while condition becomes false." }, { "code": null, "e": 2777, "s": 2710, "text": "DO WHILE (variable condition);\n. . . SAS statements . . . ;\nEND;\n" }, { "code": null, "e": 2882, "s": 2777, "text": "DATA MYDATA;\nSUM = 0;\nVAR = 1;\nDO WHILE(VAR<6) ;\n SUM = SUM+VAR;\n VAR+1;\nEND;\n PROC PRINT;\n RUN;" }, { "code": null, "e": 2948, "s": 2882, "text": "When the above code is executed, it produces the following result" }, { "code": null, "e": 2985, "s": 2950, "text": "\n 50 Lectures \n 5.5 hours \n" }, { "code": null, "e": 3002, "s": 2985, "text": " Code And Create" }, { "code": null, "e": 3037, "s": 3002, "text": "\n 124 Lectures \n 30 hours \n" }, { "code": null, "e": 3050, "s": 3037, "text": " Juan Galvan" }, { "code": null, "e": 3087, "s": 3050, "text": "\n 162 Lectures \n 31.5 hours \n" }, { "code": null, "e": 3107, "s": 3087, "text": " Yossef Ayman Zedan" }, { "code": null, "e": 3142, "s": 3107, "text": "\n 35 Lectures \n 2.5 hours \n" }, { "code": null, "e": 3155, "s": 3142, "text": " Ermin Dedic" }, { "code": null, "e": 3192, "s": 3155, "text": "\n 167 Lectures \n 45.5 hours \n" }, { "code": null, "e": 3208, "s": 3192, "text": " Muslim Helalee" }, { "code": null, "e": 3215, "s": 3208, "text": " Print" }, { "code": null, "e": 3226, "s": 3215, "text": " Add Notes" } ]
Testing the Assumptions of Linear Regression | by Shuangyuan (Sharon) Wei | Towards Data Science
It seems that nowadays, when everyone is so much into all kinds of fancy machine learning algorithms, few people still care to ask: what are the key assumptions required for Ordinary Least Squares (OLS) regression? How can I test whether my model satisfies these assumptions? Since simple linear regression is arguably the most popular modeling approach across every field in social science, I think it is worthwhile to do a quick recap of the fundamental assumptions for OLS and run some tests while building a linear regression model on the classic Boston Housing data.

The Gauss-Markov assumptions assure that the OLS regression coefficients are the Best Linear Unbiased Estimates, or BLUE:

1. Linearity in parameters
2. Random sampling: the observed data represent a random sample from the population
3. No perfect collinearity among covariates
4. Zero conditional mean of the error (i.e. E(μ|X) = 0), often referred to as exogeneity
5. Homoskedasticity (constant variance) of the errors

It is important to note that OLS is unbiased (i.e. E(β*) = β) when assumptions 1–4 are satisfied. Heteroscedasticity has no effect on the bias or consistency of OLS estimators, but it means the OLS estimators are no longer BLUE and the OLS estimates of the standard errors are incorrect.

The Boston house prices dataset consists of 506 observations of 14 attributes:

crim: per capita crime rate by town
zn: proportion of residential land zoned for lots over 25,000 sq. ft
indus: proportion of non-retail business acres per town
chas: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
nox: nitric oxides concentration (parts per 10 million)
rm: average number of rooms per dwelling
age: proportion of owner-occupied units built prior to 1940
dis: weighted distances to five Boston employment centres
rad: index of accessibility to radial highways
tax: full-value property-tax rate per USD 10,000
ptratio: pupil-teacher ratio by town
black: 1000(B − 0.63)^2, where B is the proportion of blacks by town
lstat: percentage of the lower status of the population
medv: median value of owner-occupied homes in USD 1000's

It is natural to ask what factors may affect and predict housing prices by looking at the data. I start with some simple scatter plots to visually check the relationships between variables (see below). The set of charts plots medv against crim, rm, age, lstat, and dis, and the relationships do not appear linear.

I also plot histograms to check the univariate distributions, and I decide to use log(medv) instead of medv as the dependent variable because the log transformation mitigates the skew. Similarly, I will use log(crim).

My intuition and the exploratory analysis plots tell me that the percentage of the lower status of the population, the number of rooms, the crime rate, and the distances to five Boston employment centers are probably the most important predictors. Even so, in this article I start by running a lasso model with all the variables as the baseline, and also as a way to select features. Here is the R code:

The lasso reg_base does not return any zero coefficients, but I found that log(crim) and a few other variables are not significant. 
I excluded those insignificant variables in the reg_1 model, and here are the coefficients:

Quick interpretation: a 3.3% decrease in the median housing price for a one-unit (one percentage point) increase in the percentage of the lower status of the population (lstat), and a 10% increase in the housing price for a one-unit increase in the number of rooms. It makes sense to me.

Residuals vs Fitted: residuals spread equally around a horizontal line without distinct patterns are a good indication of a linear relationship. If there are clear trends in the residual plot, or the plot looks like a funnel, these are clear indicators that the given linear model is inappropriate.

Normal Q-Q shows whether the residuals are normally distributed. It's good if the residuals line up well on the straight dashed line.

Scale-Location can be used to check the assumption of equal variance (homoscedasticity). It's good if we see a horizontal line with equally (randomly) spread points.

The residuals vs fitted plot shows that the linearity assumption is more or less satisfied; the log transformation takes care of the non-linearity. However, the scale-location plot indicates heteroscedasticity.

print(bptest(reg_1, data = Boston, studentize = TRUE))

studentized Breusch-Pagan test
data: reg_1
BP = 64.991, df = 7, p-value = 1.51e-11

Rejecting the null hypothesis of homoscedasticity in the Breusch-Pagan test indicates heteroscedasticity (HSK). We can use the weighted least squares (WLS) method to correct for HSK. Since HSK has no effect on the bias or consistency of the OLS estimators, and the WLS estimates are not very different from OLS, I will skip the correction for HSK in this article.

VIF stands for variance inflation factor. The general rule of thumb is that VIFs exceeding 4 warrant further investigation, while VIFs exceeding 10 are signs of serious multicollinearity requiring correction.

vif(reg_1)
rm       dis      rad      tax      ptratio  black    lstat
1.780895 1.559949 6.231127 6.610060 1.408964 1.314205 2.414288

The VIF test shows collinearity for rad and tax. To mitigate the issue, I ran a new regression removing rad. The coefficients do not change much, so I will not show them again to save some space.

There is no simple way to check the zero conditional mean (exogeneity) assumption. First, checking whether the mean of the residuals is zero is not the way to do it. As long as we include an intercept in the relationship, we can always assume that E(μ) = 0, since a nonzero mean for μ would be absorbed by the intercept term. You can read the math proof here. One way to check the assumption is to plot the residuals against row numbers, which are not associated with the dependent variable. The residuals should be randomly and symmetrically distributed around zero across row numbers no matter how we sort the rows, which indicates no correlation between consecutive errors. I also plotted the residuals against the independent variables to inspect whether there are obvious correlations.

Lastly, I think it's good practice to always think about and check for omitted variable bias. I want to test the idea of including the crime rate in the regression because I imagine that the crime rate should affect the housing price. I compare including crim in the reg_2 model with including log(crim) in the reg_3 model using the Davidson-MacKinnon test. The interpretation of the Davidson-MacKinnon test is to reject the reg_3 specification.

Reading the regression statistics, including crim improves the R-squared as expected, and the crim coefficient is significant. 
However, given the skewed distribution of crim and the seemingly non-linear relationship between crim and log(medv), I am not fully convinced that using crim as one of the predictors is necessary. To check further, I decide to split the data into train and test sets and run a simple out-of-sample test comparing reg_2 and reg_1. It turns out the original model reg_1, without crim, returns a smaller MSE on the test data, so I think it's safe to conclude that reg_1 is the correct model. Here is the full code for running reg_base and reg_1, comparing reg_2 vs. reg_3, and subsequently comparing reg_2 vs. reg_1.
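The full R code referred to above is not reproduced here. As a rough illustration only (not the author's R code), the main steps described in this article, fitting reg_1, running the Breusch-Pagan and VIF diagnostics, and comparing test-set MSE with and without crim, can be sketched in Python, assuming the Boston data has been loaded into a pandas DataFrame named boston with the columns listed earlier:

# Illustrative Python sketch only; the article's actual analysis is written in R.
# Assumes `boston` is a pandas DataFrame with the columns described earlier.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.outliers_influence import variance_inflation_factor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

predictors = ["rm", "dis", "rad", "tax", "ptratio", "black", "lstat"]
X = sm.add_constant(boston[predictors])
y = np.log(boston["medv"])

# reg_1: OLS of log(medv) on the selected predictors
reg_1 = sm.OLS(y, X).fit()

# Breusch-Pagan test: a small p-value points to heteroscedasticity
bp_stat, bp_pvalue, _, _ = het_breuschpagan(reg_1.resid, X)
print("Breusch-Pagan p-value:", bp_pvalue)

# Variance inflation factors (column 0 is the constant, so skip it)
for i, col in enumerate(X.columns[1:], start=1):
    print(col, variance_inflation_factor(X.values, i))

# Simple out-of-sample comparison: does adding crim help?
train, test = train_test_split(boston, test_size=0.3, random_state=42)

def test_mse(cols):
    model = LinearRegression().fit(train[cols], np.log(train["medv"]))
    preds = model.predict(test[cols])
    return mean_squared_error(np.log(test["medv"]), preds)

print("reg_1 test MSE:", test_mse(predictors))             # without crim
print("reg_2 test MSE:", test_mse(predictors + ["crim"]))  # with crim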
[ { "code": null, "e": 753, "s": 172, "text": "It seems that nowadays when everyone is so much into all kinds of fancy machine learning algorithms, few people still care to ask: what are the key assumptions required for the Ordinary Least Squares (OLS) regression? How can I test if my model satisfies these assumptions? However, as simple linear regression is arguably the most popular modeling approach across every field in social science, I think it is worthwhile to do a quick recap of the fundamental assumptions for OLS and run some tests through building a linear regression model using the classic Boston Housing data." }, { "code": null, "e": 874, "s": 753, "text": "The Gauss-Markov assumptions assure that the OLS regression coefficients are the Best Linear Unbiased Estimates or BLUE." }, { "code": null, "e": 1152, "s": 874, "text": "Linearity in parametersRandom sampling: the observed data represent a random sample from the populationNo perfect collinearity among covariatesZero conditional mean of error (i.e. E(μ|X) = 0) (also often referred as Exogeneity)Homoskedasticity (constant variance) of the errors" }, { "code": null, "e": 1176, "s": 1152, "text": "Linearity in parameters" }, { "code": null, "e": 1257, "s": 1176, "text": "Random sampling: the observed data represent a random sample from the population" }, { "code": null, "e": 1298, "s": 1257, "text": "No perfect collinearity among covariates" }, { "code": null, "e": 1383, "s": 1298, "text": "Zero conditional mean of error (i.e. E(μ|X) = 0) (also often referred as Exogeneity)" }, { "code": null, "e": 1434, "s": 1383, "text": "Homoskedasticity (constant variance) of the errors" }, { "code": null, "e": 1710, "s": 1434, "text": "It is important to note that OLS is unbiased (i.e. E(β*) = β) when assumptions 1–4 are satisfied. Heteroscedasticity has no effect on bias or consistency of OLS estimators, but it means OLS estimators are no longer BLUE and the OLS estimates of standard errors are incorrect." }, { "code": null, "e": 1789, "s": 1710, "text": "The Boston house prices dataset consists of 506 observations of 14 attributes:" }, { "code": null, "e": 2541, "s": 1789, "text": "crim: per capita crime rate by townzn: proportion of residential land zoned for lots over 25,000 sq.ftindus: proportion of non-retail business acres per town chas: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)nox: nitric oxides concentration (parts per 10 million)rm: average number of rooms per dwellingage: proportion of owner-occupied units built prior to 1940dis: weighted distances to five Boston employment centresrad: index of accessibility to radial highwaystax: full-value property-tax rate per USD 10,000ptratio: pupil-teacher ratio by townblack: 1000(B — 0.63)2 where B is the proportion of blacks by townlstat: percentage of the lower status of the populationmedv: median value of owner-occupied homes in USD 1000's" }, { "code": null, "e": 2886, "s": 2541, "text": "It is natural to ask the question what factors may affect and predict the housing prices by looking at the data. I start with some simple scatter plots to visually check the relationship between variables (see below). The set of charts below plot medv against crim, rm, age, lstat, and dis. It appears that the relationships do not seem linear." }, { "code": null, "e": 3103, "s": 2886, "text": "I also plot histograms to check the univariate distribution. And I decide to use log(medv) instead of medv as the dependent variable because the log transformation mitigates the skew. 
Similarly, I will use log(crim)." }, { "code": null, "e": 3505, "s": 3103, "text": "Even though my intuition and the exploratory analysis plots tell me that the percentage of the lower status of the population, number of rooms, crime rate, and distances to five Boston employment centers are probably the most important predictors. In this article, I’d like to start with running a lass model with all the variables as the baseline also as a way to select features. Here is the R code:" }, { "code": null, "e": 3721, "s": 3505, "text": "The lasso reg_base does not return any zero coefficients but I found the log(cim) and a few other variables are not significant. I excluded those insignificant variables in reg_1 model and here are the coefficients:" }, { "code": null, "e": 3991, "s": 3721, "text": "Quick interpretation: 3.3% decrease in the median housing price for 1 unit change (1 percent) increase in the percentage of the lower status of the population (lstat), and 10% increase in the housing price for 1 unit change in the number of rooms. It makes sense to me." }, { "code": null, "e": 4304, "s": 3991, "text": "Residuals vs Fitted: the equally spread residuals around a horizontal line without distinct patterns are a good indication of having the linear relationships. If there are clear trends in the residual plot, or the plot looks like a funnel, these are clear indicators that the given linear model is inappropriate." }, { "code": null, "e": 4427, "s": 4304, "text": "Normal Q-Q shows if residuals are normally distributed. It’s good if residuals are lined well on the straight dashed line." }, { "code": null, "e": 4593, "s": 4427, "text": "Scale-Location can be used to check the assumption of equal variance (homoscedasticity). It’s good if we see a horizontal line with equally (randomly) spread points." }, { "code": null, "e": 4803, "s": 4593, "text": "The residuals vs fitted plot show that the linearity assumption is more or less satisfied. The log transformation takes care of the non-linearity. However, the scale-location plot indicates heteroscedasticity." }, { "code": null, "e": 4938, "s": 4803, "text": "print(bptest(reg_1, data = Boston, studentize = TRUE))studentized Breusch-Pagan testdata: reg_1BP = 64.991, df = 7, p-value = 1.51e-11" }, { "code": null, "e": 5284, "s": 4938, "text": "Rejecting the null hypothesis of homoscedasticity of Breusch-Pagan test indicates heteroscedasticity (HSK). We can use weighted least squares method (WLS) to correct for HSK. Since HSK has no effect on bias or consistency of OLS estimators and the WLS estimates are not very different from OLS. I will skip the correction for HSK inthis article." }, { "code": null, "e": 5494, "s": 5284, "text": "VIF stands for variance inflation factors. The general rule of thumb is that VIFs exceeding 4 warrant further investigation, while VIFs exceeding 10 are signs of serious multicollinearity requiring correction." }, { "code": null, "e": 5601, "s": 5494, "text": "vif(reg_1)rm dis rad tax ptratio black lstat1.780895 1.559949 6.231127 6.610060 1.408964 1.314205 2.414288" }, { "code": null, "e": 5794, "s": 5601, "text": "The VIF test shows collinearity for rad and tax. To mitigate the issue. I ran a new regression removing rad. The coefficients do not change much. I will not show them again to save some space." }, { "code": null, "e": 6536, "s": 5794, "text": "There is no simple way to check this assumption. First, checking whether the mean of residuals is zero is not the way to do it. 
As long as we include an intercept in the relationship, we can always assume that E (μ) = 0, since a nonzero mean for μ could be absorbed by the intercept term. You can read the math proof here. One way to check it is to plot the residuals against row numbers that are not assigned associated with the dependent variable. The residuals should be randomly and symmetrically distributed around zero across row numbers no matter how we sort the rows, which indicates no correlation between consecutive errors. I also plotted the residuals again the independent variables to inspect if there are obvious correlations." }, { "code": null, "e": 6980, "s": 6536, "text": "Lastly, I think it’s a good practice to always think and check for the omitted variable bias. I want to test the idea of including the criminal rate in the regression because I imagine that the criminal rate should affect the housing price. I compare including crim in reg_2 mode, with including log(crim) in the reg_3 model using Davidson-MacKinnon test. The interpretation of the Davidson-MacKinnon test is to reject the reg_3 specification." }, { "code": null, "e": 7295, "s": 6980, "text": "Reading the regression statistics, including crim improves the R-squared as expected, and the crim coefficient is significant. However, given the skewed distribution of crim and seemingly non-linear relationship between crim and log(medv), I am not fully convinced using crim as one of the predictors is necessary." }, { "code": null, "e": 7586, "s": 7295, "text": "To further check it, I decide to split the data into train and test data and run a simple out-of-sample test comparing reg_2 and reg_1. It turns out the original model reg_1 without crim returns smaller MSE on the test data. So I think it’s safe to conclude that reg_1 is the correct model." } ]
Using For Loops in Python: Calculating Probabilities | by Michael Grogan | Towards Data Science
Loops are quite an important part of learning how to code in Python, and this is particularly true when it comes to implementing calculations across a large array of numbers. All too often, the temptation for statisticians and data scientists is to skip over the more mundane aspects of coding such as this, assuming that software engineers can simply reformat the code in the proper way. However, there are many situations where the person writing the code needs to understand both the statistics underlying the model and how to iterate the model output through loops; these two processes simply cannot be developed independently. Here is one example of how the use of for loops in Python can greatly enhance statistical analysis.

In conducting probability analysis, the two variables that account for the chance of an event happening are N (the number of observations) and λ (lambda, our hit rate, i.e. the chance of occurrence in a single interval). When we talk about a cumulative binomial probability, we mean that the greater the number of trials, the higher the overall probability of the event occurring at least once:

probability = 1 − ((1 − λ)^N)

For instance, the probability of rolling a number 6 on a fair die is 1/6. However, suppose that same die is rolled 10 times:

1 − ((1 − 0.1667)^10) = 0.8385

We see that the probability of rolling a number 6 at least once now increases to 83.85%. By the law of large numbers, the larger the number of trials, the larger the probability of the event happening at least once, even if the probability within a single trial is very low. So, let us generate cumulative binomial probabilities to demonstrate how the probability increases with the number of trials.

Here is a script that calculates the cumulative binomial probabilities without the use of loops.

import numpy as np
import pandas as pd

l = 0.02
m = 0.04
n = 0.06
p = np.arange(0, 100, 1)
h = 1 - l
j = 1 - m
k = 1 - n
q = 1 - (h**p)
r = 1 - (j**p)
s = 1 - (k**p)

l, m, and n represent three individual probabilities.
p represents the number of trials (up to 100).
q, r, and s represent the cumulative binomial probabilities, i.e. the increase in probability for every unit increase in the number of trials.

Here is a sample of the generated output:

>>> q
array([0., 0.02, 0.0396, 0.058808, 0.07763184, 0.0960792, 0.11415762, 0.13187447, 0.14923698, 0.16625224, ..., 0.8532841, 0.85621842, 0.85909405, 0.86191217, 0.86467392])
>>> r
array([0., 0.04, 0.0784, 0.115264, 0.15065344, 0.1846273, 0.21724221, 0.24855252, 0.27861042, 0.307466, ..., 0.97930968, 0.9801373, 0.9809318, 0.98169453, 0.98242675])
>>> s
array([0., 0.06, 0.1164, 0.169416, 0.21925104, 0.26609598, 0.31013022, 0.35152241, 0.39043106, 0.4270052, 0.46138489, 0.49370179, 0.52407969, 0.5526349, 0.57947681, ..., 0.99720008, 0.99736807, 0.99752599, 0.99767443, 0.99781396])

We see that for the probabilities q, r, and s, the cumulative probabilities increase at different rates for a given number of trials. That said, developing this model without using loops has a key disadvantage: the individual probabilities can only take on the values specified by the end user. What if we wish to iterate from 0.01 to 0.99 in succession?

This time, the model will be built using one individual probability variable that iterates through values 0.01 to 0.99, and the cumulative binomial probability will be calculated using 100 trials. 
import numpy as np
import pandas as pd

# List comprehension
probability = [x*0.01 for x in range(1, 100)]
probability = np.array(probability)
probability

h = 1 - probability
h

# Construct 2D array
result = 1 - h[:, np.newaxis] ** np.arange(1, 100)
result

The output of the generated probability variable is as follows:

>>> probability
array([0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, ... 0.96, 0.97, 0.98, 0.99])

Note that for the probability variable, it is necessary to use a list comprehension. This is because Python's range() function can only work with integers, not float values. More information is provided in the following Stack Overflow guide.

You will note, looking at the last two lines of the code, that a 2D array is constructed to calculate the cumulative binomial probabilities. When originally attempting to calculate these without a 2D array, the arrays were calculated, but the values were not in the desired order.

>>> for i in range(1,100,1):
>>>     print(1-(h**i))

[0.01 0.02]
[0.0199 0.0396]
...
[0.62653572 0.86191217]
[0.63027036 0.86467392]

Instead, we wish to have the arrays in the order [0.01, 0.0199, ..., 0.62653572, 0.63027036] and [0.02, 0.0396, ..., 0.86191217, 0.86467392]. As explained in the following Reddit thread, transposing the above will not be of any use since h is a one-dimensional array.

An alternative is to calculate a 2D array and then print it directly:

>>> result = 1-h[:, np.newaxis] ** np.arange(1,100)
>>> result
array([[0.01, 0.0199, 0.029701, ..., 0.62653572, 0.63027036],
       [0.02, 0.0396, 0.058808, ..., 0.86191217, 0.86467392],
       [0.03, 0.0591, 0.087327, ..., 0.94946061, 0.9509768],
       ...,
       [0.97, 0.9991, 0.999973, ..., 1., 1., 1.],
       [0.98, 0.9996, 0.999992, ..., 1., 1., 1.],
       [0.99, 0.9999, 0.999999, ..., 1., 1., 1.]])

As can be seen from the above, the cumulative binomial probabilities from 0.01 right up to 0.99 are calculated. Using for loops in this manner has allowed us to iterate from 0.01 to 0.99 automatically; attempting to do this manually would have been far too cumbersome and error-prone.

In this example, you have seen how to:

Calculate cumulative binomial probabilities in Python
Use for loops to iterate across a large range of values
Employ list comprehensions to work with a range of float values
Devise 2D arrays when unable to transpose values contained in a 1D array

Many thanks for your time, and any questions or feedback are greatly appreciated.

Disclaimer: This article is written on an “as is” basis and without warranty. It was written with the intention of providing an overview of data science concepts, and should not be interpreted as professional advice. The findings and interpretations in this article are those of the author and are not endorsed by or affiliated with any third-party mentioned in this article.
[ { "code": null, "e": 346, "s": 171, "text": "Loops are quite an important part of learning how to code in Python, and this is particularly true when it comes to implementing calculations across a large array of numbers." }, { "code": null, "e": 562, "s": 346, "text": "All too often, the temptation for statisticians and data scientists is to skip over the more mundane aspects of coding such as this — we assume that software engineers can simply reformat the code in the proper way." }, { "code": null, "e": 813, "s": 562, "text": "However, there are many situations where the person writing the code needs to understand both the statistics underlying the model as well as how to iterate the model output through loops — these two processes simply cannot be developed independently." }, { "code": null, "e": 913, "s": 813, "text": "Here is one example of how the use of for loops in Python can greatly enhance statistical analysis." }, { "code": null, "e": 1304, "s": 913, "text": "In conducting probability analysis, the two variables that take account of the chance of an event happening are N (number of observations) and λ (lambda — our hit rate/chance of occurrence in a single interval). When we talk about a cumulative binomial probability distribution, we mean to say that the greater the number of trials, the higher the overall probability of an event occurring." }, { "code": null, "e": 1334, "s": 1304, "text": "probability = 1 — ((1 — λ)^N)" }, { "code": null, "e": 1452, "s": 1334, "text": "For instance, the odds of rolling a number 6 on a fair die is 1/6. However, suppose that same die is rolled 10 times:" }, { "code": null, "e": 1481, "s": 1452, "text": "1 — ((1–0.1667)^10) = 0.8385" }, { "code": null, "e": 1556, "s": 1481, "text": "We see that the probability of rolling a number 6 now increases to 83.85%." }, { "code": null, "e": 1870, "s": 1556, "text": "Based on the law of large numbers, the larger the number of trials; the larger the probability of an event happening even if the probability within a single trial is very low. So, let us generate a cumulative binomial probability to demonstrate how probability increases given an increase in the number of trials." }, { "code": null, "e": 1967, "s": 1870, "text": "Here is a script that calculates the cumulative binomial probabilities without the use of loops." }, { "code": null, "e": 2114, "s": 1967, "text": "import numpy as npimport pandas as pdl = 0.02m = 0.04n = 0.06p=np.arange(0, 100, 1)h = 1 - lj = 1 - mk = 1 - nq = 1-(h**p)r = 1-(j**p)s = 1-(k**p)" }, { "code": null, "e": 2168, "s": 2114, "text": "l, m, and n represent three individual probabilities." }, { "code": null, "e": 2214, "s": 2168, "text": "p represents the number of trials (up to 100)" }, { "code": null, "e": 2356, "s": 2214, "text": "q, r, and s represent the cumulative binomial probabilities, i.e. 
the increase in probability for every unit increase in the number of trials" }, { "code": null, "e": 2398, "s": 2356, "text": "Here is a sample of the generated output:" }, { "code": null, "e": 2981, "s": 2398, "text": ">>> qarray([0., 0.02, 0.0396, 0.058808, 0.07763184, 0.0960792, 0.11415762, 0.13187447, 0.14923698, 0.16625224, ..., 0.8532841, 0.85621842, 0.85909405, 0.86191217, 0.86467392])>>> rarray([0., 0.04, 0.0784, 0.115264, 0.15065344, 0.1846273, 0.21724221, 0.24855252, 0.27861042, 0.307466, ..., 0.97930968, 0.9801373, 0.9809318, 0.98169453, 0.98242675])>>> sarray([0., 0.06, 0.1164, 0.169416, 0.21925104, 0.26609598, 0.31013022, 0.35152241, 0.39043106, 0.4270052, 0.46138489, 0.49370179, 0.52407969, 0.5526349, 0.57947681, ..., 0.99720008, 0.99736807, 0.99752599, 0.99767443, 0.99781396])" }, { "code": null, "e": 3116, "s": 2981, "text": "We see that for the probabilities q, r, and s - the cumulative probabilities increase at different rates for a given number of trials." }, { "code": null, "e": 3353, "s": 3116, "text": "That said, developing this model without using loops has a key disadvantage — namely that the individual probabilities can only take on the values as specified by the end user. What if we wish to iterate from 0.01 to 0.99 in succession?" }, { "code": null, "e": 3553, "s": 3353, "text": "This time, the model will be built by using one individual probability variable that iterates through values 0.01 to 0.99, and the cumulative binomial probability will be calculated using 100 trials." }, { "code": null, "e": 3790, "s": 3553, "text": "import numpy as npimport pandas as pd# List comprehensionprobability=[x*0.01 for x in range(1,100)]probability=np.array(probability)probabilityh = 1 - probabilityh# Construct 2D arrayresult = 1-h[:, np.newaxis] ** np.arange(1,100)result" }, { "code": null, "e": 3854, "s": 3790, "text": "The output of the generated probability variable is as follows:" }, { "code": null, "e": 3964, "s": 3854, "text": ">>> probabilityarray([0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, ... 0.96, 0.97, 0.98, 0.99])" }, { "code": null, "e": 4205, "s": 3964, "text": "Note that for the probability variable, it is necessary to use list comprehensions. This is because Python’s range() function can only work with integers, not float values. More information is provided at the following Stack Overflow guide." }, { "code": null, "e": 4499, "s": 4205, "text": "You will note when looking at the last two lines in the code that a 2D array is constructed to calculate the cumulative binomial probabilities. When originally attempting to calculate these in lieu of using a 2D array, the arrays were calculated — but the values were not in the desired order." }, { "code": null, "e": 4627, "s": 4499, "text": ">>> for i in range(1,100,1):>>> print(1-(h**i))[0.01 0.02] [0.0199 0.0396]...[0.62653572 0.86191217][0.63027036 0.86467392]" }, { "code": null, "e": 4769, "s": 4627, "text": "Instead, we wish to have the arrays in the order [0.01, 0.0199, ..., 0.62653572, 0.63027036] and [0.02, 0.0396, ..., 0.86191217, 0.86467392]." }, { "code": null, "e": 4895, "s": 4769, "text": "As explained in the following Reddit thread, transposing the above will not be of any use since h is a one-dimensional array." 
}, { "code": null, "e": 4965, "s": 4895, "text": "An alternative is to calculate a 2D array and then print it directly:" }, { "code": null, "e": 5331, "s": 4965, "text": ">>> result = 1-h[:, np.newaxis] ** np.arange(1,100)>>> resultarray([[0.01, 0.0199, 0.029701, ..., 0.62653572, 0.63027036], [0.02, 0.0396, 0.058808, ..., 0.86191217, 0.86467392], [0.03, 0.0591, 0.087327, ..., 0.94946061, 0.9509768], ..., [0.97, 0.9991, 0.999973, ..., 1., 1., 1.], [0.98, 0.9996, 0.999992, ..., 1., 1., 1.], [0.99, 0.9999, 0.999999, ..., 1., 1.,1.]])" }, { "code": null, "e": 5442, "s": 5331, "text": "As can be seen from the above, the cumulative binomial probabilities from 0.01 right up to 0.99 is calculated." }, { "code": null, "e": 5616, "s": 5442, "text": "Using for loops in this manner has allowed us to iterate from 0.01 to 0.99 automatically — attempting to do this manually would have been far too cumbersome and error-prone." }, { "code": null, "e": 5655, "s": 5616, "text": "In this example, you have seen how to:" }, { "code": null, "e": 5711, "s": 5655, "text": "Calculative cumulative binomial probabilities in Python" }, { "code": null, "e": 5767, "s": 5711, "text": "Use for loops to iterate across a large range of values" }, { "code": null, "e": 5831, "s": 5767, "text": "Employ list comprehensions to work with a range of float values" }, { "code": null, "e": 5904, "s": 5831, "text": "Devise 2D arrays when unable to transpose values contained in a 1D array" }, { "code": null, "e": 5986, "s": 5904, "text": "Many thanks for your time, and any questions or feedback are greatly appreciated." } ]
Python For Loops
A for loop is used for iterating over a sequence (that is either a list, a tuple, a dictionary, a set, or a string).

This is less like the for keyword in other programming languages, and works more like an iterator method as found in other object-oriented programming languages.

With the for loop we can execute a set of statements, once for each item in a list, tuple, set, etc.

Print each fruit in a fruit list:

The for loop does not require an indexing variable to set beforehand.

Even strings are iterable objects; they contain a sequence of characters:

Loop through the letters in the word "banana":

With the break statement we can stop the loop before it has looped through all the items:

Exit the loop when x is "banana":

Exit the loop when x is "banana", but this time the break comes before the print:

With the continue statement we can stop the current iteration of the loop, and continue with the next:

Do not print banana:

The range() function returns a sequence of numbers, starting from 0 by default, incrementing by 1 (by default), and ending at a specified number.

Using the range() function:

Note that range(6) is not the values of 0 to 6, but the values 0 to 5.

The range() function defaults to 0 as a starting value, however it is possible to specify the starting value by adding a parameter: range(2, 6), which means values from 2 to 6 (but not including 6):

Using the start parameter:

The range() function defaults to increment the sequence by 1, however it is possible to specify the increment value by adding a third parameter: range(2, 30, 3):

Increment the sequence with 3 (default is 1):

The else keyword in a for loop specifies a block of code to be executed when the loop is finished:

Print all numbers from 0 to 5, and print a message when the loop has ended:

Note: The else block will NOT be executed if the loop is stopped by a break statement.

Break the loop when x is 3, and see what happens with the else block:

A nested loop is a loop inside a loop. The "inner loop" will be executed one time for each iteration of the "outer loop":

Print each adjective for every fruit:

for loops cannot be empty, but if you for some reason have a for loop with no content, put in the pass statement to avoid getting an error.

Loop through the items in the fruits list:

fruits = ["apple", "banana", "cherry"]
for x in fruits:
    print(x)
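The example snippets that accompany the captions above are not present in this extracted text, so the following consolidated sketch is added to illustrate the constructs mentioned (basic for loops, break, continue, range(), for-else, nested loops, and pass). It is a minimal illustration; the list values mirror the ones mentioned above and the printed messages are placeholders.

fruits = ["apple", "banana", "cherry"]

# Print each fruit in the fruit list
for x in fruits:
    print(x)

# Loop through the letters in the word "banana"
for x in "banana":
    print(x)

# Exit the loop when x is "banana"
for x in fruits:
    if x == "banana":
        break
    print(x)

# With continue, skip "banana" but keep looping
for x in fruits:
    if x == "banana":
        continue
    print(x)

# range() with the default start, with a start parameter, and with a step of 3
for x in range(6):
    print(x)
for x in range(2, 6):
    print(x)
for x in range(2, 30, 3):
    print(x)

# else runs when the loop finishes without hitting a break
for x in range(6):
    print(x)
else:
    print("Finally finished!")

# Nested loop: print each adjective for every fruit
adjectives = ["red", "big", "tasty"]
for adj in adjectives:
    for fruit in fruits:
        print(adj, fruit)

# A for loop with no content needs pass to avoid an error
for x in [0, 1, 2]:
    pass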
Common Time Series Data Analysis Methods and Forecasting Models in Python | by Yuefeng Zhang, PhD | Towards Data Science
A time series is a sequence of data samples taken in time order with equal time intervals. Time series include many kinds of real experimental data taken from various domains such as finance, medicine, scientific research (e.g., global warming, speech analysis, earthquakes), etc. [1][2]. Time series forecasting has many real applications in various areas such as forecasting of business (e.g., sales, stock), weather, disease, and others [1].

Given a traditional (time order independent) dataset for supervised machine learning for prediction, data exploration and preprocessing are required before feature engineering can be performed, and the feature engineering needs to be done before a machine learning model can be chosen and applied to the engineered features for prediction.

Similarly, given a time series dataset, data exploration and preprocessing are required before the time series data can be analyzed, and time series data analysis is required before a time series forecasting model can be chosen and applied to the analyzed dataset for forecasting.

In this article, I use a global warming dataset from Kaggle [2] to demonstrate some of the common time series data preprocessing/analysis methods and time series forecasting models in Python. The demonstration consists of the following:

Time series data preprocessing
Time series data analysis
Time series forecasting

As described before, for time series data, data preprocessing is required before data analysis can be performed.

The first step towards data preprocessing is to load data from a csv file.

Time order plays a critical role in time series data analysis and forecasting. In particular, each data sample in a time series must be associated with a unique point in time. This can be achieved in a Pandas DataFrame/Series by using values of DatetimeIndex type as its index values.

Once the earth surface temperature dataset in Kaggle [2] has been downloaded onto a local machine, the dataset csv file can be loaded into a Pandas DataFrame as follows:

df_raw = pd.read_csv('./data/GlobalTemperatures.csv', parse_dates=['dt'], index_col='dt')
df_raw.head()

The option parse_dates is to tell Pandas to parse the string values in the dt column into Python datetime values, while the option index_col is to tell Pandas to convert the parsed values of the dt column into DatetimeIndex type and then use them as indices.

For simplicity, I extract the LandAverageTemperature column as a Pandas Series for demonstration purposes in this article:

df = df_raw['LandAverageTemperature']

Similarly to a traditional dataset, missing data frequently occurs in time series data, which must be handled before the data can be further preprocessed and analyzed.

The following code can check how many data entries are missing:

df.isnull().value_counts()

There are 12 missing data entries in the earth surface temperature time series. Those missing values cannot simply be removed or set to zero without breaking the past time dependency. There are multiple ways of handling missing data in time series appropriately [3]:

Forward fill
Backward fill
Linear interpolation
Quadratic interpolation
Mean of nearest neighbors
Mean of seasonal counterparts

I used forward fill to fill up the missing data entries in this article (the other options listed above are sketched briefly a little further below):

df = df.ffill()

Once the data preprocessing is done, the next step is to analyze data.

As a common practice [1][3][4], the first step towards time series data analysis is to visualize the data.
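As referenced above, forward fill is only one of the listed treatments. The short sketch below shows how the other options could be applied to the same df Series with standard pandas calls; it is an illustration rather than part of the article's pipeline, and the simple two-sided average is just one possible stand-in for a "mean of nearest neighbors" approach.

# backward fill
df_bfill = df.bfill()
# linear interpolation
df_linear = df.interpolate(method='linear')
# quadratic interpolation
df_quadratic = df.interpolate(method='quadratic')
# a simple stand-in for "mean of nearest neighbors":
# average the previous and next observed values
df_neighbors = (df.ffill() + df.bfill()) / 2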
The code below uses the Pandas DataFrame/Series built-in plot method to plot the earth surface temperature time series:

ax = df.plot(figsize=(16,5), title='Earth Surface Temperature')
ax.set_xlabel("Date")
ax.set_ylabel("Temperature")

The above plot shows that the average temperature of the earth's surface is around the range of [5, 12] and the overall trend is increasing slowly. No other obvious patterns show up in the plot due to the mixture of different time series components such as base level, trend, seasonality, and other components such as error and random noise [1][3]. The time series can be decomposed into individual components for further analysis.

To decompose a time series into components for further analysis, the time series can be modeled as an additive or multiplicative combination of base level, trend, seasonality, and error (including random noise) [3]:

Additive time series: value = base level + trend + seasonality + error

Multiplicative time series: value = base level x trend x seasonality x error

The earth surface temperature time series is modeled as an additive time series in this article:

from statsmodels.tsa.seasonal import seasonal_decompose
additive = seasonal_decompose(df, model='additive', extrapolate_trend='freq')

The option extrapolate_trend='freq' is to handle any missing values in the trend and residuals at the beginning of the time series [3].

In theory, the same dataset can be easily modeled as a multiplicative time series by replacing the option model='additive' with model='multiplicative'. However, the multiplicative model cannot be applied to this particular dataset because the dataset contains zero and/or negative values, which are not allowed in multiplicative seasonality decomposition.

The resulting components of the additive decomposition can be extracted to form a Pandas DataFrame as follows:

additive_df = pd.concat([additive.seasonal, additive.trend, additive.resid, additive.observed], axis=1)
additive_df.columns = ['seasonal', 'trend', 'resid', 'actual_values']
additive_df.head()

The code below is to visualize the additive decomposed components: trend, seasonal, and residual (i.e., base level + error).

plt.rcParams.update({'figure.figsize': (10,10)})
additive.plot().suptitle('Additive Decompose')

For the earth surface temperature time series data, we are most interested in its long-term trend, which can be extracted as follows:

trend = additive.trend

Once the data preprocessing and analysis are done, time series forecasting can begin.

This section presents the results of applying two common time series forecasting models to the earth surface temperature trend data:

ARIMA (AutoRegressive Integrated Moving Average)
LSTM (Long Short-Term Memory)

An ARIMA model [1][4] is determined by three parameters:

p: the autoregressive order
d: the order of differencing to make the time series stationary
q: the moving average order

An ARIMA model consists of three parts [4]: AutoRegression (AR), Moving Average (MA), and a constant:

ARIMA = constant + AR + MA

where

AR = a linear combination of p consecutive values in the past time points (i.e., lags)

MA = a linear combination of q consecutive forecast errors in the past time points (i.e., lagged forecast errors)

Both AR and MA can only be applied to a stationary time series, which is achieved by differencing in ARIMA.
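To make the three parts concrete, a worked form of the model is added here for illustration; it follows the standard ARIMA formulation rather than anything spelled out in the original text. With p = 1, d = 1, q = 1 (the order that will be selected below), and writing y'(t) = y(t) - y(t-1) for the once-differenced series, the model is

y'(t) = c + phi1 * y'(t-1) + theta1 * e(t-1) + e(t)

where c is the constant, phi1 is the autoregressive (AR) coefficient, theta1 is the moving average (MA) coefficient, and e(t) is the forecast error at time point t.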
3.1.1 Determining the order of differencing d

A time series is (weakly) stationary if its mean is constant (independent of time) and its autocovariance function between two different time points s and t of the time series only depends on the time interval |s - t| (i.e., lag), not a specific time point [1].

Time series forecasting works only for a stationary time series since only the behavior of a stationary time series is predictable.

We can use the ADF test (Augmented Dickey Fuller test) [4] to check whether or not a time series is stationary. As an example, the code below is to check the earth surface temperature time series for stationarity:

from statsmodels.tsa.stattools import adfuller

result = adfuller(trend.values)
print('ADF Statistic: %f' % result[0])
print('p-value: %f' % result[1])

The p-value of the test is 0.012992.

The default null hypothesis of the ADF test is that the time series is non-stationary. Since the p-value of the ADF test above is less than the significance level of 0.05, we reject the null hypothesis and conclude that the time series is stationary (only trend stationary in this case).

Generally speaking, the following need to be done to make a time series stationary:

remove non-regular behaviors that can change mean and/or covariance over time
remove regular behaviors such as trend and seasonality that can change mean and/or covariance over time

Differencing is a popular data transform method for removing non-stationary behaviors (especially trend).

The following code is to perform 1st and 2nd order of differencing against the earth surface temperature time series:

from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

# Original Series
fig, axes = plt.subplots(3, 2, sharex=True)
axes[0, 0].plot(trend.values); axes[0, 0].set_title('Original Series')
plot_acf(trend.values, ax=axes[0, 1]).suptitle('Original Series', fontsize=0)

# 1st Differencing
diff1 = trend.diff().dropna()
axes[1, 0].plot(diff1.values)
axes[1, 0].set_title('1st Order Differencing')
plot_acf(diff1.values, ax=axes[1, 1]).suptitle('1st Order Differencing', fontsize=0)

# 2nd Differencing
diff2 = trend.diff().diff().dropna()
axes[2, 0].plot(diff2.values)
axes[2, 0].set_title('2nd Order Differencing')
plot_acf(diff2.values, ax=axes[2, 1]).suptitle('2nd Order Differencing', fontsize=0)

The figure below shows that 1st order of differencing is enough to remove the trend. 2nd order of differencing does not make any improvement. Thus the order of differencing d is set to 1 in this article.

3.1.2 Determining the autoregressive order p

The autoregressive order p can be determined by analyzing the results of PACF (Partial Autocorrelation Function) on the 1st order of differencing of the time series data [1][4]:

plt.rcParams.update({'figure.figsize':(9,3), 'figure.dpi':120})
size = 100
fig, axes = plt.subplots(1, 2, sharex=True)
axes[0].plot(diff1.values[:size])
axes[0].set_title('1st Order Differencing')
axes[1].set(ylim=(0,5))
plot_pacf(diff1.values[:size], lags=50, ax=axes[1]).suptitle('1st Order Differencing', fontsize=0)

We can observe that the PACF lag 1 is well above the significance line (gray area). Thus the autoregressive order p is set to 1 in this article.
3.1.3 Determining the moving average order q

The moving average order q can be determined by analyzing the results of ACF (Autocorrelation Function) on the 1st order of differencing of the time series data [1][4]:

plt.rcParams.update({'figure.figsize':(9,3), 'figure.dpi':120})
size = 100
fig, axes = plt.subplots(1, 2, sharex=True)
axes[0].plot(diff1.values[:size])
axes[0].set_title('1st Order Differencing')
axes[1].set(ylim=(0,1.2))
plot_acf(diff1.values[:size], lags=50, ax=axes[1]).suptitle('1st Order Differencing', fontsize=0)

We can observe that the ACF lag 1 is well above the significance line (gray area). Thus the moving average order q is set to 1 as well in this article.

3.1.4 Training ARIMA model

The following code divides the earth surface temperature trend time series into training and testing sub-series first and then uses the training data to train an ARIMA model with the determined values of p = 1, d = 1, q = 1.

A traditional dataset is typically randomly divided into training and testing subsets. However, this does not work for time series because it breaks the sequential time dependency. To avoid this problem, the temperature trend time series data is divided by keeping its original sequential order.

from statsmodels.tsa.arima_model import ARIMA

train = trend[:3000]
test = trend[3000:]

# order = (p=1, d=1, q=1)
model = ARIMA(train, order=(1, 1, 1))
model = model.fit(disp=0)
print(model.summary())

It can be seen from the above model training results that the P-values of the AR1 and MA1 terms in the P>|z| column are highly significant (<< 0.05). This indicates that the choices of p = 1 and q = 1 are appropriate.

The code below is to plot the residuals.

# Plot residual errors
residuals = pd.DataFrame(model.resid)
fig, ax = plt.subplots(1,2)
residuals.plot(title="Residuals", ax=ax[0])
residuals.plot(kind='kde', title='Density', ax=ax[1])

The plot of the residuals shows no patterns (i.e., with constant mean and variance) except for the first 20% of the time series. This indicates that the trained ARIMA model behaves appropriately.

3.1.5 Forecasting using the trained ARIMA model

The code below is to use the trained ARIMA model to forecast 192 (this can be any positive integer) temperature values and then compare them with the testing time series:

# Forecast: 192 forecasting values with 95% confidence
fc, se, conf = model.forecast(192, alpha=0.05)

# Make as pandas series
fc_series = pd.Series(fc, index=test.index)
lower_series = pd.Series(conf[:, 0], index=test.index)
upper_series = pd.Series(conf[:, 1], index=test.index)

# Plot
plt.figure(figsize=(12,5), dpi=100)
plt.plot(train, label='training')
plt.plot(test, label='actual')
plt.plot(fc_series, label='forecast')
plt.fill_between(lower_series.index, lower_series, upper_series, color='k', alpha=.15)
plt.title('Forecast vs Actuals')
plt.legend(loc='upper left', fontsize=8)

The forecasting results above show that the trained ARIMA model tends to forecast temperatures below the actual ones.

This section presents the results of applying the well-known LSTM model to the earth surface temperature trend time series.

3.2.1 Preparing dataset

Similarly to [6], the following code is to generate pairs of feature vector (a sequence of temperature values in the past time points) and label (target temperature at the current time point) from the temperature time series for LSTM model training and evaluation.

from numpy import array
from keras.models import Sequential
from keras.layers import LSTM
from keras.layers import Dense

def split_sequence(sequence, n_steps):
    X, y = list(), list()
    for i in range(len(sequence)):
        # find the end of this pattern
        end_ix = i + n_steps
        # check if we are beyond the sequence
        if end_ix > len(sequence)-1:
            break
        # gather input and output parts of the pattern
        seq_x, seq_y = sequence[i:end_ix], sequence[end_ix]
        X.append(seq_x)
        y.append(seq_y)
    return array(X), array(y)

# define input sequence
raw_seq = trend.tolist()
# choose a number of time steps
n_steps = 12
# split into samples
X, y = split_sequence(raw_seq, n_steps)

For simplicity, I used the temperatures in the past 12 months to predict the temperature in the next month in this article. The following are two samples of the generated dataset:

The generated dataset is divided into two parts: the first 3,000 samples for model training and the rest of the dataset for model testing:

X_train = X[:3000]
y_train = y[:3000]
X_test = X[3000:]
y_test = y[3000:]

3.2.2 Selecting LSTM model

The following LSTM model [6] takes a sequence of temperature values as input and generates one target temperature as output. Since temperature forecasting is a regression problem, the output of the LSTM model can take any value and thus there is no associated activation function.

n_features = 1
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], n_features))

# define model
model = Sequential()
model.add(LSTM(50, activation='relu', input_shape=(n_steps, n_features)))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')

# fit model
model.fit(X_train, y_train, epochs=200, verbose=1)

3.2.3 Training LSTM model

The following are the results of model training:

3.2.4 Forecasting using the trained LSTM model

Once the model training is done, the trained LSTM model can then be applied to the testing time series to forecast temperatures:

X_test = X_test.reshape((X_test.shape[0], X_test.shape[1], n_features))
y_pred = model.predict(X_test, verbose=0)

The code below plots the predicted temperatures against the actual temperatures in the testing time series:

def plot_forecasting(df1, df2, line_stype1='b-', line_stype2='r--', title="", xlabel='Date', ylabel='Temperature', dpi=100):
    plt.figure(figsize=(16,5), dpi=dpi)
    plt.plot(df1.index, df1, line_stype1, label='actual')
    plt.plot(df2.index, df2, line_stype2, label='forecast')
    plt.gca().set(title=title, xlabel=xlabel, ylabel=ylabel)
    plt.title('Forecast vs Actuals')
    plt.legend(loc='upper left', fontsize=8)
    plt.show()

y_pred_1 = y_pred.reshape((y_pred.shape[0]))
y_pred_series = pd.Series(y_pred_1)
y_test_1 = y_test.reshape((y_test.shape[0]))
y_test_series = pd.Series(y_test_1)
plot_forecasting(y_test_series, y_pred_series, title='Land Average Temperature')

The plot of the predicted temperatures against the entire temperature trend time series is done as follows:

X_all = X.reshape((X.shape[0], X.shape[1], n_features))
y_pred_all = model.predict(X_all, verbose=0)
y_pred_all_1 = y_pred_all.reshape((y_pred_all.shape[0]))
y_pred_all_series = pd.Series(y_pred_all_1)
y_all = y.reshape((y.shape[0]))
y_all_series = pd.Series(y_all)
plot_forecasting(y_all_series, y_pred_all_series, title='Land Average Temperature')

The forecasting results above show that the forecasted temperatures followed the actual temperatures closely.

In this article, I used a global warming dataset from Kaggle [2] to demonstrate some of the common time series data preprocessing/analysis practices and two widely adopted time series forecasting models, ARIMA and LSTM, in Python.

As can be seen in Section 3, the performance of the ARIMA model relies heavily on data preprocessing and analysis to make a time series stationary, while LSTM can work on a time series with minimal data preprocessing and analysis (e.g., there is no need to remove the trend by differencing in the LSTM model).

All of the source code used in this article is available in Github [7].

[1] R. Shumway and D. Stoffer, Time Series Analysis and Its Applications, Springer, 4th Edition, 2017
[2] Climate Change: Earth Surface Temperature Data
[3] S. Prabhakaran, Time Series Analysis in Python — A Comprehensive Guide with Examples
[4] S. Prabhakaran, ARIMA Model — Complete Guide to Time Series Forecasting in Python
[5] J. Brownlee, How to Remove Trends and Seasonality with a Difference Transform in Python
[6] J. Brownlee, How to Develop LSTM Models for Time Series Forecasting
[7] Y. Zhang, Jupyter notebook in Github
Building a multi-functionality Voice Assistant in 10 minutes | by Sakshi Butala | Towards Data Science
Nowadays people don't have time to manually search the internet for information or the answers to their questions; rather, they expect someone to do it for them, just like a personal assistant who listens to the commands provided and acts accordingly. Thanks to Artificial Intelligence, this personal assistant can now be available to everyone in the form of a voice assistant, which is much faster and more reliable than a human. Such an assistant is even capable of accomplishing difficult tasks like placing an order online, playing music, or turning on the lights, just by listening to the user's command.

In this article, I am going to show you how to build a voice assistant that responds to basic user queries. After reading this article you will get a basic idea about what web scraping is and how it can be used to build a voice assistant.

Note: You will need to have a basic understanding of the Python language to follow this article.

1. Voice Assistant
2. Web Scraping
3. Implementation

Voice Assistant is a software agent that can perform tasks or services for an individual based on commands or questions. In general, voice assistants react to voice commands and give the user relevant information about his/her queries. The assistant can understand and react to specific commands given by the user, like playing a song on YouTube or knowing about the weather. It will search and/or scrape the web to find the response to the command in order to satisfy the user. Presently voice assistants are already able to process orders of products, answer questions, and perform actions like playing music or starting a simple phone call with a friend.

The implemented voice assistant can perform the following tasks:

Provide weather details
Provide corona updates
Provide latest news
Search the meaning of a word
Take notes
Play YouTube videos
Show location on Google Maps
Open websites on Google Chrome

When building a voice assistant, there are two important libraries that you should consider.

Python's SpeechRecognition package helps the voice assistant understand the user. It can be implemented as follows:

import speech_recognition as sr

# initialises the recognizer
r1 = sr.Recognizer()

# uses the microphone to take the input
with sr.Microphone() as source:
    print('Listening..')
    # listens to the user
    audio = r1.listen(source)
    # recognises the audio and converts it to text
    audio = r1.recognize_google(audio)

Python's pyttsx3 package helps the voice assistant respond to the user by converting the text to audio. It can be implemented as follows:

import pyttsx3

# initialises pyttsx3
engine = pyttsx3.init('sapi5')

# converts the text to audio
engine.say('Hello World')
engine.runAndWait()

Web Scraping refers to the extraction of data from websites. The data on the websites is unstructured. Web scraping helps collect this unstructured data and store it in a structured form. Some applications of web scraping include:

Scraping Social Media such as Twitter to collect tweets and comments for performing sentiment analysis.

Scraping E-Commerce Websites such as Amazon to extract product information for data analysis and predicting market trends.

Scraping E-Mail Addresses to collect email IDs and then send bulk emails for marketing and advertisement purposes.

Scraping Google Images to create datasets for training a Machine Learning or Deep Learning model.
Although it can be done manually, Python’s library Beautiful Soup makes it easier and faster to scrape the data. To extract data using web scraping with python, you need to follow these basic steps: Find the URL of the website that you want to scrapeExtract the entire code of the websiteInspect the website and find the data you want to extractFilter the code using html tags to get the desired dataStore the data in the required format Find the URL of the website that you want to scrape Extract the entire code of the website Inspect the website and find the data you want to extract Filter the code using html tags to get the desired data Store the data in the required format Let us start by importing the following libraries in you python notebook as shown below: import requests from bs4 import BeautifulSoup import reimport speech_recognition as sr from datetime import dateimport webbrowserimport pyttsx3 Now let us create our main function which consists of a bunch of if-else statements which tell the assistant how to respond under certain conditions. engine = pyttsx3.init('sapi5')r1 = sr.Recognizer()with sr.Microphone() as source: print('Listening..') engine.say('Listening') engine.runAndWait() audio = r1.listen(source) audio = r1.recognize_google(audio) if 'weather' in audio: print('..') words = audio.split(' ') print(words[-1]) scrape_weather(words[-1]) elif 'covid' in audio: print('..') words = audio.split(' ') corona_updates(words[-1]) elif 'meaning' in audio: print('..') words = audio.split(' ') print(words[-1]) scrape_meaning(words[-1]) elif 'take notes' in audio: print('..') take_notes() print('Noted!!') elif 'show notes' in audio: print('..') show_notes() print('Done') elif 'news' in audio: print('..') scrape_news() elif 'play' in audio: print('..') words = audio.split(' ') print(words[-1]) play_youtube(audio) elif 'open' in audio: print('..') words = audio.split('open') print(words[-1]) link = str(words[-1]) link = re.sub(' ', '', link) engine.say('Opening') engine.say(link) engine.runAndWait() link = f'https://{link}.com' print(link) webbrowser.open(link) elif 'where is' in audio: print('..') words = audio.split('where is') print(words[-1]) link = str(words[-1]) link = re.sub(' ', '', link) engine.say('Locating') engine.say(link) engine.runAndWait() link = f'https://www.google.co.in/maps/place/{link}' print(link) webbrowser.open(link) else: print(audio) print('Sorry, I do not understand that!') engine.say('Sorry, I do not understand that!') engine.runAndWait() Case 1: If the user wants to know about the Weather, he/she can ask the assistant “Hey! What is the weather today in Mumbai?” Since the word “weather” is present in the audio, the function scrape_weather(words[-1]) will be called with the parameter as “Mumbai”. Let us take a look at this function. 
def scrape_weather(city): url = 'https://www.google.com/search?q=accuweather+' + city page = requests.get(url)soup = BeautifulSoup(page.text, 'lxml')links = [a['href']for a in soup.findAll('a')] link = str(links[16]) link = link.split('=') link = str(link[1]).split('&') link = link[0] page = requests.get(link, headers={'User-Agent': 'Mozilla/5.0'})soup = BeautifulSoup(page.content, 'lxml') time = soup.find('p', attrs = {'class': 'cur-con-weather-card__subtitle'}) time = re.sub('\n', '', time.text) time = re.sub('\t', '', time) time = 'Time: ' + timetemperature = soup.find('div', attrs = {'class':'temp'}) temperature = 'Temperature: ' + temperature.text realfeel = soup.find('div', attrs = {'class': 'real-feel'}) realfeel = re.sub('\n', '',realfeel.text) realfeel = re.sub('\t', '',realfeel) realfeel = 'RealFeel: ' + realfeel[-3:] + 'C'climate = soup.find('span', attrs = {'class':'phrase'}) climate = "Climate: " + climate.text info = 'For more information visit: ' + link print('The weather for today is: ') print(time) print(temperature) print(realfeel) print(climate) print(info) engine.say('The weather for today is: ') engine.say(time) engine.say(temperature) engine.say(realfeel) engine.say(climate) engine.say('For more information visit accuweather.com' ) engine.runAndWait() We shall use the website “accuweather.com ” to scrape all the weather related information. The function request.get(url) sends a GET request to the URL whose entire HTML code is extracted using BeautifulSoup(page.text, ‘lxml’) Once the code is extracted, we shall inspect the code to find the data of interest. For example, the numeric value of temperature present in the following format <div class = "temp">26°</div> can be extracted using soup.find('div', attrs = {'class':'temp'}) Similarly we extract the time, the real feel and the climate, and using engine.say(), we make the assistant respond back to the user. Case 2: If the user wants current COVID-19 updates, he/she can ask the assistant “Hey! Can you give me the COVID updates of India?” or “Hey! Can you give me the COVID updates of the World?” Since the word “covid” is present in the audio, the function corona_updates(words[-1]) will be called with the parameter as “India” or “World” Let us take a look at this function. 
def corona_updates(audio):
    audio = audio
    url = 'https://www.worldometers.info/coronavirus/'
    page = requests.get(url)
    soup = BeautifulSoup(page.content, 'lxml')
    totalcases = soup.findAll('div', attrs = {'class': 'maincounter-number'})
    total_cases = []
    for total in totalcases:
        total_cases.append(total.find('span').text)
    world_total = 'Total Coronavirus Cases: ' + total_cases[0]
    world_deaths = 'Total Deaths: ' + total_cases[1]
    world_recovered = 'Total Recovered: ' + total_cases[2]
    info = 'For more information visit: ' + 'https://www.worldometers.info/coronavirus/#countries'
    if 'world' in audio:
        print('World Updates: ')
        print(world_total)
        print(world_deaths)
        print(world_recovered)
        print(info)
    else:
        country = audio
        url = 'https://www.worldometers.info/coronavirus/country/' + country.lower() + '/'
        page = requests.get(url)
        soup = BeautifulSoup(page.content, 'lxml')
        totalcases = soup.findAll('div', attrs = {'class': 'maincounter-number'})
        total_cases = []
        for total in totalcases:
            total_cases.append(total.find('span').text)
        total = 'Total Coronavirus Cases: ' + total_cases[0]
        deaths = 'Total Deaths: ' + total_cases[1]
        recovered = 'Total Recovered: ' + total_cases[2]
        info = 'For more information visit: ' + url
        updates = country + ' Updates: '
        print(updates)
        print(total)
        print(deaths)
        print(recovered)
        print(info)
        engine.say(updates)
        engine.say(total)
        engine.say(deaths)
        engine.say(recovered)
        engine.say('For more information visit: worldometers.info')
        engine.runAndWait()

We shall use the website “worldometers.info” to scrape all the corona related information. The function requests.get(url) sends a GET request to the URL, whose entire HTML code is extracted using BeautifulSoup(page.content, 'lxml').

Once the code is extracted, we shall inspect it to find the numerical values of the total corona cases, total recovered and total deaths. These values are present inside a span of a div having class “maincounter-number” as shown below.

<div id="maincounter-wrap" style="margin-top:15px"><h1>Coronavirus Cases:</h1><div class="maincounter-number"><span style="color:#aaa">25,091,068 </span></div></div>

These can be extracted as follows.

totalcases = soup.findAll('div', attrs = {'class': 'maincounter-number'})
total_cases = []
for total in totalcases:
    total_cases.append(total.find('span').text)

We first find all the div elements having class “maincounter-number”. Then we iterate through each div to obtain the span containing the numerical value.

Case 3: If the user wants to know about the News, he/she can ask the assistant “Hey! Can you give me the news updates?” Since the word “news” is present in the audio, the function scrape_news() will be called.

def scrape_news():
    url = 'https://news.google.com/topstories?hl=en-IN&gl=IN&ceid=IN:en'
    page = requests.get(url)
    soup = BeautifulSoup(page.content, 'html.parser')
    news = soup.findAll('h3', attrs = {'class': 'ipQwMb ekueJc RD0gLb'})
    for n in news:
        print(n.text)
        print('\n')
        engine.say(n.text)
    print('For more information visit: ', url)
    engine.say('For more information visit google news')
    engine.runAndWait()

We shall use “Google News” to scrape the headlines of news. The function requests.get(url) sends a GET request to the URL, whose entire HTML code is extracted using BeautifulSoup(page.content, 'html.parser').

Once the code is extracted, we shall inspect it to find the headlines of the latest news. Each headline is the link text of the anchor inside an h3 tag having class “ipQwMb ekueJc RD0gLb”, as shown below.
<h3 class="ipQwMb ekueJc RD0gLb"><a href="./articles/CAIiEA0DEuHOMc9oauy44TAAZmAqFggEKg4IACoGCAoww7k_MMevCDDW4AE?hl=en-IN&amp;gl=IN&amp;ceid=IN%3Aen" class="DY5T1d">Rhea Chakraborty arrest: Kubbra Sait reminds ‘still not a murderer’, Rhea Kapoor says ‘we settled on...</a></h3> We first find all the h3 elements having class “ipQwMb ekueJc RD0gLb”. Then we iterate through each element to obtain the text (news headline) present inside the href attribute. Case 4: If the user wants to know the Meaning of any word, he/she can ask the assistant “Hey! What is the meaning of scraping?” Since the word “meaning” is present in the audio, the function scrape_meaning(words[-1]) will be called with the parameter as “scraping” Let us take a look at this function. def scrape_meaning(audio): word = audio url = 'https://www.dictionary.com/browse/' + word page = requests.get(url) soup = BeautifulSoup(page.content, 'html.parser') soup meanings = soup.findAll('div', attrs = {'class': 'css-1o58fj8 e1hk9ate4'}) meaning = [x.text for x in meanings] first_meaning = meaning[0] for x in meaning: print(x) print('\n') engine.say(first_meaning) engine.runAndWait() We shall use the website “Dictionary.com” to scrape the meanings. The function request.get(url) sends a GET request to the URL whose entire HTML code is extracted using BeautifulSoup(page.text, ‘lxml’) Once the code is extracted, we shall inspect the code to find all html tags containing the meaning of the word passed as the parameter. These values are present inside the div having class “css-1o58fj8 e1hk9ate4” as shown below. <div value="1" class="css-kg6o37 e1q3nk1v3"><span class="one-click-content css-1p89gle e1q3nk1v4" data-term="that" data-linkid="nn1ov4">the act of a person or thing that <a href="/browse/scrape" class="luna-xref" data-linkid="nn1ov4">scrapes</a>. </span></div> We first find all the div elements having class “css-1o58fj8 e1hk9ate4”. Then we iterate through each element to obtain the text (meaning of the word)present inside the div. Case 5: If the user wants the assistant to Take Notes, he/she can ask the assistant “Hey! Can you take notes for me?” Since the word “take notes” is present in the audio, the function take_notes() will be called. Let us take a look at this function. def take_notes():r5 = sr.Recognizer() with sr.Microphone() as source: print('What is your "TO DO LIST" for today') engine.say('What is your "TO DO LIST" for today') engine.runAndWait() audio = r5.listen(source) audio = r5.recognize_google(audio) print(audio) today = date.today() today = str(today) with open('MyNotes.txt','a') as f: f.write('\n') f.write(today) f.write('\n') f.write(audio) f.write('\n') f.write('......') f.write('\n') f.close() engine.say('Notes Taken') engine.runAndWait() We start by initialising the recogniser to ask the user for their ‘To-Do list’. We then listen to the user and recognise the audio using recognize_google. Now we will open a notepad named “MyNotes.txt” and jot down the notes given by the user along with the date. We will then create another function named show_notes() which will read out the notes/To-Do list for today from the notepad named “MyNotes.txt”. def show_notes(): with open('MyNotes.txt', 'r') as f: task = f.read() task = task.split('......') engine.say(task[-2]) engine.runAndWait() Case 6: If the user wants to Play YouTube Video, he/she can ask the assistant “Hey! 
Can you play Hypnotic?” Since the word “play” is present in the audio, the function play_youtube(words[-1]) will be called with “hypnotic” passed as the parameter. Let us take a look at this function.

def play_youtube(audio):
    url = 'https://www.google.com/search?q=youtube+' + audio
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36'
    }
    engine.say('Playing')
    engine.say(audio)
    engine.runAndWait()
    page = requests.get(url, headers=headers)
    soup = BeautifulSoup(page.content, 'html.parser')
    link = soup.findAll('div', attrs = {'class': 'r'})
    link = link[0]
    link = link.find('a')
    link = str(link)
    link = link.split('"')
    link = link[1]
    webbrowser.open(link)

We will use Google Videos to search for the video title, and open the first link to play the YouTube video, which is present in the div element having class ‘r’.

Case 7: If the user wants to Search for a Location, he/she can ask the assistant “Hey! Where is IIT Bombay?” Since the phrase “where is” is present in the audio, the following code will be executed. (This code is present inside the if-else block of the main function.)

elif 'where is' in audio:
    print('..')
    words = audio.split('where is')
    print(words[-1])
    link = str(words[-1])
    link = re.sub(' ', '', link)
    engine.say('Locating')
    engine.say(link)
    engine.runAndWait()
    link = f'https://www.google.co.in/maps/place/{link}'
    print(link)
    webbrowser.open(link)

We will join the location provided by the user with the Google Maps link and then use webbrowser.open(link) to open the link locating ‘IIT Bombay’.

Case 8: If the user wants to Open a Website, he/she can ask the assistant “Hey! Can you open Towards Data Science?” Since the word “open” is present in the audio, the following code will be executed. (This code is present inside the if-else block of the main function.)

elif 'open' in audio:
    print('..')
    words = audio.split('open')
    print(words[-1])
    link = str(words[-1])
    link = re.sub(' ', '', link)
    engine.say('Opening')
    engine.say(link)
    engine.runAndWait()
    link = f'https://{link}.com'
    print(link)
    webbrowser.open(link)

We will join the website name provided by the user with the standard format of any URL, i.e. https://{website name}.com, to open the website.

So that is how we create a simple voice assistant. You can modify the code to add more features like performing basic mathematical calculations, telling jokes, creating a reminder, changing desktop wallpapers, etc. You can find the whole code in my GitHub repository linked below.
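As one illustration of such an extension, a basic calculation feature could be handled by a small helper plus one extra elif branch in the main function. The sketch below is hypothetical and not part of the original project; the helper name calculate and the mapping from spoken words to operators are my own assumptions.

# Hypothetical helper: turns a spoken phrase like "3 plus 5" into a number.
# Only the four basic operators are supported in this sketch.
def calculate(phrase):
    operators = {'plus': '+', 'minus': '-', 'times': '*', 'divided by': '/'}  # spoken word -> symbol (assumption)
    for word, symbol in operators.items():
        phrase = phrase.replace(word, symbol)
    tokens = phrase.split()
    if len(tokens) == 3:  # expect: number operator number
        a, op, b = tokens
        a, b = float(a), float(b)
        if op == '+':
            return a + b
        if op == '-':
            return a - b
        if op == '*':
            return a * b
        if op == '/':
            return a / b
    return None

# Inside the main if-else chain one could then add (sketch):
# elif 'calculate' in audio:
#     result = calculate(audio.replace('calculate', '').strip())
#     engine.say('The answer is ' + str(result))
#     engine.runAndWait()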
[ { "code": null, "e": 773, "s": 171, "text": "Nowadays people don’t have time to manually search the internet for information or the answers to their questions, rather they expect someone to do it for them, just like a personal assistant who listens to the commands provided and acts accordingly.Thanks to Artificial Intelligence, this personal assistant can now be available to everyone in the form of a voice assistant which is much faster and reliable than a human. An assistant that is even capable of accomplishing difficult tasks like placing an order online, playing music, turning on the lights, etc. just by listening to the users command" }, { "code": null, "e": 1013, "s": 773, "text": "In this article, I am going to show you how to build a voice assistant that responds to basic user queries. After reading this article you shall get a basic idea about what web scraping is and how can it be used to build a voice assistant." }, { "code": null, "e": 1110, "s": 1013, "text": "Note: You will need to have a basic understanding of the Python language to follow this article." }, { "code": null, "e": 1126, "s": 1110, "text": "Voice Assistant" }, { "code": null, "e": 1142, "s": 1126, "text": "Voice Assistant" }, { "code": null, "e": 1158, "s": 1142, "text": "2. Web Scraping" }, { "code": null, "e": 1176, "s": 1158, "text": "3. Implementation" }, { "code": null, "e": 1412, "s": 1176, "text": "Voice Assistant is a software agent that can perform tasks or services for an individual based on commands or questions. In general, voice assistants react to voice commands and give the user relevant information about his/her queries." }, { "code": null, "e": 1654, "s": 1412, "text": "The assistant can understand and react to specific commands given by the user like playing a song on YouTube or knowing about the weather. It will search and/or scrape the web to find the response to the command in order to satisfy the user." }, { "code": null, "e": 1826, "s": 1654, "text": "Presently voice assistants are already able to process orders of products, answer questions, perform actions like playing music or start a simple phone call with a friend." }, { "code": null, "e": 1891, "s": 1826, "text": "The implemented voice assistant can perform the following tasks:" }, { "code": null, "e": 2071, "s": 1891, "text": "Provide weather detailsProvide corona updatesProvide latest newsSearch the meaning of a wordTake notesPlay YouTube videosShow location on Google MapsOpen websites on Google Chrome" }, { "code": null, "e": 2095, "s": 2071, "text": "Provide weather details" }, { "code": null, "e": 2118, "s": 2095, "text": "Provide corona updates" }, { "code": null, "e": 2138, "s": 2118, "text": "Provide latest news" }, { "code": null, "e": 2167, "s": 2138, "text": "Search the meaning of a word" }, { "code": null, "e": 2178, "s": 2167, "text": "Take notes" }, { "code": null, "e": 2198, "s": 2178, "text": "Play YouTube videos" }, { "code": null, "e": 2227, "s": 2198, "text": "Show location on Google Maps" }, { "code": null, "e": 2258, "s": 2227, "text": "Open websites on Google Chrome" }, { "code": null, "e": 2465, "s": 2258, "text": "When building a voice assistant, there are 2 important libraries that you should consider. Python’s SpeechRecognition package helps the voice assistant understand the user. 
It can be implemented as follows:" }, { "code": null, "e": 2772, "s": 2465, "text": "import speech_recognition as sr#initalises the recognizerr1 = sr.Recognizer()#uses microphone to take the inputwith sr.Microphone() as source: print('Listening..') #listens to the user audio = r1.listen(source) #recognises the audio and converts it to text audio = r1.recognize_google(audio)" }, { "code": null, "e": 2910, "s": 2772, "text": "Python’s pyttsx3 package helps the voice assistant respond to the user by converting the text to audio. It can be implemented as follows:" }, { "code": null, "e": 3046, "s": 2910, "text": "import pyttsx3#initialises pyttsx3engine = pyttsx3.init('sapi5')#converts the text to audioengine.say('Hello World')engine.runAndWait()" }, { "code": null, "e": 3236, "s": 3046, "text": "Web Scraping refers to the extraction of data from websites. The data on the websites are unstructured. Web scraping helps collect these unstructured data and store it in a structured form." }, { "code": null, "e": 3279, "s": 3236, "text": "Some applications of web scraping include:" }, { "code": null, "e": 3383, "s": 3279, "text": "Scraping Social Media such as Twitter to collect tweets and comments for performing sentiment analysis." }, { "code": null, "e": 3505, "s": 3383, "text": "Scraping E-Commerce Website such as Amazon to extract product information for data analysis and predicting market trends." }, { "code": null, "e": 3617, "s": 3505, "text": "Scraping E-Mail Address to collect email IDs and then send bulk emails for marketing and advertisement purpose." }, { "code": null, "e": 3715, "s": 3617, "text": "Scraping Google Images to create datasets for training a Machine Learning or Deep Learning model." }, { "code": null, "e": 3828, "s": 3715, "text": "Although it can be done manually, Python’s library Beautiful Soup makes it easier and faster to scrape the data." }, { "code": null, "e": 3914, "s": 3828, "text": "To extract data using web scraping with python, you need to follow these basic steps:" }, { "code": null, "e": 4153, "s": 3914, "text": "Find the URL of the website that you want to scrapeExtract the entire code of the websiteInspect the website and find the data you want to extractFilter the code using html tags to get the desired dataStore the data in the required format" }, { "code": null, "e": 4205, "s": 4153, "text": "Find the URL of the website that you want to scrape" }, { "code": null, "e": 4244, "s": 4205, "text": "Extract the entire code of the website" }, { "code": null, "e": 4302, "s": 4244, "text": "Inspect the website and find the data you want to extract" }, { "code": null, "e": 4358, "s": 4302, "text": "Filter the code using html tags to get the desired data" }, { "code": null, "e": 4396, "s": 4358, "text": "Store the data in the required format" }, { "code": null, "e": 4485, "s": 4396, "text": "Let us start by importing the following libraries in you python notebook as shown below:" }, { "code": null, "e": 4629, "s": 4485, "text": "import requests from bs4 import BeautifulSoup import reimport speech_recognition as sr from datetime import dateimport webbrowserimport pyttsx3" }, { "code": null, "e": 4779, "s": 4629, "text": "Now let us create our main function which consists of a bunch of if-else statements which tell the assistant how to respond under certain conditions." 
}, { "code": null, "e": 6683, "s": 4779, "text": "engine = pyttsx3.init('sapi5')r1 = sr.Recognizer()with sr.Microphone() as source: print('Listening..') engine.say('Listening') engine.runAndWait() audio = r1.listen(source) audio = r1.recognize_google(audio) if 'weather' in audio: print('..') words = audio.split(' ') print(words[-1]) scrape_weather(words[-1]) elif 'covid' in audio: print('..') words = audio.split(' ') corona_updates(words[-1]) elif 'meaning' in audio: print('..') words = audio.split(' ') print(words[-1]) scrape_meaning(words[-1]) elif 'take notes' in audio: print('..') take_notes() print('Noted!!') elif 'show notes' in audio: print('..') show_notes() print('Done') elif 'news' in audio: print('..') scrape_news() elif 'play' in audio: print('..') words = audio.split(' ') print(words[-1]) play_youtube(audio) elif 'open' in audio: print('..') words = audio.split('open') print(words[-1]) link = str(words[-1]) link = re.sub(' ', '', link) engine.say('Opening') engine.say(link) engine.runAndWait() link = f'https://{link}.com' print(link) webbrowser.open(link) elif 'where is' in audio: print('..') words = audio.split('where is') print(words[-1]) link = str(words[-1]) link = re.sub(' ', '', link) engine.say('Locating') engine.say(link) engine.runAndWait() link = f'https://www.google.co.in/maps/place/{link}' print(link) webbrowser.open(link) else: print(audio) print('Sorry, I do not understand that!') engine.say('Sorry, I do not understand that!') engine.runAndWait()" }, { "code": null, "e": 6809, "s": 6683, "text": "Case 1: If the user wants to know about the Weather, he/she can ask the assistant “Hey! What is the weather today in Mumbai?”" }, { "code": null, "e": 6945, "s": 6809, "text": "Since the word “weather” is present in the audio, the function scrape_weather(words[-1]) will be called with the parameter as “Mumbai”." }, { "code": null, "e": 6982, "s": 6945, "text": "Let us take a look at this function." }, { "code": null, "e": 8391, "s": 6982, "text": "def scrape_weather(city): url = 'https://www.google.com/search?q=accuweather+' + city page = requests.get(url)soup = BeautifulSoup(page.text, 'lxml')links = [a['href']for a in soup.findAll('a')] link = str(links[16]) link = link.split('=') link = str(link[1]).split('&') link = link[0] page = requests.get(link, headers={'User-Agent': 'Mozilla/5.0'})soup = BeautifulSoup(page.content, 'lxml') time = soup.find('p', attrs = {'class': 'cur-con-weather-card__subtitle'}) time = re.sub('\\n', '', time.text) time = re.sub('\\t', '', time) time = 'Time: ' + timetemperature = soup.find('div', attrs = {'class':'temp'}) temperature = 'Temperature: ' + temperature.text realfeel = soup.find('div', attrs = {'class': 'real-feel'}) realfeel = re.sub('\\n', '',realfeel.text) realfeel = re.sub('\\t', '',realfeel) realfeel = 'RealFeel: ' + realfeel[-3:] + 'C'climate = soup.find('span', attrs = {'class':'phrase'}) climate = \"Climate: \" + climate.text info = 'For more information visit: ' + link print('The weather for today is: ') print(time) print(temperature) print(realfeel) print(climate) print(info) engine.say('The weather for today is: ') engine.say(time) engine.say(temperature) engine.say(realfeel) engine.say(climate) engine.say('For more information visit accuweather.com' ) engine.runAndWait()" }, { "code": null, "e": 8618, "s": 8391, "text": "We shall use the website “accuweather.com ” to scrape all the weather related information. 
The function request.get(url) sends a GET request to the URL whose entire HTML code is extracted using BeautifulSoup(page.text, ‘lxml’)" }, { "code": null, "e": 8780, "s": 8618, "text": "Once the code is extracted, we shall inspect the code to find the data of interest. For example, the numeric value of temperature present in the following format" }, { "code": null, "e": 8810, "s": 8780, "text": "<div class = \"temp\">26°</div>" }, { "code": null, "e": 8833, "s": 8810, "text": "can be extracted using" }, { "code": null, "e": 8876, "s": 8833, "text": "soup.find('div', attrs = {'class':'temp'})" }, { "code": null, "e": 9010, "s": 8876, "text": "Similarly we extract the time, the real feel and the climate, and using engine.say(), we make the assistant respond back to the user." }, { "code": null, "e": 9200, "s": 9010, "text": "Case 2: If the user wants current COVID-19 updates, he/she can ask the assistant “Hey! Can you give me the COVID updates of India?” or “Hey! Can you give me the COVID updates of the World?”" }, { "code": null, "e": 9343, "s": 9200, "text": "Since the word “covid” is present in the audio, the function corona_updates(words[-1]) will be called with the parameter as “India” or “World”" }, { "code": null, "e": 9380, "s": 9343, "text": "Let us take a look at this function." }, { "code": null, "e": 11054, "s": 9380, "text": "def corona_updates(audio): audio = audiourl = 'https://www.worldometers.info/coronavirus/' page = requests.get(url)soup = BeautifulSoup(page.content, 'lxml')totalcases = soup.findAll('div', attrs = {'class': 'maincounter-number'}) total_cases = [] for total in totalcases: total_cases.append(total.find('span').text)world_total = 'Total Coronavirus Cases: ' + total_cases[0] world_deaths = 'Total Deaths: ' + total_cases[1] world_recovered = 'Total Recovered: ' + total_cases[2] info = 'For more information visit: ' + 'https://www.worldometers.info/coronavirus/#countries'if 'world' in audio: print('World Updates: ') print(world_total) print(world_deaths) print(world_recovered) print(info)else: country = audiourl = 'https://www.worldometers.info/coronavirus/country/' + country.lower() + '/' page = requests.get(url)soup = BeautifulSoup(page.content, 'lxml')totalcases = soup.findAll('div', attrs = {'class': 'maincounter-number'}) total_cases = [] for total in totalcases: total_cases.append(total.find('span').text)total = 'Total Coronavirus Cases: ' + total_cases[0] deaths = 'Total Deaths: ' + total_cases[1] recovered = 'Total Recovered: ' + total_cases[2]info = 'For more information visit: ' + urlupdates = country + ' Updates: 'print(updates) print(total) print(deaths) print(recovered) print(info) engine.say(updates) engine.say(total) engine.say(deaths) engine.say(recovered) engine.say('For more information visit: worldometers.info') engine.runAndWait()" }, { "code": null, "e": 11282, "s": 11054, "text": "We shall use the website “worldometers.info ” to scrape all the corona related information. The function request.get(url) sends a GET request to the URL whose entire HTML code is extracted using BeautifulSoup(page.text, ‘lxml’)" }, { "code": null, "e": 11426, "s": 11282, "text": "Once the code is extracted, we shall inspect the code to find the numerical values of the total corona cases, total recovered and total deaths." }, { "code": null, "e": 11522, "s": 11426, "text": "These value is present inside a span of a div having class “maincounter-number” as shown below." 
}, { "code": null, "e": 11688, "s": 11522, "text": "<div id=\"maincounter-wrap\" style=\"margin-top:15px\"><h1>Coronavirus Cases:</h1><div class=\"maincounter-number\"><span style=\"color:#aaa\">25,091,068 </span></div></div>" }, { "code": null, "e": 11723, "s": 11688, "text": "These can be extracted as follows." }, { "code": null, "e": 11897, "s": 11723, "text": "totalcases = soup.findAll('div', attrs = {'class': 'maincounter-number'}) total_cases = [] for total in totalcases: total_cases.append(total.find('span').text)" }, { "code": null, "e": 12051, "s": 11897, "text": "We first find all the div elements having class “maincounter-number”. Then we iterate through each div to obtain the span containing the numerical value." }, { "code": null, "e": 12171, "s": 12051, "text": "Case 3: If the user wants to know about the News, he/she can ask the assistant “Hey! Can you give me the news updates?”" }, { "code": null, "e": 12261, "s": 12171, "text": "Since the word “news” is present in the audio, the function scrape_news() will be called." }, { "code": null, "e": 12714, "s": 12261, "text": "def scrape_news(): url = 'https://news.google.com/topstories?hl=en-IN&gl=IN&ceid=IN:en ' page = requests.get(url) soup = BeautifulSoup(page.content, 'html.parser') news = soup.findAll('h3', attrs = {'class':'ipQwMb ekueJc RD0gLb'}) for n in news: print(n.text) print('\\n') engine.say(n.text) print('For more information visit: ', url) engine.say('For more information visit google news') engine.runAndWait()" }, { "code": null, "e": 12910, "s": 12714, "text": "We shall use “Google News” to scrape the headlines of news. The function request.get(url) sends a GET request to the URL whose entire HTML code is extracted using BeautifulSoup(page.text, ‘lxml’)" }, { "code": null, "e": 13006, "s": 12910, "text": "Once the code is extracted, we shall inspect the code to find the headlines of the latest news." }, { "code": null, "e": 13126, "s": 13006, "text": "These headlines are present inside the href attribute of the h3 tag having class “ipQwMb ekueJc RD0gLb” as shown below." }, { "code": null, "e": 13404, "s": 13126, "text": "<h3 class=\"ipQwMb ekueJc RD0gLb\"><a href=\"./articles/CAIiEA0DEuHOMc9oauy44TAAZmAqFggEKg4IACoGCAoww7k_MMevCDDW4AE?hl=en-IN&amp;gl=IN&amp;ceid=IN%3Aen\" class=\"DY5T1d\">Rhea Chakraborty arrest: Kubbra Sait reminds ‘still not a murderer’, Rhea Kapoor says ‘we settled on...</a></h3>" }, { "code": null, "e": 13582, "s": 13404, "text": "We first find all the h3 elements having class “ipQwMb ekueJc RD0gLb”. Then we iterate through each element to obtain the text (news headline) present inside the href attribute." }, { "code": null, "e": 13710, "s": 13582, "text": "Case 4: If the user wants to know the Meaning of any word, he/she can ask the assistant “Hey! What is the meaning of scraping?”" }, { "code": null, "e": 13847, "s": 13710, "text": "Since the word “meaning” is present in the audio, the function scrape_meaning(words[-1]) will be called with the parameter as “scraping”" }, { "code": null, "e": 13884, "s": 13847, "text": "Let us take a look at this function." 
}, { "code": null, "e": 14326, "s": 13884, "text": "def scrape_meaning(audio): word = audio url = 'https://www.dictionary.com/browse/' + word page = requests.get(url) soup = BeautifulSoup(page.content, 'html.parser') soup meanings = soup.findAll('div', attrs = {'class': 'css-1o58fj8 e1hk9ate4'}) meaning = [x.text for x in meanings] first_meaning = meaning[0] for x in meaning: print(x) print('\\n') engine.say(first_meaning) engine.runAndWait()" }, { "code": null, "e": 14528, "s": 14326, "text": "We shall use the website “Dictionary.com” to scrape the meanings. The function request.get(url) sends a GET request to the URL whose entire HTML code is extracted using BeautifulSoup(page.text, ‘lxml’)" }, { "code": null, "e": 14664, "s": 14528, "text": "Once the code is extracted, we shall inspect the code to find all html tags containing the meaning of the word passed as the parameter." }, { "code": null, "e": 14757, "s": 14664, "text": "These values are present inside the div having class “css-1o58fj8 e1hk9ate4” as shown below." }, { "code": null, "e": 15018, "s": 14757, "text": "<div value=\"1\" class=\"css-kg6o37 e1q3nk1v3\"><span class=\"one-click-content css-1p89gle e1q3nk1v4\" data-term=\"that\" data-linkid=\"nn1ov4\">the act of a person or thing that <a href=\"/browse/scrape\" class=\"luna-xref\" data-linkid=\"nn1ov4\">scrapes</a>. </span></div>" }, { "code": null, "e": 15192, "s": 15018, "text": "We first find all the div elements having class “css-1o58fj8 e1hk9ate4”. Then we iterate through each element to obtain the text (meaning of the word)present inside the div." }, { "code": null, "e": 15310, "s": 15192, "text": "Case 5: If the user wants the assistant to Take Notes, he/she can ask the assistant “Hey! Can you take notes for me?”" }, { "code": null, "e": 15405, "s": 15310, "text": "Since the word “take notes” is present in the audio, the function take_notes() will be called." }, { "code": null, "e": 15442, "s": 15405, "text": "Let us take a look at this function." }, { "code": null, "e": 16107, "s": 15442, "text": "def take_notes():r5 = sr.Recognizer() with sr.Microphone() as source: print('What is your \"TO DO LIST\" for today') engine.say('What is your \"TO DO LIST\" for today') engine.runAndWait() audio = r5.listen(source) audio = r5.recognize_google(audio) print(audio) today = date.today() today = str(today) with open('MyNotes.txt','a') as f: f.write('\\n') f.write(today) f.write('\\n') f.write(audio) f.write('\\n') f.write('......') f.write('\\n') f.close() engine.say('Notes Taken') engine.runAndWait()" }, { "code": null, "e": 16371, "s": 16107, "text": "We start by initialising the recogniser to ask the user for their ‘To-Do list’. We then listen to the user and recognise the audio using recognize_google. Now we will open a notepad named “MyNotes.txt” and jot down the notes given by the user along with the date." }, { "code": null, "e": 16516, "s": 16371, "text": "We will then create another function named show_notes() which will read out the notes/To-Do list for today from the notepad named “MyNotes.txt”." }, { "code": null, "e": 16682, "s": 16516, "text": "def show_notes(): with open('MyNotes.txt', 'r') as f: task = f.read() task = task.split('......') engine.say(task[-2]) engine.runAndWait() " }, { "code": null, "e": 16790, "s": 16682, "text": "Case 6: If the user wants to Play YouTube Video, he/she can ask the assistant “Hey! 
Can you play Hypnotic?”" }, { "code": null, "e": 16930, "s": 16790, "text": "Since the word “play” is present in the audio, the function play_youtube(words[-1]) will be called with “hypnotic” passed as the parameter." }, { "code": null, "e": 16967, "s": 16930, "text": "Let us take a look at this function." }, { "code": null, "e": 17550, "s": 16967, "text": "def play_youtube(audio):url = 'https://www.google.com/search?q=youtube+' + audio headers = { 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36' } engine.say('Playing') engine.say(audio) engine.runAndWait() page = requests.get(url, headers=headers) soup = BeautifulSoup(page.content, 'html.parser') link = soup.findAll('div', attrs = {'class':'r'}) link = link[0] link = link.find('a') link = str(link) link = link.split('\"') link = link[1]webbrowser.open(link)" }, { "code": null, "e": 17711, "s": 17550, "text": "We will use Google Videos to search for the video title, and open the first link to play the YouTube video which is present in the div element having class ‘r’." }, { "code": null, "e": 17818, "s": 17711, "text": "Case 7: If the user wants to Search for Location, he/she can ask the assistant “Hey! Where is IIT Bombay?”" }, { "code": null, "e": 17906, "s": 17818, "text": "Since the word “where is” is present in the audio, the following code will be executed." }, { "code": null, "e": 17974, "s": 17906, "text": "(This code is present inside the if-else loop of the main function)" }, { "code": null, "e": 18336, "s": 17974, "text": "elif 'where is' in audio: print('..') words = audio.split('where is') print(words[-1]) link = str(words[-1]) link = re.sub(' ', '', link) engine.say('Locating') engine.say(link) engine.runAndWait() link = f'https://www.google.co.in/maps/place/{link}' print(link) webbrowser.open(link)" }, { "code": null, "e": 18483, "s": 18336, "text": "We will join the location provided by the user with the Google Maps link and the use webbrowser.open(link) to open the link locating ‘IIT Bombay’." }, { "code": null, "e": 18599, "s": 18483, "text": "Case 8: If the user wants to Open a Website, he/she can ask the assistant “Hey! Can you open Towards Data Science?”" }, { "code": null, "e": 18683, "s": 18599, "text": "Since the word “open” is present in the audio, the following code will be executed." }, { "code": null, "e": 18751, "s": 18683, "text": "(This code is present inside the if-else loop of the main function)" }, { "code": null, "e": 19080, "s": 18751, "text": "elif 'open' in audio: print('..') words = audio.split('open') print(words[-1]) link = str(words[-1]) link = re.sub(' ', '', link) engine.say('Opening') engine.say(link) engine.runAndWait() link = f'https://{link}.com' print(link) webbrowser.open(link)" }, { "code": null, "e": 19219, "s": 19080, "text": "We will join the website name provided by the user with the standard format of any URL i.e https://{website name}.com to open the website." }, { "code": null, "e": 19436, "s": 19219, "text": "So that is how we create a simple voice assistant. You can modify the code to to add more features like performing basic mathematical calculation, telling jokes, creating a reminder, changing desktop wallpapers, etc." } ]
MongoDB - Covered Queries
In this chapter, we will learn about covered queries.

As per the official MongoDB documentation, a covered query is a query in which −

All the fields in the query are part of an index.

All the fields returned in the query are in the same index.

Since all the fields present in the query are part of an index, MongoDB matches the query conditions and returns the result using the same index without actually looking inside the documents. Since indexes are present in RAM, fetching data from indexes is much faster as compared to fetching data by scanning documents.

To test covered queries, consider the following document in the users collection −

{
   "_id": ObjectId("53402597d852426020000003"),
   "contact": "987654321",
   "dob": "01-01-1991",
   "gender": "M",
   "name": "Tom Benzamin",
   "user_name": "tombenzamin"
}

We will first create a compound index for the users collection on the fields gender and user_name using the following query −

>db.users.createIndex({gender:1,user_name:1})
{
   "createdCollectionAutomatically" : false,
   "numIndexesBefore" : 1,
   "numIndexesAfter" : 2,
   "ok" : 1
}

Now, this index will cover the following query −

>db.users.find({gender:"M"},{user_name:1,_id:0})
{ "user_name" : "tombenzamin" }

That is to say that for the above query, MongoDB would not go looking into database documents. Instead it would fetch the required data from indexed data, which is very fast.

Since our index does not include the _id field, we have explicitly excluded it from the result set of our query, as MongoDB by default returns the _id field in every query. So the following query would not have been covered by the index created above −

>db.users.find({gender:"M"},{user_name:1})
{ "_id" : ObjectId("53402597d852426020000003"), "user_name" : "tombenzamin" }

Lastly, remember that an index cannot cover a query if −

Any of the indexed fields is an array

Any of the indexed fields is a subdocument
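If you are driving MongoDB from Python rather than the mongo shell, the same covered query can be reproduced with pymongo. The sketch below is an addition to this chapter and assumes a local MongoDB server and a database named test containing the users collection shown above.

from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017/")   # assumes a local server
db = client["test"]                                  # assumed database name

# Compound index on gender and user_name, matching the shell example
db.users.create_index([("gender", ASCENDING), ("user_name", ASCENDING)])

# Covered query: the filter and the projection only touch indexed fields,
# and _id is explicitly excluded so the documents never need to be read.
for doc in db.users.find({"gender": "M"}, {"user_name": 1, "_id": 0}):
    print(doc)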
[ { "code": null, "e": 2607, "s": 2553, "text": "In this chapter, we will learn about covered queries." }, { "code": null, "e": 2688, "s": 2607, "text": "As per the official MongoDB documentation, a covered query is a query in which −" }, { "code": null, "e": 2738, "s": 2688, "text": "All the fields in the query are part of an index." }, { "code": null, "e": 2798, "s": 2738, "text": "All the fields returned in the query are in the same index." }, { "code": null, "e": 3118, "s": 2798, "text": "Since all the fields present in the query are part of an index, MongoDB matches the query conditions and returns the result using the same index without actually looking inside the documents. Since indexes are present in RAM, fetching data from indexes is much faster as compared to fetching data by scanning documents." }, { "code": null, "e": 3201, "s": 3118, "text": "To test covered queries, consider the following document in the users collection −" }, { "code": null, "e": 3379, "s": 3201, "text": "{\n \"_id\": ObjectId(\"53402597d852426020000003\"),\n \"contact\": \"987654321\",\n \"dob\": \"01-01-1991\",\n \"gender\": \"M\",\n \"name\": \"Tom Benzamin\",\n \"user_name\": \"tombenzamin\"\n}" }, { "code": null, "e": 3505, "s": 3379, "text": "We will first create a compound index for the users collection on the fields gender and user_name using the following query −" }, { "code": null, "e": 3657, "s": 3505, "text": ">db.users.createIndex({gender:1,user_name:1})\n{\n\t\"createdCollectionAutomatically\" : false,\n\t\"numIndexesBefore\" : 1,\n\t\"numIndexesAfter\" : 2,\n\t\"ok\" : 1\n}" }, { "code": null, "e": 3706, "s": 3657, "text": "Now, this index will cover the following query −" }, { "code": null, "e": 3787, "s": 3706, "text": ">db.users.find({gender:\"M\"},{user_name:1,_id:0})\n{ \"user_name\" : \"tombenzamin\" }" }, { "code": null, "e": 3961, "s": 3787, "text": "That is to say that for the above query, MongoDB would not go looking into database documents. Instead it would fetch the required data from indexed data which is very fast." }, { "code": null, "e": 4206, "s": 3961, "text": "Since our index does not include _id field, we have explicitly excluded it from result set of our query, as MongoDB by default returns _id field in every query. 
So the following query would not have been covered inside the index created above −" }, { "code": null, "e": 4327, "s": 4206, "text": ">db.users.find({gender:\"M\"},{user_name:1})\n{ \"_id\" : ObjectId(\"53402597d852426020000003\"), \"user_name\" : \"tombenzamin\" }" }, { "code": null, "e": 4384, "s": 4327, "text": "Lastly, remember that an index cannot cover a query if −" }, { "code": null, "e": 4422, "s": 4384, "text": "Any of the indexed fields is an array" }, { "code": null, "e": 4465, "s": 4422, "text": "Any of the indexed fields is a subdocument" }, { "code": null, "e": 4498, "s": 4465, "text": "\n 44 Lectures \n 3 hours \n" }, { "code": null, "e": 4517, "s": 4498, "text": " Arnab Chakraborty" }, { "code": null, "e": 4552, "s": 4517, "text": "\n 54 Lectures \n 5.5 hours \n" }, { "code": null, "e": 4580, "s": 4552, "text": " Eduonix Learning Solutions" }, { "code": null, "e": 4615, "s": 4580, "text": "\n 44 Lectures \n 4.5 hours \n" }, { "code": null, "e": 4638, "s": 4615, "text": " Kaushik Roy Chowdhury" }, { "code": null, "e": 4673, "s": 4638, "text": "\n 40 Lectures \n 2.5 hours \n" }, { "code": null, "e": 4690, "s": 4673, "text": " University Code" }, { "code": null, "e": 4723, "s": 4690, "text": "\n 26 Lectures \n 8 hours \n" }, { "code": null, "e": 4742, "s": 4723, "text": " Bassir Jafarzadeh" }, { "code": null, "e": 4777, "s": 4742, "text": "\n 70 Lectures \n 2.5 hours \n" }, { "code": null, "e": 4797, "s": 4777, "text": " Skillbakerystudios" }, { "code": null, "e": 4804, "s": 4797, "text": " Print" }, { "code": null, "e": 4815, "s": 4804, "text": " Add Notes" } ]
Addition in Nested Tuples - Python - GeeksforGeeks
11 Nov, 2019

Sometimes, while working with records, we can have a problem in which we need to perform index-wise addition of tuple elements. This can get complicated when the tuple elements are themselves tuples, whose inner elements may again be tuples. Let's discuss certain ways in which this problem can be solved.

Method #1 : Using zip() + nested generator expression

The combination of the above functions can be used to perform the task. In this, we combine the elements across tuples using zip(). The iteration and summation logic is provided by the generator expression.

# Python3 code to demonstrate working of
# Addition in nested tuples
# using zip() + nested generator expression

# initialize tuples
test_tup1 = ((1, 3), (4, 5), (2, 9), (1, 10))
test_tup2 = ((6, 7), (3, 9), (1, 1), (7, 3))

# printing original tuples
print("The original tuple 1 : " + str(test_tup1))
print("The original tuple 2 : " + str(test_tup2))

# Addition in nested tuples
# using zip() + nested generator expression
res = tuple(tuple(a + b for a, b in zip(tup1, tup2))
            for tup1, tup2 in zip(test_tup1, test_tup2))

# printing result
print("The resultant tuple after summation : " + str(res))

Output:

The original tuple 1 : ((1, 3), (4, 5), (2, 9), (1, 10))
The original tuple 2 : ((6, 7), (3, 9), (1, 1), (7, 3))
The resultant tuple after summation : ((7, 10), (7, 14), (3, 10), (8, 13))

Method #2 : Using isinstance() + zip() + loop + list comprehension

The combination of the above functions can be used to perform this particular task. In this, we check for the nesting type and perform recursion. This method gives the flexibility of handling more than one level of nesting.

# Python3 code to demonstrate working of
# Addition in nested tuples
# using isinstance() + zip() + loop + list comprehension

# function to perform task
def tup_sum(tup1, tup2):
    if isinstance(tup1, (list, tuple)) and isinstance(tup2, (list, tuple)):
        return tuple(tup_sum(x, y) for x, y in zip(tup1, tup2))
    return tup1 + tup2

# initialize tuples
test_tup1 = ((1, 3), (4, 5), (2, 9), (1, 10))
test_tup2 = ((6, 7), (3, 9), (1, 1), (7, 3))

# printing original tuples
print("The original tuple 1 : " + str(test_tup1))
print("The original tuple 2 : " + str(test_tup2))

# Addition in nested tuples
# using isinstance() + zip() + loop + list comprehension
res = tuple(tup_sum(x, y) for x, y in zip(test_tup1, test_tup2))

# printing result
print("The resultant tuple after summation : " + str(res))

Output:

The original tuple 1 : ((1, 3), (4, 5), (2, 9), (1, 10))
The original tuple 2 : ((6, 7), (3, 9), (1, 1), (7, 3))
The resultant tuple after summation : ((7, 10), (7, 14), (3, 10), (8, 13))
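For completeness, the same index-wise addition can also be written with map() and operator.add. This variant is an addition to the original article and, like Method #1, only handles a single level of nesting.

# Additional variant: map() + operator.add
import operator

test_tup1 = ((1, 3), (4, 5), (2, 9), (1, 10))
test_tup2 = ((6, 7), (3, 9), (1, 1), (7, 3))

# zip pairs up the inner tuples; map(operator.add, ...) adds them element-wise
res = tuple(tuple(map(operator.add, tup1, tup2))
            for tup1, tup2 in zip(test_tup1, test_tup2))

print(res)  # ((7, 10), (7, 14), (3, 10), (8, 13))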
[ { "code": null, "e": 24466, "s": 24438, "text": "\n11 Nov, 2019" }, { "code": null, "e": 24753, "s": 24466, "text": "Sometimes, while working with records, we can have a problem in which we require to perform index wise addition of tuple elements. This can get complicated with tuple elements to be tuple and inner elements again be tuple. Let’s discuss certain ways in which this problem can be solved." }, { "code": null, "e": 24807, "s": 24753, "text": "Method #1 : Using zip() + nested generator expression" }, { "code": null, "e": 25007, "s": 24807, "text": "The combination of above functions can be used to perform the task. In this, we combine the elements across tuples using zip(). The iterations and summation logic is provided by generator expression." }, { "code": "# Python3 code to demonstrate working of# Addition in nested tuples# using zip() + nested generator expression # initialize tuplestest_tup1 = ((1, 3), (4, 5), (2, 9), (1, 10))test_tup2 = ((6, 7), (3, 9), (1, 1), (7, 3)) # printing original tuplesprint(\"The original tuple 1 : \" + str(test_tup1))print(\"The original tuple 2 : \" + str(test_tup2)) # Addition in nested tuples# using zip() + nested generator expressionres = tuple(tuple(a + b for a, b in zip(tup1, tup2))\\ for tup1, tup2 in zip(test_tup1, test_tup2)) # printing resultprint(\"The resultant tuple after summation : \" + str(res))", "e": 25606, "s": 25007, "text": null }, { "code": null, "e": 25795, "s": 25606, "text": "The original tuple 1 : ((1, 3), (4, 5), (2, 9), (1, 10))\nThe original tuple 2 : ((6, 7), (3, 9), (1, 1), (7, 3))\nThe resultant tuple after summation : ((7, 10), (7, 14), (3, 10), (8, 13))\n" }, { "code": null, "e": 26068, "s": 25797, "text": "Method #2 : Using isinstance() + zip() + loop + list comprehensionThe combination of above functions can be used to perform this particular task. In this, we check for the nesting type and perform recursion. This method can give flexibility of more than 1 level nesting." }, { "code": "# Python3 code to demonstrate working of# Addition in nested tuples# using isinstance() + zip() + loop + list comprehension # function to perform task def tup_sum(tup1, tup2): if isinstance(tup1, (list, tuple)) and isinstance(tup2, (list, tuple)): return tuple(tup_sum(x, y) for x, y in zip(tup1, tup2)) return tup1 + tup2 # initialize tuplestest_tup1 = ((1, 3), (4, 5), (2, 9), (1, 10))test_tup2 = ((6, 7), (3, 9), (1, 1), (7, 3)) # printing original tuplesprint(\"The original tuple 1 : \" + str(test_tup1))print(\"The original tuple 2 : \" + str(test_tup2)) # Addition in nested tuples# using isinstance() + zip() + loop + list comprehensionres = tuple(tup_sum(x, y) for x, y in zip(test_tup1, test_tup2)) # printing resultprint(\"The resultant tuple after summation : \" + str(res))", "e": 26866, "s": 26068, "text": null }, { "code": null, "e": 27055, "s": 26866, "text": "The original tuple 1 : ((1, 3), (4, 5), (2, 9), (1, 10))\nThe original tuple 2 : ((6, 7), (3, 9), (1, 1), (7, 3))\nThe resultant tuple after summation : ((7, 10), (7, 14), (3, 10), (8, 13))\n" }, { "code": null, "e": 27077, "s": 27055, "text": "Python tuple-programs" }, { "code": null, "e": 27084, "s": 27077, "text": "Python" }, { "code": null, "e": 27100, "s": 27084, "text": "Python Programs" }, { "code": null, "e": 27198, "s": 27100, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 27216, "s": 27198, "text": "Python Dictionary" }, { "code": null, "e": 27251, "s": 27216, "text": "Read a file line by line in Python" }, { "code": null, "e": 27283, "s": 27251, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 27325, "s": 27283, "text": "Different ways to create Pandas Dataframe" }, { "code": null, "e": 27351, "s": 27325, "text": "Python String | replace()" }, { "code": null, "e": 27394, "s": 27351, "text": "Python program to convert a list to string" }, { "code": null, "e": 27416, "s": 27394, "text": "Defaultdict in Python" }, { "code": null, "e": 27462, "s": 27416, "text": "Python | Split string into list of characters" }, { "code": null, "e": 27501, "s": 27462, "text": "Python | Get dictionary keys as a list" } ]
Copy Elements of One ArrayList to Another ArrayList with Java Collections Class
In order to copy elements of an ArrayList to another ArrayList, we use the Collections.copy() method. It is used to copy all elements of a collection into another.

Declaration − The java.util.Collections.copy() method is declared as follows −

public static <T> void copy(List<? super T> dest, List<? extends T> src)

where src is the source list object and dest is the destination list object. Note that the destination list must already contain at least as many elements as the source list; otherwise Collections.copy() throws an IndexOutOfBoundsException, which is why the list below is pre-filled with three placeholder strings.

Let us see a program to copy elements of one ArrayList to another ArrayList with the Java Collections class −

import java.util.*;
public class Example {
   public static void main (String[] args) {
      List<String> zoo = new ArrayList<String>();
      zoo.add("Zebra");
      zoo.add("Lion");
      zoo.add("Tiger");
      List<String> list = new ArrayList<String>();
      list.add("Hello");
      list.add("Hi");
      list.add("World");
      Collections.copy(list, zoo); // copying the ArrayList zoo to the ArrayList list
      System.out.println(list);
   }
}

Output:

[Zebra, Lion, Tiger]
[ { "code": null, "e": 1223, "s": 1062, "text": "In order to copy elements of ArrayList to another ArrayList, we use the Collections.copy() method. It is used to copy all elements of a collection into another." }, { "code": null, "e": 1301, "s": 1223, "text": "Declaration −The java.util.Collections.copy() method is declared as follows −" }, { "code": null, "e": 1372, "s": 1301, "text": "public static <T> void copy(List<? super T> dest,Lis<? extends T> src)" }, { "code": null, "e": 1449, "s": 1372, "text": "where src is the source list object and dest is the destination list object." }, { "code": null, "e": 1555, "s": 1449, "text": "Let us see a program to copy elements of one ArrayList to Another ArrayList with Java Collections Class −" }, { "code": null, "e": 1566, "s": 1555, "text": " Live Demo" }, { "code": null, "e": 2022, "s": 1566, "text": "import java.util.*;\npublic class Example {\n public static void main (String[] args) {\n List<String> zoo = new ArrayList<String>();\n zoo.add(\"Zebra\");\n zoo.add(\"Lion\");\n zoo.add(\"Tiger\");\n List<String> list = new ArrayList<String>();\n list.add(\"Hello\");\n list.add(\"Hi\");\n list.add(\"World\");\n Collections.copy(list,zoo); // copying the ArrayList zoo to the ArrayList list\n System.out.println(list);\n }\n}" }, { "code": null, "e": 2042, "s": 2022, "text": "Zebra, Lion, Tiger]" } ]
How to convert an integer to an ASCII value in Python?
The ASCII (more generally, Unicode) character associated with an integer is obtained with the chr() function. In Python 3, the argument can be any integer from 0 to 0x10FFFF.

>>> chr(0xaa)
'ª'
>>> chr(0xff)
'ÿ'
>>> chr(200)
'È'
>>> chr(122)
'z'
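The inverse operation, getting the integer code point back from a character, is done with ord(). A small round-trip example (the variable names are purely illustrative):

# Convert an integer to its character and back again
code = 122
char = chr(code)   # 'z'
back = ord(char)   # 122
print(char, back)

# Build a string from a list of integer codes
codes = [72, 105, 33]
print(''.join(chr(c) for c in codes))   # 'Hi!'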
[ { "code": null, "e": 1203, "s": 1062, "text": "ASCII character associated to an integer is obtained by chr() function. The argument for this function can be any number between 0 to 0xffff" }, { "code": null, "e": 1275, "s": 1203, "text": ">>> chr(0xaa)\n'a'\n>>> chr(0xff)\n'ÿ'\n>>> chr(200)\n'È'\n>>> chr(122)\n'z'" } ]
How to identify the nth sub element using xpath?
We can identify the nth sub element using xpath in the following ways −

By adding square brackets with an index.

By using the position() method in xpath.

import org.openqa.selenium.By;
import org.openqa.selenium.Keys;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import java.util.concurrent.TimeUnit;
public class SubElement {
   public static void main(String[] args) {
      System.setProperty("webdriver.chrome.driver", "C:\\Users\\ghs6kor\\Desktop\\Java\\chromedriver.exe");
      WebDriver driver = new ChromeDriver();
      String url = "https://www.tutorialspoint.com/index.htm";
      driver.get(url);
      driver.manage().window().maximize();
      driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
      // xpath using position() targeting the first element with type text
      driver.findElement(By.xpath("//input[@type='text'][position()=1]"))
         .click();
      driver.close();
   }
}

import org.openqa.selenium.By;
import org.openqa.selenium.Keys;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import java.util.List;
import java.util.concurrent.TimeUnit;
public class RowCount {
   public static void main(String[] args) {
      System.setProperty("webdriver.chrome.driver", "C:\\Users\\ghs6kor\\Desktop\\Java\\chromedriver.exe");
      WebDriver driver = new ChromeDriver();
      String url = "https://www.tutorialspoint.com/plsql/plsql_basic_syntax.htm";
      driver.get(url);
      driver.manage().window().maximize();
      driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
      // xpath with index appended to get the data from row 2 of the table
      List<WebElement> rows = driver.findElements(By.xpath("//table/tbody/tr[2]/td"));
      System.out.println("The number of cells in row 2 is " + rows.size());
      driver.close();
   }
}
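If you are using Selenium's Python bindings instead of Java, the same two approaches look like this. This is a sketch only; the URLs mirror the Java examples above, and it assumes chromedriver is available on the PATH.

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()      # assumes chromedriver is on the PATH
driver.implicitly_wait(10)

# Approach 1: position() inside the xpath, the first text input on the page
driver.get("https://www.tutorialspoint.com/index.htm")
driver.find_element(By.XPATH, "//input[@type='text'][position()=1]").click()

# Approach 2: square-bracket index, all cells of the second table row
driver.get("https://www.tutorialspoint.com/plsql/plsql_basic_syntax.htm")
cells = driver.find_elements(By.XPATH, "//table/tbody/tr[2]/td")
print("The number of cells in row 2 is", len(cells))

driver.quit()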
[ { "code": null, "e": 1134, "s": 1062, "text": "We can identify the nth sub element using xpath in the following ways −" }, { "code": null, "e": 1172, "s": 1134, "text": "By adding square brackets with index." }, { "code": null, "e": 1210, "s": 1172, "text": "By adding square brackets with index." }, { "code": null, "e": 1248, "s": 1210, "text": "By using position () method in xpath." }, { "code": null, "e": 1286, "s": 1248, "text": "By using position () method in xpath." }, { "code": null, "e": 2130, "s": 1286, "text": "import org.openqa.selenium.By;\nimport org.openqa.selenium.Keys;\nimport org.openqa.selenium.WebDriver;\nimport org.openqa.selenium.WebElement;\nimport org.openqa.selenium.chrome.ChromeDriver;\nimport java.util.concurrent.TimeUnit;\npublic class SubElement {\n public static void main(String[] args) {\n System.setProperty(\"webdriver.chrome.driver\", \"C:\\\\Users\\\\ghs6kor\\\\Desktop\\\\Java\\\\chromedriver.exe\");\n WebDriver driver = new ChromeDriver();\n String url = \"https://www.tutorialspoint.com/index.htm\";\n driver.get(url);\n driver.manage().window().maximize();\n driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);\n // xpath using position() targeting the first element with type text\n driver.findElement(By.xpath(\"//input[@type='text'][position()=1]\"))\n .click();\n driver.close();\n }\n}" }, { "code": null, "e": 3068, "s": 2130, "text": "import org.openqa.selenium.By;\nimport org.openqa.selenium.Keys;\nimport org.openqa.selenium.WebDriver;\nimport org.openqa.selenium.WebElement;\nimport org.openqa.selenium.chrome.ChromeDriver;\nimport java.util.concurrent.TimeUnit;\npublic class RowCount {\n public static void main(String[] args) {\n System.setProperty(\"webdriver.chrome.driver\", \"C:\\\\Users\\\\ghs6kor\\\\Desktop\\\\Java\\\\chromedriver.exe\");\n WebDriver driver = new ChromeDriver();\n String url = \"https://www.tutorialspoint.com/plsql/plsql_basic_syntax.htm\";\n driver.get(url);\n driver.manage().window().maximize();\n driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);\n // xpath with index appended to get the data from the row 2 of table\n List<WebElement> rows =\n driver.findElements(By.xpath(\"//table/tbody/tr[2]/td\"));\n System.out.println(“The number of data in row 2 is “+ rows.size());\n driver.close();\n }\n}" } ]
A Step-by-Step Tutorial for Conducting Sentiment Analysis | by Zijing Zhu | Towards Data Science
It is estimated that 80% of the world’s data is unstructured. Thus deriving information from unstructured data is an essential part of data analysis. Text mining is the process of deriving valuable insights from unstructured text data, and sentiment analysis is one application of text mining. It uses natural language processing and machine learning techniques to understand and classify subjective emotions from text data. In business settings, sentiment analysis is widely used in understanding customer reviews, detecting spam from emails, etc. This article is the first part of the tutorial that introduces the specific techniques used to conduct sentiment analysis with Python. To illustrate the procedures better, I will use one of my projects as an example, where I conduct news sentiment analysis on WTI crude oil future prices. I will present the important steps along with the corresponding Python code.

Some background information

The crude oil future prices have large short-run fluctuations. While the long-run equilibrium of any product is determined by the demand and supply conditions, the short-run fluctuations in prices are reflections of the market confidence and expectations toward this product. In this project, I use crude oil-related news articles to capture constantly updating market confidence and expectations, and predict the change of crude oil future prices by conducting sentiment analysis on news articles. Here are the steps to complete this analysis:

1. Collecting data: web scraping news articles
2. Preprocessing text data (this article)
3. Text vectorization: TFIDF
4. Sentiment analysis with logistic regressions
5. Deploying the model at Heroku using a Python Flask web app

I will discuss the second part, which is preprocessing the text data, in this article. If you are interested in other parts, please follow the links to read more (coming up).

Preprocessing text data

I use tools from NLTK, Spacy, and some regular expressions to preprocess the news articles. To import the libraries and use the pre-built models in Spacy, you can use the following code:

import spacy
import nltk

# Initialize spacy 'en' model, keeping only the component needed for lemmatization, and create an engine:
nlp = spacy.load('en', disable=['parser', 'ner'])

Afterwards, I use pandas to read in the data. The “Subject” and “Body” are the columns that I will apply the text preprocessing procedures on. I preprocessed the news articles following the standard text mining procedures to extract useful features from the news contents, including tokenization, removing stopwords, and lemmatization.

Tokenization

The first step of preprocessing text data is to break every sentence into individual words, which is called tokenization. Taking individual words rather than sentences breaks down the connections between words. However, it is a common method used to analyze large sets of text data. It is efficient and convenient for computers to analyze the text data by examining what words appear in an article and how many times these words appear, and it is sufficient to give insightful results.

Take the first news article in my dataset as an example. You can use the NLTK tokenizer, or you can use Spacy (remember nlp is the Spacy engine defined above); a short sketch of both is given right after this paragraph. After tokenization, each news article will transform into a list of words, symbols, digits, and punctuation. You can specify whether you want to transform every word into lowercase as well. The next step is to remove useless information, for example, symbols, digits, and punctuation.
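Here is a minimal sketch of the two tokenization options mentioned above (the original article showed them as screenshots); the variable text stands in for the body of one news article and is an assumption for illustration:

import nltk
import spacy

nltk.download('punkt')                               # tokenizer models, needed once
nlp = spacy.load('en', disable=['parser', 'ner'])

text = "Oil prices rose sharply after the report was released."   # placeholder article text

# Option 1: NLTK word tokenizer
nltk_tokens = nltk.word_tokenize(text)

# Option 2: Spacy tokenizer via the nlp engine
spacy_tokens = [str(token) for token in nlp(text)]

print(nltk_tokens)
print(spacy_tokens)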
The next step is to remove useless information such as symbols, digits, and punctuation. I will use Spacy combined with regex to remove them:

import re

# tokenization and remove punctuation
words = [str(token) for token in nlp(text) if not token.is_punct]

# remove digits and other symbols except "@" -- used to remove email
words = [re.sub(r"[^A-Za-z@]", "", word) for word in words]

# remove websites and email addresses
words = [re.sub(r"\S+com", "", word) for word in words]
words = [re.sub(r"\S+@\S+", "", word) for word in words]

# remove empty spaces
words = [word for word in words if word != ' ']

After applying the transformations above, the original news article is reduced to a much cleaner list of words.

Stopwords

After these transformations the news article is much cleaner, but we still see some words we do not desire, for example "and", "we", etc. The next step is to remove the useless words, namely the stopwords. Stopwords are words that frequently appear in many articles but carry little meaning on their own. Examples of stopwords are 'I', 'the', 'a', 'of'. These are words that will not interfere with the understanding of articles if removed. To remove the stopwords, we can import the stopword list from the NLTK library. Besides, I also include other lists of stopwords that are widely used in economic analysis, including dates and times, more general words that are not economically meaningful, etc. This is how I construct the list of stopwords:

# import other lists of stopwords
with open('StopWords_GenericLong.txt', 'r') as f:
    x_gl = f.readlines()
with open('StopWords_Names.txt', 'r') as f:
    x_n = f.readlines()
with open('StopWords_DatesandNumbers.txt', 'r') as f:
    x_d = f.readlines()

# import nltk stopwords
stopwords = nltk.corpus.stopwords.words('english')

# combine all stopwords
[stopwords.append(x.rstrip()) for x in x_gl]
[stopwords.append(x.rstrip()) for x in x_n]
[stopwords.append(x.rstrip()) for x in x_d]

# change all stopwords into lowercase
stopwords_lower = [s.lower() for s in stopwords]

and then exclude the stopwords from the news articles:

words = [word.lower() for word in words if word.lower() not in stopwords_lower]

Applied to the previous example, this leaves only the informative words.

Lemmatization

After removing stopwords, along with symbols, digits, and punctuation, each news article is transformed into a list of meaningful words. However, to count the appearance of each word, it is essential to remove grammatical tense and transform each word into its original form. For example, if we want to calculate how many times the word 'open' appears in a news article, we need to count the appearances of 'open', 'opens', and 'opened'. Thus, lemmatization is an essential step for text transformation. Another way of converting words to their original form is called stemming. Here is the difference between them: lemmatization takes a word to its original lemma, while stemming takes the linguistic root of a word. I choose lemmatization over stemming because after stemming some words become hard to understand, and for interpretation purposes the lemma is better than the linguistic root. Lemmatization is very easy to implement with Spacy: I simply call the .lemma_ attribute of each token produced by the nlp engine defined at the beginning. After lemmatization, each news article is transformed into a list of words that are all in their original forms.
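To make the lemmatization-versus-stemming contrast concrete, here is a small hedged sketch (it assumes NLTK's PorterStemmer and the Spacy engine defined above; the example word is hypothetical):

from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
print(stemmer.stem("studies"))              # 'studi'  - a root that is hard to read
print([t.lemma_ for t in nlp("studies")])   # ['study'] - a lemma that is still a real word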
Summarize the steps

Let's summarize the steps in a function and apply the function to all articles:

def text_preprocessing(str_input):
    # tokenization, remove punctuation, lemmatization
    words = [token.lemma_ for token in nlp(str_input) if not token.is_punct]

    # remove symbols, websites, email addresses
    words = [re.sub(r"[^A-Za-z@]", "", word) for word in words]
    words = [re.sub(r"\S+com", "", word) for word in words]
    words = [re.sub(r"\S+@\S+", "", word) for word in words]
    words = [word for word in words if word != ' ']
    words = [word for word in words if len(word) != 0]

    # remove stopwords
    words = [word.lower() for word in words if word.lower() not in stopwords_lower]

    # combine the list into one string
    string = " ".join(words)
    return string

The function above, text_preprocessing(), combines all the text preprocessing steps; applied to the first news article it returns a single cleaned string. Before generalizing to all news articles, it is important to apply it to random news articles and see how it works, following the code below:

import random

# randint is inclusive on both ends, so subtract 1 to stay within bounds
index = random.randint(0, df.shape[0] - 1)
text_preprocessing(df.iloc[index]['Body'])

If there are some extra words you want to exclude for this particular project, or some extra redundant information you want to remove, you can always revise the function before applying it to all news articles. If a randomly selected news article looks good before and after tokenization, stopword removal, and lemmatization, you can apply the function to all news articles:

df['news_cleaned'] = df['Body'].apply(text_preprocessing)
df['subject_cleaned'] = df['Subject'].apply(text_preprocessing)

Some remarks

Text preprocessing is a very important part of text mining and sentiment analysis. There are many ways of preprocessing unstructured data to make it readable for computers for future analysis. For the next step, I will discuss the vectorizer I used to transform the text data into a sparse matrix so that it can be used as input for quantitative analysis.

If your analysis is simple and does not require a lot of customization in preprocessing the text data, the vectorizers usually have embedded functions to conduct the basic steps, like tokenization and removing stopwords. Or you can write your own function and specify your customized function in the vectorizer, so you can preprocess and vectorize your data at the same time. If you go this route, your function needs to return a list of tokenized words rather than a long string. However, personally speaking, I prefer to preprocess the text data first, before vectorization. In this way I keep monitoring the performance of my function, and it is actually faster, especially if you have a large data set. I will discuss the transformation process in my next article.
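For readers who do want to combine preprocessing and vectorization, here is a rough, hedged sketch of that second option (it assumes scikit-learn's TfidfVectorizer, which this part of the tutorial does not itself cover, and the helper name tokenize_for_vectorizer is hypothetical); note that the function returns a list of tokens rather than a joined string:

from sklearn.feature_extraction.text import TfidfVectorizer

def tokenize_for_vectorizer(str_input):
    # same steps as text_preprocessing(), but returning the token list
    words = [token.lemma_ for token in nlp(str_input) if not token.is_punct]
    return [w.lower() for w in words if w.lower() not in stopwords_lower]

vectorizer = TfidfVectorizer(tokenizer=tokenize_for_vectorizer, lowercase=False)
# X = vectorizer.fit_transform(df['Body'])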
How to change a column in an R data frame with some conditions?
Sometimes the values of a particular column are related to another column, and we might need to change the values of that column based on some conditions. We need to make this change to check how the change in the values of one column affects the relationship between the two columns under consideration. In R, we can use single square brackets to make the changes in the column values.

Consider the below data frame −

> set.seed(1)
> x1<-rpois(20,5)
> x2<-rpois(20,2)
> x3<-runif(20,2,5)
> df<-data.frame(x1,x2,x3)
> df
   x1 x2       x3
1   4  4 4.462839
2   4  1 3.941181
3   5  2 4.348798
4   8  0 3.659109
5   3  1 3.589159
6   8  1 4.368069
7   9  0 2.069994
8   6  1 3.431690
9   6  4 4.196941
10  2  1 4.078195
11  3  2 3.432859
12  3  2 4.583628
13  6  2 3.314291
14  4  1 2.734392
15  7  3 2.212037
16  5  2 2.298398
17  6  3 2.948815
18 11  0 3.555903
19  4  3 3.986015
20  7  2 3.220491

Suppose we want to subtract 2 from the column 2 (x2) values if the column 3 values are greater than 3; this can be done as shown below −

> df$x2[df$x3 > 3] <- (df$x2[df$x3 > 3] - 2)
> df
   x1        x2       x3
1   4 -2.375000 4.462839
2   4 -2.562500 3.941181
3   5 -2.400000 4.348798
4   8 -2.281250 3.659109
5   3 -2.777778 3.589159
6   8 -2.265625 4.368069
7   9  0.000000 2.069994
8   6 -2.234568 3.431690
9   6 -2.277778 4.196941
10  2 -2.361111 4.078195
11  3 -3.000000 3.432859
12  3 -2.666667 4.583628
13  6 -2.666667 3.314291
14  4  1.000000 2.734392
15  7  3.000000 2.212037
16  5  2.000000 2.298398
17  6  3.000000 2.948815
18 11 -2.388889 3.555903
19  4 -2.437500 3.986015
20  7 -2.285714 3.220491

If we want to multiply the column 1 (x1) values by 2 for the rows where x3 is less than 3, it can be done as shown below −

> df$x1[df$x3 < 3] <- (df$x1[df$x3 < 3]*2)
> df
   x1         x2       x3
1   4 -1.0937500 4.462839
2   4 -1.1406250 3.941181
3   5 -0.8800000 4.348798
4   8 -0.5351562 3.659109
5   3 -1.5925926 3.589159
6   8 -0.5332031 4.368069
7  18  0.0000000 2.069994
8   6 -0.4705075 3.431690
9   6 -0.7129630 4.196941
10  2 -0.7268519 4.078195
11  3 -2.5000000 3.432859
12  3 -1.5555556 4.583628
13  6 -1.5555556 3.314291
14  8  1.0000000 2.734392
15 14  3.0000000 2.212037
16 10  2.0000000 2.298398
17 12  3.0000000 2.948815
18 11 -0.7314815 3.555903
19  4 -1.1093750 3.986015
20  7 -0.6122449 3.220491
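A hedged alternative that is not part of the example above: the same conditional updates can also be written with ifelse(), which keeps the condition and both outcomes visible in one expression −

> df$x2 <- ifelse(df$x3 > 3, df$x2 - 2, df$x2)
> df$x1 <- ifelse(df$x3 < 3, df$x1 * 2, df$x1)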
MySQL LIKE IN()?
You can implement MySQL LIKE IN() with the help of a regular expression (REGEXP). The syntax is as follows −

select *from yourTableName where yourColumnName regexp 'value1|value2|value3|...|valueN';

To understand the above logic, you need to create a table. Let us first create a table −

mysql> create table INDemo
   -> (
   -> Id int,
   -> Name varchar(100)
   -> );
Query OK, 0 rows affected (0.90 sec)

Insert some records into the table. The query is as follows −

mysql> insert into INDemo values(100,'John');
Query OK, 1 row affected (0.13 sec)
mysql> insert into INDemo values(104,'Carol');
Query OK, 1 row affected (0.18 sec)
mysql> insert into INDemo values(108,'David');
Query OK, 1 row affected (0.19 sec)
mysql> insert into INDemo values(112,'Smith');
Query OK, 1 row affected (0.12 sec)
mysql> insert into INDemo values(116,'Johnson');
Query OK, 1 row affected (0.17 sec)
mysql> insert into INDemo values(120,'Sam');
Query OK, 1 row affected (0.16 sec)

Now we can display all the records with the help of a SELECT statement. The query is as follows −

mysql> select *from INDemo;

The following is the output −

+------+---------+
| Id   | Name    |
+------+---------+
| 100  | John    |
| 104  | Carol   |
| 108  | David   |
| 112  | Smith   |
| 116  | Johnson |
| 120  | Sam     |
+------+---------+
6 rows in set (0.00 sec)

Now use REGEXP so that it works like IN(), applying the syntax discussed at the beginning. The query is as follows −

mysql> select *from INDemo where Id regexp '112|116|100';

The following is the output −

+------+---------+
| Id   | Name    |
+------+---------+
| 100  | John    |
| 112  | Smith   |
| 116  | Johnson |
+------+---------+
3 rows in set (0.21 sec)

You will get the same output with IN(). Now, let us check it with the help of IN(). The query is as follows −

mysql> select *from INDemo where Id IN(112,116,100);

Here is the output −

+------+---------+
| Id   | Name    |
+------+---------+
| 100  | John    |
| 112  | Smith   |
| 116  | Johnson |
+------+---------+
3 rows in set (0.00 sec)

As you can see in the above output, we are getting the same result.
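One caveat, added here as a hedged note rather than taken from the example above: REGEXP matches anywhere inside the value, so an Id such as 1000 or 2112 would also match '112|116|100'. To make the regular expression behave exactly like IN(), anchor the alternatives −

mysql> select *from INDemo where Id regexp '^(112|116|100)$';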
Fortran - do while Loop Construct
It repeats a statement or a group of statements while a given condition is true. It tests the condition before executing the loop body.

do while (logical expr)
   statements
end do

program factorial
implicit none

   ! define variables
   integer :: nfact = 1
   integer :: n = 1

   ! compute factorials
   do while (n <= 10)
      nfact = nfact * n
      n = n + 1
      print*, n, " ", nfact
   end do
end program factorial

When the above code is compiled and executed, it produces the following result −

2 1
3 2
4 6
5 24
6 120
7 720
8 5040
9 40320
10 362880
11 3628800
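Note that n is incremented before the print, so each output line shows n one step ahead of the factorial beside it (the first line is "2 1"). A hedged variant of the loop, not taken from the original page, that prints n together with n factorial on the same step:

do while (n <= 10)
   nfact = nfact * n
   print*, n, " ", nfact
   n = n + 1
end do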
HTML Entity Parser - GeeksforGeeks
23 Jul, 2020

Given a string str which contains various HTML entities, the task is to replace these entities with their corresponding special characters.

An HTML entity parser takes HTML code as input and replaces all the entities of the special characters by the characters themselves. For example, the entity for the quotation mark is &quot; and its symbol character is ". Below are the HTML entities with their corresponding special characters:

&quot;   "
&apos;   '
&amp;    &
&gt;     >
&lt;     <
&frasl;  /
&nbsp;   (space)
&reg;    ®
&copy;   ©

Examples:

Input: str = "17 &gt; 25 and 25 &lt; 17"
Output: 17 > 25 and 25 < 17
Explanation: In the above example &gt; is replaced by the corresponding special character > and &lt; is replaced by <.

Input: str = "&copy; is symbol of copyright"
Output: © is symbol of copyright
Explanation: In the above example &copy; is replaced by the corresponding special character ©.

Method 1 – using unordered_map. Below are the steps:

1. Store the HTML entities with their characters in a map.
2. Traverse the given string and, if the character '&' is encountered, find which HTML entity is present after this ampersand.
3. Add the corresponding character for the entity to the output string.
4. Print the output string as the result.

Below is the implementation of the above approach:

C++

// C++ program for the above approach
#include <iostream>
#include <string>
#include <unordered_map>
using namespace std;

class GfG {
public:
    unordered_map<string, string> m;

public:
    // Associating each HTML entity with
    // its special character
    void initializeMap()
    {
        m["&quot;"] = "\"";
        m["&apos;"] = "'";
        m["&amp;"] = "&";
        m["&gt;"] = ">";
        m["&lt;"] = "<";
        m["&frasl;"] = "/";
        m["&nbsp;"] = " ";
        m["&reg;"] = "®";
        m["&copy;"] = "©";
    }

public:
    // Function that converts the given
    // HTML entities to the parsed string
    string parseInputString(string input)
    {
        // Output string
        string output = "";

        // Traverse the string
        for (int i = 0; i < input.size(); i++) {

            // If an ampersand occurs
            if (input[i] == '&') {
                string buffer;
                while (i < input.size()) {
                    buffer = buffer + input[i];

                    // If an entity is found
                    if (input[i] == ';'
                        && m.find(buffer) != m.end()) {

                        // Append the parsed character
                        output = output + m[buffer];

                        // Clear the buffer
                        buffer = "";
                        i++;
                        break;
                    }
                    else {
                        i++;
                    }
                }
                if (i >= input.size()) {
                    output = output + buffer;
                    break;
                }
                i--;
            }
            else {
                output = output + input[i];
            }
        }

        // Return the parsed string
        return output;
    }
};

// Driver Code
int main()
{
    // Given String
    string input = "17 &gt; 25 and 25 &lt; 17";

    GfG g;

    // Initialise the entity map
    g.initializeMap();

    // Function Call
    cout << g.parseInputString(input);
    return 0;
}

Output:
17 > 25 and 25 < 17

Time Complexity: O(N)
Auxiliary Space: O(N)

Method 2 – using Pattern Matching. Below are the steps:

1. Traverse the given string str.
2. While traversing, if the character '&' is encountered, find which HTML entity is present after this ampersand.
3. Add the corresponding character for the matched entity to the output string.
4. Print the output string as the result after traversing the whole string.
Below is the implementation of the above approach:

C++

// C++ program to parse the HTML entities
#include <iostream>
#include <string>
using namespace std;

class GfG {
public:
    string parseInputString(string input)
    {
        // To store the parsed string
        string output = "";

        for (int i = 0; i < input.size(); i++) {

            // Matching the pattern of an HTML entity
            if (input[i] == '&') {
                string buffer;
                while (i < input.size()) {
                    buffer = buffer + input[i];

                    // Check match for (")
                    if (input[i] == ';' && buffer == "&quot;") {
                        output = output + "\"";
                        buffer = "";
                        i++;
                        break;
                    }
                    // Check match for (')
                    else if (input[i] == ';' && buffer == "&apos;") {
                        output = output + "'";
                        buffer = "";
                        i++;
                        break;
                    }
                    // Check match for (&)
                    else if (input[i] == ';' && buffer == "&amp;") {
                        output = output + "&";
                        buffer = "";
                        i++;
                        break;
                    }
                    // Check match for (>)
                    else if (input[i] == ';' && buffer == "&gt;") {
                        output = output + ">";
                        buffer = "";
                        i++;
                        break;
                    }
                    // Check match for (<)
                    else if (input[i] == ';' && buffer == "&lt;") {
                        output = output + "<";
                        buffer = "";
                        i++;
                        break;
                    }
                    // Check match for (/)
                    else if (input[i] == ';' && buffer == "&frasl;") {
                        output = output + "/";
                        buffer = "";
                        i++;
                        break;
                    }
                    // Check match for (" ")
                    else if (input[i] == ';' && buffer == "&nbsp;") {
                        output = output + " ";
                        buffer = "";
                        i++;
                        break;
                    }
                    // Check match for (®)
                    else if (input[i] == ';' && buffer == "&reg;") {
                        output = output + "®";
                        buffer = "";
                        i++;
                        break;
                    }
                    // Check match for (©)
                    else if (input[i] == ';' && buffer == "&copy;") {
                        output = output + "©";
                        buffer = "";
                        i++;
                        break;
                    }
                    else {
                        i++;
                    }
                }
                if (i >= input.size()) {
                    output = output + buffer;
                    break;
                }
                i--;
            }
            else {
                output = output + input[i];
            }
        }

        // Return the parsed string
        return output;
    }
};

// Driver Code
int main()
{
    // Given String
    string input = "17 &gt; 25 and 25 &lt; 17";

    GfG g;

    // Function Call
    cout << g.parseInputString(input);
    return 0;
}

Output:
17 > 25 and 25 < 17

Time Complexity: O(N)
Auxiliary Space: O(N)

Method 3 – using Regular Expressions. Below are the steps:

1. Store each entity with its mapped value in a map M.
2. For each key in the map, create a regular expression using: regex e(key);
3. Replace that regular expression with its mapped value in the map M using: regex_replace(str, e, value); where str is the input string, e is the expression formed in the above step, and value is the value mapped to expression e in the map.
4. Repeat the above steps until all expressions are replaced.
Below is the implementation of the above approach:

C++

// C++ program for the above approach
#include <iostream>
#include <regex>
#include <string>
#include <unordered_map>
using namespace std;

// Given expressions with their mapped values
const unordered_map<string, string> m = {
    { "&quot;", "\"" }, { "&apos;", "'" },
    { "&amp;", "&" },   { "&gt;", ">" },
    { "&lt;", "<" },    { "&frasl;", "/" }
};

// Function that converts the given
// HTML entities to the parsed string
string parseInputString(string input)
{
    for (auto& it : m) {

        // Create a regex expression
        regex e(it.first);

        // Replace the above expression
        // with the mapped value using
        // regex_replace()
        input = regex_replace(input, e, it.second);
    }

    // Return the parsed string
    return input;
}

// Driver Code
int main()
{
    // Given String
    string input = "17 &gt; 25 and 25 &lt; 17";

    // Function Call
    cout << parseInputString(input);
    return 0;
}

Output:
17 > 25 and 25 < 17

Time Complexity: O(N)
Auxiliary Space: O(N)
5 Different methods to find length of a string in C++ - GeeksforGeeks
16 Jan, 2020

A string is a sequence of characters or an array of characters. The declaration and definition of a string using an array of chars is similar to the declaration and definition of an array of any other data type.

Important points:

1. The constructor of the string class will set it to the C-style string, which ends at the '\0'.
2. The size() function is consistent with other STL containers (like vector, map, etc.) and length() is consistent with most people's intuitive notion of character strings like a word, sentence or paragraph. We say a paragraph's length, not its size, so length() is there to make things more readable.

Methods to find the length of a string:

1. Using string::size: the method string::size returns the length of the string, in terms of bytes.
2. Using string::length: the method string::length returns the length of the string, in terms of bytes. Both string::size and string::length are synonyms and return the exact same value.
3. Using the C library function strlen(): the C library function size_t strlen(const char *str) computes the length of the string str up to, but not including, the terminating null character.
4. Using a while loop: initialize a counter to 0 and increment it from the start of the string to its end (the terminating null character).
5. Using a for loop: initialize a counter to 0 and increment it from the start of the string to its end (the terminating null character).

Examples:

Input: "Geeksforgeeks"
Output: 13

Input: "Geeksforgeeks\0 345"
Output: 13

Input: "Geeksforgeeks \0 345"
Output: 14

// CPP program to illustrate
// different methods to find the length
// of a string
#include <iostream>
#include <string.h>
using namespace std;

int main()
{
    // String object
    string str = "GeeksforGeeks";

    // 1. size of string object using size() method
    cout << str.size() << endl;

    // 2. size of string object using length() method
    cout << str.length() << endl;

    // 3. old style:
    // size of string object using the strlen function
    cout << strlen(str.c_str()) << endl;

    // The constructor of string will set it to the
    // C-style string, which ends at the '\0'

    // 4. size of string object using a while loop
    // while 'NOT NULL'
    int i = 0;
    while (str[i])
        i++;
    cout << i << endl;
    // 5. size of string object using a for loop
    // for (; NOT NULL; )
    for (i = 0; str[i]; i++)
        ;
    cout << i << endl;

    return 0;
}

Output:
13
13
13
13
13
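To see why the second and third example inputs above give 13 and 14, here is a small hedged snippet (hypothetical, not part of the program above): the string constructor stops copying from a C-style literal at the first '\0', so everything after it is ignored.

string s1 = "Geeksforgeeks\0 345";     // constructor stops at '\0'
string s2 = "Geeksforgeeks \0 345";    // one extra space before '\0'
cout << s1.size() << " " << s2.size(); // prints: 13 14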
D3.js linkHorizontal() Method - GeeksforGeeks
14 Sep, 2020 The d3.linkHorizontal() method returns a new link generator with Horizontal tangents. It is typically used when the root is on the top/bottom edge with the children going down/up. Syntax: var link = d3.linkHorizontal() .x(function(d) { return d.x; }) .y(function(d) { return d.y; }); Parameters: This function does not take any parameter. Return Value: This method returns a new link generator. Example: HTML <!DOCTYPE html><html><head> <meta charset="utf-8"> <script src= "https://d3js.org/d3.v5.min.js"> </script></head> <body> <h1 style="text-align: center; color: green;"> GeeksforGeeks </h1> <h3 style="text-align: center;"> D3.js | linkHorizontal() Method </h3> <center> <svg id="gfg" width="200" height="200"></svg> </center> <script> var data = [ {source: [100,25], target: [200,175]}, {source: [100,25], target: [25,175]}]; // Horizontal link generator var link = d3.linkHorizontal() .source(function(d) { return [d.source[1], d.source[0]]; }) .target(function(d) { return [d.target[1], d.target[0]]; }); //Adding the link paths d3.select("#gfg") .selectAll("path") .data(data) .join("path") .attr("d", link) .classed("link", true); </script></body> </html> Output: D3.js JavaScript Web Technologies Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Convert a string to an integer in JavaScript Difference between var, let and const keywords in JavaScript Differences between Functional Components and Class Components in React Difference Between PUT and PATCH Request Node.js | fs.writeFileSync() Method Roadmap to Become a Web Developer in 2022 Installation of Node.js on Linux How to fetch data from an API in ReactJS ? Top 10 Projects For Beginners To Practice HTML and CSS Skills How to insert spaces/tabs in text using HTML/CSS?
[ { "code": null, "e": 24252, "s": 24224, "text": "\n14 Sep, 2020" }, { "code": null, "e": 24432, "s": 24252, "text": "The d3.linkHorizontal() method returns a new link generator with Horizontal tangents. It is typically used when the root is on the top/bottom edge with the children going down/up." }, { "code": null, "e": 24440, "s": 24432, "text": "Syntax:" }, { "code": null, "e": 24546, "s": 24440, "text": "var link = d3.linkHorizontal()\n .x(function(d) { return d.x; })\n .y(function(d) { return d.y; });\n\n" }, { "code": null, "e": 24601, "s": 24546, "text": "Parameters: This function does not take any parameter." }, { "code": null, "e": 24657, "s": 24601, "text": "Return Value: This method returns a new link generator." }, { "code": null, "e": 24666, "s": 24657, "text": "Example:" }, { "code": null, "e": 24671, "s": 24666, "text": "HTML" }, { "code": "<!DOCTYPE html><html><head> <meta charset=\"utf-8\"> <script src= \"https://d3js.org/d3.v5.min.js\"> </script></head> <body> <h1 style=\"text-align: center; color: green;\"> GeeksforGeeks </h1> <h3 style=\"text-align: center;\"> D3.js | linkHorizontal() Method </h3> <center> <svg id=\"gfg\" width=\"200\" height=\"200\"></svg> </center> <script> var data = [ {source: [100,25], target: [200,175]}, {source: [100,25], target: [25,175]}]; // Horizontal link generator var link = d3.linkHorizontal() .source(function(d) { return [d.source[1], d.source[0]]; }) .target(function(d) { return [d.target[1], d.target[0]]; }); //Adding the link paths d3.select(\"#gfg\") .selectAll(\"path\") .data(data) .join(\"path\") .attr(\"d\", link) .classed(\"link\", true); </script></body> </html>", "e": 25697, "s": 24671, "text": null }, { "code": null, "e": 25705, "s": 25697, "text": "Output:" }, { "code": null, "e": 25711, "s": 25705, "text": "D3.js" }, { "code": null, "e": 25722, "s": 25711, "text": "JavaScript" }, { "code": null, "e": 25739, "s": 25722, "text": "Web Technologies" }, { "code": null, "e": 25837, "s": 25739, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 25882, "s": 25837, "text": "Convert a string to an integer in JavaScript" }, { "code": null, "e": 25943, "s": 25882, "text": "Difference between var, let and const keywords in JavaScript" }, { "code": null, "e": 26015, "s": 25943, "text": "Differences between Functional Components and Class Components in React" }, { "code": null, "e": 26056, "s": 26015, "text": "Difference Between PUT and PATCH Request" }, { "code": null, "e": 26092, "s": 26056, "text": "Node.js | fs.writeFileSync() Method" }, { "code": null, "e": 26134, "s": 26092, "text": "Roadmap to Become a Web Developer in 2022" }, { "code": null, "e": 26167, "s": 26134, "text": "Installation of Node.js on Linux" }, { "code": null, "e": 26210, "s": 26167, "text": "How to fetch data from an API in ReactJS ?" }, { "code": null, "e": 26272, "s": 26210, "text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills" } ]
Python MongoDB - Find
You can read/retrieve stored documents from MongoDB using the find() method. This method retrieves and displays all the documents in MongoDB in a non-structured way. Following is the syntax of the find() method. >db.CollectionName.find() Assume we have inserted 3 documents into a database named testDB in a collection named sample using the following queries − > use testDB > db.createCollection("sample") > data = [ {"_id": "1001", "name" : "Ram", "age": "26", "city": "Hyderabad"}, {"_id": "1002", "name" : "Rahim", "age" : 27, "city" : "Bangalore" }, {"_id": "1003", "name" : "Robert", "age" : 28, "city" : "Mumbai" } ] > db.sample.insert(data) You can retrieve the inserted documents using the find() method as − > use testDB switched to db testDB > db.sample.find() { "_id" : "1001", "name" : "Ram", "age" : "26", "city" : "Hyderabad" } { "_id" : "1002", "name" : "Rahim", "age" : 27, "city" : "Bangalore" } { "_id" : "1003", "name" : "Robert", "age" : 28, "city" : "Mumbai" } > You can also retrieve the first document in the collection using the findOne() method as − > db.sample.findOne() { "_id" : "1001", "name" : "Ram", "age" : "26", "city" : "Hyderabad" } The find_one() method of pymongo is used to retrieve a single document based on your query; in case of no matches this method returns nothing, and if you don't use any query it returns the first document of the collection. This method comes in handy whenever you need to retrieve only one document of a result or if you are sure that your query returns only one document. The following Python example retrieves the first document of a collection − from pymongo import MongoClient #Creating a pymongo client client = MongoClient('localhost', 27017) #Getting the database instance db = client['mydatabase'] #Creating a collection coll = db['example'] #Inserting document into a collection data = [ {"_id": "101", "name": "Ram", "age": "26", "city": "Hyderabad"}, {"_id": "102", "name": "Rahim", "age": "27", "city": "Bangalore"}, {"_id": "103", "name": "Robert", "age": "28", "city": "Mumbai"} ] res = coll.insert_many(data) print("Data inserted ......") print(res.inserted_ids) #Retrieving the first record using the find_one() method print("First record of the collection: ") print(coll.find_one()) #Retrieving a record with id 103 using the find_one() method print("Record whose id is 103: ") print(coll.find_one({"_id": "103"})) Data inserted ...... ['101', '102', '103'] First record of the collection: {'_id': '101', 'name': 'Ram', 'age': '26', 'city': 'Hyderabad'} Record whose id is 103: {'_id': '103', 'name': 'Robert', 'age': '28', 'city': 'Mumbai'} To get multiple documents in a single query (a single call of the find() method), you can use the find() method of pymongo. If you haven't passed any query, this returns all the documents of a collection and, if you have passed a query to this method, it returns all the matched documents.
#Getting the database instance db = client['myDB'] #Creating a collection coll = db['example'] #Inserting document into a collection data = [ {"_id": "101", "name": "Ram", "age": "26", "city": "Hyderabad"}, {"_id": "102", "name": "Rahim", "age": "27", "city": "Bangalore"}, {"_id": "103", "name": "Robert", "age": "28", "city": "Mumbai"} ] res = coll.insert_many(data) print("Data inserted ......") #Retrieving all the records using the find() method print("Records of the collection: ") for doc1 in coll.find(): print(doc1) #Retrieving records with age greater than 26 using the find() method print("Record whose age is more than 26: ") for doc2 in coll.find({"age":{"$gt":"26"}}): print(doc2) Data inserted ...... Records of the collection: {'_id': '101', 'name': 'Ram', 'age': '26', 'city': 'Hyderabad'} {'_id': '102', 'name': 'Rahim', 'age': '27', 'city': 'Bangalore'} {'_id': '103', 'name': 'Robert', 'age': '28', 'city': 'Mumbai'} Record whose age is more than 26: {'_id': '102', 'name': 'Rahim', 'age': '27', 'city': 'Bangalore'} {'_id': '103', 'name': 'Robert', 'age': '28', 'city': 'Mumbai'}
[ { "code": null, "e": 3371, "s": 3205, "text": "You can read/retrieve stored documents from MongoDB using the find() method. This method retrieves and displays all the documents in MongoDB in a non-structured way." }, { "code": null, "e": 3417, "s": 3371, "text": "Following is the syntax of the find() method." }, { "code": null, "e": 3444, "s": 3417, "text": ">db.CollectionName.find()\n" }, { "code": null, "e": 3568, "s": 3444, "text": "Assume we have inserted 3 documents into a database named testDB in a collection named sample using the following queries −" }, { "code": null, "e": 3865, "s": 3568, "text": "> use testDB\n> db.createCollection(\"sample\")\n> data = [\n {\"_id\": \"1001\", \"name\" : \"Ram\", \"age\": \"26\", \"city\": \"Hyderabad\"},\n {\"_id\": \"1002\", \"name\" : \"Rahim\", \"age\" : 27, \"city\" : \"Bangalore\" },\n {\"_id\": \"1003\", \"name\" : \"Robert\", \"age\" : 28, \"city\" : \"Mumbai\" }\n]\n> db.sample.insert(data)\n" }, { "code": null, "e": 3934, "s": 3865, "text": "You can retrieve the inserted documents using the find() method as −" }, { "code": null, "e": 4202, "s": 3934, "text": "> use testDB\nswitched to db testDB\n> db.sample.find()\n{ \"_id\" : \"1001\", \"name\" : \"Ram\", \"age\" : \"26\", \"city\" : \"Hyderabad\" }\n{ \"_id\" : \"1002\", \"name\" : \"Rahim\", \"age\" : 27, \"city\" : \"Bangalore\" }\n{ \"_id\" : \"1003\", \"name\" : \"Robert\", \"age\" : 28, \"city\" : \"Mumbai\" }\n>\n" }, { "code": null, "e": 4289, "s": 4202, "text": "You can also retrieve first document in the collection using the findOne() method as −" }, { "code": null, "e": 4383, "s": 4289, "text": "> db.sample.findOne()\n{ \"_id\" : \"1001\", \"name\" : \"Ram\", \"age\" : \"26\", \"city\" : \"Hyderabad\" }\n" }, { "code": null, "e": 4607, "s": 4383, "text": "The find_One() method of pymongo is used to retrieve a single document based on your query, in case of no matches this method returns nothing and if you doesn’t use any query it returns the first document of the collection." }, { "code": null, "e": 4754, "s": 4607, "text": "This method comes handy whenever you need to retrieve only one document of a result or, if you are sure that your query returns only one document." 
}, { "code": null, "e": 4821, "s": 4754, "text": "Following python example retrieve first document of a collection −" }, { "code": null, "e": 5619, "s": 4821, "text": "from pymongo import MongoClient\n\n#Creating a pymongo client\nclient = MongoClient('localhost', 27017)\n\n#Getting the database instance\ndb = client['mydatabase']\n\n#Creating a collection\ncoll = db['example']\n\n#Inserting document into a collection\ndata = [\n {\"_id\": \"101\", \"name\": \"Ram\", \"age\": \"26\", \"city\": \"Hyderabad\"},\n {\"_id\": \"102\", \"name\": \"Rahim\", \"age\": \"27\", \"city\": \"Bangalore\"},\n {\"_id\": \"103\", \"name\": \"Robert\", \"age\": \"28\", \"city\": \"Mumbai\"}\n]\nres = coll.insert_many(data)\nprint(\"Data inserted ......\")\nprint(res.inserted_ids)\n\n#Retrieving the first record using the find_one() method\nprint(\"First record of the collection: \")\nprint(coll.find_one())\n\n#Retrieving a record with is 103 using the find_one() method\nprint(\"Record whose id is 103: \")\nprint(coll.find_one({\"_id\": \"103\"}))" }, { "code": null, "e": 5847, "s": 5619, "text": "Data inserted ......\n['101', '102', '103']\nFirst record of the collection:\n{'_id': '101', 'name': 'Ram', 'age': '26', 'city': 'Hyderabad'}\nRecord whose id is 103:\n{'_id': '103', 'name': 'Robert', 'age': '28', 'city': 'Mumbai'}\n" }, { "code": null, "e": 6129, "s": 5847, "text": "To get multiple documents in a single query (single call od find method), you can use the find() method of the pymongo. If haven’t passed any query, this returns all the documents of a collection and, if you have passed a query to this method, it returns all the matched documents." }, { "code": null, "e": 6837, "s": 6129, "text": "#Getting the database instance\ndb = client['myDB']\n\n#Creating a collection\ncoll = db['example']\n\n#Inserting document into a collection\ndata = [\n {\"_id\": \"101\", \"name\": \"Ram\", \"age\": \"26\", \"city\": \"Hyderabad\"},\n {\"_id\": \"102\", \"name\": \"Rahim\", \"age\": \"27\", \"city\": \"Bangalore\"},\n {\"_id\": \"103\", \"name\": \"Robert\", \"age\": \"28\", \"city\": \"Mumbai\"}\n]\nres = coll.insert_many(data)\nprint(\"Data inserted ......\")\n\n#Retrieving all the records using the find() method\nprint(\"Records of the collection: \")\nfor doc1 in coll.find():\nprint(doc1)\n\n#Retrieving records with age greater than 26 using the find() method\nprint(\"Record whose age is more than 26: \")\nfor doc2 in coll.find({\"age\":{\"$gt\":\"26\"}}):\nprint(doc2)" }, { "code": null, "e": 7244, "s": 6837, "text": "Data inserted ......\nRecords of the collection:\n{'_id': '101', 'name': 'Ram', 'age': '26', 'city': 'Hyderabad'}\n{'_id': '102', 'name': 'Rahim', 'age': '27', 'city': 'Bangalore'}\n{'_id': '103', 'name': 'Robert', 'age': '28', 'city': 'Mumbai'}\nRecord whose age is more than 26:\n{'_id': '102', 'name': 'Rahim', 'age': '27', 'city': 'Bangalore'}\n{'_id': '103', 'name': 'Robert', 'age': '28', 'city': 'Mumbai'}\n" }, { "code": null, "e": 7281, "s": 7244, "text": "\n 187 Lectures \n 17.5 hours \n" }, { "code": null, "e": 7297, "s": 7281, "text": " Malhar Lathkar" }, { "code": null, "e": 7330, "s": 7297, "text": "\n 55 Lectures \n 8 hours \n" }, { "code": null, "e": 7349, "s": 7330, "text": " Arnab Chakraborty" }, { "code": null, "e": 7384, "s": 7349, "text": "\n 136 Lectures \n 11 hours \n" }, { "code": null, "e": 7406, "s": 7384, "text": " In28Minutes Official" }, { "code": null, "e": 7440, "s": 7406, "text": "\n 75 Lectures \n 13 hours \n" }, { "code": null, "e": 7468, "s": 7440, "text": " 
Eduonix Learning Solutions" }, { "code": null, "e": 7503, "s": 7468, "text": "\n 70 Lectures \n 8.5 hours \n" }, { "code": null, "e": 7517, "s": 7503, "text": " Lets Kode It" }, { "code": null, "e": 7550, "s": 7517, "text": "\n 63 Lectures \n 6 hours \n" }, { "code": null, "e": 7567, "s": 7550, "text": " Abhilash Nelson" }, { "code": null, "e": 7574, "s": 7567, "text": " Print" }, { "code": null, "e": 7585, "s": 7574, "text": " Add Notes" } ]
Concatenation of Zig-Zag String in n Rows | Practice | GeeksforGeeks
Given a string and number of rows ‘n’. Print the string formed by concatenating n rows when the input string is written in row-wise Zig-Zag fashion. Example 1: Input: str = "ABCDEFGH" n = 2 Output: "ACEGBDFH" Explanation: Let us write input string in Zig-Zag fashion in 2 rows. A C E G B D F H Now concatenate the two rows and ignore spaces in every row. We get "ACEGBDFH" Example 2: Input: str = "GEEKSFORGEEKS" n = 3 Output: GSGSEKFREKEOE Explanation: Let us write input string in Zig-Zag fashion in 3 rows. G S G S E K F R E K E O E Now concatenate the two rows and ignore spaces in every row. We get "GSGSEKFREKEOE" Your Task: You need not read input or print anything. Your task is to complete the function convert() which takes 2 arguments(string str, integer n) and returns the resultant string. Expected Time Complexity: O(|str|). Expected Auxiliary Space: O(|str|). Constraints: 1 ≤ N ≤ 105 +9 siddharthahazra6 months ago Hey here is official video editorial from GeeksforGeeks https://youtu.be/hwuUvJpQ1tI . Upvote this so that this remains on top. Thank you. 0 msandhiya82481 month ago EASY SOLUTION IN C++ MIGHT HELP YOU class Solution{ public: string convert(string s, int n) { vector<string>v(n,""); if(n==1){ return s; } int flag=1; int j=0; for(int i=0;i<s.length();i++){ v[j]+=s[i]; if(flag){ j++; } else{ j--; } if(j==0 or j==n-1){ flag=!flag; } } string ans=""; for(int i=0;i<n;i++){ ans+=v[i]; } return ans; } }; 0 chessnoobdj3 months ago C++ string convert(string s, int k) { if(k == 1) return s; int n = s.size(), j = 0; vector <vector<char>> v(k); bool flg = true; for(int i=0; i<n; i++){ v[j].push_back(s[i]); j += (flg == true) ? 1 : -1; if(j == 0 || j == k-1) flg = !flg; } string str = ""; for(auto i:v){ for(auto j:i) str += j; } return str; } 0 sinrepresion6 months ago def convert(self, Str, n): row = 0 direction = 1 output = [""]*n for char in Str: output[row] = output[row]+char if row < n-1 and direction == 1: row += 1 elif row > 0 and direction == -1: row -= 1 else: direction *= -1 row += direction return "".join(output) 0 ggupta4be206 months ago C++ code string convert(string str, int n) { int iter=n; string ans; int i=0; if(n==1) return str; while(iter>0&&i<str.length()) { if(iter==1) { int tvl=i; int top_add=1+(n-iter-1)*2; while(tvl<str.length()) { ans.push_back(str[tvl]); tvl+=(top_add+1); } break; } else if(iter==n) { int bottom_add=1+(iter-2)*2; int tvl=i; while(tvl<str.length()) { ans.push_back(str[tvl]); tvl+=(bottom_add+1); } i++; iter--; } else { int bottom_add=1+(iter-2)*2; int top_add=1+(n-iter-1)*2; int tvl=i; while(tvl<str.length()) { ans.push_back(str[tvl]); tvl+=(bottom_add+1); if(tvl<str.length()) ans.push_back(str[tvl]); tvl+=(top_add+1); } i++; iter--; } } return ans; } I have used O(1) extra space and time complexity is still the length of the string.The idea is to calculate the number of charachters needed to skip in the given string to reach the next charachter of our present level. 0 shreysomu22116 months ago //C++ class Solution{ public: string convert(string s, int n) { //code if(n==1){ return s; } string res[n]; int CurRow=0,mod =1; for(auto e:s){ res[CurRow].push_back(e); if(CurRow==0) mod = 1; if(CurRow== n-1) mod=-1; CurRow+= mod; } s.clear(); for(auto e:res){ s.append(e); } return s; }}; 0 priyankamessage6 months ago Hey all! Can anyone let me know why does this not work? 
string ans=""; for(int i=0;i<n;i++) { int j=i; while(j<=s.size()) { ans=ans+s[j]; j=j+n; } } return ans; 0 rishug7706 months ago Python approach if(n==1): return Str m=len(Str) d=dict() for i in range(1,n+1): d.update({i:''}) i=0 count=1 while(i<m): while(i<m and count<=n): d[count]+=Str[i] i+=1 count+=1 count-=2 while(i<m and count>=1): d[count]+=Str[i] i+=1 count-=1 count+=2 ans='' for i in range(1,n+1): ans+=d[i] return ans 0 iliyazali446 months ago def convert(self, s, n): # code here if n==1: return s ans="" flag="" cnt=0 v=[""]*n for i in range(len(s)): v[cnt]+=s[i] if cnt==n-1: flag="up" if cnt==0: flag="down" if flag=="up": cnt-=1 else: cnt+=1 return "".join(v) 0 ksridharan8296 months ago // easy implementation !!! // class Solution{ public: string convert(string s, int n) { vector<char> map[n+1]; int ok = 0; while (ok < s.size()){ for (int i = 1 ; i<=n and ok < s.size() ; i++,ok++){ map[i].push_back(s[ok]); } for (int i = n-1 ; i>1 and ok < s.size() ; i--,ok++){ map[i].push_back(s[ok]); } } string ans; for (int i = 1 ; i<= n ; i++){ for (auto x : map[i]){ ans += x; } } return ans; }}; We strongly recommend solving this problem on your own before viewing its editorial. Do you still want to view the editorial? Login to access your submissions. Problem Contest Reset the IDE using the second button on the top right corner. Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values. Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints. You can access the hints to get an idea about what is expected of you as well as the final solution code. You can view the solutions submitted by other users from the submission tab.
[ { "code": null, "e": 375, "s": 226, "text": "Given a string and number of rows ‘n’. Print the string formed by concatenating n rows when the input string is written in row-wise Zig-Zag fashion." }, { "code": null, "e": 386, "s": 375, "text": "Example 1:" }, { "code": null, "e": 619, "s": 386, "text": "Input: \nstr = \"ABCDEFGH\"\nn = 2\nOutput: \"ACEGBDFH\"\nExplanation: \nLet us write input string in \nZig-Zag fashion in 2 rows.\nA C E G \n B D F H\nNow concatenate the two rows and ignore \nspaces in every row. We get \"ACEGBDFH\"" }, { "code": null, "e": 630, "s": 619, "text": "Example 2:" }, { "code": null, "e": 915, "s": 630, "text": "Input: \nstr = \"GEEKSFORGEEKS\"\nn = 3\nOutput: GSGSEKFREKEOE\nExplanation: \nLet us write input string in \nZig-Zag fashion in 3 rows.\nG S G S\n E K F R E K\n E O E\nNow concatenate the two rows and ignore spaces\nin every row. We get \"GSGSEKFREKEOE\"" }, { "code": null, "e": 1098, "s": 915, "text": "Your Task:\nYou need not read input or print anything. Your task is to complete the function convert() which takes 2 arguments(string str, integer n) and returns the resultant string." }, { "code": null, "e": 1170, "s": 1098, "text": "Expected Time Complexity: O(|str|).\nExpected Auxiliary Space: O(|str|)." }, { "code": null, "e": 1195, "s": 1170, "text": "Constraints:\n1 ≤ N ≤ 105" }, { "code": null, "e": 1198, "s": 1195, "text": "+9" }, { "code": null, "e": 1226, "s": 1198, "text": "siddharthahazra6 months ago" }, { "code": null, "e": 1365, "s": 1226, "text": "Hey here is official video editorial from GeeksforGeeks https://youtu.be/hwuUvJpQ1tI . Upvote this so that this remains on top. Thank you." }, { "code": null, "e": 1367, "s": 1365, "text": "0" }, { "code": null, "e": 1392, "s": 1367, "text": "msandhiya82481 month ago" }, { "code": null, "e": 1428, "s": 1392, "text": "EASY SOLUTION IN C++ MIGHT HELP YOU" }, { "code": null, "e": 2024, "s": 1428, "text": "class Solution{\n public:\n string convert(string s, int n) {\n vector<string>v(n,\"\");\n if(n==1){\n return s;\n }\n int flag=1;\n int j=0;\n for(int i=0;i<s.length();i++){\n v[j]+=s[i];\n \n if(flag){\n j++;\n }\n else{\n j--;\n }\n if(j==0 or j==n-1){\n flag=!flag;\n }\n \n \n }\n string ans=\"\";\n for(int i=0;i<n;i++){\n ans+=v[i];\n }\n return ans;\n }\n};" }, { "code": null, "e": 2026, "s": 2024, "text": "0" }, { "code": null, "e": 2050, "s": 2026, "text": "chessnoobdj3 months ago" }, { "code": null, "e": 2054, "s": 2050, "text": "C++" }, { "code": null, "e": 2539, "s": 2054, "text": "string convert(string s, int k) {\n if(k == 1)\n return s;\n int n = s.size(), j = 0;\n vector <vector<char>> v(k);\n bool flg = true;\n for(int i=0; i<n; i++){\n v[j].push_back(s[i]);\n j += (flg == true) ? 
1 : -1;\n if(j == 0 || j == k-1)\n flg = !flg;\n }\n string str = \"\";\n for(auto i:v){\n for(auto j:i)\n str += j;\n }\n return str;\n }" }, { "code": null, "e": 2541, "s": 2539, "text": "0" }, { "code": null, "e": 2566, "s": 2541, "text": "sinrepresion6 months ago" }, { "code": null, "e": 3006, "s": 2566, "text": "def convert(self, Str, n):\n row = 0\n direction = 1\n output = [\"\"]*n\n \n for char in Str:\n output[row] = output[row]+char\n if row < n-1 and direction == 1:\n row += 1\n elif row > 0 and direction == -1:\n row -= 1 \n else:\n direction *= -1\n row += direction\n \n return \"\".join(output)" }, { "code": null, "e": 3008, "s": 3006, "text": "0" }, { "code": null, "e": 3032, "s": 3008, "text": "ggupta4be206 months ago" }, { "code": null, "e": 3042, "s": 3032, "text": "C++ code " }, { "code": null, "e": 3078, "s": 3044, "text": "string convert(string str, int n)" }, { "code": null, "e": 3120, "s": 3078, "text": " { int iter=n; string ans; int i=0;" }, { "code": null, "e": 3146, "s": 3120, "text": " if(n==1) return str;" }, { "code": null, "e": 3923, "s": 3146, "text": " while(iter>0&&i<str.length()) { if(iter==1) { int tvl=i; int top_add=1+(n-iter-1)*2; while(tvl<str.length()) { ans.push_back(str[tvl]); tvl+=(top_add+1); } break; } else if(iter==n) { int bottom_add=1+(iter-2)*2; int tvl=i; while(tvl<str.length()) { ans.push_back(str[tvl]); tvl+=(bottom_add+1); } i++; iter--; } else { int bottom_add=1+(iter-2)*2; int top_add=1+(n-iter-1)*2; int tvl=i; while(tvl<str.length()) { ans.push_back(str[tvl]); tvl+=(bottom_add+1);" }, { "code": null, "e": 3998, "s": 3923, "text": " if(tvl<str.length()) ans.push_back(str[tvl]);" }, { "code": null, "e": 4106, "s": 3998, "text": " tvl+=(top_add+1); } i++; iter--; } } return ans; }" }, { "code": null, "e": 4328, "s": 4108, "text": "I have used O(1) extra space and time complexity is still the length of the string.The idea is to calculate the number of charachters needed to skip in the given string to reach the next charachter of our present level." }, { "code": null, "e": 4330, "s": 4328, "text": "0" }, { "code": null, "e": 4356, "s": 4330, "text": "shreysomu22116 months ago" }, { "code": null, "e": 4362, "s": 4356, "text": "//C++" }, { "code": null, "e": 4799, "s": 4362, "text": "class Solution{ public: string convert(string s, int n) { //code if(n==1){ return s; } string res[n]; int CurRow=0,mod =1; for(auto e:s){ res[CurRow].push_back(e); if(CurRow==0) mod = 1; if(CurRow== n-1) mod=-1; CurRow+= mod; } s.clear(); for(auto e:res){ s.append(e); } return s; }};" }, { "code": null, "e": 4801, "s": 4799, "text": "0" }, { "code": null, "e": 4829, "s": 4801, "text": "priyankamessage6 months ago" }, { "code": null, "e": 4885, "s": 4829, "text": "Hey all! Can anyone let me know why does this not work?" 
}, { "code": null, "e": 4901, "s": 4885, "text": " string ans=\"\";" }, { "code": null, "e": 4923, "s": 4901, "text": " for(int i=0;i<n;i++)" }, { "code": null, "e": 4927, "s": 4923, "text": " { " }, { "code": null, "e": 4939, "s": 4927, "text": " int j=i;" }, { "code": null, "e": 4963, "s": 4939, "text": " while(j<=s.size()) {" }, { "code": null, "e": 4983, "s": 4963, "text": " ans=ans+s[j];" }, { "code": null, "e": 4998, "s": 4983, "text": " j=j+n; " }, { "code": null, "e": 5004, "s": 4998, "text": " } " }, { "code": null, "e": 5006, "s": 5004, "text": "}" }, { "code": null, "e": 5019, "s": 5006, "text": " return ans;" }, { "code": null, "e": 5021, "s": 5019, "text": "0" }, { "code": null, "e": 5043, "s": 5021, "text": "rishug7706 months ago" }, { "code": null, "e": 5059, "s": 5043, "text": "Python approach" }, { "code": null, "e": 5572, "s": 5059, "text": " if(n==1): return Str m=len(Str) d=dict() for i in range(1,n+1): d.update({i:''}) i=0 count=1 while(i<m): while(i<m and count<=n): d[count]+=Str[i] i+=1 count+=1 count-=2 while(i<m and count>=1): d[count]+=Str[i] i+=1 count-=1 count+=2 ans='' for i in range(1,n+1): ans+=d[i] return ans" }, { "code": null, "e": 5574, "s": 5572, "text": "0" }, { "code": null, "e": 5598, "s": 5574, "text": "iliyazali446 months ago" }, { "code": null, "e": 5997, "s": 5598, "text": "def convert(self, s, n): # code here if n==1: return s ans=\"\" flag=\"\" cnt=0 v=[\"\"]*n for i in range(len(s)): v[cnt]+=s[i] if cnt==n-1: flag=\"up\" if cnt==0: flag=\"down\" if flag==\"up\": cnt-=1 else: cnt+=1 return \"\".join(v)" }, { "code": null, "e": 5999, "s": 5997, "text": "0" }, { "code": null, "e": 6025, "s": 5999, "text": "ksridharan8296 months ago" }, { "code": null, "e": 6055, "s": 6025, "text": "// easy implementation !!! //" }, { "code": null, "e": 6601, "s": 6055, "text": "class Solution{ public: string convert(string s, int n) { vector<char> map[n+1]; int ok = 0; while (ok < s.size()){ for (int i = 1 ; i<=n and ok < s.size() ; i++,ok++){ map[i].push_back(s[ok]); } for (int i = n-1 ; i>1 and ok < s.size() ; i--,ok++){ map[i].push_back(s[ok]); } } string ans; for (int i = 1 ; i<= n ; i++){ for (auto x : map[i]){ ans += x; } } return ans; }};" }, { "code": null, "e": 6747, "s": 6601, "text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?" }, { "code": null, "e": 6783, "s": 6747, "text": " Login to access your submissions. " }, { "code": null, "e": 6793, "s": 6783, "text": "\nProblem\n" }, { "code": null, "e": 6803, "s": 6793, "text": "\nContest\n" }, { "code": null, "e": 6866, "s": 6803, "text": "Reset the IDE using the second button on the top right corner." }, { "code": null, "e": 7014, "s": 6866, "text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values." }, { "code": null, "e": 7222, "s": 7014, "text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints." }, { "code": null, "e": 7328, "s": 7222, "text": "You can access the hints to get an idea about what is expected of you as well as the final solution code." } ]
MomentJS - Is Same or After
This method checks whether a moment is the same as or after another moment. It returns true or false. moment().isSameOrAfter(Moment|String|Number|Date|Array); moment().isSameOrAfter(Moment|String|Number|Date|Array, String); var issameorafter = moment('2017-10-10').isSameOrAfter('2017-10-09'); We can use units with isSameOrAfter(), and the ones supported are year, month, week, day, hour, minute and second. var issameorafter = moment('2017-10-10').isSameOrAfter('2017-10-09', 'year'); var issameorafter = moment('2017-10-10').isSameOrAfter('2017-10-15', 'day');
[ { "code": null, "e": 2052, "s": 1960, "text": "This method checks if the moment is same or after another moment. It returns true or false." }, { "code": null, "e": 2175, "s": 2052, "text": "moment().isSameOrAfter(Moment|String|Number|Date|Array);\nmoment().isSameOrAfter(Moment|String|Number|Date|Array, String);\n" }, { "code": null, "e": 2245, "s": 2175, "text": "var issameorafter = moment('2017-10-10').isSameOrAfter('2017-10-09');" }, { "code": null, "e": 2364, "s": 2245, "text": "We can use the units with isSameOrAfter() and the ones supported are year, month, week , day, hour, minute and second." }, { "code": null, "e": 2442, "s": 2364, "text": "var issameorafter = moment('2017-10-10').isSameOrAfter('2017-10-09', 'year');" }, { "code": null, "e": 2519, "s": 2442, "text": "var issameorafter = moment('2017-10-10').isSameOrAfter('2017-10-15', 'day');" }, { "code": null, "e": 2526, "s": 2519, "text": " Print" }, { "code": null, "e": 2537, "s": 2526, "text": " Add Notes" } ]
How to change the spacing between ticks in Matplotlib?
To set ticks on a fixed position or change the spacing between ticks in matplotlib, we can take the following steps − Create a figure and add a set of subplots. To set the ticks on a fixed position, create two lists with some values. Use set_yticks and set_xticks methods to set the ticks on the axes. To display the figure, use show() method. import matplotlib.pyplot as plt plt.rcParams["figure.figsize"] = [7.00, 3.50] plt.rcParams["figure.autolayout"] = True fig, ax = plt.subplots() xtick_loc = [0.20, 0.75, 0.30] ytick_loc = [0.12, 0.80, 0.76] ax.set_xticks(xtick_loc) ax.set_yticks(ytick_loc) plt.show()
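The example above pins the ticks at explicit positions. If the goal is to control the spacing between consecutive ticks directly, a tick locator can be used instead. The snippet below is a minimal sketch using MultipleLocator from matplotlib.ticker, which places a tick at every multiple of a chosen base value; the base values 0.25 and 0.2 are arbitrary choices for illustration.

# Minimal sketch: control tick spacing with a locator instead of fixed positions
import matplotlib.pyplot as plt
from matplotlib.ticker import MultipleLocator

plt.rcParams["figure.figsize"] = [7.00, 3.50]
plt.rcParams["figure.autolayout"] = True
fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])  # some data so the axes have a visible range
ax.xaxis.set_major_locator(MultipleLocator(0.25))  # a tick every 0.25 on the x-axis
ax.yaxis.set_major_locator(MultipleLocator(0.2))   # a tick every 0.2 on the y-axis
plt.show()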
[ { "code": null, "e": 1180, "s": 1062, "text": "To set ticks on a fixed position or change the spacing between ticks in matplotlib, we can take the following steps −" }, { "code": null, "e": 1223, "s": 1180, "text": "Create a figure and add a set of subplots." }, { "code": null, "e": 1266, "s": 1223, "text": "Create a figure and add a set of subplots." }, { "code": null, "e": 1339, "s": 1266, "text": "To set the ticks on a fixed position, create two lists with some values." }, { "code": null, "e": 1412, "s": 1339, "text": "To set the ticks on a fixed position, create two lists with some values." }, { "code": null, "e": 1480, "s": 1412, "text": "Use set_yticks and set_xticks methods to set the ticks on the axes." }, { "code": null, "e": 1548, "s": 1480, "text": "Use set_yticks and set_xticks methods to set the ticks on the axes." }, { "code": null, "e": 1590, "s": 1548, "text": "To display the figure, use show() method." }, { "code": null, "e": 1632, "s": 1590, "text": "To display the figure, use show() method." }, { "code": null, "e": 1899, "s": 1632, "text": "import matplotlib.pyplot as plt\nplt.rcParams[\"figure.figsize\"] = [7.00, 3.50]\nplt.rcParams[\"figure.autolayout\"] = True\nfig, ax = plt.subplots()\nxtick_loc = [0.20, 0.75, 0.30]\nytick_loc = [0.12, 0.80, 0.76]\nax.set_xticks(xtick_loc)\nax.set_yticks(ytick_loc)\nplt.show()" } ]
HTTP headers | Server-Timing - GeeksforGeeks
27 Sep, 2021 The HTTP Server-Timing header is a response-type header. This header is used to communicate one or more metrics and descriptions for a given request-response cycle to the user agent. The HTTP Server-Timing header is useful for exposing back-end server timing metrics, such as database reads or writes, file access, etc. It can be inspected in the developer tools and read programmatically through the PerformanceServerTiming interface. The HTTP Server-Timing header can communicate the metrics in several forms, listed below: metrics name metric with value metric with description metric with value and description Syntax: Server-Timing: metricsname| metricsvalue | metricsdescription Directives: There are no directives; you only need to mention the metric name with all the details. Example: This example shows a single metric. Server-Timing: cdn-cache This example shows a single metric with a value. Server-Timing: edge; dur=33 This example shows a single metric with a description. Server-Timing: cdn-cache; desc=HIT This example shows two metrics, with a description and a value. Server-Timing: cdn-cache; desc=HIT, edge; dur=1 To check Server-Timing in action, go to Inspect Element -> Network and check the response headers for Server-Timing. Supported Browsers: The browsers compatible with the HTTP Server-Timing header are listed below: Google Chrome Firefox Safari Opera
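For completeness, the header is set on the server side like any other response header. The snippet below is a minimal, hypothetical sketch using Python's built-in http.server module; the metric names ("db", "cache") and durations are invented purely for illustration, and any web framework's response-header API would work the same way.

# Minimal sketch: emitting a Server-Timing header from a toy Python server
# The metric names and durations below are illustrative assumptions.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        # one metric with a duration, one with a description
        self.send_header("Server-Timing", "db; dur=53, cache; desc=HIT")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("localhost", 8000), Handler).serve_forever()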
[ { "code": null, "e": 24525, "s": 24497, "text": "\n27 Sep, 2021" }, { "code": null, "e": 25079, "s": 24525, "text": "The HTTP Server-Timing header is a response-type header. This header is used to communicate between two or more metrics and descriptions for a given request-response cycle from the user agent. The HTTP Server-Timing header is useful to any back-end server timing metrics like read or write in any databases, accessing files, etc. It can be used in the developer tools to keep track of the server on the PerformanceServerTiming interface. The HTTP Server-Timing helps to perform in different ways to communicate with the metrics, those are listed below: " }, { "code": null, "e": 25092, "s": 25079, "text": "metrics name" }, { "code": null, "e": 25110, "s": 25092, "text": "metric with value" }, { "code": null, "e": 25134, "s": 25110, "text": "metric with description" }, { "code": null, "e": 25168, "s": 25134, "text": "metric with value and description" }, { "code": null, "e": 25178, "s": 25168, "text": "Syntax: " }, { "code": null, "e": 25240, "s": 25178, "text": "Server-Timing: metricsname| metricsvalue | metricsdescription" }, { "code": null, "e": 25337, "s": 25240, "text": "Directives: There are no directives only need to mention the metrics name with all the details. " }, { "code": null, "e": 25347, "s": 25337, "text": "Example: " }, { "code": null, "e": 25387, "s": 25347, "text": "This example shows the single metrics. " }, { "code": null, "e": 25412, "s": 25387, "text": "Server-Timing: cdn-cache" }, { "code": null, "e": 25467, "s": 25412, "text": "This example shows the single metrics with the value. " }, { "code": null, "e": 25495, "s": 25467, "text": "Server-Timing: edge; dur=33" }, { "code": null, "e": 25552, "s": 25495, "text": "This example shows the single metrics with description. " }, { "code": null, "e": 25587, "s": 25552, "text": "Server-Timing: cdn-cache; desc=HIT" }, { "code": null, "e": 25654, "s": 25587, "text": "This example shows the double metrics with description and value. " }, { "code": null, "e": 25702, "s": 25654, "text": "Server-Timing: cdn-cache; desc=HIT, edge; dur=1" }, { "code": null, "e": 25861, "s": 25702, "text": "To check this Server-Timing in action go to Inspect Element -> Network check the response header for Server-Timing like below, Server-Timing is highlighted. " }, { "code": null, "e": 25956, "s": 25861, "text": "Supported Browsers: The browsers compatible with HTTP headers Server-Timing are listed below: " }, { "code": null, "e": 25970, "s": 25956, "text": "Google Chrome" }, { "code": null, "e": 25978, "s": 25970, "text": "Firefox" }, { "code": null, "e": 25985, "s": 25978, "text": "Safari" }, { "code": null, "e": 25991, "s": 25985, "text": "Opera" }, { "code": null, "e": 26010, "s": 25993, "text": "arorakashish0911" }, { "code": null, "e": 26023, "s": 26010, "text": "HTTP-headers" }, { "code": null, "e": 26030, "s": 26023, "text": "Picked" }, { "code": null, "e": 26047, "s": 26030, "text": "Web Technologies" }, { "code": null, "e": 26145, "s": 26047, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 26154, "s": 26145, "text": "Comments" }, { "code": null, "e": 26167, "s": 26154, "text": "Old Comments" }, { "code": null, "e": 26223, "s": 26167, "text": "Top 10 Front End Developer Skills That You Need in 2022" }, { "code": null, "e": 26266, "s": 26223, "text": "How to fetch data from an API in ReactJS ?" 
}, { "code": null, "e": 26327, "s": 26266, "text": "Difference between var, let and const keywords in JavaScript" }, { "code": null, "e": 26372, "s": 26327, "text": "Convert a string to an integer in JavaScript" }, { "code": null, "e": 26444, "s": 26372, "text": "Differences between Functional Components and Class Components in React" }, { "code": null, "e": 26489, "s": 26444, "text": "How to redirect to another page in ReactJS ?" }, { "code": null, "e": 26539, "s": 26489, "text": "How to Insert Form Data into Database using PHP ?" }, { "code": null, "e": 26604, "s": 26539, "text": "How to pass data from child component to its parent in ReactJS ?" }, { "code": null, "e": 26649, "s": 26604, "text": "How to execute PHP code using command line ?" } ]
Computer Networks | Set 8 - GeeksforGeeks
27 Mar, 2017 Following questions have been asked in the GATE CS 2008 exam. 1) What is the maximum size of data that the application layer can pass on to the TCP layer below? (A) Any size (B) 2^16 bytes - size of TCP header (C) 2^16 bytes (D) 1500 bytes Answer (A) The application layer can send any size of data. There is no limit defined by standards. The lower layers divide the data if needed. 2) A client process P needs to make a TCP connection to a server process S. Consider the following situation: the server process S executes a socket(), a bind() and a listen() system call in that order, following which it is preempted. Subsequently, the client process P executes a socket() system call followed by a connect() system call to connect to the server process S. The server process has not executed any accept() system call. Which one of the following events could take place? (A) connect() system call returns successfully (B) connect() system call blocks (C) connect() system call returns an error (D) connect() system call results in a core dump Answer (C) Since the accept() call is not executed, connect() gets no response within the wait period and returns an error. 3) A computer on a 10Mbps network is regulated by a token bucket. The token bucket is filled at a rate of 2Mbps. It is initially filled to capacity with 16 Megabits. What is the maximum duration for which the computer can transmit at the full 10Mbps? (A) 1.6 seconds (B) 2 seconds (C) 5 seconds (D) 8 seconds Answer (B) New tokens are added at the rate r, which is 2Mbps in the given question. Capacity of the token bucket (b) = 16 Mbits Maximum possible transmission rate (M) = 10Mbps So the maximum burst time = b/(M-r) = 16/(10-2) = 2 seconds In the above formula, r is subtracted from M to calculate the maximum burst time. The reason for this subtraction is that new tokens are added at the rate r while transmission happens at the maximum transmission rate M. Please see GATE Corner for all previous year papers/solutions/explanations, syllabus, important dates, notes, etc. Please write comments if you find any of the answers/explanations incorrect, or you want to share more information about the topics discussed above
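As a quick check of the token bucket arithmetic in question 3 above, the burst-time formula can be evaluated directly; this is a minimal sketch in Python in which the variable names simply mirror the symbols b, M and r used in the explanation.

# Maximum burst time S for a token bucket: S = b / (M - r)
b = 16   # bucket capacity in Mbits
M = 10   # maximum transmission rate in Mbps
r = 2    # token fill rate in Mbps
print(b / (M - r))  # 2.0 seconds, matching answer (B)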
[ { "code": null, "e": 36594, "s": 36566, "text": "\n27 Mar, 2017" }, { "code": null, "e": 36652, "s": 36594, "text": "Following questions have been asked in GATE CS 2008 exam." }, { "code": null, "e": 36824, "s": 36652, "text": "1) What is the maximum size of data that the application layer can pass on to the TCP layer below?(A) Any size(B) 2^16 bytes-size of TCP header(C) 2^16 bytes(D) 1500 bytes" }, { "code": null, "e": 36964, "s": 36824, "text": "Answer (A)Application layer can send any size of data. There is no limit defined by standards. The lower layers divides the data if needed." }, { "code": null, "e": 37623, "s": 36964, "text": "2) A client process P needs to make a TCP connection to a server process S. Consider the following situation: the server process S executes a socket(), a bind() and a listen() system call in that order, following which it is preempted. Subsequently, the client process P executes a socket() system call followed by connect() system call to connect to the server process S. The server process has not executed any accept() system call. Which one of the following events could take place?(A) connect () system call returns successfully(B) connect () system call blocks(C) connect () system call returns an error(D) connect () system call results in a core dump" }, { "code": null, "e": 37767, "s": 37623, "text": "Answer (C)Since accept() call is not executed then connect () gets no response for a time stamp to wait & then return no response server error." }, { "code": null, "e": 38071, "s": 37767, "text": "3) A computer on a 10Mbps network is regulated by a token bucket. The token bucket is filled at a rate of 2Mbps. It is initially filled to capacity with 16Megabits. What is the maximum duration for which the computer can transmit at the full 10Mbps?(A) 1.6 seconds(B) 2 seconds(C) 5 seconds(D) 8 seconds" }, { "code": null, "e": 38082, "s": 38071, "text": "Answer (B)" }, { "code": null, "e": 38324, "s": 38082, "text": "New tokens are added at the rate of r bytes/sec which is \n2Mbps in the given question. \n\nCapacity of the token bucket (b) = 16 Mbits\nMaximum possible transmission rate (M) = 10Mbps\nSo the maximum burst time = b/(M-r) = 16/(10-2) = 2 seconds\n" }, { "code": null, "e": 38539, "s": 38324, "text": "In the above formula, r is subtracted from M to calculate the maximum burst time. The reason for this subtraction is, new tokens are added at the rate of r while transmission happens at maximum transmission rate M." }, { "code": null, "e": 38653, "s": 38539, "text": "Please see GATE Corner for all previous year paper/solutions/explanations, syllabus, important dates, notes, etc." }, { "code": null, "e": 38801, "s": 38653, "text": "Please write comments if you find any of the answers/explanations incorrect, or you want to share more information about the topics discussed above" }, { "code": null, "e": 38814, "s": 38801, "text": "GATE-CS-2008" }, { "code": null, "e": 38832, "s": 38814, "text": "Computer Networks" }, { "code": null, "e": 38840, "s": 38832, "text": "GATE CS" }, { "code": null, "e": 38844, "s": 38840, "text": "MCQ" }, { "code": null, "e": 38862, "s": 38844, "text": "Computer Networks" }, { "code": null, "e": 38960, "s": 38862, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 38990, "s": 38960, "text": "Caesar Cipher in Cryptography" }, { "code": null, "e": 39019, "s": 38990, "text": "Socket Programming in Python" }, { "code": null, "e": 39057, "s": 39019, "text": "UDP Server-Client implementation in C" }, { "code": null, "e": 39091, "s": 39057, "text": "Differences between IPv4 and IPv6" }, { "code": null, "e": 39118, "s": 39091, "text": "Socket Programming in Java" }, { "code": null, "e": 39142, "s": 39118, "text": "ACID Properties in DBMS" }, { "code": null, "e": 39169, "s": 39142, "text": "Types of Operating Systems" }, { "code": null, "e": 39218, "s": 39169, "text": "Page Replacement Algorithms in Operating Systems" }, { "code": null, "e": 39239, "s": 39218, "text": "Normal Forms in DBMS" } ]
What is Java String literal?
Strings, which are widely used in Java programming, are a sequence of characters. In the Java programming language, strings are treated as objects. The Java platform provides the String class to create and manipulate strings. You can also create a String directly as − String greeting = "Hello world!"; A string literal should be enclosed in double quotes. Whenever it encounters a string literal in your code, the compiler creates a String object with its value, in this case "Hello world!". public class StringDemo { public static void main(String args[]) { String str = "Hello world!"; System.out.println( str ); } } Hello world!
[ { "code": null, "e": 1284, "s": 1062, "text": "Strings, which are widely used in Java programming, are a sequence of characters. In Java programming language, strings are treated as objects. The Java platform provides the String class to create and manipulate strings." }, { "code": null, "e": 1327, "s": 1284, "text": "You can also create a String directly as −" }, { "code": null, "e": 1362, "s": 1327, "text": "String greeting = \"Hello world!\";\n" }, { "code": null, "e": 1552, "s": 1362, "text": "A string literal should be enclosed in double quotes. Whenever it encounters a string literal in your code, the compiler creates a String object with its value in this case, \"Hello world!'." }, { "code": null, "e": 1563, "s": 1552, "text": " Live Demo" }, { "code": null, "e": 1708, "s": 1563, "text": "public class StringDemo {\n public static void main(String args[]) {\n String str = \"Hello world!\";\n System.out.println( str );\n }\n}" }, { "code": null, "e": 1722, "s": 1708, "text": "Hello world!\n" } ]
What is an optimistic concurrency control in DBMS?
All data items are updated at the end of the transaction; at the end, if any data item is found inconsistent with respect to the value in the database, then the transaction is rolled back. Check for conflicts at the end of the transaction. No checking is done while the transaction is executing. Checks are all made at once, so there is low transaction execution overhead. Updates are not applied until end-transaction. They are applied to local copies in a transaction space. The optimistic concurrency control has three phases, which are explained below − Various data items are read and stored in temporary variables (local copies). All operations are performed on these variables without updating the database. All concurrent data items are checked to ensure serializability will not be violated if the transaction updates are actually applied to the database. Any change in the values causes the transaction to be rolled back. The transaction timestamps are used and the write-sets and read-sets are maintained. To check that transaction A does not interfere with transaction B the following must hold − TransB completes its write phase before TransA starts the read phase. TransA starts its write phase after TransB completes its write phase, and the read set of TransA has no items in common with the write set of TransB. Both the read set and write set of TransA have no items in common with the write set of TransB, and TransB completes its read phase before TransA completes its read phase. The transaction updates are applied to the database if the validation is successful. Otherwise, updates are discarded and transactions are aborted and restarted. It does not use any locks, hence it is deadlock-free; however, starvation problems on data items may occur. S: W1(X), r2(Y), r1(Y), r2(X). T1 = 3 T2 = 4 Check whether the timestamp ordering protocol allows schedule S. Initially for data-item X, RTS(X)=0, WTS(X)=0 Initially for data-item Y, RTS(Y)=0, WTS(Y)=0 For W1(X): TS(T1)<RTS(X) or TS(T1)<WTS(X), i.e. 3<0 (FALSE) => goto else and perform write operation w1(X) and set WTS(X)=3 For r2(Y): TS(T2)<WTS(Y), i.e. 4<0 (FALSE) => goto else and perform read operation r2(Y) and set RTS(Y)=4 For r1(Y): TS(T1)<WTS(Y), i.e. 3<0 (FALSE) => goto else and perform read operation r1(Y). For r2(X): TS(T2)<WTS(X), i.e. 4<3 (FALSE) => goto else and perform read operation r2(X) and set RTS(X)=4
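Returning to the three validation conditions listed above for optimistic concurrency control, the following is a minimal, hypothetical Python sketch of the validation test. The Txn record, its read/write sets and its phase timestamps are assumptions made only for illustration; a real implementation would track this bookkeeping per transaction.

# Minimal sketch of the validation-phase check for optimistic concurrency control
from dataclasses import dataclass

@dataclass
class Txn:
    read_set: set
    write_set: set
    read_start: int
    read_end: int
    write_start: int
    write_end: int

def can_validate(trans_a, trans_b):
    # Rule 1: TransB finishes its write phase before TransA starts its read phase.
    if trans_b.write_end < trans_a.read_start:
        return True
    # Rule 2: TransA starts writing only after TransB finished writing, and
    # TransA's read set shares nothing with TransB's write set.
    if (trans_a.write_start > trans_b.write_end
            and not (trans_a.read_set & trans_b.write_set)):
        return True
    # Rule 3: neither TransA's reads nor writes overlap TransB's writes, and
    # TransB finishes its read phase before TransA finishes its read phase.
    if (not ((trans_a.read_set | trans_a.write_set) & trans_b.write_set)
            and trans_b.read_end < trans_a.read_end):
        return True
    return False  # validation fails: TransA would be rolled back and restarted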
3 lesser-known pipe operators in Tidyverse | by Abhinav Malasi | Towards Data Science
Apart from hosting the main pipe operator %>% used by the Tidyverse community, the magrittr package in Tidyverse holds a few other pipe operators. The %>% pipe is widely used for data manipulations and is automatically loaded with Tidyverse.

The pipe operator is used to execute multiple operations that are in sequence, each requiring the output of the previous operation as its input argument. So, the execution starts from the left-hand side with the data as the first argument that is passed to the function on its right, and so on. This way a series of data manipulations can be achieved in a single step.

So here we will discuss three other pipe operators from the magrittr package, discuss the areas where the main pipe operator falls short, and see how these operators can complement it.

The tee pipe operator %T>% works almost like the %>% operator, except in situations when one of the operations in a sequence of operations does not return a value. The tee pipe operator is helpful when we have the print() or plot() functions in a series of operations, and not necessarily at the end of the sequence. As the print() and plot() functions do not return any value, in that case we can use the %T>% operator so that the last argument value is passed on to the operation after the print()/plot() operation. Let us look at an example, where we write a sequence of operations using the main pipe operator %>%.

# sequence of operations using main pipe operator
rnorm(100) %>% matrix(ncol=2) %>% sin() %>% plot() %>% colSums()
# output
Error in colSums(.) : 'x' must be an array of at least two dimensions

So in the above operation, we see an error popping up when executing the colSums() function. This is because the plot() function does not return any value. To tackle this problem, we will use the tee pipe operator before the plot() function. What this will do is pass the value of the sin() function as the argument to both the plot() and colSums() functions, thus maintaining the flow of information.

Redoing the above example with the tee pipe operator.

# using tee pipe operator
rnorm(100) %>% matrix(ncol=2) %>% sin() %T>% plot() %>% colSums()
# output
[1] 2.372528 -4.902566

We can see from the above example that with the tee pipe operator the complete sequence of operations is executed.

Exposition pipe operator %$% exposes the variable names of the data frame on the left as the matching argument names in the function on the right. In some functions in base R, there is no data = ... argument. So to reference a variable from the data frame we have to use the $ operator, as dataframe$variable1 and so on. In situations where we are dealing with multiple variables, we then have to repeat the process of using the $ symbol along with repeating the data frame name. In order to avoid this, we can use the exposition pipe. Let us use the cor() and lm() functions to understand the use of the exposition pipe. We will use the mtcars dataset from base R.
Example 1 using lm() function

using %>% operator

mtcars %>% lm(formula = disp~mpg)
# output
Call:
lm(formula = disp ~ mpg, data = .)
Coefficients:
(Intercept)          mpg
     580.88       -17.43

using %$% operator

mtcars %$% lm(formula = disp~mpg)
# output
Call:
lm(formula = disp ~ mpg, data = .)
Coefficients:
(Intercept)          mpg
     580.88       -17.43

Example 2 using cor() function

using %>% operator (case 1)

mtcars %>% cor(disp,mpg)
# output
Error in cor(., disp, mpg) : invalid 'use' argument
In addition: Warning message:
In if (is.na(na.method)) stop("invalid 'use' argument") :
  the condition has length > 1 and only the first element will be used

using %>% operator (case 2)

cor(mtcars$disp,mtcars$mpg)
# output
[1] -0.8475514

using %$% operator

mtcars %$% cor(disp, mpg)
# output
[1] -0.8475514

In example 1, we see that irrespective of the type of pipe operator in use, the two operations using the two different pipes work perfectly fine. But in example 2, only case 2 of the %>% operator and the %$% operator work. The key difference here lies in the type of arguments of the lm() and cor() functions. The lm() function has data as one of its arguments but the cor() function does not. So, the %>% and %$% pipe operators work fine with the lm() function. For the cor() function, the arguments are x and y (check the documentation), so we have to explicitly tell the x and y values to come from the mtcars data frame by defining the x and y arguments as mtcars$disp and mtcars$mpg. To avoid this repetition of the mtcars data frame, we can directly use the %$% pipe operator.

The last of the lesser-known pipes is the assignment pipe %<>%. The pipe is used when a variable is assigned to itself after going through certain operations. Let us look at an example

a <- a %>% cos() %>% sin()
# using assignment operator
a %<>% cos() %>% sin()

So, by using the assignment pipe operator we can remove the assignment operator <- .

We explored three lesser-known pipe operators: the tee, exposition, and assignment pipes, from the magrittr package in Tidyverse. Further, we implemented these pipe operators in different settings to see how they complement the functioning of the main pipe operator, %>%.

The tee pipe, %T>%, is useful when a series of operations has a function that does not return any value. The exposition pipe, %$%, is handy with base R functions that do not have data as an argument. And the assignment pipe, %<>%, avoids repetition when a variable is assigned to itself after a series of operations.

Thank you for reading. I hope you enjoyed the pipe functionalities. Please let me know if you have any feedback.

https://magrittr.tidyverse.org/articles/magrittr.html
https://r4ds.had.co.nz/pipes.html
https://magrittr.tidyverse.org/reference/exposition.html

You can connect with me on LinkedIn and Twitter to follow my data science and data visualization journey.
Bootstrap .btn-sm class
To create a small button in Bootstrap, use the .btn-sm class.

You can try to run the following code to implement the btn-sm class.

Live Demo

<!DOCTYPE html>
<html>
   <head>
      <title>Bootstrap Example</title>
      <meta name = "viewport" content = "width=device-width, initial-scale = 1">
      <link rel = "stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.1/css/bootstrap.min.css">
      <script src = "https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
      <script src = "https://stackpath.bootstrapcdn.com/bootstrap/4.1.1/js/bootstrap.min.js"></script>
   </head>
   <body>
      <button type = "button" class = "btn btn-default btn-sm">
         Small button
      </button>
      <button type = "button" class = "btn btn-default btn-sm">
         Result
      </button>
   </body>
</html>
Reading the CSV file into Dataframes in R - GeeksforGeeks
09 May, 2021

In this article, we will learn how to import or read a CSV file into a dataframe in R Programming Language.

Data set in use:

In order to import or read the given CSV file into our data frame, we first need to check our current working directory, and make sure that the CSV file is in the same directory as our R studio is in, or else it might show a "File not found" error.

To check the current working directory we need to use the getwd() function, and to change the current working directory to some other working directory, we need to use the setwd() function.

getwd() returns an absolute file-path representing the current working directory of the R process.

Syntax:

getwd()

setwd(dir) is used to set the working directory to dir.

Syntax:

setwd(path)

Example:

# gives the current working directory
getwd()

# changes the location
setwd("C:/Users/Vanshi/Desktop/gfg")

Output:

C:/Users/Vanshi/Documents

Now that we have set our working path, we will import the CSV file into the data frame, and name our data frame sdata.

Here, we are reading the .csv file named "SampleData" using the read.csv command into our R studio, which means we are feeding the values to RStudio to extract some important information out of them.

The read.csv() function reads a file in table format and creates a data frame from it, with cases corresponding to lines and variables to fields in the file.

Syntax: read.csv(file, header = TRUE, sep = ",", quote = "\"", dec = ".", fill = TRUE, comment.char = "", ...)

Arguments:

file: the name of the file which the data are to be read from.
header: a logical value indicating whether the file contains the names of the variables as its first line. If missing, the value is determined from the file format: header is set to TRUE if and only if the first row contains one fewer field than the number of columns.
sep: the field separator character. Values on each line of the file are separated by this character. If sep = "" (the default for read.table) the separator is 'white space', that is one or more spaces, tabs, newlines or carriage returns.
quote: the set of quoting characters.
dec: the character used in the file for decimal points.
fill: logical. If TRUE then in case the rows have unequal length, blank fields are implicitly added.
comment.char: character: a character vector of length one containing a single character or an empty string.
... : Further arguments to be passed.

Example:

sdata <- read.csv("SampleData.csv", header = TRUE, sep = ",")
sdata

# views the data frame formed from the csv file
View(sdata)

Output:

Now that we have created our dataframe, we can perform some operations on it and read the data from it according to our needs. Given below are two examples that read the data as per their requirement.

Example 1:

sdata <- read.csv(
  "SampleData.csv", header = TRUE, sep = ",")

highspeed <- subset(
  sdata, sdata$speed == max(sdata$speed))

# views the subsetted value in
# tabular form
View(highspeed)

Output:

Example 2:

sdata <- read.csv(
  "SampleData.csv", header = TRUE, sep = ",")

highfreq <- subset(
  sdata, sdata$cyc_freq == "Several times per week")

# views the information, of the above
# condition in tabular format
View(highfreq)

Output:
Python Tools/Utilities
The standard library comes with a number of modules that can be used both as modules and as command-line utilities.

The dis module is the Python disassembler. It converts byte codes to a format that is slightly more appropriate for human consumption.

You can run the disassembler from the command line. It compiles the given script and prints the disassembled byte codes to the STDOUT. You can also use dis as a module. The dis function takes a class, method, function or code object as its single argument.

#!/usr/bin/python
import dis

def sum():
   vara = 10
   varb = 20

   sum = vara + varb
   print "vara + varb = %d" % sum

# Call dis function for the function.
dis.dis(sum)

This would produce the following result −

  6           0 LOAD_CONST               1 (10)
              3 STORE_FAST               0 (vara)

  7           6 LOAD_CONST               2 (20)
              9 STORE_FAST               1 (varb)

  9          12 LOAD_FAST                0 (vara)
             15 LOAD_FAST                1 (varb)
             18 BINARY_ADD
             19 STORE_FAST               2 (sum)

 10          22 LOAD_CONST               3 ('vara + varb = %d')
             25 LOAD_FAST                2 (sum)
             28 BINARY_MODULO
             29 PRINT_ITEM
             30 PRINT_NEWLINE
             31 LOAD_CONST               0 (None)
             34 RETURN_VALUE

The pdb module is the standard Python debugger. It is based on the bdb debugger framework.

You can run the debugger from the command line (type n [or next] to go to the next line and help to get a list of available commands).

Before you try to run pdb.py, set your path properly to the Python lib directory. So let us try with the above example sum.py −

$pdb.py sum.py
> /test/sum.py(3)<module>()
-> import dis
(Pdb) n
> /test/sum.py(5)<module>()
-> def sum():
(Pdb) n
>/test/sum.py(14)<module>()
-> dis.dis(sum)
(Pdb) n
  6           0 LOAD_CONST               1 (10)
              3 STORE_FAST               0 (vara)

  7           6 LOAD_CONST               2 (20)
              9 STORE_FAST               1 (varb)

  9          12 LOAD_FAST                0 (vara)
             15 LOAD_FAST                1 (varb)
             18 BINARY_ADD
             19 STORE_FAST               2 (sum)

 10          22 LOAD_CONST               3 ('vara + varb = %d')
             25 LOAD_FAST                2 (sum)
             28 BINARY_MODULO
             29 PRINT_ITEM
             30 PRINT_NEWLINE
             31 LOAD_CONST               0 (None)
             34 RETURN_VALUE
--Return--
> /test/sum.py(14)<module>()->None
-v dis.dis(sum)
(Pdb) n
--Return--
> <string>(1)<module>()->None
(Pdb)

The profile module is the standard Python profiler. You can run the profiler from the command line.

Let us try to profile the following program −

#!/usr/bin/python

vara = 10
varb = 20

sum = vara + varb
print "vara + varb = %d" % sum

Now, try running cProfile.py over this file sum.py as follows −

$cProfile.py sum.py
vara + varb = 30
         4 function calls in 0.000 CPU seconds

   Ordered by: standard name

ncalls  tottime  percall  cumtime  percall filename:lineno
     1    0.000    0.000    0.000    0.000 <string>:1(<module>)
     1    0.000    0.000    0.000    0.000 sum.py:3(<module>)
     1    0.000    0.000    0.000    0.000 {execfile}
     1    0.000    0.000    0.000    0.000 {method ......}

The tabnanny module checks Python source files for ambiguous indentation. If a file mixes tabs and spaces in a way that throws off indentation, no matter what tab size you're using, the nanny complains.

Let us try to check the following program −

#!/usr/bin/python

vara = 10
varb = 20

sum = vara + varb
print "vara + varb = %d" % sum

If you try a correct file with tabnanny.py, then it won't complain, as follows −

$tabnanny.py -v sum.py
'sum.py': Clean bill of health.
Normal Equation of Linear Regression | by Aerin Kim | Towards Data Science
Kyle is interviewing Cartman for a data scientist position. Cartman is a graduate student studying CS at Stanford.

Kyle: Ok, let's start with an easy question. Derive the linear regression from scratch. We have about 30 data points of housing price (y) and the size of houses (x).

Cartman: Sure!

Kyle: Good. Now, how will you solve this?

Cartman: I'll use the gradient descent.

Kyle: Hmmm... Why?

Cartman: That's how we solve the optimization problem.

Kyle: Can you think of other reasons why we would use G.D.?

Cartman: ......

You don't have to use the gradient descent in this case because there is a closed-form solution for Linear Regression, aka the Normal Equation. I think some people automatically go for Gradient Descent to solve Linear Regression because when we first learn about GD, our first implementation to practice is always linear regression. :-) However, Gradient Descent is definitely an overkill for Linear Regression when the sample size is relatively small.

Kyle: Can you derive the normal equation?

Kyle: Wrong!

Cartman: 😱

Kyle: The above derivation is just a shortcut, not a legit answer to the question. Let's derive it thoroughly using the definition of Linear Regression — minimizing the squared error between the prediction and the truth label.

The reason why we might use Gradient Descent in Linear Regression is that it might be computationally cheaper to find the optima. Though in this case of sample size 30, Cartman could just have derived the closed form on the whiteboard.

Why is Gradient Descent computationally cheaper compared to the normal equation?

Take a look at the normal equation that we just derived. It has a matrix inversion in it, and inverting a matrix is an expensive operation. The design matrix X has k+1 columns, where k is the number of predictors (x1, x2, x3, ...), and m rows of samples (in our case 30). In real life, k is easily greater than 1,000 and the sample size will be greater than 100k. Since matrix inversion is O(n³), inverting X′X (a 1,000 by 1,000 matrix) could take a while to calculate.

Win extra points: When we use the Gradient Descent, we need to scale the data. When we use the Normal Equation, we don't need to. 👍

Python code for implementation addicts:

import numpy as np

def normalEqn(X, y):
    """
    Computes the closed-form solution to linear regression
    """
    theta = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y)
    return theta
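For readers who want the full derivation that Kyle is asking for, here is an outline using the standard least-squares argument (the step-by-step whiteboard images from the original post are not reproduced here, so this is a reconstruction of the usual textbook steps):

J(θ) = (y − Xθ)ᵀ(y − Xθ)

∂J/∂θ = −2Xᵀ(y − Xθ) = 0

XᵀXθ = Xᵀy

θ = (XᵀX)⁻¹Xᵀy

which is exactly the normal equation computed by normalEqn above. As with any use of the normal equation, X is assumed to already contain a leading column of ones so that the first entry of θ is the intercept.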
Shortest Path between Cities | Practice | GeeksforGeeks
Geek lives in a special city where houses are arranged in a hierarchical manner. Starting from house number 1, each house leads to two more houses.
1 leads to 2 and 3.
2 leads to 4 and 5.
3 leads to 6 and 7. and so on.
Given the house numbers of two houses x and y, find the length of the shortest path between them.

Example 1:

Input:
x = 2, y = 6
Output: 3
Explanation:
          1
        /   \
       /     \
      2       3
    /  \     /  \
   4    5   6    7
  / \  / \  / \  / \
 8  9 10 11 12 13 14 15

The length of the shortest path between 2
and 6 is 3. ie 2-> 1-> 3-> 6.

Example 2:

Input:
x = 8, y = 10
Output: 4
Explanation: 8-> 4-> 2-> 5-> 10
The length of the shortest path between 8
and 10 is 4.

Your Task:
You don't need to read input or print anything. Complete the function shortestPath() which takes integers x and y as input parameters and returns the length of the shortest path from x to y.

Expected Time Complexity: O(log(max(x,y)))
Expected Auxiliary Space: O(1)

Constraints:
1 <= x,y <= 10^9

Comments:

harendraseervi123456789 (3 months ago):

int shortestPath(int x, int y){
    int lx = 0;
    int ly = 0;
    while(x != y){
        if(x > y){ x = x/2; lx++; }
        else if(y > x){ y = y/2; ly++; }
    }
    return lx + ly;
}

keshavkumarshivanshu3 (3 months ago):
Simple logic - find the lowest common ancestor, i.e., the nearest common ancestor of both numbers.

int shortestPath(int x, int y){
    if(x == y) return 0;
    if(x > y){
        return 1 + shortestPath(x/2, y);
    } else {
        return 1 + shortestPath(x, y/2);
    }
}

kake1337 (5 months ago):

class Solution{
  public:
    int shortestPath(int x, int y){
        int r = 0;
        while(x != y)
        {
            if(x > y) x /= 2;
            else y /= 2;
            r++;
        }
        return r;
    }
};

abhishekpanwar697 (6 months ago):

int shortestPath(int x, int y){
    int count = 0;
    while(x != y)
    {
        if(x > y) x = x/2;
        else y = y/2;
        count++;
    }
    return count;
}

Dhruv Mishra_1826 (7 months ago):
exe time 0.0/2.2

Meikandanathan Pandian (9 months ago):
Accepted solution in Java -> https://onlinegdb.com/sQAz2...

shady41 (9 months ago):
The parent of any element will be floor(n/2). So we just keep finding the parent of the larger one until they become equal.

int parent(int n){
    return n/2;
}
int shortestPath(int x, int y){
    if(x == y){
        return 0;
    }
    int ans = 0;
    while(x != y){
        ans += 1;
        if(x > y){
            x = parent(x);
        }
        else{
            y = parent(y);
        }
    }
    return ans;
}

Nihal Chaturvedi (11 months ago):
The question says that any number x leads to 2 more houses, 2*x and (2*x)+1. If you remember, this is exactly the case with a heap. In a heap, for any element x the parent is (x-1)//2; here, since we are starting from 1, the parent is x//2. The intuition is that we keep going to the parent of the larger of (x, y) until x == y or one of them reaches 1. After the loop, if they are not equal it means one of them reached 1 (the root), so the other one should also be walked up to the root. If they are equal, there is nothing more to do.

Example: say x = 8 and y = 14. Running the loop:
x=8, y=14: y is larger, so y becomes y//2 = 7 and count becomes 1
x=8, y=7: x is larger, so x becomes x//2 = 4 and count becomes 2
x=4, y=7: y is larger, so y becomes y//2 = 3 and count becomes 3
x=4, y=3: x is larger, so x becomes x//2 = 2 and count becomes 4
x=2, y=3: y is larger, so y becomes y//2 = 1 and count becomes 5
Now x=2, y=1, so the loop ends. x and y are not equal, so since y = 1 we make x 1 as well: x = x//2 = 1 and count becomes 6. Return count.
Rithik Raj (1 year ago):

class Solution{
  public:
    int shortestPath(int x, int y){
        int a = max(x,y);
        int b = min(x,y);
        int cx = 0, cy = 0;
        while(a != b){
            if(a > b){
                a = a/2;
                cx++;
            } else if(b > a){
                b = b/2;
                cy++;
            }
        }
        int ans = cy + cx;
        return ans;
    }
};

Rahul Kumar Vairagade (1 year ago):
Execution Time: 0.01

int shortestPath(int x, int y){
    int spath = 0;
    while(x != y){
        if(x > y)
            x = x/2;
        else
            y = y/2;
        spath++;
    }
    return spath;
}
Difference between HTML and React Event Handling - GeeksforGeeks
30 Jun, 2021

Event handling in HTML and React are different from one another in terms of syntax and some rules. The reason behind this is that React works on the concept of virtual DOM; on the other hand, HTML has access to the Real DOM all the time. We are going to see how we can add events in HTML and how React differs in event handling.

In HTML, we are directly writing the code for the Real DOM, so in order to let the Real DOM know that we are referring to a JavaScript function or method, we need to specify "( )" at the end of the string. If we do not want to go with this approach, there is one more approach using JavaScript: we need to use addEventListener to specify the event and the listener.

Both methods work fine. We have made one button using the first method, "onclick", and one using addEventListener, and both greet the user whenever the user clicks on them. As you can see, the first button does not have any id; we specify the event using the first method, which is "onclick". It is clearly visible that we have provided "greet()" as a string and also provided the parentheses at the end (see the first script tag). The second method uses addEventListener; we have specified the event "click" and given a callback, and we can also give a method name. See this article.

Example:

index.htm

<!DOCTYPE html>
<html lang="en">

<head>
    <meta charset="UTF-8" />
    <meta http-equiv="X-UA-Compatible" content="IE=edge" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <style>
        .btn {
            padding: 20px;
            background-color: blueviolet;
            color: white;
            font-size: 20px;
        }
    </style>

    <!-- script for onclick method -->
    <script>
        var greet = function () {
            window.alert("hello onclick event sent me");
        };
    </script>
</head>

<body>
    <button class="btn" onclick="greet()">
        Greet me using "onclick"
    </button>
    <button id="b1" class="btn">
        Greet me using addEventListener
    </button>

    <!-- Script for addEventListener -->
    <script>
        var button = document.getElementById("b1");
        button.addEventListener("click", () =>
            window.alert("hello addEventListener sent me")
        );
    </script>
</body>

</html>

HTML event listening
References: https://www.geeksforgeeks.org/javascript-addeventlistener-with-examples/
[ { "code": null, "e": 24872, "s": 24844, "text": "\n30 Jun, 2021" }, { "code": null, "e": 25209, "s": 24872, "text": "Event handling in HTML and React are different from one another in terms of syntax and some rules. The reason behind this is that React works on the concept of virtual DOM, on the other hand, the HTML has access to the Real DOM all the time. We are going to see how we can add the events in HTML and how React differs in event handling." }, { "code": null, "e": 25572, "s": 25209, "text": "In HTML, we are directly writing the code for the Real DOM so in order to Real DOM to let know that we are referring to the javascript function or method we need to specify ” ( ) ” at the end of the string. If we do not want to go with this approach, there is one more approach using javascript. We need to use the addEventLisener to specify events and listener." }, { "code": null, "e": 26155, "s": 25572, "text": "Both method works fine, we have made one onclick using the first method and one using addEventlistener which both greet the user whenever the user clicks on that. As you can see the first button is not having any id, we are specifying the event using the first method which is “onclick”. It is clearly visible that we have provided “greet( )” as a string and also provided the parenthesis at the end (see the first script tag). The second method is using addEventListener, we have specified the event “Click” and given a callback, we can also give the method name. See this article." }, { "code": null, "e": 26164, "s": 26155, "text": "Example:" }, { "code": null, "e": 26174, "s": 26164, "text": "index.htm" }, { "code": "<!DOCTYPE html><html lang=\"en\"> <head> <meta charset=\"UTF-8\" /> <meta http-equiv=\"X-UA-Compatible\" content=\"IE=edge\" /> <meta name=\"viewport\" content= \"width=device-width, initial-scale=1.0\" /> <style> .btn { padding: 20px; background-color: blueviolet; color: white; font-size: 20px; } </style> <!-- script for onclick method --> <script> var greet = function () { window.alert(\"hello onclick event sent me\"); }; </script></head> <body> <button class=\"btn\" onclick=\"greet()\"> Greet me using \"onclick\" </button> <button id=\"b1\" class=\"btn\"> Greet me using addEventListener </button> <!-- Script for addevnetListner --> <script> var button = document.getElementById(\"b1\"); button.addEventListener(\"click\", () => window.alert(\"hello addevnetlistner sent me\") ); </script></body> </html>", "e": 27149, "s": 26174, "text": null }, { "code": null, "e": 27170, "s": 27149, "text": "HTML event listening" }, { "code": null, "e": 27542, "s": 27172, "text": "In React. we use the concept of virtual DOM, so all the events need to specify at the time of creating the component. Here in App.js file, we have defined one component App, which is having a button. We have used “onClick” event and we are providing a method name instead of a string. As in JSX, we specify javascript in “{ }” that is why the method name is in the { }." 
}, { "code": null, "e": 27596, "s": 27542, "text": "You can create React app using the following command:" }, { "code": null, "e": 27630, "s": 27596, "text": "npx create-react-app nameoftheapp" }, { "code": null, "e": 27651, "s": 27630, "text": "react file directory" }, { "code": null, "e": 27660, "s": 27651, "text": "Example:" }, { "code": null, "e": 27667, "s": 27660, "text": "App.js" }, { "code": "import React from 'react' export default function App() { const greet = () => { window.alert(\"onClick in React sent me\"); } return ( <div> <button className=\"btn\" onClick={greet}> Greet me using onClick React </button> </div> )}", "e": 27930, "s": 27667, "text": null }, { "code": null, "e": 27980, "s": 27930, "text": "You can run your app using the following command:" }, { "code": null, "e": 27990, "s": 27980, "text": "npm start" }, { "code": null, "e": 28011, "s": 27990, "text": "React event handling" }, { "code": null, "e": 28139, "s": 28011, "text": "In HTML, we specify event in html tags like onclick, onsubmit etc. and pass the string that contain the parenthesis at the end." }, { "code": null, "e": 28229, "s": 28139, "text": "In html, we can also add them afterword using external javascript using addEventListener." }, { "code": null, "e": 28295, "s": 28229, "text": "In React, we specify event at the time of creating our component." }, { "code": null, "e": 28358, "s": 28295, "text": "we use camel case convention in React i. e. onClick, onSubmit." }, { "code": null, "e": 28426, "s": 28358, "text": "In React, we bind them using method name only like onClick={greet}." }, { "code": null, "e": 28478, "s": 28426, "text": "addEventListener cannot be used in React component." }, { "code": null, "e": 28581, "s": 28478, "text": "These are some key differences in event handling we are going to see them in detail with the examples." }, { "code": null, "e": 28666, "s": 28581, "text": "References: https://www.geeksforgeeks.org/javascript-addeventlistener-with-examples/" }, { "code": null, "e": 28803, "s": 28666, "text": "Attention reader! Don’t stop learning now. Get hold of all the important HTML concepts with the Web Design for Beginners | HTML course." }, { "code": null, "e": 28818, "s": 28803, "text": "HTML-Questions" }, { "code": null, "e": 28825, "s": 28818, "text": "Picked" }, { "code": null, "e": 28841, "s": 28825, "text": "React-Questions" }, { "code": null, "e": 28860, "s": 28841, "text": "Difference Between" }, { "code": null, "e": 28865, "s": 28860, "text": "HTML" }, { "code": null, "e": 28873, "s": 28865, "text": "ReactJS" }, { "code": null, "e": 28890, "s": 28873, "text": "Web Technologies" }, { "code": null, "e": 28895, "s": 28890, "text": "HTML" }, { "code": null, "e": 28993, "s": 28895, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 29054, "s": 28993, "text": "Difference between var, let and const keywords in JavaScript" }, { "code": null, "e": 29122, "s": 29054, "text": "Difference Between Method Overloading and Method Overriding in Java" }, { "code": null, "e": 29180, "s": 29122, "text": "Difference between Prim's and Kruskal's algorithm for MST" }, { "code": null, "e": 29235, "s": 29180, "text": "Difference between Internal and External fragmentation" }, { "code": null, "e": 29309, "s": 29235, "text": "Differences and Applications of List, Tuple, Set and Dictionary in Python" }, { "code": null, "e": 29371, "s": 29309, "text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills" }, { "code": null, "e": 29421, "s": 29371, "text": "How to insert spaces/tabs in text using HTML/CSS?" }, { "code": null, "e": 29481, "s": 29421, "text": "How to set the default value for an HTML <select> element ?" }, { "code": null, "e": 29529, "s": 29481, "text": "How to update Node.js and NPM to next version ?" } ]
How to create a progress bar in HTML?
Use the <progress> tag to create a progress bar in HTML. The HTML <progress> tag represents the completion progress of a task and is displayed as a progress bar. The value of the progress bar can also be manipulated with JavaScript (a short sketch of this follows the example below).

The tag supports two main attributes: value, the amount of the task that has been completed, and max, the total amount of work required.

You can try to run the following code to learn how to create a progress bar in a web page:

<!DOCTYPE html>
<html>
   <head>
      <title>HTML Progress Tag</title>
   </head>
   <body>
      <h1>Loading</h1>
      <progress value="65" max="100"></progress>
   </body>
</html>
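Since the value can be driven from JavaScript, the following is a small illustrative sketch (not part of the original lesson) that fills the bar over time. The element id and the timing values are arbitrary choices for the example.

<!DOCTYPE html>
<html>
   <body>
      <h1>Loading</h1>
      <progress id="load" value="0" max="100"></progress>
      <script>
         // Illustrative only: bump the value every 500 ms until it reaches max.
         var bar = document.getElementById("load");
         var timer = setInterval(function () {
            if (bar.value >= bar.max) {
               clearInterval(timer);
            } else {
               bar.value += 5;
            }
         }, 500);
      </script>
   </body>
</html>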
[ { "code": null, "e": 1281, "s": 1062, "text": "Use the <progress> tag to create a progress bar in HTML. The HTML <progress> tag specifies a completion progress of a task. It is displayed as a progress bar. The value of progress bar can be manipulated by JavaScript." }, { "code": null, "e": 1316, "s": 1281, "text": "The following are the attributes −" }, { "code": null, "e": 1408, "s": 1316, "text": "You can try to run the following code to learn how to create a progress bar in a web page −" }, { "code": null, "e": 1586, "s": 1408, "text": "<!DOCTYPE html>\n<html>\n <head>\n <title>HTML Progress Tag</title>\n </head>\n <body>\n <h1>Loading</h1>\n <progress value = \"65\" max = \"100\"/>\n </body>\n</html>" } ]
Groovy - for-in statement
The for-in statement is used to iterate through a set of values. The for-in statement is generally used in the following way:

for(variable in range) {
   statement #1
   statement #2
   ...
}

The following diagram shows the diagrammatic explanation of this loop.

Following is an example of a for-in statement −

class Example {
   static void main(String[] args) {
      int[] array = [0,1,2,3];

      for(int i in array) {
         println(i);
      }
   }
}

In the above example, we first initialize an array of integers with the 4 values 0, 1, 2 and 3. We then use the for-in statement to define a variable i which iterates through all of the integers in the array and prints each value. The output of the above code would be −

0
1
2
3

The for-in statement can also be used to loop through ranges. The following example shows how this can be accomplished.

class Example {
   static void main(String[] args) {

      for(int i in 1..5) {
         println(i);
      }

   }
}

In the above example, we are looping through a range defined from 1 to 5 and printing each value in the range. The output of the above code would be −

1
2
3
4
5

The for-in statement can also be used to loop through Maps. The following example shows how this can be accomplished.

class Example {
   static void main(String[] args) {
      def employee = ["Ken" : 21, "John" : 25, "Sally" : 22];

      for(emp in employee) {
         println(emp);
      }
   }
}

In the above example, we are looping through a map which has a defined set of key-value entries. The output of the above code would be −

Ken = 21
John = 25
Sally = 22
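As a small additional illustration (not part of the original tutorial), each element produced when looping over a map is a map entry, so its key and value can be read separately:

class Example {
   static void main(String[] args) {
      def employee = ["Ken" : 21, "John" : 25, "Sally" : 22];

      // Each 'emp' is a map entry, so key and value are available individually.
      for(emp in employee) {
         println(emp.key + " is " + emp.value);
      }
   }
}

This would print lines such as "Ken is 21" for each entry in the map.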
[ { "code": null, "e": 2364, "s": 2238, "text": "The for-in statement is used to iterate through a set of values. The for-in statement is generally used in the following way." }, { "code": null, "e": 2435, "s": 2364, "text": "for(variable in range) { \n statement #1 \n statement #2 \n ... \n}\n" }, { "code": null, "e": 2506, "s": 2435, "text": "The following diagram shows the diagrammatic explanation of this loop." }, { "code": null, "e": 2554, "s": 2506, "text": "Following is an example of a for-in statement −" }, { "code": null, "e": 2712, "s": 2554, "text": "class Example { \n static void main(String[] args) { \n int[] array = [0,1,2,3]; \n\t\t\n for(int i in array) { \n println(i); \n } \n } \n}" }, { "code": null, "e": 3017, "s": 2712, "text": "In the above example, we are first initializing an array of integers with 4 values of 0,1,2 and 3. We are then using our for loop statement to first define a variable i which then iterates through all of the integers in the array and prints the values accordingly. The output of the above code would be −" }, { "code": null, "e": 3029, "s": 3017, "text": "0 \n1 \n2 \n3\n" }, { "code": null, "e": 3149, "s": 3029, "text": "The for-in statement can also be used to loop through ranges. The following example shows how this can be accomplished." }, { "code": null, "e": 3272, "s": 3149, "text": "class Example {\n static void main(String[] args) {\n\t\n for(int i in 1..5) {\n println(i);\n }\n\t\t\n } \n} " }, { "code": null, "e": 3445, "s": 3272, "text": "In the above example, we are actually looping through a range which is defined from 1 to 5 and printing the each value in the range. The output of the above code would be −" }, { "code": null, "e": 3461, "s": 3445, "text": "1 \n2 \n3 \n4 \n5 \n" }, { "code": null, "e": 3580, "s": 3461, "text": "The for-in statement can also be used to loop through Map’s. The following example shows how this can be accomplished." }, { "code": null, "e": 3765, "s": 3580, "text": "class Example {\n static void main(String[] args) {\n def employee = [\"Ken\" : 21, \"John\" : 25, \"Sally\" : 22];\n\t\t\n for(emp in employee) {\n println(emp);\n }\n }\n}" }, { "code": null, "e": 3911, "s": 3765, "text": "In the above example, we are actually looping through a map which has a defined set of key value entries. The output of the above code would be −" }, { "code": null, "e": 3945, "s": 3911, "text": "Ken = 21 \nJohn = 25 \nSally = 22 \n" }, { "code": null, "e": 3978, "s": 3945, "text": "\n 52 Lectures \n 8 hours \n" }, { "code": null, "e": 3996, "s": 3978, "text": " Krishna Sakinala" }, { "code": null, "e": 4031, "s": 3996, "text": "\n 49 Lectures \n 2.5 hours \n" }, { "code": null, "e": 4049, "s": 4031, "text": " Packt Publishing" }, { "code": null, "e": 4056, "s": 4049, "text": " Print" }, { "code": null, "e": 4067, "s": 4056, "text": " Add Notes" } ]
How to group data by time intervals in Python Pandas? | by Ankit Goel | Towards Data Science
If you have ever dealt with time-series data analysis, you will have come across problems like these:

Combining data into certain intervals, for example based on each day, week, or month.
Aggregating data over a time interval, for example, if you are dealing with price data, the total amount added in an hour or a day.
Finding patterns for other features in the dataset based on a time interval.

In this article, you will learn how you can solve these problems with just one line of code using two different pandas APIs, i.e. resample() and Grouper().

As we know, the best way to learn something is to start applying it. So, I am going to use a sample time-series dataset provided by World Bank Open Data, containing crowd-sourced price data collected from 15 countries. For more details about the data, refer to the Crowdsourced Price Data Collection Pilot. For this exercise, we are going to use the data collected for Argentina.

📚 Resources: Google Colab Implementation | Github Repository | Dataset 📚

This data was collected by different contributors who participated in the survey conducted by the World Bank in 2015. The basic idea of the survey was to collect prices for different goods and services in different countries. We are going to use only a few columns from the dataset for demo purposes.

Pandas provides an API named resample() which can be used to resample the data into different intervals. Let's see a few examples of how we can use it.

Let's say we need to find how much amount was added by a contributor in an hour. We can simply do so using:

# data re-sampled based on an hour
data.resample('H', on='created_at').price.sum()

# output
created_at
2015-12-14 18:00:00    5449.90
2015-12-14 19:00:00      15.98
2015-12-14 20:00:00      66.98
2015-12-14 21:00:00       0.00
2015-12-14 22:00:00       0.00

Here is what we are doing:

First, we resampled the data into an hour ('H') frequency for our date column, i.e. created_at. We can use different frequencies; I will go through a few of them in this article. Check out Pandas Time Frequencies for a complete list of frequencies. You can even go down to nanoseconds.
After this, we selected the 'price' column from the resampled data. Later we will see how we can aggregate on multiple fields, i.e. total amount, quantity, and the unique number of items, in a single command.
Finally, we computed the sum of all the prices. This gives us the total amount added in that hour.

By default, the time interval starts from the start of the hour, i.e. the 0th minute, like 18:00, 19:00, and so on.
We can change that to start from a different minute of the hour using the offset attribute, like this:

# Starting at 15 minutes 10 seconds for each hour
data.resample('H', on='created_at', offset='15Min10s').price.sum()

# Output
created_at
2015-12-14 17:15:10    5370.00
2015-12-14 18:15:10      79.90
2015-12-14 19:15:10      64.56
2015-12-14 20:15:10      18.40
2015-12-14 21:15:10       0.00

Please note, you need pandas version 1.1.0 or later for the offset argument to work.

In this example, we will see how we can resample the data based on each week.

# data re-sampled based on each week, just change the frequency
data.resample('W', on='created_at').price.sum()

# output
created_at
2015-12-20    4.305638e+04
2015-12-27    6.733851e+04
2016-01-03    4.443459e+04
2016-01-10    1.822236e+04
2016-01-17    1.908385e+05

By default, the week starts on Sunday. We can change that to a different day; for example, if we would like to combine based on the week starting on Monday, we can do so using:

# data re-sampled based on each week, week anchored to Monday
data.resample('W-MON', on='created_at').price.sum()

# output
created_at
2015-12-14    5.532860e+03
2015-12-21    3.850762e+04
2015-12-28    6.686329e+04
2016-01-04    5.392410e+04
2016-01-11    1.260869e+04

This is similar to what we have done in the examples before.

# data re-sampled based on each month
data.resample('M', on='created_at').price.sum()

# Output
created_at
2015-12-31    1.538769e+05
2016-01-31    4.297143e+05
2016-02-29    9.352684e+05
2016-03-31    7.425185e+06
2016-04-30    1.384351e+07

One observation to note here is that the output labels for each month are based on the last day of the month. We can use the 'MS' frequency to label them from the first day of the month instead, i.e. instead of 2015-12-31 it would be 2015-12-01:

# month frequency from start of the month
data.resample('MS', on='created_at').price.sum()

created_at
2015-12-01    1.538769e+05
2016-01-01    4.297143e+05
2016-02-01    9.352684e+05
2016-03-01    7.425185e+06
2016-04-01    1.384351e+07

Often we need to apply different aggregations to different columns. In our example we might need to find:

Unique items that were added in each hour.
The total quantity that was added in each hour.
The total amount that was added in each hour.

We can do so in one line by using agg() on the resampled data. Let's see how we can do it:

# aggregating multiple fields for each hour
data.resample('H', on='created_at').agg({'price': 'sum', 'quantity': 'sum', 'item_code': 'nunique'})

In the above examples, we re-sampled the data and applied aggregations on it. What if we would like to group data by other fields in addition to the time interval? Pandas provides the Grouper() API, which can help us to do that. In this section, we will see how we can group data on different fields and analyze them for different intervals.
Let's say we need to analyze the data based on store type for each month. We can do so using:

# Grouping data based on month and store type
data.groupby([pd.Grouper(key='created_at', freq='M'), 'store_type']).price.sum().head(15)

# Output
created_at  store_type
2015-12-31  other                          34300.00
            public_semi_public_service       833.90
            small_medium_shop               2484.23
            specialized_shop              107086.00
2016-01-31  market                           473.75
            other                         314741.00
            private_service_provider         325.00
            public_semi_public_service       276.79
            small_medium_shop              31042.79
            specialized_shop               29648.44
2016-02-29  market                          1974.04
            other                         527950.00
            private_service_provider        1620.00
            public_semi_public_service      1028.52
            small_medium_shop             224653.83

Let's understand how I did it:

First, we passed the Grouper object as part of the groupby statement, which groups the data based on month, i.e. the 'M' frequency. This is similar to resample(), so whatever we discussed above applies here as well.
We added store_type to the groupby so that for each month we can see the different store types.
For each group, we selected the price, calculated the sum, and selected the top 15 rows.

As we did in the last example, we can do a similar thing for item_name as well.

# Grouping data based on each month and item_name
data.groupby([pd.Grouper(key='created_at', freq='M'), 'item_name']).price.sum()

# Output
created_at  item_name
2015-12-31  Bar soap, solid, SB                                33.17
            Beer, domestic brand, single bottle, WKB           29.79
            Black tea, BL                                      12.00
            Black tea, in bags, WKB                            60.99
            Bread, white, sliced, WKB                          85.45
                                                                 ...
2016-08-31  Wheat flour, not self-rising, BL                  150.38
            White sugar, WKB                                  266.47
            Women's haircut, basic hairdresser               7730.00
            Wrist-watch, men's, CITIZEN Eco-Drive BM6060    52205.00
            Yoghurt, plain, WKB                               150.96

We can apply aggregations on multiple fields in a similar way to what we did with resample(). The differences here are that the data is grouped by store_type as well, and that we can use named aggregation (assigning a name to each aggregation) on the groupby object, which does not work for resample.

# grouping data and named aggregation on item_code, quantity, and price
data.groupby([pd.Grouper(key='created_at', freq='M'), 'store_type']).agg(
    unique_items=('item_code', 'nunique'),
    total_quantity=('quantity', 'sum'),
    total_amount=('price', 'sum'))

I hope this article will help you save time when analyzing time-series data. I recommend you check out the documentation for the resample() and Grouper() APIs to learn about the other things you can do with them. A condensed end-to-end sketch, from loading the CSV through to the grouped aggregation, is included after the closing notes below.

If you would like to learn about other pandas APIs which can help you with data analysis tasks, then do check out the article Pandas: Put Away Novice Data Analyst Status, where I explain different things that you can do with pandas.

Let me know in the comments or ping me on LinkedIn if you are facing any problems with using pandas or data analysis in general. We can try to solve them together. That's all for now, see you in the next article. Cheers!!! Stay Safe!!! Keep Learning!!!
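As promised above, here is a condensed sketch of the whole workflow. It is illustrative only: the CSV file name is an assumption, and the column names (created_at, price, quantity, item_code, store_type) are simply the ones used in the examples in this article.

import pandas as pd

# Placeholder file name; parse the timestamp column while reading.
data = pd.read_csv('argentina_prices.csv', parse_dates=['created_at'])

# Hourly totals, as in the first example.
hourly_amount = data.resample('H', on='created_at').price.sum()

# Monthly aggregation per store type with named aggregations.
monthly_by_store = (
    data.groupby([pd.Grouper(key='created_at', freq='M'), 'store_type'])
        .agg(unique_items=('item_code', 'nunique'),
             total_quantity=('quantity', 'sum'),
             total_amount=('price', 'sum'))
)

print(hourly_amount.head())
print(monthly_by_store.head())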
[ { "code": null, "e": 155, "s": 47, "text": "If you have ever dealt with Time-Series data analysis, you would have come across these problems for sure —" }, { "code": null, "e": 450, "s": 155, "text": "Combining data into certain intervals like based on each day, a week, or a month.Aggregating data in the time interval like if you are dealing with price data then problems like total amount added in an hour, or a day.Finding patterns for other features in the dataset based on a time interval." }, { "code": null, "e": 532, "s": 450, "text": "Combining data into certain intervals like based on each day, a week, or a month." }, { "code": null, "e": 670, "s": 532, "text": "Aggregating data in the time interval like if you are dealing with price data then problems like total amount added in an hour, or a day." }, { "code": null, "e": 747, "s": 670, "text": "Finding patterns for other features in the dataset based on a time interval." }, { "code": null, "e": 912, "s": 747, "text": "In this article, you will learn about how you can solve these problems with just one-line of code using only 2 different Pandas API’s i.e. resample() and Grouper()." }, { "code": null, "e": 1291, "s": 912, "text": "As we know, the best way to learn something is to start applying it. So, I am going to use a sample time-series dataset provided by World Bank Open data and is related to the crowd-sourced price data collected from 15 countries. For more details about the data, refer Crowdsourced Price Data Collection Pilot. For this exercise, we are going to use data collected for Argentina." }, { "code": null, "e": 1364, "s": 1291, "text": "📚 Resources: Google Colab Implementation | Github Repository | Dataset 📚" }, { "code": null, "e": 1678, "s": 1364, "text": "This data is collected by different contributors who participated in the survey conducted by the World Bank in the year 2015. The basic idea of the survey was to collect prices for different goods and services in different countries. We are going to use only a few columns from the dataset for the demo purposes —" }, { "code": null, "e": 1836, "s": 1678, "text": "Pandas provides an API named as resample() which can be used to resample the data into different intervals. Let’s see a few examples of how we can use this —" }, { "code": null, "e": 1945, "s": 1836, "text": "Let’s say we need to find how much amount was added by a contributor in an hour, we can simply do so using —" }, { "code": null, "e": 2200, "s": 1945, "text": "# data re-sampled based on an hourdata.resample('H', on='created_at').price.sum()# outputcreated_at2015-12-14 18:00:00 5449.902015-12-14 19:00:00 15.982015-12-14 20:00:00 66.982015-12-14 21:00:00 0.002015-12-14 22:00:00 0.00" }, { "code": null, "e": 2233, "s": 2200, "text": "Here is what we are doing here —" }, { "code": null, "e": 2804, "s": 2233, "text": "First, we resampled the data into an hour ‘H’ frequency for our date column i.e. created_at. We can use different frequencies, I will go through a few of them in this article. Check out Pandas Time Frequencies for a complete list of frequencies. You can even go up to nanoseconds.After this, we selected the ‘price’ from the resampled data. Later we will see how we can aggregate on multiple fields i.e. total amount, quantity, and the unique number of items in a single command.Computed the sum for all the prices. This will give us the total amount added in that hour." 
}, { "code": null, "e": 3085, "s": 2804, "text": "First, we resampled the data into an hour ‘H’ frequency for our date column i.e. created_at. We can use different frequencies, I will go through a few of them in this article. Check out Pandas Time Frequencies for a complete list of frequencies. You can even go up to nanoseconds." }, { "code": null, "e": 3285, "s": 3085, "text": "After this, we selected the ‘price’ from the resampled data. Later we will see how we can aggregate on multiple fields i.e. total amount, quantity, and the unique number of items in a single command." }, { "code": null, "e": 3377, "s": 3285, "text": "Computed the sum for all the prices. This will give us the total amount added in that hour." }, { "code": null, "e": 3587, "s": 3377, "text": "By default, the time interval starts from the starting of the hour i.e. the 0th minute like 18:00, 19:00, and so on. We can change that to start from different minutes of the hour using offset attribute like —" }, { "code": null, "e": 3876, "s": 3587, "text": "# Starting at 15 minutes 10 seconds for each hourdata.resample('H', on='created_at', offset='15Min10s').price.sum()# Outputcreated_at2015-12-14 17:15:10 5370.002015-12-14 18:15:10 79.902015-12-14 19:15:10 64.562015-12-14 20:15:10 18.402015-12-14 21:15:10 0.00" }, { "code": null, "e": 3959, "s": 3876, "text": "Please note, you need to have Pandas version > 1.10 for the above command to work." }, { "code": null, "e": 4037, "s": 3959, "text": "In this example, we will see how we can resample the data based on each week." }, { "code": null, "e": 4299, "s": 4037, "text": "# data re-sampled based on an each week, just change the frequencydata.resample('W', on='created_at').price.sum()# outputcreated_at2015-12-20 4.305638e+042015-12-27 6.733851e+042016-01-03 4.443459e+042016-01-10 1.822236e+042016-01-17 1.908385e+05" }, { "code": null, "e": 4490, "s": 4299, "text": "By default, the week starts from Sunday, we can change that to start from different days i.e. let’s say if we would like to combine based on the week starting on Monday, we can do so using —" }, { "code": null, "e": 4751, "s": 4490, "text": "# data re-sampled based on an each week, week starting Mondaydata.resample('W-MON', on='created_at').price.sum()# outputcreated_at2015-12-14 5.532860e+032015-12-21 3.850762e+042015-12-28 6.686329e+042016-01-04 5.392410e+042016-01-11 1.260869e+04" }, { "code": null, "e": 4812, "s": 4751, "text": "This is similar to what we have done in the examples before." }, { "code": null, "e": 5045, "s": 4812, "text": "# data re-sampled based on each monthdata.resample('M', on='created_at').price.sum()# Outputcreated_at2015-12-31 1.538769e+052016-01-31 4.297143e+052016-02-29 9.352684e+052016-03-31 7.425185e+062016-04-30 1.384351e+07" }, { "code": null, "e": 5275, "s": 5045, "text": "One observation to note here is that the output labels for each month are based on the last day of the month, we can use the ‘MS’ frequency to start it from 1st day of the month i.e. 
instead of 2015–12–31 it would be 2015–12–01 —" }, { "code": null, "e": 5505, "s": 5275, "text": "# month frequency from start of the monthdata.resample('MS', on='created_at').price.sum()created_at2015-12-01 1.538769e+052016-01-01 4.297143e+052016-02-01 9.352684e+052016-03-01 7.425185e+062016-04-01 1.384351e+07" }, { "code": null, "e": 5616, "s": 5505, "text": "Often we need to apply different aggregations on different columns like in our example we might need to find —" }, { "code": null, "e": 5751, "s": 5616, "text": "Unique items that were added in each hour.The total quantity that was added in each hour.The total amount that was added in each hour." }, { "code": null, "e": 5794, "s": 5751, "text": "Unique items that were added in each hour." }, { "code": null, "e": 5842, "s": 5794, "text": "The total quantity that was added in each hour." }, { "code": null, "e": 5888, "s": 5842, "text": "The total amount that was added in each hour." }, { "code": null, "e": 5982, "s": 5888, "text": "We can do so in a one-line by using agg() on the resampled data. Let’s see how we can do it —" }, { "code": null, "e": 6122, "s": 5982, "text": "# aggregating multiple fields for each hourdata.resample('H', on='created_at').agg({'price':'sum', 'quantity':'sum','item_code':'nunique'})" }, { "code": null, "e": 6353, "s": 6122, "text": "In the above examples, we re-sampled the data and applied aggregations on it. What if we would like to group data by other fields in addition to time-interval? Pandas provide an API known as grouper() which can help us to do that." }, { "code": null, "e": 6466, "s": 6353, "text": "In this section, we will see how we can group data on different fields and analyze them for different intervals." }, { "code": null, "e": 6557, "s": 6466, "text": "Let’s say we need to analyze data based on store type for each month, we can do so using —" }, { "code": null, "e": 7503, "s": 6557, "text": "# Grouping data based on month and store typedata.groupby([pd.Grouper(key='created_at', freq='M'), 'store_type']).price.sum().head(15)# Outputcreated_at store_type 2015-12-31 other 34300.00 public_semi_public_service 833.90 small_medium_shop 2484.23 specialized_shop 107086.002016-01-31 market 473.75 other 314741.00 private_service_provider 325.00 public_semi_public_service 276.79 small_medium_shop 31042.79 specialized_shop 29648.442016-02-29 market 1974.04 other 527950.00 private_service_provider 1620.00 public_semi_public_service 1028.52 small_medium_shop 224653.83" }, { "code": null, "e": 7535, "s": 7503, "text": "Let’s understand how I did it —" }, { "code": null, "e": 7924, "s": 7535, "text": "First, we passed the Grouper object as part of the groupby statement which groups the data based on month i.e. ‘M’ frequency. This is similar to resample(), so whatever we discussed above applies here as well.We added store_type to the groupby so that for each month we can see different store types.For each group, we selected the price, calculated the sum, and selected the top 15 rows." }, { "code": null, "e": 8134, "s": 7924, "text": "First, we passed the Grouper object as part of the groupby statement which groups the data based on month i.e. ‘M’ frequency. This is similar to resample(), so whatever we discussed above applies here as well." }, { "code": null, "e": 8226, "s": 8134, "text": "We added store_type to the groupby so that for each month we can see different store types." 
}, { "code": null, "e": 8315, "s": 8226, "text": "For each group, we selected the price, calculated the sum, and selected the top 15 rows." }, { "code": null, "e": 8395, "s": 8315, "text": "As we did in the last example, we can do a similar thing for item_name as well." }, { "code": null, "e": 9336, "s": 8395, "text": "# Grouping data based on each month and item_namedata.groupby([pd.Grouper(key='created_at', freq='M'), 'item_name']).price.sum()# Outputcreated_at item_name 2015-12-31 Bar soap, solid, SB 33.17 Beer, domestic brand, single bottle, WKB 29.79 Black tea, BL 12.00 Black tea, in bags, WKB 60.99 Bread, white, sliced, WKB 85.45 ... 2016-08-31 Wheat flour, not self-rising, BL 150.38 White sugar, WKB 266.47 Women's haircut, basic hairdresser 7730.00 Wrist-watch, men's, CITIZEN Eco-Drive BM6060 52205.00 Yoghurt, plain, WKB 150.96" }, { "code": null, "e": 9643, "s": 9336, "text": "We can apply aggregation on multiple fields similarly the way we did using resample(). The only thing which is different here is that the data would be grouped by store_type as well and also, we can do NamedAggregation (assign a name to each aggregation) on groupby object which doesn’t work for re-sample." }, { "code": null, "e": 9907, "s": 9643, "text": "# grouping data and named aggregation on item_code, quantity, and pricedata.groupby([pd.Grouper(key='created_at', freq='M'), 'store_type']).agg(unique_items=('item_code', 'nunique'), total_quantity=('quantity','sum'), total_amount=('price','sum'))" }, { "code": null, "e": 10118, "s": 9907, "text": "I hope this article will help you to save time in analyzing time-series data. I recommend you to check out the documentation for the resample() and grouper() API to know about other things you can do with them." }, { "code": null, "e": 10351, "s": 10118, "text": "If you would like to learn about other Pandas API’s which can help you with data analysis tasks then do checkout the article Pandas: Put Away Novice Data Analyst Status where I explained different things that you can do with Pandas." }, { "code": null, "e": 10564, "s": 10351, "text": "Let me know in the comments or ping me on LinkedIn if you are facing any problems with using Pandas or Data Analysis in general. We can try to solve them together. That’s all for now, see you in the next article." } ]
Discovering your Music Taste with Python and Spotify API | by 👩🏻‍💻 Kessie Zhang | Towards Data Science
It's almost the end of 2020! If you have been using Spotify for years, you probably know that at the end of each year Spotify provides premium users with personalized insights, such as your favorite songs and artists and how much time you spent on the service. As a data scientist, I wanted to take a look at the audio features of all the songs from my Discover Weekly playlist and see which musical features I enjoy the most, based on my listening history on Spotify.

If you are new to APIs, you are probably wondering what an API is and does. In short, you can think of an API as a shortcut into a web service's database. It allows programmers to send and receive data without being given full permission on the entire database. Check out the Spotify Web API documentation to learn more about what you can do with the API. The following sections walk through the steps.

To get started, you will need to log into your Spotify account or create a new Spotify account. Then you can go to your dashboard page.

Now you can select create an app. Once this is done, you will receive both a client ID and a client secret. Although we're not trying to create an app, we need these credentials to access the data.

In this article, I wanted to extract the data for all the songs in my Discover Weekly playlist. Feel free to create your own playlist if you want to work with different playlist data instead. Here's how you can get the playlist's id:

The Spotify URI should look something like this: spotify:playlist:xxxxxxxxxxxxx.

Spotipy is "a lightweight Python library for the Spotify Web API." With Spotipy, we can get full access to all of the music data provided by the Spotify platform. It's very simple to use. You can install the package using this command:

!pip install spotipy

This is what the data frame looks like. As shown below, some features are very small while others are very big. In this case, the average values of the features are not immediately comparable: variables that are measured at different scales do not contribute equally to the analysis and might end up introducing biases. We need a way to apply feature scaling and compare the data points.

I decided to use normalization, a scaling technique in which values are shifted and rescaled so that they end up ranging between 0 and 1. It is also known as Min-Max scaling.

from sklearn.preprocessing import MinMaxScaler

min_max_scaler = MinMaxScaler()
music_feature.loc[:] = min_max_scaler.fit_transform(music_feature.loc[:])

Radar charts are most effective when they compare various features. Based on the radar chart, you can tell I am a fan of acoustic music! 🎼🎹

Interested in making your own music taste radar chart? Follow the code below!

There you have it! Now you know how to extract data using Spotify's API, Python, and Spotipy. As a next step, we could analyze and visualize Spotify's data in different ways, such as building your own Spotify recommendation engine or visualizing your music taste over time. I hope you find some inspiration here. And please feel free to share your exciting project ideas in the comment section. Until next time, happy learning! 👩🏻‍💻

If you find this helpful, please follow me and check out my other blogs. ❤️
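The original gists are not embedded in this copy of the article, so below is a hedged sketch of how the data pull and the radar chart could look with spotipy and matplotlib. The credential placeholders, the playlist ID and the exact list of audio features are assumptions; adapt them to your own account and taste.

import pandas as pd
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

# Replace the placeholders with the values from your Spotify dashboard.
auth = SpotifyClientCredentials(client_id='YOUR_CLIENT_ID',
                                client_secret='YOUR_CLIENT_SECRET')
sp = spotipy.Spotify(client_credentials_manager=auth)

playlist_id = 'YOUR_PLAYLIST_ID'
tracks = sp.playlist_tracks(playlist_id)['items']
track_ids = [t['track']['id'] for t in tracks]

# audio_features() accepts up to 100 track IDs per call.
features = sp.audio_features(track_ids)
music_feature = pd.DataFrame(features)[
    ['acousticness', 'danceability', 'energy',
     'instrumentalness', 'liveness', 'speechiness', 'valence']
]

After normalizing the features as shown above, a simple polar plot turns the averaged values into a radar chart:

import numpy as np
import matplotlib.pyplot as plt

# Average each (already normalized) feature and close the polygon.
labels = music_feature.columns
values = music_feature.mean().tolist()
values += values[:1]

angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={'polar': True})
ax.plot(angles, values, color='teal', linewidth=2)
ax.fill(angles, values, color='teal', alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
ax.set_yticklabels([])
plt.show()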
[ { "code": null, "e": 634, "s": 171, "text": "It’s almost the end of 2020! If you have been using Spotify for years, you probably know at the end of each year, Spotify will provide premium users personalized insights, such as your favorite songs and artists, and how much time you spent on the services, etc. As a data scientist, I wanted to take a look at all the songs’ audio features from the Discovery Weekly playlist and see what music features I enjoy the most based on my listening history on Spotify." }, { "code": null, "e": 999, "s": 634, "text": "If you are new to API, you probably are wondering what an API is and does. In short, you can think of API as a shortcut into a web service’s database. It allows programmers to send and receive data without giving them full permission on the entire database. Check out Spotify Web API documentation to know more about what you can do with the API. In the following," }, { "code": null, "e": 1135, "s": 999, "text": "To get started, you will need to log into your Spotify account or create a new Spotify account. Then you can go to your dashboard page." }, { "code": null, "e": 1242, "s": 1135, "text": "Now you can select create an app. Once this is done, you will receive both client ID and client secret ID." }, { "code": null, "e": 1339, "s": 1242, "text": "Although we’re not trying to create an app, we will need this client ID to access the same data." }, { "code": null, "e": 1533, "s": 1339, "text": "In this article, I wanted to extract the data from all the songs in my Discovery Weekly playlist. Feel free to create your own playlist if you want to work on a different playlist data instead." }, { "code": null, "e": 1575, "s": 1533, "text": "Here’s how you can get the playlist’s id:" }, { "code": null, "e": 1651, "s": 1575, "text": "The Spotify URI should look something this: spotify:playlist:xxxxxxxxxxxxx." }, { "code": null, "e": 1908, "s": 1651, "text": "Spotipy is “a lightweight Python library for the Spotify Web API.” With Spotipy, we can get full access to all of the music data provided by the Spotify platform. It’s very simple to use. You can install the package using this command.!pip install spotipy." }, { "code": null, "e": 1948, "s": 1908, "text": "This is what the data frame looks like." }, { "code": null, "e": 2319, "s": 1948, "text": "As shown below, some features are very small while some features are very big. In this case, taking the average values of all the features might not be immediately comparable. Variables that are measured at different scales do not contribute equally to the analysis and might end up introducing biases. We need a way to apply feature scaling and compare the data points." }, { "code": null, "e": 2503, "s": 2319, "text": "I decided to use normalization, which is a scaling technique in which values are shifted and rescaled so that they end up ranging between 0 and 1. It is also known as Min-Max scaling." }, { "code": null, "e": 2652, "s": 2503, "text": "from sklearn.preprocessing import MinMaxScalermin_max_scaler = MinMaxScaler()music_feature.loc[:]=min_max_scaler.fit_transform(music_feature.loc[:])" }, { "code": null, "e": 2802, "s": 2652, "text": "Radar charts are the most effective when they are comparing various features. Based on the radar chart, you can tell I am a fan of acoustic music! 🎼🎹" }, { "code": null, "e": 2879, "s": 2802, "text": "Interested in making your own music taste radar chat? Follow the code below!" }, { "code": null, "e": 3329, "s": 2879, "text": "There you have it! 
Now you know how to extract any data using Spotify's API, Python, and Spotipy. For the next step, we could use different ways to analyze and visualize Spotify’s data, such as building your own Spotify’s Recommendation Engine, visualizing your music taste over time, etc. I hope you find some inspiration here. And please, feel free to share your exciting project ideas in the comment section. Until next time, happy learning! 👩🏻‍💻" } ]
How can we change the resolution of a video in OpenCV using C++?
We use the set() method of OpenCV's VideoCapture class. With set(), we can set the height and width of the captured frames. The following lines set the width and height of the video in our program.

set(CAP_PROP_FRAME_WIDTH, 320);
set(CAP_PROP_FRAME_HEIGHT, 240);

The first line sets the width of the frames to 320 pixels and the second line sets the height of the frames to 240 pixels. Together, these two lines request a 320 x 240 resolution video stream. This is how we can simply change the resolution of a video using OpenCV.

The following program changes the resolution of the video stream taken from the default camera −

#include <opencv2/opencv.hpp> // OpenCV header to use the VideoCapture class
#include <iostream>
using namespace std;
using namespace cv;

int main() {
   Mat myImage; // Matrix to hold the frames
   namedWindow("Video Player"); // Window in which the video will be shown
   VideoCapture cap(0); // Object to capture a stream of frames from the default camera
   cap.set(CAP_PROP_FRAME_WIDTH, 320); // Setting the width of the video
   cap.set(CAP_PROP_FRAME_HEIGHT, 240); // Setting the height of the video
   if (!cap.isOpened()) { // Print an error message if no video stream is found
      cout << "No video stream detected" << endl;
      system("pause");
      return -1;
   }
   while (true) { // Loop until the stream ends or 'Esc' is pressed
      cap >> myImage;
      if (myImage.empty()) { // Break the loop if no video frame is detected
         break;
      }
      imshow("Video Player", myImage); // Showing the video
      char c = (char)waitKey(25); // Allow 25 ms of frame processing time and check for a key press
      if (c == 27) { // If 'Esc' is pressed, break the loop
         break;
      }
   }
   cap.release(); // Releasing the buffer memory
   return 0;
}

This program will play the video at 320 x 240 resolution, assuming the camera supports that mode; a quick way to verify the resolution that was actually applied is shown below.
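Not every camera honours a requested resolution (some fall back to the nearest supported mode), so it can be useful to read back the values that were actually applied. The short sketch below is illustrative and not part of the original program.

#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;

int main() {
   VideoCapture cap(0);
   cap.set(CAP_PROP_FRAME_WIDTH, 320);
   cap.set(CAP_PROP_FRAME_HEIGHT, 240);

   double width = cap.get(CAP_PROP_FRAME_WIDTH);   // actual width in use
   double height = cap.get(CAP_PROP_FRAME_HEIGHT); // actual height in use
   cout << "Capturing at " << width << " x " << height << endl;
   return 0;
}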
[ { "code": null, "e": 1245, "s": 1062, "text": "We used 'set()' class of OpenCV. Using 'set()' class, we can set the height and width of the frames. The following lines are setting the height and width of the video in our program." }, { "code": null, "e": 1277, "s": 1245, "text": "set(CAP_PROP_FRAME_WIDTH, 320);" }, { "code": null, "e": 1310, "s": 1277, "text": "set(CAP_PROP_FRAME_HEIGHT, 240);" }, { "code": null, "e": 1590, "s": 1310, "text": "The first line is setting the width of the frames into 320 pixel and the second line is setting the height of the frames to 240 pixels. These two lines together is forming a 320 x 240 resolution video stream. This is how we can simply change the resolution of video using OpenCV." }, { "code": null, "e": 1683, "s": 1590, "text": "The following program changes the resolution of the video stream taken from default camera −" }, { "code": null, "e": 2906, "s": 1683, "text": "#include<opencv2/opencv.hpp>//OpenCV header to use VideoCapture class//\n#include<iostream>\nusing namespace std;\nusing namespace cv;\nint main() {\n Mat myImage;//Declaring a matrix to load the frames//\n namedWindow(\"Video Player\");//Declaring the video to show the video//\n VideoCapture cap(0);//Declaring an object to capture stream of frames from default camera//\n cap.set(CAP_PROP_FRAME_WIDTH, 320);//Setting the width of the video\n cap.set(CAP_PROP_FRAME_HEIGHT, 240);//Setting the height of the video//\n if (!cap.isOpened()){ //This section prompt an error message if no video stream is found//\n cout << \"No video stream detected\" << endl;\n system(\"pause\");\n return-1;\n }\n while (true){ //Taking an everlasting loop to show the video//\n cap >> myImage;\n if (myImage.empty()){ //Breaking the loop if no video frame is detected//\n break;\n }\n imshow(\"Video Player\", myImage);//Showing the video//\n char c = (char)waitKey(25);//Allowing 25 milliseconds frame processing time and initiating break condition//\n if (c == 27){ //If 'Esc' is entered break the loop//\n break;\n }\n }\n cap.release();//Releasing the buffer memory//\n return 0;\n}" }, { "code": null, "e": 2964, "s": 2906, "text": "This program will play the video at 320 x 240 resolution." } ]
Loading Well Log Data From DLIS using Python | by Andy McDonald | Towards Data Science
There are a number of different formats that well log and petrophysical data can be stored in. In the earlier articles and notebooks of this series, we have mainly focused on loading data from CSV files (here) and LAS files (here and here). Even though LAS files are one of the common formats, they have a flat structure, with a header section containing metadata about the well and the file followed by a series of columns containing values for each logging curve. As they are flat, they can't easily store array data. It is possible, but the individual elements of the array are split out into individual columns/curves within a LAS file as opposed to a single array. This is where DLIS files come in.

Within this article, we will cover:

the basics of loading a DLIS file
exploring the contents and parameters within a DLIS file
displaying processed acoustic waveform data

We will not be covering acoustic waveform processing, just the display of previously processed data.

This article was inspired by the work of Erlend M. Viggen (https://erlend-viggen.no/dlis-files/), who has created an excellent Jupyter Notebook that goes into more detail about working with DLIS files.

Digital Log Interchange Standard (DLIS) files are structured binary files that contain data tables for well information and well logging data. The file format was developed in the late 1980s by Schlumberger and subsequently published in 1991 by the American Petroleum Institute to create a standardised well log data format. Full details of the standard format can be found here. The DLIS file format can often be difficult and awkward to work with, partly because the format was developed nearly 30 years ago and partly because different software packages and vendors can create their own flavours of DLIS by adding in new structures and object types.

DLIS files contain large amounts of metadata associated with the well and the data. These sections do not contain the well data itself; that is stored within Frames, of which there can be many, representing different logging passes/runs or processing stages (e.g. raw or interpreted). Frames are table objects which contain the well log data, where each column represents a logging curve, and the data is indexed by time or depth. Each logging curve within the frame is referred to as a channel. Channels can be single-dimensional or multi-dimensional.

dlisio is a Python library that has been developed by Equinor ASA to read DLIS files and Log Information Standard 79 (LIS79) files. Details of the library can be found here.

The data used within this article was sourced from the NLOG: Dutch Oil and Gas Portal.

Privacy Notice: DLIS files can contain information that can identify individuals who were involved in the logging operations. To protect their identity from appearing in search engine results without their explicit consent, these fields have been hidden in this article.

This article forms part of my Python & Petrophysics series. Details of the full series can be found here. You can also find my Jupyter Notebooks and datasets on my GitHub repository (github.com). To follow along with this article, the Jupyter Notebook can be found at the link above and the data file can be found in the Data subfolder of the Python & Petrophysics repository.

The first step with any project is to load in the libraries that we want to use. For this notebook we will be using NumPy for working with arrays, pandas for storing data, and matplotlib for displaying the data. A minimal sketch of these imports, together with the load call discussed in the next section, is shown below.
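The original notebook cells are not embedded in this copy of the article, so the following is a minimal sketch of the setup. The file path is a placeholder, and note that, depending on the dlisio version, the DLIS reader is exposed either as dlisio.load() (older releases) or through the dlis submodule (recent releases).

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Recent dlisio releases expose the DLIS reader in the dlis submodule.
from dlisio import dlis

pd.set_option('display.max_rows', 500)

# A physical DLIS file may contain several logical files, hence the unpacking.
f, *tail = dlis.load('Data/example_log.dlis')   # placeholder file name
print(f)      # -> LogicalFile(...)
print(tail)   # -> [] when no further logical files are present

f.describe()                        # high-level summary of the logical file
origin, *origin_tail = f.origins    # unpack the origin objects
origin.describe()                   # field, well and file information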
To load the data, we will be using the dlisio library. Also, as we will be working with dataframes to view parameters, which can be numerous, we need to increase the maximum number of rows that will be displayed when a dataframe is called. This is achieved with pd.set_option('display.max_rows', 500), as included in the sketch above.

As we are working with a single DLIS file, we can load it with the single call shown above. A physical DLIS file can contain multiple logical files, so this syntax outputs the first logical file to f and places any subsequent logical files into tail. We can see the contents of each of these by calling upon their names. If we call upon f, we can see that it returns LogicalFile(00001_AC_WORK), and if we call upon tail, we get a blank list, which lets us know that there are no other logical files within the DLIS. Which returns:

LogicalFile(00001_AC_WORK)
[]

To view the high-level contents of the file we can use the .describe() method. This returns information about the number of frames, channels, and objects within the logical file. When we apply this to f we can see we have a file with 4 frames and 484 channels (logging curves), in addition to a number of known and unknown objects. Which returns:

------------
Logical File
------------
Description : LogicalFile(FMS_DSI_138PUP)
Frames : 4
Channels : 484

Known objects
--
FILE-HEADER : 1
ORIGIN : 1
AXIS : 50
EQUIPMENT : 27
TOOL : 5
PARAMETER : 480
CALIBRATION-MEASUREMENT : 22
CALIBRATION-COEFFICIENT : 12
CALIBRATION : 341
PROCESS : 3
CHANNEL : 484
FRAME : 4

Unknown objects
--
440-CHANNEL : 538
440-PRESENTATION-DESCRIPTION : 1
440-OP-CHANNEL : 573

The first set of metadata we will look at is the origin. This provides information about the source of the data within the file. Occasionally, data may originate from multiple sources, so we need to account for this by unpacking the origins into two variables. We can always check if there is other origin information by printing the length of the list; when we view the length of origin_tail, we can see it has a length of 2. For this article, we will focus on origin. We can view its details by calling upon describe(). This provides details about the field, well, and other file information. Which returns:

------
Origin
------
name : DLIS_DEFINING_ORIGIN
origin : 41
copy : 0
Logical file ID : FMS_DSI_138PUP
File set name and number : WINTERSHALL/L5-9 / 41
File number and type : 170 / PLAYBACK
Field : L5
Well (id/name) : / L5-9
Produced by (code/name) : 440 / Schlumberger
Produced for : Wintershall Noordzee B.V.
Run number : -1
Descent number : -1
Created : 2002-02-17 18:18:52
Created by : OP, (version: 9C2-303)
Other programs/services : MESTB: Micro Electrical Scanner - B (Slim)
                          SGTL: Scintillation Gamma-Ray - L
                          DTAA: Downhole Toolbus Adapter - A
                          DSSTB: Dipole Shear Imager - B
                          DTCA: DTS Telemetry Cartridge
                          ACTS: Auxiliary Compression Tension Sub - B
                          DIP: Dip Computation
                          DIR: Directional Survey Computation
                          HOLEV: Integrated Hole/Cement Volume

Frames within a DLIS file can represent different logging passes or different stages of data, from raw well log measurements through to petrophysical interpretations or processed data. Each frame has a number of properties. The code sketched below prints out these properties in an easy-to-read format, and the resulting summary follows it. The summary shows the frames contained in the file, with one frame holding basic well log curves (bit size, gamma ray, tension, etc.) and another holding the post-processed acoustic waveform data.
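The original gist is not embedded here, so below is a sketch of a loop that prints the frame properties, using the attribute names that dlisio's Frame objects expose (name, index_type, index_min, index_max, spacing, direction, channels). Treat it as illustrative rather than the author's exact code.

for frame in f.frames:
    # Each Frame object carries its own indexing information and channel list.
    print(f'Frame Name: \t\t {frame.name}')
    print(f'Index Type: \t\t {frame.index_type}')
    print(f'Depth Interval: \t {frame.index_min} - {frame.index_max} 0.1 in')
    print(f'Depth Spacing: \t\t {frame.spacing} 0.1 in')
    print(f'Direction: \t\t {frame.direction}')
    print(f'Num of Channels: \t {len(frame.channels)}')
    print(f'Channel Names: \t\t {str(frame.channels)}')
    print('\n')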
Frame Name: 60B
Index Type: BOREHOLE-DEPTH
Depth Interval: 0 - 0 0.1 in
Depth Spacing: -60 0.1 in
Direction: DECREASING
Num of Channels: 77
Channel Names: [Channel(TDEP), Channel(BS), Channel(CS), Channel(TENS), Channel(ETIM), Channel(DEVI), Channel(P1AZ_MEST), Channel(ANOR), Channel(FINC), Channel(HAZI), Channel(P1AZ), Channel(RB), Channel(SDEV), Channel(GAT), Channel(GMT), Channel(ECGR), Channel(ITT), Channel(SPHI), Channel(DCI2), Channel(DCI4), Channel(SOBS), Channel(DTCO), Channel(DTSM), Channel(PR), Channel(VPVS), Channel(CHR2), Channel(DT2R), Channel(DTRP), Channel(CHRP), Channel(DTRS), Channel(CHRS), Channel(DTTP), Channel(CHTP), Channel(DTTS), Channel(CHTS), Channel(DT2), Channel(DT4P), Channel(DT4S), Channel(SPCF), Channel(DPTR), Channel(DPAZ), Channel(QUAF), Channel(DDIP), Channel(DDA), Channel(FCD), Channel(HDAR), Channel(RGR), Channel(TIME), Channel(CVEL), Channel(MSW1), Channel(MSW2), Channel(FNOR), Channel(SAS2), Channel(SAS4), Channel(PWF2), Channel(PWN2), Channel(PWF4), Channel(PWN4), Channel(SVEL), Channel(SSVE), Channel(SPR2), Channel(SPR4), Channel(SPT4), Channel(DF), Channel(CDF), Channel(CLOS), Channel(ED), Channel(ND), Channel(TVDE), Channel(VSEC), Channel(CWEL), Channel(AREA), Channel(AFCD), Channel(ABS), Channel(IHV), Channel(ICV), Channel(GR)]

Frame Name: 10B
Index Type: BOREHOLE-DEPTH
Depth Interval: 0 - 0 0.1 in
Depth Spacing: -10 0.1 in
Direction: DECREASING
Num of Channels: 4
Channel Names: [Channel(TDEP), Channel(IDWD), Channel(TIME), Channel(SCD)]

Frame Name: 1B
Index Type: BOREHOLE-DEPTH
Depth Interval: 0 - 0 0.1 in
Depth Spacing: -1 0.1 in
Direction: DECREASING
Num of Channels: 84
Channel Names: [Channel(TDEP), Channel(TIME), Channel(EV), Channel(BA28), Channel(BA17), Channel(BB17), Channel(BC13), Channel(BD13), Channel(BB28), Channel(BA13), Channel(BB13), Channel(BC17), Channel(BD17), Channel(BA22), Channel(BA23), Channel(BA24), Channel(BC28), Channel(BA25), Channel(BA26), Channel(BA27), Channel(BA11), Channel(BA12), Channel(BA14), Channel(BA15), Channel(BA16), Channel(BA18), Channel(BA21), Channel(BC11), Channel(BC12), Channel(BC14), Channel(BC15), Channel(BC16), Channel(BC18), Channel(BC21), Channel(BC22), Channel(BC23), Channel(BC24), Channel(BC25), Channel(BC26), Channel(BC27), Channel(BB22), Channel(BB23), Channel(BB24), Channel(BD28), Channel(BB25), Channel(BB26), Channel(BB27), Channel(BB11), Channel(BB12), Channel(BB14), Channel(BB15), Channel(BB16), Channel(BB18), Channel(BB21), Channel(BD11), Channel(BD12), Channel(BD14), Channel(BD15), Channel(BD16), Channel(BD18), Channel(BD21), Channel(BD22), Channel(BD23), Channel(BD24), Channel(BD25), Channel(BD26), Channel(BD27), Channel(SB1), Channel(DB1), Channel(DB2), Channel(DB3A), Channel(DB4A), Channel(SB2), Channel(DB1A), Channel(DB2A), Channel(DB3), Channel(DB4), Channel(FCAX), Channel(FCAY), Channel(FCAZ), Channel(FTIM), Channel(AZSNG), Channel(AZS1G), Channel(AZS2G)]

Frame Name: 15B
Index Type: BOREHOLE-DEPTH
Depth Interval: 0 - 0 0.1 in
Depth Spacing: -15 0.1 in
Direction: DECREASING
Num of Channels: 12
Channel Names: [Channel(TDEP), Channel(TIME), Channel(C1), Channel(C2), Channel(U-MBAV), Channel(AX), Channel(AY), Channel(AZ), Channel(EI), Channel(FX), Channel(FY), Channel(FZ)]

As seen earlier, we have a number of objects associated with the DLIS file. To make them easier to read we can create a short function that creates a pandas dataframe containing the parameters. The logging parameters can be accessed by calling upon f.parameters.
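The short function mentioned above is not embedded in this copy of the article, so here is a sketch of how such a helper could look. It simply collects chosen attributes from a list of dlisio objects into a DataFrame; the keyword arguments map object attributes to the column labels we want in the table.

def summary_dataframe(objects, **kwargs):
    # kwargs maps an object attribute (e.g. name) to the column label we want.
    df = pd.DataFrame()
    for attr, column in kwargs.items():
        df[column] = [getattr(obj, attr) for obj in objects]
    return df

# Name, long name and value(s) for each parameter object in the logical file.
param_df = summary_dataframe(f.parameters,
                             name='Name', long_name='Long Name', values='Values')
param_df.head(10)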
To access the parameters, we can use the attributes name, long_name and values and pass these into the summary function. This returns a long table of each of the parameters. The example below is a small section of that table. From it, we can see parameters such as bottom log interval, borehole salinity and bottom hole temperature.

The channels within a frame are the individual curves or arrays. To view a quick summary of these, we can pass a number of attributes to the summary_dataframe() method. This returns yet another long table with all the curves contained within the file, and the frame the data belongs to.

The tools object within the DLIS file contains information relating to the tools that were used to acquire the data. We can get a summary of the tools available by calling upon the summary_dataframe method. This returns a short table containing 5 tools.

As we are looking to plot acoustic waveform data, we can look at the parameters for the DSSTB — Dipole Shear Imager tool. First, we need to grab the object from the dlis and then pass it into the summary_dataframe function. From the returned table, we can view each of the parameters that relate to the tool and the processing of the data.

Now that some of the metadata has been explored, we can attempt to access the data stored within the file. Frames and data can be accessed by calling upon .object() for the file. First, we can assign the frames to variables, which will make things easier when accessing the data within them, especially if the frames contain channels/curves with the same name. The .object() method requires the type of the object being accessed, i.e. 'FRAME' or 'CHANNEL', and its name. In this case, we can refer back to the previous step, which contains the channels and the frame names. We can see that the basic logging curves are in one frame and the acoustic data is in another.

We can also directly access the channels for a specific curve. However, this can cause confusion when working with frames containing channels/curves with the same name. The example below shows how to call key properties of the channel/curve. Details of these can be found here.

Which returns:

Name: DTCO
Long Name: Delta-T Compressional
Units: us/ft
Dimension: [1]

Now that we know how to access the frames and channels of the DLIS file, we can assign variable names to the curves that we are looking to plot. In this article, we will be plotting:

DTCO: Delta-T Compressional
DTSM: Delta-T Shear
SPR4: STC Slowness Projection, Receiver Array — Monopole P&S
PWF4: DSST Packed Waveform Data — Monopole P&S

We will also need to assign a depth curve (TDEP) from the frame. Looking back at the information section of the frame, the depth index is recorded in increments of 0.1 inches. This needs to be converted to metres by multiplying by 0.00254. When the depth min and max are printed out, we get the following range for the data:

4574.4384765625 - 4819.04052734375

To make an initial check on the data, we can create a quick log plot of DTCO and DTSM against depth using matplotlib.

We will start by setting up a subplot with two axes using subplot2grid. The first axis will contain the semblance plot and the second will be twinned with the first. This allows the data to be plotted on the same y-axis. To plot the semblance data we need to use imshow. When we do this, we need to pass in the extent of the array, both in terms of depth range (using depth.min() and depth.max()) and the data range (40 - 240 us/ft). On top of that, the DTCO and DTSM curves can be plotted (see the sketch below).
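A sketch of the curve extraction and plotting steps might look like the following. The frame name (60B) and channel mnemonics come from the frame summary above; the figure sizes, colours and array orientation are assumptions of this illustration, and it relies on f and matplotlib.pyplot (as plt) from the earlier steps.

# Assign the frame containing the processed acoustic data
frame_acoustic = f.object('FRAME', '60B')

# Extract all channels from the frame as a structured numpy array
curves = frame_acoustic.curves()

# Convert the depth index from 0.1 inch increments to metres
depth = curves['TDEP'] * 0.00254
print(depth.min(), '-', depth.max())

dtco = curves['DTCO']         # Delta-T Compressional
dtsm = curves['DTSM']         # Delta-T Shear
sproj_mono = curves['SPR4']   # STC slowness projection, monopole P&S
wf_mono = curves['PWF4']      # packed waveform data, monopole P&S

# Quick check plot of DTCO and DTSM against depth
plt.figure(figsize=(5, 10))
plt.plot(dtco, depth, label='DTCO')
plt.plot(dtsm, depth, label='DTSM')
plt.xlim(40, 240)
plt.ylim(depth.max(), depth.min())
plt.legend()
plt.show()

# Semblance map with the picked DTCO / DTSM curves plotted on top
fig = plt.figure(figsize=(7, 15))
ax1 = plt.subplot2grid((1, 1), (0, 0))
ax2 = ax1.twiny()    # second axis twinned so both share the depth axis

ax1.imshow(sproj_mono, aspect='auto', cmap='jet',
           extent=(40, 240, depth.max(), depth.min()))
ax1.set_xlabel('Slowness (us/ft)')
ax1.set_ylabel('Depth (m)')

ax2.plot(dtco, depth, color='black', label='DTCO')
ax2.plot(dtsm, depth, color='white', label='DTSM')
ax2.set_xlim(40, 240)
ax2.legend(loc='upper right')
plt.show()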
This allows us to see how these curves were picked from the semblance map.

We can modify the plot to add in a subplot for the acoustic waveform data associated with the semblance map. If we look at the shape of wf_mono we can see it returns (1606, 8, 512). This indicates that the array is multi-dimensional. The middle number indicates that we have 8 receivers' worth of data. To access the first receiver, which is usually the closest one to the transmitter array, we can create a slice of the data. This code returns the minimum and maximum values of the array, which can be used as a guide for scaling the colours.

Taking the plot code from the semblance map section, we can enhance it by adding another subplot. In this subplot, we will use another imshow() plot and pass in the relevant parameters. The vmin and vmax parameters can be used to tweak the image to bring out or reduce the detail within the waveform. This generates the following plot:

Rather than rerunning the cell each time the depth and/or DT plot scales require changing, we can add a few interactive widgets to help with this. This can be achieved by importing ipywidgets and IPython.display. The plot code can be placed inside a function and decorated with the widgets code. In the example below, we are passing in MinDepth, MaxDepth, MinDT and MaxDT, all four of which can be called upon in the code.

In this article, we have covered how to load a DLIS file using the dlisio Python library. Once the DLIS file is loaded, different parameter tables and logging curves can be viewed and extracted. We have also seen how we can take processed acoustic waveform data and plot it using matplotlib. DLIS files don't have to be daunting to work with in Python. Once the basic structure and commands from dlisio are understood, it becomes much simpler.

Thanks for reading!

If you have found this article useful, please feel free to check out my other articles looking at various aspects of Python and well log data. You can also find my code used in this article and others at GitHub.

If you want to get in touch, you can find me on LinkedIn or at my website.

Interested in learning more about Python and well log data or petrophysics? Follow me on Medium.

Viggen, E.M. Extracting data from DLIS Files
Viggen, E.M., Harstad, E., and Kvalsvik, J. (2020), Getting started with acoustic well log data using the dlisio Python library on the Volve Data Village dataset
NLOG: Dutch Oil and Gas Portal
[ { "code": null, "e": 875, "s": 172, "text": "There are a number of different formats that well log and petrophysical data can be stored in. In the earlier articles and notebooks of this series, we have mainly focused on loading data from CSV files (here) and LAS files (here and here). Even though LAS files are one of the common formats, they have a flat structure with a header section containing metadata about the well and the file followed by a series of columns containing values for each logging curve. As they are flat, they can’t easily store array data. It is possible, but the individual elements of the array are split out into individual columns/curves within a LAS file as opposed to a single array. This is where DLIS files come in." }, { "code": null, "e": 911, "s": 875, "text": "Within this article, we will cover:" }, { "code": null, "e": 945, "s": 911, "text": "the basics of loading a DLIS file" }, { "code": null, "e": 1002, "s": 945, "text": "exploring the contents and parameters within a DLIS file" }, { "code": null, "e": 1046, "s": 1002, "text": "displaying processed acoustic waveform data" }, { "code": null, "e": 1147, "s": 1046, "text": "We will not be covering acoustic waveform processing. Just the display of previously processed data." }, { "code": null, "e": 1349, "s": 1147, "text": "This article was inspired by the work of Erlend M. Viggen (https://erlend-viggen.no/dlis-files/) who has created an excellent Jupyter Notebook which goes into more detail about working with DLIS files." }, { "code": null, "e": 1991, "s": 1349, "text": "Digital Log Interchange Standard (DLIS) files are structured binary files that contain data tables for well information and well logging data. The file format was developed in the late 1980’s by Schlumberger and subsequently published in 1991 by the American Petroleum Institute to create a standardised well log data format. Full details of the standard format can be found here. The DLIS file format can often be difficult and awkward to work with at times due to the format being developed nearly 30 years ago, and different software packages and vendors can create their own flavours of DLIS by adding in new structures and object-types." }, { "code": null, "e": 2538, "s": 1991, "text": "DLIS files contain large amounts of metadata associated with the well and data. These sections do not contain the well data, these are stored within Frames, of which there can be many representing different logging passes/runs or processing stages (e.g. Raw or Interpreted). Frames are table objects which contain the well log data, where each column represents a logging curve, and that data is indexed by time or depth. Each logging curve within the frame is referred to as a channel. The channels can be a single dimension or multi-dimensional" }, { "code": null, "e": 2710, "s": 2538, "text": "dlsio is a python library that has been developed by Equinor ASA to read DLIS files and Log Information Standard79 (LIS79) files. Details of the library can be found here." }, { "code": null, "e": 2797, "s": 2710, "text": "The data used within this article was sourced from the NLOG: Dutch Oil and Gas Portal." }, { "code": null, "e": 3069, "s": 2797, "text": "Privacy Notice: DLIS files can contain information that can identify individuals that were involved in the logging operations. To protect their identity from appearing in search engine results without their explicit consent, these fields have been hidden in this article." 
}, { "code": null, "e": 3273, "s": 3069, "text": "This article forms part of my Python & Petrophysics series. Details of the full series can be found here. You can also find my Jupyter Notebooks and datasets on my GitHub repository at the following link" }, { "code": null, "e": 3284, "s": 3273, "text": "github.com" }, { "code": null, "e": 3482, "s": 3284, "text": "To follow along with this article, the Jupyter Notebook can be found at the link above and the data file for this article can be found in the Data subfolder of the Python & Petrophysics repository." }, { "code": null, "e": 3749, "s": 3482, "text": "The first step with any project is to load in the libraries that we want to use. For this notebook we will be using NumPy for working with arrays, pandas for storing data, and matplotlib for displaying the data. To load the data, we will be using the dlisio library." }, { "code": null, "e": 3995, "s": 3749, "text": "Also, as we will be working with dataframes to view parameters, which can be numerous, we need to change the maximum number of rows that will be displayed when that dataframe is called. This is achieved by pd.set_option('display.max_rows', 500)." }, { "code": null, "e": 4266, "s": 3995, "text": "As we are working with a single DLIS file, we can use the following code to load the file. A physical DLIS file can contain multiple logical files, therefore using this syntax allows the first file to be output to f and any subsequent logical files are placed into tail." }, { "code": null, "e": 4531, "s": 4266, "text": "We can see the contents of each of these by calling upon their names. If we call upon f, we can see that it returns a LogicalFile(00001_AC_WORK and if we call upon tail, we get a blank list, which lets us know that there are no other logical files within the DLIS." }, { "code": null, "e": 4546, "s": 4531, "text": "Which returns:" }, { "code": null, "e": 4575, "s": 4546, "text": "LogicalFile(00001_AC_WORK)[]" }, { "code": null, "e": 4906, "s": 4575, "text": "To view the high-level contents of the file we can use the .describe() method. This returns information about the number of frames, channels, and objects within the Logical File. When we apply this to f we can see we have a file with 4frames and 484 channels (logging curves), in addition to a number of known and unknown objects." }, { "code": null, "e": 4921, "s": 4906, "text": "Which returns:" }, { "code": null, "e": 5497, "s": 4921, "text": "------------Logical File------------Description : LogicalFile(FMS_DSI_138PUP)Frames : 4Channels : 484Known objects--FILE-HEADER : 1ORIGIN : 1AXIS : 50EQUIPMENT : 27TOOL : 5PARAMETER : 480CALIBRATION-MEASUREMENT : 22CALIBRATION-COEFFICIENT : 12CALIBRATION : 341PROCESS : 3CHANNEL : 484FRAME : 4Unknown objects--440-CHANNEL : 538440-PRESENTATION-DESCRIPTION : 1440-OP-CHANNEL : 573" }, { "code": null, "e": 5850, "s": 5497, "text": "The first set of metadata we will look at is the origin. This provides information about the source of the data within the file. Occasionally, data may originate from multiple sources so we need to account for this by unpacking the origins into two variables. We can always check if there is other origin information by printing the length of the list." }, { "code": null, "e": 6098, "s": 5850, "text": "When we view the length of origin_tail, we can see it has a length of 2. For this article, we will focus on origin. We can view the details of it, by calling upon describe(). This provides details about the field, well, and other file information." 
}, { "code": null, "e": 6113, "s": 6098, "text": "Which returns:" }, { "code": null, "e": 6962, "s": 6113, "text": "------Origin------name : DLIS_DEFINING_ORIGINorigin : 41copy : 0Logical file ID : FMS_DSI_138PUPFile set name and number : WINTERSHALL/L5-9 / 41File number and type : 170 / PLAYBACKField : L5Well (id/name) : / L5-9Produced by (code/name) : 440 / SchlumbergerProduced for : Wintershall Noordzee B.V.Run number : -1Descent number : -1Created : 2002-02-17 18:18:52Created by : OP, (version: 9C2-303)Other programs/services : MESTB: Micro Electrical Scanner - B (Slim) SGTL: Scintillation Gamma-Ray - L DTAA: Downhole Toolbus Adapter - A DSSTB: Dipole Shear Imager - B DTCA: DTS Telemetry CartridgeACTS: Auxiliary Compression Tension Sub - B DIP: Dip Computation DIR: Directional Survey Computation HOLEV: Integrated Hole/Cement Volume" }, { "code": null, "e": 7257, "s": 6962, "text": "Frames within a DLIS file can represent different logging passes or different stages of data, such as raw well log measurements to petrophysical interpretations or processed data. Each frame has a number of properties. The example code below prints out the properties in an easy-to-read format." }, { "code": null, "e": 7539, "s": 7257, "text": "This returns the following summary. Which indicates that two frames exist within this file. With the first frame containing basic well log curves of bitsize (BIT), caliper (CAL), gamma ray (GR) and tension (TEN). The second frame contains the post-processed acoustic waveform data." }, { "code": null, "e": 10820, "s": 7539, "text": "Frame Name: \t\t 60BIndex Type: \t\t BOREHOLE-DEPTHDepth Interval: \t 0 - 0 0.1 inDepth Spacing: \t\t -60 0.1 inDirection: \t\t DECREASINGNum of Channels: \t 77Channel Names: \t\t [Channel(TDEP), Channel(BS), Channel(CS), Channel(TENS), Channel(ETIM), Channel(DEVI), Channel(P1AZ_MEST), Channel(ANOR), Channel(FINC), Channel(HAZI), Channel(P1AZ), Channel(RB), Channel(SDEV), Channel(GAT), Channel(GMT), Channel(ECGR), Channel(ITT), Channel(SPHI), Channel(DCI2), Channel(DCI4), Channel(SOBS), Channel(DTCO), Channel(DTSM), Channel(PR), Channel(VPVS), Channel(CHR2), Channel(DT2R), Channel(DTRP), Channel(CHRP), Channel(DTRS), Channel(CHRS), Channel(DTTP), Channel(CHTP), Channel(DTTS), Channel(CHTS), Channel(DT2), Channel(DT4P), Channel(DT4S), Channel(SPCF), Channel(DPTR), Channel(DPAZ), Channel(QUAF), Channel(DDIP), Channel(DDA), Channel(FCD), Channel(HDAR), Channel(RGR), Channel(TIME), Channel(CVEL), Channel(MSW1), Channel(MSW2), Channel(FNOR), Channel(SAS2), Channel(SAS4), Channel(PWF2), Channel(PWN2), Channel(PWF4), Channel(PWN4), Channel(SVEL), Channel(SSVE), Channel(SPR2), Channel(SPR4), Channel(SPT4), Channel(DF), Channel(CDF), Channel(CLOS), Channel(ED), Channel(ND), Channel(TVDE), Channel(VSEC), Channel(CWEL), Channel(AREA), Channel(AFCD), Channel(ABS), Channel(IHV), Channel(ICV), Channel(GR)]Frame Name: \t\t 10BIndex Type: \t\t BOREHOLE-DEPTHDepth Interval: \t 0 - 0 0.1 inDepth Spacing: \t\t -10 0.1 inDirection: \t\t DECREASINGNum of Channels: \t 4Channel Names: \t\t [Channel(TDEP), Channel(IDWD), Channel(TIME), Channel(SCD)]Frame Name: \t\t 1BIndex Type: \t\t BOREHOLE-DEPTHDepth Interval: \t 0 - 0 0.1 inDepth Spacing: \t\t -1 0.1 inDirection: \t\t DECREASINGNum of Channels: \t 84Channel Names: \t\t [Channel(TDEP), Channel(TIME), Channel(EV), Channel(BA28), Channel(BA17), Channel(BB17), Channel(BC13), Channel(BD13), Channel(BB28), Channel(BA13), Channel(BB13), Channel(BC17), Channel(BD17), Channel(BA22), Channel(BA23), Channel(BA24), 
Channel(BC28), Channel(BA25), Channel(BA26), Channel(BA27), Channel(BA11), Channel(BA12), Channel(BA14), Channel(BA15), Channel(BA16), Channel(BA18), Channel(BA21), Channel(BC11), Channel(BC12), Channel(BC14), Channel(BC15), Channel(BC16), Channel(BC18), Channel(BC21), Channel(BC22), Channel(BC23), Channel(BC24), Channel(BC25), Channel(BC26), Channel(BC27), Channel(BB22), Channel(BB23), Channel(BB24), Channel(BD28), Channel(BB25), Channel(BB26), Channel(BB27), Channel(BB11), Channel(BB12), Channel(BB14), Channel(BB15), Channel(BB16), Channel(BB18), Channel(BB21), Channel(BD11), Channel(BD12), Channel(BD14), Channel(BD15), Channel(BD16), Channel(BD18), Channel(BD21), Channel(BD22), Channel(BD23), Channel(BD24), Channel(BD25), Channel(BD26), Channel(BD27), Channel(SB1), Channel(DB1), Channel(DB2), Channel(DB3A), Channel(DB4A), Channel(SB2), Channel(DB1A), Channel(DB2A), Channel(DB3), Channel(DB4), Channel(FCAX), Channel(FCAY), Channel(FCAZ), Channel(FTIM), Channel(AZSNG), Channel(AZS1G), Channel(AZS2G)]Frame Name: \t\t 15BIndex Type: \t\t BOREHOLE-DEPTHDepth Interval: \t 0 - 0 0.1 inDepth Spacing: \t\t -15 0.1 inDirection: \t\t DECREASINGNum of Channels: \t 12Channel Names: \t\t [Channel(TDEP), Channel(TIME), Channel(C1), Channel(C2), Channel(U-MBAV), Channel(AX), Channel(AY), Channel(AZ), Channel(EI), Channel(FX), Channel(FY), Channel(FZ)]" }, { "code": null, "e": 11014, "s": 10820, "text": "As seen earlier, we have a number of objects associated with the DLIS file. To make them easier to read we can create a short function that creates a pandas dataframe containing the parameters." }, { "code": null, "e": 11204, "s": 11014, "text": "The logging parameters can be accessed by calling upon f.parameters. To access the parameters, we can use the attributes name, long_name and values and pass these into the summary function." }, { "code": null, "e": 11416, "s": 11204, "text": "This returns a long table of each of the parameters. The example below is a small section of that table. From it, we can see parameters such as bottom log interval, borehole salinity and bottom hole temperature." }, { "code": null, "e": 11588, "s": 11416, "text": "The channels within a frame are the individual curves or arrays. To view a quick summary of these, we can pass in a number of attributes to the summary_dataframe() method." }, { "code": null, "e": 11706, "s": 11588, "text": "This returns yet another long table with all the curves contained within the file, and the frame the data belongs to." }, { "code": null, "e": 11913, "s": 11706, "text": "The tools object within the DLIS file contains information relating to the tools that were used to acquire the data. We can get a summary of the tools available be calling upon the summary_dataframe method." }, { "code": null, "e": 11960, "s": 11913, "text": "This returns a short table containing 5 tools:" }, { "code": null, "e": 12184, "s": 11960, "text": "As we are looking to plot acoustic waveform data, we can look at the parameters for the DSSTB — Dipole Shear Imager tool. First, we need to grab the object from the dlis and then pass it into the summary_dataframe function." }, { "code": null, "e": 12300, "s": 12184, "text": "From the returned table, we can view each of the parameters that relate to the tool and the processing of the data." }, { "code": null, "e": 12411, "s": 12300, "text": "Now that some of the metadata has been explored, we can now attempt to access the data stored within the file." 
}, { "code": null, "e": 12975, "s": 12411, "text": "Frames and data can be accessed by calling upon the .object() for the file. First, we can assign the frames to variables, which will make things easier when accessing the data within them, especially if the frames contain channels/curves with the same name. The .object() method requires the type of the object being accessed, i.e. 'FRAME' or 'CHANNEL' and its name. In this case, we can refer back to the previous step which contains the channels and the frame names. We can see that the basic logging curves are in one frame and the acoustic data is in another." }, { "code": null, "e": 13144, "s": 12975, "text": "We can also directly access the channels for a specific curve. However, this can cause confusion when working with frames containing channels/curves with the same name." }, { "code": null, "e": 13253, "s": 13144, "text": "The example below shows how to call key properties of the channel/curve. Details of which can be found here." }, { "code": null, "e": 13268, "s": 13253, "text": "Which returns:" }, { "code": null, "e": 13343, "s": 13268, "text": "Name: \t\tDTCOLong Name: \tDelta-T CompressionalUnits: \t\tus/ftDimension: \t[1]" }, { "code": null, "e": 13530, "s": 13343, "text": "Now that we know how to access the frames and channels of the DLIS file, we can now assign variable names to the curves that we are looking to plot. In this article, we will be plotting:" }, { "code": null, "e": 13558, "s": 13530, "text": "DTCO: Delta-T Compressional" }, { "code": null, "e": 13578, "s": 13558, "text": "DTSM: Delta-T Shear" }, { "code": null, "e": 13639, "s": 13578, "text": "SPR4: STC Slowness Projection, Receiver Array — Monopole P&S" }, { "code": null, "e": 13686, "s": 13639, "text": "PWF4: DSST Packed Waveform Data — Monopole P&S" }, { "code": null, "e": 13903, "s": 13686, "text": "We will also need to assign a depth curve (TDEP) from the frame. Looking back at the information section of the frame, the Depth Interval is 0.1 inches. This needs to be converted to metres by multiplying by 0.00254." }, { "code": null, "e": 13987, "s": 13903, "text": "When the depth min and max is printed out, we get the following range for the data:" }, { "code": null, "e": 14022, "s": 13987, "text": "4574.4384765625 - 4819.04052734375" }, { "code": null, "e": 14136, "s": 14022, "text": "To make an initial check on data, we can create a quick log plot of DTCO and DTSM against depth using matplotlib." }, { "code": null, "e": 14363, "s": 14136, "text": "We will start with setting up a subplot with two axes and using subplot2grid. The first axis will contain the semblance plot and the second will be twinned with the first. This allows the data to be plotted on the same y-axis." }, { "code": null, "e": 14575, "s": 14363, "text": "To plot the semblance data we need to use imshow. When we do this, we need to pass in the extent of the array both in terms of depth range (using depth.min() and depth.max()) and the data range (40 - 240 us/ft)." }, { "code": null, "e": 14707, "s": 14575, "text": "On top of that, the DTCO and DTSM curves can be plotted. This allows us to see how these curves were picked from the semblance map." }, { "code": null, "e": 15009, "s": 14707, "text": "We can modify the plot to add in a subplot for the acoustic waveform data associated with the semblance map. If we look at the shape of wf_mono we can see it returns (1606, 8, 512). This indicates that the array is multi-dimensional. 
The middle number indicates that we have 8 receivers worth of data." }, { "code": null, "e": 15141, "s": 15009, "text": "To access the first receiver, which is usually the closest one to the transmitter array, we can create a slice of the data like so:" }, { "code": null, "e": 15254, "s": 15141, "text": "This code returns the minimum and maximum values of the array, which can be used as a guide for scaling colours." }, { "code": null, "e": 15555, "s": 15254, "text": "Taking the plot code from the semblance map section, we can enhance it by adding another subplot. In this subplot, we will use another imshow() plot and pass in the relevant parameters. The vmin and vmax parameters can be used to tweak the image to bring out or reduce the detail within the waveform." }, { "code": null, "e": 15590, "s": 15555, "text": "This generates the following plot:" }, { "code": null, "e": 15803, "s": 15590, "text": "Rather than rerunning the cell each time the depth and/or DT plot scales require changing, we can add a few interactive widgets to help with this. This can be achieved by importing ipywidgets and IPython.display." }, { "code": null, "e": 16013, "s": 15803, "text": "The plot code can be placed inside a function and decorated with the widgets code. In the example below, we are passing in MinDepth, MaxDepth, MinDT and MaxDT. All four of which can be called upon in the code." }, { "code": null, "e": 16456, "s": 16013, "text": "In this article, we have covered how to load a DLIS file using the dlisio Python library. Once the DLIS file is loaded, different parameter tables and logging curves can be viewed and extracted. We have also seen how we can take processed acoustic waveform data and plot it using matplotlib. DLIS files don’t have to be daunting to work with in Python. Once the basic structure and commands from dlisio are understood it becomes much simpler." }, { "code": null, "e": 16476, "s": 16456, "text": "Thanks for reading!" }, { "code": null, "e": 16688, "s": 16476, "text": "If you have found this article useful, please feel free to check out my other articles looking at various aspects of Python and well log data. You can also find my code used in this article and others at GitHub." }, { "code": null, "e": 16762, "s": 16688, "text": "If you want to get in touch you can find me on LinkedIn or at my website." }, { "code": null, "e": 16859, "s": 16762, "text": "Interested in learning more about python and well log data or petrophysics? Follow me on Medium." } ]
Bits manipulation (Important tactics) in C++
Let’s first recall what bits and bitwise operators are, in short.

A bit is a binary digit. It is the smallest unit of data that is understandable by the computer. It can have only one of two values: 0 (denotes OFF) and 1 (denotes ON).

Bitwise operators are the operators that work at bit level in the program. These operators are used to manipulate bits in the program. In C, we have 6 bitwise operators −

Bitwise AND (&)

Bitwise OR (|)

Bitwise XOR (^)

Bitwise left shift (<<)

Bitwise right shift (>>)

Bitwise NOT (~)

https://www.tutorialspoint.com/cprogramming/c_bitwise_operators.htm

Now, let’s learn some important tactics i.e. things that can be helpful if you work with bits.

We can swap two values using the bitwise XOR operator. The implementation is −

#include <stdio.h>
int main(){
   int x = 41;
   int y = 90;
   printf("Values before swapping! \n");
   printf("x = %d \t", x);
   printf("y = %d \n", y);
   x = x ^ y;
   y = y ^ x;
   x = x ^ y;
   printf("Values after swapping! \n");
   printf("x = %d \t", x);
   printf("y = %d \n", y);
   return 0;
}

Output −

Values before swapping!
x = 41   y = 90
Values after swapping!
x = 90   y = 41

For any integer value, we can find the most significant bit in an effective way. This is done using the OR operator along with bitwise shift operators. This method can find the MSB in O(1) time complexity for a fixed integer width. The size of the integer should be predefined to create the program.

Program to find the MSB of a 32-bit integer −

#include <stdio.h>
// Propagate the highest set bit to all lower positions, then isolate it.
int findMSB(int x){
   x |= x>>1;
   x |= x>>2;
   x |= x>>4;
   x |= x>>8;
   x |= x>>16;
   x = x+1;
   return(x >> 1);
}
int main(){
   int x = 49;
   printf("The number is %d\n", x);
   int msb = findMSB(x);
   printf("MSB of the number is %d\n", msb);
}

Output −

The number is 49
MSB of the number is 32

If we observe the XOR of all numbers from 1 to n carefully, we can derive a general pattern, which is illustrated here −

#include <stdio.h>
// Direct XOR of all numbers from 1 to n: the running XOR repeats the
// pattern n, 1, n+1, 0 for n % 4 = 0, 1, 2, 3.
int findXORuptoN(int n){
   switch( n%4){
      case 0: return n;
      case 1: return 1;
      case 2: return n+1;
      case 3: return 0;
      default: return 0;
   }
}
int main(){
   int n = 9870;
   int xorupton = findXORuptoN(n);
   printf("XOR of all number up to %d is %d\n", n, xorupton);
}

Output −

XOR of all number up to 9870 is 9871

Using the bitwise shift operators, we can also count how many different values can be formed by varying the unset (zero) bits of a number, and it requires little time.

#include <stdio.h>
// Count the unset bits of n; each unset bit can independently be
// 0 or 1, giving 2^unset possible values.
int countValues(int n){
   int unset=0;
   while (n){
      if ((n & 1) == 0)
         unset++;
      n=n>>1;
   }
   return (1<<unset);
}
int main(){
   int n = 32;
   printf("%d", countValues(n));
}

Output −

32

There are inbuilt methods to find the number of leading and trailing zeroes of an integer, thanks to bit manipulation.

* These are GCC built-in functions (__builtin_clz and __builtin_ctz).

#include <stdio.h>
int main(){
   int n = 32;
   printf("The integer value is %d\n", n);
   printf("Number of leading zeros is %d\n", __builtin_clz(n));
   printf("Number of trailing zeros is %d\n", __builtin_ctz(n));
}

Output −

The integer value is 32
Number of leading zeros is 26
Number of trailing zeros is 5

Checking if a number is a power of 2 is made easy using bitwise operators.

#include <stdio.h>
// n is a power of 2 when it is non-zero and has exactly one set bit.
int isPowerof2(int n){
   return n && (!(n&(n-1)));
}
int main(){
   int n = 22;
   if(isPowerof2(n))
      printf("%d is a power of 2", n);
   else
      printf("%d is not a power of 2", n);
}

Output −

22 is not a power of 2

The XOR of the XORs of all subsets of a set can be found using the fact that, if the set has more than one element, this value is always 0; otherwise it is the single element itself.
#include <stdio.h>
// XOR of the XORs of all subsets: 0 for sets with more than one element,
// the element itself for a single-element set.
int findsubsetXOR (int set[], int size){
   if (size == 1){
      return set[size - 1];
   }
   else
      return 0;
}
int main (){
   int set[] = { 45, 12 };
   int size = sizeof (set) / sizeof (set[0]);
   printf ("The XOR of all subsets of set of size %d is %d\n", size, findsubsetXOR (set, size));
   int set2[] = { 65 };
   size = sizeof (set2) / sizeof (set2[0]);
   printf ("The XOR of all subsets of set of size %d is %d\n", size, findsubsetXOR (set2, size));
}

Output −

The XOR of all subsets of set of size 2 is 0
The XOR of all subsets of set of size 1 is 65

To convert a binary literal to an integer, the auto keyword (C++11 type deduction) together with the 0b binary literal prefix is employed to do the task.

#include <stdio.h>
int main (){
   auto integer = 0b0110110;   // binary literal, deduced as int
   printf("The integer conversion of binary number '0110110' is %d", integer);
}

Output −

The integer conversion of binary number '0110110' is 54

We can flip all the bits of a number by subtracting it from a number whose bits are all set.

Number = 0110100
The number with all bits set = 1111111
Subtraction -> 1111111 - 0110100 = 1001011 (number with flipped bits)

#include <stdio.h>
int main (){
   int number = 23;
   int n = number;
   // Build a mask with all bits set up to the MSB of the number
   n |= n>>1;
   n |= n>>2;
   n |= n>>4;
   n |= n>>8;
   n |= n>>16;
   printf("The number is %d\n", number);
   printf("Number with flipped bits %d\n", n-number);
}

Output −

The number is 23
Number with flipped bits 8

Using the bitwise XOR operation, we can find whether the bits of a number are in an alternate pattern or not. The below code shows how −

#include <stdio.h>
// If the bits alternate, n ^ (n >> 1) has all of its bits set, which is
// checked with ((result + 1) & result) == 0.
int checkbitpattern(int n){
   int result = n^(n>>1);
   if(((result+1) & result) == 0)
      return 1;
   else
      return 0;
}
int main (){
   int number = 4;
   if(checkbitpattern(number) == 1){
      printf("Bits of %d are in alternate pattern", number);
   }
   else
      printf("Bits of %d are not in alternate pattern", number);
}

Output −

Bits of 4 are not in alternate pattern
norm() function in C++ with Examples
In this article we will be discussing the working, syntax and examples of the norm() function in C++ STL.

norm() is an inbuilt function in the C++ STL, defined in the <complex> header file. norm() is used to get the norm value of a complex number. The norm value of a complex number is its squared magnitude. So in simple words, the function finds the squared magnitude of a complex number, computed from its real and imaginary parts.

double norm(ArithmeticType num);

The function accepts the following parameter(s) −

num − The complex value which we want to work on.

This function returns the norm value of num.

complex<double> comp_num(6.9, 2.6);
norm(comp_num);

The value of norm of (6.9,2.6) is 54.37

#include <bits/stdc++.h>
using namespace std;
int main (){
   complex<double> comp_num(6.9, 2.6);
   cout<<"The value of norm of " << comp_num<< " is ";
   cout << norm(comp_num) << endl;
   return 0;
}

Output −

The value of norm of (6.9,2.6) is 54.37

#include <bits/stdc++.h>
using namespace std;
int main (){
   complex<double> comp_num(2.4, 1.9);
   cout<<"The value of norm of " << comp_num<< " is ";
   cout << norm(comp_num) << endl;
   return 0;
}

Output −

The value of norm of (2.4,1.9) is 9.37
Find sum of all elements in a matrix except the elements in row and-or column of given cell in Python
Suppose we have a 2D matrix and a set of cell indexes. Cell indices are represented as (i, j) where i is the row and j is the column. Now, for every given cell index (i, j), we have to find the sum of all matrix elements, excluding the elements present in the ith row and/or jth column.

So, if the input is like the matrix [[2, 2, 3], [4, 5, 7], [6, 4, 3]] and cell indices = [(0, 0), (1, 1), (0, 1)], then the output will be [19, 14, 20]

To solve this, we will follow these steps −

n := size of ind_arr
ans := a new list
for i in range 0 to n, do
   Sum := 0
   row := ind_arr[i, 0]
   col := ind_arr[i, 1]
   for j in range 0 to row count of mat, do
      for k in range 0 to column count of mat, do
         if j is not same as row and k is not same as col, then
            Sum := Sum + mat[j, k]
   insert Sum at the end of ans
return ans

Let us see the following implementation to get better understanding −

def show_sums(mat, ind_arr):
   n = len(ind_arr)
   ans = []
   for i in range(0, n):
      Sum = 0
      row = ind_arr[i][0]
      col = ind_arr[i][1]
      for j in range(0, len(mat)):
         for k in range(0, len(mat[0])):
            if j != row and k != col:
               Sum += mat[j][k]
      ans.append(Sum)
   return ans

mat = [[2, 2, 3], [4, 5, 7], [6, 4, 3]]
ind_arr = [(0, 0),(1, 1),(0, 1)]
print(show_sums(mat, ind_arr))

Input −

mat = [[2, 2, 3], [4, 5, 7], [6, 4, 3]]
ind_arr = [(0, 0),(1, 1),(0, 1)]

Output −

[19, 14, 20]
SQL Tryit Editor v1.6
CREATE TABLE Persons (
    ID int NOT NULL,
    LastName varchar(255) NOT NULL,
    FirstName varchar(255) NOT NULL,
    Age int
);

Edit the SQL Statement, and click "Run SQL" to see the result.

Our Try-SQL Editor uses WebSQL to demonstrate SQL. A Database-object is created in your browser, for testing purposes. You can try any SQL statement, and play with the Database as much as you like. The Database can be restored at any time, simply by clicking the "Restore Database" button. WebSQL stores a Database locally, on the user's computer; each user gets their own Database object. WebSQL is supported in Chrome, Safari, Opera, and Edge(79). If you use another browser you will still be able to use our Try SQL Editor, but a different version, using a server-based ASP application, with a read-only Access Database, where users are not allowed to make any changes to the data.
Difference between Tree Set and Hash Set in Java
Hash set and tree set both belong to the collection framework. HashSet is an implementation of the Set interface, whereas TreeSet implements the SortedSet interface and keeps its elements in sorted order. TreeSet is backed by a TreeMap while HashSet is backed by a HashMap. As the examples below show, TreeSet iterates over its elements in sorted order, while HashSet gives no ordering guarantee.

import java.util.TreeSet;

class TreeSetExample {
   public static void main(String[] args){
      TreeSet<String> treeset = new TreeSet<String>();
      treeset.add("Good");
      treeset.add("For");
      treeset.add("Health");
      // Adding a duplicate element has no effect in a Set
      treeset.add("Good");
      System.out.println("TreeSet: ");
      for (String temp : treeset) {
         System.out.println(temp);
      }
   }
}

Output −

TreeSet:
For
Good
Health

import java.util.HashSet;

class HashSetExample {
   public static void main(String[] args){
      HashSet<String> hashSet = new HashSet<String>();
      hashSet.add("Good");
      hashSet.add("For");
      hashSet.add("Health");
      // Adding a duplicate element has no effect in a Set
      hashSet.add("Good");
      System.out.println("HashSet: ");
      for (String temp : hashSet) {
         System.out.println(temp);
      }
   }
}

Output −

HashSet:
Health
For
Good
Recursion in array to find odd numbers and push to new variable JavaScript
We are required to write a recursive function, say pushRecursively(), which takes in an array of numbers and returns an object containing odd and even properties, where odd is an array of the odd numbers from the input array and even an array of the even numbers from the input array. This has to be done using recursion, and no looping construct should be used.

const arr = [12,4365,76,43,76,98,5,31,4];
const pushRecursively = (arr, len = 0, odd = [], even = []) => {
   if(len < arr.length){
      arr[len] % 2 === 0 ? even.push(arr[len]) : odd.push(arr[len]);
      return pushRecursively(arr, ++len, odd, even);
   };
   return {
      odd,
      even
   }
};
console.log(pushRecursively(arr));

Until the len variable reaches the end of the array, we keep calling the function recursively, each time pushing the odd values to the odd array and the even values to the even array. Once len equals the length of the array, we exit the function, returning the object.

The output of this code in the console will be −

{ odd: [ 4365, 43, 5, 31 ], even: [ 12, 76, 76, 98, 4 ] }
How to declare boolean variables in JavaScript?
A boolean variable in JavaScript holds one of two values: true or false. You can try to run the following code to learn how to work with boolean variables −

<!DOCTYPE html>
<html>
   <body>
      <p>35 > 20</p>
      <button onclick="myValue()">Click for result</button>
      <p id="test"></p>
      <script>
         function myValue() {
            document.getElementById("test").innerHTML = Boolean(35 > 20);
         }
      </script>
   </body>
</html>
[ { "code": null, "e": 1125, "s": 1062, "text": "A boolean variable in JavaScript has two values True or False." }, { "code": null, "e": 1209, "s": 1125, "text": "You can try to run the following code to learn how to work with Boolean variables −" }, { "code": null, "e": 1512, "s": 1209, "text": "<!DOCTYPE html>\n<html>\n <body>\n <p>35 > 20</p>\n <button onclick=\"myValue()\">Click for result</button>\n <p id=\"test\"></p>\n <script>\n function myValue() {\n document.getElementById(\"test\").innerHTML = Boolean(35 > 20);\n }\n </script>\n </body>\n</html>" } ]
R - Databases
The data in relational database systems is stored in a normalized format. So, to carry out statistical computing we would need very advanced and complex SQL queries. But R can connect easily to many relational databases like MySQL, Oracle, SQL Server etc. and fetch records from them as a data frame. Once the data is available in the R environment, it becomes a normal R data set and can be manipulated or analyzed using all the powerful packages and functions.

In this tutorial we will be using MySQL as our reference database for connecting to R.

R has a package named "RMySQL" which provides native connectivity between R and a MySQL database. You can install this package in the R environment using the following command.

install.packages("RMySQL")

Once the package is installed we create a connection object in R to connect to the database. It takes the username, password, database name and host name as input.

# Load the package so that dbConnect() and the MySQL() driver are available.
library(RMySQL)

# Create a connection object to the MySQL database.
# We will connect to the sample database named "sakila" that comes with the MySQL installation.
mysqlconnection = dbConnect(MySQL(), user = 'root', password = '', dbname = 'sakila',
   host = 'localhost')

# List the tables available in this database.
dbListTables(mysqlconnection)

When we execute the above code, it produces the following result −

 [1] "actor"                      "actor_info"
 [3] "address"                    "category"
 [5] "city"                       "country"
 [7] "customer"                   "customer_list"
 [9] "film"                       "film_actor"
[11] "film_category"              "film_list"
[13] "film_text"                  "inventory"
[15] "language"                   "nicer_but_slower_film_list"
[17] "payment"                    "rental"
[19] "sales_by_film_category"     "sales_by_store"
[21] "staff"                      "staff_list"
[23] "store"

We can query the database tables in MySQL using the function dbSendQuery(). The query gets executed in MySQL and the result set is returned using the R fetch() function. Finally it is stored as a data frame in R.

# Query the "actor" table to get all the rows.
result = dbSendQuery(mysqlconnection, "select * from actor")

# Store the result in an R data frame object. n = 5 is used to fetch the first 5 rows.
data.frame = fetch(result, n = 5)
print(data.frame)

When we execute the above code, it produces the following result −

  actor_id first_name    last_name         last_update
1        1   PENELOPE      GUINESS 2006-02-15 04:34:33
2        2       NICK     WAHLBERG 2006-02-15 04:34:33
3        3         ED        CHASE 2006-02-15 04:34:33
4        4   JENNIFER        DAVIS 2006-02-15 04:34:33
5        5     JOHNNY LOLLOBRIGIDA 2006-02-15 04:34:33

We can pass any valid select query to get the result.

result = dbSendQuery(mysqlconnection, "select * from actor where last_name = 'TORN'")

# Fetch all the records (with n = -1) and store them as a data frame.
data.frame = fetch(result, n = -1)
print(data.frame)

When we execute the above code, it produces the following result −

  actor_id first_name last_name         last_update
1       18        DAN      TORN 2006-02-15 04:34:33
2       94    KENNETH      TORN 2006-02-15 04:34:33
3      102     WALTER      TORN 2006-02-15 04:34:33

We can update the rows in a MySQL table by passing the update query to the dbSendQuery() function.

dbSendQuery(mysqlconnection, "update mtcars set disp = 168.5 where hp = 110")

After executing the above code we can see the table updated in the MySQL environment.

dbSendQuery(mysqlconnection,
   "insert into mtcars(row_names, mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)
   values('New Mazda RX4 Wag', 21, 6, 168.5, 110, 3.9, 2.875, 17.02, 0, 1, 4, 4)"
)

After executing the above code we can see the row inserted into the table in the MySQL environment.

We can create tables in MySQL using the function dbWriteTable().
It overwrites the table if it already exists and takes a data frame as input.

# Create the connection object to the database where we want to create the table.
mysqlconnection = dbConnect(MySQL(), user = 'root', password = '', dbname = 'sakila',
   host = 'localhost')

# Use the R data frame "mtcars" to create the table in MySQL.
# All the rows of mtcars are taken into MySQL.
dbWriteTable(mysqlconnection, "mtcars", mtcars[, ], overwrite = TRUE)

After executing the above code we can see the table created in the MySQL environment.

We can drop tables in the MySQL database by passing the drop table statement to dbSendQuery() in the same way we used it for querying data from tables.

dbSendQuery(mysqlconnection, 'drop table if exists mtcars')

After executing the above code we can see that the table is dropped in the MySQL environment.
[ { "code": null, "e": 2865, "s": 2402, "text": "The data is Relational database systems are stored in a normalized format. So, to carry out statistical computing we will need very advanced and complex Sql queries. But R can connect easily to many relational databases like MySql, Oracle, Sql server etc. and fetch records from them as a data frame. Once the data is available in the R environment, it becomes a normal R data set and can be manipulated or analyzed using all the powerful packages and functions." }, { "code": null, "e": 2952, "s": 2865, "text": "In this tutorial we will be using MySql as our reference database for connecting to R." }, { "code": null, "e": 3135, "s": 2952, "text": "R has a built-in package named \"RMySQL\" which provides native connectivity between with MySql database. You can install this package in the R environment using the following command." }, { "code": null, "e": 3162, "s": 3135, "text": "install.packages(\"RMySQL\")" }, { "code": null, "e": 3326, "s": 3162, "text": "Once the package is installed we create a connection object in R to connect to the database. It takes the username, password, database name and host name as input." }, { "code": null, "e": 3653, "s": 3326, "text": "# Create a connection Object to MySQL database.\n# We will connect to the sampel database named \"sakila\" that comes with MySql installation.\nmysqlconnection = dbConnect(MySQL(), user = 'root', password = '', dbname = 'sakila',\n host = 'localhost')\n\n# List the tables available in this database.\n dbListTables(mysqlconnection)" }, { "code": null, "e": 3720, "s": 3653, "text": "When we execute the above code, it produces the following result −" }, { "code": null, "e": 4448, "s": 3720, "text": " [1] \"actor\" \"actor_info\" \n [3] \"address\" \"category\" \n [5] \"city\" \"country\" \n [7] \"customer\" \"customer_list\" \n [9] \"film\" \"film_actor\" \n[11] \"film_category\" \"film_list\" \n[13] \"film_text\" \"inventory\" \n[15] \"language\" \"nicer_but_slower_film_list\"\n[17] \"payment\" \"rental\" \n[19] \"sales_by_film_category\" \"sales_by_store\" \n[21] \"staff\" \"staff_list\" \n[23] \"store\" \n" }, { "code": null, "e": 4661, "s": 4448, "text": "We can query the database tables in MySql using the function dbSendQuery(). The query gets executed in MySql and the result set is returned using the R fetch() function. Finally it is stored as a data frame in R." }, { "code": null, "e": 4904, "s": 4661, "text": "# Query the \"actor\" tables to get all the rows.\nresult = dbSendQuery(mysqlconnection, \"select * from actor\")\n\n# Store the result in a R data frame object. n = 5 is used to fetch first 5 rows.\ndata.frame = fetch(result, n = 5)\nprint(data.fame)" }, { "code": null, "e": 4971, "s": 4904, "text": "When we execute the above code, it produces the following result −" }, { "code": null, "e": 5390, "s": 4971, "text": " actor_id first_name last_name last_update\n1 1 PENELOPE GUINESS 2006-02-15 04:34:33\n2 2 NICK WAHLBERG 2006-02-15 04:34:33\n3 3 ED CHASE 2006-02-15 04:34:33\n4 4 JENNIFER DAVIS 2006-02-15 04:34:33\n5 5 JOHNNY LOLLOBRIGIDA 2006-02-15 04:34:33\n" }, { "code": null, "e": 5444, "s": 5390, "text": "We can pass any valid select query to get the result." 
}, { "code": null, "e": 5645, "s": 5444, "text": "result = dbSendQuery(mysqlconnection, \"select * from actor where last_name = 'TORN'\")\n\n# Fetch all the records(with n = -1) and store it as a data frame.\ndata.frame = fetch(result, n = -1)\nprint(data)" }, { "code": null, "e": 5712, "s": 5645, "text": "When we execute the above code, it produces the following result −" }, { "code": null, "e": 5997, "s": 5712, "text": " actor_id first_name last_name last_update\n1 18 DAN TORN 2006-02-15 04:34:33\n2 94 KENNETH TORN 2006-02-15 04:34:33\n3 102 WALTER TORN 2006-02-15 04:34:33\n" }, { "code": null, "e": 6096, "s": 5997, "text": "We can update the rows in a Mysql table by passing the update query to the dbSendQuery() function." }, { "code": null, "e": 6174, "s": 6096, "text": "dbSendQuery(mysqlconnection, \"update mtcars set disp = 168.5 where hp = 110\")" }, { "code": null, "e": 6260, "s": 6174, "text": "After executing the above code we can see the table updated in the MySql Environment." }, { "code": null, "e": 6463, "s": 6260, "text": "dbSendQuery(mysqlconnection,\n \"insert into mtcars(row_names, mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb)\n values('New Mazda RX4 Wag', 21, 6, 168.5, 110, 3.9, 2.875, 17.02, 0, 1, 4, 4)\"\n)" }, { "code": null, "e": 6563, "s": 6463, "text": "After executing the above code we can see the row inserted into the table in the MySql Environment." }, { "code": null, "e": 6710, "s": 6563, "text": "We can create tables in the MySql using the function dbWriteTable(). It overwrites the table if it already exists and takes a data frame as input." }, { "code": null, "e": 7082, "s": 6710, "text": "# Create the connection object to the database where we want to create the table.\nmysqlconnection = dbConnect(MySQL(), user = 'root', password = '', dbname = 'sakila', \n host = 'localhost')\n\n# Use the R data frame \"mtcars\" to create the table in MySql.\n# All the rows of mtcars are taken inot MySql.\ndbWriteTable(mysqlconnection, \"mtcars\", mtcars[, ], overwrite = TRUE)" }, { "code": null, "e": 7168, "s": 7082, "text": "After executing the above code we can see the table created in the MySql Environment." }, { "code": null, "e": 7323, "s": 7168, "text": "We can drop the tables in MySql database passing the drop table statement into the dbSendQuery() in the same way we used it for querying data from tables." }, { "code": null, "e": 7383, "s": 7323, "text": "dbSendQuery(mysqlconnection, 'drop table if exists mtcars')" }, { "code": null, "e": 7472, "s": 7383, "text": "After executing the above code we can see the table is dropped in the MySql Environment." 
}, { "code": null, "e": 7505, "s": 7472, "text": "\n 12 Lectures \n 2 hours \n" }, { "code": null, "e": 7520, "s": 7505, "text": " Nishant Malik" }, { "code": null, "e": 7555, "s": 7520, "text": "\n 10 Lectures \n 1.5 hours \n" }, { "code": null, "e": 7570, "s": 7555, "text": " Nishant Malik" }, { "code": null, "e": 7605, "s": 7570, "text": "\n 12 Lectures \n 2.5 hours \n" }, { "code": null, "e": 7620, "s": 7605, "text": " Nishant Malik" }, { "code": null, "e": 7653, "s": 7620, "text": "\n 20 Lectures \n 2 hours \n" }, { "code": null, "e": 7667, "s": 7653, "text": " Asif Hussain" }, { "code": null, "e": 7702, "s": 7667, "text": "\n 10 Lectures \n 1.5 hours \n" }, { "code": null, "e": 7717, "s": 7702, "text": " Nishant Malik" }, { "code": null, "e": 7752, "s": 7717, "text": "\n 48 Lectures \n 6.5 hours \n" }, { "code": null, "e": 7766, "s": 7752, "text": " Asif Hussain" }, { "code": null, "e": 7773, "s": 7766, "text": " Print" }, { "code": null, "e": 7784, "s": 7773, "text": " Add Notes" } ]
How to add a method to a JavaScript object?
Adding a method to a JavaScript object is easier than adding a method to an object constructor: we simply assign a function to a property of the existing object.

In the following example, the object type is created first and the properties of each object are set in the constructor. Once the objects are created, a method is assigned to each of them, and the properties are accessed through that method as required.

<html>
<body>
<p id = "prop"></p>
<script>
   function Business(name, property, age, designation) {
      this.Name = name;
      this.prop = property;
      this.age = age;
      this.designation = designation;
   }
   var person1 = new Business("Trump", "$28.05 billion", "73", "President");
   var person2 = new Business("Jackma", "$35.6 billion", "54", "entrepreneur");
   person1.det = function() {
      return this.Name + " has a property of net worth " + this.prop;
   };
   person2.det = function() {
      return this.Name + " has a property of net worth " + this.prop;
   };
   document.write(person2.det() + " and " + person1.det());
</script>
</body>
</html>

Jackma has a property of net worth $35.6 billion and Trump has a property of net worth $28.05 billion
[ { "code": null, "e": 1239, "s": 1062, "text": "Adding a method to a javascript object is easier than adding a method to an object constructor. We need to assign the method to the existing property to ensure task completion." }, { "code": null, "e": 1515, "s": 1239, "text": "In the following example, initially, the object type is created and later on, the properties of the object were created. Once the creation of properties is done, a method is assigned to each of the objects and the properties were accessed using the method as our requirement." }, { "code": null, "e": 1525, "s": 1515, "text": "Live Demo" }, { "code": null, "e": 2208, "s": 1525, "text": "<html>\n<body>\n<p id = \"prop\"></p>\n<script>\n function Business(name, property, age, designation) {\n this.Name = name;\n this.prop = property;\n this.age = age;\n this.designation = designation;\n }\n var person1 = new Business(\"Trump\", \"$28.05billion\", \"73\", \"President\");\n var person2 = new Business(\"Jackma\", \"$35.6 billion\", \"54\", \"entrepeneur\");\n person1.det = function() {\n return this.Name + \" \"+\" has a property of net worth \"+ \"\" + this.prop;\n };\n person2.det = function() {\n return this.Name + \" \"+\" has a property of net worth \"+ \"\" + this.prop;\n };\n document.write(person2.det() +\" and \"+person1.det());\n</script>\n</body>\n</html>" }, { "code": null, "e": 2309, "s": 2208, "text": "Jackma has a property of net worth $35.6 billion and Trump has a property of net worth $28.05billion" } ]
How can we show/hide the table header of a JTable in Java?
A JTable is a subclass of the JComponent class for displaying complex data structures. A JTable follows the Model View Controller (MVC) design pattern for displaying the data in rows and columns. The DefaultTableModel class is a subclass of AbstractTableModel and can be used to add rows and columns to a JTable dynamically. The DefaultTableCellRenderer class extends the JLabel class and can be used to add images, colored text, etc. inside a JTable cell. In the example below, we hide the table header of a JTable by unchecking the JCheckBox and show it again by checking the JCheckBox.

import java.awt.*;
import javax.swing.*;
import javax.swing.table.*;
public final class JTableHeaderHideTest extends JPanel {
   private final String[] columnNames = {"String", "Integer", "Boolean"};
   private final Object[][] data = {{"Tutorials Point", 100, true}, {"Tutorix", 200, false}, {"Tutorials Point", 300, true}, {"Tutorix", 400, false}};
   private final TableModel model = new DefaultTableModel(data, columnNames) {
      @Override
      public Class getColumnClass(int column) {
         return getValueAt(0, column).getClass();
      }
   };
   private final JTable table = new JTable(model);
   private final JScrollPane scrollPane = new JScrollPane(table);
   public JTableHeaderHideTest() {
      super(new BorderLayout());
      add(scrollPane);
      JCheckBox check = new JCheckBox("JTableHeader visible: ", true);
      check.addActionListener(ae -> {
         JCheckBox cb = (JCheckBox) ae.getSource();
         scrollPane.getColumnHeader().setVisible(cb.isSelected());
         scrollPane.revalidate();
      });
      add(check, BorderLayout.NORTH);
   }
   public static void main(String[] args) {
      JFrame frame = new JFrame("JTableHeaderHide Test");
      frame.setSize(375, 250);
      frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
      frame.getContentPane().add(new JTableHeaderHideTest());
      frame.setLocationRelativeTo(null);
      frame.setVisible(true);
   }
}
[ { "code": null, "e": 1666, "s": 1062, "text": "A JTable is a subclass of JComponent class for displaying complex data structures. A JTable can follow the Model View Controller (MVC) design pattern for displaying the data in rows and columns. The DefaultTableModel class is a subclass of AbstractTableModel and it can be used to add the rows and columns to a JTable dynamically. The DefaultTableCellRenderer class can extend JLabel class and it can be used to add images, colored text and etc. inside the JTable cell. We can hide the table header of a JTable by unchecking the JCheckBox and show the table header of a JTable by clicking the JCheckBox." }, { "code": null, "e": 3089, "s": 1666, "text": "import java.awt.*;\nimport javax.swing.*;\nimport javax.swing.table.*;\npublic final class JTableHeaderHideTest extends JPanel {\n private final String[] columnNames = {\"String\", \"Integer\", \"Boolean\"};\n private final Object[][] data = {{\"Tutorials Point\", 100, true}, {\"Tutorix\", 200, false}, {\"Tutorials Point\", 300, true}, {\"Tutorix\", 400, false}};\n private final TableModel model = new DefaultTableModel(data, columnNames) {\n @Override\n public Class getColumnClass(int column) {\n return getValueAt(0, column).getClass();\n }\n };\n private final JTable table = new JTable(model);\n private final JScrollPane scrollPane = new JScrollPane(table);\n public JTableHeaderHideTest() {\n super(new BorderLayout());\n add(scrollPane);\n JCheckBox check = new JCheckBox(\"JTableHeader visible: \", true);\n check.addActionListener(ae -> {\n JCheckBox cb = (JCheckBox) ae.getSource();\n scrollPane.getColumnHeader().setVisible(cb.isSelected());\n scrollPane.revalidate();\n });\n add(check, BorderLayout.NORTH);\n }\n public static void main(String[] args) {\n JFrame frame = new JFrame(\"JTableHeaderHide Test\");\n frame.setSize(375, 250);\n frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);\n frame.getContentPane().add(new JTableHeaderHideTest());\n frame.setLocationRelativeTo(null);\n frame.setVisible(true);\n }\n}" } ]
Removing minimize/maximize buttons in Tkinter
When we run our tkinter application, it initially displays a window that hosts all of the widgets. We can remove the maximizing and minimizing behaviour of the displayed window by using the resizable(width, height) method. It takes two boolean values that refer to whether the width and the height of the window can be changed. We generally disable resizing by passing False (or 0) for both values.

#Import the required library
from tkinter import *
#Create an instance of tkinter frame
win = Tk()
#Set the geometry
win.geometry("750x250")
#Disable the resizable property
win.resizable(False, False)
#Create a Label widget
Label(win, text="This Window can't be resized", font=('Helvetica 15 underline')).pack(pady=20)
win.mainloop()

Running the above code will display a non-resizable window.
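The two resizable() flags can also be set independently. As a small illustrative variation (an assumption for this note, not part of the original example), the following sketch keeps the width fixed while still allowing vertical resizing −

#Import the required library
from tkinter import *
#Create an instance of tkinter frame
win = Tk()
win.geometry("750x250")
#Lock the width but leave the height adjustable
win.resizable(False, True)
Label(win, text="Width is fixed; height can still be resized").pack(pady=20)
win.mainloop()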
[ { "code": null, "e": 1506, "s": 1062, "text": "When we run our tkinter application, it initially displays a window that has an interface to display all the widgets. Eventually, we can remove the maximizing and minimizing property of the displayed window by using the resizable(boolean) method. It takes two Boolean values that refer to the status of width and height of the window. We generally disable the max and min resizing property by assigning zero to both values of width and height." }, { "code": null, "e": 1841, "s": 1506, "text": "#Import the required library\nfrom tkinter import*\n#Create an instance of tkinter frame\nwin= Tk()\n#Set the geometry\nwin.geometry(\"750x250\")\n#Disable the resizable Property\nwin.resizable(False, False)\n#Create an Label Widget\nLabel(win, text= \"This Window can't be resized\", font= ('Helvetica 15\nunderline')).pack(pady=20)\nwin.mainloop()" }, { "code": null, "e": 1901, "s": 1841, "text": "Running the above code will display a non-resizable window." } ]
How to join two strings to convert to a single string in Python?
To join two strings in Python, we can use the concatenation operator, '+'. For example:

str1 = "Hello"
str2 = "World"
str3 = str1 + str2
print str3

This will give us the output:

HelloWorld

We can also use str.join(seq) to join multiple strings together. For example:

s = "-"
seq = ("a", "b", "c") # This is a sequence of strings.
print s.join( seq )

This will give us the output:

a-b-c

Note that the str we give is used as the separator when joining the strings together.
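One thing to keep in mind is that join() only accepts strings; if the sequence contains other types, the items have to be converted first. A small sketch in the same Python 2 style as the snippets above (the values are made up for illustration):

nums = [3, 7, 11]
# Convert each number to a string before joining.
joined = ", ".join(str(n) for n in nums)
print joined   # prints: 3, 7, 11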
[ { "code": null, "e": 1148, "s": 1062, "text": "To join 2 strings in Python, we can use the concatenation operator, '+'. For example:" }, { "code": null, "e": 1208, "s": 1148, "text": "str1 = \"Hello\"\nstr2 = \"World\"\nstr3 = str1 + str2\nprint str3" }, { "code": null, "e": 1238, "s": 1208, "text": "This will give us the output:" }, { "code": null, "e": 1249, "s": 1238, "text": "HelloWorld" }, { "code": null, "e": 1327, "s": 1249, "text": "We can also use str.join(seq) to join multiple strings together. For example:" }, { "code": null, "e": 1410, "s": 1327, "text": "s = \"-\";\nseq = (\"a\", \"b\", \"c\"); # This is sequence of strings.\nprint s.join( seq )" }, { "code": null, "e": 1440, "s": 1410, "text": "This will give us the output:" }, { "code": null, "e": 1446, "s": 1440, "text": "a-b-c" }, { "code": null, "e": 1532, "s": 1446, "text": "Note that the str we give is used as the seperator when joining the strings together." } ]
Creating Classes in Python
The class statement creates a new class definition. The name of the class immediately follows the keyword class followed by a colon as follows −

class ClassName:
   'Optional class documentation string'
   class_suite

The class has a documentation string, which can be accessed via ClassName.__doc__.

The class_suite consists of all the component statements defining class members, data attributes and functions.

Following is an example of a simple Python class −

class Employee:
   'Common base class for all employees'
   empCount = 0
   def __init__(self, name, salary):
      self.name = name
      self.salary = salary
      Employee.empCount += 1
   def displayCount(self):
      print "Total Employee %d" % Employee.empCount
   def displayEmployee(self):
      print "Name : ", self.name, ", Salary: ", self.salary

The variable empCount is a class variable whose value is shared among all instances of this class. It can be accessed as Employee.empCount from inside the class or outside the class.

The first method __init__ is a special method, which is called the class constructor or initialization method, and which Python calls when you create a new instance of this class.

You declare other class methods like normal functions with the exception that the first argument to each method is self. Python adds the self argument to the list for you; you do not need to include it when you call the methods.
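A short usage sketch shows how the Employee class above might be instantiated and used (the names and salaries here are made up; the print statements follow the same Python 2 style as the class definition):

# Create two instances; __init__ runs for each one and increments empCount.
emp1 = Employee("Zara", 2000)
emp2 = Employee("Manni", 5000)

# Call the methods; Python supplies the self argument automatically.
emp1.displayEmployee()
emp2.displayEmployee()

# The class variable is shared by all instances.
print "Total Employee %d" % Employee.empCount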
[ { "code": null, "e": 1207, "s": 1062, "text": "The class statement creates a new class definition. The name of the class immediately follows the keyword class followed by a colon as follows −" }, { "code": null, "e": 1274, "s": 1207, "text": "class ClassName:\n'Optional class documentation string'\nclass_suite" }, { "code": null, "e": 1357, "s": 1274, "text": "The class has a documentation string, which can be accessed via ClassName.__doc__." }, { "code": null, "e": 1469, "s": 1357, "text": "The class_suite consists of all the component statements defining class members, data attributes and functions." }, { "code": null, "e": 1521, "s": 1469, "text": "Following is the example of a simple Python class −" }, { "code": null, "e": 1879, "s": 1521, "text": "class Employee:\n 'Common base class for all employees'\n empCount = 0\n def __init__(self, name, salary):\n self.name = name\n self.salary = salary\n Employee.empCount += 1\n def displayCount(self):\n print \"Total Employee %d\" % Employee.empCount\n def displayEmployee(self):\n print \"Name : \", self.name, \", Salary: \", self.salary" }, { "code": null, "e": 2066, "s": 1879, "text": "The variable empCount is a class variable whose value is shared among all instances of a this class. This can be accessed as Employee.empCount from inside the class or outside the class." }, { "code": null, "e": 2236, "s": 2066, "text": "The first method __init__ is a special method, which is called class constructor or initialization method that Python calls when you create a new instance of this class." }, { "code": null, "e": 2465, "s": 2236, "text": "You declare other class methods like normal functions with the exception that the first argument to each method is self. Python adds the self argument to the list for you; you do not need to include it when you call the methods." } ]
Data Science with Medium Story Stats in Python | by Will Koehrsen | Towards Data Science
Medium is a great place to write: no distracting features, a large — yet civil — readership, and, best of all, no advertisements. However, one aspect where it falls short is in the statistics you can see for your articles. Sure, you can go to the stats page, but all you get to see is some plain numbers and a bar chart in an awful shade of green. There’s no in-depth analysis of any kind and no way to make sense of the data generated by your articles.

It’s as if Medium said: “let’s build a great blogging platform, but make it as difficult as possible for writers to get insights from their stats.” Although I don’t care about using stats to maximize views (if I wanted to get the most views, all my articles would be 3-minute lists), as a data scientist, I can’t bear the thought of data going unexamined.

Instead of just complaining about the poor state of Medium’s stats, I decided to do something about it and wrote a Python toolkit to allow anyone to quickly retrieve, analyze, interpret, and make beautiful, interactive plots of their Medium statistics. In this article, I’ll show how to use the tools, discuss how they work, and we’ll explore some insights from my Medium story stats.

The full toolkit for you to use is on GitHub. You can see a usage Jupyter Notebook on GitHub here (unfortunately interactive plots don’t work on GitHub’s notebook viewer) or in full interactive glory on NBviewer here. Contributions to this toolkit are welcome!

First, we need to retrieve some stats. When writing the toolkit, I spent 2 hours trying to figure out how to auto login to Medium in Python before deciding on the 15-second solution listed below. If you want to use my data, it’s already included in the toolkit; otherwise, follow these steps to use your own data:

1. Go to your Medium Stats Page.
2. Scroll down to the bottom so all the stories’ stats are showing.
3. Right click and save the page as stats.html in the toolkit’s data/ directory.

This is demonstrated in the following clip:

Next, open a Jupyter Notebook or Python terminal in the toolkit’s medium/ directory and run (again, you can use my included data):

from retrieval import get_data
df = get_data(fname='stats.html')

This will not only parse the stats.html file and extract all the information, it also goes online to every article, retrieves the entire article and metadata, and stores the results in a dataframe. For my 121 articles, this process took about 5 seconds! Now, we have a dataframe with complete info about our articles:

(I’ve cut off the dataframe for display so there is even more data than shown.) Once we have this information, we can analyze it using any data science methods we know or we can use the tools in the code. Included in the Python toolkit are methods for making interactive plots, fitting the data with machine learning, interpreting relationships and generating future predictions.

As a quick example, we can make a heatmap of the correlations in the data:

The <tag> columns indicate whether the story has a specific tag. We can see the tag “Towards Data Science” has a 0.54 correlation with the number of “fans”, indicating that attaching this tag to a story is positively correlated with the number of fans (as well as the number of claps).
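For reference, a correlation heatmap like this can be sketched directly from the stats dataframe with pandas and plotly; the snippet below illustrates the general idea and is not the toolkit’s own plotting function:

import plotly.graph_objs as go
from plotly.offline import iplot

# Pairwise correlations between the numeric columns of the stats dataframe.
corr = df.corr()
iplot(go.Figure(data=[go.Heatmap(z=corr.values, x=list(corr.columns), y=list(corr.columns))]))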
Most of these relationships are obvious (claps is positively correlated with fans) but if you want to maximize story views, you may be able to find some hints here.

Another plot we can make in a single line of code is a scatterplot matrix (also affectionately called a “splom”) colored by the publication: (These plots are interactive, which can be seen in NBviewer here).

Before we get back to the analysis (there are a lot more plots to look forward to), it’s worth briefly discussing how these Python tools get and display all the data. The workhorses of the code are BeautifulSoup, requests, and plotly, which in my opinion, are as important for data science as the well-known pandas + numpy + matplotlib trio (as we’ll see, it’s time to retire matplotlib).

From a first look at the Medium stats page, it doesn’t seem very structured. However, hidden beneath every page on the internet is HyperText Markup Language (HTML), a structured language for rendering web pages. Without Python, we might be forced to open up Excel and start typing in those numbers (when I was at the Air Force, no joke, this would have been the accepted method) but, thanks to the BeautifulSoup library, we can make use of the structure to extract data. For example, to find the stats table within the downloaded stats.html, we parse the page with BeautifulSoup. Once we have a soup object, we step through it, at each point getting the data we need (HTML has a hierarchical tree structure referred to as a Document Object Model — DOM). From the table, we take out an individual row — representing one article — and extract a few pieces of data from it.

This might appear tedious, and it is when you have to do it by hand. It involves a lot of using the developer tools in Google Chrome to find the information you want within the HTML. Fortunately, the Python Medium stats toolkit does all this for you behind the scenes. You just need to type two lines of code!

From the stats page, the code extracts metadata for each article, as well as the article link. Then, it grabs the article itself (not just the stats) using the requests library and it parses the article for any relevant data, also with BeautifulSoup. All of this is automated in the toolkit, but it’s worth taking a look at the code.

Once you get familiar with these Python libraries, you start to realize how much data there is on the web just waiting for you to grab it.

As a side note, the entire code takes about 2 minutes to run sequentially, but since waiting is unproductive, I wrote it to use multiprocessing and reduced the run time to about 10 seconds. The source code for data retrieval is here.

This is a highly unscientific chart of my enjoyment of Python plots over time:

The plotly library (with the cufflinks wrapper) has made plotting in Python enjoyable once again! It enables single-line-of-code fully-interactive charts that are so easy to make I have vowed to never write another line of matplotlib again. Contrast the two plots below, both made with one line of code: on the left is matplotlib's effort — a static, boring chart — while on the right is plotly's work — a nice interactive chart which, critically, lets you make sense of your data quickly.

All of the plotting in the toolkit is done with plotly, which means much better charts in much less code.
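To give a flavour of those one-line charts, here is a hedged sketch using the cufflinks wrapper directly on the dataframe built earlier (the column names come from the stats dataframe above; this is an illustration, not the toolkit’s own plotting function):

import cufflinks as cf
cf.go_offline()   # render interactive plotly charts inside the notebook

# A single line gives an interactive scatter of views against reading time.
df.iplot(kind='scatter', x='read_time', y='views', mode='markers',
         xTitle='Read Time (minutes)', yTitle='Views')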
What’s more, plots in the notebook can be opened in the online plotly chart editor so you can add your own touches such as notes and final edits for publication:

The analysis code implements univariate linear regressions, univariate polynomial regressions, multivariate linear regressions, and forecasting. This is done with standard data science tooling: numpy, statsmodels, scipy, and sklearn. For the full visualization and analysis code, see this script.

Back to the analysis! I usually like to start off by looking at univariate — single variable — distributions. For this, we can use the following code:

from plotly.offline import iplot
from visuals import make_hist

iplot(make_hist(df, x='views', category='publication'))

Clearly, I should keep publications in “Towards Data Science”! Most of my articles that are not in any publication are unlisted, meaning they can only be viewed if you have the link (for that you need to follow me on Twitter).

Since all of the data is time-based, there is also a method for making cumulative graphs showing your stats piling up over time:

from visuals import make_cum_plot

iplot(make_cum_plot(df, y=['word_count', 'views']))

Recently, I’ve had a massive spike in word count, because I released a bunch of articles I’ve been working on for a while. My views started to take off when I published my first articles on Towards Data Science. (As a note, the views aren’t quite correct because this assumes that all the views for a given article occur at one point in time, when the article is published. However, this is fine as a first approximation).

The scatterplot is a simple yet effective method for visualizing relationships between two variables. A basic question we might want to ask is: does the percentage of people who read an article decrease with article length? The straightforward answer is yes:

from visuals import make_scatter_plot

iplot(make_scatter_plot(df, x='read_time', y='ratio'))

As the length of the article — reading time — increases, the number of people who make it through the article clearly decreases and then levels out.

With the scatterplot, we can make either axis a log scale and include a third variable on the plot by sizing or coloring the points according to a number or category. This is also done in one line of code:

iplot(make_scatter_plot(df, x='read_time', y='views', ylog=True, scale='ratio'))

The “Random Forest in Python” article is in many ways an outlier. It has the most views of any of my articles, yet takes 21 minutes to read!

Although the reading ratio decreases with the length of the article, does the number of people reading or viewing the article as well? While our immediate answer would be yes, on closer analysis, it seems that the number of views may not decrease with reading time. To determine this, we can use the fitting capabilities of the tools.

In this analysis, I limited the data to my articles published in Towards Data Science that are shorter than 5000 words and performed a linear regression of views (dependent variable) onto word count (independent variable). Because views can never be negative, the intercept is set to 0:

from visuals import make_linear_regression

figure, summary = make_linear_regression(tds_clean, x='word_count', y='views', intercept_0=True)
iplot(figure)

Contrary to what one might think, as the number of words increases (up to 5000) the number of views also increases! The summary for this fit shows the positive linear relationship and that the slope is statistically significant:

There was once a private note left on one of my articles by a very nice lady which said essentially: “You write good articles, but they are too long. You should write shorter articles with bullet points instead of complete sentences.”

Now, as a rule of thumb, I assume my readers are smart and can handle complete sentences. Therefore, I politely replied to this woman (in bullet points) that I would continue to write articles that are exceedingly long. Based on this analysis, there is no reason to shorten articles (even if my goal were to maximize views), especially for the type of readers who pay attention to Towards Data Science. In fact, every word I add results in 14 more views!

We are not limited to regressing one variable onto another in a linear manner. Another method we can use is polynomial regression, where we allow higher degrees of the independent variable in our fit. However, we want to be careful, as the increased flexibility can lead to overfitting, especially with limited data. As a good point to keep in mind: when we have a flexible model, a closer fit to the data does not mean an accurate representation of reality!

from visuals import make_poly_fits

figure, fit_stats = make_poly_fits(tds_clean, x='word_count', y='reads', degree=6)
iplot(figure)

Using any of the higher-degree fits to extrapolate beyond the data seen here would not be advisable because the predictions can be non-sensical (negative or extremely large).

If we look at the statistics for the fits, we can see that the root mean squared error tends to decrease as the degree of the polynomial increases:

A lower error means we fit the existing data better, but it does not mean we will be able to accurately generalize to new observations (a point we’ll see in a little bit). In data science, we want the parsimonious model, that is, the simplest model that is able to explain the data.

We can also include more than one variable in our linear fits. This is known as multivariate regression since there are multiple independent variables.

list_of_columns = ['read_time', 'edit_days', 'title_word_count', '<tag>Education', '<tag>Data Science', '<tag>Towards Data Science', '<tag>Machine Learning', '<tag>Python']

figure, summary = make_linear_regression(tds, x=list_of_columns, y='fans', intercept_0=False)
iplot(figure)

There are some independent variables, such as the tags Python and Towards Data Science, that contribute to more fans, while others, such as the number of days spent editing, lead to a lower number of fans (at least according to the model). If you wanted to figure out how to get the most fans, you could use this fit and try to maximize it with the free parameters.

The final tools in our toolkit are also my favorite: extrapolations of the number of views, fans, reads, or word counts far into the future. This might be complete nonsense, but that doesn’t mean it’s not enjoyable! It also serves to highlight the point that a more flexible fit — a higher degree of polynomial — does not lead to more accurate generalizations for new data.

from visuals import make_extrapolation

figure, future_df = make_extrapolation(df, y='word_count', years=2.5, degree=3)
iplot(figure)

Looks like I have a lot of work set out ahead of me in order to meet the expected prediction! (The slider on the bottom allows you to zoom in to different places on the graph. You can play around with this in the fully interactive notebook). Getting a reasonable estimate requires adjusting the degree of the polynomial fit. However, because of the limited data, any estimate is likely to break down far into the future.

Let’s do one more extrapolation to see how many reads I can expect:

figure, future_df = make_extrapolation(tds, y='reads', years=1.5, degree=3)
iplot(figure)

You, my reader, also have your work set out for you! I don’t think these extrapolations are all that useful, but they illustrate important points in data science: making a model more flexible does not mean it will be better able to predict the future, and all models are approximations based on existing data.

The Medium stats Python toolkit is a set of tools developed to allow anyone to quickly analyze their own Medium article statistics. Although Medium itself does not provide great insights into your stats, that doesn’t prevent you from carrying out your own analysis with the right tools! There are few things more satisfying to me than making sense out of data — which is why I’m a data scientist — especially when that data is personal and/or useful. I’m not sure there are any major takeaways from this work — besides keep writing for Towards Data Science — but using these tools can demonstrate some important data science principles.

Developing these tools was enjoyable and I’m working on making them better. I would appreciate any contributions (honestly, even if it’s a spelling mistake in a Jupyter Notebook, it helps) so check out the code if you want to help. Since this is my last article of the year, I would like to say thanks for reading — no matter how many stats you contributed to the totals, I could not have done this analysis without you! As we enter the new year, keep reading, keep writing code, keep doing data science, and keep making the world better.

As always, I welcome feedback and discussion. I can be reached on Twitter @koehrsen_will.
[ { "code": null, "e": 626, "s": 172, "text": "Medium is a great place to write: no distracting features, a large — yet civil — readership, and, best of all, no advertisements. However, one aspect where it falls short is in the statistics you can see for your articles. Sure, you can go to the stats page, but all you get to see is some plain numbers and a bar chart in an awful shade of green. There’s no in-depth analysis of any kind and no way to make sense of the data generated by your articles." }, { "code": null, "e": 982, "s": 626, "text": "It’s as if Medium said: “let’s build a great blogging platform, but make it as difficult as possible for writers to get insights from their stats.” Although I don’t care about using stats to maximize views (if I wanted to get the most views, all my articles would be 3-minute lists), as a data scientist, I can’t bear the thought of data going unexamined." }, { "code": null, "e": 1367, "s": 982, "text": "Instead of just complaining about the poor state of Medium’s stats, I decided to do something about it and wrote a Python toolkit to allow anyone to quickly retrieve, analyze, interpret, and make beautiful, interactive plots of their Medium statistics. In this article, I’ll show how to use the tools, discuss how they work, and we’ll explore some insights from my Medium story stats." }, { "code": null, "e": 1628, "s": 1367, "text": "The full toolkit for you to use is on GitHub. You can see a usage Jupyter Notebook on GitHub here (unfortunately interactive plots don’t work on GitHub’s notebook viewer) or in full interactive glory on NBviewer here. Contributions to this toolkit are welcome!" }, { "code": null, "e": 1936, "s": 1628, "text": "First, we need to retrieve some stats. When writing the toolkit, I spent 2 hours trying to figure out how to auto login to Medium in Python before deciding on the 15-second solution listed below. If you want to use my data, it’s already included in the toolkit, otherwise, follow the steps to use your data:" }, { "code": null, "e": 2103, "s": 1936, "text": "Go to your Medium Stats Page.Scroll down to the bottom so all the stories’ stats are showing.Right click and save the page as stats.html in the toolkitdata/ directory" }, { "code": null, "e": 2133, "s": 2103, "text": "Go to your Medium Stats Page." }, { "code": null, "e": 2198, "s": 2133, "text": "Scroll down to the bottom so all the stories’ stats are showing." }, { "code": null, "e": 2272, "s": 2198, "text": "Right click and save the page as stats.html in the toolkitdata/ directory" }, { "code": null, "e": 2316, "s": 2272, "text": "This is demonstrated in the following clip:" }, { "code": null, "e": 2447, "s": 2316, "text": "Next, open a Jupyter Notebook or Python terminal in the toolkit’s medium/ directory and run (again, you can use my included data):" }, { "code": null, "e": 2511, "s": 2447, "text": "from retrieval import get_datadf = get_data(fname='stats.html')" }, { "code": null, "e": 2826, "s": 2511, "text": "This will not only parse stats.html file and extracts all the information, it also goes online to every article, retrieves the entire article and metadata, and stores the results in a dataframe. For my 121 articles, this process took about 5 seconds! Now, we have a dataframe with complete info about our articles:" }, { "code": null, "e": 3206, "s": 2826, "text": "(I’ve cut off the dataframe for display so there is even more data than shown.) 
Once we have this information, we can analyze it using any data science methods we know or we can use the tools in the code. Included in the Python toolkit are methods for making interactive plots, fitting the data with machine learning, interpreting relationships and generating future predictions." }, { "code": null, "e": 3281, "s": 3206, "text": "As a quick example, we can make a heatmap of the correlations in the data:" }, { "code": null, "e": 3731, "s": 3281, "text": "The <tag> columns indicate whether the story has a specific tag. We can see the tag “Towards Data Science” has a 0.54 correlation with the number of “fans” indicating that attaching this tag to a story is positively correlated with the number of fans (as well as the number of claps). Most of these relationships are obvious (claps is positively correlated with fans) but if you want to maximize story views, you may be able to find some hints here." }, { "code": null, "e": 3872, "s": 3731, "text": "Another plot we can make in a single line of code is a scatterplot matrix (also affectionately called a “splom”) colored by the publication:" }, { "code": null, "e": 3938, "s": 3872, "text": "(These plots are interactive which can be seen in NBviewer here)." }, { "code": null, "e": 4327, "s": 3938, "text": "Before we get back to the analysis (there are a lot more plots to look forward to), it’s worth briefly discussing how these Python tools get and display all the data. The workhorses of the code are BeautifulSoup, requests, and plotly, which in my opinion, are as important for data science as the well-known pandas + numpy + matplotlib trio (as we’ll see, it’s time to retire matplotlib)." }, { "code": null, "e": 4404, "s": 4327, "text": "From a first look at the Medium stats page, it doesn’t seem very structured." }, { "code": null, "e": 4876, "s": 4404, "text": "However, hidden beneath every page on the internet is HyperText Markup Language (HTML), a structured language for rendering web pages. Without Python, we might be forced to open up excel and start typing in those numbers (when I was at the Air Force, no joke, this would have been the accepted method) but, thanks to the BeautifulSoup library, we can make use of the structure to extract data. For example, to find the above table within the downloaded stats.html we use:" }, { "code": null, "e": 5170, "s": 4876, "text": "Once we have a soup object, we step through it, at each point getting the data we need (HTML has a hierarchical tree structure referred to as a Document Object Model — DOM). From the table, we take out an individual row — representing one article — and extract a few pieces of data as follows:" }, { "code": null, "e": 5480, "s": 5170, "text": "This might appear tedious, and it is when you have to do it by hand. It involves a lot of using the developer tools in Google Chrome to find the information you want within the HTML. Fortunately, the Python Medium stats toolkit does all this for you behind the scenes. You just need to type two lines of code!" }, { "code": null, "e": 5953, "s": 5480, "text": "From the stats page, the code extracts metadata for each article, as well as the article link. Then, it grabs the article itself (not just the stats) using the requests library and it parses the article for any relevant data, also with BeautifulSoup. All of this is automated in the toolkit, but it’s worth taking a look at the code. 
Once you get familiar with these Python libraries, you start to realize how much data there is on the web just waiting for you to grab it." }, { "code": null, "e": 6187, "s": 5953, "text": "As a side note, the entire code takes about 2 minutes to run sequentially, but since waiting is unproductive, I wrote it to use multiprocessing and reduced the run time to about 10 seconds. The source code for data retrieval is here." }, { "code": null, "e": 6266, "s": 6187, "text": "This is a highly unscientific chart of my enjoyment of Python plots over time:" }, { "code": null, "e": 6569, "s": 6266, "text": "The plotly library (with the cufflinks wrapper) has made plotting in Python enjoyable once again! It enables single-line-of-code fully-interactive charts that are so easy to make I have vowed to never write another line of matplotlib again. Contrast the two plots below both made with one line of code:" }, { "code": null, "e": 6754, "s": 6569, "text": "On the left is matplotlib's effort— a static, boring chart — while on the right is plotly's work — a nice interactive chart which, critically, lets you make sense of your data quickly." }, { "code": null, "e": 7021, "s": 6754, "text": "All of the plotting in the toolkit is done with plotly which means much better charts in much less code. What’s more, plots in the notebook can be opened in the online plotly chart editor so you can add your own touches such as notes and final edits for publication:" }, { "code": null, "e": 7318, "s": 7021, "text": "The analysis code implements univariate linear regressions, univariate polynomial regressions, multivariate linear regressions, and forecasting. This is done with standard data science tooling: numpy, statsmodels, scipy, and sklearn. For the full visualization and analysis code, see this script." }, { "code": null, "e": 7470, "s": 7318, "text": "Back to the analysis! I usually like to start off by looking at univariate — single variables — distributions. For this, we can use the following code:" }, { "code": null, "e": 7587, "s": 7470, "text": "from plotly.offline import iplotfrom visuals import make_histiplot(make_hist(df, x='views', category='publication'))" }, { "code": null, "e": 7813, "s": 7587, "text": "Clearly, I should keep publications in “Towards Data Science”! Most of my articles that are not in any publication are unlisted meaning they can only be viewed if you have the link (for that you need to follow me on Twitter)." }, { "code": null, "e": 7942, "s": 7813, "text": "Since all of the data is time-based, there is also a method for making cumulative graphs showing your stats piling up over time:" }, { "code": null, "e": 8027, "s": 7942, "text": "from visuals import make_cum_plotiplot(make_cum_plot(df, y=['word_count', 'views']))" }, { "code": null, "e": 8239, "s": 8027, "text": "Recently, I’ve had a massive spike in word count, because I released a bunch of articles I’ve been working on for a while. My views started to take off when I published my first articles on Towards Data Science." }, { "code": null, "e": 8450, "s": 8239, "text": "(As a note, the views aren’t quite correct because this assumes that all the views for a given article occur at one point in time, when the article is published. However, this is fine as a first approximation)." }, { "code": null, "e": 8709, "s": 8450, "text": "The scatterplot is a simple yet effective method for visualizing relationships between two variables. 
A basic question we might want to ask is: does the percentage of people who read an article decrease with article length? The straightforward answer is yes:" }, { "code": null, "e": 8801, "s": 8709, "text": "from visuals import make_scatter_plotiplot(make_scatter_plot(df, x='read_time', y='ratio'))" }, { "code": null, "e": 8950, "s": 8801, "text": "As the length of the article — reading time — increases, the number of people who make it through the article clearly decreases and then levels out." }, { "code": null, "e": 9156, "s": 8950, "text": "With the scatterplot, we can make either axis a log scale and include a third variable on the plot by sizing or coloring the points according to a number or category. This is also done in one line of code:" }, { "code": null, "e": 9295, "s": 9156, "text": "iplot(make_scatter_plot(df, x='read_time', y='views', ylog=True, scale='ratio'))" }, { "code": null, "e": 9436, "s": 9295, "text": "The “Random Forest in Python” article is in many ways an outlier. It has the most views of any of my articles, yet takes 21 minutes to read!" }, { "code": null, "e": 9771, "s": 9436, "text": "Although the reading ratio decreases with the length of the article, does the number of people reading or viewing the article as well? While our immediate answer would be yes, on closer analysis, it seems that the number of views may not decrease with reading time. To determine this, we can use the fitting capabilities of the tools." }, { "code": null, "e": 10058, "s": 9771, "text": "In this analysis, I limited the data to my articles published in Towards Data Science that are shorter than 5000 words and performed a linear regression of views (dependent variable) onto word count (independent variable). Because views can never be negative, the intercept is set to 0:" }, { "code": null, "e": 10247, "s": 10058, "text": "from visuals import make_linear_regressionfigure, summary = make_linear_regression(tds_clean, x='word_count', y='views', intercept_0=True)iplot(figure)" }, { "code": null, "e": 10476, "s": 10247, "text": "Contrary to what one might think, as the number of words increases (up to 5000) the number of views also increases! The summary for this fit shows the positive linear relationship and that the slope is statistically significant:" }, { "code": null, "e": 10711, "s": 10476, "text": "There was once a private note left on one of my articles by a very nice lady which said essentially: “You write good articles, but they are too long. You should write shorter articles with bullet points instead of complete sentences.”" }, { "code": null, "e": 11166, "s": 10711, "text": "Now, as a rule of thumb, I assume my readers are smart and can handle complete sentences. Therefore, I politely replied to this women (in bullet points) that I would continue to write articles that are exceedingly long. Based on this analysis, there is no reason to shorten articles (even if my goal were to maximize views), especially for the type of readers who pay attention to Towards Data Science. In fact, every word I add results in 14 more views!" }, { "code": null, "e": 11622, "s": 11166, "text": "We are not limited to regressing one variable onto another in a linear manner. Another method we can use is polynomial regression where we allow higher degrees of the independent variable in our fit. However, we want to be careful as the increased flexibility can lead to overfitting especially with limited data. 
As a good point to keep in mind: when we have a flexible model, a closer fit to the data does not mean an accurate representation of reality!" }, { "code": null, "e": 11800, "s": 11622, "text": "from visuals import make_poly_fitfigure, fit_stats = make_poly_fits(tds_clean, x='word_count', y='reads', degree=6)iplot(figure)" }, { "code": null, "e": 11975, "s": 11800, "text": "Using any of the higher-degree fits to extrapolate beyond the data seen here would not be advisable because the predictions can be non-sensical (negative or extremely large)." }, { "code": null, "e": 12123, "s": 11975, "text": "If we look at the statistics for the fits, we can see that the root mean squared error tends to decrease as the degree of the polynomial increases:" }, { "code": null, "e": 12406, "s": 12123, "text": "A lower error means we fit the existing data better, but it does not mean we will be able to accurately generalize to new observations (a point we’ll see in a little bit). In data science, we want the parsimonious model, that is, the simplest model that is able to explain the data." }, { "code": null, "e": 12558, "s": 12406, "text": "We can also include more than one variable in our linear fits. This is known as multivariate regression since there are multiple independent variables." }, { "code": null, "e": 12895, "s": 12558, "text": "list_of_columns = ['read_time', 'edit_days', 'title_word_count', '<tag>Education', '<tag>Data Science', '<tag>Towards Data Science', '<tag>Machine Learning', '<tag>Python']figure, summary = make_linear_regression(tds, x=list_of_columns, y='fans', intercept_0=False)iplot(figure)" }, { "code": null, "e": 13261, "s": 12895, "text": "There are some independent variables, such as the tags Python and Towards Data Science, that contribute to more fans, while others, such as the number of days spent editing, lead to a lower number of fans (at least according to the model). If you wanted to figure out how to get the most fans, you could use this fit and try to maximize it with the free parameters." }, { "code": null, "e": 13635, "s": 13261, "text": "The final tools in our toolkit are also my favorite: extrapolations of the number of views, fans, reads, or word counts far into the future. This might be complete nonsense, but that doesn’t mean it’s not enjoyable! It also serves to highlight the point that a more flexible fit — a higher degree of polynomial — does not lead to more accurate generalizations for new data." }, { "code": null, "e": 13825, "s": 13635, "text": "from visuals import make_extrapolationfigure, future_df = make_extrapolation(df, y='word_count', years=2.5, degree=3)iplot(figure)" }, { "code": null, "e": 14246, "s": 13825, "text": "Looks like I have a lot of work set out ahead of me in order to meet the expected prediction! (The slider on the bottom allows you to zoom in to different places on the graph. You can play around with this in the fully interactive notebook). Getting a reasonable estimate requires adjusting the degree of the polynomial fit. However, because of the limited data, any estimate is likely to break down far into the future." }, { "code": null, "e": 14314, "s": 14246, "text": "Let’s do one more extrapolation to see how many reads I can expect:" }, { "code": null, "e": 14442, "s": 14314, "text": "figure, future_df = make_extrapolation(tds, y='reads', years=1.5, degree=3)iplot(figure)" }, { "code": null, "e": 14752, "s": 14442, "text": "You, my reader, also have your work set out for you! 
I don’t think these extrapolations are all that useful but they illustrate important points in data science: making a model more flexible does not mean it will be better able to predict the future, and, all models are approximations based on existing data." }, { "code": null, "e": 15388, "s": 14752, "text": "The Medium stats Python toolkit is a set of tools developed to allow anyone to quickly analyze their own medium article statistics. Although Medium itself does not provide great insights into your stats, that doesn’t prevent you from carrying out your own analysis with the right tools! There are few things more satisfying to me than making sense out of data — which is why I’m a data scientist— especially when that data is personal and/or useful. I’m not sure there are any major takeaways from this work — besides keep writing for Towards Data Science — but using these tools can demonstrate some important data science principles." }, { "code": null, "e": 15927, "s": 15388, "text": "Developing these tools was enjoyable and I’m working on making them better. I would appreciate any contributions (honestly, even if it’s a spelling mistake in a Jupyter Notebook, it helps) so check out the code if you want to help. Since this is my last article of the year, I would like to say thanks for reading — no matter how many stats you contributed to the totals, I could not have done this analysis without you! As we enter the new year, keep reading, keep writing code, keep doing data science, and keep making the world better." } ]
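To make the fitting discussion above concrete, here is a minimal, self-contained sketch of the same idea; it does not use the article's visuals module, and the data is made up, so treat it purely as an illustration of how a higher-degree polynomial lowers the error on the data it was fit to while giving far less trustworthy extrapolations.
# Hypothetical illustration only; x and y are synthetic stand-ins for word counts and views.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(1, 50, 30)
y = 100 * x + rng.normal(0, 500, size=x.size)   # roughly linear trend plus noise

for degree in (1, 3, 6):
    coeffs = np.polyfit(x, y, degree)           # least-squares polynomial fit
    fitted = np.polyval(coeffs, x)
    rmse = np.sqrt(np.mean((y - fitted) ** 2))  # error on the fitted data: drops as degree grows
    far_out = np.polyval(coeffs, 200)           # prediction far outside the observed range
    print(degree, round(rmse, 1), round(far_out, 1))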
Android - Event Handling
Events are a useful way to collect data about a user's interaction with the interactive components of an application, such as button presses or screen touches. The Android framework maintains an event queue on a first-in, first-out (FIFO) basis. You can capture these events in your program and take appropriate action as per your requirements. There are three concepts related to Android event management −
Event Listeners − An event listener is an interface in the View class that contains a single callback method. These methods will be called by the Android framework when the View to which the listener has been registered is triggered by user interaction with the item in the UI.
Event Listeners Registration − Event registration is the process by which an Event Handler gets registered with an Event Listener so that the handler is called when the Event Listener fires the event.
Event Handlers − When an event happens and we have registered an event listener for the event, the event listener calls the Event Handler, which is the method that actually handles the event.
OnClickListener() This is called when the user either clicks or touches or focuses upon any widget like a button, text, image etc. You will use the onClick() event handler to handle such an event.
OnLongClickListener() This is called when the user either clicks or touches or focuses upon any widget like a button, text, image etc. for one or more seconds. You will use the onLongClick() event handler to handle such an event.
OnFocusChangeListener() This is called when the widget loses its focus, i.e. the user goes away from the view item. You will use the onFocusChange() event handler to handle such an event.
OnKeyListener() This is called when the user is focused on the item and presses or releases a hardware key on the device. You will use the onKey() event handler to handle such an event.
OnTouchListener() This is called when the user touches the screen, releases the touch, or makes any movement gesture on the screen. You will use the onTouch() event handler to handle such an event.
OnMenuItemClickListener() This is called when the user selects a menu item. You will use the onMenuItemClick() event handler to handle such an event.
OnCreateContextMenuListener() This is called when the context menu is being built (as the result of a sustained "long click").
There are many more event listeners available as part of the View class, like OnHoverListener, OnDragListener etc., which may be needed for your application, so I recommend referring to the official documentation in case you are going to develop a sophisticated app.
Event registration is the process by which an Event Handler gets registered with an Event Listener so that the handler is called when the Event Listener fires the event.
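As a quick, hypothetical illustration (this snippet is not part of the tutorial's example app; the button id and the message text are made up), attaching one of the listeners from the table above to a widget inside an Activity's onCreate() looks like this:
Button button = (Button) findViewById(R.id.my_button);
button.setOnLongClickListener(new View.OnLongClickListener() {
   @Override
   public boolean onLongClick(View v) {
      // onLongClick() is the event handler invoked after a sustained press
      Toast.makeText(MainActivity.this, "Long click received", Toast.LENGTH_SHORT).show();
      return true;   // returning true marks the event as consumed
   }
});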
Though there are several ways to register your event listener for any event, I'm going to list only the top three, out of which you can use any one based on the situation:
Using an anonymous inner class.
Having the Activity class implement the listener interface.
Using the layout file activity_main.xml to specify the event handler directly.
The sections below provide detailed examples of all three scenarios −
Users can interact with their devices by using hardware keys or buttons or by touching the screen. Touching the screen puts the device into touch mode. The user can then interact with it by touching the on-screen virtual buttons, images, etc. You can check if the device is in touch mode by calling the View class's isInTouchMode() method.
A view or widget is usually highlighted or displays a flashing cursor when it's in focus. This indicates that it's ready to accept input from the user.
isFocusable() − returns true or false.
isFocusableInTouchMode() − checks to see if the view is focusable in touch mode. (A view may be focusable when using a hardware key but not when the device is in touch mode.)
android:nextFocusUp="@+id/button_l"
public boolean onTouchEvent(MotionEvent event){
   switch(event.getAction()){
      case MotionEvent.ACTION_DOWN:
         Toast.makeText(this, "you have pressed down on the screen", Toast.LENGTH_LONG).show();
         break;
      case MotionEvent.ACTION_UP:
         Toast.makeText(this, "you have released the touch", Toast.LENGTH_LONG).show();
         break;
      case MotionEvent.ACTION_MOVE:
         Toast.makeText(this, "you have moved your finger on the screen", Toast.LENGTH_LONG).show();
         break;
   }
   return super.onTouchEvent(event);
}
Here you will create an anonymous implementation of the listener; this is useful if each class is applied to a single control only, and you have the advantage of being able to pass arguments to the event handler. In this approach the event handler methods can access the private data of the Activity, and no extra reference to the Activity is needed. But if you applied the handler to more than one control, you would have to cut and paste the code for the handler, and if the code for the handler is long, it makes the code harder to maintain.
Following are the simple steps that show how to use a separate listener implementation to register and capture a click event. In a similar way you can implement your listener for any other required event type.
Following is the content of the modified main activity file src/com.example.myapplication/MainActivity.java. This file can include each of the fundamental lifecycle methods.
package com.example.myapplication; import android.app.ProgressDialog; import android.os.Bundle; import android.support.v7.app.ActionBarActivity; import android.view.View; import android.widget.Button; import android.widget.TextView; public class MainActivity extends ActionBarActivity { private ProgressDialog progress; Button b1,b2; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); progress = new ProgressDialog(this); b1=(Button)findViewById(R.id.button); b2=(Button)findViewById(R.id.button2); b1.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { TextView txtView = (TextView) findViewById(R.id.textView); txtView.setTextSize(25); } }); b2.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { TextView txtView = (TextView) findViewById(R.id.textView); txtView.setTextSize(55); } }); } } Following will be the content of res/layout/activity_main.xml file − <?xml version="1.0" encoding="utf-8"?> <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" android:paddingBottom="@dimen/activity_vertical_margin" android:paddingLeft="@dimen/activity_horizontal_margin" android:paddingRight="@dimen/activity_horizontal_margin" android:paddingTop="@dimen/activity_vertical_margin" tools:context=".MainActivity"> <TextView android:id="@+id/textView1" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Event Handling " android:layout_alignParentTop="true" android:layout_centerHorizontal="true" android:textSize="30dp"/> <TextView android:id="@+id/textView2" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Tutorials point " android:textColor="#ff87ff09" android:textSize="30dp" android:layout_above="@+id/imageButton" android:layout_centerHorizontal="true" android:layout_marginBottom="40dp" /> <ImageButton android:layout_width="wrap_content" android:layout_height="wrap_content" android:id="@+id/imageButton" android:src="@drawable/abc" android:layout_centerVertical="true" android:layout_centerHorizontal="true" /> <Button android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Small font" android:id="@+id/button" android:layout_below="@+id/imageButton" android:layout_centerHorizontal="true" /> <Button android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Large Font" android:id="@+id/button2" android:layout_below="@+id/button" android:layout_alignRight="@+id/button" android:layout_alignEnd="@+id/button" /> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Hello World!" 
android:id="@+id/textView" android:layout_below="@+id/button2" android:layout_centerHorizontal="true" android:textSize="25dp" /> </RelativeLayout>
Following will be the content of the res/values/strings.xml file, which defines the application name constant −
<?xml version="1.0" encoding="utf-8"?> <resources> <string name="app_name">myapplication</string> </resources>
Following is the default content of AndroidManifest.xml −
<?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.myapplication" > <application android:allowBackup="true" android:icon="@drawable/ic_launcher" android:label="@string/app_name" android:theme="@style/AppTheme" > <activity android:name="com.example.myapplication.MainActivity" android:label="@string/app_name" > <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> </application> </manifest>
Let's try to run your myapplication application. I assume you created your AVD while doing the environment setup. To run the app from Android Studio, open one of your project's activity files and click the Run icon from the toolbar. Android Studio installs the app on your AVD and starts it, and if everything is fine with your setup and application, it will display the following Emulator window.
Now try to click on the two buttons, one by one, and you will see that the font of the Hello World text changes; this happens because the registered click event handler method is called for each click event.
I recommend writing different event handlers for different event types to understand the exact differences between event types and their handling. Events related to menu, spinner and picker widgets are a little different, but they are also based on the same concepts as explained above.
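For completeness, here is a hypothetical sketch of the second registration style mentioned earlier, in which the Activity itself implements the listener interface. It reuses the widget ids from the layout above, the imports are the same as in MainActivity.java, and it is not part of the tutorial's downloadable code.
public class MainActivity extends ActionBarActivity implements View.OnClickListener {

   @Override
   protected void onCreate(Bundle savedInstanceState) {
      super.onCreate(savedInstanceState);
      setContentView(R.layout.activity_main);

      // The Activity is its own listener, so we pass "this" while registering.
      findViewById(R.id.button).setOnClickListener(this);
      findViewById(R.id.button2).setOnClickListener(this);
   }

   @Override
   public void onClick(View v) {
      TextView txtView = (TextView) findViewById(R.id.textView);

      // One handler serves several controls, so branch on the view id.
      if (v.getId() == R.id.button) {
         txtView.setTextSize(25);
      } else if (v.getId() == R.id.button2) {
         txtView.setTextSize(55);
      }
   }
}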
[ { "code": null, "e": 3938, "s": 3607, "text": "Events are a useful way to collect data about a user's interaction with interactive components of Applications. Like button presses or screen touch etc. The Android framework maintains an event queue as first-in, first-out (FIFO) basis. You can capture these events in your program and take appropriate action as per requirements." }, { "code": null, "e": 4011, "s": 3938, "text": "There are following three concepts related to Android Event Management −" }, { "code": null, "e": 4289, "s": 4011, "text": "Event Listeners − An event listener is an interface in the View class that contains a single callback method. These methods will be called by the Android framework when the View to which the listener has been registered is triggered by user interaction with the item in the UI." }, { "code": null, "e": 4567, "s": 4289, "text": "Event Listeners − An event listener is an interface in the View class that contains a single callback method. These methods will be called by the Android framework when the View to which the listener has been registered is triggered by user interaction with the item in the UI." }, { "code": null, "e": 4769, "s": 4567, "text": "Event Listeners Registration − Event Registration is the process by which an Event Handler gets registered with an Event Listener so that the handler is called when the Event Listener fires the event." }, { "code": null, "e": 4971, "s": 4769, "text": "Event Listeners Registration − Event Registration is the process by which an Event Handler gets registered with an Event Listener so that the handler is called when the Event Listener fires the event." }, { "code": null, "e": 5164, "s": 4971, "text": "Event Handlers − When an event happens and we have registered an event listener for the event, the event listener calls the Event Handlers, which is the method that actually handles the event." }, { "code": null, "e": 5357, "s": 5164, "text": "Event Handlers − When an event happens and we have registered an event listener for the event, the event listener calls the Event Handlers, which is the method that actually handles the event." }, { "code": null, "e": 5375, "s": 5357, "text": "OnClickListener()" }, { "code": null, "e": 5546, "s": 5375, "text": "This is called when the user either clicks or touches or focuses upon any widget like button, text, image etc. You will use onClick() event handler to handle such event." }, { "code": null, "e": 5568, "s": 5546, "text": "OnLongClickListener()" }, { "code": null, "e": 5768, "s": 5568, "text": "This is called when the user either clicks or touches or focuses upon any widget like button, text, image etc. for one or more seconds. You will use onLongClick() event handler to handle such event." }, { "code": null, "e": 5792, "s": 5768, "text": "OnFocusChangeListener()" }, { "code": null, "e": 5945, "s": 5792, "text": "This is called when the widget looses its focus ie. user goes away from the view item. You will use onFocusChange() event handler to handle such event." }, { "code": null, "e": 5969, "s": 5945, "text": "OnFocusChangeListener()" }, { "code": null, "e": 6133, "s": 5969, "text": "This is called when the user is focused on the item and presses or releases a hardware key on the device. You will use onKey() event handler to handle such event." 
}, { "code": null, "e": 6151, "s": 6133, "text": "OnTouchListener()" }, { "code": null, "e": 6313, "s": 6151, "text": "This is called when the user presses the key, releases the key, or any movement gesture on the screen. You will use onTouch() event handler to handle such event." }, { "code": null, "e": 6339, "s": 6313, "text": "OnMenuItemClickListener()" }, { "code": null, "e": 6457, "s": 6339, "text": "This is called when the user selects a menu item. You will use onMenuItemClick() event handler to handle such event." }, { "code": null, "e": 6491, "s": 6457, "text": "onCreateContextMenuItemListener()" }, { "code": null, "e": 6585, "s": 6491, "text": "This is called when the context menu is being built(as the result of a sustained \"long click)" }, { "code": null, "e": 6876, "s": 6585, "text": "There are many more event listeners available as a part of View class like OnHoverListener, OnDragListener etc which may be needed for your application. So I recommend to refer official documentation for Android application development in case you are going to develop a sophisticated apps." }, { "code": null, "e": 7235, "s": 6876, "text": "Event Registration is the process by which an Event Handler gets registered with an Event Listener so that the handler is called when the Event Listener fires the event. Though there are several tricky ways to register your event listener for any event, but I'm going to list down only top 3 ways, out of which you can use any of them based on the situation." }, { "code": null, "e": 7266, "s": 7235, "text": "Using an Anonymous Inner Class" }, { "code": null, "e": 7297, "s": 7266, "text": "Using an Anonymous Inner Class" }, { "code": null, "e": 7347, "s": 7297, "text": "Activity class implements the Listener interface." }, { "code": null, "e": 7397, "s": 7347, "text": "Activity class implements the Listener interface." }, { "code": null, "e": 7468, "s": 7397, "text": "Using Layout file activity_main.xml to specify event handler directly." }, { "code": null, "e": 7539, "s": 7468, "text": "Using Layout file activity_main.xml to specify event handler directly." }, { "code": null, "e": 7617, "s": 7539, "text": "Below section will provide you detailed examples on all the three scenarios −" }, { "code": null, "e": 7952, "s": 7617, "text": "Users can interact with their devices by using hardware keys or buttons or touching the screen.Touching the screen puts the device into touch mode. The user can then interact with it by touching the on-screen virtual buttons, images, etc.You can check if the device is in touch mode by calling the View class’s isInTouchMode() method." }, { "code": null, "e": 8104, "s": 7952, "text": "A view or widget is usually highlighted or displays a flashing cursor when it’s in focus. This indicates that it’s ready to accept input from the user." }, { "code": null, "e": 8145, "s": 8104, "text": "isFocusable() − it returns true or false" }, { "code": null, "e": 8186, "s": 8145, "text": "isFocusable() − it returns true or false" }, { "code": null, "e": 8360, "s": 8186, "text": "isFocusableInTouchMode() − checks to see if the view is focusable in touch mode. (A view may be focusable when using a hardware key but not when the device is in touch mode)" }, { "code": null, "e": 8534, "s": 8360, "text": "isFocusableInTouchMode() − checks to see if the view is focusable in touch mode. 
(A view may be focusable when using a hardware key but not when the device is in touch mode)" }, { "code": null, "e": 8566, "s": 8534, "text": "android:foucsUp=\"@=id/button_l\"" }, { "code": null, "e": 9069, "s": 8566, "text": "public boolean onTouchEvent(motionEvent event){\n switch(event.getAction()){\n case TOUCH_DOWN:\n Toast.makeText(this,\"you have clicked down Touch button\",Toast.LENTH_LONG).show();\n break();\n \n case TOUCH_UP:\n Toast.makeText(this,\"you have clicked up touch button\",Toast.LENTH_LONG).show();\n break;\n \n case TOUCH_MOVE:\n Toast.makeText(this,\"you have clicked move touch button\"Toast.LENTH_LONG).show();\n break;\n }\n return super.onTouchEvent(event) ;\n}" }, { "code": null, "e": 9382, "s": 9069, "text": "Here you will create an anonymous implementation of the listener and will be useful if each class is applied to a single control only and you have advantage to pass arguments to event handler. In this approach event handler methods can access private data of Activity. No reference is needed to call to Activity." }, { "code": null, "e": 9575, "s": 9382, "text": "But if you applied the handler to more than one control, you would have to cut and paste the code for the handler and if the code for the handler is long, it makes the code harder to maintain." }, { "code": null, "e": 9778, "s": 9575, "text": "Following are the simple steps to show how we will make use of separate Listener class to register and capture click event. Similar way you can implement your listener for any other required event type." }, { "code": null, "e": 9952, "s": 9778, "text": "Following is the content of the modified main activity file src/com.example.myapplication/MainActivity.java. This file can include each of the fundamental lifecycle methods." 
}, { "code": null, "e": 11101, "s": 9952, "text": "package com.example.myapplication;\n\nimport android.app.ProgressDialog;\nimport android.os.Bundle;\nimport android.support.v7.app.ActionBarActivity;\nimport android.view.View;\nimport android.widget.Button;\nimport android.widget.TextView;\n\npublic class MainActivity extends ActionBarActivity {\n private ProgressDialog progress;\n Button b1,b2;\n\n @Override\n protected void onCreate(Bundle savedInstanceState) {\n super.onCreate(savedInstanceState);\n setContentView(R.layout.activity_main);\n progress = new ProgressDialog(this);\n\n b1=(Button)findViewById(R.id.button);\n b2=(Button)findViewById(R.id.button2);\n b1.setOnClickListener(new View.OnClickListener() {\n \n @Override\n public void onClick(View v) {\n TextView txtView = (TextView) findViewById(R.id.textView);\n txtView.setTextSize(25);\n }\n });\n\n b2.setOnClickListener(new View.OnClickListener() {\n \n @Override\n public void onClick(View v) {\n TextView txtView = (TextView) findViewById(R.id.textView);\n txtView.setTextSize(55);\n }\n });\n }\n}" }, { "code": null, "e": 11170, "s": 11101, "text": "Following will be the content of res/layout/activity_main.xml file −" }, { "code": null, "e": 13503, "s": 11170, "text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<RelativeLayout \n xmlns:android=\"http://schemas.android.com/apk/res/android\"\n xmlns:tools=\"http://schemas.android.com/tools\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\"\n android:paddingBottom=\"@dimen/activity_vertical_margin\"\n android:paddingLeft=\"@dimen/activity_horizontal_margin\"\n android:paddingRight=\"@dimen/activity_horizontal_margin\"\n android:paddingTop=\"@dimen/activity_vertical_margin\"\n tools:context=\".MainActivity\">\n \n <TextView\n android:id=\"@+id/textView1\"\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:text=\"Event Handling \"\n android:layout_alignParentTop=\"true\"\n android:layout_centerHorizontal=\"true\"\n android:textSize=\"30dp\"/>\n \n <TextView\n android:id=\"@+id/textView2\"\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:text=\"Tutorials point \"\n android:textColor=\"#ff87ff09\"\n android:textSize=\"30dp\"\n android:layout_above=\"@+id/imageButton\"\n android:layout_centerHorizontal=\"true\"\n android:layout_marginBottom=\"40dp\" />\n \n <ImageButton\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:id=\"@+id/imageButton\"\n android:src=\"@drawable/abc\"\n android:layout_centerVertical=\"true\"\n android:layout_centerHorizontal=\"true\" />\n \n <Button\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:text=\"Small font\"\n android:id=\"@+id/button\"\n android:layout_below=\"@+id/imageButton\"\n android:layout_centerHorizontal=\"true\" />\n \n <Button\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:text=\"Large Font\"\n android:id=\"@+id/button2\"\n android:layout_below=\"@+id/button\"\n android:layout_alignRight=\"@+id/button\"\n android:layout_alignEnd=\"@+id/button\" />\n \n <TextView\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:text=\"Hello World!\"\n android:id=\"@+id/textView\"\n android:layout_below=\"@+id/button2\"\n android:layout_centerHorizontal=\"true\"\n android:textSize=\"25dp\" />\n \n</RelativeLayout>" }, { "code": null, "e": 13590, "s": 13503, "text": "Following will be 
the content of res/values/strings.xml to define two new constants −" }, { "code": null, "e": 13704, "s": 13590, "text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<resources>\n <string name=\"app_name\">myapplication</string>\n</resources>" }, { "code": null, "e": 13763, "s": 13704, "text": "Following is the default content of AndroidManifest.xml −" }, { "code": null, "e": 14478, "s": 13763, "text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\"\n package=\"com.example.myapplication\" >\n \n <application\n android:allowBackup=\"true\"\n android:icon=\"@drawable/ic_launcher\"\n android:label=\"@string/app_name\"\n android:theme=\"@style/AppTheme\" >\n \n <activity\n android:name=\"com.example.myapplication.MainActivity\"\n android:label=\"@string/app_name\" >\n \n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter>\n \n </activity>\n \n </application>\n</manifest>" }, { "code": null, "e": 14869, "s": 14478, "text": "Let's try to run your myapplication application. I assume you had created your AVD while doing environment setup. To run the app from Android Studio, open one of your project's activity files and click Run icon from the toolbar. Android Studio installs the app on your AVD and starts it and if everything is fine with your setup and application, it will display following Emulator window −" }, { "code": null, "e": 15083, "s": 14869, "text": "Now you try to click on two buttons, one by one and you will see that font of the Hello World text will change, which happens because registered click event handler method is being called against each click event." }, { "code": null, "e": 15378, "s": 15083, "text": "I will recommend to try writing different event handlers for different event types and understand exact difference in different event types and their handling. Events related to menu, spinner, pickers widgets are little different but they are also based on the same concepts as explained above." }, { "code": null, "e": 15413, "s": 15378, "text": "\n 46 Lectures \n 7.5 hours \n" }, { "code": null, "e": 15425, "s": 15413, "text": " Aditya Dua" }, { "code": null, "e": 15460, "s": 15425, "text": "\n 32 Lectures \n 3.5 hours \n" }, { "code": null, "e": 15474, "s": 15460, "text": " Sharad Kumar" }, { "code": null, "e": 15506, "s": 15474, "text": "\n 9 Lectures \n 1 hours \n" }, { "code": null, "e": 15523, "s": 15506, "text": " Abhilash Nelson" }, { "code": null, "e": 15558, "s": 15523, "text": "\n 14 Lectures \n 1.5 hours \n" }, { "code": null, "e": 15575, "s": 15558, "text": " Abhilash Nelson" }, { "code": null, "e": 15610, "s": 15575, "text": "\n 15 Lectures \n 1.5 hours \n" }, { "code": null, "e": 15627, "s": 15610, "text": " Abhilash Nelson" }, { "code": null, "e": 15660, "s": 15627, "text": "\n 10 Lectures \n 1 hours \n" }, { "code": null, "e": 15677, "s": 15660, "text": " Abhilash Nelson" }, { "code": null, "e": 15684, "s": 15677, "text": " Print" }, { "code": null, "e": 15695, "s": 15684, "text": " Add Notes" } ]
CakePHP - File upload
To work with file upload we are going to use the form helper. Here is an example of file upload. Make changes in the config/routes.php file, as shown in the following program. <?php use Cake\Http\Middleware\CsrfProtectionMiddleware; use Cake\Routing\Route\DashedRoute; use Cake\Routing\RouteBuilder; $routes->setRouteClass(DashedRoute::class); $routes->scope('/', function (RouteBuilder $builder) { $builder->registerMiddleware('csrf', new CsrfProtectionMiddleware([ 'httpOnly' => true, ])); $builder->applyMiddleware('csrf'); //$builder->connect('/pages',['controller'=>'Pages','action'=>'display', 'home']);
$builder->connect('fileupload',['controller'=>'Files','action'=>'index']); $builder->fallbacks(); }); Create a FilesController.php file at src/Controller/FilesController.php. Copy the following code into the controller file (ignore this step if it already exists). Create an uploads/ directory in src/; the uploaded files will be saved in the uploads/ folder. <?php namespace App\Controller; use App\Controller\AppController; use Cake\View\Helper\FormHelper; class FilesController extends AppController { public function index(){ if ($this->request->is('post')) { $fileobject = $this->request->getData('submittedfile'); $uploadPath = '../uploads/'; $destination = $uploadPath.$fileobject->getClientFilename(); // Existing files with the same name will be replaced.
$fileobject->moveTo($destination); } } } ?> Create a directory Files at src/Template and under that directory create a View file called index.php. Copy the following code into that file. <?php echo $this->Form->create(NULL, ['type' => 'file']); echo $this->Form->file('submittedfile'); echo $this->Form->button('Submit'); echo $this->Form->end(); $uploadPath = '../uploads/'; $files = scandir($uploadPath, 0); echo "Files uploaded in uploads/ are:<br/>"; for($i = 2; $i < count($files); $i++) echo "File is - ".$files[$i]."<br>"; ?> The files saved in the uploads/ folder are listed for the user. Execute the above example by visiting the following URL: http://localhost/cakephp4/fileupload When you execute the above code, you should see the file upload form followed by the list of files already present in the uploads/ folder.
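As a possible hardening of the index() action above (not part of the original example), the uploaded-file object that CakePHP 4 hands you follows the PSR-7 UploadedFileInterface, so it also exposes getError() and getSize(); these can be checked before moving the file. The 2 MB limit is an arbitrary example value, and the Flash component is assumed to be loaded in AppController.
public function index(){
    if ($this->request->is('post')) {
        $fileobject = $this->request->getData('submittedfile');

        // Reject uploads that failed at the PHP/browser level (partial upload, too large, ...).
        if ($fileobject->getError() !== UPLOAD_ERR_OK) {
            $this->Flash->error('Upload failed, please try again.');
            return;
        }

        // Arbitrary example limit of 2 MB.
        if ($fileobject->getSize() > 2 * 1024 * 1024) {
            $this->Flash->error('File is too large.');
            return;
        }

        $uploadPath = '../uploads/';
        $destination = $uploadPath.$fileobject->getClientFilename();
        $fileobject->moveTo($destination);
    }
}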
[ { "code": null, "e": 2339, "s": 2242, "text": "To work on file upload we are going to use the form helper. Here, is an example for file upload." }, { "code": null, "e": 2418, "s": 2339, "text": "Make Changes in the config/routes.php file, as shown in the following program." }, { "code": null, "e": 2978, "s": 2418, "text": "<?php\nuse Cake\\Http\\Middleware\\CsrfProtectionMiddleware;\nuse Cake\\Routing\\Route\\DashedRoute;\nuse Cake\\Routing\\RouteBuilder;\n$routes->setRouteClass(DashedRoute::class);\n$routes->scope('/', function (RouteBuilder $builder) {\n $builder->registerMiddleware('csrf', new CsrfProtectionMiddleware([\n 'httpOnly' => true,\n ]));\n $builder->applyMiddleware('csrf');\n //$builder->connect('/pages',['controller'=>'Pages','action'=>'display', 'home']);\n $builder->connect('fileupload',['controller'=>'Files','action'=>'index']);\n $builder->fallbacks();\n});" }, { "code": null, "e": 3127, "s": 2978, "text": "Create a FilesController.php file at src/Controller/FilesController.php. Copy the following code in the controller file. Ignore, if already created." }, { "code": null, "e": 3215, "s": 3127, "text": "Create uploads/ directory in src/. The files uploaded will be saved in uploads/ folder." }, { "code": null, "e": 3769, "s": 3215, "text": "<?php\n namespace App\\Controller;\n use App\\Controller\\AppController;\n use Cake\\View\\Helper\\FormHelper;\n class FilesController extends AppController {\n public function index(){\n if ($this->request->is('post')) {\n $fileobject = $this->request->getData('submittedfile');\n $uploadPath = '../uploads/';\n $destination = $uploadPath.$fileobject->getClientFilename();\n // Existing files with the same name will be replaced.\n $fileobject->moveTo($destination);\n }\n }\n }\n?>" }, { "code": null, "e": 3910, "s": 3769, "text": "Create a directory Files at src/Template and under that directory create a View file called index.php. Copy the following code in that file." }, { "code": null, "e": 4287, "s": 3910, "text": "<?php\n echo $this->Form->create(NULL, ['type' => 'file']);\n echo $this->l;Form->file('submittedfile');\n echo $this->Form->button('Submit');\n echo $this->Form->end();\n $uploadPath ='../uploads/';\n $files = scandir($uploadPath, 0);\n echo \"Files uploaded in uploads/ are:<br/>\";\n for($i = 2; $i < count($files); $i++)\n echo \"File is - \".$files[$i].\"<br>\";\n?>" }, { "code": null, "e": 4404, "s": 4287, "text": "The files saved in uploads/ folder is listed for the user. Execute the above example by visiting the following URL −" }, { "code": null, "e": 4443, "s": 4404, "text": "http://localhost/cakephp4/fileupload −" }, { "code": null, "e": 4514, "s": 4443, "text": "When you execute the above code, you should see the following output −" }, { "code": null, "e": 4521, "s": 4514, "text": " Print" }, { "code": null, "e": 4532, "s": 4521, "text": " Add Notes" } ]
How to use Node.js REPL ? - GeeksforGeeks
15 Feb, 2022 Node.js REPL, or Read-Evaluate-Print Loop, is an interactive shell for the Node.js environment, which means we can write any valid JavaScript code in it. It is used to test, evaluate, experiment with, or debug code in a much easier and more accessible way. It basically acts as the browser dev-tools console for JavaScript. To use the REPL, you must have Node.js installed for your operating system. To check if Node.js is installed correctly, you can use the following command: node --version If you get a version number, you are good to go; otherwise, you need to fix your installation. So, we can now start to use the Node.js REPL on your machine. How to Start the REPL: Starting the Node.js REPL is quite easy and straightforward; you simply have to enter the word node into the Terminal/CMD/PowerShell as per your OS. node You can use any valid JavaScript code in the prompt. We do not need to use console.log to print the value of a variable; simply typing the name of the variable is sufficient in most cases. As we can see, the prompt output is a bit more than plain text; it is nicely colored and even has autocompletion built in. This makes the REPL convenient and quick for testing ideas before actually using them in a project. Exit from the REPL: To exit from the REPL, you can press CTRL + D (on Windows, Linux, and macOS). Optionally, pressing CTRL + C twice will also exit. Alternatively, we can use the following command to exit out of the REPL: .exit Using JavaScript in the REPL: We can use any valid JavaScript in the REPL: variables, strings, concatenation, arithmetic, and everything else that is feasible in the REPL. There are limitations to what we can comfortably write line by line, such as longer, multi-line programs; this issue is addressed in the next section on REPL commands. As we can see, we have used a couple of JavaScript concepts like string interpolation, arithmetic, and working with arrays. Any valid and feasible JavaScript can be used in the REPL, and hence some core features of JavaScript can be utilized in it. REPL Commands: There are a couple of commands and arguments to be used in the Node.js REPL shell. We'll explore some of them in this section. These commands are to be used in the REPL shell, i.e. after entering the command node into your terminal/CMD/PowerShell. These commands or characters are reserved in the REPL and hence provide some great features and enhance accessibility. Editor command: The .editor command is used to stop the line-by-line evaluation and allow editor-like typing in the shell. It is nothing like a full editor, but it lets you write longer and more meaningful code as a complete program. As we can see, we can write more than one line in the shell, which makes writing more sophisticated code possible with a lot of freedom while staying in the terminal. After writing the required code, we can save and evaluate it by pressing CTRL + D, or we can cancel the evaluation and abort the process by pressing CTRL + C. Save command: We can use the .save command to save the code of the current REPL session to a file. This is really handy because if you exit the REPL, all the code snippets are lost; with this command, it becomes much easier to keep a backup at the user's disposal. As we can see, the code snippet from the REPL is saved into a file. The file in most cases would be a JS file, quite obviously. The .save command is used along with a filename to store the contents of the REPL.
Load command: The .load command, as opposed to the .save command, loads the variables, functions, and other definitions from a file into the REPL. This is useful for loading existing code from a file and experimenting with it, without re-writing the whole thing. As we can see, we loaded the file from the previous example, and it crammed it into a single block of code instead of rendering it line by line. We can extend the code as we want and save it to the file again if needed. This makes experimenting much easier and quicker by avoiding repetitive re-typing of code, and loading JS code from a file makes it super useful as well. Clear command: The .clear command or .break command is used to break out of loop statements or multi-line inputs. We can see from the above example that after entering the .clear or .break command a new prompt appears, breaking out of the current input or statement. These commands do not execute the code; they simply return to the main prompt. Exit command: As said earlier, the alternative to CTRL + D or CTRL + C (twice) is the command .exit. It simply exits the REPL. Help command: The .help command, as stated in the REPL header, gives more information about the options and commands available in the Node.js REPL. Underscore Variable: The underscore variable (_) gives us the result of the last executed command or code. It can be a variable value, a function return value, or anything else that returns some kind of value; if the evaluation produces nothing, the REPL will default it to undefined. As we can see in the example, the _ variable gets the result of the last executed command in the shell. It can be undefined if there was just the declaration of a variable; otherwise it stores the result or return value of the executed command. Using Modules: We can even use JS modules in the Node.js REPL. There are a couple of modules available in the Node.js REPL by default; you can get the list by pressing the TAB key twice. If you want to import other modules, you need to follow this procedure: first install the package via the npm package manager for Node.js. npm install package_name After installing the module in the same directory, we can use the require function to get the module's core functionality. As we can see the command: const express = require('express') Here, express can be replaced with other modules as well. We can then use the functions of the module after writing the usual boilerplate code for the package in the REPL. We can even keep a template in a file and load that into the REPL. This makes testing regularly used modules very easy and quick. A short example session tying several of these commands together is sketched below.
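A minimal, hypothetical session combining several of the commands above might look like this; the file name session.js is made up, and the lines starting with > are typed into the REPL.
$ node
> const greet = (name) => `Hello ${name}`;
undefined
> greet('GeeksforGeeks')
'Hello GeeksforGeeks'
> _
'Hello GeeksforGeeks'
> .save session.js
> .load session.js
> .exit
The underscore on the third prompt simply echoes the last evaluated result, and the .save/.load pair round-trips the snippets through session.js as described above.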
[ { "code": null, "e": 26267, "s": 26239, "text": "\n15 Feb, 2022" }, { "code": null, "e": 26584, "s": 26267, "text": "Node.Js REPL or Read-Evaluate-Print Loop is an interactive shell for the Node.js environment which means we can write any valid Javascript code in it. This is used to test, evaluate, experiment, or debug code much easier and accessible way. It basically acts as the Browser’s Web dev tools’ Console for Javascript. " }, { "code": null, "e": 26744, "s": 26584, "text": "To use the REPL, you must have Node.js downloaded for your Operating System. To check if the Node.Js is installed correctly, you can use the following command:" }, { "code": null, "e": 26760, "s": 26744, "text": "node --version " }, { "code": null, "e": 26911, "s": 26760, "text": "If you get a version number, you are good to go else, you need to fix your installation. So, we can now start to use the node.js REPL in your machine." }, { "code": null, "e": 27083, "s": 26911, "text": "How to Start the REPL: To start the Node.js REPL is quite easy and straightforward, you simply have to enter the word node into the Terminal/CMD/PowerShell as per your OS." }, { "code": null, "e": 27088, "s": 27083, "text": "node" }, { "code": null, "e": 27281, "s": 27088, "text": "You can use any valid Javascript code in the prompt. We do not need to use console.log to print the value of the variables, simply the name of the variables might be sufficient in most cases. " }, { "code": null, "e": 27511, "s": 27281, "text": "As we can see the prompt output is a bit more than plain text, it’s nice colored and even has autocompletion built-in. This makes the REPL more convenient and quick to test up some ideas before actually using them in the project." }, { "code": null, "e": 27732, "s": 27511, "text": "Exit from the REPL: To exit from the reply, you can press CTRL + D in Windows/Linux and CMD+D in macOS. Optionally, CTRL+C twice will also work to exit. Alternately we can also use the following to exit out of the REPL :" }, { "code": null, "e": 27738, "s": 27732, "text": ".exit" }, { "code": null, "e": 28084, "s": 27738, "text": "Using Javascript in REPL: We can use any valid javascript in the REPL. We can use variables, strings, concatenation, arithmetic, and all of the stuff that can be feasible in the REPL. There are limitations to what we can write in the REPL, like a bit longer and functional programs. This issue will be seen in the next section of REPL Commands. " }, { "code": null, "e": 28334, "s": 28084, "text": "As we can see we have used a couple of concepts in Javascript like string interpolation, arithmetic, and working with arrays. Any valid and feasible Javascript can be used in the REPL and hence some core features of Javascript can be utilized in it." }, { "code": null, "e": 28715, "s": 28334, "text": "REPL Commands: There are a couple of commands and arguments to be used in the Node.Js REPL shell. We’ll explore some of them in this section. These commands are to be used in the REPL shell i.e after entering the command node into your terminal/CMD/PowerShell. These commands or characters are reserved in the REPL and hence provide some great features and enhance accessibility. " }, { "code": null, "e": 28936, "s": 28715, "text": "Editor command: This command is used to stop the line-by-line evaluation and make an editor-like typing in the shell. It’s nothing like an editor but simply writing more longer and meaningful code as a form of a program." 
}, { "code": null, "e": 29267, "s": 28948, "text": "As we can see we can write more than one line in the shell which makes writing more sophisticated code with a lot of freedom being in the terminal. After writing the required code, we can save and evaluate the code by pressing CTRL + D or we can cancel the evaluation and hence abort the process by pressing CTRL + C. " }, { "code": null, "e": 29537, "s": 29267, "text": "Save command: We can use the .save command to save the code of the current REPL session in a file. This might be really handy if you exit the REPL, all the code snippets will be lost and with this command, it becomes much easier to keep a backup at the user’s disposal." }, { "code": null, "e": 29761, "s": 29549, "text": "As we can see the code snippet from the REPL is saved into a file. The file in most cases would be a Js file quite obviously. The .save command is used along with the filename to store the contents of the REPL." }, { "code": null, "e": 30011, "s": 29761, "text": "Load command: The load command as opposed to the .save command loads the variables, functions, and other scopes of a file into the REPL. This is useful to load the existing code from a file for experimenting without re-writing the whole code again." }, { "code": null, "e": 30394, "s": 30023, "text": "As we can see, we loaded the file from the previous example, and it crammed it in a single block of code instead of rendering it line by line. We can extend the code as we want and again save it to the file if we want. This makes experimenting much easier and quicker avoiding repetitive writing of code and loading the Js code from a file makes it super useful as well." }, { "code": null, "e": 30520, "s": 30394, "text": "Clear command: The .clear command or .break command is used to break from the existing loop statements or multi-line inputs. " }, { "code": null, "e": 30755, "s": 30532, "text": "We can see from the above example that after entering the .clear or .break command a new prompt appears and breaks out of the current input or statement These commands do not execute the code and return to the main prompt." }, { "code": null, "e": 30885, "s": 30755, "text": "Exit command: As said earlier the alternative to CTRL + D or CTRL + C (twice) is the command .exit. It basically exits the REPL. " }, { "code": null, "e": 31037, "s": 30891, "text": "Help command: The help command as stated in the REPL header gives more information about the options and commands available in the Node.Js REPL. " }, { "code": null, "e": 31333, "s": 31043, "text": "Underscore Variable: The underscore Variable (_) will give us the result of the last executed command or code. It can be a variable value, function return value, or anything which can return some kind of value, if there is nothing from the evaluation the REPL will default it to undefined." }, { "code": null, "e": 31572, "s": 31333, "text": "As we can see in the example, the _ variable gets the result of the last executed command in the shell. It can be undefined if there was just the declaration of a variable else it stores the result or return value of the executed command." }, { "code": null, "e": 31750, "s": 31572, "text": "Using Modules: We can even use Js modules in the Node.Js REPL. There are a couple of modules in the Node.Js REPL by default. You can get the list by pressing the TAB key twice. 
" }, { "code": null, "e": 31912, "s": 31750, "text": "If you want to import other modules, you need to follow the following procedure: You need to firstly install the package via the npm package manager for Node.Js." }, { "code": null, "e": 31937, "s": 31912, "text": "npm install package_name" }, { "code": null, "e": 32064, "s": 31937, "text": "After installing the module in the same directory, we can use the “require” command to get the module’s core functionalities. " }, { "code": null, "e": 32091, "s": 32064, "text": "As we can see the command:" }, { "code": null, "e": 32126, "s": 32091, "text": "const express = require('express')" }, { "code": null, "e": 32408, "s": 32126, "text": "Here, express can be other modules as well. We even use the functions in the modules after writing the boilerplate code for the packages in the REPL. Even we can use a template as a file and load that into a REPL. This makes testing some regularly used modules very easy and quick." }, { "code": null, "e": 32417, "s": 32408, "text": "rkbhola5" }, { "code": null, "e": 32434, "s": 32417, "text": "NodeJS-Questions" }, { "code": null, "e": 32441, "s": 32434, "text": "Picked" }, { "code": null, "e": 32449, "s": 32441, "text": "Node.js" }, { "code": null, "e": 32466, "s": 32449, "text": "Web Technologies" }, { "code": null, "e": 32564, "s": 32466, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 32586, "s": 32564, "text": "Node.js Export Module" }, { "code": null, "e": 32625, "s": 32586, "text": "How to connect Node.js with React.js ?" }, { "code": null, "e": 32650, "s": 32625, "text": "Mongoose find() Function" }, { "code": null, "e": 32720, "s": 32650, "text": "Difference between dependencies, devDependencies and peerDependencies" }, { "code": null, "e": 32747, "s": 32720, "text": "Mongoose Populate() Method" }, { "code": null, "e": 32787, "s": 32747, "text": "Remove elements from a JavaScript Array" }, { "code": null, "e": 32832, "s": 32787, "text": "Convert a string to an integer in JavaScript" }, { "code": null, "e": 32894, "s": 32832, "text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills" }, { "code": null, "e": 32955, "s": 32894, "text": "Difference between var, let and const keywords in JavaScript" } ]
Node Jimp | rotate - GeeksforGeeks
19 Feb, 2021 Introduction The rotate() function is an inbuilt function in Node.js | Jimp which rotates the image clockwise while the dimensions of the image remain the same. Syntax: rotate(r, mode, cb) Parameters: r – This parameter stores the rotation angle for the image. mode – This is an optional parameter which stores the scaling method. cb – This is an optional parameter which is invoked when the operation is complete. Input Images: npm init -y npm install jimp --save Example 1: javascript
// npm install --save jimp
// import jimp library to the environment
var Jimp = require('jimp');

// User-Defined Function to read the images
async function main() {
   const image = await Jimp.read('https://media.geeksforgeeks.org/wp-content/uploads/20190328185307/gfg28.png');

   // rotate Function having a rotation of 55 degrees
   image.rotate(55)
   .write('rotate1.png');
}

main();
console.log("Image Processing Completed");
Output: Example 2: With mode and cb (optional parameters) javascript
// npm install --save jimp
// import jimp library to the environment
var Jimp = require('jimp');

// User-Defined Function to read the images
async function main() {
   const image = await Jimp.read('https://media.geeksforgeeks.org/wp-content/uploads/20190328185333/gfg111.png');

   // rotate Function having rotation angle as 99, mode and callback function
   image.rotate(99, Jimp.RESIZE_BEZIER, function(err){
      if (err) throw err;
   })
   .write('rotate2.png');
}

main();
console.log("Image Processing Completed");
Output: Reference: https://www.npmjs.com/package/jimp
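As a possible variation on the two examples above (not from the original article), rotate() can be chained with other Jimp operations such as resize() and quality(); the input and output file names here are placeholders. javascript
// Hypothetical example: read a local file and chain rotate() with resize() and quality().
const Jimp = require('jimp');

async function main() {
   const image = await Jimp.read('input.png');   // a local file instead of a URL

   image.rotate(45)                 // rotate clockwise by 45 degrees
      .resize(256, Jimp.AUTO)       // Jimp.AUTO keeps the aspect ratio
      .quality(80)                  // quality only affects JPEG output
      .write('rotated-small.jpg');
}

main();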
[ { "code": null, "e": 25603, "s": 25575, "text": "\n19 Feb, 2021" }, { "code": null, "e": 25766, "s": 25603, "text": "Introduction The rotate() function is an inbuilt function in Nodejs | Jimp which rotates the image clockwise and the dimensions of the image remain same.Syntax: " }, { "code": null, "e": 25786, "s": 25766, "text": "rotate(r, mode, cb)" }, { "code": null, "e": 25799, "s": 25786, "text": "Parameter: " }, { "code": null, "e": 25859, "s": 25799, "text": "r – This parameter stores the rotation angle for the image." }, { "code": null, "e": 25926, "s": 25859, "text": "mode – This is optional parameter which stores the scaling method." }, { "code": null, "e": 26005, "s": 25926, "text": "cb – This is optional parameter which is invoked when compilation is complete." }, { "code": null, "e": 26021, "s": 26005, "text": "Input Images: " }, { "code": null, "e": 26035, "s": 26023, "text": "npm init -y" }, { "code": null, "e": 26059, "s": 26035, "text": "npm install jimp --save" }, { "code": null, "e": 26071, "s": 26059, "text": "Example 1: " }, { "code": null, "e": 26082, "s": 26071, "text": "javascript" }, { "code": "// npm install --save jimp// import jimp library to the environmentvar Jimp = require('jimp'); // User-Defined Function to read the imagesasync function main() { const image = await Jimp.read('https://media.geeksforgeeks.org/wp-content/uploads/20190328185307/gfg28.png'); // rotate Function having a rotation as 55 image.rotate(55) .write('rotate1.png');} main(); console.log(\"Image Processing Completed\");", "e": 26493, "s": 26082, "text": null }, { "code": null, "e": 26503, "s": 26493, "text": "Output: " }, { "code": null, "e": 26554, "s": 26503, "text": "Example 2: With mode and cb (optional parameters) " }, { "code": null, "e": 26565, "s": 26554, "text": "javascript" }, { "code": "// npm install --save jimp// import jimp library to the environmentvar Jimp = require('jimp'); // User-Defined Function to read the imagesasync function main() { const image = await Jimp.read('https://media.geeksforgeeks.org/wp-content/uploads/20190328185333/gfg111.png'); // rotate Function having rotation angle as 99, mode and callback function image.rotate(99, Jimp.RESIZE_BEZIER, function(err){ if (err) throw err; }) .write('rotate2.png');} main(); console.log(\"Image Processing Completed\");", "e": 27087, "s": 26565, "text": null }, { "code": null, "e": 27097, "s": 27087, "text": "Output: " }, { "code": null, "e": 27144, "s": 27097, "text": "Reference: https://www.npmjs.com/package/jimp " }, { "code": null, "e": 27161, "s": 27144, "text": "mridulmanochagfg" }, { "code": null, "e": 27178, "s": 27161, "text": "Image-Processing" }, { "code": null, "e": 27188, "s": 27178, "text": "Node-Jimp" }, { "code": null, "e": 27196, "s": 27188, "text": "Node.js" }, { "code": null, "e": 27213, "s": 27196, "text": "Web Technologies" }, { "code": null, "e": 27311, "s": 27213, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27341, "s": 27311, "text": "Node.js fs.writeFile() Method" }, { "code": null, "e": 27370, "s": 27341, "text": "Node.js fs.readFile() Method" }, { "code": null, "e": 27427, "s": 27370, "text": "How to install the previous version of node.js and npm ?" }, { "code": null, "e": 27481, "s": 27427, "text": "Difference between promise and async await in Node.js" }, { "code": null, "e": 27518, "s": 27481, "text": "How to use an ES6 import in Node.js?" 
}, { "code": null, "e": 27558, "s": 27518, "text": "Remove elements from a JavaScript Array" }, { "code": null, "e": 27603, "s": 27558, "text": "Convert a string to an integer in JavaScript" }, { "code": null, "e": 27646, "s": 27603, "text": "How to fetch data from an API in ReactJS ?" }, { "code": null, "e": 27696, "s": 27646, "text": "How to insert spaces/tabs in text using HTML/CSS?" } ]
Python - Smallest K values in Dictionary - GeeksforGeeks
17 Dec, 2019 Many times while working with a Python dictionary, we face the problem of finding the K minimum values among its keys. This problem is quite common in the web development domain. Let's discuss several ways in which this task can be performed. Method #1: Using sorted() + itemgetter() + items(). The combination of the above methods is used to perform this particular task. In this, we just sort the dictionary items by value, expressed using itemgetter() and accessed using items().
# Python3 code to demonstrate working of
# Smallest K values in Dictionary
# Using sorted() + itemgetter() + items()
from operator import itemgetter

# Initialize dictionary
test_dict = {'gfg' : 1, 'is' : 4, 'best' : 6, 'for' : 7, 'geeks' : 3 }

# Initialize K
K = 2

# printing original dictionary
print("The original dictionary is : " + str(test_dict))

# Smallest K values in Dictionary
# Using sorted() + itemgetter() + items()
res = dict(sorted(test_dict.items(), key = itemgetter(1))[:K])

# printing result
print("The minimum K value pairs are " + str(res))
The original dictionary is : {'geeks': 3, 'is': 4, 'for': 7, 'best': 6, 'gfg': 1}
The minimum K value pairs are {'geeks': 3, 'gfg': 1}
Method #2: Using nsmallest(). This task can be performed using the nsmallest() function, an inbuilt function in the heapq library which internally performs this task. It has the drawback of returning just the keys, not the values (a workaround is sketched below).
# Python3 code to demonstrate working of
# Smallest K values in Dictionary
# Using nsmallest
from heapq import nsmallest

# Initialize dictionary
test_dict = {'gfg' : 1, 'is' : 4, 'best' : 6, 'for' : 7, 'geeks' : 3 }

# Initialize K
K = 2

# printing original dictionary
print("The original dictionary is : " + str(test_dict))

# Smallest K values in Dictionary
# Using nsmallest
res = nsmallest(K, test_dict, key = test_dict.get)

# printing result
print("The minimum K value pairs are " + str(res))
The original dictionary is : {'geeks': 3, 'best': 6, 'is': 4, 'gfg': 1, 'for': 7}
The minimum K value pairs are ['gfg', 'geeks']
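A possible third approach (not in the original write-up) avoids that drawback by running nsmallest() over the (key, value) pairs instead of the keys, so both keys and values are kept.
# Smallest K values in Dictionary
# Using nsmallest() on items() so that values are kept as well
from heapq import nsmallest

test_dict = {'gfg' : 1, 'is' : 4, 'best' : 6, 'for' : 7, 'geeks' : 3 }
K = 2

# nsmallest over (key, value) pairs, ordered by the value
res = dict(nsmallest(K, test_dict.items(), key = lambda kv: kv[1]))

# printing result
print("The minimum K value pairs are " + str(res))
# The minimum K value pairs are {'gfg': 1, 'geeks': 3}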
[ { "code": null, "e": 25555, "s": 25527, "text": "\n17 Dec, 2019" }, { "code": null, "e": 25822, "s": 25555, "text": "Many times while working with Python dictionary, we can have a particular problem to find the K minima of values in numerous keys. This problem is quite common while working with web development domain. Let’s discuss several ways in which this task can be performed." }, { "code": null, "e": 26041, "s": 25822, "text": "Method #1 : itemgetter() + items() + sorted()The combination of above method is used to perform this particular task. In this, we just sort the dictionary values expressed using itemgetter() and accessed using items()." }, { "code": "# Python3 code to demonstrate working of# Smallest K values in Dictionary# Using sorted() + itemgetter() + items()from operator import itemgetter # Initialize dictionarytest_dict = {'gfg' : 1, 'is' : 4, 'best' : 6, 'for' : 7, 'geeks' : 3 } # Initialize K K = 2 # printing original dictionaryprint(\"The original dictionary is : \" + str(test_dict)) # Smallest K values in Dictionary# Using sorted() + itemgetter() + items()res = dict(sorted(test_dict.items(), key = itemgetter(1))[:K]) # printing resultprint(\"The minimum K value pairs are \" + str(res))", "e": 26598, "s": 26041, "text": null }, { "code": null, "e": 26734, "s": 26598, "text": "The original dictionary is : {'geeks': 3, 'is': 4, 'for': 7, 'best': 6, 'gfg': 1}\nThe minimum K value pairs are {'geeks': 3, 'gfg': 1}\n" }, { "code": null, "e": 26988, "s": 26736, "text": "Method #2 : Using nsmallest()This task can be performed using the nsmallest function. This is inbuilt function in heapq library which internally performs this task and can be used to do it externally. Has the drawback of printing just keys not values." }, { "code": "# Python3 code to demonstrate working of# Smallest K values in Dictionary# Using nsmallestfrom heapq import nsmallest # Initialize dictionarytest_dict = {'gfg' : 1, 'is' : 4, 'best' : 6, 'for' : 7, 'geeks' : 3 } # Initialize KK = 2 # printing original dictionaryprint(\"The original dictionary is : \" + str(test_dict)) # Smallest K values in Dictionary# Using nsmallestres = nsmallest(K, test_dict, key = test_dict.get) # printing resultprint(\"The minimum K value pairs are \" + str(res))", "e": 27480, "s": 26988, "text": null }, { "code": null, "e": 27610, "s": 27480, "text": "The original dictionary is : {'geeks': 3, 'best': 6, 'is': 4, 'gfg': 1, 'for': 7}\nThe minimum K value pairs are ['gfg', 'geeks']\n" }, { "code": null, "e": 27637, "s": 27610, "text": "Python dictionary-programs" }, { "code": null, "e": 27644, "s": 27637, "text": "Python" }, { "code": null, "e": 27660, "s": 27644, "text": "Python Programs" }, { "code": null, "e": 27758, "s": 27660, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27790, "s": 27758, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 27832, "s": 27790, "text": "How To Convert Python Dictionary To JSON?" 
}, { "code": null, "e": 27874, "s": 27832, "text": "Check if element exists in list in Python" }, { "code": null, "e": 27930, "s": 27874, "text": "How to drop one or multiple columns in Pandas Dataframe" }, { "code": null, "e": 27957, "s": 27930, "text": "Python Classes and Objects" }, { "code": null, "e": 27979, "s": 27957, "text": "Defaultdict in Python" }, { "code": null, "e": 28018, "s": 27979, "text": "Python | Get dictionary keys as a list" }, { "code": null, "e": 28064, "s": 28018, "text": "Python | Split string into list of characters" }, { "code": null, "e": 28102, "s": 28064, "text": "Python | Convert a list to dictionary" } ]
D3.js | d3.entries() Function - GeeksforGeeks
03 May, 2021

Disclaimer: in version 6 of D3.js the d3.entries function was deprecated; Object.entries should be used instead.

The d3.entries function in D3.js is used to return an array containing the property names and property values of the specified object.

Syntax:
d3.entries(object)

Parameters: the function accepts a single parameter — a JavaScript object.
Return Value: it returns an array containing the property names and values of the specified object.

The programs below illustrate the d3.entries function in D3.js.

Example 1:

<!DOCTYPE html>
<html>
<head>
    <title> d3.entries() function</title>
    <script src='https://d3js.org/d3.v4.min.js'></script>
</head>
<body>
    <script>
        // Initialising an object
        var month = {"January": 1, "February": 2, "March": 3};

        // Calling the d3.entries() function
        A = d3.entries(month);

        // Getting the key and value in pairs
        console.log(A);
    </script>
</body>
</html>

Output:
[{"key":"January","value":1},{"key":"February","value":2},
 {"key":"March","value":3}]

Example 2:

<!DOCTYPE html>
<html>
<head>
    <title> d3.entries function</title>
    <script src='https://d3js.org/d3.v4.min.js'></script>
</head>
<body>
    <script>
        // Initialising an object
        var month = {"GeeksforGeeks": 0, "Geeks": 2, "Geek": 3, "gfg": 4};

        // Calling the d3.entries function
        A = d3.entries(month);

        // Getting the key and value in pairs.
        console.log(A);
    </script>
</body>
</html>

Output:
[{"key":"GeeksforGeeks","value":0},{"key":"Geeks","value":2},
 {"key":"Geek","value":3},{"key":"gfg","value":4}]

Reference: https://devdocs.io/d3~5/d3-collection#entries
[ { "code": null, "e": 25917, "s": 25889, "text": "\n03 May, 2021" }, { "code": null, "e": 26026, "s": 25917, "text": "Disclaimer: in version 6 of D3.js function d3.entries got deprecated. Object.entries should be used instead." }, { "code": null, "e": 26170, "s": 26026, "text": "The d3.entries function in D3.js is used to return an array containing the property names and property values of the specified object.Syntax: " }, { "code": null, "e": 26189, "s": 26170, "text": "d3.entries(object)" }, { "code": null, "e": 26438, "s": 26189, "text": "Parameters: the function accepts a single parameter — a JavaScript object.Return Value: It returns an array containing the property names and values of the specified object.The programs below illustrate the d3.entries function in D3.js:Example 1: " }, { "code": null, "e": 26449, "s": 26438, "text": "javascript" }, { "code": "<!DOCTYPE html><html> <head> <title> d3.entries() function</title> <script src='https://d3js.org/d3.v4.min.js'></script></head><body> <script> // Initialising an object var month = {\"January\": 1, \"February\": 2, \"March\": 3}; // Calling the d3.entries() function A = d3.entries(month); // Getting the key and value in pairs console.log(A); </script></body></html>", "e": 26883, "s": 26449, "text": null }, { "code": null, "e": 26893, "s": 26883, "text": "Output: " }, { "code": null, "e": 27018, "s": 26893, "text": "[{\"key\":\"January\",\"value\":1},{\"key\":\"February\",\"value\":2},\n {\"key\":\"March\",\"value\":3}]" }, { "code": null, "e": 27031, "s": 27018, "text": "Example 2: " }, { "code": null, "e": 27042, "s": 27031, "text": "javascript" }, { "code": "<!DOCTYPE html><html> <head> <title> d3.entries function</title> <script src='https://d3js.org/d3.v4.min.js'></script></head><body> <script> // Initialising an object var month = {\"GeeksforGeeks\": 0, \"Geeks\": 2, \"Geek\": 3, \"gfg\": 4}; // Calling the d3.entries function A = d3.entries(month); // Getting the key and value in pairs. console.log(A); </script></body></html>", "e": 27518, "s": 27042, "text": null }, { "code": null, "e": 27528, "s": 27518, "text": "Output: " }, { "code": null, "e": 27661, "s": 27528, "text": "[{\"key\":\"GeeksforGeeks\",\"value\":0},{\"key\":\"Geeks\",\"value\":2},\n {\"key\":\"Geek\",\"value\":3},{\"key\":\"gfg\",\"value\":4}]" }, { "code": null, "e": 27719, "s": 27661, "text": "Reference: https://devdocs.io/d3~5/d3-collection#entries " }, { "code": null, "e": 27729, "s": 27719, "text": "anfauglit" }, { "code": null, "e": 27735, "s": 27729, "text": "D3.js" }, { "code": null, "e": 27752, "s": 27735, "text": "Web Technologies" }, { "code": null, "e": 27850, "s": 27752, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27890, "s": 27850, "text": "Remove elements from a JavaScript Array" }, { "code": null, "e": 27935, "s": 27890, "text": "Convert a string to an integer in JavaScript" }, { "code": null, "e": 27996, "s": 27935, "text": "Difference between var, let and const keywords in JavaScript" }, { "code": null, "e": 28054, "s": 27996, "text": "How to create footer to stay at the bottom of a Web page?" }, { "code": null, "e": 28097, "s": 28054, "text": "How to fetch data from an API in ReactJS ?" }, { "code": null, "e": 28127, "s": 28097, "text": "What is REST API in Node.js ?" 
}, { "code": null, "e": 28160, "s": 28127, "text": "Node.js fs.readFileSync() Method" }, { "code": null, "e": 28211, "s": 28160, "text": "How to remove underline for anchors tag using CSS?" }, { "code": null, "e": 28271, "s": 28211, "text": "How to set the default value for an HTML <select> element ?" } ]
Choropleth Maps using Plotly in Python - GeeksforGeeks
05 Nov, 2021

Plotly is a Python library that is very popular among data scientists for creating interactive data visualizations. One of the visualizations available in Plotly is the choropleth map. Choropleth maps plot maps with shaded or patterned areas that are proportional to a statistical variable. They are composed of colored polygons and are used for representing spatial variations of a quantity.

To create them, we require two main types of inputs:

1. Geometric information – this can be a GeoJSON file (where each feature has an id or some identifying value in properties), or it can be one of Plotly's built-in geometries (US states and world countries).
2. A list of values with the feature identifier as index.

Syntax: plotly.express.choropleth(data_frame=None, lat=None, lon=None, locations=None, locationmode=None, geojson=None, color=None, scope=None, center=None, title=None, width=None, height=None)

Parameters:
lat – this value is used to position marks according to latitude on a map
lon – this value is used to position marks according to longitude on a map
locations – this value is interpreted according to locationmode and mapped to longitude/latitude
locationmode – one of 'ISO-3', 'USA-states', or 'country names'; this determines the set of locations used to match entries in locations to regions on the map
geojson – contains a Polygon feature collection, with IDs, which are referenced from locations
color – used to assign color to marks
scope – possible values are 'world', 'usa', 'europe', 'asia', 'africa', 'north america', or 'south america'. Default is 'world' unless projection is set to 'albers usa', which forces 'usa'
center – sets the center point of the map

Example:

# code for creating choropleth map of USA states
# import plotly library
import plotly

# import plotly.express module
# this module is used to create entire figures at once
import plotly.express as px

# create figure
fig = px.choropleth(locationmode="USA-states", color=[1], scope="usa")

fig.show()

Output:

A choropleth map can be used to highlight or depict specific areas. The implementation of such functionality is given below.

Example:

# code for representing states of USA
# pass list of states in locations
# list will have two-letter abbreviations of states
fig = px.choropleth(locations=["CA", "TX", "NY"],
                    locationmode="USA-states",
                    color=[1, 2, 3],
                    scope="usa")

fig.show()

Output:

In this example, we will take a dataset of US states and create a choropleth map for US agriculture exports in 2011.

Dataset Link – Click here

Example:

# import libraries
import pandas as pd
import plotly.express as px

# import data
data = pd.read_csv('2011_us_ag_exports.csv')

# create choropleth map for the data
# color will be the column to be color-coded
# locations is the column with spatial coordinates
fig = px.choropleth(data, locations='code',
                    locationmode="USA-states",
                    color='total exports',
                    scope="usa")

fig.show()

Output:
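All of the examples above rely on Plotly's built-in 'USA-states' geometries. For the GeoJSON input path mentioned in the introduction, a sketch along the following lines could be used; the file name, the feature ids and the column names here are illustrative assumptions, not part of the original article.

# Hypothetical sketch (assumed file and column names) of the GeoJSON input
# path of px.choropleth described above.
import json
import pandas as pd
import plotly.express as px

# load a GeoJSON feature collection; each feature is assumed to carry
# its identifier in feature["id"]
with open("regions.geojson") as f:
    regions = json.load(f)

# one value per region, keyed by the same identifier used in the GeoJSON
df = pd.DataFrame({"region_id": ["A", "B", "C"],
                   "value": [10, 20, 30]})

fig = px.choropleth(df,
                    geojson=regions,        # geometric information
                    locations="region_id",  # matched against feature ids
                    color="value")          # column to color-code
fig.show()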
[ { "code": null, "e": 25537, "s": 25509, "text": "\n05 Nov, 2021" }, { "code": null, "e": 25940, "s": 25537, "text": "Plotly is a Python library that is very popular among data scientists to create interactive data visualizations. One of the visualizations available in Plotly is Choropleth Maps. Choropleth maps are used to plot maps with shaded or patterned areas which are proportional to a statistical variable. They are composed of colored polygons. They are used for representing spatial variations of a quantity." }, { "code": null, "e": 26008, "s": 25940, "text": "To create them, we require two main types of inputs – " }, { "code": null, "e": 26204, "s": 26008, "text": "Geometric information –this can be a GeoJSON file (here each feature has an id or some identifying value in properties, orthis can be built-in geometries of plotly – US states and world countries" }, { "code": null, "e": 26304, "s": 26204, "text": "this can be a GeoJSON file (here each feature has an id or some identifying value in properties, or" }, { "code": null, "e": 26378, "s": 26304, "text": "this can be built-in geometries of plotly – US states and world countries" }, { "code": null, "e": 26428, "s": 26378, "text": "A list of values with feature identifier as index" }, { "code": null, "e": 26624, "s": 26428, "text": "Syntax – plotly.express.choropleth((data_frame=None, lat=None, lon=None, locations=None, locationmode=None, geojson=None, color=None, scope=None, center=None, title=None, width=None, height=None)" }, { "code": null, "e": 26636, "s": 26624, "text": "Parameters:" }, { "code": null, "e": 26710, "s": 26636, "text": "lat = this value is used to position marks according to latitude on a map" }, { "code": null, "e": 26786, "s": 26710, "text": "long = this value is used to position marks according to longitude on a map" }, { "code": null, "e": 26884, "s": 26786, "text": "locations = this value is interpreted according to locationmode and mapped to longitude/latitude." }, { "code": null, "e": 27044, "s": 26884, "text": "locationmode = one of ‘ISO-3’, ‘USA-states’, or ‘country names’. this determines the set of locations used to match entries in locations to regions on the map." }, { "code": null, "e": 27139, "s": 27044, "text": "geojson = contains a Polygon feature collection, with IDs, which are references from locations" }, { "code": null, "e": 27177, "s": 27139, "text": "color = used to assign color to marks" }, { "code": null, "e": 27364, "s": 27177, "text": "scope = possible values – ‘world’, ‘usa’, ‘europe’, ‘asia’, ‘africa’, ‘north america’, or ‘south america’`Default is `’world’ unless projection is set to ‘albers usa’, which forces ‘usa’" }, { "code": null, "e": 27406, "s": 27364, "text": "center = sets the center point of the map" }, { "code": null, "e": 27415, "s": 27406, "text": "Example:" }, { "code": null, "e": 27423, "s": 27415, "text": "Python3" }, { "code": "# code for creating choropleth map of USA states# import plotly libraryimport plotly # import plotly.express module# this module is used to create entire figures at onceimport plotly.express as px # create figurefig = px.choropleth(locationmode=\"USA-states\", color=[1], scope=\"usa\") fig.show()", "e": 27717, "s": 27423, "text": null }, { "code": null, "e": 27725, "s": 27717, "text": "Output:" }, { "code": null, "e": 27860, "s": 27725, "text": "A choropleth map can be used to highlight or depict specific areas. The implementation of achieving such functionality is given below." 
}, { "code": null, "e": 27869, "s": 27860, "text": "Example:" }, { "code": null, "e": 27877, "s": 27869, "text": "Python3" }, { "code": "#code for representing states of USA#pass list of states in locations#list will have two-letter abbreviations of statesfig = px.choropleth(locations=[\"CA\",\"TX\",\"NY\"], locationmode=\"USA-states\", color=[1,2,3], scope=\"usa\") fig.show()", "e": 28110, "s": 27877, "text": null }, { "code": null, "e": 28118, "s": 28110, "text": "Output:" }, { "code": null, "e": 28242, "s": 28118, "text": "In this example, we will take a dataset of US-states and create a choropleth map for US Agriculture Exports by USA in 2011." }, { "code": null, "e": 28268, "s": 28242, "text": "Dataset Link – Click here" }, { "code": null, "e": 28277, "s": 28268, "text": "Example:" }, { "code": null, "e": 28285, "s": 28277, "text": "Python3" }, { "code": "#import librariesimport pandas as pdimport plotly.express as px #import datadata = pd.read_csv('2011_us_ag_exports.csv') # create choropleth map for the data# color will be the column to be color-coded# locations is the column with sppatial coordinatesfig = px.choropleth(data, locations='code', locationmode=\"USA-states\", color='total exports', scope=\"usa\") fig.show()", "e": 28674, "s": 28285, "text": null }, { "code": null, "e": 28682, "s": 28674, "text": "Output:" }, { "code": null, "e": 28698, "s": 28682, "text": "rajeev0719singh" }, { "code": null, "e": 28705, "s": 28698, "text": "Picked" }, { "code": null, "e": 28719, "s": 28705, "text": "Python-Plotly" }, { "code": null, "e": 28726, "s": 28719, "text": "Python" }, { "code": null, "e": 28824, "s": 28726, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 28856, "s": 28824, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 28898, "s": 28856, "text": "Check if element exists in list in Python" }, { "code": null, "e": 28940, "s": 28898, "text": "How To Convert Python Dictionary To JSON?" }, { "code": null, "e": 28967, "s": 28940, "text": "Python Classes and Objects" }, { "code": null, "e": 29023, "s": 28967, "text": "How to drop one or multiple columns in Pandas Dataframe" }, { "code": null, "e": 29045, "s": 29023, "text": "Defaultdict in Python" }, { "code": null, "e": 29084, "s": 29045, "text": "Python | Get unique values from a list" }, { "code": null, "e": 29115, "s": 29084, "text": "Python | os.path.join() method" }, { "code": null, "e": 29144, "s": 29115, "text": "Create a directory in Python" } ]
Check if Array forms an increasing-decreasing sequence or vice versa - GeeksforGeeks
01 Apr, 2022

Given an array arr[] of N integers, the task is to find if the array can be divided into 2 sub-arrays such that the first sub-array is strictly increasing and the second sub-array is strictly decreasing, or vice versa. If the given array can be divided then print "Yes", else print "No".

Examples:

Input: arr[] = {3, 1, -2, -2, -1, 3}
Output: Yes
Explanation: The first sub-array {3, 1, -2} is strictly decreasing and the second sub-array {-2, -1, 3} is strictly increasing.

Input: arr[] = {1, 1, 2, 3, 4, 5}
Output: No
Explanation: The entire array is increasing.

Naive Approach: The naive idea is to divide the array into two subarrays at every possible index and explicitly check if the first subarray is strictly increasing and the second subarray is strictly decreasing, or vice-versa. If any such split exists then print "Yes", else print "No".
Time Complexity: O(N^2)
Auxiliary Space: O(1)

Efficient Approach: To optimize the above approach, traverse the array once and check for the strictly increasing prefix followed by the strictly decreasing suffix, or vice-versa. Below are the steps:

1. If arr[1] > arr[0], then check for strictly increasing then strictly decreasing as:
   - Check every consecutive pair until, at some index i, arr[i + 1] is less than arr[i].
   - Now from index i + 1, for every consecutive pair, check whether arr[i + 1] is less than arr[i] till the end of the array. If at any index i, arr[i] is less than arr[i + 1], then break the loop.
   - If we reach the end in the above step then print "Yes", else print "No".
2. If arr[1] < arr[0], then check for strictly decreasing then strictly increasing as:
   - Check every consecutive pair until, at some index i, arr[i + 1] is greater than arr[i].
   - Now from index i + 1, for every consecutive pair, check whether arr[i + 1] is greater than arr[i] till the end of the array. If at any index i, arr[i] is greater than arr[i + 1], then break the loop.
If we reach the end in the above step then print “Yes” Else print “No”. Below is the implementation of above approach: C++ Java Python3 C# Javascript // C++ program for the above approach#include <bits/stdc++.h>using namespace std; // Function to check if the given array// forms an increasing decreasing// sequence or vice versabool canMake(int n, int ar[]){ // Base Case if (n == 1) return true; else { // First subarray is // strictly increasing if (ar[0] < ar[1]) { int i = 1; // Check for strictly // increasing condition // & find the break point while (i < n && ar[i - 1] < ar[i]) { i++; } // Check for strictly // decreasing condition // & find the break point while (i + 1 < n && ar[i] > ar[i + 1]) { i++; } // If i is equal to // length of array if (i >= n - 1) return true; else return false; } // First subarray is // strictly Decreasing else if (ar[0] > ar[1]) { int i = 1; // Check for strictly // increasing condition // & find the break point while (i < n && ar[i - 1] > ar[i]) { i++; } // Check for strictly // increasing condition // & find the break point while (i + 1 < n && ar[i] < ar[i + 1]) { i++; } // If i is equal to // length of array - 1 if (i >= n - 1) return true; else return false; } // Condition if ar[0] == ar[1] else { for (int i = 2; i < n; i++) { if (ar[i - 1] <= ar[i]) return false; } return true; } }} // Driver Codeint main(){ // Given array arr[] int arr[] = { 1, 2, 3, 4, 5 }; int n = sizeof arr / sizeof arr[0]; // Function Call if (canMake(n, arr)) { cout << "Yes"; } else { cout << "No"; } return 0;} // Java program for the above approachimport java.util.*;class GFG{ // Function to check if the given array// forms an increasing decreasing// sequence or vice versastatic boolean canMake(int n, int ar[]){ // Base Case if (n == 1) return true; else { // First subarray is // strictly increasing if (ar[0] < ar[1]) { int i = 1; // Check for strictly // increasing condition // & find the break point while (i < n && ar[i - 1] < ar[i]) { i++; } // Check for strictly // decreasing condition // & find the break point while (i + 1 < n && ar[i] > ar[i + 1]) { i++; } // If i is equal to // length of array if (i >= n - 1) return true; else return false; } // First subarray is // strictly Decreasing else if (ar[0] > ar[1]) { int i = 1; // Check for strictly // increasing condition // & find the break point while (i < n && ar[i - 1] > ar[i]) { i++; } // Check for strictly // increasing condition // & find the break point while (i + 1 < n && ar[i] < ar[i + 1]) { i++; } // If i is equal to // length of array - 1 if (i >= n - 1) return true; else return false; } // Condition if ar[0] == ar[1] else { for (int i = 2; i < n; i++) { if (ar[i - 1] <= ar[i]) return false; } return true; } }} // Driver Codepublic static void main(String[] args){ // Given array arr[] int arr[] = { 1, 2, 3, 4, 5 }; int n = arr.length; // Function Call if (!canMake(n, arr)) { System.out.print("Yes"); } else { System.out.print("No"); }}} // This code is contributed by Rohit_ranjan # Python3 program for the above approach # Function to check if the given array# forms an increasing decreasing# sequence or vice versadef canMake(n, ar): # Base Case if (n == 1): return True; else: # First subarray is # strictly increasing if (ar[0] < ar[1]): i = 1; # Check for strictly # increasing condition # & find the break point while (i < n and ar[i - 1] < ar[i]): i += 1; # Check for strictly # decreasing condition # & find the break point while (i + 1 < n and ar[i] > ar[i + 1]): i += 1; # If i is equal to # length of array if (i >= 
n - 1): return True; else: return False; # First subarray is # strictly Decreasing elif (ar[0] > ar[1]): i = 1; # Check for strictly # increasing condition # & find the break point while (i < n and ar[i - 1] > ar[i]): i += 1; # Check for strictly # increasing condition # & find the break point while (i + 1 < n and ar[i] < ar[i + 1]): i += 1; # If i is equal to # length of array - 1 if (i >= n - 1): return True; else: return False; # Condition if ar[0] == ar[1] else: for i in range(2, n): if (ar[i - 1] <= ar[i]): return False; return True; # Driver Code # Given array arrarr = [1, 2, 3, 4, 5];n = len(arr); # Function Callif (canMake(n, arr)==False): print("Yes");else: print("No"); # This code is contributed by PrinciRaj1992 // C# program for the above approachusing System;class GFG{ // Function to check if the given array// forms an increasing decreasing// sequence or vice versastatic bool canMake(int n, int []ar){ // Base Case if (n == 1) return true; else { // First subarray is // strictly increasing if (ar[0] < ar[1]) { int i = 1; // Check for strictly // increasing condition // & find the break point while (i < n && ar[i - 1] < ar[i]) { i++; } // Check for strictly // decreasing condition // & find the break point while (i + 1 < n && ar[i] > ar[i + 1]) { i++; } // If i is equal to // length of array if (i >= n - 1) return true; else return false; } // First subarray is // strictly Decreasing else if (ar[0] > ar[1]) { int i = 1; // Check for strictly // increasing condition // & find the break point while (i < n && ar[i - 1] > ar[i]) { i++; } // Check for strictly // increasing condition // & find the break point while (i + 1 < n && ar[i] < ar[i + 1]) { i++; } // If i is equal to // length of array - 1 if (i >= n - 1) return true; else return false; } // Condition if ar[0] == ar[1] else { for (int i = 2; i < n; i++) { if (ar[i - 1] <= ar[i]) return false; } return true; } }} // Driver Codepublic static void Main(String[] args){ // Given array []arr int []arr = { 1, 2, 3, 4, 5 }; int n = arr.Length; // Function Call if (!canMake(n, arr)) { Console.Write("Yes"); } else { Console.Write("No"); }}} // This code is contributed by Rajput-Ji <script> // Javascript program for the above approach // Function to check if the given array// forms an increasing decreasing// sequence or vice versafunction canMake(n, ar){ // Base Case if (n == 1) return true; else { // First subarray is // strictly increasing if (ar[0] < ar[1]) { let i = 1; // Check for strictly // increasing condition // & find the break point while (i < n && ar[i - 1] < ar[i]) { i++; } // Check for strictly // decreasing condition // & find the break point while (i + 1 < n && ar[i] > ar[i + 1]) { i++; } // If i is equal to // length of array if (i >= n - 1) return true; else return false; } // First subarray is // strictly Decreasing else if (ar[0] > ar[1]) { let i = 1; // Check for strictly // increasing condition // & find the break point while (i < n && ar[i - 1] > ar[i]) { i++; } // Check for strictly // increasing condition // & find the break point while (i + 1 < n && ar[i] < ar[i + 1]) { i++; } // If i is equal to // length of array - 1 if (i >= n - 1) return true; else return false; } // Condition if ar[0] == ar[1] else { for(let i = 2; i < n; i++) { if (ar[i - 1] <= ar[i]) return false; } return true; } }} // Driver Code // Given array arr[]let arr = [ 1, 2, 3, 4, 5 ];let n = arr.length; // Function Callif (!canMake(n, arr)){ document.write("Yes");}else{ document.write("No");} // This code is contributed by sravan kumar 
</script>

Output:
No

Time Complexity: O(N)
Auxiliary Space: O(1)
[ { "code": null, "e": 26791, "s": 26763, "text": "\n01 Apr, 2022" }, { "code": null, "e": 27076, "s": 26791, "text": "Given an array arr[] of N integers, the task is to find if the array can be divided into 2 sub-array such that the first sub-array is strictly increasing and the second sub-array is strictly decreasing or vice versa. If the given array can be divided then print “Yes” else print “No”." }, { "code": null, "e": 27087, "s": 27076, "text": "Examples: " }, { "code": null, "e": 27264, "s": 27087, "text": "Input: arr[] = {3, 1, -2, -2, -1, 3} Output: Yes Explanation: First sub-array {3, 1, -2} which is strictly decreasing and second sub-array is {-2, 1, 3} is strictly increasing." }, { "code": null, "e": 27356, "s": 27264, "text": "Input: arr[] = {1, 1, 2, 3, 4, 5} Output: No Explanation: The entire array is increasing. " }, { "code": null, "e": 27689, "s": 27356, "text": "Naive Approach: The naive idea is to divide the array into two subarrays at every possible index and explicitly check if the first subarray is strictly increasing and the second subarray is strictly decreasing or vice-versa. If we can break any subarray then print “Yes” else print “No”. Time Complexity: O(N2) Auxiliary Space: O(1)" }, { "code": null, "e": 27895, "s": 27689, "text": "Efficient Approach: To optimize the above approach, traverse the array and check for the strictly increasing sequence and then check for strictly decreasing subsequence or vice-versa. Below are the steps: " }, { "code": null, "e": 28781, "s": 27895, "text": "If arr[1] > arr[0], then check for strictly increasing then strictly decreasing as: Check for every consecutive pair until at any index i arr[i + 1] is less than arr[i].Now from index i + 1 check for every consecutive pair check if arr[i + 1] is less than arr[i] till the end of the array or not. If at any index i, arr[i] is less than arr[i + 1] then break the loop.If we reach the end in the above step then print “Yes” Else print “No”.If arr[1] < arr[0], then check for strictly decreasing then strictly increasing as: Check for every consecutive pair until at any index i arr[i + 1] is greater than arr[i].Now from index i + 1 check for every consecutive pair check if arr[i + 1] is greater than arr[i] till the end of the array or not. If at any index i, arr[i] is greater than arr[i + 1] then break the loop.If we reach the end in the above step then print “Yes” Else print “No”." }, { "code": null, "e": 29220, "s": 28781, "text": "If arr[1] > arr[0], then check for strictly increasing then strictly decreasing as: Check for every consecutive pair until at any index i arr[i + 1] is less than arr[i].Now from index i + 1 check for every consecutive pair check if arr[i + 1] is less than arr[i] till the end of the array or not. If at any index i, arr[i] is less than arr[i + 1] then break the loop.If we reach the end in the above step then print “Yes” Else print “No”." }, { "code": null, "e": 29306, "s": 29220, "text": "Check for every consecutive pair until at any index i arr[i + 1] is less than arr[i]." }, { "code": null, "e": 29505, "s": 29306, "text": "Now from index i + 1 check for every consecutive pair check if arr[i + 1] is less than arr[i] till the end of the array or not. If at any index i, arr[i] is less than arr[i + 1] then break the loop." }, { "code": null, "e": 29577, "s": 29505, "text": "If we reach the end in the above step then print “Yes” Else print “No”." 
}, { "code": null, "e": 30025, "s": 29577, "text": "If arr[1] < arr[0], then check for strictly decreasing then strictly increasing as: Check for every consecutive pair until at any index i arr[i + 1] is greater than arr[i].Now from index i + 1 check for every consecutive pair check if arr[i + 1] is greater than arr[i] till the end of the array or not. If at any index i, arr[i] is greater than arr[i + 1] then break the loop.If we reach the end in the above step then print “Yes” Else print “No”." }, { "code": null, "e": 30114, "s": 30025, "text": "Check for every consecutive pair until at any index i arr[i + 1] is greater than arr[i]." }, { "code": null, "e": 30319, "s": 30114, "text": "Now from index i + 1 check for every consecutive pair check if arr[i + 1] is greater than arr[i] till the end of the array or not. If at any index i, arr[i] is greater than arr[i + 1] then break the loop." }, { "code": null, "e": 30391, "s": 30319, "text": "If we reach the end in the above step then print “Yes” Else print “No”." }, { "code": null, "e": 30439, "s": 30391, "text": "Below is the implementation of above approach: " }, { "code": null, "e": 30443, "s": 30439, "text": "C++" }, { "code": null, "e": 30448, "s": 30443, "text": "Java" }, { "code": null, "e": 30456, "s": 30448, "text": "Python3" }, { "code": null, "e": 30459, "s": 30456, "text": "C#" }, { "code": null, "e": 30470, "s": 30459, "text": "Javascript" }, { "code": "// C++ program for the above approach#include <bits/stdc++.h>using namespace std; // Function to check if the given array// forms an increasing decreasing// sequence or vice versabool canMake(int n, int ar[]){ // Base Case if (n == 1) return true; else { // First subarray is // strictly increasing if (ar[0] < ar[1]) { int i = 1; // Check for strictly // increasing condition // & find the break point while (i < n && ar[i - 1] < ar[i]) { i++; } // Check for strictly // decreasing condition // & find the break point while (i + 1 < n && ar[i] > ar[i + 1]) { i++; } // If i is equal to // length of array if (i >= n - 1) return true; else return false; } // First subarray is // strictly Decreasing else if (ar[0] > ar[1]) { int i = 1; // Check for strictly // increasing condition // & find the break point while (i < n && ar[i - 1] > ar[i]) { i++; } // Check for strictly // increasing condition // & find the break point while (i + 1 < n && ar[i] < ar[i + 1]) { i++; } // If i is equal to // length of array - 1 if (i >= n - 1) return true; else return false; } // Condition if ar[0] == ar[1] else { for (int i = 2; i < n; i++) { if (ar[i - 1] <= ar[i]) return false; } return true; } }} // Driver Codeint main(){ // Given array arr[] int arr[] = { 1, 2, 3, 4, 5 }; int n = sizeof arr / sizeof arr[0]; // Function Call if (canMake(n, arr)) { cout << \"Yes\"; } else { cout << \"No\"; } return 0;}", "e": 32605, "s": 30470, "text": null }, { "code": "// Java program for the above approachimport java.util.*;class GFG{ // Function to check if the given array// forms an increasing decreasing// sequence or vice versastatic boolean canMake(int n, int ar[]){ // Base Case if (n == 1) return true; else { // First subarray is // strictly increasing if (ar[0] < ar[1]) { int i = 1; // Check for strictly // increasing condition // & find the break point while (i < n && ar[i - 1] < ar[i]) { i++; } // Check for strictly // decreasing condition // & find the break point while (i + 1 < n && ar[i] > ar[i + 1]) { i++; } // If i is equal to // length of array if (i >= n - 1) return true; else return false; } // First 
subarray is // strictly Decreasing else if (ar[0] > ar[1]) { int i = 1; // Check for strictly // increasing condition // & find the break point while (i < n && ar[i - 1] > ar[i]) { i++; } // Check for strictly // increasing condition // & find the break point while (i + 1 < n && ar[i] < ar[i + 1]) { i++; } // If i is equal to // length of array - 1 if (i >= n - 1) return true; else return false; } // Condition if ar[0] == ar[1] else { for (int i = 2; i < n; i++) { if (ar[i - 1] <= ar[i]) return false; } return true; } }} // Driver Codepublic static void main(String[] args){ // Given array arr[] int arr[] = { 1, 2, 3, 4, 5 }; int n = arr.length; // Function Call if (!canMake(n, arr)) { System.out.print(\"Yes\"); } else { System.out.print(\"No\"); }}} // This code is contributed by Rohit_ranjan", "e": 34810, "s": 32605, "text": null }, { "code": "# Python3 program for the above approach # Function to check if the given array# forms an increasing decreasing# sequence or vice versadef canMake(n, ar): # Base Case if (n == 1): return True; else: # First subarray is # strictly increasing if (ar[0] < ar[1]): i = 1; # Check for strictly # increasing condition # & find the break point while (i < n and ar[i - 1] < ar[i]): i += 1; # Check for strictly # decreasing condition # & find the break point while (i + 1 < n and ar[i] > ar[i + 1]): i += 1; # If i is equal to # length of array if (i >= n - 1): return True; else: return False; # First subarray is # strictly Decreasing elif (ar[0] > ar[1]): i = 1; # Check for strictly # increasing condition # & find the break point while (i < n and ar[i - 1] > ar[i]): i += 1; # Check for strictly # increasing condition # & find the break point while (i + 1 < n and ar[i] < ar[i + 1]): i += 1; # If i is equal to # length of array - 1 if (i >= n - 1): return True; else: return False; # Condition if ar[0] == ar[1] else: for i in range(2, n): if (ar[i - 1] <= ar[i]): return False; return True; # Driver Code # Given array arrarr = [1, 2, 3, 4, 5];n = len(arr); # Function Callif (canMake(n, arr)==False): print(\"Yes\");else: print(\"No\"); # This code is contributed by PrinciRaj1992", "e": 36644, "s": 34810, "text": null }, { "code": "// C# program for the above approachusing System;class GFG{ // Function to check if the given array// forms an increasing decreasing// sequence or vice versastatic bool canMake(int n, int []ar){ // Base Case if (n == 1) return true; else { // First subarray is // strictly increasing if (ar[0] < ar[1]) { int i = 1; // Check for strictly // increasing condition // & find the break point while (i < n && ar[i - 1] < ar[i]) { i++; } // Check for strictly // decreasing condition // & find the break point while (i + 1 < n && ar[i] > ar[i + 1]) { i++; } // If i is equal to // length of array if (i >= n - 1) return true; else return false; } // First subarray is // strictly Decreasing else if (ar[0] > ar[1]) { int i = 1; // Check for strictly // increasing condition // & find the break point while (i < n && ar[i - 1] > ar[i]) { i++; } // Check for strictly // increasing condition // & find the break point while (i + 1 < n && ar[i] < ar[i + 1]) { i++; } // If i is equal to // length of array - 1 if (i >= n - 1) return true; else return false; } // Condition if ar[0] == ar[1] else { for (int i = 2; i < n; i++) { if (ar[i - 1] <= ar[i]) return false; } return true; } }} // Driver Codepublic static void Main(String[] args){ // Given array []arr int []arr = { 1, 2, 3, 4, 5 }; int n = arr.Length; // Function Call if (!canMake(n, arr)) { 
Console.Write(\"Yes\"); } else { Console.Write(\"No\"); }}} // This code is contributed by Rajput-Ji", "e": 38832, "s": 36644, "text": null }, { "code": "<script> // Javascript program for the above approach // Function to check if the given array// forms an increasing decreasing// sequence or vice versafunction canMake(n, ar){ // Base Case if (n == 1) return true; else { // First subarray is // strictly increasing if (ar[0] < ar[1]) { let i = 1; // Check for strictly // increasing condition // & find the break point while (i < n && ar[i - 1] < ar[i]) { i++; } // Check for strictly // decreasing condition // & find the break point while (i + 1 < n && ar[i] > ar[i + 1]) { i++; } // If i is equal to // length of array if (i >= n - 1) return true; else return false; } // First subarray is // strictly Decreasing else if (ar[0] > ar[1]) { let i = 1; // Check for strictly // increasing condition // & find the break point while (i < n && ar[i - 1] > ar[i]) { i++; } // Check for strictly // increasing condition // & find the break point while (i + 1 < n && ar[i] < ar[i + 1]) { i++; } // If i is equal to // length of array - 1 if (i >= n - 1) return true; else return false; } // Condition if ar[0] == ar[1] else { for(let i = 2; i < n; i++) { if (ar[i - 1] <= ar[i]) return false; } return true; } }} // Driver Code // Given array arr[]let arr = [ 1, 2, 3, 4, 5 ];let n = arr.length; // Function Callif (!canMake(n, arr)){ document.write(\"Yes\");}else{ document.write(\"No\");} // This code is contributed by sravan kumar </script>", "e": 40937, "s": 38832, "text": null }, { "code": null, "e": 40940, "s": 40937, "text": "No" }, { "code": null, "e": 40987, "s": 40942, "text": "Time Complexity: O(N) Auxiliary Space: O(1) " }, { "code": null, "e": 41000, "s": 40987, "text": "Rohit_ranjan" }, { "code": null, "e": 41010, "s": 41000, "text": "Rajput-Ji" }, { "code": null, "e": 41024, "s": 41010, "text": "princiraj1992" }, { "code": null, "e": 41040, "s": 41024, "text": "sravankumar8128" }, { "code": null, "e": 41056, "s": 41040, "text": "simranarora5sos" }, { "code": null, "e": 41085, "s": 41056, "text": "Algorithms-Greedy Algorithms" }, { "code": null, "e": 41103, "s": 41085, "text": "Greedy Algorithms" }, { "code": null, "e": 41110, "s": 41103, "text": "Arrays" }, { "code": null, "e": 41117, "s": 41110, "text": "Greedy" }, { "code": null, "e": 41124, "s": 41117, "text": "Arrays" }, { "code": null, "e": 41131, "s": 41124, "text": "Greedy" }, { "code": null, "e": 41229, "s": 41131, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 41297, "s": 41229, "text": "Maximum and minimum of an array using minimum number of comparisons" }, { "code": null, "e": 41341, "s": 41297, "text": "Top 50 Array Coding Problems for Interviews" }, { "code": null, "e": 41389, "s": 41341, "text": "Stack Data Structure (Introduction and Program)" }, { "code": null, "e": 41412, "s": 41389, "text": "Introduction to Arrays" }, { "code": null, "e": 41444, "s": 41412, "text": "Multidimensional Arrays in Java" }, { "code": null, "e": 41495, "s": 41444, "text": "Dijkstra's shortest path algorithm | Greedy Algo-7" }, { "code": null, "e": 41546, "s": 41495, "text": "Prim’s Minimum Spanning Tree (MST) | Greedy Algo-5" }, { "code": null, "e": 41604, "s": 41546, "text": "Kruskal’s Minimum Spanning Tree Algorithm | Greedy Algo-2" }, { "code": null, "e": 41664, "s": 41604, "text": "Write a program to print all permutations of a given string" } ]
How to randomly select rows of an array in Python with NumPy ? - GeeksforGeeks
25 Feb, 2021

In this article, we will see different methods to randomly select rows of an array in Python with NumPy.

Method 1: We will be using the function shuffle(). The shuffle() function shuffles the rows of an array randomly, after which we display a random row of the 2D array.

# import modules
import random
import numpy as np

# create 2D array
data = np.arange(50).reshape((5, 10))

# display original array
print("Array:")
print(data)

# row manipulation
np.random.shuffle(data)

# display random rows
print("\nRandom row:")
rows = data[:1, :]
print(rows)

Output:

Method 2: First create an array, then use random.sample() to draw a random row index and display that row.

# import modules
import random
import numpy as np

# create 2D array
data = np.arange(50).reshape((5, 10))

# display original array
print("Array:")
print(data)

# row manipulation: sample one valid row index
# (data.shape[0] is the number of rows)
rows_id = random.sample(range(0, data.shape[0]), 1)

# display random rows
print("\nRandom row:")
row = data[rows_id, :]
print(row)

Output:

Method 3: We will be using the function choice(). The choice() function returns randomly selected elements; here it draws one row index without replacement.

# import modules
import random
import numpy as np

# create 2D array
data = np.arange(50).reshape((5, 10))

# display original array
print("Array:")
print(data)

# row manipulation
number_of_rows = data.shape[0]
random_indices = np.random.choice(number_of_rows, size=1, replace=False)

# display random rows
print("\nRandom row:")
row = data[random_indices, :]
print(row)

Output:
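As a side note that is not part of the original article, newer NumPy versions expose the same row sampling through the Generator API; the sketch below assumes the default generator and mirrors Method 3.

# Illustrative sketch (not from the original article): row sampling with
# NumPy's Generator API instead of the legacy np.random functions.
import numpy as np

data = np.arange(50).reshape((5, 10))

rng = np.random.default_rng()                             # create a Generator
idx = rng.choice(data.shape[0], size=1, replace=False)    # pick one row index

print("Random row:")
print(data[idx, :])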
[ { "code": null, "e": 25537, "s": 25509, "text": "\n25 Feb, 2021" }, { "code": null, "e": 25730, "s": 25537, "text": " In this article, we will see two different methods on how to randomly select rows of an array in Python with NumPy. Let’s see different methods by which we can select random rows of an array:" }, { "code": null, "e": 25898, "s": 25730, "text": "Method 1: We will be using the function shuffle(). The shuffle() function shuffles the rows of an array randomly and then we will display a random row of the 2D array." }, { "code": null, "e": 25906, "s": 25898, "text": "Python3" }, { "code": "# import modulesimport randomimport numpy as np # create 2D arraydata = np.arange(50).reshape((5, 10)) # display original arrayprint(\"Array:\")print(data) # row manipulationnp.random.shuffle(data) # display random rowsprint(\"\\nRandom row:\")rows = data[:1, :]print(rows)", "e": 26179, "s": 25906, "text": null }, { "code": null, "e": 26187, "s": 26179, "text": "Output:" }, { "code": null, "e": 26283, "s": 26187, "text": "Method 2: First create an array, then apply the sample() method to it and display a single row." }, { "code": null, "e": 26291, "s": 26283, "text": "Python3" }, { "code": "# import modulesimport randomimport numpy as np # create 2D arraydata = np.arange(50).reshape((5, 10)) # display original arrayprint(\"Array:\")print(data) # row manipulationrows_id = random.sample(range(0, data.shape[1]-1), 1) # display random rowsprint(\"\\nRandom row:\")row = data[rows_id, :]print(row)", "e": 26627, "s": 26291, "text": null }, { "code": null, "e": 26635, "s": 26627, "text": "Output:" }, { "code": null, "e": 26772, "s": 26635, "text": "Method 3: We will be using the function choice(). The choices() method returns multiple random elements from the list with replacement. " }, { "code": null, "e": 26849, "s": 26772, "text": "Now lets, select rows from the list of random integers that we have created." }, { "code": null, "e": 26857, "s": 26849, "text": "Python3" }, { "code": "# import modulesimport randomimport numpy as np # create 2D arraydata = np.arange(50).reshape((5, 10)) # display original arrayprint(\"Array:\")print(data) # row manipulationnumber_of_rows = data.shape[0]random_indices = np.random.choice(number_of_rows, size=1, replace=False) # display random rowsprint(\"\\nRandom row:\")row = data[random_indices, :]print(row)", "e": 27287, "s": 26857, "text": null }, { "code": null, "e": 27295, "s": 27287, "text": "Output:" }, { "code": null, "e": 27302, "s": 27295, "text": "Picked" }, { "code": null, "e": 27323, "s": 27302, "text": "Python numpy-program" }, { "code": null, "e": 27343, "s": 27323, "text": "Python numpy-Random" }, { "code": null, "e": 27356, "s": 27343, "text": "Python-numpy" }, { "code": null, "e": 27363, "s": 27356, "text": "Python" }, { "code": null, "e": 27461, "s": 27363, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27493, "s": 27461, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 27535, "s": 27493, "text": "Check if element exists in list in Python" }, { "code": null, "e": 27577, "s": 27535, "text": "How To Convert Python Dictionary To JSON?" 
}, { "code": null, "e": 27604, "s": 27577, "text": "Python Classes and Objects" }, { "code": null, "e": 27660, "s": 27604, "text": "How to drop one or multiple columns in Pandas Dataframe" }, { "code": null, "e": 27699, "s": 27660, "text": "Python | Get unique values from a list" }, { "code": null, "e": 27721, "s": 27699, "text": "Defaultdict in Python" }, { "code": null, "e": 27752, "s": 27721, "text": "Python | os.path.join() method" }, { "code": null, "e": 27781, "s": 27752, "text": "Create a directory in Python" } ]
Koko Eating Bananas - GeeksforGeeks
27 Oct, 2021 Given N piles of bananas, the ith pile has piles[i] bananas and H hours time until guards return (N < H). Find the minimum (S) bananas to eat per hour such that Koko can eat all the bananas within H hours. Each hour, Koko chooses some pile of bananas and eats S bananas from that pile. If the pile has less than S bananas, then she consumes all of them, and won’t eat any more bananas during that hour. Examples: Input: piles = [3, 6, 7, 11], H = 8Output: 4Explanation: Koko will eat 4 bananas per hour to finish all the bananas Input: piles = [30, 11, 23, 4, 20], H = 6Output: 23Explanation: Koko will eat 23 bananas per hour to finish all the bananas Naive Approach: Koko must eat at least one banana per hour. Let lower bound be start. The maximum number of bananas Koko can eat in one hour is the maximum number of bananas from all piles. This is the maximum possible value of S. Let upper bound ends. Having search interval from start to end and using linear search, for every value of S, it can be checked if this speed of eating bananas is valid or not. The first valid value of S will be the slowest speed and the desired answer. Time Complexity: O(N * W), where W is maximum bananas from all piles Approach: Given problem can be solved efficiently by using binary search on answer technique: Create a boolean function to check if the chosen speed (bananas/hour) is enough to eat all bananas within given H hours time or not Lower limit of S is 1 banana/hr as Koko must eat one banana per hour, and Upper limit is the maximum bananas of all piles Apply binary search on the possible answer range to get minimum value of SIf the boolean function satisfies the mid value reduce higher to midElse update lower limit to mid + 1 If the boolean function satisfies the mid value reduce higher to mid Else update lower limit to mid + 1 C++ Java Python3 C# Javascript // C++ implementation for the above approach #include <bits/stdc++.h>using namespace std; bool check(vector<int>& bananas, int mid_val, int H){ int time = 0; for (int i = 0; i < bananas.size(); i++) { // to get the ceil value if (bananas[i] % mid_val != 0) { // in case of odd number time += ((bananas[i] / mid_val) + 1); } else { // in case of even number time += (bananas[i] / mid_val); } } // check if time is less than // or equals to given hour if (time <= H) { return true; } else { return false; }} int minEatingSpeed(vector<int>& piles, int H){ // as minimum speed of eating must be 1 int start = 1; // Maximum speed of eating // is the maximum bananas in given piles int end = *max_element(piles.begin(), piles.end()); while (start < end) { int mid = start + (end - start) / 2; // Check if the mid(hours) is valid if ((check(piles, mid, H)) == true) { // If valid continue to search // lower speed end = mid; } else { // If cant finish bananas in given // hours, then increase the speed start = mid + 1; } } return end;} // Driver codeint main(){ vector<int> piles = { 30, 11, 23, 4, 20 }; int H = 6; cout << minEatingSpeed(piles, H); return 0;} // Java implementation for the above approach import java.util.*; class GFG{ static boolean check(int[] bananas, int mid_val, int H){ int time = 0; for (int i = 0; i < bananas.length; i++) { // to get the ceil value if (bananas[i] % mid_val != 0) { // in case of odd number time += ((bananas[i] / mid_val) + 1); } else { // in case of even number time += (bananas[i] / mid_val); } } // check if time is less than // or equals to given hour if (time <= H) { return true; } else { return false; }} static int 
minEatingSpeed(int []piles, int H){ // as minimum speed of eating must be 1 int start = 1; // Maximum speed of eating // is the maximum bananas in given piles int end = Arrays.stream(piles).max().getAsInt(); while (start < end) { int mid = start + (end - start) / 2; // Check if the mid(hours) is valid if ((check(piles, mid, H)) == true) { // If valid continue to search // lower speed end = mid; } else { // If cant finish bananas in given // hours, then increase the speed start = mid + 1; } } return end;} // Driver codepublic static void main(String[] args){ int []piles = { 30, 11, 23, 4, 20 }; int H = 6; System.out.print(minEatingSpeed(piles, H));}} // This code is contributed by 29AjayKumar # Python implementation for the above approachdef check(bananas, mid_val, H): time = 0; for i in range(len(bananas)): # to get the ceil value if (bananas[i] % mid_val != 0): # in case of odd number time += bananas[i] // mid_val + 1; else: # in case of even number time += bananas[i] // mid_val # check if time is less than # or equals to given hour if (time <= H): return True; else: return False; def minEatingSpeed(piles, H): # as minimum speed of eating must be 1 start = 1; # Maximum speed of eating # is the maximum bananas in given piles end = sorted(piles.copy(), reverse=True)[0] while (start < end): mid = start + (end - start) // 2; # Check if the mid(hours) is valid if (check(piles, mid, H) == True): # If valid continue to search # lower speed end = mid; else: # If cant finish bananas in given # hours, then increase the speed start = mid + 1; return end; # Driver codepiles = [30, 11, 23, 4, 20];H = 6;print(minEatingSpeed(piles, H)); # This code is contributed by gfgking. // C# implementation for the above approachusing System;using System.Linq;public class GFG{ static bool check(int[] bananas, int mid_val, int H){ int time = 0; for (int i = 0; i < bananas.Length; i++) { // to get the ceil value if (bananas[i] % mid_val != 0) { // in case of odd number time += ((bananas[i] / mid_val) + 1); } else { // in case of even number time += (bananas[i] / mid_val); } } // check if time is less than // or equals to given hour if (time <= H) { return true; } else { return false; }} static int minEatingSpeed(int []piles, int H){ // as minimum speed of eating must be 1 int start = 1; // Maximum speed of eating // is the maximum bananas in given piles int end = piles.Max(); while (start < end) { int mid = start + (end - start) / 2; // Check if the mid(hours) is valid if ((check(piles, mid, H)) == true) { // If valid continue to search // lower speed end = mid; } else { // If cant finish bananas in given // hours, then increase the speed start = mid + 1; } } return end;} // Driver codepublic static void Main(String[] args){ int []piles = { 30, 11, 23, 4, 20 }; int H = 6; Console.Write(minEatingSpeed(piles, H));}} // This code is contributed by shikhasingrajput <script>// Javascript implementation for the above approach function check(bananas, mid_val, H) { let time = 0; for (let i = 0; i < bananas.length; i++) { // to get the ceil value if (bananas[i] % mid_val != 0) { // in case of odd number time += Math.floor(bananas[i] / mid_val) + 1; } else { // in case of even number time += Math.floor(bananas[i] / mid_val); } } // check if time is less than // or equals to given hour if (time <= H) { return true; } else { return false; }} function minEatingSpeed(piles, H) { // as minimum speed of eating must be 1 let start = 1; // Maximum speed of eating // is the maximum bananas in given piles let end = [...piles].sort((a, 
b) => b - a)[0]; while (start < end) { let mid = start + Math.floor((end - start) / 2); // Check if the mid(hours) is valid if (check(piles, mid, H) == true) { // If valid continue to search // lower speed end = mid; } else { // If cant finish bananas in given // hours, then increase the speed start = mid + 1; } } return end;} // Driver code let piles = [30, 11, 23, 4, 20];let H = 6;document.write(minEatingSpeed(piles, H)); </script> 23 Time Complexity: O(N log W) (W is the max bananas from all piles)Auxiliary Space: O(1)
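The naive linear-search idea described above is only stated in words; the short C++ sketch below fills that gap. It is illustrative rather than taken from the article, so the helper name hoursNeeded() and the exact structure are assumptions.

// Naive O(N * W) sketch for Koko's minimum eating speed:
// try every speed from 1 up to the largest pile and return
// the first speed that finishes within H hours.
#include <bits/stdc++.h>
using namespace std;

// Hours needed to eat all piles at the given speed
long long hoursNeeded(const vector<int>& piles, int speed)
{
    long long hours = 0;
    for (int p : piles)
        hours += (p + speed - 1) / speed; // ceil(p / speed)
    return hours;
}

int minEatingSpeedNaive(const vector<int>& piles, int H)
{
    int maxPile = *max_element(piles.begin(), piles.end());

    // The first speed that fits in H hours is the answer
    for (int s = 1; s <= maxPile; s++)
        if (hoursNeeded(piles, s) <= H)
            return s;
    return maxPile;
}

int main()
{
    vector<int> piles = { 30, 11, 23, 4, 20 };
    cout << minEatingSpeedNaive(piles, 6) << endl; // expected 23
    return 0;
}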
std::generate in C++ - GeeksforGeeks
21 Jul, 2017

std::generate, as the name suggests, is an STL algorithm that generates values from a generator function and assigns them to the elements of a container in the range [first, last). The generator function has to be defined by the user, and it is called successively, once per element, to produce the values that are assigned.

Template function:

void generate (ForwardIterator first, ForwardIterator last, Generator gen);

first: Forward iterator pointing to the first element of the range.
last: Forward iterator pointing to one past the last element of the range (the range is half-open).
gen: A generator function whose successive return values are assigned to the elements.

Returns: none
Since the function has a void return type, it does not return any value.

// C++ program to demonstrate the use of std::generate
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

// Defining the generator function
int gen()
{
    static int i = 0;
    return ++i;
}

int main()
{
    // Declaring a vector of size 10
    vector<int> v1(10);

    // using std::generate
    std::generate(v1.begin(), v1.end(), gen);

    vector<int>::iterator i1;
    for (i1 = v1.begin(); i1 != v1.end(); ++i1) {
        cout << *i1 << " ";
    }
    return 0;
}

Output:

1 2 3 4 5 6 7 8 9 10

Next: std::generate_n in C++

This article is contributed by Mrigendra Singh.
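In practice std::generate is often called with a lambda instead of a free function, and std::generate_n (mentioned above as the follow-up topic) assigns to a fixed number of elements. The snippet below is an illustrative sketch of both and is not part of the original article.

// std::generate with a lambda, plus std::generate_n
#include <algorithm>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v(10);

    // Lambda generator: captures a counter by reference and
    // produces the squares 1, 4, 9, 16, ...
    int i = 0;
    std::generate(v.begin(), v.end(), [&i]() {
        ++i;
        return i * i;
    });

    // std::generate_n assigns to exactly n elements starting at the
    // given iterator; here it overwrites the first 5 squares with 0.
    std::generate_n(v.begin(), 5, []() { return 0; });

    for (int x : v)
        std::cout << x << " ";   // 0 0 0 0 0 36 49 64 81 100
    std::cout << std::endl;
    return 0;
}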
Find the sum of infinite series 1^2.x^0 + 2^2.x^1 + 3^2.x^2 + 4^2.x^3 +....... - GeeksforGeeks
22 Mar, 2021 Given an infinite series and a value x, the task is to find its sum. Below is the infinite series 1^2*x^0 + 2^2*x^1 + 3^2*x^2 + 4^2*x^3 +....... upto infinity, where x belongs to (-1, 1) Examples: Input: x = 0.5 Output: 12 Input: x = 0.9 Output: 1900 Approach:Though the given series is not an Arithmetico-Geometric series, however, the differences and so on, forms an AP. So, we can use the Method of Differences.Hence, the sum will be (1+x)/(1-x)^3.Below is the implementation of above approach: C++ Java Python C# PHP Javascript // C++ implementation of above approach#include <iostream>#include <math.h> using namespace std; // Function to calculate sumdouble solve_sum(double x){ // Return sum return (1 + x) / pow(1 - x, 3);} // Driver codeint main(){ // declaration of value of x double x = 0.5; // Function call to calculate // the sum when x=0.5 cout << solve_sum(x); return 0;} // Java Program to find//sum of the given infinite seriesimport java.util.*; class solution{static double calculateSum(double x){ // Returning the final sumreturn (1 + x) / Math.pow(1 - x, 3); } //Driver codepublic static void main(String ar[]){ double x=0.5; System.out.println((int)calculateSum(x)); }}//This code is contributed by Surendra_Gangwar # Python implementation of above approach # Function to calculate sumdef solve_sum(x): # Return sum return (1 + x)/pow(1-x, 3) # driver code # declaration of value of xx = 0.5 # Function call to calculate the sum when x = 0.5print(int(solve_sum(x))) // C# Program to find sum of// the given infinite seriesusing System; class GFG{static double calculateSum(double x){ // Returning the final sumreturn (1 + x) / Math.Pow(1 - x, 3); } // Driver codepublic static void Main(){ double x = 0.5; Console.WriteLine((int)calculateSum(x));}} // This code is contributed// by inder_verma.. <?php// PHP implementation of// above approach // Function to calculate sumfunction solve_sum($x){ // Return sum return (1 + $x) / pow(1 - $x, 3);} // Driver code // declaration of value of x$x = 0.5; // Function call to calculate// the sum when x=0.5echo solve_sum($x); // This code is contributed// by inder_verma?> <script>// javascript Program to find//sum of the given infinite series function calculateSum(x){ // Returning the final sumreturn (1 + x) / Math.pow(1 - x, 3); } //Driver code var x=0.5;document.write(parseInt(calculateSum(x))); // This code is contributed by 29AjayKumar </script> 12 SURENDRA_GANGWAR inderDuMCA 29AjayKumar series series-sum Mathematical Mathematical series Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Program to print prime numbers from 1 to N. Segment Tree | Set 1 (Sum of given range) Modular multiplicative inverse Count all possible paths from top left to bottom right of a mXn matrix Fizz Buzz Implementation Check if a number is Palindrome Program to multiply two matrices Count ways to reach the n'th stair Merge two sorted arrays with O(1) extra space Generate all permutation of a set in Python
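One way to see where the closed form comes from: differentiating the standard power series sum n*x^n = x/(1-x)^2 term by term gives sum n^2*x^(n-1) = (1+x)/(1-x)^3 for |x| < 1, which is the formula used in the code above. The short C++ sketch below is not part of the original article; it simply checks the formula numerically against a truncated partial sum.

// Numerically check that the partial sums of
// 1^2*x^0 + 2^2*x^1 + 3^2*x^2 + ... approach (1 + x) / (1 - x)^3
#include <cmath>
#include <iostream>
using namespace std;

int main()
{
    double x = 0.5;

    double closedForm = (1 + x) / pow(1 - x, 3);

    // Add up the first 200 terms n^2 * x^(n-1); for |x| < 1 the
    // later terms are negligible, so this approximates the sum.
    double partialSum = 0.0;
    for (int n = 1; n <= 200; n++)
        partialSum += (double)n * n * pow(x, n - 1);

    cout << "Closed form : " << closedForm << endl; // 12
    cout << "Partial sum : " << partialSum << endl; // ~12
    return 0;
}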
Minimum number of Appends needed to make a string palindrome - GeeksforGeeks
02 Dec, 2021 Given a string s we need to tell minimum characters to be appended (insertion at the end) to make a string palindrome. Examples: Input : s = "abede" Output : 2 We can make string palindrome as "abedeba" by adding ba at the end of the string. Input : s = "aabb" Output : 2 We can make string palindrome as"aabbaa" by adding aa at the end of the string. The solution can be achieved by removing characters from the beginning of the string one by one and checking if the string is palindrome or not. For Example, consider the above string, s = “abede”. We check if the string is palindrome or not. The result is false, then we remove the character from the beginning of a string and now string becomes “bede”.We check if the string is palindrome or not. The result is again false, then we remove the character from the beginning of a string and now the string becomes “ede”.We check if the string is palindrome or not. The result is true, so the output becomes 2 which is the number of characters removed from the string. C++ Java Python3 C# Javascript // C program to find minimum number of appends// needed to make a string Palindrome#include<stdio.h>#include<string.h>#include<stdbool.h> // Checking if the string is palindrome or notbool isPalindrome(char *str){ int len = strlen(str); // single character is always palindrome if (len == 1) return true; // pointing to first character char *ptr1 = str; // pointing to last character char *ptr2 = str+len-1; while (ptr2 > ptr1) { if (*ptr1 != *ptr2) return false; ptr1++; ptr2--; } return true;} // Recursive function to count number of appendsint noOfAppends(char s[]){ if (isPalindrome(s)) return 0; // Removing first character of string by // incrementing base address pointer. s++; return 1 + noOfAppends(s);} // Driver program to test above functionsint main(){ char s[] = "abede"; printf("%d\n", noOfAppends(s)); return 0;} // Java program to find minimum number of appends// needed to make a string Palindromeclass GFG{ // Checking if the string is palindrome or notstatic boolean isPalindrome(char []str){ int len = str.length; // single character is always palindrome if (len == 1) return true; // pointing to first character int ptr1 = 0; // pointing to last character int ptr2 = len-1; while (ptr2 >= ptr1) { if (str[ptr1] != str[ptr2]) return false; ptr1++; ptr2--; } return true;} // Recursive function to count number of appendsstatic int noOfAppends(String s){ if (isPalindrome(s.toCharArray())) return 0; // Removing first character of string by // incrementing base address pointer. s=s.substring(1); return 1 + noOfAppends(s);} // Driver codepublic static void main(String arr[]){ String s = "abede"; System.out.printf("%d\n", noOfAppends(s));}} // This code contributed by Rajput-Ji # Python3 program to find minimum number of appends# needed to make a String Palindrome # Checking if the String is palindrome or notdef isPalindrome(Str): Len = len(Str) # single character is always palindrome if (Len == 1): return True # pointing to first character ptr1 = 0 # pointing to last character ptr2 = Len - 1 while (ptr2 > ptr1): if (Str[ptr1] != Str[ptr2]): return False ptr1 += 1 ptr2 -= 1 return True # Recursive function to count number of appendsdef noOfAppends(s): if (isPalindrome(s)): return 0 # Removing first character of String by # incrementing base address pointer. 
del s[0] return 1 + noOfAppends(s) # Driver Codese = "abede"s = [i for i in se]print(noOfAppends(s)) # This code is contributed by Mohit Kumar // C# program to find minimum number of appends// needed to make a string Palindromeusing System; class GFG{ // Checking if the string is palindrome or notstatic Boolean isPalindrome(char []str){ int len = str.Length; // single character is always palindrome if (len == 1) return true; // pointing to first character char ptr1 = str[0]; // pointing to last character char ptr2 = str[len-1]; while (ptr2 > ptr1) { if (ptr1 != ptr2) return false; ptr1++; ptr2--; } return true;} // Recursive function to count number of appendsstatic int noOfAppends(String s){ if (isPalindrome(s.ToCharArray())) return 0; // Removing first character of string by // incrementing base address pointer. s=s.Substring(1); return 1 + noOfAppends(s);} // Driver codepublic static void Main(String []arr){ String s = "abede"; Console.Write("{0}\n", noOfAppends(s));}} // This code has been contributed by 29AjayKumar <script>// Javascript program to find minimum number of appends// needed to make a string Palindrome // Checking if the string is palindrome or not function isPalindrome(str) { let len = str.length; // single character is always palindrome if (len == 1) return true; // pointing to first character let ptr1 = 0; // pointing to last character let ptr2 = len-1; while (ptr2 >= ptr1) { if (str[ptr1] != str[ptr2]) return false; ptr1++; ptr2--; } return true; } // Recursive function to count number of appends function noOfAppends(s) { if (isPalindrome(s.split(""))) return 0; // Removing first character of string by // incrementing base address pointer. s=s.substring(1); return 1 + noOfAppends(s); } // Driver code let s = "abede"; document.write(noOfAppends(s)); // This code is contributed by unknown2108</script> 2 The above approach is described and O(n**2) approach. Efficient Approach: We also have an algorithm taking the help of the Knuth Morris Pratt Algorithm which is O(n) Time Complexity. The basic idea behind the approach is that we calculate the largest substring from the end can be calculated and the length of the string minus this value is the minimum number of appends. The logic is intuitive, we need not append the palindrome and only those which do not form the palindrome. To find this largest palindrome from the end, we reverse the string, calculate the DFA and reverse the string again(thus gaining back the original string) and find the final state, which represents the number of matches of the string with the revered string and hence we get the largest substring that is a palindrome from the end, in O(n) time. Below is the implementation of the above approach: C++ // CPP program for above approach#include <algorithm>#include <iostream>#include <string>using namespace std; // This class builds the dfa and// precomputes the state.// See KMP algorithm for explanationclass kmp_numeric {private: int n; int** dfa; public: kmp_numeric(string& s) { n = s.length(); int c = 256; // Create dfa dfa = new int*[n]; // Iterate from 0 to n for (int i = 0; i < n; i++) dfa[i] = new int; int x = 0; // Iterate from 0 to n for (int i = 0; i < c; i++) dfa[0][i] = 0; // Initialise dfa[0][s[0]] = 1 dfa[0][s[0]] = 1; // Iterate i from 1 to n-1 for (int i = 1; i < n; i++) { // Iterate j from 0 to c - 1 for (int j = 0; j < c; j++) { dfa[i][j] = dfa[x][j]; } dfa[i][s[i]] = i + 1; x = dfa[x][s[i]]; } } // This function finds the overlap // between two strings,by // changing the state. 
int longest_overlap(string& query) { // q1 is length of query int ql = query.length(); int state = 0; // Iterate from 0 to q1 - 1 for (int i = 0; i < ql; i++) { state = dfa[state][query[i]]; } return state; }}; int min_appends(string& s){ // Reverse the string. reverse(s.begin(), s.end()); // Build the DFA for the // reversed String kmp_numeric kmp = s; // Get the original string back reverse(s.begin(), s.end()); // Largest overlap in this case is the // largest string from the end which // is a palindrome. int ans = s.length() - kmp.longest_overlap(s); return ans;} // Driver Codeint main(){ string s = "deep"; // Answer : 3 string t = "sososososos"; // Answer : 0 cout << min_appends(s) << endl; cout << min_appends(t) << endl;} 3 0 Suggestion by: Pratik Priyadarsan Related Article : Dynamic Programming | Set 28 (Minimum insertions to form a palindrome)This article is contributed by Shubham Chaudhary. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to review-team@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. Rajput-Ji 29AjayKumar mohit kumar 29 pratik0718 abhinaygupta98 unknown2108 kumaripunam984122 gulshankumarar231 palindrome Strings Strings palindrome Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Array of Strings in C++ (5 Different Ways to Create) Convert string to char array in C++ Check whether two strings are anagram of each other Caesar Cipher in Cryptography Top 50 String Coding Problems for Interviews Length of the longest substring without repeating characters Reverse words in a given string How to split a string in C/C++, Python and Java? Print all the duplicates in the input string stringstream in C++ and its applications
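Both implementations above only count the appends; it can also be useful to construct the resulting palindrome. A minimum-append count of k means the first k characters of the string, reversed, are exactly the characters to append. The C++ sketch below is illustrative (not from the article) and reuses the simple O(n^2) counting idea with assumed helper names.

// Build the palindrome obtained by appending the minimum number
// of characters: append the reverse of the first k characters.
#include <algorithm>
#include <iostream>
#include <string>
using namespace std;

bool isPal(const string& s)
{
    string r(s.rbegin(), s.rend());
    return r == s;
}

// Count appends with the simple approach: drop characters from the
// front until the remaining suffix is a palindrome.
int countAppends(const string& s)
{
    int k = 0;
    while (!isPal(s.substr(k)))
        k++;
    return k;
}

int main()
{
    string s = "abede";
    int k = countAppends(s); // 2

    // Characters to append = first k characters, reversed
    string prefix = s.substr(0, k);
    reverse(prefix.begin(), prefix.end());
    string appended = s + prefix;

    cout << k << " append(s) -> " << appended << " ("
         << (isPal(appended) ? "palindrome" : "not a palindrome")
         << ")" << endl;   // 2 append(s) -> abedeba (palindrome)
    return 0;
}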
How to remove non-alphanumeric characters in PHP? - GeeksforGeeks
10 Oct, 2018

Non-alphanumeric characters can be removed by using the preg_replace() function. This function performs a regular expression search and replace: preg_replace() searches the string for the given pattern and replaces every match with the replacement string.

Examples:

Input : !@GeeksforGeeks2018?
Output : GeeksforGeeks2018

Input : Geeks For Geeks
Output : GeeksForGeeks

Syntax:

string preg_replace( $pattern, $replacement_string, $original_string )

Parameter: This function accepts three parameters as mentioned above and described below:

$pattern: The pattern that is searched for in the string. It must be a regular expression.
$replacement_string: The string that replaces every matched pattern.
$original_string: The original string in which searching and replacement is done.

Return value: After the replacement has occurred, the modified string is returned. If no matches are found, the original string remains unchanged.

Method 1: The regular expression '/[\W]/' matches all non-word characters and replaces them with '' (the empty string).

$str = preg_replace( '/[\W]/', '', $str);

In the regular expression, \W is a meta-character (a backslash followed by W) with a special meaning: it matches any character that is not a letter, a digit or an underscore. Because the underscore counts as a word character, this method leaves underscores in place.

Example:

<?php
// string containing non-alphanumeric characters
$str = "!@GeeksforGeeks2018?";

// preg_replace function to remove the
// non-alphanumeric characters
$str = preg_replace('/[\W]/', '', $str);

// print the string
echo($str);
?>

GeeksforGeeks2018

Method 2: The regular expression '/[^a-z0-9]/i' matches all non-alphanumeric characters and replaces them with '' (the empty string).

$str = preg_replace( '/[^a-z0-9]/i', '', $str);

In the regular expression:

i: makes the match case-insensitive.
a-z: matches all lowercase letters; A-Z does not need to be listed because the i modifier already makes the match case-insensitive.
0-9: matches all digits.

Example:

<?php
// string containing non-alphanumeric characters
$str = "!@GeeksforGeeks2018?";

// preg_replace function to remove the
// non-alphanumeric characters
$str = preg_replace('/[^a-z0-9]/i', '', $str);

// print the string
echo($str);
?>

GeeksforGeeks2018
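For readers working outside PHP, the same clean-up can be expressed with a regular-expression replace in other languages as well. The small C++ sketch below, using std::regex_replace, is purely illustrative and is not part of the original article.

// Remove every character that is not a letter or a digit,
// mirroring Method 2 above but in C++.
#include <iostream>
#include <regex>
#include <string>

int main()
{
    std::string str = "!@GeeksforGeeks2018?";

    // "[^A-Za-z0-9]" matches any non-alphanumeric character;
    // replacing matches with "" deletes them.
    std::string cleaned
        = std::regex_replace(str, std::regex("[^A-Za-z0-9]"), "");

    std::cout << cleaned << std::endl; // GeeksforGeeks2018
    return 0;
}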
Maximum element between two nodes of BST - GeeksforGeeks
19 Apr, 2022

Given an array of N elements and two integers A, B which belong to the given array, create a Binary Search Tree by inserting elements from arr[0] to arr[n-1]. The task is to find the maximum element in the path from A to B.

Examples:

Input : arr[] = { 18, 36, 9, 6, 12, 10, 1, 8 }, a = 1, b = 10
Output : 12
Path from 1 to 10 contains { 1, 6, 9, 12, 10 }. Maximum element is 12.

The idea is to find the Lowest Common Ancestor (LCA) of node 'a' and node 'b', then find the maximum node on the path between the LCA and 'a' and the maximum node on the path between the LCA and 'b'. The answer is the larger of the two.

Below is the implementation of the above approach, shown here in C++:

// C++ program to find the maximum element in the path
// between two nodes of a Binary Search Tree.
#include <bits/stdc++.h>
using namespace std;

struct Node {
    struct Node *left, *right;
    int data;
};

// Create and return a pointer to a new Node.
Node* createNode(int x)
{
    Node* p = new Node;
    p->data = x;
    p->left = p->right = NULL;
    return p;
}

// Insert a new node in the Binary Search Tree.
void insertNode(struct Node* root, int x)
{
    Node *p = root, *q = NULL;
    while (p != NULL) {
        q = p;
        if (p->data < x)
            p = p->right;
        else
            p = p->left;
    }

    if (q == NULL)
        p = createNode(x);
    else {
        if (q->data < x)
            q->right = createNode(x);
        else
            q->left = createNode(x);
    }
}

// Return the maximum element between a node
// and its given ancestor q.
int maxelpath(Node* q, int x)
{
    Node* p = q;
    int mx = INT_MIN;

    // Traverse the path between the ancestor and
    // the node, tracking the maximum element.
    while (p->data != x) {
        mx = max(mx, p->data);
        if (p->data > x)
            p = p->left;
        else
            p = p->right;
    }

    return max(mx, x);
}

// Return the maximum element in the path between
// two given nodes of the BST.
int maximumElement(struct Node* root, int x, int y)
{
    Node* p = root;

    // Find the LCA of node x and node y.
    while ((x < p->data && y < p->data)
           || (x > p->data && y > p->data)) {

        // Both nodes lie in the left subtree of p.
        if (x < p->data && y < p->data)
            p = p->left;

        // Both nodes lie in the right subtree of p.
        else if (x > p->data && y > p->data)
            p = p->right;
    }

    // Return the maximum of the maximum elements found
    // on the paths from the ancestor to both nodes.
    return max(maxelpath(p, x), maxelpath(p, y));
}

// Driver Code
int main()
{
    int arr[] = { 18, 36, 9, 6, 12, 10, 1, 8 };
    int a = 1, b = 10;
    int n = sizeof(arr) / sizeof(arr[0]);

    // Creating the root of the Binary Search Tree
    struct Node* root = createNode(arr[0]);

    // Inserting nodes into the Binary Search Tree
    for (int i = 1; i < n; i++)
        insertNode(root, arr[i]);

    cout << maximumElement(root, a, b) << endl;

    return 0;
}

Output:

12

Time complexity: O(h) for the path query, where h is the height of the BST.
[ { "code": null, "e": 26047, "s": 26019, "text": "\n19 Apr, 2022" }, { "code": null, "e": 26283, "s": 26047, "text": "Given an array of N elements and two integers A, B which belong to the given array. Create a Binary Search Tree by inserting elements from arr[0] to arr[n-1]. The task is to find the maximum element in the path from A to B.Examples : " }, { "code": null, "e": 26376, "s": 26283, "text": "Input : arr[] = { 18, 36, 9, 6, 12, 10, 1, 8 }, \n a = 1, \n b = 10.\nOutput : 12" }, { "code": null, "e": 26450, "s": 26378, "text": "Path from 1 to 10 contains { 1, 6, 9, 12, 10 }. Maximum element is 12. " }, { "code": null, "e": 26646, "s": 26450, "text": "The idea is to find Lowest Common Ancestor of node ‘a’ and node ‘b’. Then search maximum node between LCA and ‘a’, also find maximum node between LCA and ‘b’. Answer will be maximum node of two. " }, { "code": null, "e": 26650, "s": 26646, "text": "C++" }, { "code": null, "e": 26655, "s": 26650, "text": "Java" }, { "code": null, "e": 26663, "s": 26655, "text": "Python3" }, { "code": null, "e": 26666, "s": 26663, "text": "C#" }, { "code": null, "e": 26677, "s": 26666, "text": "Javascript" }, { "code": "// C++ program to find maximum element in the path// between two Nodes of Binary Search Tree.#include <bits/stdc++.h>using namespace std; struct Node{ struct Node *left, *right; int data;}; // Create and return a pointer of new Node.Node *createNode(int x){ Node *p = new Node; p -> data = x; p -> left = p -> right = NULL; return p;} // Insert a new Node in Binary Search Tree.void insertNode(struct Node *root, int x){ Node *p = root, *q = NULL; while (p != NULL) { q = p; if (p -> data < x) p = p -> right; else p = p -> left; } if (q == NULL) p = createNode(x); else { if (q -> data < x) q -> right = createNode(x); else q -> left = createNode(x); }} // Return the maximum element between a Node// and its given ancestor.int maxelpath(Node *q, int x){ Node *p = q; int mx = INT_MIN; // Traversing the path between ancestor and // Node and finding maximum element. while (p -> data != x) { if (p -> data > x) { mx = max(mx, p -> data); p = p -> left; } else { mx = max(mx, p -> data); p = p -> right; } } return max(mx, x);} // Return maximum element in the path between// two given Node of BST.int maximumElement(struct Node *root, int x, int y){ Node *p = root; // Finding the LCA of Node x and Node y while ((x < p -> data && y < p -> data) || (x > p -> data && y > p -> data)) { // Checking if both the Node lie on the // left side of the parent p. if (x < p -> data && y < p -> data) p = p -> left; // Checking if both the Node lie on the // right side of the parent p. else if (x > p -> data && y > p -> data) p = p -> right; } // Return the maximum of maximum elements occur // in path from ancestor to both Node. return max(maxelpath(p, x), maxelpath(p, y));} // Driver Codeint main(){ int arr[] = { 18, 36, 9, 6, 12, 10, 1, 8 }; int a = 1, b = 10; int n = sizeof(arr) / sizeof(arr[0]); // Creating the root of Binary Search Tree struct Node *root = createNode(arr[0]); // Inserting Nodes in Binary Search Tree for (int i = 1; i < n; i++) insertNode(root, arr[i]); cout << maximumElement(root, a, b) << endl; return 0;}", "e": 29088, "s": 26677, "text": null }, { "code": "// Java program to find maximum element in the path// between two Nodes of Binary Search Tree.class Solution{ static class Node{ Node left, right; int data;} // Create and return a pointer of new Node.static Node createNode(int x){ Node p = new Node(); p . data = x; p . left = p . 
right = null; return p;} // Insert a new Node in Binary Search Tree.static void insertNode( Node root, int x){ Node p = root, q = null; while (p != null) { q = p; if (p . data < x) p = p . right; else p = p . left; } if (q == null) p = createNode(x); else { if (q . data < x) q . right = createNode(x); else q . left = createNode(x); }} // Return the maximum element between a Node// and its given ancestor.static int maxelpath(Node q, int x){ Node p = q; int mx = -1; // Traversing the path between ancestor and // Node and finding maximum element. while (p . data != x) { if (p . data > x) { mx = Math.max(mx, p . data); p = p . left; } else { mx = Math.max(mx, p . data); p = p . right; } } return Math.max(mx, x);} // Return maximum element in the path between// two given Node of BST.static int maximumElement( Node root, int x, int y){ Node p = root; // Finding the LCA of Node x and Node y while ((x < p . data && y < p . data) || (x > p . data && y > p . data)) { // Checking if both the Node lie on the // left side of the parent p. if (x < p . data && y < p . data) p = p . left; // Checking if both the Node lie on the // right side of the parent p. else if (x > p . data && y > p . data) p = p . right; } // Return the maximum of maximum elements occur // in path from ancestor to both Node. return Math.max(maxelpath(p, x), maxelpath(p, y));} // Driver Codepublic static void main(String args[]){ int arr[] = { 18, 36, 9, 6, 12, 10, 1, 8 }; int a = 1, b = 10; int n =arr.length; // Creating the root of Binary Search Tree Node root = createNode(arr[0]); // Inserting Nodes in Binary Search Tree for (int i = 1; i < n; i++) insertNode(root, arr[i]); System.out.println( maximumElement(root, a, b) ); }}//contributed by Arnab Kundu", "e": 31513, "s": 29088, "text": null }, { "code": "# Python 3 program to find maximum element# in the path between two Nodes of Binary# Search Tree. # Create and return a pointer of new Node.class createNode: # Constructor to create a new node def __init__(self, data): self.data = data self.left = None self.right = None # Insert a new Node in Binary Search Tree.def insertNode(root, x): p, q = root, None while p != None: q = p if p.data < x: p = p.right else: p = p.left if q == None: p = createNode(x) else: if q.data < x: q.right = createNode(x) else: q.left = createNode(x) # Return the maximum element between a# Node and its given ancestor.def maxelpath(q, x): p = q mx = -999999999999 # Traversing the path between ancestor # and Node and finding maximum element. while p.data != x: if p.data > x: mx = max(mx, p.data) p = p.left else: mx = max(mx, p.data) p = p.right return max(mx, x) # Return maximum element in the path# between two given Node of BST.def maximumElement(root, x, y): p = root # Finding the LCA of Node x and Node y while ((x < p.data and y < p.data) or (x > p.data and y > p.data)): # Checking if both the Node lie on # the left side of the parent p. if x < p.data and y < p.data: p = p.left # Checking if both the Node lie on # the right side of the parent p. elif x > p.data and y > p.data: p = p.right # Return the maximum of maximum elements # occur in path from ancestor to both Node. 
return max(maxelpath(p, x), maxelpath(p, y)) # Driver Codeif __name__ == '__main__': arr = [ 18, 36, 9, 6, 12, 10, 1, 8] a, b = 1, 10 n = len(arr) # Creating the root of Binary Search Tree root = createNode(arr[0]) # Inserting Nodes in Binary Search Tree for i in range(1,n): insertNode(root, arr[i]) print(maximumElement(root, a, b)) # This code is contributed by PranchalK", "e": 33608, "s": 31513, "text": null }, { "code": "using System; // C# program to find maximum element in the path// between two Nodes of Binary Search Tree.public class Solution{ public class Node{ public Node left, right; public int data;} // Create and return a pointer of new Node.public static Node createNode(int x){ Node p = new Node(); p.data = x; p.left = p.right = null; return p;} // Insert a new Node in Binary Search Tree.public static void insertNode(Node root, int x){ Node p = root, q = null; while (p != null) { q = p; if (p.data < x) { p = p.right; } else { p = p.left; } } if (q == null) { p = createNode(x); } else { if (q.data < x) { q.right = createNode(x); } else { q.left = createNode(x); } }} // Return the maximum element between a Node// and its given ancestor.public static int maxelpath(Node q, int x){ Node p = q; int mx = -1; // Traversing the path between ancestor and // Node and finding maximum element. while (p.data != x) { if (p.data > x) { mx = Math.Max(mx, p.data); p = p.left; } else { mx = Math.Max(mx, p.data); p = p.right; } } return Math.Max(mx, x);} // Return maximum element in the path between// two given Node of BST.public static int maximumElement(Node root, int x, int y){ Node p = root; // Finding the LCA of Node x and Node y while ((x < p.data && y < p.data) || (x > p.data && y > p.data)) { // Checking if both the Node lie on the // left side of the parent p. if (x < p.data && y < p.data) { p = p.left; } // Checking if both the Node lie on the // right side of the parent p. else if (x > p.data && y > p.data) { p = p.right; } } // Return the maximum of maximum elements occur // in path from ancestor to both Node. return Math.Max(maxelpath(p, x), maxelpath(p, y));} // Driver Codepublic static void Main(string[] args){ int[] arr = new int[] {18, 36, 9, 6, 12, 10, 1, 8}; int a = 1, b = 10; int n = arr.Length; // Creating the root of Binary Search Tree Node root = createNode(arr[0]); // Inserting Nodes in Binary Search Tree for (int i = 1; i < n; i++) { insertNode(root, arr[i]); } Console.WriteLine(maximumElement(root, a, b)); }} // This code is contributed by Shrikant13", "e": 36164, "s": 33608, "text": null }, { "code": "<script> // JavaScript program to find// maximum element in the path// between two Nodes of Binary// Search Tree. class Node { constructor(val) { this.data = val; this.left = null; this.right = null; } } // Create and return a pointer of new Node. function createNode(x) {var p = new Node(); p.data = x; p.left = p.right = null; return p; } // Insert a new Node in Binary Search Tree. function insertNode(root , x) { var p = root, q = null; while (p != null) { q = p; if (p.data < x) p = p.right; else p = p.left; } if (q == null) p = createNode(x); else { if (q.data < x) q.right = createNode(x); else q.left = createNode(x); } } // Return the maximum element between a Node // and its given ancestor. function maxelpath(q , x) { var p = q; var mx = -1; // Traversing the path between ancestor and // Node and finding maximum element. 
while (p.data != x) { if (p.data > x) { mx = Math.max(mx, p.data); p = p.left; } else { mx = Math.max(mx, p.data); p = p.right; } } return Math.max(mx, x); } // Return maximum element in the path between // two given Node of BST. function maximumElement(root , x , y) { var p = root; // Finding the LCA of Node x and Node y while ((x < p.data && y < p.data) || (x > p.data && y > p.data)) { // Checking if both the Node lie on the // left side of the parent p. if (x < p.data && y < p.data) p = p.left; // Checking if both the Node lie on the // right side of the parent p. else if (x > p.data && y > p.data) p = p.right; } // Return the maximum of maximum elements occur // in path from ancestor to both Node. return Math.max(maxelpath(p, x), maxelpath(p, y)); } // Driver Code var arr = [ 18, 36, 9, 6, 12, 10, 1, 8 ]; var a = 1, b = 10; var n = arr.length; // Creating the root of Binary Search Tree var root = createNode(arr[0]); // Inserting Nodes in Binary Search Tree for (i = 1; i < n; i++) insertNode(root, arr[i]); document.write(maximumElement(root, a, b)); // This code contributed by gauravrajput1 </script>", "e": 38820, "s": 36164, "text": null }, { "code": null, "e": 38830, "s": 38820, "text": "Output: " }, { "code": null, "e": 38833, "s": 38830, "text": "12" }, { "code": null, "e": 39300, "s": 38833, "text": "Time complexity: O(h) where h is height of BSTThis article is contributed by Anuj Chauhan. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to review-team@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. " }, { "code": null, "e": 39318, "s": 39300, "text": "sanjeetkumarSingh" }, { "code": null, "e": 39329, "s": 39318, "text": "andrew1234" }, { "code": null, "e": 39341, "s": 39329, "text": "shrikanth13" }, { "code": null, "e": 39357, "s": 39341, "text": "PranchalKatiyar" }, { "code": null, "e": 39371, "s": 39357, "text": "GauravRajput1" }, { "code": null, "e": 39384, "s": 39371, "text": "simmytarika5" }, { "code": null, "e": 39388, "s": 39384, "text": "LCA" }, { "code": null, "e": 39407, "s": 39388, "text": "Binary Search Tree" }, { "code": null, "e": 39426, "s": 39407, "text": "Binary Search Tree" }, { "code": null, "e": 39524, "s": 39426, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 39553, "s": 39524, "text": "Sorted Array to Balanced BST" }, { "code": null, "e": 39593, "s": 39553, "text": "Inorder Successor in Binary Search Tree" }, { "code": null, "e": 39628, "s": 39593, "text": "Optimal Binary Search Tree | DP-24" }, { "code": null, "e": 39685, "s": 39628, "text": "Find the node with minimum value in a Binary Search Tree" }, { "code": null, "e": 39755, "s": 39685, "text": "Overview of Data Structures | Set 2 (Binary Tree, BST, Heap and Hash)" }, { "code": null, "e": 39809, "s": 39755, "text": "Difference between Binary Tree and Binary Search Tree" }, { "code": null, "e": 39857, "s": 39809, "text": "Lowest Common Ancestor in a Binary Search Tree." }, { "code": null, "e": 39902, "s": 39857, "text": "Binary Tree to Binary Search Tree Conversion" }, { "code": null, "e": 39962, "s": 39902, "text": "Find k-th smallest element in BST (Order Statistics in BST)" } ]
Program to calculate Height and Depth of a node in a Binary Tree - GeeksforGeeks
30 Jun, 2021

Given a Binary Tree consisting of N nodes and an integer K, the task is to find the depth and the height of the node with value K in the Binary Tree.

The depth of a node is the number of edges present in the path from the root node of the tree to that node.
The height of a node is the number of edges present in the longest path connecting that node to a leaf node.

Examples:

Input: K = 25,
          5
        /   \
      10     15
     /  \   /  \
   20   25 30   35
          \
           45
Output:
Depth of node 25 = 2
Height of node 25 = 1
Explanation:
The number of edges in the path from the root node to the node 25 is 2. Therefore, the depth of the node 25 is 2.
The number of edges in the longest path connecting the node 25 to any leaf node is 1. Therefore, the height of the node 25 is 1.

Input: K = 10, with the same tree as above
Output:
Depth of node 10 = 1
Height of node 10 = 2

Approach: The problem can be solved based on the following observations:

Depth of a node K (of a Binary Tree) = Number of edges in the path connecting the root to the node K = Number of ancestors of K (excluding K itself).

Follow the steps below to find the depth of the given node:

- If the tree is empty, return -1.
- Otherwise, perform the following steps:
  - Initialize a variable, say dist, as -1.
  - Check if the current node is equal to the given node K.
  - Otherwise, check if K is present in either of the subtrees, by recursively checking the left and right subtrees respectively.
  - If found to be true, return the value of dist + 1.
  - Otherwise, return dist.

Height of a node K (of a Binary Tree) = Number of edges in the longest path connecting K to any leaf node.

Follow the steps below to find the height of the given node:

- If the tree is empty, return -1.
- Otherwise, perform the following steps:
  - Calculate the height of the left subtree recursively.
  - Calculate the height of the right subtree recursively.
  - Update the height of the current node by adding 1 to the maximum of the two heights obtained in the previous step. Store the height in a variable, say ans.
  - If the current node is equal to the given node K, record the value of ans as the required answer.

Below is the implementation of the above approach, shown here in C++:

// C++ program for the above approach
#include <bits/stdc++.h>
using namespace std;

// Structure of a Binary Tree Node
struct Node {
    int data;
    Node *left, *right;
};

// Utility function to create
// a new Binary Tree Node
Node* newNode(int item)
{
    Node* temp = new Node;
    temp->data = item;
    temp->left = temp->right = NULL;
    return temp;
}

// Function to find the depth of
// a given node in a Binary Tree
int findDepth(Node* root, int x)
{
    // Base case
    if (root == NULL)
        return -1;

    // Initialize distance as -1
    int dist = -1;

    // Check if x is the current node,
    // otherwise check whether x is present
    // in the left or the right subtree
    if ((root->data == x)
        || (dist = findDepth(root->left, x)) >= 0
        || (dist = findDepth(root->right, x)) >= 0)

        // Return depth of the node
        return dist + 1;

    return dist;
}

// Helper function to find the height
// of a given node in the binary tree
int findHeightUtil(Node* root, int x, int& height)
{
    // Base Case
    if (root == NULL)
        return -1;

    // Store the maximum height of
    // the left and right subtree
    int leftHeight = findHeightUtil(root->left, x, height);
    int rightHeight = findHeightUtil(root->right, x, height);

    // Update height of the current node
    int ans = max(leftHeight, rightHeight) + 1;

    // If the current node is the required node,
    // record its height
    if (root->data == x)
        height = ans;

    return ans;
}

// Function to find the height of
// a given node in a Binary Tree
int findHeight(Node* root, int x)
{
    // Stores the height of the given node
    int h = -1;

    // Traverse the tree once and record the
    // height of the node with value x
    findHeightUtil(root, x, h);

    // Return the height
    return h;
}

// Driver Code
int main()
{
    // Binary Tree Formation
    Node* root = newNode(5);
    root->left = newNode(10);
    root->right = newNode(15);
    root->left->left = newNode(20);
    root->left->right = newNode(25);
    root->left->right->right = newNode(45);
    root->right->left = newNode(30);
    root->right->right = newNode(35);

    int k = 25;

    // Function call to find the
    // depth of a given node
    cout << "Depth: " << findDepth(root, k) << "\n";

    // Function call to find the
    // height of a given node
    cout << "Height: " << findHeight(root, k);

    return 0;
}

Output:

Depth: 2
Height: 1

Time Complexity: O(N)
Auxiliary Space: O(H) for the recursion call stack, where H is the height of the tree (O(N) in the worst case).
[ { "code": null, "e": 26249, "s": 26221, "text": "\n30 Jun, 2021" }, { "code": null, "e": 26395, "s": 26249, "text": "Given a Binary Tree consisting of N nodes and a integer K, the task is to find the depth and height of the node with value K in the Binary Tree. " }, { "code": null, "e": 26605, "s": 26395, "text": "The depth of a node is the number of edges present in path from the root node of a tree to that node.The height of a node is the number of edges present in the longest path connecting that node to a leaf node." }, { "code": null, "e": 26615, "s": 26605, "text": "Examples:" }, { "code": null, "e": 27014, "s": 26615, "text": "Input: K = 25, 5 / \\ 10 15 / \\ / \\20 25 30 35 \\ 45Output:Depth of node 25 = 2Height of node 25 = 1Explanation:The number of edges in the path from root node to the node 25 is 2. Therefore, depth of the node 25 is 2.The number of edges in the longest path connecting the node 25 to any leaf node is 1. Therefore, height of the node 25 is 1." }, { "code": null, "e": 27173, "s": 27014, "text": "Input: K = 10, 5 / \\ 10 15 / \\ / \\20 25 30 35 \\ 45Output: Depth of node 10 = 1Height of node 10 = 2" }, { "code": null, "e": 27246, "s": 27173, "text": "Approach: The problem can be solved based on the following observations:" }, { "code": null, "e": 27398, "s": 27246, "text": " Depth of a node K (of a Binary Tree) = Number of edges in the path connecting the root to the node K = Number of ancestors of K (excluding K itself). " }, { "code": null, "e": 27458, "s": 27398, "text": "Follow the steps below to find the depth of the given node:" }, { "code": null, "e": 27490, "s": 27458, "text": "If the tree is empty, print -1." }, { "code": null, "e": 27816, "s": 27490, "text": "Otherwise, perform the following steps:Initialize a variable, say dist as -1.Check if the node K is equal to the given node.Otherwise, check if it is present in either of the subtrees, by recursively checking for the left and right subtrees respectively.If found to be true, print the value of dist + 1.Otherwise, print dist." }, { "code": null, "e": 27855, "s": 27816, "text": "Initialize a variable, say dist as -1." }, { "code": null, "e": 27903, "s": 27855, "text": "Check if the node K is equal to the given node." }, { "code": null, "e": 28034, "s": 27903, "text": "Otherwise, check if it is present in either of the subtrees, by recursively checking for the left and right subtrees respectively." }, { "code": null, "e": 28084, "s": 28034, "text": "If found to be true, print the value of dist + 1." }, { "code": null, "e": 28107, "s": 28084, "text": "Otherwise, print dist." }, { "code": null, "e": 28215, "s": 28107, "text": "Height of a node K (of a Binary Tree) = Number of edges in the longest path connecting K to any leaf node. " }, { "code": null, "e": 28276, "s": 28215, "text": "Follow the steps below to find the height of the given node:" }, { "code": null, "e": 28308, "s": 28276, "text": "If the tree is empty, print -1." }, { "code": null, "e": 28702, "s": 28308, "text": "Otherwise, perform the following steps:Calculate the height of the left subtree recursively.Calculate the height of the right subtree recursively.Update height of the current node by adding 1 to the maximum of the two heights obtained in the previous step. Store the height in a variable, say ans.If the current node is equal to the given node K, print the value of ans as the required answer." }, { "code": null, "e": 28756, "s": 28702, "text": "Calculate the height of the left subtree recursively." 
}, { "code": null, "e": 28811, "s": 28756, "text": "Calculate the height of the right subtree recursively." }, { "code": null, "e": 28963, "s": 28811, "text": "Update height of the current node by adding 1 to the maximum of the two heights obtained in the previous step. Store the height in a variable, say ans." }, { "code": null, "e": 29060, "s": 28963, "text": "If the current node is equal to the given node K, print the value of ans as the required answer." }, { "code": null, "e": 29111, "s": 29060, "text": "Below is the implementation of the above approach:" }, { "code": null, "e": 29115, "s": 29111, "text": "C++" }, { "code": null, "e": 29120, "s": 29115, "text": "Java" }, { "code": null, "e": 29128, "s": 29120, "text": "Python3" }, { "code": null, "e": 29131, "s": 29128, "text": "C#" }, { "code": null, "e": 29142, "s": 29131, "text": "Javascript" }, { "code": "// C++ program for the above approach#include <bits/stdc++.h>using namespace std; // Structure of a Binary Tree Nodestruct Node { int data; Node *left, *right;}; // Utility function to create// a new Binary Tree NodeNode* newNode(int item){ Node* temp = new Node; temp->data = item; temp->left = temp->right = NULL; return temp;} // Function to find the depth of// a given node in a Binary Treeint findDepth(Node* root, int x){ // Base case if (root == NULL) return -1; // Initialize distance as -1 int dist = -1; // Check if x is current node= if ((root->data == x) // Otherwise, check if x is // present in the left subtree || (dist = findDepth(root->left, x)) >= 0 // Otherwise, check if x is // present in the right subtree || (dist = findDepth(root->right, x)) >= 0) // Return depth of the node return dist + 1; return dist;} // Helper function to find the height// of a given node in the binary treeint findHeightUtil(Node* root, int x, int& height){ // Base Case if (root == NULL) { return -1; } // Store the maximum height of // the left and right subtree int leftHeight = findHeightUtil( root->left, x, height); int rightHeight = findHeightUtil( root->right, x, height); // Update height of the current node int ans = max(leftHeight, rightHeight) + 1; // If current node is the required node if (root->data == x) height = ans; return ans;} // Function to find the height of// a given node in a Binary Treeint findHeight(Node* root, int x){ // Store the height of // the given node int h = -1; // Stores height of the Tree int maxHeight = findHeightUtil(root, x, h); // Return the height return h;} // Driver Codeint main(){ // Binary Tree Formation Node* root = newNode(5); root->left = newNode(10); root->right = newNode(15); root->left->left = newNode(20); root->left->right = newNode(25); root->left->right->right = newNode(45); root->right->left = newNode(30); root->right->right = newNode(35); int k = 25; // Function call to find the // depth of a given node cout << \"Depth: \" << findDepth(root, k) << \"\\n\"; // Function call to find the // height of a given node cout << \"Height: \" << findHeight(root, k); return 0;}", "e": 31566, "s": 29142, "text": null }, { "code": "// Java program for the above approachimport java.util.*;class GFG{ static int height = -1; // Structure of a Binary Tree Nodestatic class Node{ int data; Node left; Node right;}; // Utility function to create// a new Binary Tree Nodestatic Node newNode(int item){ Node temp = new Node(); temp.data = item; temp.left = temp.right = null; return temp;} // Function to find the depth of// a given node in a Binary Treestatic int findDepth(Node root, int x){ // Base case if (root == 
null) return -1; // Initialize distance as -1 int dist = -1; // Check if x is current node= if ((root.data == x)|| // Otherwise, check if x is // present in the left subtree (dist = findDepth(root.left, x)) >= 0 || // Otherwise, check if x is // present in the right subtree (dist = findDepth(root.right, x)) >= 0) // Return depth of the node return dist + 1; return dist;} // Helper function to find the height// of a given node in the binary treestatic int findHeightUtil(Node root, int x){ // Base Case if (root == null) { return -1; } // Store the maximum height of // the left and right subtree int leftHeight = findHeightUtil(root.left, x); int rightHeight = findHeightUtil(root.right, x); // Update height of the current node int ans = Math.max(leftHeight, rightHeight) + 1; // If current node is the required node if (root.data == x) height = ans; return ans;} // Function to find the height of// a given node in a Binary Treestatic int findHeight(Node root, int x){ // Stores height of the Tree findHeightUtil(root, x); // Return the height return height;} // Driver Codepublic static void main(String []args){ // Binary Tree Formation Node root = newNode(5); root.left = newNode(10); root.right = newNode(15); root.left.left = newNode(20); root.left.right = newNode(25); root.left.right.right = newNode(45); root.right.left = newNode(30); root.right.right = newNode(35); int k = 25; // Function call to find the // depth of a given node System.out.println(\"Depth: \" + findDepth(root, k)); // Function call to find the // height of a given node System.out.println(\"Height: \" + findHeight(root, k));}} // This code is contributed by SURENDRA_GANGWAR", "e": 33978, "s": 31566, "text": null }, { "code": "# Python3 program for the above approach # Structure of a Binary Tree Nodeclass Node: def __init__(self, x): self.data = x self.left = None self.right = None # Function to find the depth of# a given node in a Binary Treedef findDepth(root, x): # Base case if (root == None): return -1 # Initialize distance as -1 dist = -1 # Check if x is current node= if (root.data == x): return dist + 1 dist = findDepth(root.left, x) if dist >= 0: return dist + 1 dist = findDepth(root.right, x) if dist >= 0: return dist + 1 return dist # Helper function to find the height# of a given node in the binary treedef findHeightUtil(root, x): global height # Base Case if (root == None): return -1 # Store the maximum height of # the left and right subtree leftHeight = findHeightUtil(root.left, x) rightHeight = findHeightUtil(root.right, x) # Update height of the current node ans = max(leftHeight, rightHeight) + 1 # If current node is the required node if (root.data == x): height = ans return ans # Function to find the height of# a given node in a Binary Treedef findHeight(root, x): global height # Stores height of the Tree maxHeight = findHeightUtil(root, x) # Return the height return height # Driver Codeif __name__ == '__main__': # Binary Tree Formation height = -1 root = Node(5) root.left = Node(10) root.right = Node(15) root.left.left = Node(20) root.left.right = Node(25) root.left.right.right = Node(45) root.right.left = Node(30) root.right.right = Node(35) k = 25 # Function call to find the # depth of a given node print(\"Depth: \",findDepth(root, k)) # Function call to find the # height of a given node print(\"Height: \",findHeight(root, k)) # This code is contributed by mohit kumar 29.", "e": 35903, "s": 33978, "text": null }, { "code": "// C# program for the above approachusing System;using System.Collections.Generic; class 
GFG{ static int height = -1; // Structure of a Binary Tree Nodeclass Node{ public int data; public Node left; public Node right;}; // Utility function to create// a new Binary Tree Nodestatic Node newNode(int item){ Node temp = new Node(); temp.data = item; temp.left = temp.right = null; return temp;} // Function to find the depth of// a given node in a Binary Treestatic int findDepth(Node root, int x){ // Base case if (root == null) return -1; // Initialize distance as -1 int dist = -1; // Check if x is current node= if ((root.data == x)|| // Otherwise, check if x is // present in the left subtree (dist = findDepth(root.left, x)) >= 0 || // Otherwise, check if x is // present in the right subtree (dist = findDepth(root.right, x)) >= 0) // Return depth of the node return dist + 1; return dist;} // Helper function to find the height// of a given node in the binary treestatic int findHeightUtil(Node root, int x){ // Base Case if (root == null) { return -1; } // Store the maximum height of // the left and right subtree int leftHeight = findHeightUtil(root.left, x); int rightHeight = findHeightUtil(root.right, x); // Update height of the current node int ans = Math.Max(leftHeight, rightHeight) + 1; // If current node is the required node if (root.data == x) height = ans; return ans;} // Function to find the height of// a given node in a Binary Treestatic int findHeight(Node root, int x){ // Stores height of the Tree findHeightUtil(root, x); // Return the height return height;} // Driver Codepublic static void Main(){ // Binary Tree Formation Node root = newNode(5); root.left = newNode(10); root.right = newNode(15); root.left.left = newNode(20); root.left.right = newNode(25); root.left.right.right = newNode(45); root.right.left = newNode(30); root.right.right = newNode(35); int k = 25; // Function call to find the // depth of a given node Console.WriteLine(\"Depth: \" + findDepth(root, k)); // Function call to find the // height of a given node Console.WriteLine(\"Height: \" + findHeight(root, k));}} // This code is contributed by ipg2016107", "e": 38334, "s": 35903, "text": null }, { "code": "<script> // JavaScript program for the above approach var height = -1; // Structure of a Binary Tree Nodeclass Node{ constructor() { this.data = 0; this.left = null; this.right = null; }}; // Utility function to create// a new Binary Tree Nodefunction newNode(item){ var temp = new Node(); temp.data = item; temp.left = temp.right = null; return temp;} // Function to find the depth of// a given node in a Binary Treefunction findDepth(root, x){ // Base case if (root == null) return -1; // Initialize distance as -1 var dist = -1; // Check if x is current node= if ((root.data == x)|| // Otherwise, check if x is // present in the left subtree (dist = findDepth(root.left, x)) >= 0 || // Otherwise, check if x is // present in the right subtree (dist = findDepth(root.right, x)) >= 0) // Return depth of the node return dist + 1; return dist;} // Helper function to find the height// of a given node in the binary treefunction findHeightUtil(root, x){ // Base Case if (root == null) { return -1; } // Store the maximum height of // the left and right subtree var leftHeight = findHeightUtil(root.left, x); var rightHeight = findHeightUtil(root.right, x); // Update height of the current node var ans = Math.max(leftHeight, rightHeight) + 1; // If current node is the required node if (root.data == x) height = ans; return ans;} // Function to find the height of// a given node in a Binary Treefunction findHeight(root, x){ // 
Stores height of the Tree findHeightUtil(root, x); // Return the height return height;} // Driver Code// Binary Tree Formationvar root = newNode(5);root.left = newNode(10);root.right = newNode(15);root.left.left = newNode(20);root.left.right = newNode(25);root.left.right.right = newNode(45);root.right.left = newNode(30);root.right.right = newNode(35);var k = 25;// Function call to find the// depth of a given nodedocument.write(\"Depth: \" + findDepth(root, k)+\"<br>\");// Function call to find the// height of a given nodedocument.write(\"Height: \" + findHeight(root, k)); </script>", "e": 40583, "s": 38334, "text": null }, { "code": null, "e": 40602, "s": 40583, "text": "Depth: 2\nHeight: 1" }, { "code": null, "e": 40647, "s": 40604, "text": "Time Complexity: O(N)Auxiliary Space: O(1)" }, { "code": null, "e": 40664, "s": 40649, "text": "mohit kumar 29" }, { "code": null, "e": 40675, "s": 40664, "text": "ipg2016107" }, { "code": null, "e": 40692, "s": 40675, "text": "SURENDRA_GANGWAR" }, { "code": null, "e": 40701, "s": 40692, "text": "noob2000" }, { "code": null, "e": 40713, "s": 40701, "text": "Binary Tree" }, { "code": null, "e": 40730, "s": 40713, "text": "Height of a Tree" }, { "code": null, "e": 40746, "s": 40730, "text": "Tree Traversals" }, { "code": null, "e": 40756, "s": 40746, "text": "Recursion" }, { "code": null, "e": 40766, "s": 40756, "text": "Searching" }, { "code": null, "e": 40771, "s": 40766, "text": "Tree" }, { "code": null, "e": 40781, "s": 40771, "text": "Searching" }, { "code": null, "e": 40791, "s": 40781, "text": "Recursion" }, { "code": null, "e": 40796, "s": 40791, "text": "Tree" }, { "code": null, "e": 40894, "s": 40796, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 40935, "s": 40894, "text": "Practice Questions for Recursion | Set 1" }, { "code": null, "e": 40974, "s": 40935, "text": "Sum of natural numbers using recursion" }, { "code": null, "e": 41036, "s": 40974, "text": "Recursively Reversing a linked list (A simple implementation)" }, { "code": null, "e": 41073, "s": 41036, "text": "Generating subarrays using recursion" }, { "code": null, "e": 41098, "s": 41073, "text": "Recursive Insertion Sort" }, { "code": null, "e": 41112, "s": 41098, "text": "Binary Search" }, { "code": null, "e": 41180, "s": 41112, "text": "Maximum and minimum of an array using minimum number of comparisons" }, { "code": null, "e": 41194, "s": 41180, "text": "Linear Search" }, { "code": null, "e": 41242, "s": 41194, "text": "Search an element in a sorted and rotated array" } ]
Check for Presence of Common Elements between Objects in R Programming - is.element() Function - GeeksforGeeks
15 Jun, 2020

The is.element() function in R Language is used to check whether the elements of the first object are present in the second object. It returns TRUE for each element of the first object that also occurs in the second.

Syntax: is.element(x, y)

Parameters:
x and y: objects containing a sequence of items

Example 1:

# R program to illustrate
# the use of the is.element() function

# Vector 1
x1 <- c(1, 2, 3)

# Vector 2
x2 <- c(1:6)

# Calling the is.element() function
is.element(x1, x2)
is.element(x2, x1)

Output:

[1] TRUE TRUE TRUE
[1] TRUE TRUE TRUE FALSE FALSE FALSE

Example 2:

# R program to illustrate
# the use of the is.element() function

# Data frame 1
data_x <- data.frame(x1 = c(5, 3, 7),
                     x2 = c(1, 4, 2))

# Data frame 2
data_y <- data.frame(y1 = c(2, 3, 4),
                     y2 = c(1, 4, 2),
                     y3 = c(3, 4, 5))

# Calling the is.element() function
is.element(data_x, data_y)
is.element(data_y, data_x)

Output:

[1] FALSE TRUE
[1] FALSE TRUE FALSE
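Note that is.element(x, y) behaves identically to the %in% operator, so the same membership check can be written either way. The short sketch below is an added illustration that reuses the vectors from Example 1; it is not part of the original examples.

# Equivalent membership check with the %in% operator
x1 <- c(1, 2, 3)
x2 <- c(1:6)

is.element(x1, x2)                          # [1] TRUE TRUE TRUE
x1 %in% x2                                  # [1] TRUE TRUE TRUE

# Both forms return identical results
identical(is.element(x1, x2), x1 %in% x2)   # [1] TRUE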
[ { "code": null, "e": 26273, "s": 26245, "text": "\n15 Jun, 2020" }, { "code": null, "e": 26430, "s": 26273, "text": "is.element() function in R Language is used to check if elements of first Objects are present in second Object or not. It returns TRUE for each equal value." }, { "code": null, "e": 26455, "s": 26430, "text": "Syntax: is.element(x, y)" }, { "code": null, "e": 26506, "s": 26455, "text": "Parameters:x and y: Objects with sequence of items" }, { "code": null, "e": 26517, "s": 26506, "text": "Example 1:" }, { "code": "# R program to illustrate # the use of is.element() function # Vector 1 x1 <- c(1, 2, 3) # Vector 2 x2 <- c(1:6) # Calling is.element() Function is.element(x1, x2) is.element(x2, x1) ", "e": 26734, "s": 26517, "text": null }, { "code": null, "e": 26742, "s": 26734, "text": "Output:" }, { "code": null, "e": 26802, "s": 26742, "text": "[1] TRUE TRUE TRUE\n[1] TRUE TRUE TRUE FALSE FALSE FALSE\n" }, { "code": null, "e": 26813, "s": 26802, "text": "Example 2:" }, { "code": "# R program to illustrate # the use of is.element() function # Data frame 1 data_x <- data.frame(x1 = c(5, 3, 7), x2 = c(1, 4, 2)) # Data frame 2 data_y <- data.frame(y1 = c(2, 3, 4), y2 = c(1, 4, 2), y3 = c(3, 4, 5)) # Calling is.element() Functionis.element(data_x, data_y) is.element(data_y, data_x) ", "e": 27206, "s": 26813, "text": null }, { "code": null, "e": 27214, "s": 27206, "text": "Output:" }, { "code": null, "e": 27253, "s": 27214, "text": "[1] FALSE TRUE\n[1] FALSE TRUE FALSE\n" }, { "code": null, "e": 27271, "s": 27253, "text": "R Array-Functions" }, { "code": null, "e": 27292, "s": 27271, "text": "R DataFrame-Function" }, { "code": null, "e": 27310, "s": 27292, "text": "R Matrix-Function" }, { "code": null, "e": 27328, "s": 27310, "text": "R Object-Function" }, { "code": null, "e": 27346, "s": 27328, "text": "R Vector-Function" }, { "code": null, "e": 27357, "s": 27346, "text": "R Language" }, { "code": null, "e": 27455, "s": 27357, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27507, "s": 27455, "text": "Filter data by multiple conditions in R using Dplyr" }, { "code": null, "e": 27539, "s": 27507, "text": "Loops in R (for, while, repeat)" }, { "code": null, "e": 27591, "s": 27539, "text": "Change Color of Bars in Barchart using ggplot2 in R" }, { "code": null, "e": 27635, "s": 27591, "text": "How to change Row Names of DataFrame in R ?" }, { "code": null, "e": 27670, "s": 27635, "text": "Group by function in R using Dplyr" }, { "code": null, "e": 27708, "s": 27670, "text": "How to Change Axis Scales in R Plots?" }, { "code": null, "e": 27766, "s": 27708, "text": "How to Split Column Into Multiple Columns in R DataFrame?" }, { "code": null, "e": 27804, "s": 27766, "text": "R Programming Language - Introduction" }, { "code": null, "e": 27840, "s": 27804, "text": "K-Means Clustering in R Programming" } ]
PostgreSQL - UUID Data Type - GeeksforGeeks
22 Feb, 2021

UUID is an abbreviation for Universally Unique Identifier, defined by RFC 4122, and has a size of 128 bits. It is created using algorithms designed to always generate a unique value.

PostgreSQL has its own UUID data type and provides modules to generate UUID values. UUIDs are generally used in distributed systems, as they guarantee uniqueness better than the SERIAL data type, which produces unique values only within a single database. PostgreSQL enables you to store and compare UUID values, but its core does not include functions for producing them. Instead, it depends on third-party modules that implement standard UUID-generation algorithms. For example, the uuid-ossp module offers some handy functions for generating UUIDs.

To install the "uuid-ossp" extension, use the below command:

CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

To generate a UUID value based on the combination of the computer's MAC address, the current timestamp, and a random value, the uuid_generate_v1() function can be used as shown below:

SELECT uuid_generate_v1();

It returns a single UUID value; the exact value differs on every call.

To generate a UUID value based solely on random numbers, the uuid_generate_v4() function can be used as shown below:

SELECT uuid_generate_v4();

It also returns a single UUID value that differs on every call.

Example:
In this example we will make a table whose primary key is of the UUID data type. In addition, the values of the primary key column will be produced automatically through the uuid_generate_v4() function.

First, create a contacts table using the following statement:

CREATE TABLE contacts (
    contact_id uuid DEFAULT uuid_generate_v4 (),
    first_name VARCHAR NOT NULL,
    last_name VARCHAR NOT NULL,
    email VARCHAR NOT NULL,
    phone VARCHAR,
    PRIMARY KEY (contact_id)
);

Now we insert some data into our contacts table as below:

INSERT INTO contacts (
    first_name,
    last_name,
    email,
    phone
)
VALUES
    (
        'Raju',
        'Kumar',
        'rajukumar@gmail.com',
        '408-237-2345'
    ),
    (
        'Nikhil',
        'Aggarwal',
        'nikhilaggarwal@gmail.com',
        '408-237-2344'
    ),
    (
        'Anshul',
        'Aggarwal',
        'anagg@hotmail.com',
        '408-237-2343'
    );

Now we query all rows in the contacts table using the below command:

SELECT * FROM contacts;

The result contains one row per contact, with the contact_id column filled with automatically generated UUID values.
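Since UUID is an ordinary comparable column type, the generated keys can be used directly in WHERE clauses and joins. The snippet below is an illustrative sketch added here: the UUID literal is a made-up placeholder, so replace it with an actual contact_id value returned by the SELECT statement above.

-- Look up a single contact by its UUID primary key.
-- 'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11' is only a placeholder value.
SELECT first_name, last_name, email
FROM contacts
WHERE contact_id = 'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid;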
[ { "code": null, "e": 25167, "s": 25139, "text": "\n22 Feb, 2021" }, { "code": null, "e": 25347, "s": 25167, "text": "UUID is an abbreviation for Universal Unique Identifier defined by RFC 4122 and has a size of 128-bit. It is created using internal algorithms that always generate a unique value." }, { "code": null, "e": 25944, "s": 25347, "text": "PostgreSQL has its own UUID data type and provides modules to generate them. UUID is generally used in distributed systems as it guarantees a singularity better than the SERIAL data type which produces only singular values within a sole database.PostgreSQL enables you to store and compare UUID values but it does not incorporate functions for producing the UUID values in its core. Instead, it depends on the third-party modules that offer certain algorithms to generate UUIDs. For example the uuid-ossp module offers some handy functions that carry out standard algorithms for generating UUIDs." }, { "code": null, "e": 26004, "s": 25944, "text": "To install the “uuid-ossp” extension use the below command:" }, { "code": null, "e": 26048, "s": 26004, "text": "CREATE EXTENSION IF NOT EXISTS \"uuid-ossp\";" }, { "code": null, "e": 26222, "s": 26048, "text": "For generating a UUID values based on the blend of computer’s MAC address, present timestamp, and a random value, the uuid_generate_v1() function can be used as shown below:" }, { "code": null, "e": 26249, "s": 26222, "text": "SELECT uuid_generate_v1();" }, { "code": null, "e": 26303, "s": 26249, "text": "It would result in output similar to the image below:" }, { "code": null, "e": 26423, "s": 26303, "text": "For generating a UUID value solely based on random numbers, the uuid_generate_v4() function can be used as shown below:" }, { "code": null, "e": 26450, "s": 26423, "text": "SELECT uuid_generate_v4();" }, { "code": null, "e": 26772, "s": 26450, "text": "It would result in output similar to the image below:Example:In this example we will make a table whose primary key is a UUID data type. In supplement, the values of the primary key column will be produced automatically through the uuid_generate_v4() function.First, create a contacts table using the following statement:" }, { "code": null, "e": 26989, "s": 26772, "text": "CREATE TABLE contacts (\n contact_id uuid DEFAULT uuid_generate_v4 (),\n first_name VARCHAR NOT NULL,\n last_name VARCHAR NOT NULL,\n email VARCHAR NOT NULL,\n phone VARCHAR,\n PRIMARY KEY (contact_id)\n);" }, { "code": null, "e": 27045, "s": 26989, "text": "Now we insert some data to our contacts table as below:" }, { "code": null, "e": 27442, "s": 27045, "text": "INSERT INTO contacts (\n first_name,\n last_name,\n email,\n phone\n)\nVALUES\n (\n 'Raju',\n 'Kumar',\n 'rajukumar@gmail.com',\n '408-237-2345'\n ),\n (\n 'Nikhil',\n 'Aggarwal',\n 'nikhilaggarwal@gmail.com',\n '408-237-2344'\n ),\n (\n 'Anshul',\n 'Aggarwal',\n 'anagg@hotmail.com',\n '408-237-2343'\n );" }, { "code": null, "e": 27511, "s": 27442, "text": "Now we query all rows in the contacts table using the below command:" }, { "code": null, "e": 27543, "s": 27511, "text": "SELECT\n *\nFROM\n contacts;" }, { "code": null, "e": 27551, "s": 27543, "text": "Output:" }, { "code": null, "e": 27562, "s": 27551, "text": "postgreSQL" }, { "code": null, "e": 27583, "s": 27562, "text": "postgreSQL-dataTypes" }, { "code": null, "e": 27594, "s": 27583, "text": "PostgreSQL" }, { "code": null, "e": 27692, "s": 27594, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 27747, "s": 27692, "text": "PostgreSQL - Create Auto-increment Column using SERIAL" }, { "code": null, "e": 27777, "s": 27747, "text": "PostgreSQL - CREATE PROCEDURE" }, { "code": null, "e": 27796, "s": 27777, "text": "PostgreSQL - Joins" }, { "code": null, "e": 27825, "s": 27796, "text": "PostgreSQL - GROUP BY clause" }, { "code": null, "e": 27849, "s": 27825, "text": "PostgreSQL - DROP INDEX" }, { "code": null, "e": 27879, "s": 27849, "text": "PostgreSQL - REPLACE Function" }, { "code": null, "e": 27903, "s": 27879, "text": "PostgreSQL - Copy Table" }, { "code": null, "e": 27930, "s": 27903, "text": "PostgreSQL - CREATE SCHEMA" }, { "code": null, "e": 27956, "s": 27930, "text": "PostgreSQL - Rename Table" } ]
Python Tkinter | Create LabelFrame and add widgets to it - GeeksforGeeks
02 Apr, 2019

Tkinter is a Python module used to create GUI (Graphical User Interface) applications. It is widely used and ships with Python. It provides various types of widgets that make a GUI more user-friendly and attractive and extend its functionality.

A LabelFrame can be created as follows:

-> import tkinter
-> create the root window
-> create the LabelFrame as a child of root

label_frame = ttk.LabelFrame(parent, option=value, ...)

Code #1: Creating a LabelFrame and adding labels to it.

# Import only those methods which are mentioned below;
# this way of importing methods is efficient
from tkinter import Tk, mainloop, LEFT, TOP
from tkinter.ttk import *

# Creating tkinter window with fixed geometry
root = Tk()
root.geometry('250x150')

# This will create a LabelFrame
label_frame = LabelFrame(root, text = 'This is Label Frame')
label_frame.pack(expand = 'yes', fill = 'both')

label1 = Label(label_frame, text = '1. This is a Label.')
label1.place(x = 0, y = 5)

label2 = Label(label_frame, text = '2. This is another Label.')
label2.place(x = 0, y = 35)

label3 = Label(label_frame, text = '3. We can add multiple\n widgets in it.')
label3.place(x = 0, y = 65)

# This creates an infinite loop which waits for any
# interrupt (like keyboard or mouse) to terminate
mainloop()

Output: a window containing the LabelFrame with the three labels inside it.

Code #2: Adding Button and Checkbutton widgets inside the LabelFrame.

# Import only those methods which are mentioned below;
# this way of importing methods is efficient
from tkinter import Tk, mainloop, LEFT, TOP
from tkinter.ttk import *

# Creating tkinter window with fixed geometry
root = Tk()
root.geometry('250x150')

# This will create a LabelFrame
label_frame = LabelFrame(root, text = 'This is Label Frame')
label_frame.pack(expand = 'yes', fill = 'both')

# Buttons
btn1 = Button(label_frame, text = 'Button 1')
btn1.place(x = 30, y = 10)
btn2 = Button(label_frame, text = 'Button 2')
btn2.place(x = 130, y = 10)

# Checkbuttons
chkbtn1 = Checkbutton(label_frame, text = 'Checkbutton 1')
chkbtn1.place(x = 30, y = 50)
chkbtn2 = Checkbutton(label_frame, text = 'Checkbutton 2')
chkbtn2.place(x = 30, y = 80)

# This creates an infinite loop which waits for any
# interrupt (like keyboard or mouse) to terminate
mainloop()

Output: a window with the two buttons and two checkbuttons placed inside the LabelFrame.

Note: One can also nest a LabelFrame inside another LabelFrame, and a LabelFrame can be styled just like any other widget; a short sketch of nesting follows below.
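For instance, here is a minimal sketch of one LabelFrame nested inside another (an illustrative example; the widget names and geometry are arbitrary):

# Illustrative sketch: nesting one LabelFrame inside another
from tkinter import Tk, mainloop
from tkinter.ttk import LabelFrame, Label

root = Tk()
root.geometry('250x150')

# Outer LabelFrame attached to the root window
outer = LabelFrame(root, text = 'Outer LabelFrame')
outer.pack(expand = 'yes', fill = 'both')

# Inner LabelFrame placed inside the outer one
inner = LabelFrame(outer, text = 'Inner LabelFrame')
inner.pack(expand = 'yes', fill = 'both', padx = 10, pady = 10)

Label(inner, text = 'A Label inside the nested frame.').pack()

mainloop()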
[ { "code": null, "e": 25537, "s": 25509, "text": "\n02 Apr, 2019" }, { "code": null, "e": 25842, "s": 25537, "text": "Tkinter is a Python module which is used to create GUI (Graphical User Interface) applications. It is a widely used module which comes along with the Python. It consists of various types of widgets which can be used to make GUI more user-friendly and attractive as well as functionality can be increased." }, { "code": null, "e": 25880, "s": 25842, "text": "LabelFrame can be created as follows:" }, { "code": null, "e": 25952, "s": 25880, "text": "-> import tkinter\n-> create root\n-> create LabelFrame as child of root\n" }, { "code": null, "e": 26011, "s": 25952, "text": "label_frame = ttk.LabelFrame(parent, value = options, ...)" }, { "code": null, "e": 26068, "s": 26011, "text": "Code #1: Creating LabelFrame and adding a message to it." }, { "code": "# Import only those methods# which are mentioned below, this way of# importing methods is efficientfrom tkinter import Tk, mainloop, LEFT, TOPfrom tkinter.ttk import * # Creating tkinter window with fixed geometryroot = Tk()root.geometry('250x150') # This will create a LabelFramelabel_frame = LabelFrame(root, text = 'This is Label Frame')label_frame.pack(expand = 'yes', fill = 'both') label1 = Label(label_frame, text = '1. This is a Label.')label1.place(x = 0, y = 5) label2 = Label(label_frame, text = '2. This is another Label.')label2.place(x = 0, y = 35) label3 = Label(label_frame, text = '3. We can add multiple\\n widgets in it.') label3.place(x = 0, y = 65) # This creates an infinite loop which generally# waits for any interrupt (like keyboard or# mouse) to terminatemainloop()", "e": 26879, "s": 26068, "text": null }, { "code": null, "e": 26953, "s": 26879, "text": "Output: Code #2: Adding Button and CheckButton widgets inside LabelFrame." }, { "code": "# Import only those methods# which are mentioned below, this way of# importing methods is efficientfrom tkinter import Tk, mainloop, LEFT, TOPfrom tkinter.ttk import * # Creating tkinter window with fixed geometryroot = Tk()root.geometry('250x150') # This will create a LabelFramelabel_frame = LabelFrame(root, text = 'This is Label Frame')label_frame.pack(expand = 'yes', fill = 'both') # Buttonsbtn1 = Button(label_frame, text = 'Button 1')btn1.place(x = 30, y = 10)btn2 = Button(label_frame, text = 'Button 2')btn2.place(x = 130, y = 10) # Checkbuttonschkbtn1 = Checkbutton(label_frame, text = 'Checkbutton 1')chkbtn1.place(x = 30, y = 50)chkbtn2 = Checkbutton(label_frame, text = 'Checkbutton 2')chkbtn2.place(x = 30, y = 80) # This creates infinite loop which generally# waits for any interrupt (like keyboard or# mouse) to terminatemainloop()", "e": 27807, "s": 26953, "text": null }, { "code": null, "e": 27815, "s": 27807, "text": "Output:" }, { "code": null, "e": 27973, "s": 27815, "text": "Note: One can also add another LabelFrame inside another LabelFrame, as well as one can do styling of any LabelFrame like we do the styling of other widgets." }, { "code": null, "e": 27984, "s": 27973, "text": "Python-gui" }, { "code": null, "e": 27999, "s": 27984, "text": "Python-tkinter" }, { "code": null, "e": 28006, "s": 27999, "text": "Python" }, { "code": null, "e": 28104, "s": 28006, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 28136, "s": 28104, "text": "How to Install PIP on Windows ?" 
}, { "code": null, "e": 28178, "s": 28136, "text": "Check if element exists in list in Python" }, { "code": null, "e": 28220, "s": 28178, "text": "How To Convert Python Dictionary To JSON?" }, { "code": null, "e": 28247, "s": 28220, "text": "Python Classes and Objects" }, { "code": null, "e": 28303, "s": 28247, "text": "How to drop one or multiple columns in Pandas Dataframe" }, { "code": null, "e": 28325, "s": 28303, "text": "Defaultdict in Python" }, { "code": null, "e": 28364, "s": 28325, "text": "Python | Get unique values from a list" }, { "code": null, "e": 28395, "s": 28364, "text": "Python | os.path.join() method" }, { "code": null, "e": 28424, "s": 28395, "text": "Create a directory in Python" } ]
CLR Parser (with Examples) - GeeksforGeeks
05 Jan, 2022

LR parsers: LR(k) parsing is an efficient bottom-up syntax analysis technique that can be used to parse a large class of context-free grammars.

L stands for left-to-right scanning
R stands for rightmost derivation in reverse
k stands for the number of input symbols of lookahead

For more reference, kindly visit https://www.geeksforgeeks.org/lr-parser/

Advantages of LR parsing:

It recognises virtually all programming language constructs for which a CFG can be written.
It is able to detect syntactic errors.
It is an efficient, non-backtracking shift-reduce parsing method.

Types of LR parsing methods:

SLR
CLR
LALR

CLR Parser: The CLR parser stands for canonical LR parser. It is a more powerful LR parser and it makes use of lookahead symbols. This method uses a large set of items called LR(1) items. The main difference between LR(0) and LR(1) items is that, in LR(1) items, it is possible to carry more information in a state, which rules out useless reduction states. This extra information is incorporated into the state by the lookahead symbol. The general form of an item becomes [A->α.B, a], where A->α.B is a production and a is a terminal or the right end marker $.

LR(1) items = LR(0) items + lookahead

How is the lookahead attached to a production?

CASE 1 –

A->α.BC, a

Suppose this is the 0th production. Since ' . ' precedes B, we have to write B's productions as well.

B->.D [1st production]

Suppose this is B's production. The lookahead of this production is found by looking at the previous production, i.e. the 0th production. Whatever follows B, we take FIRST(of that value); that is the lookahead of the 1st production. Here, in the 0th production, C comes after B. Assume FIRST(C) = d; then the 1st production becomes

B->.D, d

CASE 2 – Now suppose the 0th production looks like this:

A->α.B, a

Here, there is nothing after B, so the lookahead of the 0th production becomes the lookahead of the 1st production, i.e.

B->.D, a

CASE 3 – Assume a production A->a|b

A->a,$ [0th production]
A->b,$ [1st production]

Here, the 1st production is a part of the previous production, so the lookahead will be the same as that of its previous production. These are the rules of lookahead.

Steps for constructing the CLR parsing table:

Writing the augmented grammar
Finding the LR(1) collection of items
Defining the 2 functions, action [for the terminals] and goto [for the non-terminals], in the CLR parsing table

EXAMPLE: Construct a CLR parsing table for the given context-free grammar

S-->AA
A-->aA|b

Solution:

STEP 1 – Find the augmented grammar

The augmented grammar of the given grammar is:

S'-->.S ,$ [0th production]
S-->.AA ,$ [1st production]
A-->.aA ,a|b [2nd production]
A-->.b ,a|b [3rd production]

Let's apply the rules of lookahead to the above productions.

The initial lookahead is always $.
The 1st production came into existence because of ' . ' before 'S' in the 0th production. There is nothing after 'S', so the lookahead of the 0th production becomes the lookahead of the 1st production, i.e. S-->.AA ,$
The 2nd production came into existence because of ' . ' before 'A' in the 1st production. After 'A' there is 'A', and FIRST(A) is a,b. Therefore, the lookahead for the 2nd production becomes a|b.
The 3rd production is a part of the 2nd production, so the lookahead will be the same.

STEP 2 – Find the LR(1) collection of items

The collection consists of the item sets I0 through I9; the states and the transitions between them are described below.
We will understand everything one by one.

The terminals of this grammar are {a,b}.
The non-terminals of this grammar are {S,A}.

RULE –

If any non-terminal has ' . ' preceding it, we have to write all of its productions and add ' . ' preceding each of them.
From each state to the next state, the ' . ' shifts one place to the right.
All the rules of lookahead apply here.

I0 consists of the augmented grammar.

I0 goes to I1 when the ' . ' of the 0th production is shifted towards the right of S (S'->S.). This is the accept state. S is seen by the compiler. Since I1 is a part of the 0th production, the lookahead is the same, i.e. $.

I0 goes to I2 when the ' . ' of the 1st production is shifted towards the right (S->A.A). A is seen by the compiler. Since I2 is a part of the 1st production, the lookahead is the same, i.e. $.

I0 goes to I3 when the ' . ' of the 2nd production is shifted towards the right (A->a.A). a is seen by the compiler. Since I3 is a part of the 2nd production, the lookahead is the same, i.e. a|b.

I0 goes to I4 when the ' . ' of the 3rd production is shifted towards the right (A->b.). b is seen by the compiler. Since I4 is a part of the 3rd production, the lookahead is the same, i.e. a|b.

I2 goes to I5 when the ' . ' of the 1st production is shifted towards the right (S->AA.). A is seen by the compiler. Since I5 is a part of the 1st production, the lookahead is the same, i.e. $.

I2 goes to I6 when the ' . ' of the 2nd production is shifted towards the right (A->a.A). a is seen by the compiler. Since I6 is a part of the 2nd production, the lookahead is the same, i.e. $.

I2 goes to I7 when the ' . ' of the 3rd production is shifted towards the right (A->b.). b is seen by the compiler. Since I7 is a part of the 3rd production, the lookahead is the same, i.e. $.

I3 goes to I3 when the ' . ' of the 2nd production is shifted towards the right (A->a.A). a is seen by the compiler. Since I3 is a part of the 2nd production, the lookahead is the same, i.e. a|b.

I3 goes to I8 when the ' . ' of the 2nd production is shifted towards the right (A->aA.). A is seen by the compiler. Since I8 is a part of the 2nd production, the lookahead is the same, i.e. a|b.

I6 goes to I9 when the ' . ' of the 2nd production is shifted towards the right (A->aA.). A is seen by the compiler. Since I9 is a part of the 2nd production, the lookahead is the same, i.e. $.

I6 goes to I6 when the ' . ' of the 2nd production is shifted towards the right (A->a.A). a is seen by the compiler. Since I6 is a part of the 2nd production, the lookahead is the same, i.e. $.

I6 goes to I7 when the ' . ' of the 3rd production is shifted towards the right (A->b.). b is seen by the compiler. Since I7 is a part of the 3rd production, the lookahead is the same, i.e. $.

STEP 3 – Define the 2 functions, action [for the terminals] and goto [for the non-terminals], in the parsing table. The entries of the CLR parsing table are filled in as follows.

$ is the end-of-input marker, and the accept entry appears in its column.
0,1,2,3,4,5,6,7,8,9 denote I0,I1,I2,I3,I4,I5,I6,I7,I8,I9.

I0 gives A in I2, so 2 is added to the A column and 0th row.
I0 gives S in I1, so 1 is added to the S column and 0th row.
Similarly, 5 is written in the A column and 2nd row, 8 is written in the A column and 3rd row, and 9 is written in the A column and 6th row.
I0 gives a in I3, so S3 (shift 3) is added to the a column and 0th row.
I0 gives b in I4, so S4 (shift 4) is added to the b column and 0th row.
Similarly, S6 (shift 6) is added to the a column in rows 2 and 6, S7 (shift 7) is added to the b column in rows 2 and 6, S3 (shift 3) is added to the a column in row 3, and S4 (shift 4) is added to the b column in row 3.
I4 is a reduce state, as ' . ' is at the end. I4 corresponds to the 3rd production of the grammar, so we write r3 (reduce 3) in its lookahead columns. The lookaheads of I4 are a and b, so write r3 in the a and b columns of row 4.
I5 is a reduce state, as ' . ' is at the end. I5 corresponds to the 1st production of the grammar, so we write r1 (reduce 1) in its lookahead columns. The lookahead of I5 is $, so write r1 in the $ column of row 5.
Similarly, I7 reduces by the 3rd production with lookahead $, so write r3 in the $ column of row 7; write r2 in the a and b columns of row 8; and write r2 in the $ column of row 9. Finally, the accept entry goes in the $ column of row 1. Putting these entries together completes the ACTION and GOTO tables.
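To make the finished table concrete, here is a minimal Python sketch (an added illustration with hypothetical helper names) that encodes the ACTION and GOTO entries derived above and runs the standard LR shift-reduce loop on two inputs:

# ACTION[state][terminal]: ('s', n) = shift to state n, ('r', p) = reduce by production p, 'acc' = accept
ACTION = {
    0: {'a': ('s', 3), 'b': ('s', 4)},
    1: {'$': 'acc'},
    2: {'a': ('s', 6), 'b': ('s', 7)},
    3: {'a': ('s', 3), 'b': ('s', 4)},
    4: {'a': ('r', 3), 'b': ('r', 3)},
    5: {'$': ('r', 1)},
    6: {'a': ('s', 6), 'b': ('s', 7)},
    7: {'$': ('r', 3)},
    8: {'a': ('r', 2), 'b': ('r', 2)},
    9: {'$': ('r', 2)},
}
# GOTO[state][non-terminal]: state to move to after a reduction
GOTO = {0: {'S': 1, 'A': 2}, 2: {'A': 5}, 3: {'A': 8}, 6: {'A': 9}}
# Production number -> (head non-terminal, length of the body)
PRODS = {1: ('S', 2), 2: ('A', 2), 3: ('A', 1)}

def clr_parse(string):
    stack = [0]                               # stack of states, starting in I0
    tokens = list(string) + ['$']
    i = 0
    while True:
        act = ACTION.get(stack[-1], {}).get(tokens[i])
        if act == 'acc':
            return True                       # input accepted
        if act is None:
            return False                      # syntax error
        kind, arg = act
        if kind == 's':                       # shift: push the new state, consume one token
            stack.append(arg)
            i += 1
        else:                                 # reduce: pop the body, then follow GOTO on the head
            head, body_len = PRODS[arg]
            del stack[len(stack) - body_len:]
            stack.append(GOTO[stack[-1]][head])

print(clr_parse('aabb'))   # True  ('aab' and 'b' are the two A's of S -> AA)
print(clr_parse('aab'))    # False (only a single A can be derived, so it is rejected)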
[ { "code": null, "e": 25665, "s": 25637, "text": "\n05 Jan, 2022" }, { "code": null, "e": 26025, "s": 25665, "text": "LR parsers :It is an efficient bottom up syntax analysis technique that can be used to parse large classes of context-free grammar is called LR(k) parsing. L stands for left to right scanningR stands for rightmost derivation in reverse0 stands for no. of input symbols of lookaheadFor more reference ,kindly visit https://www.geeksforgeeks.org/lr-parser/ " }, { "code": null, "e": 26052, "s": 26025, "text": "Advantages of LR parsing :" }, { "code": null, "e": 26142, "s": 26052, "text": "It recognises virtually all programming language constructs for which CFG can be written" }, { "code": null, "e": 26179, "s": 26142, "text": "It is able to detect syntatic errors" }, { "code": null, "e": 26246, "s": 26179, "text": "It is an efficient non-backtracking shift reducing parsing method." }, { "code": null, "e": 26276, "s": 26246, "text": "Types of LR parsing methods :" }, { "code": null, "e": 26287, "s": 26276, "text": "SLRCLRLALR" }, { "code": null, "e": 26291, "s": 26287, "text": "SLR" }, { "code": null, "e": 26295, "s": 26291, "text": "CLR" }, { "code": null, "e": 26300, "s": 26295, "text": "LALR" }, { "code": null, "e": 26876, "s": 26300, "text": "CLR Parser :The CLR parser stands for canonical LR parser.It is a more powerful LR parser.It makes use of lookahead symbols. This method uses a large set of items called LR(1) items.The main difference between LR(0) and LR(1) items is that,in LR(1) items ,its possible to carry more information in a state,which will rule out useless reduction states.This extra information is incorporated into the state by the lookahead symbol. The general syntax becomes [A->∝.B, a ]where A->∝.B is production and a is a terminal or right end marker $LR(1) items=LR(0) items + look ahead" }, { "code": null, "e": 26926, "s": 26876, "text": "How to add lookahead with the production?CASE 1 –" }, { "code": null, "e": 26938, "s": 26926, "text": "A->∝.BC, a " }, { "code": null, "e": 27046, "s": 26938, "text": "Suppose this is the 0th production.Now, since ‘ . ‘ precedes B,so we have to write B’s productions as well." }, { "code": null, "e": 27069, "s": 27046, "text": "B->.D [1st production]" }, { "code": null, "e": 27384, "s": 27069, "text": "Suppose this is B’s production. The look ahead of this production is given as we look at previous productions ie 0th production. Whatever is after B, we find FIRST(of that value) , that is the lookahead of 1st production.So,here in 0th production, after B, C is there. assume FIRST(C)=d, then 1st production become" }, { "code": null, "e": 27393, "s": 27384, "text": "B->.D, d" }, { "code": null, "e": 27442, "s": 27393, "text": "CASE 2 –Now if the 0th production was like this," }, { "code": null, "e": 27453, "s": 27442, "text": "A->∝.B, a " }, { "code": null, "e": 27575, "s": 27453, "text": "Here, we can see there’s nothing after B. So the lookahead of 0th production will be the lookahead of 1st production. ie-" }, { "code": null, "e": 27584, "s": 27575, "text": "B->.D, a" }, { "code": null, "e": 27619, "s": 27584, "text": "CASE 3 –Assume a production A->a|b" }, { "code": null, "e": 27667, "s": 27619, "text": "A->a,$ [0th production]\nA->b,$ [1st production]" }, { "code": null, "e": 27836, "s": 27667, "text": "Here, the 1st production is a part of the previous production, so the lookahead will be the same as that of its previous production.These are the 2 rules of look ahead." 
}, { "code": null, "e": 27879, "s": 27836, "text": "Steps for constructing CLR parsing table :" }, { "code": null, "e": 28045, "s": 27879, "text": "Writing augmented grammarLR(1) collection of items to be foundDefining 2 functions:goto[list of terminals] and action[list of non-terminals] in the CLR parsing table" }, { "code": null, "e": 28071, "s": 28045, "text": "Writing augmented grammar" }, { "code": null, "e": 28109, "s": 28071, "text": "LR(1) collection of items to be found" }, { "code": null, "e": 28213, "s": 28109, "text": "Defining 2 functions:goto[list of terminals] and action[list of non-terminals] in the CLR parsing table" }, { "code": null, "e": 28286, "s": 28213, "text": "EXAMPLE Construct a CLR parsing table for the given context free grammar" }, { "code": null, "e": 28306, "s": 28286, "text": "S-->AA \nA-->aA|b" }, { "code": null, "e": 28348, "s": 28306, "text": "Solution :STEP 1 – Find augmented grammar" }, { "code": null, "e": 28396, "s": 28348, "text": "The augmented grammar of the given grammar is:-" }, { "code": null, "e": 28527, "s": 28396, "text": "S'-->.S ,$ [0th production] \nS-->.AA ,$ [1st production] \nA-->.aA ,a|b [2nd production] \nA-->.b ,a|b [3rd production]" }, { "code": null, "e": 28586, "s": 28527, "text": "Let’s apply the rule of lookahead to the above productions" }, { "code": null, "e": 28621, "s": 28586, "text": "The initial look ahead is always $" }, { "code": null, "e": 28830, "s": 28621, "text": "Now, the 1st production came into existence because of ‘ . ‘ Before ‘S’ in 0th production.There is nothing after ‘S’, so the lookahead of 0th production will be the lookahead of 1st production. ie: S–>.AA ,$" }, { "code": null, "e": 29029, "s": 28830, "text": "Now, the 2nd production came into existence because of ‘ . ‘ Before ‘A’ in the 1st production.After ‘A’, there’s ‘A’. So, FIRST(A) is a,bTherefore,the look ahead for the 2nd production becomes a|b." }, { "code": null, "e": 29122, "s": 29029, "text": "Now, the 3rd production is a part of the 2nd production.So, the look ahead will be the same." }, { "code": null, "e": 29262, "s": 29122, "text": "STEP 2 – Find LR(0) collection of itemsBelow is the figure showing the LR(0) collection of items. We will understand everything one by one." }, { "code": null, "e": 29345, "s": 29262, "text": "The terminals of this grammar are {a,b}The non-terminals of this grammar are {S,A}" }, { "code": null, "e": 29351, "s": 29345, "text": "RULE-" }, { "code": null, "e": 29595, "s": 29351, "text": "If any non-terminal has ‘ . ‘ preceding it, we have to write all its production and add ‘ . ‘ preceding each of its production.from each state to the next state, the ‘ . ‘ shifts to one place to the right.All the rules of lookahead apply here." }, { "code": null, "e": 29723, "s": 29595, "text": "If any non-terminal has ‘ . ‘ preceding it, we have to write all its production and add ‘ . ‘ preceding each of its production." }, { "code": null, "e": 29802, "s": 29723, "text": "from each state to the next state, the ‘ . ‘ shifts to one place to the right." }, { "code": null, "e": 29841, "s": 29802, "text": "All the rules of lookahead apply here." }, { "code": null, "e": 29890, "s": 29841, "text": "In the figure, I0 consists of augmented grammar." }, { "code": null, "e": 30110, "s": 29890, "text": "Io goes to I1 when ‘ . ‘ of 0th production is shifted towards the right of S(S’->S.). This state is the accept state . S is seen by the compiler. 
Since I1 is a part of the 0th production, the lookahead is the same ie $" }, { "code": null, "e": 30293, "s": 30110, "text": "Io goes to I2 when ‘ . ‘ of 1st production is shifted towards right (S->A.A) . A is seen by the compiler. Since I2 is a part of the 1st production, the lookahead is the same i.e. $." }, { "code": null, "e": 30480, "s": 30293, "text": "I0 goes to I3 when ‘ . ‘ of the 2nd production is shifted towards right (A->a.A) . a is seen by the compiler. Since I3 is a part of the 2nd production, the lookahead is the same ie a|b." }, { "code": null, "e": 30670, "s": 30480, "text": "I0 goes to I4 when ‘ . ‘ of the 3rd production is shifted towards right (A->b.) . b is seen by the compiler. Since I4 is a part of the 3rd production, the lookahead is the same i.e. a | b." }, { "code": null, "e": 30853, "s": 30670, "text": "I2 goes to I5 when ‘ . ‘ of 1st production is shifted towards right (S->AA.) . A is seen by the compiler. Since I5 is a part of the 1st production, the lookahead is the same i.e. $." }, { "code": null, "e": 31040, "s": 30853, "text": "I2 goes to I6 when ‘ . ‘ of 2nd production is shifted towards the right (A->a.A) . A is seen by the compiler. Since I6 is a part of the 2nd production, the lookahead is the same i.e. $." }, { "code": null, "e": 31222, "s": 31040, "text": "I2 goes to I7 when ‘ . ‘ of 3rd production is shifted towards right (A->b.) . A is seen by the compiler. Since I6 is a part of the 3rd production, the lookahead is the same i.e. $." }, { "code": null, "e": 31411, "s": 31222, "text": "I3 goes to I3 when ‘ . ‘ of the 2nd production is shifted towards right (A->a.A) . a is seen by the compiler. Since I3 is a part of the 2nd production, the lookahead is the same i.e. a|b." }, { "code": null, "e": 31600, "s": 31411, "text": "I3 goes to I8 when ‘ . ‘ of 2nd production is shifted towards the right (A->aA.) . A is seen by the compiler. Since I8 is a part of the 2nd production, the lookahead is the same i.e. a|b." }, { "code": null, "e": 31787, "s": 31600, "text": "I6 goes to I9 when ‘ . ‘ of 2nd production is shifted towards the right (A->aA.) . A is seen by the compiler. Since I9 is a part of the 2nd production, the lookahead is the same i.e. $." }, { "code": null, "e": 31974, "s": 31787, "text": "I6 goes to I6 when ‘ . ‘ of the 2nd production is shifted towards right (A->a.A) . a is seen by the compiler. Since I6 is a part of the 2nd production, the lookahead is the same i.e. $." }, { "code": null, "e": 32158, "s": 31974, "text": "I6 goes to I7 when ‘ . ‘ of the 3rd production is shifted towards right (A->b.) . b is seen by the compiler. Since I6 is a part of the 3rd production, the lookahead is the same ie $." }, { "code": null, "e": 32298, "s": 32158, "text": "STEP 3- defining 2 functions:goto[list of terminals] and action[list of non-terminals] in the parsing table.Below is the CLR parsing table" }, { "code": null, "e": 32358, "s": 32298, "text": "$ is by default a non terminal which takes accepting state." }, { "code": null, "e": 32416, "s": 32358, "text": "0,1,2,3,4,5,6,7,8,9 denotes I0,I1,I2,I3,I4,I5,I6,I7,I8,I9" }, { "code": null, "e": 32475, "s": 32416, "text": "I0 gives A in I2, so 2 is added to the A column and 0 row." }, { "code": null, "e": 32535, "s": 32475, "text": "I0 gives S in I1,so 1 is added to the S column and 1st row." }, { "code": null, "e": 32662, "s": 32535, "text": "similarly 5 is written in A column and 2nd row, 8 is written in A column and 3rd row, 9 is written in A column and 6th row." 
}, { "code": null, "e": 32727, "s": 32662, "text": "I0 gives a in I3, so S3(shift 3) is added to a column and 0 row." }, { "code": null, "e": 32796, "s": 32727, "text": "I0 gives b in I4, so S4(shift 4) is added to the b column and 0 row." }, { "code": null, "e": 32990, "s": 32796, "text": "Similarly, S6(shift 6) is added on ‘a’ column and 2,6 row ,S7(shift 7) is added on b column and 2,6 row,S3(shift 3) is added on ‘a’ column and 3 row ,S4(shift 4) is added on b column and 3 row." }, { "code": null, "e": 33173, "s": 32990, "text": "I4 is reduced as ‘ . ‘ is at the end. I4 is the 3rd production of grammar. So write r3(reduce 3) in lookahead columns. The lookahead of I4 are a and b, so write R3 in a and b column." }, { "code": null, "e": 33342, "s": 33173, "text": "I5 is reduced as ‘ . ‘ is at the end. I5 is the 1st production of grammar. So write r1(reduce 1) in lookahead columns. The lookahead of I5 is $ so write R1 in $ column." }, { "code": null, "e": 33423, "s": 33342, "text": "Similarly, write R2 in a,b column and 8th row, write R2 in $ column and 9th row." }, { "code": null, "e": 33436, "s": 33423, "text": "abhijithoyur" }, { "code": null, "e": 33453, "s": 33436, "text": "surinderdawra388" }, { "code": null, "e": 33460, "s": 33453, "text": "Picked" }, { "code": null, "e": 33476, "s": 33460, "text": "Compiler Design" }, { "code": null, "e": 33484, "s": 33476, "text": "GATE CS" }, { "code": null, "e": 33582, "s": 33484, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 33640, "s": 33582, "text": "Directed Acyclic graph in Compiler Design (with examples)" }, { "code": null, "e": 33710, "s": 33640, "text": "S - attributed and L - attributed SDTs in Syntax directed translation" }, { "code": null, "e": 33751, "s": 33710, "text": "Issues in the design of a code generator" }, { "code": null, "e": 33785, "s": 33751, "text": "Error Handling in Compiler Design" }, { "code": null, "e": 33826, "s": 33785, "text": "Error detection and Recovery in Compiler" }, { "code": null, "e": 33846, "s": 33826, "text": "Layers of OSI Model" }, { "code": null, "e": 33870, "s": 33846, "text": "ACID Properties in DBMS" }, { "code": null, "e": 33883, "s": 33870, "text": "TCP/IP Model" }, { "code": null, "e": 33910, "s": 33883, "text": "Types of Operating Systems" } ]
FuzzyWuzzy: Fuzzy String Matching in Python | Towards Data Science
If you have dealt with text data before, you know that its issues are the hardest to deal with. There is just no one-size-fits-all solution to text problems, and for each dataset you have to come up with new ways to clean your data. In one of my previous articles, I talked about the worst-case scenario of such problems:

For example, consider this worst-case scenario: you are working on survey data collected across the USA and there is a state column for the state of each observation in the dataset. There are 50 states in the USA, and imagine all the damn variations of state names people can come up with. You are in even bigger trouble if data collectors decide to use abbreviations: CA, ca, Ca, Caliphornia, Californa, Calfornia, calipornia, CAL, CALI, ... Such columns will always be filled with typos, errors, inconsistencies.

The problems related to text often arise because of free-text fields during data collection. They will be full of typos, inconsistencies, whatever you can name. Of course, the most basic problems can be solved using simple regular expressions or built-in Python functions, but for cases like the above, which occur very often, you have to arm yourself with more complex tools. Today's special is fuzzywuzzy, a package with a very simple API which helps us calculate string similarity.

Get the notebook and data used in this article from this Kaggle notebook or from this GitHub repo.

To understand string matching, let's get you up to speed with Minimum Edit Distance. As humans, we have no trouble at all telling whether two or more strings are similar or not. To create this ability in computers, many algorithms were created, and almost all of them depend on Minimum Edit Distance.

Minimum Edit Distance (MED) is the least possible number of steps needed to transition from one string to another. MED is calculated using only 4 operations:

Insertion
Deletion
Substitution
Transposition of consecutive characters

Consider these two words: Program and Sonogram. To get from Program to Sonogram, we need 3 steps:

Add the letter 'S' to the beginning of 'Program'.
Substitute 'P' with 'O'.
Substitute 'R' with 'N'.

As I said, there are many algorithms to calculate MED:

Damerau-Levenshtein
Levenshtein
Hamming
Jaro Distance

Also, there are packages that use these algorithms: nltk, fuzzywuzzy, textdistance, difflib, ...

In this article, we will only cover fuzzywuzzy. Even though the basic installation can be done easily with pip, there are some other options and caveats to fuzzywuzzy's installation:

Using PIP via PyPI (standard):

pip install fuzzywuzzy

The above method installs the default up-to-date version of the package. At first, I installed it using this method. But whenever I imported it, it started giving a warning saying that the package itself is very slow and that I should install the python-Levenshtein package for more speed. If you hate warnings in your Jupyter Notebook like me, here is how you can install the extra dependency.

Directly install python-Levenshtein:

pip install python-Levenshtein

or

pip install fuzzywuzzy[speedup]

Warning for Windows users: if you don't have Microsoft Visual Studio build tools installed, installing python-Levenshtein fails. You can download MVS Build Tools from here.

To get started with fuzzywuzzy, we first import the fuzz sub-module:

from fuzzywuzzy import fuzz

In this sub-module, there are 5 functions for different methods of comparison between 2 strings.
The most flexible and best one for everyday use is the WRatio (Weighted Ratio) function. Here, we compare 'Python' to 'Cython'; the output is a percentage between 0 and 100, 0 being not similar at all and 100 being identical.

All the functions of fuzzywuzzy are case-insensitive, and WRatio is also very good for partial strings with different orderings.

Apart from WRatio, there are 4 other functions to compute string similarity:

fuzz.ratio
fuzz.partial_ratio
fuzz.token_sort_ratio
fuzz.token_set_ratio

fuzz.ratio is perfect for strings with similar lengths and order. For strings with differing lengths, it is better to use fuzz.partial_ratio. If the strings contain the same words but their order is different, use fuzz.token_sort_ratio. For more edge cases, there is fuzz.token_set_ratio.

As you see, these 5 functions are full of caveats. Their comparison is a whole other topic, so I am leaving you a link to the article written by the package creators, which explains their differences beautifully. I think you already saw that the WRatio function gives the middle ground among all the functions of fuzzywuzzy. For many edge cases and different issues, it is best to use WRatio for best results.

Now that we have some understanding of fuzzywuzzy's different functions, we can move on to more complex problems. With real-life data, most of the time you have to find the most similar value to your string from a list of options. Consider this example: we have to find the best matches to Mercedez-Benz to replace them with the correct spelling of the cars. We could loop over each value, but such a process could take too long if there are millions of options to choose from. Since this operation is so commonly used, fuzzywuzzy provides us with a helpful sub-module:

from fuzzywuzzy import process

With this sub-module, you can extract the best matches to your string from a sequence of strings. Let's solve our initial problem.

The parameters of interest in process.extract are query, choices and limit. This function computes the similarity of the string given in query against a sequence of options given in choices and returns a list of tuples. limit controls the number of tuples to return. Each of these tuples contains two elements: the first one is the matching string and the second one is the similarity score.

Under the hood, process.extract uses WRatio as its default scorer. However, depending on your case, and knowing the differences between the 5 functions, you can change the scoring function with scorer. If you have many options, it is best to stick with WRatio because it is the most flexible. The short sketch below illustrates the calls described in this section.
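A minimal sketch of those calls (the example strings are only illustrative, and exact scores can vary slightly between versions):

from fuzzywuzzy import fuzz, process

# Weighted Ratio: one number between 0 and 100; comparison is case-insensitive
print(fuzz.WRatio('Python', 'Cython'))        # high score: only one letter differs
print(fuzz.WRatio('PYTHON', 'python'))        # 100: case does not matter

# The four more specialised scorers
print(fuzz.ratio('Python 3.8', 'Python 3.9'))                                       # similar length and order
print(fuzz.partial_ratio('Python', 'Python programming language'))                  # very different lengths
print(fuzz.token_sort_ratio('data science rocks', 'rocks data science'))            # same words, different order
print(fuzz.token_set_ratio('data science', 'data science and machine learning'))    # messier edge cases

# Extracting the best matches from a list of options (WRatio is the default scorer)
choices = ['Mercedes-Benz', 'MERCEDES BENZ', 'Mercedez-Benz', 'BMW', 'Audi']
print(process.extract('Mercedes-Benz', choices, limit=3))   # list of (string, score) tuples
print(process.extractOne('Mercedes-Benz', choices))         # single best (string, score) tuple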
We cannot just convert each one to title case or lower case. We also don’t know if these contain any spelling errors or inconsistencies and visual search is not an option for such big datasets. There are also some cases where make labels with more than one word divide the name with a space while others with a dash. If you have this many inconsistencies and there is not a clear pattern, use string matching. Let’s start by cleaning up make labels. For comparison, here are the make labels in both datasets: I think the differences are obvious. We will use process.extract to match each makes with the correct spelling: As you see, the make labels which exist in the make_model got converted into their correct spelling. Now, it is time for model labels: The last two code snippets were a little hairy. To fully understand how they are working, you should get some practice on process.extract. There you go! If you did not know string matching, the task would have been impossible and even Regular Expressions would not have been able to help you. Read more articles related to the topic:
[ { "code": null, "e": 369, "s": 47, "text": "If you have dealt with text data before, you know that its issues are the hardest to deal with. There is just no one-size-fits-all solution to text problems and for each dataset, you have to come up with new ways to clean your data. In one of my previous articles, I talked about the worst-case scenario of such problems:" }, { "code": null, "e": 885, "s": 369, "text": "For example, consider this worst-case scenario: you are working on a survey data conducted across the USA and there is a state column for the state of each observation in the dataset. There are 50 states in the USA and imagine all the damn variations of state names people can come up with. You are in even bigger problem if data collectors decide to use abbreviations: CA, ca, Ca, Caliphornia, Californa, Calfornia, calipornia, CAL, CALI, ... Such columns will always be filled with typos, errors, inconsistencies." }, { "code": null, "e": 1250, "s": 885, "text": "The problems related to text often arise because of free-text during data collection. They will be full of typos, inconsistencies, whatever you can name. Of course, the most basic problems can be solved using simple regular expressions or built-in Python functions but for cases like above, which occur very often, you have to arm yourself with more complex tools." }, { "code": null, "e": 1361, "s": 1250, "text": "Today’s special is fuzzywuzzy, a package with a very simple API which helps us to calculate string similarity." }, { "code": null, "e": 1460, "s": 1361, "text": "Get the notebook and data used in this article from this Kaggle notebook or from this GitHub repo." }, { "code": null, "e": 1747, "s": 1460, "text": "To understand string matching, let’s get you up to speed with Minimum Edit Distance. As humans, we have no trouble at all if two or more strings are similar or not. To create this ability in computers, many algorithms were created and almost all of them depend on Minimum Edit Distance." }, { "code": null, "e": 1905, "s": 1747, "text": "Minimum Edit Distance (MED) is the least possible amount of steps needed to transition from one string to another. MED is calculated using only 4 operations:" }, { "code": null, "e": 1915, "s": 1905, "text": "Insertion" }, { "code": null, "e": 1924, "s": 1915, "text": "Deletion" }, { "code": null, "e": 1937, "s": 1924, "text": "Substitution" }, { "code": null, "e": 1970, "s": 1937, "text": "Replacing consecutive characters" }, { "code": null, "e": 2018, "s": 1970, "text": "Consider these two words: Program and Sonogram:" }, { "code": null, "e": 2068, "s": 2018, "text": "To get from Program to Sonogram, we need 3 steps:" }, { "code": null, "e": 2166, "s": 2068, "text": "Add the letter ‘S’ to the beginning of ‘Program’.Substitute ‘P’ with ‘O’.Substitute ‘R’ with ‘N’." }, { "code": null, "e": 2216, "s": 2166, "text": "Add the letter ‘S’ to the beginning of ‘Program’." }, { "code": null, "e": 2241, "s": 2216, "text": "Substitute ‘P’ with ‘O’." }, { "code": null, "e": 2266, "s": 2241, "text": "Substitute ‘R’ with ‘N’." 
}, { "code": null, "e": 2321, "s": 2266, "text": "As I said, there are many algorithms to calculate MED:" }, { "code": null, "e": 2341, "s": 2321, "text": "Damerau-Levenshtein" }, { "code": null, "e": 2353, "s": 2341, "text": "Levenshtein" }, { "code": null, "e": 2361, "s": 2353, "text": "Hamming" }, { "code": null, "e": 2375, "s": 2361, "text": "Jaro Distance" }, { "code": null, "e": 2472, "s": 2375, "text": "Also, there are packages that use these algorithms: nltk, fuzzywuzzy, textdistance, difflib, ..." }, { "code": null, "e": 2520, "s": 2472, "text": "In this article, we will only cover fuzzywuzzy." }, { "code": null, "e": 2653, "s": 2520, "text": "Even though the basic installation can be done easily with pip, there are some other options or caveats to fuzzwuzzy's installation:" }, { "code": null, "e": 2684, "s": 2653, "text": "Using PIP via PyPI (standard):" }, { "code": null, "e": 2707, "s": 2684, "text": "pip install fuzzywuzzy" }, { "code": null, "e": 3091, "s": 2707, "text": "The above method installs the default up-to-date version of the package. At first, I installed it using this method. But whenever I imported it, it started giving a warning saying that the package itself is very slow and I should install python-Levenshtein package for more speed. If you hate warnings in your Jupyter Notebook like me, here is how you can install extra dependencies:" }, { "code": null, "e": 3128, "s": 3091, "text": "Directly install python-Levenshtein:" }, { "code": null, "e": 3159, "s": 3128, "text": "pip install python-Levenshtein" }, { "code": null, "e": 3162, "s": 3159, "text": "or" }, { "code": null, "e": 3194, "s": 3162, "text": "pip install fuzzywuzzy[speedup]" }, { "code": null, "e": 3367, "s": 3194, "text": "Warning for Windows users: if you don’t have Microsoft Visual Studio build tools installed, installing python-Levenshtein fails. You can download MVS Build Tools from here." }, { "code": null, "e": 3432, "s": 3367, "text": "To get started with fuzzywuzzy, we first import fuzz sub-module:" }, { "code": null, "e": 3460, "s": 3432, "text": "from fuzzywuzzy import fuzz" }, { "code": null, "e": 3642, "s": 3460, "text": "In this sub-module, there are 5 functions for different methods of comparison between 2 strings. The most flexible and best one for everyday use is WRatio (Weighted Ratio) function:" }, { "code": null, "e": 3790, "s": 3642, "text": "Here, we are comparing ‘Python’ to ‘Cython’. 
The output returns a percentage between 0 and 100, 0 being not similar at all and 100 being identical:" }, { "code": null, "e": 3844, "s": 3790, "text": "All the functions of fuzzywuzzy are case-insensitive:" }, { "code": null, "e": 3915, "s": 3844, "text": "WRatio is also very good for partial strings with different orderings:" }, { "code": null, "e": 3992, "s": 3915, "text": "Apart from WRatio, there are 4 other functions to compute string similarity:" }, { "code": null, "e": 4003, "s": 3992, "text": "fuzz.ratio" }, { "code": null, "e": 4022, "s": 4003, "text": "fuzz.partial_ratio" }, { "code": null, "e": 4044, "s": 4022, "text": "fuzz.token_sort_ratio" }, { "code": null, "e": 4065, "s": 4044, "text": "fuzz.token_set_ratio" }, { "code": null, "e": 4131, "s": 4065, "text": "fuzz.ratio is perfect for strings with similar lengths and order:" }, { "code": null, "e": 4208, "s": 4131, "text": "For strings with differing lengths, it is better to use `fuzz.patial_ratio’:" }, { "code": null, "e": 4302, "s": 4208, "text": "If the strings have the same meaning but their order is different, use fuzz.token_sort_ratio:" }, { "code": null, "e": 4354, "s": 4302, "text": "For more edge cases, there is fuzz.token_set_ratio:" }, { "code": null, "e": 4566, "s": 4354, "text": "As you see, these 5 functions are full of caveats. Their comparison is a whole another topic so I am leaving you a link to the article written by the package creators which explains their difference beautifully." }, { "code": null, "e": 4757, "s": 4566, "text": "I think you already saw that WRatio function gives the middle ground for all the functions of fuzzywuzzy. For many edge cases and different issues, it is best to use WRatio for best results." }, { "code": null, "e": 5003, "s": 4757, "text": "Now we have some understanding fuzzywuzzy's different functions, we can move on to more complex problems. With real-life data, most of the time you have to find the most similar value to your string from a list of options. Consider this example:" }, { "code": null, "e": 5316, "s": 5003, "text": "We have to find the best matches to Mercedez-Benz to replace them with the correct spelling of the cars. We can loop over each value but such a process could take too long if there are millions of options to choose from. Since this operation is so commonly used, fuzzywuzzy provides us with a helpful sub-module:" }, { "code": null, "e": 5347, "s": 5316, "text": "from fuzzywuzzy import process" }, { "code": null, "e": 5478, "s": 5347, "text": "With this sub-module, you can extract the best matches to your string from a sequence of strings. Let’s solve our initial problem:" }, { "code": null, "e": 5864, "s": 5478, "text": "The parameters of interest in process.extract are query, choices and limit. This function computes the similarity of strings given in query from a sequence of options given in choices and returns a list of tuples. limit controls the number of tuples to return. Each of these tuples contains two elements, the first one is the matching string and the second one is the similarity score." }, { "code": null, "e": 6063, "s": 5864, "text": "Under the hood, process.extract uses the default WRatio function. However, depending on your case and knowing the differences between the 5 functions you can change the scoring function with scorer:" }, { "code": null, "e": 6154, "s": 6063, "text": "If you have many options, it is best to stick with WRatio because it is the most flexible." 
}, { "code": null, "e": 6339, "s": 6154, "text": "In the process module, there are other functions that perform a similar operation. process.extractOne returns only one output which contains the string with the highest matching score:" }, { "code": null, "e": 6426, "s": 6339, "text": "Now we are ready to tackle a real-world problem. I will load the raw data to practice:" }, { "code": null, "e": 6446, "s": 6426, "text": "cars.shape(8504, 4)" }, { "code": null, "e": 6624, "s": 6446, "text": "I used this dataset in one of my personal projects and the task was to correct the spelling of each vehicle make and model according to the correct values given in another file:" }, { "code": null, "e": 6794, "s": 6624, "text": "After loading the pickle file, make_model is now a dictionary containing the correct spelling of each car make as keys and the correct spelling of models under each key." }, { "code": null, "e": 6867, "s": 6794, "text": "For example, let’s see the spellings of makes and models of Toyota cars:" }, { "code": null, "e": 6915, "s": 6867, "text": "Now, let’s subset the raw data for Toyota cars:" }, { "code": null, "e": 6958, "s": 6915, "text": ">>> cars[cars['vehicle_make'] == 'TOYOTA']" }, { "code": null, "e": 7512, "s": 6958, "text": "The dataset contains up to a hundred unique car makes like Audi, Bentley, BMW and each one contains several models that are full of edge cases. We cannot just convert each one to title case or lower case. We also don’t know if these contain any spelling errors or inconsistencies and visual search is not an option for such big datasets. There are also some cases where make labels with more than one word divide the name with a space while others with a dash. If you have this many inconsistencies and there is not a clear pattern, use string matching." }, { "code": null, "e": 7611, "s": 7512, "text": "Let’s start by cleaning up make labels. For comparison, here are the make labels in both datasets:" }, { "code": null, "e": 7723, "s": 7611, "text": "I think the differences are obvious. We will use process.extract to match each makes with the correct spelling:" }, { "code": null, "e": 7858, "s": 7723, "text": "As you see, the make labels which exist in the make_model got converted into their correct spelling. Now, it is time for model labels:" }, { "code": null, "e": 7997, "s": 7858, "text": "The last two code snippets were a little hairy. To fully understand how they are working, you should get some practice on process.extract." }, { "code": null, "e": 8151, "s": 7997, "text": "There you go! If you did not know string matching, the task would have been impossible and even Regular Expressions would not have been able to help you." } ]
7 forecasting techniques you’ll never use, but should know them anyway | by Mahbubul Alam | Towards Data Science
The time series forecasting toolbox is like the Swiss army knife — many options to choose from. These options often leave data scientists overwhelmed, puzzled, and sometimes outright confused. Last time I checked, there are at least 25 different techniques. But fortunately, they are really not much different from one another. For example, the ARIMA techniques (e.g. AR, MA, ARIMA, SARIMA, ARIMAX) appear different, but in reality one is just a variation of another. Today, however, I’m going to write about some simple techniques that people rarely talk about yet extremely useful to understand forecasting basics. These models are known as “benchmark” or “baseline” forecasting. As you will see below, these techniques are rarely applied in practice, but they help build forecasting intuition upon which to add additional layers of complexity. First I’ll demo some codes as examples, then talk about their similarities and differences in the latter part of the article. Let’s first implement three techniques: Naive, Mean and Drift models. The dataset I am using for the demo is the air passenger dataset with only one variable. And of course I’ll be using python! Import libraries and data import pandas as pdimport matplotlib.pyplot as pltimport numpy as npdf = pd.read_csv("../gasprice.csv") Split data into training and testing sets train = df.iloc[0:556, ]test = df.iloc[556:,]yhat = test.copy().drop('value', axis=1) Modeling # model buildingyhat['naive'] = train.loc[len(train)-1, 'value'] #Naiveyhat['average'] = train['value'].mean() #Averageyhat['drift'] = train.loc[len(train)-1]['value'] + (train.loc[len(train)-1]['value'] - train.loc[0]['value'])/len(train)* np.linspace(0, len(yhat)-1, len(yhat)) # Drift# visualizationplt.figure(figsize=(12,4))plt.plot(train['value'], label = "train")plt.plot(test['value'], label = "test")plt.plot(yhat['naive'], label = "naive")plt.plot(yhat['average'], label = "mean")plt.plot(yhat['drift'], label = "drift")plt.legend()plt.show() Evaluation # model evaluationeval = pd.concat([test, yhat], axis = 1)eval['error_naive'] = eval['value'] - eval['naive']mae_naive = np.mean(np.abs(eval['error_naive']))rmse_naive = np.sqrt(np.mean(eval['error_naive']**2))print('MAE_naive:', round(mae_naive))print('RMSE_naive:', round(rmse_naive))eval = pd.concat([test, yhat], axis = 1)eval['error_average'] = eval['value'] - eval['average']mae_average = np.mean(np.abs(eval['error_average']))rmse_average = np.sqrt(np.mean(eval['error_average']**2))print('MAE_average:', round(mae_average))print('RMSE_average:', round(rmse_average))eval = pd.concat([test, yhat], axis = 1)eval['error_drift'] = eval['value'] - eval['drift']mae_drift = np.mean(np.abs(eval['error_drift']))rmse_drift = np.sqrt(np.mean(eval['error_drift']**2))print('MAE_drift:', round(mae_drift))print('RMSE_drift:', round(rmse_drift)) MAE_naive: 16 RMSE_naive: 20 MAE_average: 25 RMSE_average: 28 MAE_drift: 17 RMSE_drift: 21 Now let’s get into some details to understand basics. Naive Forecast: Naive Forecast: In Naive forecast the future value is assumed to be equal to the past value. So the sales volume of a particular product on Wednesday would be similar to Tuesday’s sales. Naive forecast acts much like a null hypothesis against which to compare an alternative hypothesis — sales revenue will be different tomorrow because of such and such reasons. 2. Seasonal Naive: Seasonal naive, as the name suggests, factors in seasonality in its forecast. So in a way, it’s an improvement over Naive method. 
In this case, the revenue forecast for December would be equal to the revenue in the previous year’s December. This is done to factor in holiday effects. Again, it still works like a null hypothesis but considers seasonality as its key improvement over Naive forecast. 3. Mean Model Naive forecast takes one past value and uses it as a predicted value. Mean model, in contrast, takes all the past observations, makes an average, and uses this average as the forecast value. If data is randomly distributed, without clear patterns and trends (also known as the white noise), a mean model works as a better benchmark than a naive model. 4. Drift model Drift model is yet another variation of Naive forecast, but with an obvious improvement. As in Naive, it takes the last observation, then adjusts the observation based on variation in past values. Forecast value = past observation +/- average change in past observations 5. Linear Trend Mean model described above is a horizontal, constant line that doesn’t change over time because it works on training data without a trend. However, if a trend is detected, a linear model provides a better forecast value than a Mean model. Forecasting using Linear Trend in practice is actually the line of best fit (i.e. regression line) of the following form: Y(t) = alpha + beta*t An RSME or R2 value determines how good the fitted line is for prediction. 6. Random Walk In this case the forecast value “walks” a random step ahead from its current position (similar to Brownian Motion). Like a walking toddler, the next step can be in any random direction but isn’t too far from where the last step was. Y(t+1)=Y(t) + noise(t) The stock price on Wednesday will likely be close to Tuesday’s closing price, so a Random Walk provides a reasonable guestimate. But it’s not suitable to predict too many time-steps ahead, because, well, each step is random. 7. Geometric Random Walk In Geometric Random Walk, the forecast for the next value will be equal to the last value plus a constant change (e.g. a percentage monthly increase in revenue). Ŷ(t) = Y(t-1) + α It’s also called the “random-walk-with-growth model”. Stock prices in the long-term follow somewhat a Geometric Random Walk model. The purpose of this article was to unearth some non-typical time series forecasting techniques. Even though they are not used in practice, they are an essential stepping-stone to build intuition for how forecasting works and how to develop advanced forecasting models. Additional articles are in the pipeline on some advanced forecasting techniques, so stay tuned. For news and updates, you can find/follow me on Twitter.
[ { "code": null, "e": 240, "s": 47, "text": "The time series forecasting toolbox is like the Swiss army knife — many options to choose from. These options often leave data scientists overwhelmed, puzzled, and sometimes outright confused." }, { "code": null, "e": 515, "s": 240, "text": "Last time I checked, there are at least 25 different techniques. But fortunately, they are really not much different from one another. For example, the ARIMA techniques (e.g. AR, MA, ARIMA, SARIMA, ARIMAX) appear different, but in reality one is just a variation of another." }, { "code": null, "e": 729, "s": 515, "text": "Today, however, I’m going to write about some simple techniques that people rarely talk about yet extremely useful to understand forecasting basics. These models are known as “benchmark” or “baseline” forecasting." }, { "code": null, "e": 1020, "s": 729, "text": "As you will see below, these techniques are rarely applied in practice, but they help build forecasting intuition upon which to add additional layers of complexity. First I’ll demo some codes as examples, then talk about their similarities and differences in the latter part of the article." }, { "code": null, "e": 1215, "s": 1020, "text": "Let’s first implement three techniques: Naive, Mean and Drift models. The dataset I am using for the demo is the air passenger dataset with only one variable. And of course I’ll be using python!" }, { "code": null, "e": 1241, "s": 1215, "text": "Import libraries and data" }, { "code": null, "e": 1345, "s": 1241, "text": "import pandas as pdimport matplotlib.pyplot as pltimport numpy as npdf = pd.read_csv(\"../gasprice.csv\")" }, { "code": null, "e": 1387, "s": 1345, "text": "Split data into training and testing sets" }, { "code": null, "e": 1473, "s": 1387, "text": "train = df.iloc[0:556, ]test = df.iloc[556:,]yhat = test.copy().drop('value', axis=1)" }, { "code": null, "e": 1482, "s": 1473, "text": "Modeling" }, { "code": null, "e": 2035, "s": 1482, "text": "# model buildingyhat['naive'] = train.loc[len(train)-1, 'value'] #Naiveyhat['average'] = train['value'].mean() #Averageyhat['drift'] = train.loc[len(train)-1]['value'] + (train.loc[len(train)-1]['value'] - train.loc[0]['value'])/len(train)* np.linspace(0, len(yhat)-1, len(yhat)) # Drift# visualizationplt.figure(figsize=(12,4))plt.plot(train['value'], label = \"train\")plt.plot(test['value'], label = \"test\")plt.plot(yhat['naive'], label = \"naive\")plt.plot(yhat['average'], label = \"mean\")plt.plot(yhat['drift'], label = \"drift\")plt.legend()plt.show()" }, { "code": null, "e": 2046, "s": 2035, "text": "Evaluation" }, { "code": null, "e": 2889, "s": 2046, "text": "# model evaluationeval = pd.concat([test, yhat], axis = 1)eval['error_naive'] = eval['value'] - eval['naive']mae_naive = np.mean(np.abs(eval['error_naive']))rmse_naive = np.sqrt(np.mean(eval['error_naive']**2))print('MAE_naive:', round(mae_naive))print('RMSE_naive:', round(rmse_naive))eval = pd.concat([test, yhat], axis = 1)eval['error_average'] = eval['value'] - eval['average']mae_average = np.mean(np.abs(eval['error_average']))rmse_average = np.sqrt(np.mean(eval['error_average']**2))print('MAE_average:', round(mae_average))print('RMSE_average:', round(rmse_average))eval = pd.concat([test, yhat], axis = 1)eval['error_drift'] = eval['value'] - eval['drift']mae_drift = np.mean(np.abs(eval['error_drift']))rmse_drift = np.sqrt(np.mean(eval['error_drift']**2))print('MAE_drift:', round(mae_drift))print('RMSE_drift:', round(rmse_drift))" }, { "code": null, "e": 2903, "s": 2889, 
"text": "MAE_naive: 16" }, { "code": null, "e": 2918, "s": 2903, "text": "RMSE_naive: 20" }, { "code": null, "e": 2934, "s": 2918, "text": "MAE_average: 25" }, { "code": null, "e": 2951, "s": 2934, "text": "RMSE_average: 28" }, { "code": null, "e": 2965, "s": 2951, "text": "MAE_drift: 17" }, { "code": null, "e": 2980, "s": 2965, "text": "RMSE_drift: 21" }, { "code": null, "e": 3034, "s": 2980, "text": "Now let’s get into some details to understand basics." }, { "code": null, "e": 3050, "s": 3034, "text": "Naive Forecast:" }, { "code": null, "e": 3066, "s": 3050, "text": "Naive Forecast:" }, { "code": null, "e": 3237, "s": 3066, "text": "In Naive forecast the future value is assumed to be equal to the past value. So the sales volume of a particular product on Wednesday would be similar to Tuesday’s sales." }, { "code": null, "e": 3413, "s": 3237, "text": "Naive forecast acts much like a null hypothesis against which to compare an alternative hypothesis — sales revenue will be different tomorrow because of such and such reasons." }, { "code": null, "e": 3432, "s": 3413, "text": "2. Seasonal Naive:" }, { "code": null, "e": 3716, "s": 3432, "text": "Seasonal naive, as the name suggests, factors in seasonality in its forecast. So in a way, it’s an improvement over Naive method. In this case, the revenue forecast for December would be equal to the revenue in the previous year’s December. This is done to factor in holiday effects." }, { "code": null, "e": 3831, "s": 3716, "text": "Again, it still works like a null hypothesis but considers seasonality as its key improvement over Naive forecast." }, { "code": null, "e": 3845, "s": 3831, "text": "3. Mean Model" }, { "code": null, "e": 4036, "s": 3845, "text": "Naive forecast takes one past value and uses it as a predicted value. Mean model, in contrast, takes all the past observations, makes an average, and uses this average as the forecast value." }, { "code": null, "e": 4197, "s": 4036, "text": "If data is randomly distributed, without clear patterns and trends (also known as the white noise), a mean model works as a better benchmark than a naive model." }, { "code": null, "e": 4212, "s": 4197, "text": "4. Drift model" }, { "code": null, "e": 4409, "s": 4212, "text": "Drift model is yet another variation of Naive forecast, but with an obvious improvement. As in Naive, it takes the last observation, then adjusts the observation based on variation in past values." }, { "code": null, "e": 4483, "s": 4409, "text": "Forecast value = past observation +/- average change in past observations" }, { "code": null, "e": 4499, "s": 4483, "text": "5. Linear Trend" }, { "code": null, "e": 4738, "s": 4499, "text": "Mean model described above is a horizontal, constant line that doesn’t change over time because it works on training data without a trend. However, if a trend is detected, a linear model provides a better forecast value than a Mean model." }, { "code": null, "e": 4860, "s": 4738, "text": "Forecasting using Linear Trend in practice is actually the line of best fit (i.e. regression line) of the following form:" }, { "code": null, "e": 4882, "s": 4860, "text": "Y(t) = alpha + beta*t" }, { "code": null, "e": 4957, "s": 4882, "text": "An RSME or R2 value determines how good the fitted line is for prediction." }, { "code": null, "e": 4972, "s": 4957, "text": "6. Random Walk" }, { "code": null, "e": 5205, "s": 4972, "text": "In this case the forecast value “walks” a random step ahead from its current position (similar to Brownian Motion). 
Like a walking toddler, the next step can be in any random direction but isn’t too far from where the last step was." }, { "code": null, "e": 5228, "s": 5205, "text": "Y(t+1)=Y(t) + noise(t)" }, { "code": null, "e": 5453, "s": 5228, "text": "The stock price on Wednesday will likely be close to Tuesday’s closing price, so a Random Walk provides a reasonable guestimate. But it’s not suitable to predict too many time-steps ahead, because, well, each step is random." }, { "code": null, "e": 5478, "s": 5453, "text": "7. Geometric Random Walk" }, { "code": null, "e": 5640, "s": 5478, "text": "In Geometric Random Walk, the forecast for the next value will be equal to the last value plus a constant change (e.g. a percentage monthly increase in revenue)." }, { "code": null, "e": 5659, "s": 5640, "text": "Ŷ(t) = Y(t-1) + α" }, { "code": null, "e": 5790, "s": 5659, "text": "It’s also called the “random-walk-with-growth model”. Stock prices in the long-term follow somewhat a Geometric Random Walk model." }, { "code": null, "e": 6059, "s": 5790, "text": "The purpose of this article was to unearth some non-typical time series forecasting techniques. Even though they are not used in practice, they are an essential stepping-stone to build intuition for how forecasting works and how to develop advanced forecasting models." } ]
Sqrt, sqrtl, and sqrtf in C++ programming
This article demonstrates the usage of the essential math functions sqrt(), sqrtl(), and sqrtf() to calculate the square root of double, long double, and float type variables respectively. The <cmath> header of C++ offers a wide range of mathematical functions, including sin, cos, square root, ceil, floor, etc. It is therefore mandatory to include the <cmath> header in the program in order to use these functions.

The sqrt() function returns the square root of a double variable. Its syntax is as follows:

double sqrt(double arg)

The following C++ code defines a double variable with an initial value and calculates its square root. The sqrt() function accepts this value and yields the result, printed with a precision of 5 decimal places:

 Live Demo

#include <cmath>
#include <iomanip>
#include <iostream>
using namespace std;
int main(){
   double val = 225.0;
   cout << fixed << setprecision(5) << sqrt(val);
   return (0);
}

As seen below, the output of this program is produced with a precision of 5:

15.00000

The sqrtl() function returns the square root of a long double variable. Its syntax is as follows:

long double sqrtl(long double arg)

The illustration below calculates the square root of a large integer value supplied by the user, using the sqrtl() function:

 Live Demo

#include <cmath>
#include <iomanip>
#include <iostream>
using namespace std;
int main(){
   long long int val = 1000000000000000000;
   cout << fixed << setprecision(10) << sqrtl(val);
   return (0);
}

After compiling and running the program, the calculated square root of the input value is printed as below:

1000000000.0000000000

The sqrtf() function returns the square root of a float variable. Its syntax is as follows:

float sqrtf(float arg)

As per the syntax, the program supplies a float variable to the sqrtf() function to calculate its square root:

 Live Demo

#include <cmath>
#include <iomanip>
#include <iostream>
using namespace std;
int main(){
   float val = 300.0;
   cout << fixed << setprecision(5) << sqrtf(val);
   return (0);
}

The square root of the supplied float value is shown below:

17.32051
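One case the article does not cover is a negative argument. The short sketch below is an editorial addition, not part of the original tutorial: for negative input these functions return NaN, and on most implementations a domain error is also reported through errno.

#include <cmath>
#include <cerrno>
#include <iostream>
using namespace std;
int main(){
   errno = 0;
   double d = sqrt(-4.0);                 // domain error: the result is NaN
   cout << boolalpha << isnan(d) << endl; // expected to print: true
   // On most implementations errno is set to EDOM when a domain error occurs.
   cout << (errno == EDOM) << endl;
   return (0);
}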
[ { "code": null, "e": 1550, "s": 1062, "text": "This article demonstrates the usage of math class essentials functions sqrt(), sqrtl(), and sqrtf() to calculate the square root of double, long, and float type variables with precision respectively. The Math class of C++ offers a wide range of functions to calculate mathematical calculations\nincluding sin, cos, square root, ceil, floor, etc..It is, therefore, mandatory to import the definition of <cmath> header class library in the program in order to avail all calculative methods." }, { "code": null, "e": 1699, "s": 1550, "text": "The double sqrtl () method of the Math class returns the square root of a double variable with precision. The syntax of this function is as follows;" }, { "code": null, "e": 1723, "s": 1699, "text": "double sqrt(double arg)" }, { "code": null, "e": 1964, "s": 1723, "text": "The following c++ code constructs define a double type variable with an initialization value to calculate its square root value. Then, the math class method sqrt() accepts these values and yield the result with a precision value as follows;" }, { "code": null, "e": 1975, "s": 1964, "text": " Live Demo" }, { "code": null, "e": 2154, "s": 1975, "text": "#include <cmath>\n#include <iomanip>\n#include <iostream>\nusing namespace std;\nint main(){\n double val = 225.0;\n cout << fixed << setprecision(5) << sqrt(val);\n return (0);\n}" }, { "code": null, "e": 2245, "s": 2154, "text": "As seen below, the output of this program is being produced with a precision 5 as follows;" }, { "code": null, "e": 2254, "s": 2245, "text": "15.00000" }, { "code": null, "e": 2413, "s": 2254, "text": "The long double sqrtl () method of the Math class returns the square root of a long double variable with precision. The syntax of this function is as follows;" }, { "code": null, "e": 2448, "s": 2413, "text": "long double sqrtl(long double arg)" }, { "code": null, "e": 2601, "s": 2448, "text": "An illustration is given below to calculate the square root of a long double variable supplied by the using owing to the Math.sqrtl() method as follows;" }, { "code": null, "e": 2612, "s": 2601, "text": " Live Demo" }, { "code": null, "e": 2813, "s": 2612, "text": "#include <cmath>\n#include <iomanip>\n#include <iostream>\nusing namespace std;\nint main(){\n long long int val = 1000000000000000000;\n cout << fixed << setprecision(10) << sqrt(val);\n return (0);\n}" }, { "code": null, "e": 2938, "s": 2813, "text": "After compilation of the program using a code editor, the calculated value of the input long type variable is seen as below;" }, { "code": null, "e": 2959, "s": 2938, "text": "1000000000.000000000" }, { "code": null, "e": 3111, "s": 2959, "text": "The float sqrtf () method of the Math class returns the square root of a float type variable with precision. 
The syntax of this function is as follows;" }, { "code": null, "e": 3134, "s": 3111, "text": "float sqrtf(float arg)" }, { "code": null, "e": 3272, "s": 3134, "text": "As per the syntax, the program supply a float type variable in the sqrtf() method in pursuit of calculating the square root as following;" }, { "code": null, "e": 3283, "s": 3272, "text": " Live Demo" }, { "code": null, "e": 3462, "s": 3283, "text": "#include <cmath>\n#include <iomanip>\n#include <iostream>\nusing namespace std;\nint main(){\n float val = 300.0;\n cout << fixed << setprecision(5) << sqrtf(val);\n return (0);\n}" }, { "code": null, "e": 3538, "s": 3462, "text": "The output of the supplied float type variable is squarely rooted as below;" }, { "code": null, "e": 3547, "s": 3538, "text": "17.32051" } ]
MFC - Timer
A timer is a non-spatial object that uses recurring lapses of time from a computer or from your application. To work, after every elapsed period the timer posts a WM_TIMER message to the window that owns it. Unlike most other controls, the MFC timer has neither a button to represent it nor a class of its own. To create a timer, you simply call the CWnd::SetTimer() method. This function call creates a timer for your application. Like the other controls, a timer uses an identifier.

Let us create a new MFC dialog based application.

Step 1 − Remove the Caption of the static text control and set its ID to IDC_STATIC_TXT.

Step 2 − Add a value variable for the text control.

Step 3 − Go to the Class View in the solution.

Step 4 − Click the CMFCTimerDlg class.

Step 5 − In the Properties window, click the Messages button.

Step 6 − Click the WM_TIMER field and click the arrow of its combo box. Select OnTimer and implement the event.

void CMFCTimerDlg::OnTimer(UINT_PTR nIDEvent)
{
   // TODO: Add your message handler code here and/or call default
   CTime CurrentTime = CTime::GetCurrentTime();

   int iHours = CurrentTime.GetHour();
   int iMinutes = CurrentTime.GetMinute();
   int iSeconds = CurrentTime.GetSecond();
   CString strHours, strMinutes, strSeconds;

   if (iHours < 10)
      strHours.Format(_T("0%d"), iHours);
   else
      strHours.Format(_T("%d"), iHours);

   if (iMinutes < 10)
      strMinutes.Format(_T("0%d"), iMinutes);
   else
      strMinutes.Format(_T("%d"), iMinutes);

   if (iSeconds < 10)
      strSeconds.Format(_T("0%d"), iSeconds);
   else
      strSeconds.Format(_T("%d"), iSeconds);

   // m_strTimer is the value variable added for the IDC_STATIC_TXT control in Step 2.
   m_strTimer.Format(_T("%s:%s:%s"), strHours, strMinutes, strSeconds);

   UpdateData(FALSE);
   CDialogEx::OnTimer(nIDEvent);
}

Step 7 − When the above code is compiled and executed, you will see the following output.
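The article mentions CWnd::SetTimer() but does not show where it is called. Below is a hedged sketch of how the timer could be started and stopped in the same dialog class; the timer ID 1 and the 1000 ms interval are illustrative choices, not values taken from the original tutorial.

BOOL CMFCTimerDlg::OnInitDialog()
{
   CDialogEx::OnInitDialog();
   // ... wizard-generated initialization code ...

   // Start a timer with ID 1 that fires roughly every 1000 ms.
   // Each tick is delivered to OnTimer() above as a WM_TIMER message.
   SetTimer(1, 1000, NULL);
   return TRUE;
}

void CMFCTimerDlg::OnDestroy()
{
   // Stop the timer created in OnInitDialog.
   KillTimer(1);
   CDialogEx::OnDestroy();
}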
[ { "code": null, "e": 2527, "s": 2067, "text": "A timer is a non-spatial object that uses recurring lapses of time from a computer or from your application. To work, every lapse of period, the control sends a message to the operating system. Unlike most other controls, the MFC timer has neither a button to represent it nor a class. To create a timer, you simply call the CWnd::SetTimer() method. This function call creates a timer for your application. Like the other controls, a timer uses an identifier." }, { "code": null, "e": 2577, "s": 2527, "text": "Let us create a new MFC dialog based application." }, { "code": null, "e": 2639, "s": 2577, "text": "Step 1 − Remove the Caption and set its ID to IDC_STATIC_TXT" }, { "code": null, "e": 2689, "s": 2639, "text": "Step 2 − Add the value variable for text control." }, { "code": null, "e": 2732, "s": 2689, "text": "Step 3 − Go to the class view in solution." }, { "code": null, "e": 2771, "s": 2732, "text": "Step 4 − Click the CMFCTimeDlg class." }, { "code": null, "e": 2833, "s": 2771, "text": "Step 5 − In the Properties window, click the Messages button." }, { "code": null, "e": 2946, "s": 2833, "text": "Step 6 − Click the WM_TIMER field and click the arrow of its combo box. Select OnTimer and implement the event." }, { "code": null, "e": 3804, "s": 2946, "text": "void CMFCTimerDlg::OnTimer(UINT_PTR nIDEvent) { \n // TODO: Add your message handler code here and/or call default \n CTime CurrentTime = CTime::GetCurrentTime(); \n\t\n int iHours = CurrentTime.GetHour(); \n int iMinutes = CurrentTime.GetMinute(); \n int iSeconds = CurrentTime.GetSecond(); \n CString strHours, strMinutes, strSeconds; \n \n if (iHours < 10) \n strHours.Format(_T(\"0%d\"), iHours); \n else \n strHours.Format(_T(\"%d\"), iHours); \n \n if (iMinutes < 10) \n strMinutes.Format(_T(\"0%d\"), iMinutes); \n else \n strMinutes.Format(_T(\"%d\"), iMinutes); \n \n if (iSeconds < 10) \n strSeconds.Format(_T(\"0%d\"), iSeconds); \n else \n strSeconds.Format(_T(\"%d\"), iSeconds); \n \n m_strTimer.Format(_T(\"%s:%s:%s\"), strHours, strMinutes, strSeconds); \n \n UpdateData(FALSE); \n CDialogEx::OnTimer(nIDEvent); \n}" }, { "code": null, "e": 3895, "s": 3804, "text": "Step 7 − When the above code is compiled and executed, you will see the following output." }, { "code": null, "e": 3902, "s": 3895, "text": " Print" }, { "code": null, "e": 3913, "s": 3902, "text": " Add Notes" } ]
How to concatenate variable in a string in jQuery?
You can easily concatenate a variable into a string in jQuery by reading its value with the html() method and joining it with the + operator. Try to run the following code to learn how to use a variable in a string with jQuery −

Live Demo

<!DOCTYPE html>
<html>
<head>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
<script>
   $(function(){
      $("a").click(function(){
         // read the content of the .id element into a variable
         var id = $(".id").html();
         // concatenate the variable into the HTML string with +
         $('.demo').html("<br><div class='new' id='" + id + "'>Welcome to </div>");
      });
   });
</script>
</head>
<body>
   <div class="wrap">
      <a href="#">Click me</a>
      <div class="demo"></div>
      <div class="id">Qries</div>
   </div>
</body>
</html>
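As a side note (not part of the original answer), the same concatenation can be written with an ES6 template literal, which avoids the chain of + operators and nested quotes. This sketch assumes the same .id and .demo elements as in the example above:

$(function(){
   $("a").click(function(){
      var id = $(".id").text();   // .text() reads the element's plain text content
      // The ${id} placeholder is interpolated into the backtick string.
      $(".demo").html(`<br><div class='new' id='${id}'>Welcome to ${id}</div>`);
   });
});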
[ { "code": null, "e": 1238, "s": 1062, "text": "You can easily concatenate variable in a string in jQuery, using the jQuery html() method. Try to run the following code to learn how to use variable in a string with jQuery −" }, { "code": null, "e": 1248, "s": 1238, "text": "Live Demo" }, { "code": null, "e": 1744, "s": 1248, "text": "<!DOCTYPE html>\n<html>\n<head>\n<script src=\"https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js\"></script>\n<script>\n $(function(){\n $(\"a\").click(function(){\n var id= $(\".id\").html();\n $('.demo').html(\"<br><div class='new' id='\" + id + \"'>Welcome to </div>\"); \n });\n });\n </script>\n</head>\n<body>\n <div class=\"wrap\">\n <a href=\"#\">Click me</a>\n <div class=\"demo\"></div>\n <div class=\"id\">Qries</div>\n </div>\n</body>\n</html>" } ]
How to Build a Data Science Portfolio Website using Python | by Frank Andrade | Towards Data Science
As a data scientist, you need to have a portfolio website that helps you showcase your projects and profile in one place. You might already have a GitHub and LinkedIn page, but don’t expect a potential employer to look through all your code and posts to know more about you.

Building a portfolio website can be as easy as using a WordPress or GitHub template; however, creating a website on your own will help you add more customization while learning new things you can do in Python. Although building a website usually requires knowledge beyond Python, we don’t need to become experts in other programming languages to create a portfolio website. This is why I decided to write this guide to walk you through the essential stuff you need to build and deploy your data science portfolio website.

Table of Contents
1. Planning the Website
   - What to include
   - Get a Custom Domain Name
2. How to Build the Website
   - Backends: Flask vs Django
   - Front End: Bootstrap (+ HTML, CSS, Javascript)
3. Deployment

Before you start writing the code to build your portfolio website, take some time to plan what sections the website will have. Make sure your portfolio website has at least some of the sections listed below.

Portfolio: This will be the most important page of the website. List the most important data science projects you’ve finished so far. Add a brief description and a link to the source code. If you’ve written an article about that project, then include the link.

About me: This section will help people know about your skills, background, and anything relevant about you.

Get in touch: Here you should add a form, so people can fill in their name, email, and write a message to get in contact with you. In addition to this, you can add the links to your GitHub and LinkedIn.

If you finished many data science projects and have a lot to write about your experience as a data scientist, create one page per section and then add a landing page that summarizes the other sections. However, if you have little to write about, then one page should be enough to contain all the sections mentioned before. If necessary, add more sections to make your portfolio website stand out.

A domain name is the location of a website. It’s the text that a user types into a browser window to reach a website; for example, the domain name for Google is google.com. Although we haven’t built the website yet, at least you should check the availability of the domain name you wish to have. There are many domain registrars such as GoDaddy and NameCheap where you can see whether the domain name is available or not. In case the domain name is available, don’t wait until you finish building the website to buy it, otherwise it might no longer be available after weeks or months. Domain names are usually cheap, so in case something goes wrong, you will not lose much money.

Two of the most popular frameworks to build websites with Python are Flask and Django. Django is a high-level Python web framework that enables the developer to create websites without third-party libraries and tools. In contrast, Flask is a microframework that offers the basic features of a web app. It aims to stay lightweight and simple while remaining extensible.

Which one should you use? This will depend a lot on the size of your project. Flask is more suited to smaller, less complicated applications, while Django is designed for larger, more complex, and high-load applications. If you want to create a simple portfolio website, Flask might be the best option.
It’s not only more adequate for small projects, but also the easier of the two to learn. Flask is more Pythonic than Django because the code of a Flask web application is, most of the time, more explicit than Django code. This makes Flask easy for Python coders to pick up.

That said, if you plan to create a more complex website with multiple functionalities, you should use Django. Also, if you’re into web development, learning Django might be more worthwhile since it’s more popular than Flask. Below there’s a web search comparison I made on Google Trends about these 2 frameworks over a period of 5 years.

The chart reveals that Django is more popular than Flask. That said, learning any of these frameworks will help you improve your Python skills. You can read a deeper comparison of both frameworks in this article.

There are a lot of free Django and Flask courses available on YouTube. I personally watched this complete Django series where you can learn how to build a blog application. There’s also a Flask series available on the same channel. Another project I tried after learning the basics was this Django Ecommerce Website. After completing those courses you can check this video tutorial that shows an introduction to a very basic portfolio resume website, so you get some inspiration and start building your own.

So far we have successfully built the bones of the website, but to make the website good-looking we need to use other tools. Web developers need a fair knowledge of HTML, CSS, and Javascript to create a website; however, if our goal is to create a basic data science portfolio website, we can save weeks of studying those programming languages by using Bootstrap.

Bootstrap is a collection of HTML, CSS, and JavaScript tools for creating and building web pages and web applications. With Bootstrap, we can focus on the development work, without worrying about design, and get a good-looking website up and running quickly. On top of that, Bootstrap is mobile-friendly, so the website will still look good on phones. This is great! Thanks to Bootstrap, we don’t need to be experts on JavaScript or CSS to make our website better looking (we still need to know at least the basics though). Below you can find some basic Bootstrap templates you can use for your website.

Starter template

Navigation header

Make sure you follow the Django/Flask free courses I mentioned before. There you will find when and how you should implement these Bootstrap templates in your code.

Note: As I mentioned before, you should at least understand the basics of HTML, CSS, and JavaScript code. In my experience, HTML is used more frequently, so consider checking this free HTML course.

So far, the website we built can only be accessed on our local machine. Naturally, we want to make our website available to anyone with internet access, so we’re going to use Heroku to take care of it.

Heroku is a platform that allows us to easily deploy and host applications without setting up everything on our own manually. To work with Heroku you have to create an account first. After this, you need to do a couple of things to set up Heroku. This process might take some minutes, so check this video tutorial to learn step-by-step how to deploy your web application with Heroku.

Note: Although you can host a project for free with Heroku, if there’s no web traffic in a 30-minute period, they will put your website to sleep. If someone accesses your website, it will become active after a short delay. To avoid this behavior, you can upgrade to Heroku’s Hobby plan.
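To make the Flask option more concrete, here is a minimal sketch of the kind of app described above. It is an editorial illustration, not code from the article: the route names, template files, and the Procfile line for Heroku are assumptions you would adapt to your own site.

# app.py — minimal Flask portfolio sketch (illustrative only)
from flask import Flask, render_template, request

app = Flask(__name__)

@app.route("/")
def home():
    # templates/home.html would hold the Bootstrap-based landing page
    return render_template("home.html")

@app.route("/portfolio")
def portfolio():
    # the project list could later come from a database or a JSON file
    projects = [
        {"name": "Sample project", "description": "Short description", "url": "https://github.com/you/project"},
    ]
    return render_template("portfolio.html", projects=projects)

@app.route("/contact", methods=["GET", "POST"])
def contact():
    if request.method == "POST":
        # handle the submitted form data (e.g. send an email) here
        pass
    return render_template("contact.html")

if __name__ == "__main__":
    app.run(debug=True)

# For Heroku, a Procfile with a line such as:
#   web: gunicorn app:app
# is a common setup (gunicorn would need to be listed in requirements.txt).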
That’s it! Now you have a good idea of how to build a basic data science portfolio website using Python. With this, you will be able to customize your website and learn beyond the usual Python stuff we use for data science. Join my email list with 3k+ people to get my Python for Data Science Cheat Sheet I use in all my tutorials (Free PDF)
[ { "code": null, "e": 447, "s": 172, "text": "As a data scientist, you need to have a portfolio website that helps you showcase your projects and profile in one place. You might already have a Github and LinkedIn page, but don’t expect a potential employer to look through all your code and posts to know more about you." }, { "code": null, "e": 657, "s": 447, "text": "Building a portfolio website can be as easy as using a WordPress or GitHub template; however, creating a website on your own will help you add more customization while learning new things you can do in Python." }, { "code": null, "e": 966, "s": 657, "text": "Although building a website usually requires knowledge beyond Python, we don’t need to become experts in other programming languages to create a portfolio website. This is why I decided to do this guide to walk you through the essential stuff you need to build and deploy your data science portfolio website." }, { "code": null, "e": 1169, "s": 966, "text": "Table of Contents1. Planning the Website - What to include - Get a Custom Domain Name2. How to Build the Website - Backends: Flask vs Django - Front End: Bootstrap (+ HTML, CSS, Javascript)3. Deployment" }, { "code": null, "e": 1373, "s": 1169, "text": "Before start writing the code to build your portfolio website, take some time to plan what sections the website will have. Make sure your portfolio website has at least some of the sections listed below." }, { "code": null, "e": 1634, "s": 1373, "text": "Portfolio: This will be the most important page of the website. List the most important data science projects you’ve finished so far. Add a brief description and a link to the source code. If you’ve written an article about that project, then include the link." }, { "code": null, "e": 1743, "s": 1634, "text": "About me: This section will help people know about your skills, background, and anything relevant about you." }, { "code": null, "e": 1946, "s": 1743, "text": "Get in touch: Here you should add a form, so people can fill in their name, email, and write a message to get in contact with you. In addition to this, you can add the links to your GitHub and LinkedIn." }, { "code": null, "e": 2269, "s": 1946, "text": "If you finished many data science projects and have a lot to write about your experience as a data scientist, create one page per section and then add a landing page that summarizes the other sections. However, if you have little to write about, then one page should be enough to contain all the sections mentioned before." }, { "code": null, "e": 2342, "s": 2269, "text": "If necessary add more sections to make your portfolio website stand out." }, { "code": null, "e": 2515, "s": 2342, "text": "A domain name is the location of a website. It’s the text that a user types into a browser window to reach a website; for example, the domain name for Google is google.com." }, { "code": null, "e": 2764, "s": 2515, "text": "Although we haven't built the website yet, at least you should check the availability of the domain name you wish to have. There are many domain registrars such as GoDaddy and NameCheap where you can see whether the domain name is available or not." }, { "code": null, "e": 3022, "s": 2764, "text": "In case the domain name is available, don’t wait until you finish building the website to buy it, otherwise, it might no longer be available after weeks or months. Domain names are usually cheap so in case something goes wrong, you will not lose much money." 
}, { "code": null, "e": 3399, "s": 3022, "text": "Two of the most popular frameworks to build websites with Python are Flask and Django. Django is a high-level Python web framework that enables the developer to create websites without third-party libraries and tools. In contrast, Flask is a microframework that offers the basic features of a web app. It aims to maintain its lightweight simplicity and still extensible usage." }, { "code": null, "e": 3620, "s": 3399, "text": "Which one should you use? This will depend a lot on the size of your project. Flask is more suited to smaller, less complicated applications, while Django is designed for larger, more complex, and high-load applications." }, { "code": null, "e": 3962, "s": 3620, "text": "If you want to create a simple portfolio website, Flask might be the best option. It’s not only more adequate for small projects, but also the easiest to learn. Flask is more Pythonic than Django because the code of flask Web Application most of the time is more explicit than Django code. This makes Flask easy for Python coders to pick up." }, { "code": null, "e": 4288, "s": 3962, "text": "That said, if you plan to create a more complex website with multiple functionalities, you should Django. Also, if you’re into web development, learning Django might be worthier since it’s more popular than Flask. Below there’s a web search comparison I made on Google Trend about these 2 frameworks over a period of 5 years." }, { "code": null, "e": 4505, "s": 4288, "text": "The chart above reveals more popularity of Django over Flask. That said, learning any of these frameworks will help you improve your Python skills. You can read a deeper comparison of both frameworks in this article." }, { "code": null, "e": 5013, "s": 4505, "text": "There are a lot of free Django and Flask courses available on YouTube. I personally watched this complete Django series where you can learn how to build a blog application. There’s also a Flask series available on the same channel. Another project I tried after learning the basics was this Django Ecommerce Website. After completing those courses you can check this video tutorial that shows an introduction to a very basic portfolio resume website, so you get some inspiration and start building your own." }, { "code": null, "e": 5138, "s": 5013, "text": "So far we have successfully built the bones of the website, but to make the website good-looking we need to use other tools." }, { "code": null, "e": 5378, "s": 5138, "text": "Web developers need a fair knowledge of HTML, CSS, and Javascript to create a website; however, if our goal is to create a basic data science portfolio website, we can save weeks studying any of those programming languages using Bootstrap." }, { "code": null, "e": 5729, "s": 5378, "text": "Bootstrap is a collection of HTML, CSS, and JavaScript tools for creating and building web pages and web applications. With Bootstrap, we can focus on the development work, without worrying about design, and get a good-looking website up and running quickly. On top of that, Bootstrap is mobile-friendly, so the website will still look good on phones" }, { "code": null, "e": 5981, "s": 5729, "text": "This is great! Thanks to Bootstrap, we don’t need to be experts on JavaScript or CSS to make your website better looking (we still need to know at least the basics though) Below you can find some basic Bootstrap templates you can use for your website." 
}, { "code": null, "e": 5998, "s": 5981, "text": "Starter template" }, { "code": null, "e": 6016, "s": 5998, "text": "Navigation header" }, { "code": null, "e": 6181, "s": 6016, "text": "Make sure you follow the Django/Flask free courses I mentioned before. There you will find when and how you should implement these Bootstrap templates in your code." }, { "code": null, "e": 6378, "s": 6181, "text": "Note: As I mentioned before, you should at least understand the basics of HTML, CSS, and JavaScript code. In my experience, HTML is used more frequently so consider checking this free HTML course." }, { "code": null, "e": 6582, "s": 6378, "text": "So far the website we built can only be accessible on our local machine. Naturally, we want to make our website available for anyone with internet access, so we’re going to use Heroku to take care of it." }, { "code": null, "e": 6966, "s": 6582, "text": "Heroku is a platform that allows us to easily deploy and host applications without setting up everything on our own manually. To work with Heroku you have to create an account first. After this, you need to do a couple of things to set up Heroku. This process might take some minutes, so check this video tutorial to learn step-by-step how to deploy your web application with Heroku." }, { "code": null, "e": 7253, "s": 6966, "text": "Note: Although you can host a project for free with Heroku, if there’s no web traffic in a 30-minute period, they will put your website to sleep. If someone accesses your website, it will become active after a short delay. To avoid this behavior, you can upgrade to Heroku’s Hobby plan." }, { "code": null, "e": 7477, "s": 7253, "text": "That’s it! Now you have a good idea of how to build a basic data science portfolio website using Python. With this, you will be able to customize your website and learn beyond the usual Python stuff we use for data science." } ]
How can we print all the capital letters of a given string in Java?
The Character class is a subclass of the Object class and it wraps a value of the primitive type char in an object. An object of type Character contains a single field whose type is char.

We can print all the uppercase letters by iterating over the characters of a string in a loop and checking whether each character is an uppercase letter using the isUpperCase() method, which is a static method of the Character class.

public static boolean isUpperCase(char ch)

public class PrintUpperCaseLetterStringTest {
   public static void main(String[] args) {
      String str = "Welcome To Tutorials Point India";
      for(int i = 0; i < str.length(); i++) {
         if(Character.isUpperCase(str.charAt(i))) {
            System.out.println(str.charAt(i));
         }
      }
   }
}

W
T
T
P
I
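As a small aside (not part of the original answer), the same result can be obtained more compactly with the streams API available since Java 8, using the same Character.isUpperCase check as a filter:

public class PrintUpperCaseLettersWithStreams {
   public static void main(String[] args) {
      String str = "Welcome To Tutorials Point India";
      // chars() yields an IntStream of char values; filter keeps only
      // uppercase letters and each one is printed on its own line.
      str.chars()
         .filter(Character::isUpperCase)
         .forEach(c -> System.out.println((char) c));
   }
}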
[ { "code": null, "e": 1252, "s": 1062, "text": "The Character class is a subclass of Object class and it wraps a value of the primitive type char in an object. An object of type Character class contains a single field whose type is char." }, { "code": null, "e": 1477, "s": 1252, "text": "We can print all the uppercase letters by iterating the characters of a string in a loop and check individual characters are uppercase letters or not using isUpperCase() method and it is a static method of a Character class." }, { "code": null, "e": 1520, "s": 1477, "text": "public static boolean isUpperCase(char ch)" }, { "code": null, "e": 1838, "s": 1520, "text": "public class PrintUpperCaseLetterStringTest {\n public static void main(String[] args) {\n String str = \"Welcome To Tutorials Point India\";\n for(int i = 0; i < str.length(); i++) {\n if(Character.isUpperCase(str.charAt(i))) {\n System.out.println(str.charAt(i));\n }\n }\n }\n}" }, { "code": null, "e": 1848, "s": 1838, "text": "W\nT\nT\nP\nI" } ]
DLL - Quick Guide
Dynamic linking is a mechanism that links applications to libraries at run time. The libraries remain in their own files and are not copied into the executable files of the applications. DLLs link to an application when the application is run, rather than when it is created. DLLs may contain links to other DLLs. Many times, DLLs are placed in files with different extensions such as .EXE, .DRV or .DLL.

Given below are a few advantages of having DLL files.

DLL files don't get loaded into the RAM together with the main program; they don't occupy space unless required. When a DLL file is needed, it is loaded and run. For example, as long as a user of Microsoft Word is editing a document, the printer DLL file is not required in RAM. If the user decides to print the document, then the Word application causes the printer DLL file to be loaded and run.

A DLL helps promote developing modular programs. It helps you develop large programs that require multiple language versions or a program that requires modular architecture. An example of a modular program is an accounting program having many modules that can be dynamically loaded at run-time.

When a function within a DLL needs an update or a fix, the deployment and installation of the DLL does not require the program to be relinked with the DLL. Additionally, if multiple programs use the same DLL, then all of them benefit from the update or the fix. This benefit is more noticeable when you use a third-party DLL that is regularly updated or fixed.

Applications and DLLs can link to other DLLs automatically, if the DLL linkage is specified in the IMPORTS section of the module definition file as a part of the compile. Else, you can explicitly load them using the Windows LoadLibrary function. Some well-known Windows DLLs are:

COMDLG32.DLL - Controls the dialog boxes.

GDI32.DLL - Contains numerous functions for drawing graphics, displaying text, and managing fonts.

KERNEL32.DLL - Contains hundreds of functions for the management of memory and various processes.

USER32.DLL - Contains numerous user interface functions. Involved in the creation of program windows and their interactions with each other.

First, we will discuss the issues and the requirements that you should consider while developing your own DLLs.

When you load a DLL in an application, two methods of linking let you call the exported DLL functions. The two methods of linking are: load-time dynamic linking and run-time dynamic linking.

In load-time dynamic linking, an application makes explicit calls to the exported DLL functions like local functions. To use load-time dynamic linking, provide a header (.h) file and an import library (.lib) file when you compile and link the application. When you do this, the linker will provide the system with the information that is required to load the DLL and resolve the exported DLL function locations at load time.

In runtime dynamic linking, an application calls either the LoadLibrary function or the LoadLibraryEx function to load the DLL at runtime. After the DLL is successfully loaded, you use the GetProcAddress function to obtain the address of the exported DLL function that you want to call.
When you use runtime dynamic linking, you do not need an import library file.

The following list describes the application criteria for choosing between load-time dynamic linking and runtime dynamic linking:

Startup performance : If the initial startup performance of the application is important, you should use run-time dynamic linking.

Ease of use : In load-time dynamic linking, the exported DLL functions are like local functions. It helps you call these functions easily.

Application logic : In runtime dynamic linking, an application can branch to load different modules as required. This is important when you develop multiple-language versions.

When you create a DLL, you can optionally specify an entry point function. The entry point function is called when processes or threads attach themselves to the DLL or detach themselves from the DLL. You can use the entry point function to initialize or destroy data structures as required by the DLL. Additionally, if the application is multithreaded, you can use thread local storage (TLS) to allocate memory that is private to each thread in the entry point function. The following code is an example of the DLL entry point function.

BOOL APIENTRY DllMain(
   HANDLE hModule,             // Handle to DLL module
   DWORD ul_reason_for_call,
   LPVOID lpReserved )         // Reserved
{
   switch ( ul_reason_for_call )
   {
      case DLL_PROCESS_ATTACH:
         // A process is loading the DLL.
         break;
      case DLL_THREAD_ATTACH:
         // A process is creating a new thread.
         break;
      case DLL_THREAD_DETACH:
         // A thread exits normally.
         break;
      case DLL_PROCESS_DETACH:
         // A process unloads the DLL.
         break;
   }
   return TRUE;
}

When the entry point function returns a FALSE value, the application will not start if you are using load-time dynamic linking. If you are using runtime dynamic linking, only the individual DLL will not load.

The entry point function should only perform simple initialization tasks and should not call any other DLL loading or termination functions. For example, in the entry point function, you should not directly or indirectly call the LoadLibrary function or the LoadLibraryEx function. Additionally, you should not call the FreeLibrary function when the process is terminating.

WARNING : In multithreaded applications, make sure that access to the DLL global data is synchronized (thread safe) to avoid possible data corruption. To do this, use TLS to provide unique data for each thread.

To export DLL functions, you can either add a function keyword to the exported DLL functions or create a module definition (.def) file that lists the exported DLL functions.

To use a function keyword, you must declare each function that you want to export with the following keyword:

__declspec(dllexport)

To use exported DLL functions in the application, you must declare each function that you want to import with the following keyword:

__declspec(dllimport)

Typically, you would use one header file with a define statement and an ifdef statement to separate the export statement and the import statement. You can also use a module definition file to declare exported DLL functions.
When you use a module definition file, you do not have to add the function keyword to the exported DLL functions. In the module definition file, you declare the LIBRARY statement and the EXPORTS statement for the DLL. The following code is an example of a definition file.

// SampleDLL.def
//
LIBRARY "sampleDLL"

EXPORTS
   HelloWorld

In Microsoft Visual C++ 6.0, you can create a DLL by selecting either the Win32 Dynamic-Link Library project type or the MFC AppWizard (dll) project type. The following code is an example of a DLL that was created in Visual C++ by using the Win32 Dynamic-Link Library project type.
When you open a program in Dependency Walker, the Dependency Walker performs the following checks: Checks for missing DLLs. Checks for program files or DLLs that are not valid. Checks that import functions and export functions match. Checks for circular dependency errors. Checks for modules that are not valid because the modules are for a different operating system. By using Dependency Walker, you can document all the DLLs that a program uses. It may help prevent and correct DLL problems that may occur in the future. Dependency Walker is located in the following directory when you install Microsoft Visual Studio 6.0: drive\Program Files\Microsoft Visual Studio\Common\Tools The DLL Universal Problem Solver (DUPS) tool is used to audit, compare, document, and display DLL information. The following list describes the utilities that make up the DUPS tool: Dlister.exe - This utility enumerates all the DLLs on the computer and logs the information to a text file or to a database file. Dlister.exe - This utility enumerates all the DLLs on the computer and logs the information to a text file or to a database file. Dcomp.exe - This utility compares the DLLs that are listed in two text files and produces a third text file that contains the differences. Dcomp.exe - This utility compares the DLLs that are listed in two text files and produces a third text file that contains the differences. Dtxt2DB.exe - This utility loads the text files that are created by using the Dlister.exe utility and the Dcomp.exe utility into the dllHell database. Dtxt2DB.exe - This utility loads the text files that are created by using the Dlister.exe utility and the Dcomp.exe utility into the dllHell database. DlgDtxt2DB.exe - This utility provides a graphical user interface (GUI) version of the Dtxt2DB.exe utility. DlgDtxt2DB.exe - This utility provides a graphical user interface (GUI) version of the Dtxt2DB.exe utility. Keep the following tips in mind while writing a DLL: Use proper calling convention (C or stdcall). Use proper calling convention (C or stdcall). Be aware of the correct order of arguments passed to the function. Be aware of the correct order of arguments passed to the function. NEVER resize arrays or concatenate strings using the arguments passed directly to a function. Remember, the parameters you pass are LabVIEW data. Changing array or string sizes may result in a crash by overwriting other data stored in LabVIEW memory. You MAY resize arrays or concatenate strings if you pass a LabVIEW Array Handle or LabVIEW String Handle and are using the Visual C++ compiler or Symantec compiler to compile your DLL. NEVER resize arrays or concatenate strings using the arguments passed directly to a function. Remember, the parameters you pass are LabVIEW data. Changing array or string sizes may result in a crash by overwriting other data stored in LabVIEW memory. You MAY resize arrays or concatenate strings if you pass a LabVIEW Array Handle or LabVIEW String Handle and are using the Visual C++ compiler or Symantec compiler to compile your DLL. While passing strings to a function, select the correct type of string to pass. C or Pascal or LabVIEW string Handle. While passing strings to a function, select the correct type of string to pass. C or Pascal or LabVIEW string Handle. Pascal strings are limited to 255 characters in length. Pascal strings are limited to 255 characters in length. C strings are NULL terminated. 
If your DLL function returns numeric data in a binary string format (for example, via GPIB or the serial port), it may return NULL values as a part of the data string. In such cases, passing arrays of short (8-bit) integers is most reliable. C strings are NULL terminated. If your DLL function returns numeric data in a binary string format (for example, via GPIB or the serial port), it may return NULL values as a part of the data string. In such cases, passing arrays of short (8-bit) integers is most reliable. If you are working with arrays or strings of data, ALWAYS pass a buffer or array that is large enough to hold any results placed in the buffer by the function unless you are passing them as LabVIEW handles, in which case you can resize them using CIN functions under Visual C++ or Symantec compiler. If you are working with arrays or strings of data, ALWAYS pass a buffer or array that is large enough to hold any results placed in the buffer by the function unless you are passing them as LabVIEW handles, in which case you can resize them using CIN functions under Visual C++ or Symantec compiler. List DLL functions in the EXPORTS section of the module definition file if you are using _stdcall. List DLL functions in the EXPORTS section of the module definition file if you are using _stdcall. List DLL functions that other applications call in the module definition file EXPORTS section or to include the _declspec (dllexport) keyword in the function declaration. List DLL functions that other applications call in the module definition file EXPORTS section or to include the _declspec (dllexport) keyword in the function declaration. If you use a C++ compiler, export functions with the extern .C.{} statement in your header file in order to prevent name mangling. If you use a C++ compiler, export functions with the extern .C.{} statement in your header file in order to prevent name mangling. If you are writing your own DLL, you should not recompile a DLL while the DLL is loaded into the memory by another application. Before recompiling a DLL, ensure that all applications using that particular DLL are unloaded from the memory. It ensures that the DLL itself is not loaded into the memory. You may fail to rebuild correctly if you forget this and your compiler does not warn you. If you are writing your own DLL, you should not recompile a DLL while the DLL is loaded into the memory by another application. Before recompiling a DLL, ensure that all applications using that particular DLL are unloaded from the memory. It ensures that the DLL itself is not loaded into the memory. You may fail to rebuild correctly if you forget this and your compiler does not warn you. Test your DLLs with another program to ensure that the function (and the DLL) behave correctly. Testing it with the debugger of your compiler or a simple C program in which you can call a function in a DLL will help you identify whether possible difficulties are inherent to the DLL or LabVIEW related. Test your DLLs with another program to ensure that the function (and the DLL) behave correctly. Testing it with the debugger of your compiler or a simple C program in which you can call a function in a DLL will help you identify whether possible difficulties are inherent to the DLL or LabVIEW related. We have seen how to write a DLL and how to create a "Hello World" program. That example must have given you an idea about the basic concept of creating a DLL. 
Here, we will give a description of creating DLLs using Delphi, Borland C++, and again VC++. Let us take these examples one by one. How to write and call DLL's within Delphi How to write and call DLL's within Delphi Making DLL's from the Borland C++ Builder IDE Making DLL's from the Borland C++ Builder IDE Making DLL's in Microsoft Visual C++ 6.0 Making DLL's in Microsoft Visual C++ 6.0 Print Add Notes Bookmark this page
[ { "code": null, "e": 1966, "s": 1652, "text": "Dynamic linking is a mechanism that links applications to libraries at run time. The libraries remain in their own files and are not copied into the executable files of the applications. DLLs link to an application when the application is run, rather than when it is created. DLLs may contain links to other DLLs." }, { "code": null, "e": 2057, "s": 1966, "text": "Many times, DLLs are placed in files with different extensions such as .EXE, .DRV or .DLL." }, { "code": null, "e": 2111, "s": 2057, "text": "Given below are a few advantages of having DLL files." }, { "code": null, "e": 2509, "s": 2111, "text": "DLL files don't get loaded into the RAM together with the main program; they don't occupy space unless required. When a DLL file is needed, it is loaded and run. For example, as long as a user of Microsoft Word is editing a document, the printer DLL file is not required in RAM. If the user decides to print the document, then the Word application causes the printer DLL file to be loaded and run." }, { "code": null, "e": 2804, "s": 2509, "text": "A DLL helps promote developing modular programs. It helps you develop large programs that require multiple language versions or a program that requires modular architecture. An example of a modular program is an accounting program having many modules that can be dynamically loaded at run-time." }, { "code": null, "e": 3176, "s": 2804, "text": "When a function within a DLL needs an update or a fix, the deployment and installation of the DLL does not require the program to be relinked with the DLL. Additionally, if multiple programs use the same DLL, then all of them get benefited from the update or the fix. This issue may occur more frequently when you use a third-party DLL that is regularly updated or fixed." }, { "code": null, "e": 3422, "s": 3176, "text": "Applications and DLLs can link to other DLLs automatically, if the DLL linkage is specified in the IMPORTS section of the module definition file as a part of the compile. Else, you can explicitly load them using the Windows LoadLibrary function." }, { "code": null, "e": 3464, "s": 3422, "text": "COMDLG32.DLL - Controls the dialog boxes." }, { "code": null, "e": 3506, "s": 3464, "text": "COMDLG32.DLL - Controls the dialog boxes." }, { "code": null, "e": 3605, "s": 3506, "text": "GDI32.DLL - Contains numerous functions for drawing graphics, displaying text, and managing fonts." }, { "code": null, "e": 3704, "s": 3605, "text": "GDI32.DLL - Contains numerous functions for drawing graphics, displaying text, and managing fonts." }, { "code": null, "e": 3802, "s": 3704, "text": "KERNEL32.DLL - Contains hundreds of functions for the management of memory and various processes." }, { "code": null, "e": 3900, "s": 3802, "text": "KERNEL32.DLL - Contains hundreds of functions for the management of memory and various processes." }, { "code": null, "e": 4041, "s": 3900, "text": "USER32.DLL - Contains numerous user interface functions. Involved in the creation of program windows and their interactions with each other." }, { "code": null, "e": 4182, "s": 4041, "text": "USER32.DLL - Contains numerous user interface functions. Involved in the creation of program windows and their interactions with each other." }, { "code": null, "e": 4294, "s": 4182, "text": "First, we will discuss the issues and the requirements that you should consider while developing your own DLLs." 
}, { "code": null, "e": 4429, "s": 4294, "text": "When you load a DLL in an application, two methods of linking let you call the exported DLL functions. The two methods of linking are:" }, { "code": null, "e": 4460, "s": 4429, "text": "load-time dynamic linking, and" }, { "code": null, "e": 4486, "s": 4460, "text": "run-time dynamic linking." }, { "code": null, "e": 4912, "s": 4486, "text": "In load-time dynamic linking, an application makes explicit calls to the exported DLL functions like local functions. To use load-time dynamic linking, provide a header (.h) file and an import library (.lib) file, when you compile and link the application. When you do this, the linker will provide the system with the information that is required to load the DLL and resolve the exported DLL function locations at load time." }, { "code": null, "e": 5278, "s": 4912, "text": "In runtime dynamic linking, an application calls either the LoadLibrary function or the LoadLibraryEx function to load the DLL at runtime. After the DLL is successfully loaded, you use the GetProcAddress function, to obtain the address of the exported DLL function that you want to call. When you use runtime dynamic linking, you do not need an import library file." }, { "code": null, "e": 5408, "s": 5278, "text": "The following list describes the application criteria for choosing between load-time dynamic linking and runtime dynamic linking:" }, { "code": null, "e": 5539, "s": 5408, "text": "Startup performance : If the initial startup performance of the application is important, you should use run-time dynamic linking." }, { "code": null, "e": 5670, "s": 5539, "text": "Startup performance : If the initial startup performance of the application is important, you should use run-time dynamic linking." }, { "code": null, "e": 5809, "s": 5670, "text": "Ease of use : In load-time dynamic linking, the exported DLL functions are like local functions. It helps you call these functions easily." }, { "code": null, "e": 5948, "s": 5809, "text": "Ease of use : In load-time dynamic linking, the exported DLL functions are like local functions. It helps you call these functions easily." }, { "code": null, "e": 6124, "s": 5948, "text": "Application logic : In runtime dynamic linking, an application can branch to load different modules as required. This is important when you develop multiple-language versions." }, { "code": null, "e": 6300, "s": 6124, "text": "Application logic : In runtime dynamic linking, an application can branch to load different modules as required. This is important when you develop multiple-language versions." }, { "code": null, "e": 6602, "s": 6300, "text": "When you create a DLL, you can optionally specify an entry point function. The entry point function is called when processes or threads attach themselves to the DLL or detach themselves from the DLL. You can use the entry point function to initialize or destroy data structures as required by the DLL." }, { "code": null, "e": 6837, "s": 6602, "text": "Additionally, if the application is multithreaded, you can use thread local storage (TLS) to allocate memory that is private to each thread in the entry point function. The following code is an example of the DLL entry point function." 
}, { "code": null, "e": 7354, "s": 6837, "text": "BOOL APIENTRY DllMain(\nHANDLE hModule,\t// Handle to DLL module DWORD ul_reason_for_call, LPVOID lpReserved ) // Reserved\n{\n switch ( ul_reason_for_call )\n {\n case DLL_PROCESS_ATTACHED:\n // A process is loading the DLL.\n break;\n case DLL_THREAD_ATTACHED:\n // A process is creating a new thread.\n break;\n case DLL_THREAD_DETACH:\n // A thread exits normally.\n break;\n case DLL_PROCESS_DETACH:\n // A process unloads the DLL.\n break;\n }\n return TRUE;\n}" }, { "code": null, "e": 7563, "s": 7354, "text": "When the entry point function returns a FALSE value, the application will not start if you are using load-time dynamic linking. If you are using runtime dynamic linking, only the individual DLL will not load." }, { "code": null, "e": 7938, "s": 7563, "text": "The entry point function should only perform simple initialization tasks and should not call any other DLL loading or termination functions. For example, in the entry point function, you should not directly or indirectly call the LoadLibrary function or the LoadLibraryEx function. Additionally, you should not call the FreeLibrary function when the process is terminating.\n" }, { "code": null, "e": 8149, "s": 7938, "text": "WARNING : In multithreaded applications, make sure that access to the DLL global data is synchronized (thread safe) to avoid possible data corruption. To do this, use TLS to provide unique data for each thread." }, { "code": null, "e": 8323, "s": 8149, "text": "To export DLL functions, you can either add a function keyword to the exported DLL functions or create a module definition (.def) file that lists the exported DLL functions." }, { "code": null, "e": 8433, "s": 8323, "text": "To use a function keyword, you must declare each function that you want to export with the following keyword:" }, { "code": null, "e": 8456, "s": 8433, "text": " __declspec(dllexport)" }, { "code": null, "e": 8589, "s": 8456, "text": "To use exported DLL functions in the application, you must declare each function that you want to import with the following keyword:" }, { "code": null, "e": 8613, "s": 8589, "text": " __declspec(dllimport)\n" }, { "code": null, "e": 8760, "s": 8613, "text": "Typically, you would use one header file having define statement and an ifdef statement to separate the export statement and the import statement." }, { "code": null, "e": 9110, "s": 8760, "text": "You can also use a module definition file to declare exported DLL functions. When you use a module definition file, you do not have to add the function keyword to the exported DLL functions. In the module definition file, you declare the LIBRARY statement and the EXPORTS statement for the DLL. The following code is an example of a definition file." }, { "code": null, "e": 9174, "s": 9110, "text": "// SampleDLL.def\n//\nLIBRARY \"sampleDLL\"\n\nEXPORTS\n HelloWorld\n" }, { "code": null, "e": 9329, "s": 9174, "text": "In Microsoft Visual C++ 6.0, you can create a DLL by selecting either the Win32 Dynamic-Link Library project type or the MFC AppWizard (dll) project type." }, { "code": null, "e": 9456, "s": 9329, "text": "The following code is an example of a DLL that was created in Visual C++ by using the Win32 Dynamic-Link Library project type." 
}, { "code": null, "e": 9741, "s": 9456, "text": "// SampleDLL.cpp\n\n#include \"stdafx.h\"\n#define EXPORTING_DLL\n#include \"sampleDLL.h\"\n\nBOOL APIENTRY DllMain( HANDLE hModule, DWORD ul_reason_for_call, LPVOID lpReserved )\n{\n return TRUE;\n}\n\nvoid HelloWorld()\n{\n MessageBox( NULL, TEXT(\"Hello World\"), \n TEXT(\"In a DLL\"), MB_OK);\n}" }, { "code": null, "e": 9938, "s": 9741, "text": "// File: SampleDLL.h\n//\n#ifndef INDLL_H\n#define INDLL_H\n\n#ifdef EXPORTING_DLL\nextern __declspec(dllexport) void HelloWorld() ;\n#else\nextern __declspec(dllimport) void HelloWorld() ;\n#endif\n\n#endif" }, { "code": null, "e": 10061, "s": 9938, "text": "The following code is an example of a Win32 Application project that calls the exported DLL function in the SampleDLL DLL." }, { "code": null, "e": 10269, "s": 10061, "text": "// SampleApp.cpp \n\n#include \"stdafx.h\"\n#include \"sampleDLL.h\"\n\nint APIENTRY WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)\n{ \t\n HelloWorld();\n return 0;\n}" }, { "code": null, "e": 10407, "s": 10269, "text": "NOTE : In load-time dynamic linking, you must link the SampleDLL.lib import library that is created when you build the SampleDLL project." }, { "code": null, "e": 10535, "s": 10407, "text": "In runtime dynamic linking, you use code that is similar to the following code to call the SampleDLL.dll exported DLL function." }, { "code": null, "e": 10854, "s": 10535, "text": "...\ntypedef VOID (*DLLPROC) (LPTSTR);\n...\nHINSTANCE hinstDLL;\nDLLPROC HelloWorld;\nBOOL fFreeDLL;\n\nhinstDLL = LoadLibrary(\"sampleDLL.dll\");\nif (hinstDLL != NULL)\n{\n HelloWorld = (DLLPROC) GetProcAddress(hinstDLL, \"HelloWorld\");\n\t\n if (HelloWorld != NULL)\n (HelloWorld);\n\n fFreeDLL = FreeLibrary(hinstDLL);\n}\n..." }, { "code": null, "e": 11009, "s": 10854, "text": "When you compile and link the SampleDLL application, the Windows operating system searches for the SampleDLL DLL in the following locations in this order:" }, { "code": null, "e": 11032, "s": 11009, "text": "The application folder" }, { "code": null, "e": 11055, "s": 11032, "text": "The application folder" }, { "code": null, "e": 11074, "s": 11055, "text": "The current folder" }, { "code": null, "e": 11093, "s": 11074, "text": "The current folder" }, { "code": null, "e": 11200, "s": 11093, "text": "The Windows system folder (The GetSystemDirectory function returns the path of the Windows system folder)." }, { "code": null, "e": 11307, "s": 11200, "text": "The Windows system folder (The GetSystemDirectory function returns the path of the Windows system folder)." }, { "code": null, "e": 11401, "s": 11307, "text": "The Windows folder (The GetWindowsDirectory function returns the path of the Windows folder)." }, { "code": null, "e": 11495, "s": 11401, "text": "The Windows folder (The GetWindowsDirectory function returns the path of the Windows folder)." }, { "code": null, "e": 11801, "s": 11495, "text": "In order to use a DLL, it has to be registered by having appropriate references entered in the Registry. It sometimes happens that a Registry reference gets corrupted and the functions of the DLL cannot be used anymore. The DLL can be re-registered by opening Start-Run and entering the following command:" }, { "code": null, "e": 11824, "s": 11801, "text": "regsvr32 somefile.dll\n" }, { "code": null, "e": 12040, "s": 11824, "text": "This command assumes that somefile.dll is in a directory or folder that is in the PATH. Otherwise, the full path for the DLL must be used. 
A DLL file can also be unregistered by using the switch \"/u\" as shown below." }, { "code": null, "e": 12066, "s": 12040, "text": "regsvr32 /u somefile.dll\n" }, { "code": null, "e": 12115, "s": 12066, "text": "This can be used to toggle a service on and off." }, { "code": null, "e": 12216, "s": 12115, "text": "Several tools are available to help you troubleshoot DLL problems. Some of them are discussed below." }, { "code": null, "e": 12432, "s": 12216, "text": "The Dependency Walker tool (depends.exe) can recursively scan for all the dependent DLLs that are used by a program. When you open a program in Dependency Walker, the Dependency Walker performs the following checks:" }, { "code": null, "e": 12457, "s": 12432, "text": "Checks for missing DLLs." }, { "code": null, "e": 12510, "s": 12457, "text": "Checks for program files or DLLs that are not valid." }, { "code": null, "e": 12567, "s": 12510, "text": "Checks that import functions and export functions match." }, { "code": null, "e": 12606, "s": 12567, "text": "Checks for circular dependency errors." }, { "code": null, "e": 12702, "s": 12606, "text": "Checks for modules that are not valid because the modules are for a different operating system." }, { "code": null, "e": 12958, "s": 12702, "text": "By using Dependency Walker, you can document all the DLLs that a program uses. It may help prevent and correct DLL problems that may occur in the future. Dependency Walker is located in the following directory when you install Microsoft Visual Studio 6.0:" }, { "code": null, "e": 13016, "s": 12958, "text": "drive\\Program Files\\Microsoft Visual Studio\\Common\\Tools\n" }, { "code": null, "e": 13198, "s": 13016, "text": "The DLL Universal Problem Solver (DUPS) tool is used to audit, compare, document, and display DLL information. The following list describes the utilities that make up the DUPS tool:" }, { "code": null, "e": 13328, "s": 13198, "text": "Dlister.exe - This utility enumerates all the DLLs on the computer and logs the information to a text file or to a database file." }, { "code": null, "e": 13458, "s": 13328, "text": "Dlister.exe - This utility enumerates all the DLLs on the computer and logs the information to a text file or to a database file." }, { "code": null, "e": 13597, "s": 13458, "text": "Dcomp.exe - This utility compares the DLLs that are listed in two text files and produces a third text file that contains the differences." }, { "code": null, "e": 13736, "s": 13597, "text": "Dcomp.exe - This utility compares the DLLs that are listed in two text files and produces a third text file that contains the differences." }, { "code": null, "e": 13887, "s": 13736, "text": "Dtxt2DB.exe - This utility loads the text files that are created by using the Dlister.exe utility and the Dcomp.exe utility into the dllHell database." }, { "code": null, "e": 14038, "s": 13887, "text": "Dtxt2DB.exe - This utility loads the text files that are created by using the Dlister.exe utility and the Dcomp.exe utility into the dllHell database." }, { "code": null, "e": 14146, "s": 14038, "text": "DlgDtxt2DB.exe - This utility provides a graphical user interface (GUI) version of the Dtxt2DB.exe utility." }, { "code": null, "e": 14254, "s": 14146, "text": "DlgDtxt2DB.exe - This utility provides a graphical user interface (GUI) version of the Dtxt2DB.exe utility." 
}, { "code": null, "e": 14307, "s": 14254, "text": "Keep the following tips in mind while writing a DLL:" }, { "code": null, "e": 14353, "s": 14307, "text": "Use proper calling convention (C or stdcall)." }, { "code": null, "e": 14399, "s": 14353, "text": "Use proper calling convention (C or stdcall)." }, { "code": null, "e": 14466, "s": 14399, "text": "Be aware of the correct order of arguments passed to the function." }, { "code": null, "e": 14533, "s": 14466, "text": "Be aware of the correct order of arguments passed to the function." }, { "code": null, "e": 14969, "s": 14533, "text": "NEVER resize arrays or concatenate strings using the arguments passed directly to a function. Remember, the parameters you pass are LabVIEW data. Changing array or string sizes may result in a crash by overwriting other data stored in LabVIEW memory. You MAY resize arrays or concatenate strings if you pass a LabVIEW Array Handle or LabVIEW String Handle and are using the Visual C++ compiler or Symantec compiler to compile your DLL." }, { "code": null, "e": 15405, "s": 14969, "text": "NEVER resize arrays or concatenate strings using the arguments passed directly to a function. Remember, the parameters you pass are LabVIEW data. Changing array or string sizes may result in a crash by overwriting other data stored in LabVIEW memory. You MAY resize arrays or concatenate strings if you pass a LabVIEW Array Handle or LabVIEW String Handle and are using the Visual C++ compiler or Symantec compiler to compile your DLL." }, { "code": null, "e": 15523, "s": 15405, "text": "While passing strings to a function, select the correct type of string to pass. C or Pascal or LabVIEW string Handle." }, { "code": null, "e": 15641, "s": 15523, "text": "While passing strings to a function, select the correct type of string to pass. C or Pascal or LabVIEW string Handle." }, { "code": null, "e": 15697, "s": 15641, "text": "Pascal strings are limited to 255 characters in length." }, { "code": null, "e": 15753, "s": 15697, "text": "Pascal strings are limited to 255 characters in length." }, { "code": null, "e": 16026, "s": 15753, "text": "C strings are NULL terminated. If your DLL function returns numeric data in a binary string format (for example, via GPIB or the serial port), it may return NULL values as a part of the data string. In such cases, passing arrays of short (8-bit) integers is most reliable." }, { "code": null, "e": 16299, "s": 16026, "text": "C strings are NULL terminated. If your DLL function returns numeric data in a binary string format (for example, via GPIB or the serial port), it may return NULL values as a part of the data string. In such cases, passing arrays of short (8-bit) integers is most reliable." }, { "code": null, "e": 16599, "s": 16299, "text": "If you are working with arrays or strings of data, ALWAYS pass a buffer or array that is large enough to hold any results placed in the buffer by the function unless you are passing them as LabVIEW handles, in which case you can resize them using CIN functions under Visual C++ or Symantec compiler." }, { "code": null, "e": 16899, "s": 16599, "text": "If you are working with arrays or strings of data, ALWAYS pass a buffer or array that is large enough to hold any results placed in the buffer by the function unless you are passing them as LabVIEW handles, in which case you can resize them using CIN functions under Visual C++ or Symantec compiler." 
}, { "code": null, "e": 16998, "s": 16899, "text": "List DLL functions in the EXPORTS section of the module definition file if you are using _stdcall." }, { "code": null, "e": 17097, "s": 16998, "text": "List DLL functions in the EXPORTS section of the module definition file if you are using _stdcall." }, { "code": null, "e": 17268, "s": 17097, "text": "List DLL functions that other applications call in the module definition file EXPORTS section or to include the _declspec (dllexport) keyword in the function declaration." }, { "code": null, "e": 17439, "s": 17268, "text": "List DLL functions that other applications call in the module definition file EXPORTS section or to include the _declspec (dllexport) keyword in the function declaration." }, { "code": null, "e": 17570, "s": 17439, "text": "If you use a C++ compiler, export functions with the extern .C.{} statement in your header file in order to prevent name mangling." }, { "code": null, "e": 17701, "s": 17570, "text": "If you use a C++ compiler, export functions with the extern .C.{} statement in your header file in order to prevent name mangling." }, { "code": null, "e": 18092, "s": 17701, "text": "If you are writing your own DLL, you should not recompile a DLL while the DLL is loaded into the memory by another application. Before recompiling a DLL, ensure that all applications using that particular DLL are unloaded from the memory. It ensures that the DLL itself is not loaded into the memory. You may fail to rebuild correctly if you forget this and your compiler does not warn you." }, { "code": null, "e": 18483, "s": 18092, "text": "If you are writing your own DLL, you should not recompile a DLL while the DLL is loaded into the memory by another application. Before recompiling a DLL, ensure that all applications using that particular DLL are unloaded from the memory. It ensures that the DLL itself is not loaded into the memory. You may fail to rebuild correctly if you forget this and your compiler does not warn you." }, { "code": null, "e": 18786, "s": 18483, "text": "Test your DLLs with another program to ensure that the function (and the DLL) behave correctly. Testing it with the debugger of your compiler or a simple C program in which you can call a function in a DLL will help you identify whether possible difficulties are inherent to the DLL or LabVIEW related." }, { "code": null, "e": 19089, "s": 18786, "text": "Test your DLLs with another program to ensure that the function (and the DLL) behave correctly. Testing it with the debugger of your compiler or a simple C program in which you can call a function in a DLL will help you identify whether possible difficulties are inherent to the DLL or LabVIEW related." }, { "code": null, "e": 19248, "s": 19089, "text": "We have seen how to write a DLL and how to create a \"Hello World\" program. That example must have given you an idea about the basic concept of creating a DLL." }, { "code": null, "e": 19341, "s": 19248, "text": "Here, we will give a description of creating DLLs using Delphi, Borland C++, and again VC++." }, { "code": null, "e": 19380, "s": 19341, "text": "Let us take these examples one by one." 
}, { "code": null, "e": 19422, "s": 19380, "text": "How to write and call DLL's within Delphi" }, { "code": null, "e": 19464, "s": 19422, "text": "How to write and call DLL's within Delphi" }, { "code": null, "e": 19510, "s": 19464, "text": "Making DLL's from the Borland C++ Builder IDE" }, { "code": null, "e": 19556, "s": 19510, "text": "Making DLL's from the Borland C++ Builder IDE" }, { "code": null, "e": 19597, "s": 19556, "text": "Making DLL's in Microsoft Visual C++ 6.0" }, { "code": null, "e": 19638, "s": 19597, "text": "Making DLL's in Microsoft Visual C++ 6.0" }, { "code": null, "e": 19645, "s": 19638, "text": " Print" }, { "code": null, "e": 19656, "s": 19645, "text": " Add Notes" } ]
Getting the remainder value after division in Julia - mod() Method - GeeksforGeeks
21 Apr, 2020
The mod() is an inbuilt function in Julia which is used to return the remainder when the specified dividend is divided by the divisor.

Syntax: mod(x, y)

Parameters:
x: Specified dividend.
y: Specified divisor.

Returns: It returns the remainder when the specified dividend is divided by the divisor.

Example 1:

# Julia program to illustrate
# the use of mod() method

# Getting remainder when the specified
# dividend is divided by divisor.
println(mod(0, 3))
println(mod(1, 1))
println(mod(7, 2))

Output:
0
0
1

Example 2:

# Julia program to illustrate
# the use of mod() method

# Getting remainder when the specified
# dividend is divided by divisor.
println(mod(5, 3))
println(mod(10, 2))
println(mod(7.3, 2))
println(mod(1.8, 0))
println(mod(-6, 4))
println(mod(-3, -2))

Output:
2
0
1.2999999999999998
NaN
2
-1
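A point worth noting (an added illustration, not from the original article, but standard Julia behaviour): the sign of mod()'s result follows the divisor, while the related rem() function (and the % operator) follows the dividend. The short sketch below, assuming a standard Julia 1.x environment, makes the difference visible for the negative inputs used in Example 2.

# mod(): result takes the sign of the divisor y
println(mod(-6, 4))   # 2, because -6 = -2*4 + 2
println(mod(6, -4))   # -2

# rem() / %: result takes the sign of the dividend x
println(rem(-6, 4))   # -2, because -6 = -1*4 - 2
println(rem(6, -4))   # 2

This is why Example 2 prints 2 for mod(-6, 4) and -1 for mod(-3, -2).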
[ { "code": null, "e": 24153, "s": 24125, "text": "\n21 Apr, 2020" }, { "code": null, "e": 24280, "s": 24153, "text": "The mod() is an inbuilt function in julia which is used to return remainder when the specified dividend is divided by divisor." }, { "code": null, "e": 24298, "s": 24280, "text": "Syntax: mod(x, y)" }, { "code": null, "e": 24310, "s": 24298, "text": "Parameters:" }, { "code": null, "e": 24333, "s": 24310, "text": "x: Specified dividend." }, { "code": null, "e": 24355, "s": 24333, "text": "y: Specified divisor." }, { "code": null, "e": 24436, "s": 24355, "text": "Returns: It returns remainder when the specified dividend is divided by divisor." }, { "code": null, "e": 24447, "s": 24436, "text": "Example 1:" }, { "code": "# Julia program to illustrate # the use of mod() method # Getting remainder when the specified# dividend is divided by divisor.println(mod(0, 3))println(mod(1, 1))println(mod(7, 2))", "e": 24630, "s": 24447, "text": null }, { "code": null, "e": 24638, "s": 24630, "text": "Output:" }, { "code": null, "e": 24645, "s": 24638, "text": "0\n0\n1\n" }, { "code": null, "e": 24656, "s": 24645, "text": "Example 2:" }, { "code": "# Julia program to illustrate # the use of mod() method # Getting remainder when the specified# dividend is divided by divisor.println(mod(5, 3))println(mod(10, 2))println(mod(7.3, 2))println(mod(1.8, 0))println(mod(-6, 4))println(mod(-3, -2))", "e": 24901, "s": 24656, "text": null }, { "code": null, "e": 24909, "s": 24901, "text": "Output:" }, { "code": null, "e": 24942, "s": 24909, "text": "2\n0\n1.2999999999999998\nNaN\n2\n-1\n" }, { "code": null, "e": 24948, "s": 24942, "text": "Julia" }, { "code": null, "e": 25046, "s": 24948, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 25055, "s": 25046, "text": "Comments" }, { "code": null, "e": 25068, "s": 25055, "text": "Old Comments" }, { "code": null, "e": 25138, "s": 25068, "text": "Get array dimensions and size of a dimension in Julia - size() Method" }, { "code": null, "e": 25211, "s": 25138, "text": "Decision Making in Julia (if, if-else, Nested-if, if-elseif-else ladder)" }, { "code": null, "e": 25259, "s": 25211, "text": "Searching in Array for a given element in Julia" }, { "code": null, "e": 25340, "s": 25259, "text": "Reverse array elements in Julia - reverse(), reverse!() and reverseind() Methods" }, { "code": null, "e": 25368, "s": 25340, "text": "Exception handling in Julia" }, { "code": null, "e": 25438, "s": 25368, "text": "Find maximum element along with its index in Julia - findmax() Method" }, { "code": null, "e": 25497, "s": 25438, "text": "Get number of elements of array in Julia - length() Method" }, { "code": null, "e": 25533, "s": 25497, "text": "Working with Date and Time in Julia" }, { "code": null, "e": 25591, "s": 25533, "text": "Getting last element of an array in Julia - last() Method" } ]
Find the first repeated character in a string using C++.
Suppose we have a string; we have to find the first character that is repeated. So if the string is “Hello Friends”, the first repeated character will be l, because when the second l is scanned, an l has already been seen.

To solve this, we will use the hashing technique. Create one hash set and scan the characters one by one: if a character is not present in the set, insert it; if it is already present, return it as the first repeated character.

#include<iostream>
#include<string>
#include<unordered_set>
using namespace std;

// Returns the first repeated character in s, or '\0' if no character repeats
char getFirstRepeatingChar(string &s) {
   unordered_set<char> hash;
   for (size_t i = 0; i < s.length(); i++) {
      char c = s[i];
      if (hash.find(c) != hash.end())
         return c;         // already seen earlier in the string
      else
         hash.insert(c);   // first occurrence: remember it
   }
   return '\0';
}
int main () {
   string str = "Hello Friends";
   cout << "First repeating character is: " << getFirstRepeatingChar(str);
}

Output:
First repeating character is: l
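One detail the article leaves implicit is the no-repeat case: getFirstRepeatingChar returns the sentinel '\0' when every character is distinct, so callers should test for it. Below is a small self-contained sketch of that check; it is an added illustration using the same technique, not part of the original article.

#include<iostream>
#include<string>
#include<unordered_set>
using namespace std;

// Same technique as above: remember already-seen characters in a hash set
char getFirstRepeatingChar(string &s) {
   unordered_set<char> seen;
   for (char c : s) {
      if (seen.find(c) != seen.end())
         return c;
      seen.insert(c);
   }
   return '\0';   // sentinel: no character repeats
}

int main() {
   string str = "World";   // every character occurs exactly once
   char c = getFirstRepeatingChar(str);
   if (c == '\0')
      cout << "No repeated character in \"" << str << "\"";
   else
      cout << "First repeating character is: " << c;
}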
[ { "code": null, "e": 1259, "s": 1062, "text": "Suppose we have a string; we have to find the first character that is repeated. So is the string is “Hello Friends”, the first repeated character will be l. As there are two l’s one after another." }, { "code": null, "e": 1482, "s": 1259, "text": "To solve this, we will use the hashing technique. Create one hash table, scan each character one by one, if the character is not present, then insert into a hash table, if it is already present, then return that character." }, { "code": null, "e": 1493, "s": 1482, "text": " Live Demo" }, { "code": null, "e": 1925, "s": 1493, "text": "#include<iostream>\n#include<unordered_set>\nusing namespace std;\nchar getFirstRepeatingChar(string &s) {\n unordered_set<char> hash;\n for (int i=0; i<s.length(); i++) {\n char c = s[i];\n if (hash.find(c) != hash.end())\n return c;\n else\n hash.insert(c);\n }\n return '\\0';\n}\nint main () {\n string str = \"Hello Friends\";\n cout << \"First repeating character is: \" << getFirstRepeatingChar(str);\n}" }, { "code": null, "e": 1957, "s": 1925, "text": "First repeating character is: l" } ]
__exit__ in Python - GeeksforGeeks
06 Dec, 2019
A context manager is used for managing the resources used by a program. After we are done using them, we have to release the memory and terminate connections to files. If they are not released, this leads to resource leakage and may cause the system to either slow down or crash. Even if we do not release resources explicitly, context managers perform this task for us.
Refer to the article below to get an idea about the basics of a context manager.
Context Manager
__exit__ is a method of the ContextManager class. The __exit__ method takes care of releasing the resources occupied by the current code snippet. This method must be executed, no matter what, once we are done with the resources. It contains the instructions for properly closing the resource handler so that the resource is freed for further use by other programs in the OS.
If an exception is raised, its type, value, and traceback are passed as arguments to __exit__(). Otherwise, three None arguments are supplied. If __exit__() returns True, the exception is suppressed; if it returns False (or None), the exception is re-raised after __exit__() finishes.

Syntax: __exit__(self, exception_type, exception_value, exception_traceback)

Parameters:
exception_type: the class of the exception (e.g. ZeroDivisionError or FloatingPointError, which are arithmetic exceptions).
exception_value: the exception instance, i.e. the value of the exception that was raised.
exception_traceback: a traceback object, a report that has all of the information needed to locate where the exception occurred.

Example 1:

# Python program creating a
# context manager

class ContextManager():
    def __init__(self):
        print('init method called')

    def __enter__(self):
        print('enter method called')
        return self

    def __exit__(self, exc_type, exc_value, exc_traceback):
        print('exit method called')

with ContextManager() as manager:
    print('with statement block')

Output:
init method called
enter method called
with statement block
exit method called

Example 2: Understanding the parameters of __exit__(). We will create a context manager that divides two numbers. If the divisor is zero, a ZeroDivisionError is raised inside the with block, and its type, value, and traceback are passed to __exit__().

# Python program to demonstrate
# __exit__ method

class Divide:
    def __init__(self, num1, num2):
        self.num1 = num1
        self.num2 = num2

    def __enter__(self):
        print("Inside __enter__")
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        print("\nInside __exit__")
        print("\nExecution type:", exc_type)
        print("\nExecution value:", exc_value)
        print("\nTraceback:", traceback)

    def divide_by_zero(self):
        # causes ZeroDivisionError exception
        print(self.num1 / self.num2)

# Driver's code
with Divide(3, 1) as r:
    r.divide_by_zero()

print("................................................")

# will raise a ZeroDivisionError
with Divide(3, 0) as r:
    r.divide_by_zero()

Output:
Inside __enter__
3.0

Inside __exit__

Execution type: None

Execution value: None

Traceback: None
................................................
Inside __enter__

Inside __exit__

Execution type: <class 'ZeroDivisionError'>

Execution value: division by zero

Traceback: <traceback object at 0x...>
Traceback (most recent call last):
  File "gfg.py", line 32, in <module>
    r.divide_by_zero()
  File "gfg.py", line 21, in divide_by_zero
    print(self.num1 / self.num2)
ZeroDivisionError: division by zero
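The suppression behaviour mentioned above (returning True from __exit__) is not demonstrated in the original examples. The following minimal sketch is an added illustration, assuming only the standard library: it shows how returning True from __exit__ swallows the ZeroDivisionError so that execution continues after the with block.

# Added illustration (not part of the original article):
# returning True from __exit__ suppresses the exception.

class SuppressDivide:
    def __init__(self, num1, num2):
        self.num1 = num1
        self.num2 = num2

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_type is ZeroDivisionError:
            print("Suppressing:", exc_value)
            return True          # exception is swallowed here
        return False             # any other exception propagates

    def divide(self):
        print(self.num1 / self.num2)

with SuppressDivide(3, 0) as r:
    r.divide()

print("Execution continues after the with block")

Running this prints the suppression message followed by the final line, confirming that the ZeroDivisionError never reaches the caller.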
[ { "code": null, "e": 24316, "s": 24288, "text": "\n06 Dec, 2019" }, { "code": null, "e": 24681, "s": 24316, "text": "Context manager is used for managing resources used by the program. After completion of usage, we have to release memory and terminate connections between files. If they are not released then it will lead to resource leakage and may cause the system to either slow down or crash. Even if we do not release resources, context managers implicitly performs this task." }, { "code": null, "e": 24754, "s": 24681, "text": "Refer the below article to get the idea about basics of Context Manager." }, { "code": null, "e": 24770, "s": 24754, "text": "Context Manager" }, { "code": null, "e": 25143, "s": 24770, "text": "This is a method of ContextManager class. The __exit__ method takes care of releasing the resources occupied with the current code snippet. This method must be executed no matter what after we are done with the resources. This method contains instructions for properly closing the resource handler so that the resource is freed for further use by other programs in the OS." }, { "code": null, "e": 25399, "s": 25143, "text": "If an exception is raised; its type, value, and traceback are passed as arguments to __exit__(). Otherwise, three None arguments are supplied. If the exception is suppressed, then the return value from the __exit__() method will be True, otherwise, False." }, { "code": null, "e": 25476, "s": 25399, "text": "syntax: __exit__(self, exception_type, exception_value, exception_traceback)" }, { "code": null, "e": 25775, "s": 25476, "text": "parameters:exception_type: indicates class of exception.exception_value: indicates type of exception . like divide_by_zero error, floating_point_error, which are types of arithmetic exception.exception_traceback: traceback is a report which has all of the information needed to solve the exception." }, { "code": null, "e": 25789, "s": 25775, "text": "# Example 1:." }, { "code": "# Python program creating a # context manager class ContextManager(): def __init__(self): print('init method called') def __enter__(self): print('enter method called') return self def __exit__(self, exc_type, exc_value, exc_traceback): print('exit method called') with ContextManager() as manager: print('with statement block')", "e": 26196, "s": 25789, "text": null }, { "code": null, "e": 26205, "s": 26196, "text": "Output :" }, { "code": null, "e": 26285, "s": 26205, "text": "init method called\nenter method called\nwith statement block\nexit method called\n" }, { "code": null, "e": 26419, "s": 26285, "text": "# Example 2: Understanding parameters of __exit__(). We will create a context manager that will be used to divide two numbers. 
If the" }, { "code": "# Python program to demonstrate# __exit__ method class Divide: def __init__(self, num1, num2): self.num1 = num1 self.num2 = num2 def __enter__(self): print(\"Inside __enter__\") return self def __exit__(self, exc_type, exc_value, traceback): print(\"\\nInside __exit__\") print(\"\\nExecution type:\", exc_type) print(\"\\nExecution value:\", exc_value) print(\"\\nTraceback:\", traceback) def divide_by_zero(self): # causes ZeroDivisionError exception print(self.num1 / self.num2) # Driver's codewith Divide(3, 1) as r: r.divide_by_zero() print(\"................................................\") # will raise a ZeroDivisionErrorwith Divide(3, 0) as r: r.divide_by_zero()", "e": 27178, "s": 26419, "text": null }, { "code": null, "e": 27186, "s": 27178, "text": "Output:" }, { "code": null, "e": 27638, "s": 27186, "text": "Inside __enter__\n3.0\n\nInside __exit__\n\nExecution type: None\n\nExecution value: None\n\nTraceback: None\n................................................\nInside __enter__\n\nInside __exit__\n\nExecution type: \n\nExecution value: division by zero\n\nTraceback: \nTraceback (most recent call last):\n File \"gfg.py\", line 32, in \n r.divide_by_zero()\n File \"gfg.py\", line 21, in divide_by_zero\n print(self.num1 / self.num2)\nZeroDivisionError: division by zero\n\n" }, { "code": null, "e": 27659, "s": 27638, "text": "Python-Miscellaneous" }, { "code": null, "e": 27666, "s": 27659, "text": "Python" }, { "code": null, "e": 27764, "s": 27666, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27796, "s": 27764, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 27838, "s": 27796, "text": "How To Convert Python Dictionary To JSON?" }, { "code": null, "e": 27894, "s": 27838, "text": "How to drop one or multiple columns in Pandas Dataframe" }, { "code": null, "e": 27936, "s": 27894, "text": "Check if element exists in list in Python" }, { "code": null, "e": 27991, "s": 27936, "text": "Selecting rows in pandas DataFrame based on conditions" }, { "code": null, "e": 28022, "s": 27991, "text": "Python | os.path.join() method" }, { "code": null, "e": 28044, "s": 28022, "text": "Defaultdict in Python" }, { "code": null, "e": 28073, "s": 28044, "text": "Create a directory in Python" }, { "code": null, "e": 28112, "s": 28073, "text": "Python | Get unique values from a list" } ]